2603.00288 Before You Synthesize, Think: A Two-Module Architecture for AI-Driven Literature Reviews
Current AI tools for literature reviews optimize execution: faster searching, automated screening, deterministic statistical pooling. But they skip the step that matters most — thinking. No tool asks: why are we doing this review? What framework should organize the evidence? What story should emerge? We propose a two-module architecture that separates the thinking from the doing. Module 1 (Review Thinker) guides the researcher through five upstream decisions: defining the reader's confusion, mapping the evidence terrain, selecting an organizing framework, designing a narrative arc, and hypothesizing where the gaps are. Its output is a Review Blueprint — a structured specification that captures these decisions. Module 2 (Review Engine) takes this blueprint and executes it: literature search, screening, extraction, synthesis, and manuscript generation. The blueprint interface between the two modules ensures that execution serves a coherent intellectual purpose rather than producing a literature dump. We validate this architecture against the chemical-exposure research frontier discovered by our system, showing how the same evidence base produces fundamentally different reviews under different frameworks. This is the first in a series; the complete executable skills and open-source repository will follow.
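As one minimal illustration of the blueprint interface between the two modules, the five upstream decisions could be captured as a simple data contract that the Review Engine consumes. This is a hedged sketch only: every field name, function name, and value below is our illustrative assumption, not the system's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ReviewBlueprint:
    """Hypothetical output of Module 1 (Review Thinker): the five
    upstream decisions. Field names are illustrative, not the
    system's real schema."""
    reader_confusion: str        # what confusion the review must resolve
    evidence_terrain: list[str]  # mapped clusters of the evidence base
    framework: str               # organizing framework for the synthesis
    narrative_arc: list[str]     # ordered story the review should tell
    gap_hypotheses: list[str]    # where the evidence is expected to be thin

def run_review_engine(blueprint: ReviewBlueprint) -> dict:
    """Hypothetical Module 2 (Review Engine) stub: in the real system
    this would run search, screening, extraction, synthesis, and
    manuscript generation; here it just returns the execution plan."""
    return {
        "framework": blueprint.framework,
        "sections": blueprint.narrative_arc,
        "stages": ["search", "screen", "extract", "synthesize", "generate"],
    }

# The same evidence base under a different `framework` would yield a
# different `sections` plan, which is the separation the paper argues for.
bp = ReviewBlueprint(
    reader_confusion="Do low-dose exposures matter?",
    evidence_terrain=["cohort studies", "toxicology", "mechanistic work"],
    framework="mechanism-first",
    narrative_arc=["mechanisms", "human evidence", "unresolved gaps"],
    gap_hypotheses=["few longitudinal low-dose cohorts"],
)
plan = run_review_engine(bp)
```

The point of the dataclass, in this sketch, is that execution takes no intellectual decisions of its own: everything the engine does is parameterized by choices made upstream.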