AI Interviews Humans Like a Reporter to Gather Complex Task Context

Breaking: New Technique Lets LLMs Interview Humans to Build Context

A novel method is emerging in the AI world: instead of humans writing lengthy context documents for large language models (LLMs), the LLM itself interviews the human, one question at a time, to extract the needed information. The technique, called the "interrogatory LLM," flips the traditional workflow on its head.

Source: martinfowler.com

Martin Fowler, a prominent software architect and author of the bliki where the method is detailed, explains: “The obvious way to feed context to an LLM is for a human to write it. But an alternative is to use an LLM to write this context after interviewing a human.”

The approach is gaining traction among developers working on complex tasks such as feature design or system integration, where multiple pages of markdown are typically required.

How It Works

The process is straightforward: prompt the LLM to interrogate you. It asks a series of questions to gather all necessary details—desired user experience, implementation guidelines, external systems to consult, and more. The human answers, and the LLM compiles the context into a report for a future session.

A critical rule, first highlighted by Harper Reed in his blog, is that the LLM should ask only one question at a time. Fowler notes that he frequently had to remind the LLM of this during his own tests.

“A striking element of his approach is insisting that the LLM ask only one question at a time,” Fowler says, citing Reed’s work.
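The workflow described above can be sketched in code. The snippet below is an illustrative mock, not Fowler's or Reed's implementation: `ask_model` stands in for a real chat-completion call and is stubbed with canned questions so the example runs offline. The prompt text, question list, and function names are all assumptions for illustration. The essential moves are there: a system prompt that enforces one question per turn, a loop that alternates question and answer, and a final step that compiles the exchange into a markdown context document for a future session.

```python
# Illustrative sketch of an "interrogatory LLM" loop.
# `ask_model` is a stand-in for a real chat-completion API call;
# here it returns canned questions so the example runs offline.

INTERVIEW_PROMPT = (
    "You are gathering context for a feature design. "
    "Interview me to collect the desired user experience, implementation "
    "guidelines, and external systems to consult. "
    "Ask only ONE question at a time, then wait for my answer. "
    "When you have enough detail, reply DONE."
)

CANNED_QUESTIONS = [
    "What user experience should the feature provide?",
    "Are there implementation guidelines I should follow?",
    "Which external systems does the feature need to consult?",
]

def ask_model(history):
    """Stub for an LLM call: returns the next single question, or DONE."""
    asked = sum(1 for role, _ in history if role == "assistant")
    if asked < len(CANNED_QUESTIONS):
        return CANNED_QUESTIONS[asked]
    return "DONE"

def run_interview(answer_fn):
    """Drive the one-question-at-a-time loop, then compile a context doc."""
    history = [("system", INTERVIEW_PROMPT)]
    qa = []
    while True:
        question = ask_model(history)
        if question == "DONE":
            break
        history.append(("assistant", question))   # LLM asks one question
        answer = answer_fn(question)              # human answers it
        history.append(("user", answer))
        qa.append((question, answer))
    # Compile the interview into markdown for a future LLM session.
    lines = ["# Feature Context (compiled from interview)", ""]
    for question, answer in qa:
        lines += [f"## {question}", answer, ""]
    return "\n".join(lines)

if __name__ == "__main__":
    # In practice the lambda would be replaced by input() or a chat UI.
    print(run_interview(lambda q: f"(expert answer to: {q})"))
```

With a real API behind `ask_model`, the one-question rule still tends to need reinforcement, which matches Fowler's observation that he frequently had to remind the LLM of it.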

Dual Use: Document Creation and Review

The interrogatory LLM can be used in two complementary ways. First, it can build a context document from scratch by interviewing an expert. Second, it can review an existing document—such as a software specification—by interviewing a human expert to check its accuracy.

“People often find reviewing hard,” Fowler observes. “A conversation with an LLM might be more fruitful, particularly if the document isn’t well-written.” This provides a low-friction alternative to having experts read through dense materials.

In practice, one interrogatory LLM might create the document, and another might then interview a different expert to validate it.
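In review mode, only the direction of the prompt changes: instead of building a document, the LLM probes the expert against an existing one. A hypothetical prompt along these lines (not taken from Fowler's post) might read:

```
You are reviewing the attached software specification.
Interview me, the domain expert, to check its accuracy.
Ask only one question at a time, and wait for my answer
before asking the next. When finished, list any statements
in the specification that my answers contradicted.
```

The expert never has to read the dense document directly; the LLM surfaces its claims as questions and reports the discrepancies.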

Background: The Context Problem

LLMs excel at complex tasks only when fed substantial context. For example, designing a new feature requires descriptions of user interface, implementation constraints, and references to external systems—often filling several pages of markdown. Historically, a human had to write all that.

The interrogatory LLM removes that burden by extracting the same information through dialogue, making it especially valuable for people who struggle with writing.

Fowler, a natural writer himself, acknowledges the challenge: “Many folks find writing hard, often very hard. This can be a real problem when we need to get information out of someone’s head into a form that other humans can consume.”

What This Means

This technique opens the door for non-writers to contribute knowledge without the pain of drafting documents. The output may carry an “AI-writing tang” that some dislike, but Fowler argues that “that’s better than not having the information itself, either due to rushed writing or no writing at all.”

For organizations, it means faster knowledge capture, more accurate specifications, and a way to leverage expert insights without requiring them to become authors. The interrogatory LLM acts as a skilled interviewer, asking the right questions and compiling answers into structured context.

As the approach spreads, we may see LLMs interviewing humans across industries—from software development to medical diagnosis—to create the rich context that drives intelligent automation.

Stay tuned: developers are already iterating on the single-question constraint and exploring multi‑expert review workflows.
