Mastering Prompt-Driven Development: A Step-by-Step Guide to SPDD
Introduction
Structured Prompt-Driven Development (SPDD) is a workflow pioneered by Thoughtworks' internal IT organization to harness the power of large language models (LLMs) for team-based development. Unlike ad-hoc prompting, SPDD treats prompts as first-class artifacts—stored in version control, reviewed, and iterated upon—aligning code generation with business needs. To succeed, developers need three core skills: alignment (ensuring prompts match business goals), abstraction-first (designing prompts that generalize), and iterative review (refining through cycles). This guide walks you through the SPDD process using a practical step-by-step approach, based on the methodology detailed in the GitHub repository by Wei Zhang and Jessie Jie Xia.

What You Need
- An LLM programming assistant (e.g., GitHub Copilot, ChatGPT, or a specialized code model)
- Version control system (e.g., Git)
- A shared repository for code and prompts
- Clear business requirements or user stories
- A code editor or IDE that supports prompt integration
- Collaboration tools for peer review (e.g., pull request workflows)
Step-by-Step SPDD Workflow
Step 1: Define the Business Objective
Start by articulating the specific business need in plain language. For example, “We need a function to calculate discount rates based on customer loyalty tier.” Document this objective as a comment or separate specification. This step ensures alignment between the prompt and real-world value. Avoid technical jargon—focus on what the code must achieve from a business perspective.
Step 2: Draft the Prompt Using Abstraction-First Principles
Write a prompt that specifies the desired code at an abstract level. Instead of detailing every line, describe the function’s purpose, inputs, outputs, and constraints. For instance: “Create a Python function that takes a customer’s `loyalty_tier` (string) and `purchase_amount` (float) and returns the discounted total. Use a 5% discount for ‘Silver’, 10% for ‘Gold’, and 15% for ‘Platinum’.” This abstraction-first approach lets the LLM generate appropriate implementation while keeping the prompt reusable across similar tasks.
Step 3: Align the Prompt with Business Needs
Review the prompt against the original business objective. Ensure the language correctly captures the intent—e.g., check that tier names match company standards. Add context if needed, such as edge cases (e.g., “What if the tier is unknown?”). This alignment step prevents misunderstandings and costly rework. Use comments in the prompt file to annotate business rules that are not obvious.
Step 4: Generate Code from the Prompt
Feed the finalized prompt into your LLM assistant. Review the output critically—do not trust it blindly. Verify the code compiles, runs, and passes basic unit tests. The prompt becomes a first-class artifact because it now lives alongside the generated code. Save both the prompt and the code in version control, ideally in a dedicated `/prompts` directory with a naming convention like `discount_calculator_v1.md`.
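Basic checks like the following can catch obvious errors before the code is committed. The function is inlined here so the sketch is self-contained; in practice you would import the generated module instead, and the names are illustrative:

```python
# Reference implementation inlined for a self-contained example; normally you
# would import the LLM-generated function (e.g. from a discount_calculator module).
def apply_discount(loyalty_tier: str, purchase_amount: float) -> float:
    rates = {"Silver": 0.05, "Gold": 0.10, "Platinum": 0.15}
    return purchase_amount * (1 - rates.get(loyalty_tier, 0.0))

def test_known_tiers():
    # Each named tier should yield its documented discount.
    assert abs(apply_discount("Silver", 100.0) - 95.0) < 1e-9
    assert abs(apply_discount("Gold", 100.0) - 90.0) < 1e-9
    assert abs(apply_discount("Platinum", 100.0) - 85.0) < 1e-9

def test_unknown_tier_gets_no_discount():
    # Edge case from Step 3: unrecognized tiers fall back to full price.
    assert abs(apply_discount("Bronze", 100.0) - 100.0) < 1e-9
```

Running these with pytest (or any test runner) gives a quick signal on whether the generated code matches the prompt's intent.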
Step 5: Commit Prompt Alongside Code
Create a commit that includes both the code and the prompt file. In the commit message, reference the prompt file (e.g., “Implement discount calculator per prompts/discount_calculator_v1.md”). This ensures full traceability: future developers can see why code was written a certain way by reading the prompt. Treat prompts as living documents—just as code evolves, prompts can be updated via new commits.
Step 6: Iterative Review and Refinement
Conduct peer reviews of both the code and the prompt. Ask questions like: “Does the prompt still reflect the business requirement? Can it be simplified? Does the code match the prompt’s intent?” Use the iterative review skill to refine both artifact types. For example, if the generated code lacks error handling, update the prompt to add “include error handling for invalid inputs.” Then regenerate the code or manually patch it. Close the loop by updating the prompt to reflect the final implementation.
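Continuing the running example, a regenerated version honoring the added “include error handling for invalid inputs” instruction might look like this. The validation rules shown (rejecting unknown tiers and negative amounts) are assumed refinements for illustration:

```python
DISCOUNTS = {"Silver": 0.05, "Gold": 0.10, "Platinum": 0.15}

def apply_discount(loyalty_tier: str, purchase_amount: float) -> float:
    """Apply the loyalty-tier discount, rejecting invalid inputs explicitly."""
    if loyalty_tier not in DISCOUNTS:
        raise ValueError(f"unknown loyalty tier: {loyalty_tier!r}")
    if purchase_amount < 0:
        raise ValueError("purchase_amount must be non-negative")
    return purchase_amount * (1 - DISCOUNTS[loyalty_tier])
```

Once this behavior is settled, the prompt should be updated to state the error-handling rules explicitly, closing the loop between prompt and implementation.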
Tips for Success
- Master the three skills: Alignment, abstraction-first, and iterative review. Without these, prompts become unreliable.
- Store prompts in version control: Always commit prompts alongside code to maintain a single source of truth.
- Use clear file structures: Keep prompts in a `/prompts` folder with descriptive filenames and metadata.
- Collaborate on prompts: Just as you review code, review prompts with teammates to catch ambiguities.
- Update prompts when requirements change: If business logic evolves, update the prompt before regenerating code.
- Test generated code thoroughly: LLM outputs may contain subtle bugs—apply your normal testing standards.
- Document lessons learned: Maintain a shared “prompt patterns” wiki to spread effective techniques across the team.
By following the SPDD workflow, your team can leverage LLMs consistently and reliably, turning raw AI assistance into a disciplined engineering practice.