HLDD Generator
A four-stage Claude API pipeline I built to turn SME interview notes into a gold-standard High-Level Design Document for an eLearning course. Validated against 13 quality rules I derived from a gap analysis of best-in-class HLDDs, so every output is audit-ready by construction.
A High-Level Design Document is the contract between an instructional designer and everyone else on a course build: the SME, the project manager, the storyboard artist, the developer. When the HLDD is good, the course goes smoothly. When it’s bad, the disagreements emerge three weeks into production and the budget melts. I built the HLDD Generator to make “good HLDD” the cheap default.
What I built
A four-stage Claude API pipeline that produces gold-standard HLDDs from a structured SME interview transcript:
- SME Architecture: extracts the learning need, audience, business context, and constraints
- Content Architecture: structures modules, sections, and learning objectives in Mager’s performance-objective format
- Design Specification: sets the pedagogical strategy, interactivity types, media, and accessibility decisions
- Assessment Architecture: defines aligned assessment methods, mastery criteria, and validation logic
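The chaining itself is simple: each stage consumes the previous stage's output. Here is a minimal sketch of that flow; the stage names come from the list above, but the prompt templates are placeholders and `call_model` is an injected callable (in production it would wrap an Anthropic API call), so the control flow can be shown without network access.

```python
from typing import Callable

# The four stages, in order. Each stage's prompt receives the output of
# the previous stage; the first stage receives the raw SME transcript.
# (Prompt wording here is illustrative, not the production prompts.)
STAGES = [
    ("SME Architecture",
     "Extract the learning need, audience, business context, and constraints:\n{input}"),
    ("Content Architecture",
     "Structure modules, sections, and Mager-style objectives from:\n{input}"),
    ("Design Specification",
     "Specify pedagogy, interactivity, media, and accessibility for:\n{input}"),
    ("Assessment Architecture",
     "Define aligned assessments, mastery criteria, and validation logic for:\n{input}"),
]

def run_pipeline(transcript: str, call_model: Callable[[str], str]) -> dict[str, str]:
    """Run the four stages sequentially, feeding each output into the next prompt."""
    outputs: dict[str, str] = {}
    current = transcript
    for name, template in STAGES:
        current = call_model(template.format(input=current))
        outputs[name] = current
    return outputs
```

Because the model call is injected, the same pipeline runs against a stub in tests and against the live API in production.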
The interesting work is in the validation layer. A separate prompt evaluates the output against 13 quality rules I derived from a gap analysis of three reference-grade HLDDs. The rules cover everything from objective-assessment alignment to scope discipline to the right number of branching scenarios per section. Anything that fails gets rewritten. Anything that passes goes into the final document.
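The check-rewrite-recheck loop can be sketched as follows. This is an assumption-laden simplification: the real rules are prompt-based critic evaluations, but here each rule is a plain predicate and `rewrite` is an injected callable, so only the control flow is shown, not the prompts.

```python
from typing import Callable

# A rule is (name, check). In the real pipeline each check is a critic
# prompt; plain predicates keep this sketch runnable.
Rule = tuple[str, Callable[[str], bool]]

def validate_and_repair(section: str,
                        rules: list[Rule],
                        rewrite: Callable[[str, list[str]], str],
                        max_passes: int = 3) -> tuple[str, list[str]]:
    """Check a section against every rule; rewrite failures until clean
    or the pass budget runs out. Returns (section, remaining_failures)."""
    for _ in range(max_passes):
        failures = [name for name, check in rules if not check(section)]
        if not failures:
            return section, []  # passes all rules: goes into the final document
        section = rewrite(section, failures)  # targeted rewrite of failing rules only
    # Out of passes: report whatever still fails so a human can review it.
    failures = [name for name, check in rules if not check(section)]
    return section, failures
```

The pass budget matters: an unbounded rewrite loop can oscillate between two rules that pull in opposite directions, so anything still failing after a few passes is escalated rather than silently retried.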
What it improved
What this project taught me is that the bottleneck for AI-generated artifacts is not the model. It’s the evaluation layer. A smaller model with a strict critic outperforms a larger model with no critic, every time. The 13-rule validation step is the discipline that makes the difference between “impressive demo” and “defensible production output.”
The first time we ran the full pipeline against a real SME interview, we had a publishable HLDD in under thirty minutes. The fastest manual write-up had previously taken three days. That’s the kind of efficiency improvement that changes what an L&D team can take on in a quarter.