13 Rules for a Gold-Standard HLDD
Thirteen quality rules I derived from a gap analysis of three best-in-class High-Level Design Documents: the discipline that separates a course that ships smoothly from one that falls apart in production.
A High-Level Design Document is the contract between an instructional designer and everyone else who touches a course. When it’s good, the storyboard artist, the developer, the SME, and the project manager all agree on what they’re building before anyone touches a tool. When it’s bad, the disagreements emerge three weeks into production and the budget melts.
After running a gap analysis on three reference-grade HLDDs, the kind that produce courses that actually ship on time, I distilled thirteen rules. They’re uneven in scope on purpose. Some are structural. Some are about scope discipline. Some are about the small habits that quietly separate an excellent document from a competent one.
- One performance gap, named in the first paragraph. Not three. Not “a range of needs.” One.
- Audience defined by what they currently do, not what they are. Job titles are a poor proxy for current behaviour.
- Learning objectives in Mager format: behaviour, condition, criterion. No exceptions.
- Every objective traces to an assessment item. If you can’t test it, it’s not an objective.
- Every assessment item traces to a learning activity. If they can’t practise it, you can’t test it.
- Module count justified, not assumed. Three is not a default.
- Section count regime declared. Same number of sections per module, or an explicit reason why not.
- Branching scenarios specified at the structural level, not left for the storyboard phase.
- Interactivity types named per section, with a frequency budget.
- Media decisions made, not deferred. “TBD” in an HLDD is a future fight.
- Accessibility decisions explicit. WCAG conformance level, captions, audio descriptions, keyboard navigation, all stated up front.
- Out of scope, written down. The unspoken assumptions are what kill projects.
- Acceptance criteria for the build phase, named and measurable. Not “looks polished.”
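The two traceability rules are mechanical enough to check automatically. A minimal sketch in Python, assuming a hypothetical representation (not from any real HLDD tooling) where each assessment item names the objective it tests and each activity names the assessment items it lets learners practise:

```python
def find_trace_gaps(objectives, assessments, activities):
    """Flag breaks in the objective -> assessment -> activity chain.

    objectives:  list of objective IDs
    assessments: list of dicts, each {"id": ..., "objective": ...}
    activities:  list of dicts, each {"id": ..., "assessments": [...]}
    Returns (objectives with no assessment item,
             assessment items with no practice activity).
    """
    tested = {a["objective"] for a in assessments}
    practised = set()
    for act in activities:
        practised.update(act["assessments"])

    untested = [o for o in objectives if o not in tested]
    unpractised = [a["id"] for a in assessments if a["id"] not in practised]
    return untested, unpractised


# Hypothetical mini-HLDD: two objectives, one assessed, one not.
untested, unpractised = find_trace_gaps(
    objectives=["LO1", "LO2"],
    assessments=[{"id": "Q1", "objective": "LO1"}],
    activities=[{"id": "A1", "assessments": ["Q1"]}],
)
# untested == ["LO2"], unpractised == []
```

Running a check like this on the HLDD's own tables turns "every objective traces" from a reviewer's eyeball pass into a pass/fail gate.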
Every one of these came from watching a project go wrong because the rule was missing. They’re not theoretical. They’re scar tissue.
Derived from a gap analysis comparing reference-grade HLDDs to typical industry output.