An evidence layer that feeds your AI curated learning-science research and quality rubrics. 183 notes. 32 tools. 14 rubrics. Your AI reasons from real research instead of plausible-sounding pattern matching.
Same prompt. Same AI. With and without Learning Brain.
Claude produced a beautiful interactive HTML course with tabbed modules, concept cards, and dark mode. No retrieval practice. No assessment. No transfer design. A content delivery artifact that looks premium and teaches nothing.
Cumulative retrieval practice between modules. Worked-example fading grounded in expertise-reversal research. Honest transfer scoping. Post-training retrieval schedule. None of these appeared without Learning Brain.
The prettiest output was the emptiest pedagogically. That's the problem Learning Brain solves.
183 research-backed notes across 10 domains. Each note is a complete reasoning unit — the evidence, the effect sizes, the boundary conditions, and the design implications for different delivery modalities.
Every note carries an evidence-strength rating. Over 110 rated strong (meta-analyses, replicated findings). The tools surface these ratings — moderate evidence is never laundered as strong.
Domains: cognitive load, retrieval practice, motivation, multimedia design, assessment, instructional design, course design, learner psychology, accessibility and UDL, behaviour change.
AI models are trained to be agreeable. Their default mode is to soften criticism and pass work that should fail. Every Learning Brain rubric carries explicit instructions to resist this. Vague objectives get hard fails, not gentle suggestions. Polished courses with no retrieval practice get identified as engagement illusions.
Most designers can name three. The audit checks all ten, systematically, every time.
Evaluates evidence for claims. Explains principles. Diagnoses learning problems. Resolves design tensions.
Course architectures, module design against Merrill's principles, content sequencing, assessment blueprints, adaptive paths.
Objectives, MCQs, question banks, worked examples, explanations, feedback copy, diagnostic assessments.
Live session run sheets, discussion structures, facilitator guides, stuck-learner diagnosis.
Module audits, objective audits, MCQ audits, illusion scans, transfer prediction.
The five highest-risk tools were tested against 25 adversarial inputs — each designed to trigger the specific failure mode the tool exists to prevent. Polished-but-hollow modules. Objectives dressed in active language that promise nothing assessable. MCQs with surface-cue giveaways.
50 adversarial outputs across two AI models. Zero actively misleading.
Both models correctly identified every embedded flaw, gave hard fails where warranted, and provided direct fixes rather than hedged suggestions.
It won't produce finished courseware on its own. Learning Brain feeds your AI the evidence. Your AI produces the design output. The quality depends partly on which AI you use — Claude currently produces the strongest results.
It won't replace design judgment. The tools handle the science. Your people handle the context.
It won't guarantee learning outcomes. It guarantees the design is structurally sound. What happens in delivery is still up to humans.
Learning Brain was built by Laurie Harrison — VP of Skills Products at a Fortune 100 enablement company, 25+ years in learning design. The evidence base isn't academic. It's the same knowledge base Laurie uses daily to design training for global sales and customer success teams.
Works with any AI that supports MCP (Model Context Protocol). Free during the pre-commercial beta.
Settings → Integrations → Add custom connector
https://learningbrain.ai/mcp
OAuth connects automatically. No API key needed.
Settings → Apps & Connectors → Add connector
https://learningbrain.ai/mcp
Developer Mode required. Pro, Team, Enterprise, or Edu plan.
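For clients without a built-in connector UI, the same endpoint can usually be wired in through a standard MCP server config. A minimal sketch, assuming a client that reads a JSON `mcpServers` config and using the community `mcp-remote` proxy to handle the OAuth handshake (the server name is arbitrary, and the config file's name and location vary by client):

```json
{
  "mcpServers": {
    "learning-brain": {
      "command": "npx",
      "args": ["mcp-remote", "https://learningbrain.ai/mcp"]
    }
  }
}
```

On first launch the client spawns the proxy, which opens a browser window for the OAuth sign-in; no API key is stored in the config.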