Sunday, April 5

Review Agent: Automated Quality Assurance for Your Obsidian Vault

If you’ve ever run a batch enhancement pass across an Obsidian vault — automatically generating tags, creating MOCs, suggesting backlinks, extracting entities — you already know the uncomfortable truth: automation at scale introduces drift. Tags get duplicated. Links get suggested without semantic grounding. MOCs reference notes that don’t exist. Frontmatter fields get stamped with the wrong date format. The output looks complete until you dig in and realize you’ve just scaled up a mess.

The Review Agent exists to close that loop. It’s a quality assurance specialist that runs after your enhancement agents have done their work, systematically cross-checking every major category of change — metadata, connections, tags, MOCs, and image organization — and surfacing problems before they calcify into your vault’s permanent structure. Instead of manually auditing hundreds of modified files after every enhancement pass, you hand that work to an agent that follows a rigorous checklist, spot-checks changes against reported actions, and generates a structured summary of what’s solid and what needs human eyes.

For developers managing knowledge systems at any meaningful scale, this is the difference between trusting your vault and just hoping it’s okay.

When to Use the Review Agent

The Review Agent is designed to be used proactively and systematically, not as a last resort when something breaks. Here are the concrete scenarios where it delivers the most value:

After Any Batch Enhancement Pass

Any time you run a link suggestion agent, tag normalization pass, or orphaned note connector across more than a handful of files, the Review Agent should run immediately after. It checks the enhancement reports generated by those agents, spot-samples modified files, and validates that the reported changes actually match the file state on disk.

Before Publishing or Exporting Vault Content

If your vault feeds into any external output — a digital garden, a documentation site, an exported knowledge base — running the Review Agent first ensures broken links, malformed frontmatter, and inconsistent tags don’t propagate downstream. Catching a broken MOC reference before it renders as a 404 is significantly cheaper than fixing it after the fact.

Onboarding New Content Sources

When you import content from another system — a Notion export, a Roam migration, a collection of markdown notes from a previous project — the Review Agent validates that the imported material meets your vault’s structural standards before it gets integrated into the wider graph.

Periodic Vault Health Audits

Even without active enhancement passes, vaults accumulate technical debt. Notes go stale, tags drift from the taxonomy, MOCs stop covering their directories. A scheduled Review Agent run gives you a current picture of vault health without requiring you to audit it manually.

Validating Agent Collaboration

In multi-agent Obsidian workflows — where a link agent, a tag agent, and a MOC agent all operate on overlapping content — the Review Agent checks for conflicting changes. Two agents that both touch a note’s frontmatter can produce inconsistent results. The Review Agent detects those cases systematically.
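To make the idea concrete, the core of that conflict check can be sketched as a comparison of each agent's reported frontmatter edits. This is a minimal illustration with hypothetical data shapes — each agent's changes represented as a file-to-fields map — not the agent's actual implementation:

```python
def frontmatter_conflicts(changes_a, changes_b):
    """Each argument maps file path -> {frontmatter key: new value}, as
    reported by one enhancement agent. Returns, per file, the keys that
    both agents set to different values."""
    conflicts = {}
    for path in changes_a.keys() & changes_b.keys():
        clash = {k for k in changes_a[path].keys() & changes_b[path].keys()
                 if changes_a[path][k] != changes_b[path][k]}
        if clash:
            conflicts[path] = sorted(clash)
    return conflicts
```

Two agents writing the same value to the same key is harmless; only divergent writes to the same field surface as conflicts.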

Real-World Examples

Scenario: Post-Enhancement Audit After a Tag Normalization Pass

A developer has just run a tag normalization agent across 340 notes in their VAULT01 system. The agent reported 218 tags standardized and 14 new hierarchical paths created. Rather than manually verifying the output, they invoke the Review Agent.

User: The tag normalization agent just finished. Can you audit its output and
tell me if there are any issues I need to handle manually?

Review Agent: Running audit against Tag_Analysis_Report.md and spot-checking
modified files.

Found 3 issues requiring attention:

1. DUPLICATE TAG PATH — "tech/javascript" and "tech/JavaScript" both appear
   in 12 files. The normalization agent applied case-correction to new tags
   but missed existing lowercase variants. Recommend a find-replace pass on
   the affected files.

2. HIERARCHY DEPTH VIOLATION — 2 files have tags at 4 levels
   ("tech/frontend/frameworks/react/hooks"). Vault standard is max 3 levels.
   Suggest collapsing to "tech/frontend/react/hooks".

3. TAXONOMY MISFIT — Tag "misc/to-sort" was applied to 8 notes that already
   have specific category tags. This appears to be a normalization artifact.
   Recommend removing the redundant tag from those files.

Remaining 215 tag changes verified as consistent with vault taxonomy.
Overall tag health improved significantly — orphaned tags reduced by 34.
No conflicting changes detected with concurrent MOC updates.
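The first two findings in that audit are mechanical enough to sketch in code. Assuming the vault's tags are available as plain strings, a hypothetical `find_tag_issues` helper could flag both case-variant duplicates and hierarchy depth violations (the 3-level maximum comes from the transcript above):

```python
from collections import defaultdict

MAX_DEPTH = 3  # vault standard: tags no deeper than 3 levels

def find_tag_issues(tags):
    """Flag case-variant duplicates and over-deep hierarchies in a tag list."""
    by_lower = defaultdict(set)
    for tag in tags:
        by_lower[tag.lower()].add(tag)
    duplicates = [sorted(group) for group in by_lower.values() if len(group) > 1]
    too_deep = [t for t in tags if t.count("/") + 1 > MAX_DEPTH]
    return {"case_duplicates": duplicates, "depth_violations": too_deep}
```

A real pass would also need the file locations of each tag, but the detection logic itself is this simple.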

Scenario: MOC Completeness Check After Structural Reorganization

After reorganizing the vault’s directory structure, the developer wants to confirm that MOC coverage is complete and that all cross-references between MOCs are valid.

User: I just moved a bunch of content into three new subdirectories. Can you
check whether MOC coverage is complete and whether there are any broken
cross-references between MOCs?

Review Agent: Checking MOC structure against current directory layout.

COVERAGE GAPS:
- /Projects/Active/ has no MOC. 12 notes present. Recommend creating
  "MOC - Active Projects.md" following standard naming convention.
- /Research/Drafts/ has an existing MOC but it references 4 notes that no
  longer exist at those paths — they were moved during reorganization.

CROSS-REFERENCE ISSUES:
- "MOC - Frontend Development.md" links to "MOC - JavaScript Ecosystem.md"
  which was renamed to "MOC - JS Toolchain.md" during the reorganization.
  Link is broken. Needs updating in 3 referencing MOCs.

CORRECTLY STRUCTURED:
- 11 other MOCs validated. Naming convention compliant, hierarchy intact,
  links resolve correctly.

Recommended actions in priority order:
1. Fix the 4 broken note references in /Research/Drafts/ MOC
2. Update the renamed MOC reference across 3 files
3. Create the missing MOC for /Projects/Active/
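The coverage check itself reduces to a simple scan: any directory that contains notes but no file following the "MOC - <name>.md" naming convention is a gap. A minimal sketch, assuming directory contents are already listed (the `find_missing_mocs` helper is illustrative, not part of the agent):

```python
def find_missing_mocs(dirs):
    """dirs maps a directory path -> list of note filenames it contains.
    A directory counts as covered if it holds a note following the
    'MOC - <name>.md' naming convention; empty directories are skipped."""
    return [d for d, notes in dirs.items()
            if notes and not any(n.startswith("MOC - ") for n in notes)]
```

The broken cross-reference check is the complementary scan: collect every wikilink inside each MOC and test it against the set of note paths that actually exist.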

What Makes This Agent Powerful

Systematic Checklist Coverage

The Review Agent doesn’t do a general pass and hope it catches things. It runs structured checklists across five distinct domains — metadata, connections, tags, MOCs, and image organization — ensuring that every category of vault quality gets explicit attention on every run. Nothing falls through because it wasn’t top of mind.

Report-First Verification

Rather than re-scanning the entire vault from scratch, the agent reads the reports generated by enhancement agents first, then spot-checks modified files against those reports. This approach is both efficient and rigorous — it validates that agents did what they said they did, which is exactly the kind of trust-but-verify behavior you want in an automated pipeline.
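In code terms, that pattern amounts to sampling the report's claimed changes and diffing them against what is actually on disk. A rough sketch with hypothetical data shapes — `reported` as a path-to-tags map taken from the enhancement report, `on_disk` as a lookup function that reads the file's current tags:

```python
import random

def spot_check(reported, on_disk, sample_size=5, seed=0):
    """Sample reported changes and verify them against actual file state.
    Returns the sampled paths whose on-disk tags don't match the report."""
    rng = random.Random(seed)  # fixed seed keeps the audit reproducible
    paths = list(reported)
    sample = rng.sample(paths, min(sample_size, len(paths)))
    return [p for p in sample if set(reported[p]) != set(on_disk(p))]
```

An empty result means the sample corroborates the report; any mismatch is grounds for widening the sample or re-running the enhancement.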

Conflict Detection Across Agents

When multiple agents touch overlapping content, the Review Agent cross-references their changes to identify conflicts. This is a capability that no individual enhancement agent has visibility into — only a dedicated review pass that sees the full picture can catch cross-agent inconsistencies.

Actionable, Prioritized Output

The agent is explicitly instructed to focus on systemic issues over minor inconsistencies, prioritize high-impact improvements, and consider user workflow impact. The output isn’t a raw list of every deviation detected — it’s a prioritized set of actionable recommendations that a developer can act on without needing to re-analyze the analysis.

Quantified Vault Health Metrics

Beyond qualitative feedback, the Review Agent tracks and reports concrete metrics: files enhanced, orphaned notes reduced, new connections created, tags standardized, MOCs generated. This gives you a measurable baseline for vault health over time, which matters when you’re making the case that your knowledge management system is actually improving.
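If you want to track those metrics over time, a snapshot-and-delta structure is enough. A minimal sketch — the field names mirror the metrics listed above, but the `VaultHealthMetrics` type itself is illustrative:

```python
from dataclasses import dataclass, asdict

@dataclass
class VaultHealthMetrics:
    """One audit's worth of vault health numbers."""
    files_enhanced: int = 0
    orphaned_notes_reduced: int = 0
    new_connections: int = 0
    tags_standardized: int = 0
    mocs_generated: int = 0

def delta(before, after):
    """Per-metric change between two audit snapshots."""
    return {k: asdict(after)[k] - v for k, v in asdict(before).items()}
```

Persisting one snapshot per Review Agent run gives you the trend line that turns "the vault feels healthier" into a number.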

How to Install the Review Agent

Installing the Review Agent takes less than two minutes. Claude Code automatically discovers and loads agents defined in your project’s .claude/agents/ directory — no registration step, no configuration file to modify.

Start by creating the agents directory if it doesn’t already exist:

mkdir -p .claude/agents

Then create the agent file:

touch .claude/agents/review-agent.md

Open .claude/agents/review-agent.md in your editor and paste the full agent system prompt — beginning with the role definition and covering all five checklist sections, the review process, quality metrics, and important notes. Save the file.
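For reference, a Claude Code agent file pairs YAML frontmatter with the system prompt as the body. The skeleton below is a sketch of that shape — the description text and tool list are placeholders, and the full system prompt referenced above goes where the body text is:

```markdown
---
name: review-agent
description: Quality assurance specialist for Obsidian vaults. Use after
  enhancement passes to audit metadata, connections, tags, MOCs, and
  image organization.
tools: Read, Grep, Glob
---

You are a vault quality assurance specialist...
(paste the full Review Agent system prompt here)
```

The `description` field is what Claude uses to decide when to delegate to the agent automatically, so it's worth making it specific.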

That’s it. The next time you start a Claude Code session in that project, the Review Agent is available: invoke it directly by name, or let Claude select it automatically when your request maps to vault quality assurance work. Once the file exists, every new session picks it up without any further setup.

If you’re running a multi-agent Obsidian workflow, the convention is to keep all your Obsidian Ops Team agents in the same .claude/agents/ directory, which makes it straightforward to reference them in sequence — run your enhancement agents, then explicitly invoke the Review Agent to validate the output.

Conclusion and Next Steps

Automation without verification is just faster chaos. The Review Agent gives your Obsidian enhancement pipeline an explicit quality gate — one that runs the same thorough checks every time, doesn’t miss categories because it’s tired, and produces output you can act on immediately.

Start by installing the agent and running it against your most recent enhancement pass, even if that pass happened weeks ago. The report it generates will tell you exactly where your vault’s quality actually stands, not where you hope it stands.

From there, the practical next step is building the Review Agent into your standard workflow as a mandatory final stage: no enhancement pass is complete until the Review Agent has signed off on it. If you’re working with multiple Obsidian Ops Team agents, consider creating a Claude Code task or shell script that chains them in sequence, with the Review Agent running last and its output determining whether the session closes cleanly or flags items for manual follow-up.
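The orchestration logic of that chain is simple enough to sketch: run the enhancement stages in order, run the review last, and let its findings determine the exit state. A minimal illustration with the agent invocations abstracted as callables (the `run_pipeline` helper is hypothetical, not a Claude Code API):

```python
def run_pipeline(stages, review):
    """Run each enhancement stage in order, then the review stage last.
    `stages` is a list of callables; `review` returns a list of issues.
    The review outcome decides whether the pass closes cleanly."""
    for stage in stages:
        stage()
    issues = review()
    status = "clean" if not issues else "flagged"
    return status, issues
```

In practice each callable would shell out to a Claude Code invocation of the corresponding agent, but the gating structure — enhancement first, review last, findings decide the outcome — is the part worth standardizing.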

The vault is only as trustworthy as the last thing that checked it.

Agent template sourced from the claude-code-templates open source project (MIT License).
