Sunday, April 5

SEO Analyzer Agent for Claude Code: Automate Technical SEO Audits in Your Dev Workflow

Every developer who has shipped a feature-complete web application knows the feeling: the product works perfectly, the code is clean, and then someone asks, “Why aren’t we ranking on Google?” At that point, you’re either context-switching into a world of meta tags, Core Web Vitals, and structured data schemas — or you’re paying an SEO consultant to tell you things you could have caught at build time.

The SEO Analyzer agent for Claude Code eliminates that context switch. Instead of toggling between your editor and a checklist of SEO best practices, you get a technical SEO specialist embedded directly in your development environment. It audits as you build, surfaces issues with priority rankings, and provides implementation-ready code — not vague recommendations. For senior developers shipping production web applications, this is the difference between SEO as an afterthought and SEO as a first-class engineering concern.

What the SEO Analyzer Agent Does

This agent is a technical SEO specialist scoped to six core domains:

  • Technical SEO audits and site structure analysis — evaluating crawlability, URL hierarchies, sitemap integrity, and canonical tag implementation
  • Meta tag, title, and description optimization — reviewing and rewriting head elements for click-through rate and indexability
  • Core Web Vitals and page performance analysis — diagnosing LCP, CLS, and INP issues with specific fix strategies
  • Schema markup and structured data implementation — generating valid JSON-LD for articles, products, breadcrumbs, FAQs, and more
  • Internal linking structure and URL optimization — mapping anchor text distribution, identifying orphaned pages, and improving crawl depth
  • Mobile-first indexing and responsive design validation — catching viewport issues and touch target failures that affect Google’s primary crawl index

The output isn’t a report you read and file away. It’s a prioritized audit with specific implementation examples and expected impact metrics — the kind of output that feeds directly into a sprint backlog.

When to Use This Agent

The agent description says to use it proactively, and that word is doing real work. Most SEO problems are cheapest to fix before they ship. Here are the scenarios where this agent pays for itself immediately:

Pre-launch Technical Audits

Before deploying a new site or major redesign, run a comprehensive audit against your HTML templates, routing configuration, and component library. Catch missing canonical tags, duplicate title patterns from CMS templates, and robots.txt misconfigurations before they spend weeks accumulating crawl debt in Google’s index.
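
The kinds of checks involved here are mechanical enough to sketch in code. The following is a minimal illustration (the function name and issue strings are hypothetical, not part of any real tool) of what a canonical/title/description audit on a rendered HTML template looks like:

```typescript
// Hypothetical sketch of a pre-launch head-element audit.
// Returns a list of human-readable issues found in one HTML document.
function auditHead(html: string): string[] {
  const issues: string[] = [];
  if (!/<link\s+[^>]*rel=["']canonical["']/i.test(html)) {
    issues.push("missing canonical tag");
  }
  const titles = html.match(/<title>[\s\S]*?<\/title>/gi) ?? [];
  if (titles.length === 0) issues.push("missing <title>");
  if (titles.length > 1) issues.push("duplicate <title> elements");
  if (!/<meta\s+[^>]*name=["']description["']/i.test(html)) {
    issues.push("missing meta description");
  }
  return issues;
}
```

Run across every template in a CMS-driven site, a check like this surfaces the systemic problems (duplicate titles, missing canonicals) that otherwise only show up weeks later in Search Console.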

Framework Migrations

Migrating from a client-side rendered React SPA to Next.js, or from Gatsby to Astro? URL structures change, rendering behavior changes, and metadata handling changes. The SEO Analyzer can audit your migration plan and flag SEO regressions — like previously indexed, client-rendered content that now server-renders with different meta tags.

Performance Investigations

When Core Web Vitals scores drop in Search Console and you need to trace the cause back to specific code, this agent connects the performance data to implementation-level fixes. It’s not just “improve your LCP” — it’s identifying that your hero image is missing explicit width/height attributes and lacks a fetchpriority hint.

Content-Heavy Application Development

Building a blog engine, documentation site, e-commerce catalog, or news platform means generating hundreds or thousands of pages from templates. A single meta tag bug in a template multiplies across every page. The SEO Analyzer catches these systemic issues at the template level before they scale.
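
Because template bugs multiply, even a trivial title template deserves a unit test. A minimal sketch (the site name and 60-character truncation threshold are illustrative assumptions, not fixed Google limits):

```typescript
// Illustrative title template. A single bug here repeats across every
// generated page, so it is worth guarding with a test.
const SITE = "Acme Furniture"; // hypothetical site name

function pageTitle(productName: string): string {
  const title = `${productName} | ${SITE}`;
  // SERP titles are commonly truncated around ~60 characters.
  return title.length <= 60 ? title : `${title.slice(0, 57)}...`;
}
```

The same pattern applies to meta descriptions, canonical URL builders, and Open Graph tags: centralize them in one tested function rather than duplicating markup per page.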

Structured Data Implementation

Schema markup is notoriously easy to get wrong and tedious to write by hand. When you need valid JSON-LD for product pages, review snippets, or FAQ accordions, the agent generates correct, Google-spec-compliant markup with the right property nesting — saving the back-and-forth with Google’s Rich Results Test tool.

Competitive Benchmarking

When you’re analyzing a competitor’s page source to understand why they’re outranking you, the agent can help interpret what their technical SEO implementation is doing and translate that into specific changes for your own codebase.

Real-World Examples

Scenario: Diagnosing a Core Web Vitals Regression After a Dependency Update

A developer notices their LCP score dropped from 1.8s to 3.4s after updating a UI component library. They open Claude Code and invoke the SEO Analyzer.

Developer: Our LCP jumped from 1.8s to 3.4s after upgrading our component library 
from v2 to v3. The hero section renders a full-bleed image. Here's the relevant 
component code and the before/after bundle analysis. What's causing this and 
how do we fix it?

Agent: Looking at your component diff, the v3 upgrade changed the Image component 
to lazy-load by default — including the hero image that's above the fold. That's 
your LCP regression.

Three fixes in priority order:

1. Add loading="eager" and fetchpriority="high" to the hero Image component. 
   This is the immediate fix and should recover most of your LCP delta.

2. Add a <link rel="preload"> tag in your document head for the hero image URL. 
   Since the image URL is known at render time, you can preload it before the 
   component tree resolves.

3. Explicitly set width and height on the Image component to prevent layout 
   shift during load — your CLS score is also at risk here.

Here's the corrected component implementation: [implementation code]

Expected impact: LCP should return to sub-2s range. The preload addition 
typically contributes an additional 200-400ms improvement depending on 
your CDN configuration.
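
As a concrete illustration of fixes 1–3 (the markup below is a hedged sketch with placeholder values, not the agent's actual output), the hero image and its preload hint might render like this:

```typescript
// Sketch: render the hero <img> with the three fixes from the audit —
// eager loading, a fetchpriority hint, and explicit dimensions.
function heroImg(src: string): string {
  return `<img src="${src}" alt="Product hero" width="1600" height="900" ` +
         `loading="eager" fetchpriority="high">`;
}

// Sketch: the matching preload tag for the document head, so the browser
// starts fetching the image before the component tree resolves.
function preloadLink(src: string): string {
  return `<link rel="preload" as="image" href="${src}" fetchpriority="high">`;
}
```

`loading`, `fetchpriority`, and `<link rel="preload">` are standard HTML; if you are using a framework image component, map these onto its equivalent props (many components expose a `priority` or `eager` option) rather than raw attributes.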

Scenario: Generating Schema Markup for an E-Commerce Product Page

Developer: I'm building product pages for a furniture e-commerce site. 
Each product has a name, price, availability status, brand, SKU, 
aggregate ratings, and breadcrumb navigation. Generate the complete 
JSON-LD structured data for a product page.

Agent: Here's the complete JSON-LD for a furniture product page that 
qualifies for Google's Product rich results and breadcrumb display:

[complete JSON-LD output with Product, BreadcrumbList, and AggregateRating 
schemas properly nested, with notes on required vs. recommended properties 
and which fields trigger rich result eligibility]

Important implementation notes:
- The "availability" field must use the Schema.org enum values 
  (schema.org/InStock), not custom strings
- Price must include priceCurrency — Google rejects numeric-only price fields
- AggregateRating requires both ratingValue and reviewCount to be eligible 
  for star snippets in SERPs

Validate this in Google's Rich Results Test before deploying. I'd also 
recommend adding an Organization schema to your site's global layout 
to establish brand entity association.

Scenario: Auditing Internal Linking for a Documentation Site

Developer: Our docs site has 200+ pages. We're getting good traffic to 
top-level category pages but almost no organic traffic to individual 
reference pages. Can you review our navigation structure and suggest 
internal linking improvements?

Agent: Based on the site structure you've shared, here are the primary issues:

Your reference pages are sitting at crawl depth 4-5 from the homepage — 
Google's crawl budget deprioritizes pages that deep, especially on sites 
without strong domain authority. 

Specific recommendations:
1. Add a "Related reference pages" section to each category page linking 
   directly to its 5-8 most important child pages
2. Implement breadcrumb navigation (with matching BreadcrumbList schema) 
   to create additional crawl paths and establish hierarchy signals
3. Your most-visited reference pages should be linked from the global 
   sidebar or footer — elevation to crawl depth 2 significantly improves 
   indexation rate for high-value pages

Here's a prioritized implementation roadmap with estimated crawl depth 
reduction for each change...
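
Crawl depth itself is just shortest-path distance from the homepage over the internal-link graph, so you can measure it yourself before and after a change. A minimal sketch (the graph shape is an assumption — adapt it to however you extract links from your site):

```typescript
// Compute crawl depth (minimum clicks from the homepage) with BFS over
// an internal-link graph: page URL -> list of URLs it links to.
function crawlDepths(
  links: Record<string, string[]>,
  root = "/"
): Map<string, number> {
  const depth = new Map<string, number>([[root, 0]]);
  const queue = [root];
  while (queue.length) {
    const page = queue.shift()!;
    for (const target of links[page] ?? []) {
      if (!depth.has(target)) {
        depth.set(target, depth.get(page)! + 1);
        queue.push(target);
      }
    }
  }
  return depth; // pages absent from the map are orphaned
}
```

Running this over a sitemap-derived graph shows exactly which reference pages sit at depth 4–5 and which sidebar or category-page links would pull them up to depth 2.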

What Makes This Agent Powerful

The technical depth of this agent comes from its focus on implementation-level specificity. Generic SEO advice is useless to a developer. “Improve your page speed” tells you nothing. “Add fetchpriority="high" to the LCP image and remove render-blocking third-party scripts from your critical path” tells you exactly what to do.

The agent’s structured approach — assessment, then content analysis, then performance evaluation, then mobile testing, then structured data, then competitive benchmarking — mirrors how a professional SEO technical audit actually works. It doesn’t jump to recommendations before understanding the site architecture.

The prioritized output format matters too. Not every SEO issue has the same impact. A missing meta description on an interior page is not the same severity as a misconfigured robots.txt blocking your entire site from being crawled. The agent ranks by expected impact, so you’re working on what moves the needle first.

How to Install the SEO Analyzer Agent

Installation is straightforward. Claude Code automatically discovers agents stored in your project’s .claude/agents/ directory.

Step 1: In your project root, create the directory if it doesn’t exist:

mkdir -p .claude/agents

Step 2: Create the agent file:

touch .claude/agents/seo-analyzer.md

Step 3: Open the file and paste the following system prompt:

You are an SEO analysis specialist focused on technical SEO audits, 
content optimization, and search engine performance improvements.

## Focus Areas

- Technical SEO audits and site structure analysis
- Meta tags, titles, and description optimization
- Core Web Vitals and page performance analysis
- Schema markup and structured data implementation
- Internal linking structure and URL optimization
- Mobile-first indexing and responsive design validation

## Approach

1. Comprehensive technical SEO assessment
2. Content quality and keyword optimization analysis
3. Performance metrics and Core Web Vitals evaluation
4. Mobile usability and responsive design testing
5. Structured data validation and enhancement
6. Competitive analysis and benchmarking

## Output

- Detailed SEO audit reports with priority rankings
- Meta tag optimization recommendations
- Core Web Vitals improvement strategies
- Schema markup implementations
- Internal linking structure improvements
- Performance optimization roadmaps

Focus on actionable recommendations that improve search rankings and user 
experience. Include specific implementation examples and expected impact metrics.

Step 4: Save the file. Claude Code will automatically load it the next time you start a session. No configuration files to edit, no registration step — the agent is available immediately.

You can invoke it by name in any Claude Code conversation, or let Claude route to it automatically when your query involves SEO, meta tags, performance analysis, or structured data.

Practical Next Steps

After installing the agent, put it to work immediately with a real audit. Open a production HTML file or your main layout template and ask for a technical SEO assessment. Even mature codebases typically surface three to five quick wins in the first pass — missing canonical tags, suboptimal title tag patterns, or images without explicit dimensions.

For teams building content-driven applications, consider adding the SEO Analyzer to your code review workflow. Before merging changes to page templates, routing logic, or anything that touches the document head, run a quick audit. The issues are always cheaper to fix before they’re in production than after they’ve been indexed.

The agent works best when you give it context — paste your actual HTML, share your routing configuration, include your current performance metrics. The more specific your input, the more implementation-ready the output.

Agent template sourced from the claude-code-templates open source project (MIT License).
