Database Optimizer: The Claude Code Agent That Eliminates Query Performance Bottlenecks
Every senior developer has been there: a production alert fires at 2am, users are complaining about slow load times, and you’re staring at a query execution plan trying to figure out why a straightforward lookup is taking over a second. Database performance problems are insidious — they compound silently as data volumes grow, they manifest differently across PostgreSQL, MySQL, and MongoDB, and diagnosing them correctly requires deep, simultaneous knowledge of query planning, indexing strategy, memory configuration, and I/O patterns.
The Database Optimizer agent for Claude Code encodes that expertise into a systematic workflow. Instead of context-switching between documentation tabs, running ad-hoc EXPLAIN ANALYZE queries, and manually cross-referencing index statistics, you get a single agent that handles the full performance analysis and remediation cycle — from identifying slow queries to implementing covering indexes to tuning buffer pool sizes. It’s the difference between spending three hours diagnosing a degraded analytics pipeline and having an actionable optimization plan in fifteen minutes.
When to Use This Agent
The Database Optimizer agent isn’t just for emergencies. It’s designed for any scenario where query performance needs systematic attention rather than ad-hoc guesswork.
Production Performance Incidents
A query that was running at 80ms six months ago is now taking 1.4 seconds. Data volume has grown, statistics are stale, and the query planner is choosing a suboptimal path. This agent analyzes execution plans, identifies missing or bloated indexes, and recommends targeted fixes without requiring you to already know the answer before you start.
Pre-Launch Performance Audits
Before a major feature ships, you want confidence that the new query patterns will hold up under load. The agent can analyze proposed queries against your actual schema, identify covering index gaps, and flag join patterns that will degrade as the table grows — before they become production incidents.
Scaling Bottlenecks in Data-Heavy Applications
Analytics platforms are particularly vulnerable to performance degradation as data volumes scale. What worked at 10 million rows starts to break down at 500 million. The agent evaluates partitioning strategies, assesses aggregation efficiency, and identifies whether your query patterns need columnar storage or materialized views to stay within acceptable SLAs.
Multi-Database Environments
Engineering teams running PostgreSQL for transactional data, MongoDB for document storage, and Redis for caching need consistent performance baselines across all systems. This agent handles cross-system audits, applying database-specific optimization strategies rather than forcing a one-size-fits-all approach.
Regular Performance Maintenance
Index bloat accumulates. Table statistics drift. Replication lag creeps up. Using the agent for scheduled performance reviews — not just incident response — catches these problems before they surface as user-facing slowdowns.
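In PostgreSQL, for example, a scheduled review might start with catalog queries like these (the system views are standard Postgres; the thresholds and ordering are illustrative choices, not targets the agent mandates):

```sql
-- Find indexes that are never used: candidates for removal or redesign.
-- idx_scan counts scans of each index since the last statistics reset.
SELECT schemaname, relname, indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;

-- Find tables whose planner statistics may be stale: many row
-- modifications since ANALYZE last ran on them.
SELECT relname, n_mod_since_analyze, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_mod_since_analyze DESC
LIMIT 20;
```

Running a review like this monthly surfaces dead indexes and drifting statistics long before they show up as user-facing latency.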
Real-World Examples
Scenario 1: Slow User Profile Lookups
A web application’s main user profile endpoint has degraded to 1.2 seconds per query despite moderate data volumes. The indexes exist but aren’t being used efficiently.
Developer: Our user profile queries are taking over a second. Can you analyze
the execution plans and suggest optimizations?
Agent: I'll analyze the execution plans, identify missing or inefficient indexes,
review table statistics, and test query rewrites. Let me benchmark the current
performance and implement strategic indexing adjustments.
First, I'll run EXPLAIN (ANALYZE, BUFFERS) on your profile lookup query to see
what the planner is actually doing versus what we'd expect. From there I'll check
index usage rates in pg_stat_user_indexes, look for sequential scans on large
tables, and assess whether your statistics are current. If the planner is
choosing a seq scan over an available index, it usually means either the
statistics are stale, the index isn't selective enough for the query pattern,
or we need a covering index that includes the SELECT columns to avoid heap
fetches entirely.
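A sketch of what that remediation could look like in PostgreSQL — the users table and its columns here are hypothetical stand-ins for the profile lookup being diagnosed:

```sql
-- Step 1: see what the planner actually does for the slow lookup.
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, display_name, avatar_url
FROM users
WHERE email = 'user@example.com';

-- Step 2: refresh planner statistics in case they are stale.
ANALYZE users;

-- Step 3: a covering index — the WHERE column plus INCLUDEd SELECT
-- columns — lets the planner use an index-only scan and skip heap
-- fetches entirely (the INCLUDE clause requires PostgreSQL 11+).
CREATE INDEX CONCURRENTLY idx_users_email_covering
ON users (email)
INCLUDE (id, display_name, avatar_url);
```

CREATE INDEX CONCURRENTLY avoids taking a write lock on the table, which matters when the fix ships against a live production system.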
Scenario 2: Analytics Platform Degrading at Scale
A data analytics platform has seen query response times grow from 500ms to 5 seconds as data volume increased tenfold. The team needs a scalable solution, not just index tweaks.
Developer: Our analytics queries have gotten 10x slower as our data grew.
What optimizations can help us scale?
Agent: I'll profile slow queries, analyze join patterns, review index coverage,
assess partitioning strategies, and optimize aggregations. I can design covering
indexes, implement table partitioning, and tune parallel query execution to
restore performance at your current data volumes — and design for the next
order of magnitude as well.
For analytical workloads at this scale, the usual culprits are full table scans
on fact tables without partition pruning, aggregations that can't use index-only
scans, and joins that spill to disk because work_mem is undersized for the
actual row widths. I'll also evaluate whether materialized views on your most
common aggregation patterns would trade acceptable staleness for a 10-100x
query speedup.
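Two of those techniques sketched in PostgreSQL syntax — the events fact table and its columns are hypothetical, and the monthly range boundary is just one reasonable partitioning choice:

```sql
-- Range-partition the fact table by month so queries filtering on
-- event_time only touch relevant partitions (partition pruning).
CREATE TABLE events (
    event_time  timestamptz NOT NULL,
    user_id     bigint NOT NULL,
    amount      numeric
) PARTITION BY RANGE (event_time);

CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

-- Precompute the most common aggregation once, then serve reads
-- from it — trading bounded staleness for large read-time speedups.
CREATE MATERIALIZED VIEW daily_totals AS
SELECT date_trunc('day', event_time) AS day,
       count(*)    AS events,
       sum(amount) AS total_amount
FROM events
GROUP BY 1;

-- Refresh on a schedule (e.g. via cron or the pg_cron extension).
REFRESH MATERIALIZED VIEW daily_totals;
```

The staleness window is set by the refresh cadence, so the acceptable lag for each dashboard or report drives how often the refresh runs.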
Scenario 3: Cross-System Performance Audit
A company running PostgreSQL, MySQL, and MongoDB in production wants a unified performance audit and consistent optimization baselines across all three systems.
Developer: We have several databases in production. How do we ensure all are
optimized and performing well?
Agent: I'll assess performance across all systems, identify common bottlenecks,
design database-specific optimization strategies, and establish performance
baselines. I can implement indexing strategies suited to each system's query
planner and data model.
For PostgreSQL I'll focus on index bloat, vacuum settings, and connection pool
configuration. For MySQL, I'll audit the InnoDB buffer pool size, slow query
log patterns, and covering index usage. For MongoDB, the focus is on index
selectivity, query plan cache behavior, and whether any collections need
compound index redesign. Each system gets a performance baseline document
so you have objective metrics to compare against after optimizations ship.
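For the two relational systems, baseline metrics can be captured with standard catalog queries like the following (MongoDB would use its own tooling, such as collection stats and the $indexStats aggregation stage, rather than SQL):

```sql
-- PostgreSQL: buffer cache hit rate across all databases.
-- nullif guards against division by zero on an idle instance.
SELECT sum(blks_hit) * 100.0
       / nullif(sum(blks_hit) + sum(blks_read), 0) AS cache_hit_pct
FROM pg_stat_database;

-- MySQL: current InnoDB buffer pool size and read behavior,
-- the inputs for computing its hit rate.
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
```

Capturing numbers like these before any changes ship is what makes the post-optimization comparison in the baseline document objective rather than anecdotal.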
What Makes This Agent Powerful
Systematic Optimization Checklist
The agent operates against a concrete set of performance targets: query time under 100ms, index usage above 95%, cache hit rate above 90%, lock waits below 1%, and replication lag under one second. These aren’t aspirational — they’re the benchmarks the agent uses to evaluate whether an optimization is actually complete, not just improved.
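Several of these targets map directly onto catalog queries. For example, the 100ms query-time target can be checked against the pg_stat_statements extension in PostgreSQL (assuming it is installed; the mean_exec_time column is available in version 13 and later):

```sql
-- Requires: CREATE EXTENSION pg_stat_statements;
-- Statements whose mean execution time exceeds the 100ms target.
SELECT query, calls, mean_exec_time
FROM pg_stat_statements
WHERE mean_exec_time > 100   -- milliseconds
ORDER BY mean_exec_time DESC
LIMIT 10;
```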
Full-Stack Database Knowledge
Most performance tools are database-specific. This agent covers PostgreSQL, MySQL, MongoDB, Redis, Cassandra, ClickHouse, Elasticsearch, and Oracle — and applies the right optimization techniques for each rather than pattern-matching from one system’s playbook onto another’s architecture.
End-to-End Workflow Coverage
The agent handles every layer of database performance: query rewriting and execution plan analysis, index design including partial and expression indexes, schema optimization including partitioning and normalization decisions, memory configuration for buffer pools and sort operations, I/O tuning, replication settings, and monitoring setup. You don’t need to coordinate across five different specialists or tools.
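Two of the index types mentioned above, sketched against hypothetical orders and customers tables:

```sql
-- Partial index: only index the rows a hot query actually filters on,
-- keeping the index small, selective, and cheap to maintain.
CREATE INDEX idx_orders_pending
ON orders (created_at)
WHERE status = 'pending';

-- Expression index: supports case-insensitive email lookups
-- (WHERE lower(email) = ...) without rewriting stored data.
CREATE INDEX idx_customers_email_lower
ON customers (lower(email));
```

A partial index only helps queries whose WHERE clause implies the index predicate, which is exactly the kind of match-up the agent verifies before recommending one.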
Advanced Technique Awareness
Beyond basic indexing, the agent can evaluate when to use materialized views, columnar storage, sharding patterns, and OLAP-specific optimizations — the kind of architectural decisions that require experience with how systems behave at scale, not just how they’re documented.
Monitoring and Observability Setup
Optimization without measurement is guesswork. The agent includes monitoring setup as a core deliverable: performance metrics, wait event tracking, lock analysis, trend analysis, and alert thresholds. You leave each optimization session with both the fix and the visibility to know if it holds.
How to Install
Installing the Database Optimizer agent takes about two minutes. Claude Code automatically loads any agents defined in the .claude/agents/ directory of your project.
Create the agent file at the following path in your project root:
.claude/agents/database-optimizer.md
Paste the full agent system prompt into that file and save it. The next time you open Claude Code in that project, the agent will be available. You can invoke it directly by asking Claude to use the database optimizer agent, or by referencing it in your workflow when working on database-related tasks.
If you want the agent available across all your projects rather than a single repository, place the file in your global Claude Code agents directory instead. Either way, no additional configuration, API keys, or plugin installation is required — the file’s presence is enough for Claude Code to recognize and load it.
Conclusion and Next Steps
Database performance problems are expensive — in engineering time, in user experience, and in infrastructure costs when the solution to a slow query is throwing more hardware at it rather than fixing the root cause. The Database Optimizer agent gives you a structured, expert-level approach to performance analysis and remediation without requiring every developer on your team to carry deep DBA expertise in their heads.
Start by pointing the agent at your most problematic slow queries. Run it against your current production slow query log and let it build a prioritized optimization plan. If you’re operating across multiple database systems, use it for a cross-system audit to establish honest performance baselines before your next growth inflection point hits.
The agent is most valuable when used proactively — not just when something is already on fire. Building it into your regular development workflow, alongside schema migrations and query reviews, is how you prevent the 2am incident in the first place.
Agent template sourced from the claude-code-templates open source project (MIT License).
