Architecture Modernizer: The Claude Code Agent That Guides You Through Legacy System Transformations
Modernizing a legacy architecture is one of the most expensive, risky, and time-consuming undertakings in software engineering. It’s not just about rewriting code — it’s about understanding domain boundaries, designing event flows, managing data migration, preserving uptime, and communicating the plan across engineering teams. Most developers have gone through this at least once and know how quickly it becomes a months-long slog of architectural debates, half-finished diagrams, and late-night production incidents.
The Architecture Modernizer agent for Claude Code is designed to short-circuit that process. Rather than spending days researching decomposition strategies or arguing about service boundaries in Confluence, you get an experienced architectural collaborator available in your terminal — one that understands domain-driven design, the Strangler Fig pattern, CQRS, event-driven systems, and resilience engineering at a level that lets you move from “we need to break this monolith apart” to a concrete, phased plan in a single working session.
This is not a generic “help me write code” agent. It’s a specialist. And that specificity is exactly what makes it useful.
When to Use the Architecture Modernizer
This agent earns its place when you’re facing architectural decisions that have long-term structural consequences. Here are the real-world scenarios where it pays off immediately:
- Decomposing a monolith under scaling pressure: Your Rails or Django app has served you well, but you’re hitting database bottlenecks and deployment coupling is slowing down your teams. You need a decomposition strategy that doesn’t require a flag-day rewrite.
- Designing a greenfield microservices system: You’re building something new and want to get the service boundaries right the first time, based on actual domain boundaries rather than org chart mirroring.
- Migrating from synchronous to event-driven architecture: You have point-to-point REST integrations between services that are creating tight coupling, cascading failures, and deployment nightmares. You want to introduce a message broker but need a coherent design.
- Implementing CQRS and read/write separation: Your query patterns are diverging from your write patterns and you’re bending your data model into uncomfortable shapes to accommodate both.
- Planning data migration strategies: You need to move from a shared monolithic database to per-service databases without taking the system offline or losing data integrity.
- Adding resilience to distributed systems: You’ve gone microservices but now you have cascading failures, no observability, and no circuit breakers. You need a systematic approach to hardening the system.
- Preparing for an architectural review: You’re presenting a modernization plan to senior engineering leadership and need structured, defensible recommendations backed by industry patterns.
Real-World Examples
Scenario 1: Breaking Apart a Legacy E-Commerce Monolith
A senior engineer is working on a seven-year-old Java monolith handling inventory, orders, payments, and customer management. The team has been told to “go microservices” but has no roadmap.
Developer: We have a Spring Boot monolith with about 400k lines of code covering
inventory, orders, payments, shipping, and customer profiles. All on one Postgres
database. We need to decompose this without a big bang rewrite. Where do we start?
Agent: Start with domain-driven design and event storming before touching a single
line of code. Your first task is identifying bounded contexts — the natural seams
in your domain where language, ownership, and data models diverge.
Based on what you've described, your candidate bounded contexts are likely:
- Order Management (lifecycle: placed → confirmed → fulfilled → closed)
- Inventory (stock levels, reservations, replenishment)
- Payments (authorization, capture, refunds — likely a good candidate for early
extraction since it has the clearest external boundary via payment gateway APIs)
- Shipping & Fulfillment (carrier integration, tracking)
- Customer Identity (authentication, profile, preferences)
For extraction sequencing, use the Strangler Fig pattern. Don't extract the most
complex domain first. Extract Payments first — it has a clear API boundary, well-
understood failure modes, and the monolith already treats it somewhat separately
via your gateway integration. Build the new Payment Service, route new traffic to
it via an API gateway, and let the monolith's payment code atrophy.
For the shared Postgres database, don't try to split it immediately. In phase one,
introduce logical schema separation — each future service owns its schema namespace.
Enforce it through application-layer access controls. Physical database separation
comes in phase two, after service boundaries are validated.
What does your current inter-module communication look like inside the monolith —
direct method calls, shared repositories, or something else?
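The traffic-routing step of a Strangler Fig extraction like the one the agent describes can be sketched as a tiny gateway shim. This is a minimal illustration under stated assumptions — the service URLs, route prefix, and percentage-based rollout knob are invented for the example, not part of the agent's output:

```python
import random

# Hypothetical routing table: which extracted services exist and what
# fraction of traffic each should receive during the migration.
ROLLOUT = {
    "/payments": {"target": "http://payment-service:8080", "fraction": 0.10},
}

MONOLITH = "http://monolith:8080"

def route(path: str) -> str:
    """Pick a backend for a request path.

    Paths under migration go to the new service for a configurable
    fraction of traffic; everything else still hits the monolith, so
    the old code path atrophies gradually instead of being cut over
    in one flag-day switch.
    """
    for prefix, rule in ROLLOUT.items():
        if path.startswith(prefix) and random.random() < rule["fraction"]:
            return rule["target"]
    return MONOLITH
```

In practice this logic lives in an API gateway (nginx, Envoy, Spring Cloud Gateway) rather than application code; the point is that a route prefix and a rollout fraction are the only two knobs an incremental cutover needs.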
Scenario 2: Designing an Event-Driven Order Processing System
A team is building a new order fulfillment pipeline and wants to use Kafka but isn’t sure how to structure the event flows.
Developer: We're building an order processing system from scratch. We want event-
driven with Kafka. The flow is: order placed → inventory reserved → payment
charged → shipping label created → customer notified. How do we model this?
Agent: Model this as a choreography-based saga, not orchestration. Each service
listens for domain events and publishes its own — no central coordinator.
Your event topology:
Topic: orders.placed
Publisher: Order Service
Consumers: Inventory Service, Analytics
Topic: inventory.reserved
Publisher: Inventory Service
Consumers: Payment Service
Topic: inventory.reservation_failed
Publisher: Inventory Service
Consumers: Order Service (triggers order cancellation flow)
Topic: payment.captured
Publisher: Payment Service
Consumers: Shipping Service
Topic: payment.failed
Publisher: Payment Service
Consumers: Order Service, Inventory Service (release reservation)
Topic: shipment.label_created
Publisher: Shipping Service
Consumers: Notification Service, Order Service (status update)
Key design decisions to make now:
1. Event schema registry — use Confluent Schema Registry or AWS Glue. Enforce
backward-compatible schema evolution from day one.
2. Idempotency — every consumer must be idempotent. Use idempotency keys on
payment capture, deduplication on inventory reservation.
3. Saga failure compensation — map every failure path before you write code.
Inventory reservation failure must trigger order cancellation. Payment failure
must release inventory. These are not edge cases; design them as first-class flows.
4. Outbox pattern — use transactional outbox for Order Service to guarantee that
database writes and event publishes are atomic.
What's your target latency for the happy path, and are you self-hosting Kafka or
using a managed service?
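Two of the agent's design decisions — consumer idempotency and first-class failure compensation — can be sketched together with the Inventory Service from the topology above. Broker plumbing is replaced with plain callables so the dedup and compensation logic stand out; everything beyond the topic names (event fields, the in-memory stores) is an illustrative assumption:

```python
# Minimal sketch of one choreography participant: the Inventory Service.
processed_ids: set[str] = set()   # in production: a durable dedup store
stock = {"sku-1": 5}              # in production: the service's own database

def handle_order_placed(event: dict, publish) -> None:
    """Consume orders.placed; publish inventory.reserved or
    inventory.reservation_failed. Safe to redeliver the same event."""
    if event["event_id"] in processed_ids:
        return  # duplicate delivery: already handled, do nothing
    processed_ids.add(event["event_id"])

    sku, qty = event["sku"], event["quantity"]
    if stock.get(sku, 0) >= qty:
        stock[sku] -= qty
        publish("inventory.reserved", {"order_id": event["order_id"]})
    else:
        # Failure is a first-class flow, not an exception: this event
        # triggers the Order Service's cancellation path.
        publish("inventory.reservation_failed",
                {"order_id": event["order_id"]})
```

Note that a real implementation would make the dedup check, the stock update, and the outgoing publish atomic — which is exactly the problem the transactional outbox pattern solves.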
What Makes This Agent Powerful
Domain-Driven Design as a First Principle
The agent doesn’t jump to technical solutions. It leads with bounded contexts, ubiquitous language, and domain modeling. This prevents the most common microservices failure mode: services cut along technical layers rather than business domains, leading to chatty interfaces and distributed monoliths.
Pattern Fluency
The agent has deep familiarity with the patterns that matter in modernization work: Strangler Fig, Saga (choreography and orchestration), CQRS, Event Sourcing, Outbox Pattern, API Gateway, Circuit Breaker, Sidecar, and more. It applies them contextually, not as buzzword decoration.
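To make one of those patterns concrete, here is a deliberately minimal circuit breaker sketch — the thresholds and the single-probe half-open behavior are simplifying assumptions, and production systems would reach for a hardened library instead:

```python
import time

class CircuitBreaker:
    """Open after N consecutive failures, fail fast while open, then
    allow one probe call after a cooldown (half-open); close on success."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one probe through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the circuit
        return result
```

The value in a cascading-failure scenario is the fail-fast branch: while the circuit is open, callers get an immediate error instead of tying up threads waiting on a dying downstream.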
Migration Safety by Default
Every recommendation from this agent includes considerations for maintaining system reliability during the transition. It thinks in phases, rollback procedures, and incremental validation — because the agent’s system prompt explicitly mandates it. This is the difference between an architectural diagram and an architectural plan.
Data Architecture Depth
Most architecture conversations stall when they hit the database. This agent addresses data migration and synchronization strategies directly — including the politically difficult question of the shared monolithic database — with practical approaches like logical schema separation before physical split, dual-write patterns, and event-sourced migration strategies.
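The dual-write pattern mentioned here can be sketched in a few lines. This is a simplified illustration, assuming store objects with a `save` method; the key design choice it shows is that the legacy database stays authoritative while the new per-service store is written best-effort:

```python
import logging

log = logging.getLogger("dual-write")

def save_order(order: dict, legacy_db, new_db) -> None:
    """Dual-write during migration: the legacy store remains the source
    of truth; the shadow write to the new store is best-effort, and any
    failure is logged for offline reconciliation instead of failing
    the live request."""
    legacy_db.save(order)          # authoritative until cutover
    try:
        new_db.save(order)         # shadow write to the new store
    except Exception:
        # Never let the migration target break the live path;
        # a reconciliation job replays missed writes later.
        log.exception("shadow write failed for order %s", order["id"])
```

Once the shadow store is verified consistent, the roles flip: the new store becomes authoritative, the legacy write becomes the shadow, and eventually the legacy write is removed entirely.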
Observability as a Design Concern
Distributed tracing, structured logging, metrics, and alerting are treated as architectural requirements, not afterthoughts. The agent will surface observability gaps in your proposed design before they become production incidents.
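The simplest version of this requirement is a correlation ID stamped on every structured log line, so one request can be followed across services. A minimal sketch, with the field names chosen for illustration:

```python
import json
import uuid

def emit(service: str, message: str, correlation_id: str, **fields) -> str:
    """Emit one structured (JSON) log line. Every hop logs the same
    correlation_id, so a single order can be traced across services."""
    record = {"service": service, "correlation_id": correlation_id,
              "message": message, **fields}
    line = json.dumps(record, sort_keys=True)
    print(line)
    return line

# A new ID is minted at the edge and carried along with every event.
cid = str(uuid.uuid4())
emit("order-service", "order placed", cid, order_id="o-42")
emit("inventory-service", "stock reserved", cid, order_id="o-42")
```

Full distributed tracing (OpenTelemetry spans, trace context propagated in message headers) is the production-grade version of the same idea.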
How to Install the Architecture Modernizer Agent
Installing Claude Code agents is straightforward. The agent runs as a subagent within Claude Code, loaded automatically from a local markdown file.
Follow these steps:
- Navigate to the root of your project (or your home directory if you want the agent available globally).
- Create the directory `.claude/agents/` if it doesn't exist.
- Create a new file at `.claude/agents/architecture-modernizer.md`.
- Paste the following system prompt as the file contents:
---
name: architecture-modernizer
description: Software architecture modernization specialist. Use PROACTIVELY for monolith decomposition, microservices design, event-driven architecture, and scalability improvements.
---
You are an architecture modernization specialist focused on transforming legacy systems into modern, scalable architectures.
## Focus Areas
- Monolith decomposition into microservices
- Event-driven architecture implementation
- API design and gateway implementation
- Data architecture modernization and CQRS
- Distributed system patterns and resilience
- Performance optimization and scalability
## Approach
1. Domain-driven design for service boundaries
2. Strangler Fig pattern for gradual migration
3. Event storming for business process modeling
4. Bounded contexts and service contracts
5. Observability and distributed tracing
6. Circuit breakers and resilience patterns
## Output
- Service decomposition strategies and boundaries
- Event-driven architecture designs and flows
- API specifications and gateway configurations
- Data migration and synchronization strategies
- Distributed system monitoring and alerting
- Performance optimization recommendations
Include comprehensive testing strategies and rollback procedures. Focus on maintaining system reliability during transitions.
Once the file is saved, Claude Code loads it automatically. You can invoke it in your Claude Code session by referencing the agent by name, or Claude Code will surface it proactively when your prompts involve architectural topics that match its description.
No configuration files, no API keys, no build steps. The file is the agent.
Conclusion: Practical Next Steps
If you have a modernization project on your roadmap — even a vague one — install this agent today and start a session with your most pressing architectural question. Don’t wait until you have a formal project kickoff. The most valuable use of this agent is often early-stage thinking: validating your domain model, stress-testing your decomposition strategy before anyone writes a line of code, or getting a second opinion on a migration sequence.
Concretely: open your terminal, create the agent file, then describe your current system to it — technology stack, rough domain areas, the pain points driving the modernization. Ask it to propose a phased decomposition plan. Evaluate the output against what your team has been debating. The goal isn’t to blindly follow the agent’s output; it’s to use it as a structured thinking partner that forces rigor into a process that typically runs on gut feel and conference talk slides.
Architecture decisions made poorly cost quarters of engineering time to unwind. An agent that helps you make them carefully is worth installing before your next sprint planning meeting.
Agent template sourced from the claude-code-templates open source project (MIT License).
