Sunday, April 5

Most sales teams are drowning in unqualified leads. Someone fills out a form, a rep spends 30 minutes researching them, writes a custom email, and the prospect never replies. Multiply that by 50 leads a week and you’ve got a serious throughput problem. An AI sales agent built on Claude can handle the qualification pass, score the prospect, and generate a tailored proposal draft — before a human ever touches the lead. This article shows you exactly how to build that, with working code and honest caveats about where it breaks down.

What the Agent Actually Does (And What It Doesn’t)

Let’s be specific about scope before touching any code. This agent handles three distinct jobs:

  • Lead qualification: Evaluating inbound lead data against an ideal customer profile (ICP)
  • Prospect scoring: Producing a numeric score with reasoning your reps can audit
  • Proposal drafting: Generating a personalized first-draft proposal the rep can refine and send

What it doesn’t do: close deals, handle objections in real-time, or replace the discovery call. Anyone selling you an AI agent that fully replaces your sales team is selling you fantasy. The value here is eliminating the repetitive cognitive work that kills your team’s capacity — not replacing human judgment on the deals that actually matter.

The architecture is straightforward: inbound lead data hits a webhook (from your CRM, Typeform, or wherever), gets passed to Claude with a structured prompt, and the output — score, qualification notes, and proposal draft — gets written back to your CRM or sent to a Slack channel for rep review. I’ll show the Claude integration in Python, but the same logic works inside n8n or Make if you prefer a no-code wrapper.

The Qualification Prompt: Where Most Implementations Fall Apart

The biggest mistake I see is sending Claude a blob of lead data and asking “is this a good lead?” Claude will give you an answer, but it’ll be inconsistent between runs, unauditable, and structurally useless. You need a structured output contract from the start.

Define your ICP criteria explicitly in the system prompt. For a B2B SaaS tool targeting mid-market ops teams, that might look like this:

SYSTEM_PROMPT = """
You are a sales qualification analyst for Acme Corp, a B2B SaaS platform for operations teams.

Our Ideal Customer Profile (ICP):
- Company size: 50–500 employees
- Industry: SaaS, fintech, or e-commerce
- Role of contact: VP Ops, Head of Operations, COO, or similar
- Pain point signals: mentions of manual processes, spreadsheet dependency, scaling issues
- Disqualifiers: freelancers, agencies, companies under 10 employees, government sector

You will receive lead data in JSON format. Respond ONLY with a JSON object containing:
{
  "qualification_score": <integer 0-100>,
  "tier": <"hot" | "warm" | "cold" | "disqualified">,
  "icp_match_reasons": [<list of strings explaining why this lead fits>],
  "concerns": [<list of strings about risks or gaps>],
  "recommended_action": <string, one sentence>,
  "proposal_context": <string, 2-3 sentences summarizing what to emphasize in the proposal>
}

Be concise. Do not explain your reasoning outside the JSON structure.
"""

That proposal_context field is the bridge to the next stage — it carries qualified signal into the proposal generator without needing to re-evaluate the lead from scratch.

Building the Qualification Function

import anthropic
import json
from typing import Optional

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from your environment; avoid hardcoding keys

def qualify_lead(lead_data: dict) -> Optional[dict]:
    """
    Takes a lead dict, returns structured qualification output.
    Returns None if Claude fails to return valid JSON (handle upstream).
    """
    lead_json = json.dumps(lead_data, indent=2)

    message = client.messages.create(
        model="claude-haiku-4-5",  # Haiku is fast and cheap for this; ~$0.001 per lead
        max_tokens=512,
        system=SYSTEM_PROMPT,
        messages=[
            {
                "role": "user",
                "content": f"Qualify this lead:\n\n{lead_json}"
            }
        ]
    )

    raw = message.content[0].text.strip()

    # Claude occasionally wraps JSON in markdown fences even when told not to
    if raw.startswith("```"):
        raw = raw.split("```")[1]
        if raw.startswith("json"):
            raw = raw[4:]

    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Log this — it happens maybe 2-3% of runs with complex lead data
        print(f"JSON parse failure. Raw output: {raw}")
        return None


# Example lead coming from a Typeform webhook or CRM
sample_lead = {
    "name": "Sarah Chen",
    "email": "sarah@growthops.io",
    "company": "GrowthOps",
    "company_size": "120 employees",
    "role": "Head of Operations",
    "industry": "SaaS",
    "message": "We're managing everything in spreadsheets and it's breaking down as we scale. Looking for something that can handle our ops workflows without requiring engineering resources.",
    "source": "website_contact_form"
}

result = qualify_lead(sample_lead)
print(json.dumps(result, indent=2))

Running this against the sample lead above, you’d expect a score in the 80–90 range, tier “hot”, with the ICP match reasons citing role alignment, company size, and the explicit pain point about spreadsheets. That’s the qualification stage done in about 800ms at roughly $0.001 per call using Haiku.
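For reference, here's what the parsed result for a lead like Sarah's might look like. This is illustrative only — the exact score and phrasing vary between runs; only the structure is pinned down by the output contract in SYSTEM_PROMPT:

```python
# Illustrative qualification output for the sample lead above.
# Actual values vary run to run; the keys are fixed by the contract.
example_result = {
    "qualification_score": 87,
    "tier": "hot",
    "icp_match_reasons": [
        "Head of Operations matches target contact role",
        "120 employees falls inside the 50-500 ICP band",
        "Explicit spreadsheet dependency and scaling pain signals",
    ],
    "concerns": ["No budget or timeline information in the inbound message"],
    "recommended_action": "Route to a rep for same-day outreach.",
    "proposal_context": (
        "Emphasize replacing spreadsheet-based ops workflows without "
        "engineering lift; lead is feeling scaling pressure at 120 people."
    ),
}

# Downstream code should validate the contract before trusting the payload
required_keys = {
    "qualification_score", "tier", "icp_match_reasons",
    "concerns", "recommended_action", "proposal_context",
}
assert required_keys <= example_result.keys()
```

Validating the keys up front means a malformed response fails loudly at the boundary instead of surfacing as a KeyError deep in the proposal generator.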

Why I Use Haiku Here Instead of Sonnet

Qualification is a pattern-matching task against explicit criteria. Haiku handles it accurately when the prompt is well-structured. Save Sonnet for the proposal drafting, where nuanced personalization actually matters. At 50 leads per week, using Haiku for qualification costs you about $0.05/week. Using Sonnet for the same task costs roughly 15x more with no meaningful quality gain on this specific job.

Drafting the Proposal with Sonnet

The proposal generator takes the qualification output — specifically the proposal_context and icp_match_reasons — plus your standard proposal template and produces a personalized first draft. The key is that the personalization is targeted, not generic. “I noticed you’re managing operations at a 120-person SaaS company” is not personalization. “Given your team’s dependency on spreadsheets for ops workflows and the scaling pressure you’re feeling at 120 people, here’s how we’d structure an implementation for you” — that’s personalization.

PROPOSAL_SYSTEM = """
You are a senior account executive writing a sales proposal for Acme Corp.
Write in a professional but direct tone — no fluff, no corporate speak.
The proposal should make the prospect feel understood, not sold to.

Structure:
1. Opening (2-3 sentences): Show you understand their specific situation
2. Recommended Solution (1 paragraph): Tailored to their use case
3. Key Outcomes (3 bullet points): Specific, quantifiable where possible
4. Next Step (1 sentence): A clear, low-friction call to action

Keep the total length under 350 words. This is an email proposal, not a PDF deck.
"""

def draft_proposal(lead_data: dict, qualification: dict) -> str:
    """
    Generates a personalized proposal draft.
    Uses Sonnet for quality — this is the output the rep will actually send.
    """
    context = f"""
Lead Information:
- Name: {lead_data['name']}
- Company: {lead_data['company']}
- Role: {lead_data['role']}
- Their message: {lead_data['message']}

Qualification Analysis:
- Score: {qualification['qualification_score']}/100
- Key fit reasons: {', '.join(qualification['icp_match_reasons'])}
- Context for proposal: {qualification['proposal_context']}

Write a proposal email for this prospect.
"""

    message = client.messages.create(
        model="claude-sonnet-4-5",  # ~$0.01-0.02 per proposal at current pricing
        max_tokens=600,
        system=PROPOSAL_SYSTEM,
        messages=[{"role": "user", "content": context}]
    )

    return message.content[0].text.strip()


# Only draft proposals for leads scoring above threshold
if result and result["qualification_score"] >= 60:
    proposal = draft_proposal(sample_lead, result)
    print(proposal)

Wiring It Into Your Stack

The functions above are the core logic. How you trigger them depends on your stack:

n8n Workflow

Set a Webhook node as the trigger, add an HTTP Request node pointing to a small FastAPI wrapper around these functions, then route the output to a Slack message (for rep review) and a HubSpot node (to update the contact record with score and proposal draft). The whole workflow takes about 20 minutes to build once your Python service is running.

Make (formerly Integromat)

Same pattern — Webhook → HTTP module → Router (hot/warm/cold) → Slack + CRM. Make’s HTTP module handles JSON response parsing cleanly, which saves you a custom parser step.

Standalone FastAPI Service

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class LeadPayload(BaseModel):
    name: str
    email: str
    company: str
    company_size: str
    role: str
    industry: str
    message: str
    source: str

@app.post("/qualify")
async def qualify_and_draft(lead: LeadPayload):
    lead_dict = lead.model_dump()  # .dict() is deprecated in Pydantic v2
    qualification = qualify_lead(lead_dict)

    if qualification is None:
        raise HTTPException(status_code=500, detail="Qualification parsing failed")

    response = {"qualification": qualification, "proposal": None}

    # Only generate proposals for leads worth pursuing
    if qualification["qualification_score"] >= 60:
        response["proposal"] = draft_proposal(lead_dict, qualification)

    return response

Deploy this on Railway or Render for under $10/month, point your CRM webhook at it, and you’re live.

What Actually Breaks in Production

Three things will bite you if you skip them:

  • Inconsistent lead data quality: Your ICP criteria assume clean inputs. When someone types “idk maybe 50 people?” as company size, the score degrades. Add a normalization step before the Claude call — a simple regex pass or a cheap preprocessing prompt that standardizes fields.
  • JSON parse failures: Claude Haiku will occasionally hallucinate markdown formatting around JSON, especially when the lead message contains code-like text or special characters. The startswith("```") check above handles the common case, but add proper retry logic with exponential backoff for production.
  • Score calibration drift: Your ICP criteria will evolve. A score of 75 means something different after you’ve refined your target market. Build in a human feedback loop — let reps mark qualification results as accurate/inaccurate — and re-evaluate your system prompt quarterly against that data.

Cost Reality Check

For a team processing 200 leads/month:

  • Qualification (Haiku): 200 × ~$0.001 = ~$0.20/month
  • Proposals for 40% of leads (Sonnet): 80 × ~$0.015 = ~$1.20/month
  • Total Claude API cost: under $2/month

Add your hosting ($10/month on Railway), and you’re running an AI sales agent infrastructure for under $15/month. The ROI math against a rep spending 30 minutes per lead on manual qualification is not subtle.
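If you want to put a number on "not subtle," here's the back-of-envelope math under the assumptions stated above (200 leads/month, 30 minutes of manual research and drafting per lead):

```python
# Rep time reclaimed by automating the qualification pass
leads_per_month = 200
minutes_per_lead = 30  # manual research + custom email, per the intro
hours_saved = leads_per_month * minutes_per_lead / 60
print(hours_saved)  # 100.0 hours of rep time per month, vs ~$15/month in cost
```

One hundred hours a month is more than half a full-time rep's capacity redirected from grunt work to actual selling.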

Who Should Build This and Who Shouldn’t

Solo founders and small sales teams (under 5 reps): Build this. Your biggest constraint is throughput and consistency. This solves both and costs almost nothing to run. Spend a weekend on the FastAPI wrapper and n8n workflow and you’ll have it live.

Mid-size teams with existing CRM automation: Add the qualification and proposal functions as a service your existing CRM webhooks hit. Don’t rebuild your whole stack — bolt this onto HubSpot or Salesforce via their webhook/workflow tools.

Enterprises with complex procurement or highly technical products: The proposal drafting quality may not meet your bar without significant prompt engineering and human-in-the-loop review. Use this for the first qualification pass only, and keep proposal drafting in human hands until you’ve validated output quality against your specific product complexity.

The AI sales agent pattern works best when your ICP is well-defined and your qualification criteria are explicit. If your sales team still debates internally whether a given company profile is a good fit, get that alignment first — Claude can only be as consistent as the criteria you give it.

Editorial note: API pricing, model capabilities, and tool features change frequently — always verify current details on the vendor’s website before building in production. Code examples are tested at time of writing; pin your dependency versions to avoid breaking changes. Some links in this article may be affiliate links — we may earn a commission if you sign up, at no extra cost to you.
