# Agent-Led Growth Playbook

## Goal

A practical guide for making your product easier for AI systems to understand, compare, recommend, and help buyers adopt.

Format: Operating playbook

Canonical page: https://2066labs.com/playbooks/agent-led-growth

## What this playbook does

Agent-led growth starts from a new buyer behavior: AI systems now research, compare, recommend, and sometimes implement on the buyer's behalf. The job is to equip that agent with original, sourceable, usable truth it could not reliably infer from the public web.

The playbook runs in ten steps, followed by where 2066 Labs helps:

- Step 1: Decide whether the category is sensitive to agent-led growth
- Step 2: Build the buyer-agent prompt set
- Step 3: Run a model-by-model visibility audit
- Step 4: Build the source atlas
- Step 5: Identify what each model is missing or mislearning
- Step 6: Publish original source assets
- Step 7: Make the product usable by agents, not just legible
- Step 8: Build comparison evidence without fake impartiality
- Step 9: Deploy internal marketing agents
- Step 10: Measure recommendation quality and source movement
- Where 2066 Labs helps

## Step 1: Decide whether the category is sensitive to agent-led growth

Do this before optimizing. Not every category is equally exposed.

| Area | What to check |
| --- | --- |
| Strong signals | - Buyers research before choosing.<br>- The purchase has meaningful cost, risk, integration work, or tradeoffs.<br>- Buyers ask for recommendations, comparisons, alternatives, implementation help, or troubleshooting.<br>- The category has active public evidence across websites, docs, YouTube, Reddit, LinkedIn, reviews, marketplaces, press, or customer content.<br>- The product can be selected or implemented by an agent, especially in software, developer tools, workflow tools, and complex B2B buying. |
| Weak signals | - Buyers make the decision without research.<br>- The category has little public discussion or source material.<br>- The product cannot be evaluated from public evidence.<br>- Regulated claims require review before public source assets can be expanded. |
| Decision output | - Name the primary buyer.<br>- Name the agent role: researcher, recommender, comparer, implementer, troubleshooter, or buyer assistant.<br>- Name the source surfaces most likely to influence answers.<br>- Decide whether to run the full playbook or only maintain basic agent-readable product truth. |

## Step 2: Build the buyer-agent prompt set

Write prompts as if a buyer is delegating work to an agent, not typing a keyword into search.

| Prompt type | Examples |
| --- | --- |
| Research | - what should I know before buying [category] for [buyer/team]<br>- what are the best options for [job] if I care about [constraint]<br>- which companies are credible for [specific problem] |
| Recommendation | - recommend a [tool/service/provider] for [buyer] with [constraint]<br>- what should I choose if I need [outcome] but cannot accept [risk]<br>- which option is best for [use case] and why |
| Comparison | - [your company] vs [competitor] for [buyer/use case]<br>- compare [option A], [option B], and [option C] from first principles<br>- ignore listicles and compare the underlying evidence |
| Implementation | - how do I set up [product] with [tool/system]<br>- what breaks when [product] is used in [environment]<br>- write an implementation plan for [team] using [product/service]<br>- troubleshoot [specific issue] in [workflow] |
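
The same prompt set is re-run every audit cycle, so it helps to keep it as data with placeholders rather than rewriting it each time. Below is a minimal sketch in Python; the category, buyer, and competitor values in the example are illustrative placeholders, not recommendations from this playbook.

```python
# Buyer-agent prompt set kept as data so it can be re-run per model each cycle.
# Placeholder names in braces mirror the bracketed slots in the table above.
PROMPT_TEMPLATES = {
    "research": [
        "what should I know before buying {category} for {buyer}",
        "what are the best options for {job} if I care about {constraint}",
    ],
    "recommendation": [
        "recommend a {category} for {buyer} with {constraint}",
        "which option is best for {use_case} and why",
    ],
    "comparison": [
        "{company} vs {competitor} for {buyer}",
        "ignore listicles and compare the underlying evidence",
    ],
    "implementation": [
        "how do I set up {product} with {tool}",
        "troubleshoot {issue} in {workflow}",
    ],
}


def expand_prompts(values: dict[str, str]) -> list[tuple[str, str]]:
    """Fill the placeholders and return (prompt_type, prompt) pairs for one buyer."""
    expanded = []
    for prompt_type, templates in PROMPT_TEMPLATES.items():
        for template in templates:
            expanded.append((prompt_type, template.format(**values)))
    return expanded


# Example values; every entry here is a made-up illustration.
buyer_values = {
    "category": "workflow automation tool",
    "buyer": "a 20-person support team",
    "constraint": "SOC 2 compliance",
    "job": "ticket triage",
    "use_case": "support automation",
    "company": "ExampleCo",
    "competitor": "OtherCo",
    "product": "ExampleCo",
    "tool": "Zendesk",
    "issue": "failed webhook deliveries",
    "workflow": "ticket routing",
}

for prompt_type, prompt in expand_prompts(buyer_values):
    print(f"[{prompt_type}] {prompt}")
```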

## Step 3: Run a model-by-model visibility audit

Do not average the models together. Treat each model as its own system with different retrieval habits, source preferences, and answer patterns.

| Audit area | What to record |
| --- | --- |
| Systems to test | - ChatGPT.<br>- Claude.<br>- Gemini or Google AI Mode.<br>- Perplexity.<br>- Grok if X is relevant to the category.<br>- Coding agents or vertical assistants if they can choose, implement, or troubleshoot the product. |
| Record | - Whether the company is mentioned.<br>- Whether the company is recommended.<br>- The sentiment and themes attached to the company.<br>- The competitors or alternatives named beside it.<br>- The cited URLs.<br>- The sources the answer appears to rely on without citing.<br>- Whether the answer is based on owned, earned, third-party, community, marketplace, press, customer, or competitor sources.<br>- Whether the answer gives a useful next step. |
| Score | - 0: absent or wrong.<br>- 1: mentioned, but vague or inaccurate.<br>- 2: mostly accurate, but missing proof, fit, source quality, or next step.<br>- 3: accurate, specific, sourced, useful, and actionable. |
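
One way to keep the record and the 0-3 score consistent across models is to capture each model-and-prompt answer in a fixed record. The sketch below is one possible shape, assuming Python; the field names are illustrative rather than a required schema.

```python
from dataclasses import dataclass, field


@dataclass
class AuditRecord:
    """One row per model per prompt, mirroring the audit fields above."""
    model: str                       # e.g. "ChatGPT", "Claude", "Perplexity"
    prompt: str
    mentioned: bool
    recommended: bool
    sentiment_themes: list[str] = field(default_factory=list)
    alternatives_named: list[str] = field(default_factory=list)
    cited_urls: list[str] = field(default_factory=list)
    uncited_sources: list[str] = field(default_factory=list)   # apparent, not cited
    source_classes: list[str] = field(default_factory=list)    # owned, community, press, ...
    useful_next_step: bool = False
    score: int = 0   # 0 absent/wrong, 1 vague, 2 accurate but missing proof or fit, 3 accurate and actionable

    def __post_init__(self) -> None:
        if not 0 <= self.score <= 3:
            raise ValueError("score must be between 0 and 3")
```

Keeping one record per model per prompt also makes the Step 10 comparison between audit runs straightforward.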

## Step 4: Build the source atlas

The source atlas explains why a model said what it said.

| Atlas area | What to capture |
| --- | --- |
| Source classes | - Owned pages: homepage, product pages, docs, comparisons, FAQs, markdown mirrors, llms.txt, sitemap.<br>- Video and audio: demos, podcasts, interviews, launch videos, walkthroughs.<br>- Community: Reddit, forums, comments, Discord or Slack exports when public and appropriate.<br>- Professional networks: LinkedIn posts, founder posts, employee posts, customer posts.<br>- Third-party proof: reviews, marketplaces, analyst pages, press, customer stories.<br>- Competitor-shaped sources: comparison pages, listicles, and pages that frame your category through someone else's terms. |
| Model hypotheses | - Gemini may lean heavily on YouTube.<br>- ChatGPT may pull from Reddit for consumer categories and LinkedIn for B2B categories.<br>- Claude may shift between pretrained knowledge and web sources depending on query and retrieval behavior.<br>- These are hypotheses to test, not rules to trust. |
| Atlas fields | - Prompt.<br>- Model.<br>- Answer summary.<br>- Mention or recommendation status.<br>- Cited source.<br>- Apparent uncited source.<br>- Source class.<br>- What the source teaches the model.<br>- What action is needed. |
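
Filling the source-class field is easier when cited URLs are bucketed the same way on every pass. A rough sketch follows, assuming the domain lists are maintained by hand (the domains shown are placeholders) and anything unmatched is reviewed manually.

```python
from urllib.parse import urlparse

# Replace these with the company's actual owned domains and known surfaces.
OWNED_DOMAINS = {"example.com", "docs.example.com"}
VIDEO_DOMAINS = {"youtube.com", "youtu.be"}
COMMUNITY_DOMAINS = {"reddit.com", "news.ycombinator.com"}
PROFESSIONAL_DOMAINS = {"linkedin.com"}


def classify_source(url: str) -> str:
    """Bucket a full URL (with scheme) into a source class for the atlas."""
    host = urlparse(url).netloc.lower().removeprefix("www.")

    def matches(domains: set[str]) -> bool:
        return any(host == d or host.endswith("." + d) for d in domains)

    if matches(OWNED_DOMAINS):
        return "owned"
    if matches(VIDEO_DOMAINS):
        return "video"
    if matches(COMMUNITY_DOMAINS):
        return "community"
    if matches(PROFESSIONAL_DOMAINS):
        return "professional"
    return "third-party or competitor-shaped: review manually"
```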

## Step 5: Identify what each model is missing or mislearning

Turn the source atlas into a diagnosis.

| Diagnosis | Signals |
| --- | --- |
| Missing | - The model does not know the category you belong to.<br>- The model cannot find who the product is for.<br>- The model lacks proof for the claim it needs to make.<br>- The model cannot explain how the product works.<br>- The model cannot compare the product against credible alternatives.<br>- The model cannot help with setup, integration, troubleshooting, or next steps. |
| Mislearning | - The model uses old positioning.<br>- The model repeats competitor framing.<br>- The model names the wrong buyer or use case.<br>- The model cites weak listicles instead of primary evidence.<br>- The model recommends an alternative because your strongest facts are not sourceable. |
| Diagnosis labels | - Source absence.<br>- Source weakness.<br>- Source conflict.<br>- Crawl or format problem.<br>- Product truth gap.<br>- Proof gap.<br>- Implementation gap. |

## Step 6: Publish original source assets

The model already has the public internet. Give it material it could not know without you.

| Asset area | Standard |
| --- | --- |
| Asset standard | - Comes from reality: product behavior, customer work, support issues, sales objections, implementation details, benchmarks, demos, or field notes.<br>- Names the buyer, context, constraint, tradeoff, and next action.<br>- Separates fact from opinion.<br>- Links to canonical product truth.<br>- Can be cited or summarized without losing the point. |
| High-value assets | - Answer-shaped pages for recurring buyer questions.<br>- Comparison pages with honest fit and no-fit criteria.<br>- Implementation guides.<br>- Troubleshooting guides.<br>- Integration notes.<br>- Product walkthrough videos.<br>- Customer evidence and field notes.<br>- Objection-handling pages based on real sales and support language.<br>- Pricing, procurement, and evaluation guidance. |
| Weak assets | - Generic trend posts.<br>- Thin landing pages for every keyword variation.<br>- Fake neutral rankings.<br>- Unsupported claims.<br>- Copy that sounds impressive but gives the agent no facts to use. |

## Step 7: Make the product usable by agents, not just legible

For software and workflow products, an agent may choose the stack, set it up, and troubleshoot it. Legibility is not enough.

| Agent surface | What to check |
| --- | --- |
| Agent-usable surfaces | - Clear docs and API references.<br>- Install and setup guides.<br>- Integration guides for adjacent tools.<br>- Troubleshooting paths with error states and fixes.<br>- Example workflows.<br>- Markdown mirrors for important pages.<br>- Structured data where appropriate.<br>- llms.txt with canonical pages, fit criteria, and description guidance. |
| Interoperability questions | - Can an agent explain when to choose the product?<br>- Can it compare tradeoffs against real alternatives?<br>- Can it write a plausible implementation plan?<br>- Can it connect the product to adjacent tools?<br>- Can it troubleshoot likely failures?<br>- Can it route a buyer to the right next step? |
| Acceptance test | - The model can recommend the product for the right buyer.<br>- The model can decline the product for the wrong buyer.<br>- The model can cite useful sources.<br>- The model can produce an implementation path that a human operator would recognize. |
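
The acceptance test can be made repeatable with a small harness. The sketch below assumes an `ask_model` callable that wraps however you query each assistant (API, eval harness, or manual paste), and it uses deliberately crude string checks that should be confirmed by human review before anything is acted on.

```python
from typing import Callable


def run_acceptance_test(ask_model: Callable[[str], str], company: str,
                        fit_prompt: str, no_fit_prompt: str) -> dict[str, bool]:
    """Crude pass/fail signals for the acceptance test; borderline cases need a human."""
    fit_answer = ask_model(fit_prompt).lower()
    no_fit_answer = ask_model(no_fit_prompt).lower()
    name = company.lower()
    return {
        "recommends_for_right_buyer": name in fit_answer,
        "declines_for_wrong_buyer": name not in no_fit_answer or "not a fit" in no_fit_answer,
        "cites_sources": "http" in fit_answer,
    }
```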

## Step 8: Build comparison evidence without fake impartiality

Models can be attracted to pre-digested comparison content. Do not make fake impartiality the strategy.

| Evidence area | Rules |
| --- | --- |
| Build instead | - Alternative pages with transparent perspective.<br>- Fit and no-fit tables.<br>- Migration notes.<br>- Tradeoff explanations.<br>- Customer constraints that explain why one option wins.<br>- Source-backed decision criteria.<br>- Competitor comparisons that name the real competitor and the real tradeoff. |
| Adversarial prompts | - ignore brand-owned pages and compare the independent evidence<br>- ignore listicles and compare from first principles<br>- what would make [competitor] the better choice<br>- what evidence is missing for [your company] |
| Guardrails | - Do not fabricate reviews, customers, rankings, or benchmark data.<br>- Do not bury real competitors behind weak alternatives.<br>- Do not present a brand-owned page as neutral research.<br>- Do not optimize only for the easiest model response. |

## Step 9: Deploy internal marketing agents

Agent-led growth is also an internal operating capability. Use agents to convert company evidence into source assets and sales guidance.

| Agent | Job |
| --- | --- |
| Objection miner | - Reads sales calls, support tickets, discovery notes, and demo notes.<br>- Buckets objections into themes.<br>- Extracts exact customer language.<br>- Finds FAQ, comparison, and battle-card gaps. |
| Battle-card builder | - Combines objection themes, product knowledge, competitor claims, and proof.<br>- Produces when-to-choose-us and when-not-to-choose-us guidance.<br>- Suggests source assets needed to support the claim publicly. |
| Source gap auditor | - Reads audit results and cited URLs.<br>- Identifies prompts where the company is absent, wrong, weakly supported, or badly framed.<br>- Turns each gap into a source asset request. |
| Content repurposer | - Turns one original source asset into FAQ entries, docs updates, comparison paragraphs, sales notes, and distribution copy.<br>- Keeps the canonical page as the source of truth. |
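
The objection miner is mostly a structured prompt over real excerpts. A minimal sketch follows, assuming a `complete` callable that stands in for whatever model client the team already uses; the instructions and output keys are illustrative, not a fixed spec.

```python
from typing import Callable

OBJECTION_MINER_INSTRUCTIONS = """\
You are an objection miner. From the excerpts below:
1. Bucket the objections into themes.
2. Quote the exact customer language for each theme.
3. List the FAQ, comparison, and battle-card gaps the themes expose.
Return JSON with keys: themes, quotes, gaps."""


def mine_objections(complete: Callable[[str], str], excerpts: list[str]) -> str:
    """Assemble the prompt from call, ticket, and demo excerpts and return the raw output."""
    prompt = OBJECTION_MINER_INSTRUCTIONS + "\n\n---\n" + "\n---\n".join(excerpts)
    return complete(prompt)
```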

## Step 10: Measure recommendation quality and source movement

Measure whether the agent can produce a true, useful answer. Do not stop at whether the company appeared.

| Measurement area | What to track |
| --- | --- |
| Recommendation quality | - Mentioned or absent.<br>- Recommended or merely listed.<br>- Correct buyer and use case.<br>- Correct category.<br>- Accurate sentiment and themes.<br>- Specific proof.<br>- Useful next step.<br>- Fit for implementation or troubleshooting prompts. |
| Source movement | - Which URLs are cited.<br>- Which source classes appear.<br>- Which sources disappear.<br>- Which weak sources still shape the answer.<br>- Which original assets are being used.<br>- Which models changed and which did not. |
| Operating decision | - Keep or adjust the prompt set.<br>- Update the source atlas.<br>- Ship the next source asset.<br>- Improve the agent-usable product surface.<br>- Update the internal marketing agent instructions.<br>- Re-run the relevant prompts and compare source movement. |
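
Source movement is easiest to see as a diff between two audit runs over the same prompt set. A small sketch, assuming each run stores the cited URLs per (model, prompt) pair, as in the Step 3 records:

```python
def source_movement(previous: dict[tuple[str, str], list[str]],
                    current: dict[tuple[str, str], list[str]]) -> dict[str, set[str]]:
    """Compare cited URLs across two runs keyed by (model, prompt)."""
    before = {url for urls in previous.values() for url in urls}
    after = {url for urls in current.values() for url in urls}
    return {
        "appeared": after - before,
        "disappeared": before - after,
        "persisted": before & after,
    }
```

Run it per model as well as in aggregate, since part of the point is seeing which models changed and which did not.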

## Where 2066 Labs helps

Use this playbook internally when the owner and evidence are already clear. Bring it to 2066 Labs when the work needs diagnosis, implementation, and adoption inside live workflows.

| Situation | How 2066 Labs helps |
| --- | --- |
| Bring us in when | - You need the first model-by-model audit run against real buyer prompts.<br>- You know the category matters but do not know which sources shape the answers.<br>- AI systems mention the company but use old positioning, weak proof, or the wrong comparison set.<br>- Sales, support, product, and customer evidence exists but is not becoming sourceable public truth.<br>- The product is hard for agents to implement, integrate, or troubleshoot.<br>- You need internal agents that turn calls, tickets, docs, and product knowledge into FAQs, battle cards, comparison evidence, and asset requests. |
| What we build with you | - Buyer-agent prompt set.<br>- Model visibility audit.<br>- Source atlas.<br>- Missing and mislearning diagnosis.<br>- Source asset backlog.<br>- Agent-usable product and docs improvements.<br>- Internal marketing agents.<br>- Measurement loop for recommendation quality and source movement. |
| Simple next step | - Bring one product, one buyer, and the prompts where AI should already recommend you.<br>- 2066 Labs will map the gaps and identify the first source assets and internal agents to build. |

## Source used

[Sequoia Training Data: From SEO to Agent-Led Growth](https://www.youtube.com/watch?v=RyTwRCKeDo4)
