simulate_buyer_persona

Read-only

Practice sales pitches against realistic buyer personas like CFOs or VPs to refine your messaging and handle objections before actual meetings.

Instructions

Practice your pitch against a realistic buyer — pick a CFO, CTO, COO, VP Sales, or VP Engineering and get their opening challenge. They'll push back the way real buyers do, so you can sharpen your story before the actual meeting.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| persona | Yes | Which buyer to simulate. Pick based on who the user is preparing to meet. | |
| stageId | No | Buyer journey stage (0=Unaware through 7=Advocating). | 3 |
| productDescription | No | What the user's product does. Infer from conversation context. | |
| productName | No | Product name. Infer from context. | |
| mode | No | `opening` = buyer's opening message; `list` = available personas. | opening |
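As a hedged sketch, this is the kind of arguments object an agent might pass to the tool, assuming the schema above (the product name and description are hypothetical, purely illustrative values):

```javascript
// Illustrative arguments for simulate_buyer_persona.
const args = {
  persona: 'CFO',                 // required; one of CFO, CTO, COO, VP Sales, VP Engineering
  stageId: 3,                     // optional; 0 (Unaware) through 7 (Advocating), default 3
  productName: 'Acme Analytics',  // hypothetical product
  productDescription: 'Usage analytics for B2B SaaS teams', // hypothetical
  mode: 'opening',                // 'opening' (default) or 'list'
};

// Minimal client-side check mirroring the schema's `required` list and persona enum.
function validateArgs(a) {
  const personas = ['CFO', 'CTO', 'COO', 'VP Sales', 'VP Engineering'];
  if (!personas.includes(a.persona)) {
    throw new Error('persona is required and must be one of the supported buyer roles');
  }
  return true;
}

console.log(validateArgs(args)); // true
```

Only `persona` is required; the remaining fields default or are inferred from conversation context, so an agent can call the tool with a single field when preparing for a specific meeting.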

Implementation Reference

  • Tool definition and input schema for `simulate_buyer_persona`:

    ```javascript
    {
      name: 'simulate_buyer_persona',
      description: 'Practice your pitch against a realistic buyer — pick a CFO, CTO, COO, VP Sales, or VP Engineering and get their opening challenge. They\'ll push back the way real buyers do, so you can sharpen your story before the actual meeting.',
      annotations: READ_ONLY,
      inputSchema: {
        type: 'object',
        properties: {
          persona: {
            type: 'string',
            enum: ['CFO', 'CTO', 'COO', 'VP Sales', 'VP Engineering'],
            description: 'Which buyer to simulate. Pick based on who the user is preparing to meet.',
          },
          stageId: {
            type: 'number',
            enum: [0, 1, 2, 3, 4, 5, 6, 7],
            description: 'Buyer journey stage (0=Unaware through 7=Advocating). Default: 3.',
          },
          productDescription: {
            type: 'string',
            description: 'What the user\'s product does. Infer from conversation context.',
          },
          productName: {
            type: 'string',
            description: 'Product name. Infer from context.',
          },
          mode: {
            type: 'string',
            enum: ['opening', 'list'],
            description: 'opening = buyer\'s opening message. list = available personas. Default: opening.',
          },
        },
        required: ['persona'],
      },
    }
    ```
  • The tool handler in `server.js` proxies all tool executions (including `simulate_buyer_persona`) to the `AndruClient` backend API:

    ```javascript
    server.setRequestHandler(
      CallToolRequestSchema,
      async (request) => {
        if (!client) {
          return {
            content: [{ type: 'text', text: JSON.stringify({ error: 'ANDRU_API_KEY not configured. Tool execution requires an API key.' }) }],
            isError: true,
          };
        }
        const { name, arguments: args } = request.params;
        try {
          return await client.callTool(name, args || {});
        } catch (error) {
          return {
            content: [{
              type: 'text',
              text: JSON.stringify({ error: error.message }),
            }],
            isError: true,
          };
        }
      }
    );
    ```
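Both error branches in the handler build the same MCP tool-result shape by hand: a `content` array with a single text item carrying a JSON-encoded error, plus `isError: true`. A small helper could factor that out; the sketch below is an assumption for illustration (the `errorResult` name is hypothetical, not part of the server code):

```javascript
// Hypothetical helper producing the MCP tool-result error shape used by the handler.
function errorResult(message) {
  return {
    content: [{ type: 'text', text: JSON.stringify({ error: message }) }],
    isError: true,
  };
}

const result = errorResult('ANDRU_API_KEY not configured. Tool execution requires an API key.');
console.log(result.isError); // true
console.log(JSON.parse(result.content[0].text).error);
```

Returning `isError: true` rather than throwing keeps the failure inside the tool-result protocol, so the calling agent can read the message and recover instead of seeing a transport-level error.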
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and openWorldHint=true, indicating safe, exploratory use. The description adds valuable behavioral context beyond annotations: it explains the tool simulates realistic buyer pushback, provides opening challenges, and helps users practice. However, it doesn't specify response format or whether the simulation is interactive or single-response.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded and concise: two sentences that efficiently convey purpose, scope, and value. Every word earns its place, with no redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simulation with multiple parameters), rich annotations, and 100% schema coverage, the description provides strong context about what the tool does and why to use it. The main gap is lack of output format information (no output schema), but the description compensates well by explaining the behavioral outcome.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all 5 parameters thoroughly. The description doesn't add parameter-specific information beyond what's in the schema. The baseline score of 3 reflects adequate but not enhanced parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('practice your pitch', 'get their opening challenge') and resources ('realistic buyer', 'CFO, CTO, COO, VP Sales, or VP Engineering'). It distinguishes from siblings by focusing on simulation and practice rather than analysis or data retrieval like most other tools on the server.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('practice your pitch', 'sharpen your story before the actual meeting'), but doesn't explicitly state when not to use it or name specific alternatives among the sibling tools. The guidance is practical but lacks explicit exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

