Glama

get_disqualification_signals

Read-only

Identify deals unlikely to close by analyzing company ICP fit, anti-patterns, and churn signals to determine whether to continue investing or walk away.

Instructions

Find out if you're wasting time on a deal that won't close. Runs the company through three layers of signal — ICP fit, anti-pattern matching, and churn patterns — and tells you whether to keep investing or walk away.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| companyName | No | Company name to check | |
| industry | No | Industry | |
| employeeCount | No | Number of employees | |
| revenue | No | Revenue range | |
| geography | No | Location | |
| techStack | No | Technologies they use | |
| dealContext | No | Current deal context (if applicable) | |
| productDescription | No | A brief description of what the user's product does and who it's for. Infer this from the conversation if the user has already described their product. If the user hasn't mentioned their product yet, ask them: "What does your product do, and who do you sell to?" before calling this tool. | |
| vertical | No | The industry the user sells into (e.g., "fintech", "healthcare", "defense"). Infer from conversation context — the user's product description, company name, or the companies they're asking about. If unclear, ask. | |
| targetRole | No | The buyer role being evaluated (e.g., "CFO", "CTO", "VP Sales"). Infer from context — often explicit in the user's question. If not mentioned, default to the most senior relevant role for their vertical. | |
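To make the schema concrete, here is an illustrative arguments payload for `get_disqualification_signals`. The company details and values below are hypothetical, chosen only to show the expected shape of each field:

```javascript
// Hypothetical example arguments — the company and deal data are illustrative,
// not taken from the source.
const args = {
  companyName: 'Acme Robotics',
  industry: 'Manufacturing',
  employeeCount: 450,
  revenue: '$50M-$100M',
  geography: 'United States',
  techStack: ['Salesforce', 'HubSpot'],
  dealContext: {
    dealValue: 120000,
    stage: 'Discovery',
    daysInPipeline: 45,
    championIdentified: false,
  },
  productDescription: 'Revenue intelligence platform for B2B sales teams',
  vertical: 'manufacturing',
  targetRole: 'VP Sales',
};
```

Every field is optional per the schema, so a caller can start with as little as `companyName` and `industry` and add deal context as it becomes known.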

Implementation Reference

  • Definition and schema for the get_disqualification_signals tool.
      name: 'get_disqualification_signals',
      description: 'Find out if you\'re wasting time on a deal that won\'t close. Runs the company through three layers of signal — ICP fit, anti-pattern matching, and churn patterns — and tells you whether to keep investing or walk away.',
      annotations: READ_ONLY,
      inputSchema: {
        type: 'object',
        properties: {
          companyName: { type: 'string', description: 'Company name to check' },
          industry: { type: 'string', description: 'Industry' },
          employeeCount: { type: 'number', description: 'Number of employees' },
          revenue: { type: 'string', description: 'Revenue range' },
          geography: { type: 'string', description: 'Location' },
          techStack: { type: 'array', items: { type: 'string' }, description: 'Technologies they use' },
          dealContext: {
            type: 'object',
            properties: {
              dealValue: { type: 'number', description: 'Deal value' },
              stage: { type: 'string', description: 'Current deal stage' },
              daysInPipeline: { type: 'number', description: 'Days since deal entered pipeline' },
              championIdentified: { type: 'boolean', description: 'Has a champion been identified?' },
            },
            description: 'Current deal context (if applicable)',
          },
          ...COLD_START_PARAMS,
        },
      },
    },
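The schema above spreads a shared `...COLD_START_PARAMS` object into its properties. The actual definition is not shown in the source, but based on the three cold-start parameters documented in the input schema table (`productDescription`, `vertical`, `targetRole`), it plausibly looks something like this sketch:

```javascript
// Hypothetical reconstruction of COLD_START_PARAMS from the documented schema
// table — the real object in the server may differ.
const COLD_START_PARAMS = {
  productDescription: {
    type: 'string',
    description:
      "A brief description of what the user's product does and who it's for. " +
      'Infer this from the conversation if the user has already described their product.',
  },
  vertical: {
    type: 'string',
    description:
      'The industry the user sells into (e.g., "fintech", "healthcare", "defense"). ' +
      'Infer from conversation context. If unclear, ask.',
  },
  targetRole: {
    type: 'string',
    description:
      'The buyer role being evaluated (e.g., "CFO", "CTO", "VP Sales"). ' +
      'Infer from context; default to the most senior relevant role for the vertical.',
  },
};
```

Factoring these into a shared constant lets every tool in the server accept the same cold-start context without repeating the schema in each definition.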
  • The request handler for all tools, which proxies tool execution to the Andru backend API via the AndruClient.
    server.setRequestHandler(
      CallToolRequestSchema,
      async (request) => {
        if (!client) {
          return {
            content: [{ type: 'text', text: JSON.stringify({ error: 'ANDRU_API_KEY not configured. Tool execution requires an API key.' }) }],
            isError: true,
          };
        }
        const { name, arguments: args } = request.params;
        try {
          return await client.callTool(name, args || {});
        } catch (error) {
          return {
            content: [{
              type: 'text',
              text: JSON.stringify({ error: error.message }),
            }],
            isError: true,
          };
        }
      }
    );
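Both the missing-key branch and the catch branch above return the same envelope: a `content` array with one JSON-encoded text item, plus `isError: true`. A consumer of this handler could unwrap that envelope as follows (the `unwrap` helper is illustrative, not part of the source):

```javascript
// Hypothetical consumer-side helper for the result shape produced by the
// handler above: parse the JSON text item, and surface errors as exceptions.
function unwrap(result) {
  const payload = JSON.parse(result.content[0].text);
  if (result.isError) {
    throw new Error(payload.error);
  }
  return payload;
}
```

For example, the no-API-key response would throw `Error: ANDRU_API_KEY not configured. Tool execution requires an API key.`, while a successful call would return the parsed tool output.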
  • The underlying API client method that makes the actual HTTP POST request to the backend to execute the tool logic.
    async callTool(name, args) {
      return this.post('/api/mcp/tools/call', { tool: name, arguments: args });
    }
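The `post` helper that `callTool` relies on is not shown in the source. A minimal sketch, assuming bearer-token auth with the `ANDRU_API_KEY` and a JSON request body (the base URL, header names, and `buildRequest` helper are all assumptions):

```javascript
// Hypothetical sketch of the AndruClient around the documented callTool method.
// The base URL and Authorization header are assumptions, not confirmed by the source.
class AndruClient {
  constructor(apiKey, baseUrl) {
    this.apiKey = apiKey;
    this.baseUrl = baseUrl;
  }

  // Build the request separately from sending it, so it can be inspected/tested.
  buildRequest(path, body) {
    return {
      url: `${this.baseUrl}${path}`,
      options: {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${this.apiKey}`,
        },
        body: JSON.stringify(body),
      },
    };
  }

  async post(path, body) {
    const { url, options } = this.buildRequest(path, body);
    const res = await fetch(url, options); // global fetch, Node 18+
    if (!res.ok) {
      throw new Error(`Andru API error: ${res.status}`);
    }
    return res.json();
  }

  // As documented in the source: proxy tool execution to the backend.
  async callTool(name, args) {
    return this.post('/api/mcp/tools/call', { tool: name, arguments: args });
  }
}
```

Because `post` throws on non-2xx responses, backend failures propagate up to the request handler's `catch` block and come back to the agent as the JSON error envelope shown earlier.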
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=true, indicating a safe, exploratory operation. The description adds behavioral context by explaining the three analysis layers (ICP fit, anti-pattern matching, churn patterns) and the actionable outcome ('tells you whether to keep investing or walk away'), which goes beyond annotations. However, it does not disclose details like rate limits, authentication needs, or specific output format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by specific analysis details and the outcome. Its two sentences are efficient, each earning its place by clarifying the tool's function and value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (10 parameters, nested objects) and lack of output schema, the description is reasonably complete for a read-only analysis tool. It explains the analysis method and decision outcome, but does not detail the return format (e.g., score, reasons, confidence) or error handling, which could be important given the open-world hint.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 10 parameters. The description does not add any parameter-specific semantics beyond what the schema provides, such as explaining how 'techStack' influences disqualification or how 'dealContext' affects the analysis. It only implies general usage of company and deal data.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('find out', 'runs', 'tells you') and resources ('company', 'three layers of signal'), and distinguishes it from siblings by focusing on disqualification signals rather than scoring, classification, or prospecting. It explicitly answers 'whether to keep investing or walk away', which is unique among the listed tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('wasting time on a deal that won't close'), but does not explicitly state when not to use it or name alternatives among siblings. It implies usage for deal evaluation but lacks explicit exclusions or comparisons to tools like 'classify_opportunity' or 'get_icp_fit_score'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
