
get_evaluation_criteria

Read-only

Score how well your product matches buyer needs across pain coverage, outcome clarity, and capability fit to improve sales alignment and win rates.

Instructions

Scores how well you actually match what this buyer needs — across pain coverage, outcome clarity, capability fit, and 3 more dimensions. Returns 0-100 per dimension plus overall alignment score.

Input Schema

All parameters are optional, and none has a default value.

  • buyerPainPoints: Pain points the buyer has expressed or you expect them to have
  • buyerIndustry: Buyer's industry
  • buyerSize: Buyer company size
  • requiredCapabilities: Capabilities the buyer needs from a solution
  • productDescription: A brief description of what the user's product does and who it's for. Infer this from the conversation if the user has already described their product. If the user hasn't mentioned their product yet, ask them: "What does your product do, and who do you sell to?" before calling this tool.
  • vertical: The industry the user sells into (e.g., "fintech", "healthcare", "defense"). Infer from conversation context — the user's product description, company name, or the companies they're asking about. If unclear, ask.
  • targetRole: The buyer role being evaluated (e.g., "CFO", "CTO", "VP Sales"). Infer from context — often explicit in the user's question. If not mentioned, default to the most senior relevant role for their vertical.
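
Since every field is optional, callers can supply as much or as little context as they have. A hypothetical arguments object with illustrative values might look like this:

    // Illustrative arguments only; every field is optional per the schema.
    {
      buyerPainPoints: ['manual pipeline reporting', 'long sales cycles'],
      buyerIndustry: 'fintech',
      buyerSize: '200-500 employees',
      requiredCapabilities: ['CRM integration', 'revenue forecasting'],
      productDescription: 'Revenue-intelligence platform for B2B SaaS sales teams',
      vertical: 'fintech',
      targetRole: 'CFO',
    }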

Implementation Reference

  • src/catalog.js:242-260 (registration)
The tool `get_evaluation_criteria` is defined and registered in `src/catalog.js`. At execution time, the tool name and arguments are proxied to a backend API via the `callTool` method in `src/client.js`, which is invoked by `src/server.js`.
    {
      name: 'get_evaluation_criteria',
      description: 'Scores how well you actually match what this buyer needs — across pain coverage, outcome clarity, capability fit, and 3 more dimensions. Returns 0-100 per dimension plus overall alignment score.',
      annotations: READ_ONLY,
      inputSchema: {
        type: 'object',
        properties: {
          buyerPainPoints: {
            type: 'array',
            items: { type: 'string' },
            description: 'Pain points the buyer has expressed or you expect them to have',
          },
          buyerIndustry: { type: 'string', description: 'Buyer\'s industry' },
          buyerSize: { type: 'string', description: 'Buyer company size' },
          requiredCapabilities: {
            type: 'array',
            items: { type: 'string' },
            description: 'Capabilities the buyer needs from a solution',
          },
          // productDescription, vertical, and targetRole follow here;
          // see the Input Schema section above for their descriptions.
        },
      },
    },
  • The tool execution handler in `src/server.js` receives the tool name (`get_evaluation_criteria`) and arguments, and delegates the execution to the `AndruClient.callTool` method.
    const { name, arguments: args } = request.params;
    try {
      return await client.callTool(name, args || {});
    } catch (error) {
      // error handling elided in this excerpt
    }
  • The actual logic for executing the tool is a proxy call to the Andru backend API (`/api/mcp/tools/call`), which handles the execution on the server side.
    async callTool(name, args) {
      return this.post('/api/mcp/tools/call', { tool: name, arguments: args });
    }
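
For end-to-end context, here is a minimal sketch of invoking this tool from an MCP client. It assumes the official @modelcontextprotocol/sdk and a stdio transport; the launch command and argument values are placeholders:

    import { Client } from '@modelcontextprotocol/sdk/client/index.js';
    import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

    // Placeholder command for launching the andru-revenue-intelligence server.
    const transport = new StdioClientTransport({
      command: 'node',
      args: ['src/server.js'],
    });

    const client = new Client({ name: 'example-client', version: '1.0.0' });
    await client.connect(transport);

    // The server's handler forwards this call to AndruClient.callTool,
    // which proxies it to POST /api/mcp/tools/call on the Andru backend.
    const result = await client.callTool({
      name: 'get_evaluation_criteria',
      arguments: {
        buyerIndustry: 'healthcare',
        targetRole: 'CTO',
      },
    });

    console.log(result.content);
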
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and openWorldHint=true, indicating a safe read operation with open-ended inputs. The description adds value by specifying the scoring dimensions (pain coverage, outcome clarity, capability fit, etc.) and output format (0-100 per dimension plus overall score), which aren't covered by annotations. However, it doesn't disclose rate limits, auth needs, or other behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
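
The `READ_ONLY` constant referenced in the registration above is not shown on this page; based on the annotations the review describes, it presumably expands to something like this sketch:

    // Assumed shape of the shared annotations constant, inferred from the
    // review's note that readOnlyHint and openWorldHint are both true.
    const READ_ONLY = {
      readOnlyHint: true,
      openWorldHint: true,
    };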

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—two sentences with zero waste. It front-loads the core purpose and immediately states the output format. Every word earns its place, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, no output schema) and rich annotations, the description is minimally complete. It explains what the tool does and the output format, but lacks details on error handling, example usage, or how results should be interpreted. With no output schema, more guidance on return values would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
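
Because the tool publishes no output schema, an agent can only infer the response shape from the description's promise of a 0-100 score per dimension plus an overall alignment score. A purely hypothetical response, with illustrative field names, might look like this:

    // Hypothetical response shape; field names are illustrative, not documented.
    {
      overallAlignment: 72,
      dimensions: {
        painCoverage: 80,    // 0-100
        outcomeClarity: 65,  // 0-100
        capabilityFit: 71,   // 0-100
        // ...plus the three dimensions the description does not name
      },
    }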

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are well-documented in the schema. The description doesn't add any parameter-specific semantics beyond implying that inputs relate to buyer needs and product context. It mentions dimensions like 'pain coverage' and 'capability fit', which loosely map to parameters like 'buyerPainPoints' and 'requiredCapabilities', but no additional syntax or format details are provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: scoring how well something matches buyer needs across specific dimensions (pain coverage, outcome clarity, capability fit, and 3 more). It uses specific verbs ('Scores', 'Returns') and identifies the resource (evaluation criteria). However, it doesn't explicitly differentiate from sibling tools like 'get_icp_fit_score' or 'batch_fit_score', which appear related.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools like 'get_icp_fit_score', 'classify_opportunity', and 'get_competitive_positioning', there's no indication of context, prerequisites, or exclusions. Usage is implied only through the tool's name and description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
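
One way to close this gap would be to fold explicit usage guidance into the `description` field at registration time. The sketch below is illustrative only; the contrasts with sibling tools are inferred from their names, not from their actual behavior:

    // Illustrative revision, not the shipped description.
    description:
      'Scores how well your product matches a specific buyer profile across ' +
      'six dimensions (pain coverage, outcome clarity, capability fit, and ' +
      'three more), returning 0-100 per dimension plus an overall alignment ' +
      'score. Use this when evaluating fit against one concrete buyer; for ' +
      'scoring a company against your ICP, prefer get_icp_fit_score, and for ' +
      'head-to-head comparisons, prefer get_competitive_positioning.',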
