get_evaluation_criteria
Score how well your product matches buyer needs across pain coverage, outcome clarity, and capability fit to improve sales alignment and win rates.
Instructions
Scores how well you actually match what this buyer needs — across pain coverage, outcome clarity, capability fit, and 3 more dimensions. Returns 0-100 per dimension plus overall alignment score.
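The docs name three of the six dimensions and state that each is scored 0-100 with an overall alignment score on top. A hypothetical response shape under those constraints (the backend's actual payload is not shown here, and the aggregation rule below is an assumption, not documented behavior):

```javascript
// Hypothetical response shape — only the three named dimensions appear in the docs.
const scores = {
  painCoverage: 82,
  outcomeClarity: 74,
  capabilityFit: 68,
  // …three more dimensions, not named in the documentation
};

// One plausible aggregation (an assumption): overall = mean of dimension scores.
const values = Object.values(scores);
const overall = Math.round(values.reduce((a, b) => a + b, 0) / values.length);
console.log(overall); // → 75
```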
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| buyerPainPoints | No | Pain points the buyer has expressed or you expect them to have | |
| buyerIndustry | No | Buyer's industry | |
| buyerSize | No | Buyer company size | |
| requiredCapabilities | No | Capabilities the buyer needs from a solution | |
| productDescription | No | A brief description of what the user's product does and who it's for. Infer this from the conversation if the user has already described their product. If the user hasn't mentioned their product yet, ask them: "What does your product do, and who do you sell to?" before calling this tool. | |
| vertical | No | The industry the user sells into (e.g., "fintech", "healthcare", "defense"). Infer from conversation context — the user's product description, company name, or the companies they're asking about. If unclear, ask. | |
| targetRole | No | The buyer role being evaluated (e.g., "CFO", "CTO", "VP Sales"). Infer from context — often explicit in the user's question. If not mentioned, default to the most senior relevant role for their vertical. | |
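Assuming the schema above, an arguments object for this tool might look like the following (all values are illustrative; every field is optional per the table):

```javascript
// Illustrative arguments for get_evaluation_criteria — field names come from
// the input schema above, values are made up for the example.
const args = {
  buyerPainPoints: ['Manual forecasting eats rep time', 'No visibility into deal risk'],
  buyerIndustry: 'fintech',
  buyerSize: '200-500 employees',
  requiredCapabilities: ['CRM integration', 'Role-based scoring'],
  productDescription: 'Revenue-intelligence platform for B2B sales teams',
  vertical: 'fintech',
  targetRole: 'CFO',
};

// Shape check mirroring the schema: string arrays for the list fields,
// plain strings for the rest.
const isStringArray = (v) => Array.isArray(v) && v.every((s) => typeof s === 'string');
console.log(isStringArray(args.buyerPainPoints) && typeof args.targetRole === 'string'); // → true
```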
Implementation Reference
- src/catalog.js:242-260 (registration): the tool `get_evaluation_criteria` is defined and registered in `catalog.js`. Execution is proxied, by tool name and arguments, to a backend API via the `callTool` method in `src/client.js` (invoked by `src/server.js`).

```js
{
  name: 'get_evaluation_criteria',
  description:
    'Scores how well you actually match what this buyer needs — across pain coverage, outcome clarity, capability fit, and 3 more dimensions. Returns 0-100 per dimension plus overall alignment score.',
  annotations: READ_ONLY,
  inputSchema: {
    type: 'object',
    properties: {
      buyerPainPoints: {
        type: 'array',
        items: { type: 'string' },
        description: 'Pain points the buyer has expressed or you expect them to have',
      },
      buyerIndustry: { type: 'string', description: "Buyer's industry" },
      buyerSize: { type: 'string', description: 'Buyer company size' },
      requiredCapabilities: {
        type: 'array',
        items: { type: 'string' },
        description: 'Capabilities the buyer needs from a solution',
      },
      // …remaining properties (productDescription, vertical, targetRole) and
      // closing braces truncated in the source excerpt
```

- src/server.js:56-58 (handler): the tool execution handler receives the tool name (`get_evaluation_criteria`) and arguments, and delegates execution to `AndruClient.callTool`.

```js
const { name, arguments: args } = request.params;
try {
  return await client.callTool(name, args || {});
// …catch block truncated in the source excerpt
```

- src/client.js:36-38 (handler): the actual logic is a proxy call to the Andru backend API (`/api/mcp/tools/call`), which executes the tool server-side.

```js
async callTool(name, args) {
  return this.post('/api/mcp/tools/call', { tool: name, arguments: args });
}
```
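Putting the pieces together, the proxy pattern can be sketched as below. This is not the actual `AndruClient` implementation: the constructor, the base URL, and the `fetch`-based `post` wrapper are assumptions; only `callTool` and the `/api/mcp/tools/call` endpoint come from the excerpts above.

```javascript
// Minimal sketch of the client-side proxy, under stated assumptions.
class AndruClient {
  constructor(baseUrl) {
    this.baseUrl = baseUrl; // assumption: client is configured with a base URL
  }

  // Assumed fetch wrapper — the real client may add auth headers, retries, etc.
  async post(path, body) {
    const res = await fetch(this.baseUrl + path, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
    });
    return res.json();
  }

  // Mirrors the handler shown above: every tool call is one POST to the backend,
  // so adding a new tool requires no client-side changes.
  async callTool(name, args) {
    return this.post('/api/mcp/tools/call', { tool: name, arguments: args });
  }
}
```

The design keeps the MCP server thin: `src/server.js` only unpacks `request.params` and forwards them, while scoring logic lives behind the backend API.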