# get_performance_stats
Retrieve performance statistics comparing the Chain of Draft (CoD) and Chain of Thought (CoT) approaches, reporting average token usage, response time, accuracy, and sample size per domain.
## Instructions
Get performance statistics for CoD vs CoT approaches
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| domain | No | Filter results to a specific domain | `None` |
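
As a quick illustration, here is a minimal sketch of calling this tool from an MCP client with the Python SDK. The stdio launch command (`python server.py`) and the `"math"` domain value are assumptions used only for illustration, not details taken from this project.

```python
# Minimal sketch (assumptions: MCP Python SDK, stdio transport,
# server launched as "python server.py", illustrative "math" domain).
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Omit "domain" entirely to get stats across all domains.
            result = await session.call_tool(
                "get_performance_stats", {"domain": "math"}
            )
            print(result.content[0].text)

asyncio.run(main())
```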
## Implementation Reference
- `server.py:142-169` (handler): Primary Python handler for the MCP `get_performance_stats` tool. Uses the `@app.tool()` decorator for registration and execution. Fetches stats from the AnalyticsService and formats the output string.

  ```python
  @app.tool()
  async def get_performance_stats(
      domain: str = None
  ) -> str:
      """Get performance statistics for CoD vs CoT approaches.

      Args:
          domain: Filter for specific domain (optional)
      """
      stats = await analytics.get_performance_by_domain(domain)

      result = "Performance Comparison (CoD vs CoT):\n\n"

      if not stats:
          return "No performance data available yet."

      for stat in stats:
          result += f"Domain: {stat['domain']}\n"
          result += f"Approach: {stat['approach']}\n"
          result += f"Average tokens: {stat['avg_tokens']:.1f}\n"
          result += f"Average time: {stat['avg_time_ms']:.1f}ms\n"
          if stat['accuracy'] is not None:
              result += f"Accuracy: {stat['accuracy'] * 100:.1f}%\n"
          result += f"Sample size: {stat['count']}\n\n"

      return result
  ```
- `index.js:684-711` (handler): JavaScript handler in the `CallToolRequestSchema` request handler. Dispatches `get_performance_stats` tool calls, computes stats from the in-memory `analyticsDb`, and returns a formatted text response.

  ```javascript
  if (name === "get_performance_stats") {
    const stats = analyticsDb.getPerformanceByDomain(args.domain);
    let result = "Performance Comparison (CoD vs CoT):\n\n";

    if (!stats || stats.length === 0) {
      result = "No performance data available yet.";
    } else {
      for (const stat of stats) {
        result += `Domain: ${stat.domain}\n`;
        result += `Approach: ${stat.approach}\n`;
        result += `Average tokens: ${stat.avg_tokens.toFixed(1)}\n`;
        result += `Average time: ${stat.avg_time_ms.toFixed(1)}ms\n`;
        if (stat.accuracy !== null) {
          result += `Accuracy: ${(stat.accuracy * 100).toFixed(1)}%\n`;
        }
        result += `Sample size: ${stat.count}\n\n`;
      }
    }

    return { content: [{ type: "text", text: result }] };
  ```
- `index.js:528-540` (schema): Explicit input schema for the `get_performance_stats` tool in the JavaScript MCP server implementation.

  ```javascript
  const PERFORMANCE_TOOL = {
    name: "get_performance_stats",
    description: "Get performance statistics for CoD vs CoT approaches",
    inputSchema: {
      type: "object",
      properties: {
        domain: {
          type: "string",
          description: "Filter for specific domain"
        }
      }
    }
  };
  ```
- `index.js:581-591` (registration): Registration of the `get_performance_stats` tool (as `PERFORMANCE_TOOL`) in the list of tools advertised via the `ListToolsRequestSchema` handler.

  ```javascript
  server.setRequestHandler(ListToolsRequestSchema, async () => ({
    tools: [
      CHAIN_OF_DRAFT_TOOL,
      MATH_TOOL,
      CODE_TOOL,
      LOGIC_TOOL,
      PERFORMANCE_TOOL,
      TOKEN_TOOL,
      COMPLEXITY_TOOL
    ],
  }));
  ```
- `client.py:310-312` (helper): Helper method in the `ChainOfDraftClient` class that proxies performance-stats retrieval from its analytics service, used internally by the server tool.

  ```python
  async def get_performance_stats(self, domain=None):
      """Get performance statistics for CoD vs CoT approaches."""
      return await self.analytics.get_performance_by_domain(domain)
  ```
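
For orientation, here is a hypothetical sketch of the kind of per-domain aggregation that `get_performance_by_domain` could perform. The AnalyticsService implementation is not shown in this section, so the input record layout (`tokens`, `time_ms`, `correct`) and the function name `aggregate_performance` are assumptions; only the output keys (`domain`, `approach`, `avg_tokens`, `avg_time_ms`, `accuracy`, `count`) are taken from what the handlers above read.

```python
# Hypothetical sketch, not the project's AnalyticsService: groups recorded
# runs by (domain, approach) and averages the metrics the handlers display.
from collections import defaultdict

def aggregate_performance(records, domain=None):
    """records: iterable of dicts with 'domain', 'approach', 'tokens',
    'time_ms', and an optional 'correct' flag (assumed layout)."""
    groups = defaultdict(list)
    for r in records:
        if domain is None or r["domain"] == domain:
            groups[(r["domain"], r["approach"])].append(r)

    stats = []
    for (dom, approach), runs in groups.items():
        graded = [r for r in runs if r.get("correct") is not None]
        stats.append({
            "domain": dom,
            "approach": approach,
            "avg_tokens": sum(r["tokens"] for r in runs) / len(runs),
            "avg_time_ms": sum(r["time_ms"] for r in runs) / len(runs),
            # Accuracy stays None when no graded runs exist, matching the
            # None check in both handlers above.
            "accuracy": (sum(r["correct"] for r in graded) / len(graded)
                         if graded else None),
            "count": len(runs),
        })
    return stats
```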