batch_fit_score

Read-only

Score and rank up to 50 companies simultaneously by assigning tiers and scores, providing individual results and aggregate statistics for revenue intelligence analysis.

Instructions

Score up to 50 companies at once — gives each a tier and score so you can rank a list in under a second. Returns individual scores plus aggregate statistics.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| companies | Yes | Companies to score (max 50) | |

Implementation Reference

  • The `callTool` method in `AndruClient` acts as the generic handler that proxies the tool execution (including `batch_fit_score`) to the remote Andru API backend.
    async callTool(name, args) {
      return this.post('/api/mcp/tools/call', { tool: name, arguments: args });
    }
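As a sketch of the proxy pattern above, the snippet below stubs out the `post` method (the real client sends an HTTP request to the Andru API backend; the stub here is an assumption for illustration only) to show that `callTool` simply forwards the tool name and arguments to a single endpoint:

```javascript
// Minimal sketch of the AndruClient proxy pattern. The real `post`
// performs an HTTP request; it is stubbed here so the forwarding
// behavior can be observed in isolation (assumption for illustration).
class StubClient {
  async post(path, body) {
    // Echo back what would be sent to the backend.
    return { path, body };
  }
  async callTool(name, args) {
    return this.post('/api/mcp/tools/call', { tool: name, arguments: args });
  }
}

const client = new StubClient();
client
  .callTool('batch_fit_score', {
    companies: [{ companyName: 'Acme', domain: 'acme.com' }],
  })
  .then((req) => {
    console.log(req.path);      // '/api/mcp/tools/call'
    console.log(req.body.tool); // 'batch_fit_score'
  });
```

Because every tool goes through the same endpoint, adding a new tool requires only a catalog entry, not a new handler.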
  • The input schema and definition for the `batch_fit_score` tool are registered in the static `tools` catalog.
    {
      name: 'batch_fit_score',
      description: 'Score up to 50 companies at once — gives each a tier and score so you can rank a list in under a second. Returns individual scores plus aggregate statistics.',
      annotations: READ_ONLY,
      inputSchema: {
        type: 'object',
        properties: {
          companies: {
            type: 'array',
            items: {
              type: 'object',
              properties: {
                companyName: { type: 'string', description: 'Company name' },
                domain: { type: 'string', description: 'Company domain' },
                industry: { type: 'string', description: 'Industry vertical' },
                employeeCount: { type: 'number', description: 'Number of employees' },
                revenue: { type: 'string', description: 'Revenue range' },
                geography: { type: 'string', description: 'HQ location' },
                techStack: { type: 'array', items: { type: 'string' }, description: 'Technologies used' },
                painPoints: { type: 'array', items: { type: 'string' }, description: 'Known pain points' },
                triggerEvents: { type: 'array', items: { type: 'string' }, description: 'Recent trigger events' },
              },
            },
            description: 'Companies to score (max 50)',
          },
        },
        required: ['companies'],
      },
    },
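Since the schema marks `companies` as required and caps it at 50 entries, a caller can pre-validate a payload before invoking the tool. The sketch below shows one way to do that; the helper name `validateBatchInput` is hypothetical and not part of the server:

```javascript
// Hypothetical client-side pre-check mirroring the batch_fit_score
// constraints: `companies` is required, must be an array, and is
// capped at 50 entries per call.
const MAX_BATCH_SIZE = 50;

function validateBatchInput(args) {
  if (!args || !Array.isArray(args.companies)) {
    return { ok: false, error: "'companies' is required and must be an array" };
  }
  if (args.companies.length > MAX_BATCH_SIZE) {
    return { ok: false, error: `max ${MAX_BATCH_SIZE} companies per call` };
  }
  return { ok: true };
}

console.log(validateBatchInput({ companies: [{ companyName: 'Acme' }] }).ok); // true
console.log(validateBatchInput({ companies: new Array(51).fill({}) }).ok);    // false
```

Validating locally avoids a round trip to the backend for payloads that would be rejected anyway.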
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=true, indicating safe read operations with flexible inputs. The description adds valuable behavioral context beyond annotations: the 50-company limit, performance expectation ('under a second'), and output structure ('individual scores plus aggregate statistics'). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. First sentence establishes purpose and scope, second sentence describes output. Perfectly front-loaded with essential information, no redundant phrasing.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only batch scoring tool with good annotations and full schema coverage, the description provides adequate context about purpose, constraints, and output. The main gap is lack of output schema, but the description compensates by describing return values ('individual scores plus aggregate statistics'). Could be more specific about scoring methodology.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents the single parameter and its nested structure. The description adds minimal parameter semantics beyond the schema, only mentioning the 50-company limit which is already in the schema description. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Score up to 50 companies at once'), the resource ('companies'), and the output ('gives each a tier and score'). It distinguishes from sibling tools like 'get_icp_fit_score' by emphasizing batch processing and ranking capabilities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('to rank a list in under a second') and implies that it is intended for batch processing rather than single-company scoring. However, it does not explicitly state when NOT to use it, nor does it name specific alternatives among the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
