Glama
Ownership verified

Server Details

AI-powered startup due diligence. Screen any startup across 7 IMPACT-X dimensions, get a Sieve Score (0-140) with evidence-typed findings and a clear meeting recommendation. Built for VCs, solo GPs, and angel investors.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.


Available Tools

8 tools
sieve_dataroom (read-only)

List all documents in a deal's data room.

Shows what files and content have been uploaded for a deal, along with their processing status.

Args: deal_id: The deal ID (from sieve_deals or sieve_dataroom_add).
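As an illustrative sketch (the argument name comes from the docs above, but the request shape and the placeholder ID are assumptions, not something this listing specifies), a call to this tool might look like:

```python
# Illustrative tool-call arguments for sieve_dataroom.
# "deal_abc123" is a placeholder; real deal IDs come from
# sieve_deals or a prior sieve_dataroom_add call.
request = {
    "name": "sieve_dataroom",
    "arguments": {"deal_id": "deal_abc123"},
}
```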

Parameters (JSON Schema)

  deal_id (required)

Output Schema

  No output parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish the read-only, non-destructive safety profile. The description adds valuable behavioral context beyond annotations by specifying that the tool reveals 'processing status' of documents, which hints at the statefulness of the returned data without needing to detail the output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is optimally structured with the core action front-loaded, supporting details in the second sentence, and parameter documentation in a standard Args block. No sentences are wasted; the length is appropriate for a single-parameter listing tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (not shown but indicated in context signals), the description appropriately avoids duplicating return value documentation. It sufficiently covers the single parameter and mentions 'processing status' to hint at output content. A minor gap is lack of mention of pagination or empty result handling, but this is not critical for the score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by documenting the sole parameter in the Args section: it defines deal_id and specifies valid sources ('from sieve_deals or sieve_dataroom_add'), providing necessary semantic context for the agent to source the parameter correctly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('List') and resource ('documents in a deal's data room'), clearly contrasting with sibling 'sieve_dataroom_add'. The second sentence clarifies scope by mentioning 'processing status', leaving no ambiguity about the tool's function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The Args section references siblings 'sieve_deals' and 'sieve_dataroom_add' as sources for the deal_id, providing implicit context about when to use this tool (after obtaining a deal ID). However, it lacks explicit 'when not to use' guidance or direct comparison to the add sibling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sieve_dataroom_add

Add a document to a deal's data room. Creates the deal if needed.

This is the primary way to get documents into Sieve for screening. Upload a pitch deck, financials, or any document -- then call sieve_screen to analyze everything in the data room.

Provide company_name to create a new deal (or find existing), or deal_id to add to an existing deal.

Provide exactly one content source: file_path (local file), text (raw text/markdown), or url (fetch from URL).

Args:
  title: Document title (e.g. "Pitch Deck Q1 2026").
  company_name: Company name -- creates deal if new, finds existing if not.
  deal_id: Add to an existing deal (from sieve_deals or previous sieve_dataroom_add).
  website_url: Company website URL (used when creating a new deal).
  document_type: Type: 'pitch_deck', 'financials', 'legal', or 'other'.
  file_path: Path to a local file (PDF, DOCX, XLSX). The tool reads and uploads it.
  text: Raw text or markdown content (alternative to file).
  url: URL to fetch document from (alternative to file).
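The exactly-one-content-source and company_name-or-deal_id constraints can be checked client-side before calling the tool. A minimal sketch (the helper name is hypothetical; the server enforces its own rules regardless):

```python
def validate_dataroom_add(args: dict) -> None:
    """Client-side check of the documented sieve_dataroom_add constraints
    (illustrative only; not part of the server's API)."""
    # Exactly one content source: file_path, text, or url.
    sources = [k for k in ("file_path", "text", "url") if args.get(k)]
    if len(sources) != 1:
        raise ValueError("provide exactly one of file_path, text, or url")
    # Either company_name (create/find a deal) or deal_id (existing deal).
    if not (args.get("company_name") or args.get("deal_id")):
        raise ValueError("provide company_name or deal_id")
    if not args.get("title"):
        raise ValueError("title is required")

validate_dataroom_add({
    "title": "Pitch Deck Q1 2026",
    "company_name": "Acme AI",
    "file_path": "/tmp/acme-deck.pdf",
})  # passes: one source, company_name present
```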

Parameters (JSON Schema)

  url (optional)
  text (optional)
  title (required)
  deal_id (optional)
  file_path (optional)
  website_url (optional)
  company_name (optional)
  document_type (optional, default: other)

Output Schema

  No output parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate mutation (readOnly=false) and external interaction (openWorld=true). Description adds critical behavioral context: local file reading (PDF/DOCX/XLSX), URL fetching, deal auto-creation side effects, and document upload mechanics. Could mention error handling for invalid file types.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear hierarchy: purpose → workflow → constraints → parameter details. Front-loaded with key operational context. Args section is necessarily detailed given zero schema coverage; no redundant sentences despite length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a complex 8-parameter tool with XOR constraints and conditional logic. Covers deal creation/lookup logic, content source alternatives, and sibling tool integration (sieve_screen). Output schema exists, so return value documentation isn't required.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, description fully compensates by documenting all 8 parameters in Args section. Provides concrete examples ('Pitch Deck Q1 2026'), enumerates document_type values, specifies supported file formats, and explains conditional relationships between parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action 'Add a document to a deal's data room' with side effect 'Creates the deal if needed.' Distinguishes from sibling sieve_screen by clarifying this ingests documents while sieve_screen analyzes them, and positions it as 'the primary way to get documents into Sieve.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit workflow guidance ('then call sieve_screen to analyze'), mutual exclusion constraints ('Provide exactly one content source'), and conditional logic ('Provide company_name... or deal_id'). Clear when-to-use versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sieve_deals (read-only)

List deals in your Sieve pipeline.

Search by company name or list all deals. Returns deal metadata including Sieve scores for screened deals.

Args:
  search: Search by company name (partial match). Empty returns all.
  limit: Maximum results to return (1-100, default 20).
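A small argument-builder sketch for this tool (the helper is illustrative, not part of the server API; it clamps limit to the documented 1-100 range defensively on the client side):

```python
def deals_args(search: str = "", limit: int = 20) -> dict:
    """Build sieve_deals arguments (illustrative helper).
    Empty search returns all deals; limit is documented as 1-100
    with a default of 20, so clamp out-of-range values."""
    return {"search": search, "limit": max(1, min(limit, 100))}

deals_args("acme")        # search for "acme" with the default limit
deals_args(limit=500)     # clamped down to the documented maximum
```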

Parameters (JSON Schema)

  limit (optional)
  search (optional)

Output Schema

  No output parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=true. The description adds valuable behavioral context not in annotations: it specifies 'partial match' for search functionality, documents the limit range (1-100), and discloses return content ('deal metadata including Sieve scores'). It does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with three front-loaded sentences covering purpose, usage, and return values, followed by an Args block. While the docstring-style 'Args:' formatting is slightly unconventional for MCP, every sentence earns its place without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists, the description appropriately limits return value explanation to high-level context ('Sieve scores for screened deals'). With two undocumented parameters, the description achieves completeness by fully documenting both. It could mention pagination behavior for the limit parameter to achieve a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description carries full documentation burden. The Args section comprehensively defines both parameters: search includes partial match behavior and empty-string semantics, while limit includes valid range and default value. This fully compensates for the schema's lack of property descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with 'List deals in your Sieve pipeline,' providing a specific verb (List) and resource (Sieve pipeline deals). It distinguishes from siblings by referencing Sieve-specific concepts like 'pipeline' and 'Sieve scores,' clearly positioning it as the retrieval tool for screened deals.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides basic usage patterns ('Search by company name or list all deals'), explaining how to use the search parameter versus leaving it empty. However, it lacks explicit guidance on when to use this versus siblings like sieve_results or sieve_screen, and omits prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sieve_memo

Get or generate an investment memo for a deal.

If generate=false (default), retrieves the existing memo. If generate=true, creates a new memo (~15-30 seconds). Requires a completed screen.

Args:
  deal_id: The deal ID (from sieve_deals or sieve_screen).
  generate: Set to true to generate a new memo.
  memo_type: 'internal' (IC-facing, full risks) or 'external' (founder-facing). Default: internal.
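A sketch of building and validating arguments for this tool, assuming the two documented memo_type values (the helper itself is hypothetical):

```python
def memo_args(deal_id: str, generate: bool = False,
              memo_type: str = "internal") -> dict:
    """Build sieve_memo arguments (illustrative helper).
    memo_type must be 'internal' (IC-facing, full risks) or
    'external' (founder-facing) per the documentation."""
    if memo_type not in ("internal", "external"):
        raise ValueError("memo_type must be 'internal' or 'external'")
    return {"deal_id": deal_id, "generate": generate, "memo_type": memo_type}

memo_args("deal_abc123")                 # retrieve the existing memo
memo_args("deal_abc123", generate=True)  # generate a fresh one (~15-30s)
```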

Parameters (JSON Schema)

  deal_id (required)
  generate (optional)
  memo_type (optional, default: internal)

Output Schema

  No output parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=false and destructiveHint=false. The description adds valuable behavioral context: generation latency (~15-30 seconds), state dependency (requires completed screen), and semantic differences between memo types ('IC-facing, full risks' vs 'founder-facing'). Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear front-loading of purpose, followed by conditional behavior, prerequisites, and Args section. Minor redundancy exists between the paragraph explaining 'generate' and the Args description, but overall every sentence earns its place with no fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (noted in context signals), the description appropriately focuses on input parameters and operational behavior. It comprehensively covers the workflow prerequisites, timing expectations, and sibling tool relationships necessary for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing semantic meaning for all three parameters: deal_id includes valid sources, generate explains the creation trigger, and memo_type documents the two options with their business context (internal/external audiences).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the dual purpose with specific verbs ('Get or generate') and resource ('investment memo'), and distinguishes itself from siblings like sieve_deals (listing) and sieve_screen (screening) by focusing specifically on memo retrieval/creation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit conditional logic for when to use generate=true vs false, states the prerequisite ('Requires a completed screen'), and references sibling tools for the deal_id parameter ('from sieve_deals or sieve_screen'), giving clear workflow guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sieve_results (read-only)

Get the full results of a completed Sieve analysis.

Returns the Sieve Score (0-140), meeting decision (Take Meeting / Pass / Need More Info), executive summary, key strengths, and key concerns.

Args:
  deal_id: The deal ID returned by sieve_screen.
  sections: Comma-separated filter (e.g. 'summary,strengths,concerns'). Options: summary, profiles, findings, questions, strengths, concerns. Empty returns everything. Score and decision are always included.
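Building the comma-separated sections filter can be sketched as below (the helper and constant are illustrative; the option list is taken from the docs above):

```python
VALID_SECTIONS = {"summary", "profiles", "findings",
                  "questions", "strengths", "concerns"}

def results_args(deal_id: str, sections: tuple = ()) -> dict:
    """Build sieve_results arguments (illustrative helper).
    An empty sections string returns everything; score and
    decision are always included regardless of the filter."""
    unknown = set(sections) - VALID_SECTIONS
    if unknown:
        raise ValueError(f"unknown sections: {sorted(unknown)}")
    return {"deal_id": deal_id, "sections": ",".join(sections)}

results_args("deal_abc123", ("summary", "concerns"))
```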

Parameters (JSON Schema)

  deal_id (required)
  sections (optional)

Output Schema

  No output parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare read-only/safe operation. Description adds valuable behavioral details: specific output schema preview (Score 0-140 range, decision values), filtering logic (empty returns everything, certain fields always included), and parameter format (comma-separated). No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficient three-part structure: purpose statement, return value summary, and Args block with granular details. No redundant text; every sentence provides actionable information. Appropriate length for parameter complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a 2-parameter retrieval tool. References prerequisite sibling tool, explains output structure despite existence of output schema (helpful for agent planning), and documents all parameter semantics. No gaps given tool complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage, so description carries full burden. Excellently compensates by documenting deal_id's origin (sieve_screen), sections parameter syntax (comma-separated with example), valid enum-like options (summary, profiles, etc.), and default behavior (empty returns everything).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Get') + resource ('full results of a completed Sieve analysis'). Explicitly distinguishes from sibling 'sieve_screen' by referencing it as the source of the required deal_id and specifying 'completed' analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies prerequisite workflow (needs deal_id from sieve_screen) and indicates when to use (completed analysis). Lists specific output fields to expect. Lacks explicit 'when not to use' (e.g., don't call if analysis pending) but context is clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sieve_screen

Run a Sieve IMPACT-X Quick Screen on a startup.

Analyzes the company across 7 dimensions (Innovators, Market, Product, Advantage, Commerce, Traction, X-Factor) and returns an analysis ID. Takes 2-5 minutes to complete. Upserts -- if the company was previously screened, returns the existing deal (set confirm=true to re-screen).

Two ways to use:

  • v3 (recommended): First add documents with sieve_dataroom_add, then call sieve_screen(deal_id=...) to analyze everything in the data room.

  • v2 (legacy): Call sieve_screen(company_name=..., website_url=...) directly. At least one of website_url or pitch_deck_text is required in this mode.

Args:
  company_name: Name of the startup to screen (v2 flow, or to create new deal).
  deal_id: Screen an existing deal by ID (v3 flow -- use after sieve_dataroom_add).
  website_url: Company website URL (v2 flow).
  pitch_deck_text: Extracted pitch deck text (v2 flow).
  description: Brief company description (optional).
  confirm: Set to true to re-screen an existing deal.
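The two usage modes can be captured in one argument-builder sketch (the helper is hypothetical; it encodes the documented v3/v2 rules, including the v2 requirement for website_url or pitch_deck_text):

```python
def screen_args(deal_id=None, company_name=None, website_url=None,
                pitch_deck_text=None, description=None, confirm=False):
    """Build sieve_screen arguments for either flow (illustrative).
    v3 (recommended): pass deal_id after sieve_dataroom_add.
    v2 (legacy): pass company_name plus website_url or pitch_deck_text."""
    if deal_id:
        return {"deal_id": deal_id, "confirm": confirm}
    if not company_name:
        raise ValueError("v2 flow requires company_name")
    if not (website_url or pitch_deck_text):
        raise ValueError("v2 flow requires website_url or pitch_deck_text")
    args = {"company_name": company_name, "confirm": confirm}
    if website_url:
        args["website_url"] = website_url
    if pitch_deck_text:
        args["pitch_deck_text"] = pitch_deck_text
    if description:
        args["description"] = description
    return args

screen_args(deal_id="deal_abc123")                       # v3 flow
screen_args(company_name="Acme AI",
            website_url="https://acme.example")          # v2 flow
```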

Parameters (JSON Schema)

  confirm (optional)
  deal_id (optional)
  description (optional)
  website_url (optional)
  company_name (optional)
  pitch_deck_text (optional)

Output Schema

  No output parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (readOnlyHint=false, destructiveHint=false), the description adds critical operational context: the 2-5 minute duration, the 7 analysis dimensions, upsert behavior details, and the distinction between creating new deals vs screening existing ones. Minor gap: no mention of rate limits or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with logical progression: purpose → behavior → usage modes → parameter reference. The length is appropriate for the tool's complexity (dual workflows, 6 parameters). Minor deduction for slight redundancy between the first two sentences, but overall efficient with zero extraneous content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description appropriately focuses on invocation logic rather than return values. It comprehensively covers the complex dual-workflow nature, sibling tool dependencies, timing expectations, and all parameters—providing sufficient context for correct agent invocation despite high complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates via the 'Args:' section, documenting all 6 parameters with flow-specific context (e.g., deal_id is 'v3 flow -- use after sieve_dataroom_add', company_name is 'v2 flow'). It also explains parameter relationships and mutual exclusivity between the two usage modes.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific action 'Run a Sieve IMPACT-X Quick Screen on a startup' and clearly distinguishes this from siblings by detailing the two distinct invocation patterns (v3 recommended flow using deal_id vs v2 legacy flow using company_name/website_url) and explicitly referencing sieve_dataroom_add as a prerequisite for the recommended approach.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 'Two ways to use' section labeling v3 as 'recommended' and v2 as 'legacy', clarifies that sieve_dataroom_add must be called first for v3 flow, explains upsert behavior ('returns the existing deal'), and specifies when to use confirm=true ('to re-screen'). This gives clear when-to-use guidance versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sieve_status (read-only)

Check the progress of a Sieve analysis.

Returns which IMPACT-X dimensions are complete with their scores, overall progress percentage, and current phase.

Args: deal_id: The deal ID returned by sieve_screen.
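Since a screen takes 2-5 minutes, this tool lends itself to polling. A sketch, with heavy caveats: `call_tool` stands in for whatever tool-call method your MCP client exposes, and the `progress` percentage field is assumed from the prose above, not a documented schema:

```python
import time

def wait_for_screen(call_tool, deal_id, poll_seconds=30, timeout=600):
    """Poll sieve_status until the analysis completes (illustrative).
    `call_tool(name, arguments)` is a stand-in for an MCP client call;
    the 'progress' field name is an assumption."""
    deadline = time.monotonic() + timeout
    while True:
        status = call_tool("sieve_status", {"deal_id": deal_id})
        if status.get("progress", 0) >= 100:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError("analysis did not finish before timeout")
        time.sleep(poll_seconds)
```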

Parameters (JSON Schema)

  deal_id (required)

Output Schema

  No output parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, non-destructive). Description adds valuable behavioral context about return values ('IMPACT-X dimensions', 'scores', 'progress percentage', 'current phase') not found in annotations or structured fields, clarifying what constitutes 'status' in this domain.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose front-loaded, followed by return value preview and Args section. Three compact sentences with zero redundancy. Every line earns its place despite minimal length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter status tool. With output schema present, description appropriately focuses on high-level return semantics rather than detailed structure. Annotations cover operational safety, description covers domain-specific return values and parameter semantics. Minor gap: could mention polling frequency expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage. Description fully compensates by documenting the single parameter in Args section ('The deal ID returned by sieve_screen'), adding critical semantic context that this ID originates from sieve_screen specifically, not arbitrary sources.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb ('Check') and resource ('progress of a Sieve analysis'). Effectively distinguishes from siblings: implies interim status checking vs sieve_screen (initiation) and sieve_results (final output) through the 'progress' framing and return value description (dimensions complete, current phase).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies workflow sequence through Args section ('deal_id returned by sieve_screen'), suggesting use after screening. However, lacks explicit 'when to use' guidance such as 'Use this for polling during analysis' or explicit comparison to alternatives like sieve_results.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sieve_usage (read-only)

Check your Sieve API usage for the current billing period.

Shows screens used, monthly limit, tier, and organization name.

Parameters (JSON Schema)

  No parameters

Output Schema

  No output parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this as read-only and non-destructive. The description adds valuable context by listing the specific fields returned (screens used, monthly limit, tier, organization name), which helps the agent understand what data to expect beyond the safety profile.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first establishes purpose and scope, the second details specific return fields. Every word earns its place with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with zero parameters and an existing output schema, the description is appropriately complete. It adds value by highlighting key billing fields without needing to fully document the return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, which establishes a baseline score of 4 per the rubric. The description correctly does not invent parameter requirements, matching the empty input schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Check') with clear resource ('Sieve API usage') and scope ('current billing period'). It clearly distinguishes this administrative/billing tool from operational siblings like sieve_screen or sieve_deals.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies usage through the billing context (screens used, monthly limit), it provides no explicit guidance on when to use this versus similar tools like sieve_status, nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
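Putting the tools together, the recommended v3 pipeline (add a document, screen, poll, fetch results, generate a memo) can be sketched end to end. Everything here is an assumption layered on the descriptions above: `call_tool` is a placeholder for an MCP client's tool-call method, and the returned field names (`deal_id`, `progress`) are not documented on this page:

```python
import time

def screen_startup(call_tool, company, deck_path):
    """End-to-end v3 flow sketch (illustrative, field names assumed)."""
    doc = call_tool("sieve_dataroom_add", {
        "title": "Pitch Deck",
        "company_name": company,
        "file_path": deck_path,
    })
    deal_id = doc["deal_id"]  # assumed field name
    call_tool("sieve_screen", {"deal_id": deal_id})
    # The screen takes 2-5 minutes per the docs; poll until done.
    while call_tool("sieve_status", {"deal_id": deal_id}).get("progress", 0) < 100:
        time.sleep(30)
    results = call_tool("sieve_results", {"deal_id": deal_id})
    memo = call_tool("sieve_memo", {"deal_id": deal_id, "generate": True})
    return results, memo
```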
