Glama
kindrat86

mcp-deal-flow-signal

get_signals_summary

Get a high-level summary of VC deal flow signals: total sectors tracked, startups monitored, current period, and data format links for investment analysis.

Instructions

Get a high-level summary of the VC Deal Flow Signal dataset: total sectors, startups tracked, current period, last refresh date, and links to all data formats (JSON, CSV, RSS, llms.txt).
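For reference, a call to this parameterless tool over MCP would look roughly like the following JSON-RPC 2.0 `tools/call` request. This is a sketch: the request id and transport framing are illustrative, while the tool name is taken from the listing above.

```python
import json

# Hedged sketch: an MCP "tools/call" request for get_signals_summary.
# The tool declares no arguments, so "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 1,  # illustrative request id
    "method": "tools/call",
    "params": {
        "name": "get_signals_summary",
        "arguments": {},  # input schema declares no parameters
    },
}

payload = json.dumps(request)
print(payload)
```

How the payload is delivered (stdio, SSE, or streamable HTTP) depends on the server's transport and is not shown here.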

Input Schema

No arguments

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior by specifying what summary metrics are returned (total sectors, startups tracked, etc.) and mentions data format links. However, it doesn't cover important aspects like whether this is a read-only operation (implied by 'Get'), potential rate limits, authentication requirements, or error conditions. The description adds value but leaves gaps in behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
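One way the server could close the read-only gap noted above is through MCP tool annotations. The field names below follow the MCP specification's `ToolAnnotations`; the values are assumptions about this tool's likely behavior, not anything the server actually publishes.

```python
# Hedged sketch: annotations that would make behavior explicit rather
# than implied by the verb "Get". All values are assumptions.
annotations = {
    "readOnlyHint": True,     # fetches a summary; modifies nothing
    "destructiveHint": False,  # no irreversible effects
    "idempotentHint": True,    # repeated calls return the same summary
    "openWorldHint": True,     # reads from an external dataset
}
```

With annotations like these present, the description would no longer carry the full burden of behavioral disclosure.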

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently communicates the tool's purpose, scope, and key return elements. Every element (verb, dataset, specific metrics, data format links) earns its place with no wasted words. It's appropriately sized for a simple, parameterless tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (zero parameters, no annotations, no output schema), the description provides adequate context about what the tool does and returns. However, without an output schema, the description doesn't specify the exact structure or format of the returned summary data (e.g., whether it's a structured object with named fields). For a tool that returns multiple data points, more detail about the response format would be helpful for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
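To make the missing-output-schema gap concrete, here is a hypothetical response shape inferred from the fields the description names. Every key and value below is an assumption; the server publishes no output schema, which is exactly the gap noted above.

```python
# Hypothetical response structure for get_signals_summary. Key names
# and values are illustrative placeholders, not the server's actual output.
summary = {
    "total_sectors": 12,           # illustrative value
    "startups_tracked": 340,       # illustrative value
    "current_period": "2024-Q2",   # illustrative value
    "last_refresh": "2024-06-01",  # illustrative value
    "formats": {                   # placeholder URLs
        "json": "https://example.com/signals.json",
        "csv": "https://example.com/signals.csv",
        "rss": "https://example.com/signals.rss",
        "llms_txt": "https://example.com/llms.txt",
    },
}
```

Documenting even this much structure in the description (or better, in an output schema) would let an agent parse the response on the first attempt.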

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool takes zero parameters, so the empty input schema is fully self-describing. The description appropriately does not discuss parameters, since none exist, and instead focuses on what the tool returns, which is the right emphasis for a parameterless tool. This meets the baseline score of 4 for zero-parameter tools.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what the tool does ('Get a high-level summary') and specifies the exact dataset ('VC Deal Flow Signal dataset') and key summary metrics (total sectors, startups tracked, etc.). It distinguishes from siblings like get_startup_signal or search_startups_by_sector by focusing on dataset-level metadata rather than individual startup data. However, it doesn't explicitly contrast with get_methodology, which might also provide dataset-level information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by billing itself as a 'high-level summary' of the dataset, suggesting it is meant for overview purposes rather than detailed analysis. However, it does not explicitly state when to use this tool versus alternatives such as get_methodology (which might explain data collection methods), nor when not to use it (e.g., when individual startup data is needed). The guidance is present but not comprehensive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

