
IPOGrid IPO Calendar & Filing Research

Server Details

IPO calendar, SEC filings, deal terms, and IPO news research via the IPOGrid MCP server.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: Grade B

Average 3.5/5 across 4 of 4 tools scored.

Server Coherence: Grade A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: get_chart retrieves chart data, get_company fetches a single company's details, list_companies provides a filtered list of companies, and list_news returns news coverage. There is no overlap in functionality, making tool selection straightforward for an agent.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (get_chart, get_company, list_companies, list_news) with clear and predictable naming. This uniformity enhances readability and usability across the tool set.

Tool Count: 4/5

With 4 tools, the count is reasonable for an IPO calendar and filing research server, covering key operations like data retrieval and listing. It's slightly lean but functional, as it includes core actions without being overwhelming or insufficient for the domain.

Completeness: 4/5

The tool set covers essential operations for IPO research, including fetching company details, listing companies with filters, retrieving chart data, and accessing news. Minor gaps might exist, such as lack of update or delete operations, but these are not critical for a research-focused server, and agents can work effectively with the provided tools.

Available Tools

4 tools
get_chart (Get Chart): Grade A
Read-only, Idempotent

Return IPOGrid chart data with canonical chart embed and API URLs. Use this instead of hand-building chart URLs.

Parameters (JSON Schema)
group (optional)
range (optional)
scope (optional)
bucket (optional)
issuer (optional)
metric (optional)
date_to (optional)
date_from (optional)
materiality (optional)
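Since the schema leaves these parameters undescribed, a calling agent can only guess at values. Below is a minimal sketch of an MCP tools/call request for get_chart; the JSON-RPC envelope follows the MCP specification, but every argument value (the ISO date bounds, the choice to pass only dates) is a hypothetical guess, not documented behavior.

```python
import json

# Minimal MCP tools/call request for get_chart. The envelope follows the
# MCP JSON-RPC spec; the argument values below are hypothetical, since the
# schema documents no parameter semantics.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_chart",
        "arguments": {
            # All eight parameters are optional; only date bounds are shown.
            "date_from": "2024-01-01",  # assumed ISO date format
            "date_to": "2024-12-31",
        },
    },
}
print(json.dumps(request, indent=2))
```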
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key behavioral traits: read-only, open-world, idempotent, and non-destructive. The description adds context about returning 'canonical chart embed and API URLs,' which clarifies the output format beyond annotations. However, it doesn't disclose additional details like rate limits, authentication needs, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded: two sentences with zero waste. The first sentence states the purpose, and the second provides usage guidelines, both earning their place efficiently.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, no output schema), the description is incomplete. It lacks parameter explanations and details on return values, though annotations provide safety context. The purpose and usage are clear, but the absence of parameter semantics and output information limits overall completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description provides no information about any of the 8 parameters. Schema description coverage is 0%: the schema lists only enums and patterns, without explanations. The description fails to compensate for this gap, leaving the meaning of parameters like 'group', 'range', or 'metric' undefined.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Return IPOGrid chart data with canonical chart embed and API URLs.' It specifies the verb ('Return') and resource ('IPOGrid chart data'), and distinguishes it from manual URL construction. However, it doesn't explicitly differentiate from sibling tools like get_company or list_companies, which are about different resources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use this instead of hand-building chart URLs.' This clearly indicates when to use this tool (for chart data retrieval) versus an alternative approach (manual URL construction), though it doesn't mention sibling tools specifically.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_company (Get Company): Grade A
Read-only, Idempotent

Fetch one IPOGrid company detail by CIK.

Parameters (JSON Schema)
cik (required)
include (optional)
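A minimal sketch of calling get_company, assuming the standard MCP tools/call envelope. The CIK below is a zero-padded placeholder, not a real filer, and 'include' is omitted because its allowed values are undocumented.

```python
import json

def build_get_company_call(cik: str, request_id: int = 1) -> dict:
    """Build an MCP tools/call request for get_company (cik is required)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "get_company",
            # 'include' is left out: its enum values are not documented.
            "arguments": {"cik": cik},
        },
    }

payload = build_get_company_call("0000000000")  # placeholder CIK
print(json.dumps(payload))
```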
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key behavioral traits (read-only, open-world, idempotent, non-destructive), so the description adds minimal value. It mentions fetching 'detail' which hints at comprehensive data, but doesn't disclose specifics like rate limits, authentication needs, or error handling. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Fetch one IPOGrid company detail by CIK'). There's no wasted text, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema), annotations provide safety and idempotency info, but the description lacks details on return values, error cases, or parameter usage. It's adequate for a basic fetch tool but incomplete for full agent guidance without output schema support.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description doesn't explain parameters beyond implying 'cik' is required for fetching. It mentions 'detail' but doesn't clarify the optional 'include' parameter's purpose or enum values. Baseline is 3 due to low coverage, but it partially compensates by hinting at the tool's scope.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Fetch') and resource ('one IPOGrid company detail'), specifying it's by CIK identifier. It distinguishes from sibling 'list_companies' by focusing on a single company rather than listing multiple. However, it doesn't explicitly contrast with 'get_chart' or 'list_news', keeping it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning 'by CIK', suggesting it's for retrieving details of a specific company when you have its CIK. It doesn't provide explicit when-not-to-use guidance or name alternatives like 'list_companies' for broader queries, leaving usage context somewhat inferred rather than stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_companies (List Companies): Grade A
Read-only, Idempotent

List IPOGrid companies using the same filters as the public v1 API.

Parameters (JSON Schema)
kind (optional): Company kind filter.
limit (optional)
scope (optional): Filter to IPO-only rows or all active deal rows.
cursor (optional)
market (optional): Filter by normalized market family.
include (optional): Optional enrichments to include in each company row.
updated_since (optional): ISO-8601 lower bound on companies.updated_at.
gross_proceeds_gt (optional): Lower bound on gross proceeds, using stated gross proceeds or a midpoint estimate (price-range midpoint times shares offered).
gross_proceeds_lt (optional): Upper bound on gross proceeds, using stated gross proceeds or a midpoint estimate (price-range midpoint times shares offered).
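The gross_proceeds bounds fall back to a midpoint estimate (price-range midpoint times shares offered) when no stated figure exists. A sketch of that arithmetic alongside a filtered tools/call request; the price range, share count, limit, and threshold are all made-up illustrative values, not documented defaults.

```python
# Hypothetical filing terms. The midpoint estimate described in the schema
# is (price-range midpoint) x (shares offered).
price_low, price_high = 14.0, 16.0   # hypothetical price range, USD
shares_offered = 10_000_000          # hypothetical share count
midpoint_estimate = (price_low + price_high) / 2 * shares_offered

# MCP tools/call request filtering to recently updated, larger deals.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_companies",
        "arguments": {
            "limit": 25,
            "updated_since": "2025-01-01T00:00:00Z",  # ISO-8601 per schema
            "gross_proceeds_gt": 100_000_000,  # deals above ~$100M
        },
    },
}
print(midpoint_estimate)  # 150000000.0 for these made-up terms
```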
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only, open-world, idempotent, and non-destructive behavior, so the description doesn't need to repeat these. It adds value by specifying that filters match the 'public v1 API', which implies consistency with external documentation, though rate limits and authentication requirements remain undisclosed. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('List IPOGrid companies') and adds necessary context about filters. There's no wasted verbiage or redundancy, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (9 parameters, no output schema), the description is minimal but adequate due to strong annotations and high schema coverage. However, it lacks details on pagination (implied by 'cursor'), response format, or error handling, which could be useful for an agent invoking this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 78% (high), so the baseline is 3. The description doesn't add specific parameter details beyond what the schema provides (e.g., it doesn't explain filter interactions or default behaviors). It only generically references 'filters', which aligns with the schema but doesn't enhance understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List') and resource ('IPOGrid companies'), and specifies the filtering mechanism ('using the same filters as the public v1 API'). However, it doesn't explicitly differentiate from sibling tools like 'get_company' (which likely retrieves a single company) or 'list_news' (which lists news items), leaving some ambiguity about when to choose this tool over others.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_company' or 'list_news'. It mentions the filtering mechanism but doesn't specify use cases, prerequisites, or exclusions. This lack of context makes it harder for an agent to decide between sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_news (List News): Grade B
Read-only, Idempotent

List IPOGrid news coverage with optional full content.

Parameters (JSON Schema)
cik (optional)
limit (optional)
cursor (optional)
include (optional)
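The cursor parameter suggests paginated results. Below is a sketch of building successive list_news calls, assuming the common pattern where each response hands back an opaque cursor for the next page; the cursor string and the pagination contract itself are assumptions, since neither is documented.

```python
def build_list_news_call(cik=None, cursor=None, limit=10):
    """Build an MCP tools/call request for list_news (all params optional)."""
    args = {"limit": limit}
    if cik is not None:
        args["cik"] = cik  # restrict news to one company by CIK
    if cursor is not None:
        args["cursor"] = cursor  # opaque pagination token (assumed)
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "list_news", "arguments": args},
    }

first_page = build_list_news_call()
# A follow-up call would pass the cursor returned by the previous response;
# "abc123" is a placeholder, not a real token.
next_page = build_list_news_call(cursor="abc123")
```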
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety (readOnlyHint, destructiveHint) and behavior (openWorldHint, idempotentHint), so the bar is low. The description adds value by specifying 'optional full content', which hints at output behavior beyond annotations, but doesn't detail aspects like pagination or rate limits, keeping it from a top score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and adds a key optional feature. There is no wasted text, making it highly concise and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters with 0% schema coverage and no output schema, the description is incomplete. It lacks details on parameter meanings, output format, and usage context, making it insufficient for an agent to fully understand the tool's operation without guessing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate but only mentions 'optional full content', which loosely relates to the 'include' parameter. It fails to explain other parameters (cik, limit, cursor), leaving significant gaps in understanding their purposes and usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List') and resource ('IPOGrid news coverage'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_company' or 'list_companies', which might also involve company-related data retrieval, so it falls short of a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_company' or 'list_companies'. It mentions 'optional full content' but doesn't specify contexts or exclusions, leaving usage unclear without external knowledge.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
