
Server Details

Access premium IPO intelligence through AI agents. Retrieve detailed company profiles for upcoming and recent public offerings — including deal terms, SEC filings, AI-generated research with valuation models, competitor benchmarking, underwriter ratings, risk screening, and board analysis. Monitor overall market conditions with a proprietary daily sentiment score (-100 bearish to +100 bullish), backed by historical trend data to help time investment entries.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade A)

Average 3.8/5 across 5 of 5 tools scored. Lowest: 3.2/5.

Server Coherence (Grade A)
Disambiguation: 5/5

Each tool targets a distinct operation: updating dates batch-wise, finding missing dates, retrieving sentiment, retrieving full snapshot, and looking up a single IPO date. No overlap in functionality.

Naming Consistency: 4/5

Most tools use 'get_' for retrieval and descriptive verbs like 'find_', 'lookup_', and 'batch_update_'. The pattern is mostly consistent but verbs vary slightly, causing minor confusion.

Tool Count: 5/5

With 5 tools, the set is well-scoped for an IPO data server—neither too sparse nor overloaded. Each tool serves a clear purpose without redundancy.

Completeness: 4/5

The tools cover core operations: retrieving profiles, sentiment, and dates, as well as updating dates. Missing a general list/search for all IPOs, but the primary use cases are addressed.

Available Tools

5 tools
batch_update_ipo_dates (Grade B)

Lookup and update IPO dates for multiple companies. Can process all missing dates or a specific list of company IDs. Use limit to control batch size.

Parameters (JSON Schema)

Name        Required  Description  Default
limit       No
companyIds  No
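As a sketch of how an agent might invoke this tool, the payload below follows the MCP `tools/call` JSON-RPC convention and uses the two documented parameters. The argument values and company IDs are illustrative assumptions, not values from this server.

```python
import json

# Hypothetical MCP "tools/call" request for batch_update_ipo_dates.
# limit caps the batch size; companyIds restricts the update to specific
# companies (omit it to process all IPOs with missing dates).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "batch_update_ipo_dates",
        "arguments": {
            "limit": 10,
            "companyIds": ["acme-corp", "globex-inc"],  # illustrative IDs
        },
    },
}
print(json.dumps(request, indent=2))
```

Omitting `companyIds` entirely would exercise the "all missing dates" mode the description mentions.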
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It states the tool performs lookup and update, implying mutation, but does not disclose side effects, authentication needs, idempotency, or error handling for invalid IDs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff. Every word adds value, efficiently conveying purpose and key usage details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given simple schema (2 params, no output schema), description covers modes and batch size. However, it omits return value or error behavior, which would be helpful for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the description adds meaning: limit controls batch size and companyIds specifies targets. It implies default behavior (all missing dates when companyIds absent), though not explicitly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it looks up and updates IPO dates for multiple companies, distinguishing it from sibling tools that are query-oriented. However, it does not explicitly differentiate itself from siblings by name or function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It mentions two usage modes (all missing dates vs specific company IDs) and controlling batch size with limit, but lacks guidance on when to choose this tool over alternatives like find_ipos_missing_date.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_ipos_missing_date (Grade A)

Find all IPO companies in the database that do not have an ipodate field set (or are null/undefined). Returns a list of company IDs and names.

Parameters (JSON Schema)

No parameters.
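Since the tool takes no arguments, a call is just an empty `arguments` object. The response shape below (a list of IDs and names) is assumed from the prose description; the server publishes no output schema.

```python
# Hypothetical request and example result for find_ipos_missing_date.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "find_ipos_missing_date", "arguments": {}},
}

# What a result might look like (illustrative, not a documented schema):
example_result = [
    {"companyId": "acme-corp", "name": "Acme Corp"},
    {"companyId": "globex-inc", "name": "Globex Inc"},
]

# An agent could feed these IDs straight into batch_update_ipo_dates.
missing_ids = [row["companyId"] for row in example_result]
print(missing_ids)
```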

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries burden. It mentions the output (list of IDs and names) but does not disclose if the operation is read-only, any side effects, or permissions needed. Minimal but not misleading.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence conveying purpose and output without waste. Front-loaded with key action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter query tool, description adequately explains what it does and returns. Could mention scope (all companies) but not strictly necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has no parameters (coverage 100%), so description adds value by explaining tool purpose and return format. No ambiguity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the tool's action (find IPO companies), condition (missing ipodate), and output (list of IDs and names), clearly distinguishing it from sibling tools that update, lookup, or provide sentiment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The purpose is clear enough to infer usage, but no explicit guidance on when to choose this tool over siblings or any exclusions for specific use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_ipo_sentiment (Grade A)

Access IPOSignal's proprietary market sentiment score — a daily signal quantifying how well recent IPOs are being received by investors. Ranges from -100 (extreme bearish) to +100 (extreme bullish) with trend data for the last N days. Use it to identify favorable IPO windows, time investment entries, and assess overall market appetite for new listings. Also available as a paid HTTP endpoint at /api/agent/ipo-sentiment.

Parameters (JSON Schema)

Name  Required  Description  Default
days  No
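A minimal sketch of calling this tool and sanity-checking the documented score range. The `days` argument maps to the "trend data for the last N days" in the description; the response fields (`score`, `trend`) are assumptions, since no output schema is published.

```python
# Hypothetical MCP call requesting 30 days of trend data.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "get_ipo_sentiment", "arguments": {"days": 30}},
}

def is_valid_score(score: float) -> bool:
    """The documented range is -100 (extreme bearish) to +100 (extreme bullish)."""
    return -100 <= score <= 100

# Illustrative response shape, not a documented schema:
example_response = {"score": 42, "trend": [38, 40, 42]}
assert is_valid_score(example_response["score"])
print(example_response["score"])
```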
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full burden. It discloses the data type (daily signal, range, trend), but does not detail authentication or rate limits. However, it adds meaningful context beyond basic read behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences front-load the core purpose and provide immediate actionable guidance. No filler or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple tool (one optional param, no output schema), the description covers the return type (score range) and trend data. It does not specify exact response structure but is sufficient for an agent to understand input and output scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The only parameter 'days' has schema-compatible defaults and constraints, but the description only indirectly relates it to 'trend data for the last N days'. With 0% schema description coverage, the description partially compensates but lacks explicit linking of the parameter's effect.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the tool returns a proprietary market sentiment score for IPOs with a defined range (-100 to +100) and trend data. It distinguishes itself from sibling tools (date management, snapshots) by focusing on sentiment analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states use cases: identifying IPO windows, timing entries, assessing market appetite. It does not mention when not to use or alternatives, but the context of sibling tools makes the intended use clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_ipo_snapshot (Grade A)

Retrieve a complete IPO company profile — deal terms, pricing range, expected market cap, SEC registration and prospectus details, offering structure, and lifecycle timeline. When available, includes AI-generated research with valuation models, competitor benchmarking, underwriter ratings, board analysis, and risk factors. Provide exactly one of companyId, symbol, or cik. Also available as a paid HTTP endpoint at /api/agent/ipo/{id} or /api/agent/ipo/by-symbol/{symbol}.

Parameters (JSON Schema)

Name       Required  Description  Default
cik        No
symbol     No
companyId  No
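The description requires exactly one of `companyId`, `symbol`, or `cik`, a constraint the schema itself does not express. A client could enforce it before building the call, as in this sketch (the helper name and identifier values are hypothetical):

```python
# Build arguments for get_ipo_snapshot, enforcing the exactly-one rule.
def snapshot_arguments(companyId=None, symbol=None, cik=None) -> dict:
    provided = {
        k: v
        for k, v in {"companyId": companyId, "symbol": symbol, "cik": cik}.items()
        if v is not None
    }
    if len(provided) != 1:
        raise ValueError("Provide exactly one of companyId, symbol, or cik")
    return provided

request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "get_ipo_snapshot",
        "arguments": snapshot_arguments(symbol="ACME"),  # illustrative symbol
    },
}
print(request["params"]["arguments"])
```

Passing zero or two identifiers raises before the request is ever sent, which surfaces the mutual-exclusivity constraint locally instead of as a server error.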
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. Description implies a read operation ('Retrieve') and lists included data but does not explicitly state it is non-destructive or mention authentication or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is front-loaded with the main purpose and efficiently conveys scope. The mention of the HTTP endpoint is slightly extraneous but not excessive.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema. Description enumerates many data categories (deal terms, pricing, SEC details, etc.) and mentions AI-generated research when available. Provides sufficient context for a complex tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has zero description coverage. The description adds crucial context that parameters are mutually exclusive and exactly one must be provided, which is not apparent from the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Retrieve a complete IPO company profile' and lists specific content categories. It distinguishes from sibling tools which focus on dates or sentiment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly requires exactly one of companyId, symbol, or cik. Does not specify when not to use or alternatives, but siblings are different enough that confusion is unlikely.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lookup_ipo_date (Grade B)

Lookup the IPO date (first trading day) for a company using AI. Takes company ID or company name. Returns the found date or "unknown".

Parameters (JSON Schema)

Name         Required  Description  Default
companyId    No
companyName  No
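A sketch of calling this tool by company name and handling the documented "unknown" sentinel. The company name and date below are illustrative; the helper is hypothetical client-side code, not part of the server.

```python
# Hypothetical MCP call identifying the company by name instead of ID.
request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
        "name": "lookup_ipo_date",
        "arguments": {"companyName": "Acme Corp"},  # or companyId instead
    },
}

def parse_ipo_date(result: str):
    """Map the documented "unknown" sentinel to None; pass dates through."""
    return None if result == "unknown" else result

assert parse_ipo_date("unknown") is None
assert parse_ipo_date("2024-06-18") == "2024-06-18"
```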
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. The description mentions 'using AI' but does not disclose behavioral traits like potential slowness, correctness guarantees, or side effects. For a tool with zero annotations, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose and input/output. No unnecessary words, but could be slightly more structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but the description explicitly states the return value. For a simple lookup, this may suffice, but missing error handling and performance details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description clarifies the role of the two optional parameters (companyId and companyName) as alternative ways to identify the company. This adds value over the schema which only lists types and no relationship. However, schema coverage is 0%, so baseline is low.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'lookup', the resource 'IPO date', and the return type. It distinguishes from siblings which are for updating, finding missing, or getting sentiment/snapshot.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like batch_update_ipo_dates or find_ipos_missing_date. Lacks context on prerequisites or when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
