Server Details

SEC-verified company data for AI. 8K+ companies, 1.19M filings. Origin chain provenance.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: skswave/origin-sec-registry
GitHub Stars: 0

Tool Descriptions: C

Average 3.2/5 across 9 of 9 tools scored. Lowest: 2.4/5.

Server Coherence: A

Disambiguation: 4/5

Most tools have distinct purposes, such as origin_company for company lookups, origin_digest for user briefings, and origin_transcripts for earnings call searches. However, origin_profile and origin_timeline could be confused as both provide company details, though origin_timeline focuses on historical data while origin_profile covers broader attributes.

Naming Consistency: 5/5

All tool names follow a consistent 'origin_' prefix with descriptive nouns, such as origin_company, origin_digest, and origin_search. This uniform pattern makes the set predictable and easy to navigate, with no deviations in style or structure.

Tool Count: 5/5

With 9 tools, the count is well-scoped for a registry focused on SEC data and company research. Each tool serves a clear function, from user enrollment to data retrieval, without being overly sparse or bloated, fitting the domain effectively.

Completeness: 4/5

The tool set covers key aspects like company lookups, user management, filings, and transcripts, supporting core workflows. A minor gap exists in update or delete operations for user enrollments, but agents can likely work around this with the provided tools.

Available Tools

9 tools
origin_company: A

Look up a US public company by ticker. SEC-verified, ~200 tokens. Use this FIRST for any company query.

Parameters

Name | Required | Description | Default
ticker | Yes | Stock ticker (e.g. AAPL, MSFT, WAVX) or CIK number |
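At the wire level, calling this tool means sending an MCP `tools/call` request. The sketch below builds that envelope, assuming the standard JSON-RPC 2.0 framing from the MCP specification; the `build_tool_call` helper name and the request `id` are illustrative, not part of this server's API, and the server URL (not shown on this page) would still be needed to actually send it.

```python
import json

def build_tool_call(name, arguments, request_id=1):
    """Build an MCP tools/call request envelope (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# origin_company takes a single required parameter: a ticker (or CIK number).
request = build_tool_call("origin_company", {"ticker": "AAPL"})
print(json.dumps(request, indent=2))
```

The same envelope shape applies to every tool on this server; only the `name` and `arguments` fields change.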
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'SEC-verified' (indicating data source/quality) and '~200 tokens' (giving output size expectations), which are useful behavioral traits. However, it doesn't cover error handling, rate limits, authentication needs, or what happens with invalid tickers.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each earn their place. The first sentence states the purpose and scope, the second provides critical usage guidance. No wasted words, and the most important information (what it does) is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter lookup tool with no annotations and no output schema, the description provides good context about data source (SEC-verified) and output size (~200 tokens). However, it doesn't describe the return format or structure, which would be helpful given the lack of output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the ticker parameter well-documented in the schema. The description adds minimal value beyond the schema by reinforcing the ticker focus but doesn't provide additional syntax, format details, or examples beyond what's already in the schema description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Look up') and resource ('US public company by ticker'), and distinguishes itself from siblings by specifying it's for company queries. It provides scope details (US public, SEC-verified) that differentiate it from other tools like origin_person or origin_search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states 'Use this FIRST for any company query,' providing clear guidance on when to use this tool versus alternatives. It establishes a priority rule that helps the agent select this tool over other company-related tools in the sibling list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

origin_digest: B

Get the latest research digest for an enrolled user. Returns filing signals, recent facts, stock quotes, and AI agent interest levels for their watched companies. Use this to brief a user on their portfolio.

Parameters

Name | Required | Description | Default
email | Yes | User email address |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions what the tool returns (filing signals, facts, etc.) but doesn't cover critical aspects like authentication needs, rate limits, error handling, or whether it's a read-only operation. For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, with two sentences that directly state the tool's function and usage. There's no unnecessary repetition or fluff, making it efficient. However, it could be slightly more structured by separating purpose from usage guidelines more clearly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (returns multiple data types like filing signals and AI interest levels), no annotations, and no output schema, the description is moderately complete. It outlines the return content but doesn't specify data formats, pagination, or error responses. For a tool with rich output but no structured output schema, more detail would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'email' parameter documented as 'User email address.' The description adds no additional parameter semantics beyond this, such as format requirements or validation rules. Since schema coverage is high, the baseline score of 3 is appropriate, as the schema already provides adequate parameter information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get the latest research digest for an enrolled user.' It specifies the verb ('Get') and resource ('research digest'), and mentions the content returned (filing signals, facts, quotes, interest levels). However, it doesn't explicitly differentiate this from sibling tools like 'origin_profile' or 'origin_timeline', which might also provide user-related information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some implied usage context: 'Use this to brief a user on their portfolio' suggests it's for portfolio updates. However, it lacks explicit guidance on when to use this tool versus alternatives (e.g., 'origin_profile' for user details or 'origin_company' for specific company data). No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

origin_engage: A

Enroll a user for company research briefings. The user provides their email and tickers to watch. They get weekly or daily digests with filing signals, earnings facts, and AI interest data. Use this when a user says they want to track companies or get regular updates.

Parameters

Name | Required | Description | Default
email | Yes | User email address |
tickers | Yes | Array of stock tickers to watch (max 20) |
frequency | No | How often to send briefings (default: weekly) |
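Because this is a mutation tool with a documented constraint (at most 20 tickers) and a server-side default (weekly), a client can validate arguments before sending. The helper below is a hypothetical sketch of that client-side assembly; the function name and the upper-casing of tickers are my assumptions, not behavior documented for this server.

```python
def build_engage_arguments(email, tickers, frequency=None):
    """Assemble an origin_engage argument dict, enforcing the documented
    max-20 ticker limit client-side. Omitting frequency lets the server
    apply its documented default (weekly)."""
    if not tickers or len(tickers) > 20:
        raise ValueError("tickers must contain between 1 and 20 symbols")
    args = {"email": email, "tickers": [t.upper() for t in tickers]}
    if frequency is not None:  # only send frequency when the user chose one
        args["frequency"] = frequency
    return args

args = build_engage_arguments("analyst@example.com", ["aapl", "wavx"])
print(args)
```

Failing fast on an oversized watch list avoids a round trip whose error behavior the description does not document.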
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses key behavioral traits: enrollment triggers recurring communications (weekly/daily digests) and describes content types (filing signals, earnings facts, AI interest data). However, it lacks details on permissions needed, rate limits, error handling, or confirmation mechanisms, leaving gaps for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the core purpose followed by usage guidance. Every phrase earns its place: no redundancy, no fluff, efficiently conveys essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is reasonably complete for a 3-parameter tool. It covers purpose, usage, and behavioral outcomes (digests with specific content). However, it lacks details on post-enrollment confirmation or error scenarios, slightly reducing completeness for a mutation operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds minimal value beyond the schema—it mentions email and tickers but doesn't explain semantics like ticker format or frequency implications. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Enroll a user'), resource ('company research briefings'), and scope (weekly/daily digests with filing signals, earnings facts, AI interest data). It distinguishes from siblings like origin_digest or origin_filings by focusing on user enrollment for recurring updates rather than one-time data retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool: 'when a user says they want to track companies or get regular updates.' This provides clear context for selection versus alternatives like origin_search (for one-time queries) or origin_timeline (for historical data).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

origin_filings: C

SEC filing index with EDGAR links.

Parameters

Name | Required | Description | Default
form | No | Optional: 10-K, 10-Q, 8-K |
ticker | Yes | |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'SEC filing index with EDGAR links', which implies a read-only operation to retrieve filing data, but doesn't specify any behavioral traits such as rate limits, authentication requirements, error handling, or the format of the returned index. For a tool with no annotation coverage, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, consisting of a single, clear phrase: 'SEC filing index with EDGAR links.' Every word earns its place by conveying the core purpose without any unnecessary details or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (2 parameters, no output schema, no annotations), the description is incomplete. It doesn't explain what the tool returns (e.g., the structure of the index, how EDGAR links are provided), behavioral aspects, or usage context. For a tool with no output schema and partial parameter documentation, more detail is needed to be fully helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no meaning beyond what the input schema provides. Schema description coverage is 50% (only the 'form' parameter has a description with optional values like 10-K, 10-Q, 8-K), and the description doesn't explain the 'ticker' parameter or provide additional context. With moderate schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate for gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: providing an SEC filing index with EDGAR links. It specifies the resource (SEC filings) and the key feature (EDGAR links), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'origin_search' or 'origin_timeline', which might also involve SEC filings or company data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any specific contexts, prerequisites, or exclusions, nor does it refer to sibling tools like 'origin_search' or 'origin_company' that might overlap in functionality. This leaves the agent without clear direction on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

origin_person: C

Career path across US public companies.

Parameters

Name | Required | Description | Default
name | Yes | Person name |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal information. It implies a read-only operation (career path retrieval) but does not specify aspects like data sources, rate limits, authentication needs, or error handling. This is inadequate for a tool with no annotation coverage, leaving key behavioral traits undisclosed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that is front-loaded with the core purpose. There is no wasted text, and it is appropriately sized for a simple tool. However, it could be more structured by explicitly stating the action (e.g., 'Retrieve the career path...') to improve clarity without adding length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (a tool with one parameter but no annotations or output schema), the description is incomplete. It lacks details on what the tool returns (e.g., a list of job roles, timelines), how it behaves, or any usage context. Without an output schema, the description should explain return values, but it does not, leaving significant gaps for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'name' documented as 'Person name'. The description adds no additional meaning beyond this, such as format examples (e.g., full name, case sensitivity) or constraints. Given the high schema coverage, the baseline score of 3 is appropriate, as the description does not compensate but also does not detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Career path across US public companies' states a general purpose but is vague about the specific action. It mentions a resource ('career path') and scope ('US public companies') but lacks a clear verb indicating what the tool does (e.g., retrieves, analyzes, or visualizes career paths). It does not distinguish from sibling tools like origin_profile or origin_timeline, which might overlap in functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description does not mention any context, prerequisites, or exclusions, and it does not reference sibling tools like origin_profile or origin_search that might serve similar purposes. This leaves the agent without direction on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

origin_profile: C

Full company profile: people, products, partnerships. ~800 tokens.

Parameters

Name | Required | Description | Default
ticker | Yes | |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the output length ('~800 tokens') which gives some indication of response size, but doesn't disclose important behavioral aspects like whether this is a read-only operation, what data sources it uses, potential rate limits, authentication requirements, or error conditions. For a tool with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise at just two short phrases. It's front-loaded with the core purpose ('Full company profile: people, products, partnerships') followed by a practical implementation detail ('~800 tokens'). There's no wasted language or unnecessary elaboration, though the brevity comes at the cost of completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations, no output schema, and 0% schema description coverage, the description is inadequate. It gives a high-level overview of what the tool returns but doesn't explain the structure of the response, what specific information about people/products/partnerships is included, or how to interpret the results. The token count estimate is helpful but doesn't compensate for the lack of output documentation and behavioral transparency.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, so the single 'ticker' parameter is completely undocumented in the schema. The description provides no information about this parameter - it doesn't explain what format the ticker should be in, provide examples, or clarify what constitutes a valid ticker. The description fails to compensate for the complete lack of schema documentation, leaving the agent guessing about proper parameter usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool provides a 'full company profile' covering people, products, and partnerships, which gives a general sense of purpose. However, it's vague about what 'full profile' entails and doesn't clearly distinguish this from sibling tools like 'origin_company' or 'origin_digest' that might also provide company information. The description lacks a specific verb and doesn't explain how this differs from other company-related tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple sibling tools like 'origin_company', 'origin_digest', 'origin_filings', and 'origin_timeline' that likely provide different types of company information, there's no indication of what makes this tool unique or when it should be preferred over other options. The only contextual information is the token count estimate, which doesn't help with tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

origin_timeline: B

Quarter-by-quarter history from SEC filings. ~2000 tokens.

Parameters

Name | Required | Description | Default
ticker | Yes | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses the output format ('Quarter-by-quarter history') and approximate output size ('~2000 tokens'), which are useful behavioral traits. However, it doesn't mention rate limits, authentication needs, data freshness, or what specific SEC filings are included. The description adds value but leaves significant behavioral aspects undocumented.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at just two short phrases. The first phrase states the core purpose, the second provides important output size context. Every word earns its place with zero redundancy or filler content. It's appropriately sized for a single-parameter tool with a clear purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and 0% schema description coverage, the description provides minimal but essential context about what the tool returns. The mention of '~2000 tokens' helps set expectations about output size, which is valuable. However, for a financial data tool with 8 sibling alternatives, more context about data scope, limitations, and differentiation would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage for the single 'ticker' parameter, the description provides no additional parameter information. It doesn't explain what format the ticker should be in, provide examples, or clarify any constraints. The baseline is 3 since schema coverage is low but there's only one parameter, making the gap less severe than with multiple undocumented parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides 'Quarter-by-quarter history from SEC filings' which specifies both the verb (provides history) and resource (SEC filings). It distinguishes from siblings by focusing on timeline/history rather than company profiles, digests, or other data types. However, it doesn't explicitly differentiate from origin_filings which might also involve SEC filings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the 8 sibling tools. There's no mention of alternatives, prerequisites, or specific contexts where this tool is appropriate versus origin_filings, origin_profile, or other related tools. The token count mention might imply usage constraints but doesn't guide tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

origin_transcripts: B

Search earnings call transcripts. Query by topic (ai, trusted_computing, blockchain, data_center, etc.), company ticker, or free text. Returns extracted facts, speakers, and metrics from earnings calls.

Parameters

Name | Required | Description | Default
q | No | Optional: free text search across facts |
first | No | Optional: if true, returns first-mention date per company for the topic |
topic | No | Optional: topic tag (ai, trusted_computing, blockchain, data_center, quantum, cybersecurity, robotics, autonomous, gaming, etc.) |
ticker | No | Optional: filter by company ticker |
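Since all four parameters are optional, a client should include only the filters that are actually set rather than sending null or empty values. A minimal sketch of that assembly, with a hypothetical helper name (how the server treats combined filters, e.g. topic plus ticker, is not documented on this page):

```python
def build_transcripts_query(q=None, topic=None, ticker=None, first=False):
    """Assemble an origin_transcripts argument dict, including only the
    filters that were actually provided (all parameters are optional)."""
    args = {}
    if q:
        args["q"] = q          # free-text search across extracted facts
    if topic:
        args["topic"] = topic  # topic tag, e.g. "ai" or "data_center"
    if ticker:
        args["ticker"] = ticker
    if first:
        args["first"] = True   # first-mention date per company for the topic
    return args

# First mention of the 'ai' topic per company:
print(build_transcripts_query(topic="ai", first=True))
# Free-text search scoped to a single ticker:
print(build_transcripts_query(q="data center capacity", ticker="MSFT"))
```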
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions the return content (facts, speakers, metrics) but lacks details on behavioral traits like pagination, rate limits, authentication needs, or data freshness. The description doesn't contradict annotations (none exist), but offers minimal operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded with the core purpose. Both sentences earn their place by specifying search methods and return values. It could be slightly more structured by separating search inputs from outputs, but remains efficient with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters with full schema coverage but no output schema or annotations, the description is adequate but incomplete. It covers the tool's function and return types, but lacks details on output format, error handling, or limitations. For a search tool with no structured output, more context on result structure would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters. The description adds marginal value by listing example topics and clarifying that 'q' is for free text search, but doesn't provide additional semantics beyond what's in the schema. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search earnings call transcripts' with specific search capabilities (by topic, ticker, or free text) and what it returns (facts, speakers, metrics). It distinguishes from siblings like origin_filings or origin_profile by focusing on transcripts, but doesn't explicitly contrast with origin_search which might overlap.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through 'Search earnings call transcripts' and lists query options, but doesn't explicitly state when to use this tool versus alternatives like origin_search or origin_timeline. No guidance on prerequisites or exclusions is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

