
Server Details

Analyst-debate research on every public and private company. 8 tools: search, score, compare, and more.

Status: Healthy
Transport: Streamable HTTP
Repository: Ask-Cyborg/askcyborg-mcp
GitHub Stars: 0
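
For orientation, here is a minimal connection sketch using the official TypeScript MCP SDK over Streamable HTTP. The endpoint URL is a placeholder, since this page does not display the server's URL.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the server's actual URL.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.com/askcyborg/mcp"),
);
const client = new Client({ name: "askcyborg-demo", version: "1.0.0" });

await client.connect(transport);
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name)); // should enumerate the 8 tools below
```

The per-tool snippets below reuse this `client`.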

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.2/5 across 8 of 8 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool serves a distinct purpose: comparing companies, finding competitors, retrieving full reports, fetching a quick score, pulling recent developments, surfacing top insights, and searching by name or by industry. There is no overlap or ambiguity between them.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (compare_companies, find_competitors, get_company_report, get_cyborg_score, get_recent_developments, get_top_insights, search_by_industry, search_companies), making them predictable and easy to understand.

Tool Count: 5/5

Eight tools is well-scoped for a company research server: each tool earns its place, and the set is neither too sparse nor too sprawling.

Completeness: 4/5

The tool set covers search by name and by industry, detailed report retrieval, quick score lookup, comparison, competitor identification, recent developments, and top insights. A minor gap is the lack of a tool to list a company's recent reports, but core workflows are supported.

Available Tools (8)
compare_companies: A

Compare 2-5 companies side by side on Cyborg Score, industry, key insights, and competitive positioning. Useful for portfolio decisions, M&A short-listing, or competitive analysis.

Parameters (JSON Schema)
companies (required): List of 2-5 company names or slugs to compare.
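
An illustrative call, reusing the connected `client` from the sketch under Server Details; the company names are hypothetical examples.

```typescript
// 2-5 company names or slugs, per the schema above.
const comparison = await client.callTool({
  name: "compare_companies",
  arguments: { companies: ["OpenAI", "Anthropic", "Mistral"] },
});
```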
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description should disclose behavioral traits such as data source, limitations, and error handling. It only states what is compared, leaving gaps around output format and failure behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise with two sentences: the first clearly defines the action and scope, and the second lists use cases. No redundancy or superfluous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter) and no output schema, the description adequately covers input but lacks details on output format or structure, which is needed for an agent to fully understand the result.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes the parameter well (array of 2-5 strings), and the description adds value by detailing what the comparison covers (Cyborg Score, industry, etc.), enhancing meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool compares 2-5 companies on specific attributes (Cyborg Score, industry, insights, positioning), distinguishing it from siblings like get_company_report (single company) and get_cyborg_score (single score).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit use cases (portfolio decisions, M&A short-listing, competitive analysis) that indicate when to use, though it does not explicitly state when not to use or suggest alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_competitors: A

Find the named competitors of a company that AskCyborg's analyst panel identified as material to the target's strategic position. Returns each competitor with the one-line strategic tagline AskCyborg uses to characterize them. Useful for competitive landscape analysis, M&A short-listing, or pricing reference checks.

Parameters (JSON Schema)
company (required): Company name, ticker, or slug (e.g. 'OpenAI', 'AAPL', 'spacex').
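
A hypothetical invocation, again reusing the connected `client` from the sketch above.

```typescript
// Accepts a company name, ticker, or slug.
const competitors = await client.callTool({
  name: "find_competitors",
  arguments: { company: "spacex" },
});
```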
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the return format (each competitor with a one-line strategic tagline) and the selection criteria (analyst panel identification). It implies read-only behavior without stating it explicitly, but for a simple retrieval tool this is sufficient. There is no mention of auth or rate limits, though these are likely irrelevant for this tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences with no wasted words. It front-loads the main action in the first sentence, adds return detail in the second, and lists use cases in the third. Every sentence adds value, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description covers all needed information: purpose, input, return value, and usage scenarios. No gaps that could lead to incorrect invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for the 'company' parameter. The description adds no additional meaning beyond what the schema provides (e.g., 'Company name, ticker, or slug'). Baseline of 3 is appropriate as the description does not enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'find' and resource 'competitors of a company', specifying that competitors are those identified by AskCyborg's analyst panel as material to strategic position. It distinguishes from sibling tools like compare_companies and get_company_report, which focus on comparison and reports rather than competitor identification.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description lists explicit use cases: competitive landscape analysis, M&A short-listing, pricing reference checks. It does not provide explicit when-not-to-use or alternatives, but the context makes it clear this tool is for named competitors, while siblings like search_by_industry or compare_companies cover other scenarios. Slight lack of explicit exclusions keeps it from a 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_company_report: A

Retrieve AskCyborg's structured research report for a single company. Includes executive summary, Cyborg Score with rationale, strategic profile, top insights, competitive positioning, and recent developments. Returns a paywall-aware summary; the full 30-page report and analyst-debate audio are available at the returned URL.

Parameters (JSON Schema)
company (required): Company name or ticker (e.g. 'OpenAI', 'AAPL', 'SpaceX'). Slug form accepted (e.g. 'openai').
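
An illustrative call with the shared `client`; the ticker is an arbitrary example.

```typescript
// Name, ticker, or slug form all work, per the schema above.
const report = await client.callTool({
  name: "get_company_report",
  arguments: { company: "AAPL" },
});
```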
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool returns a paywall-aware summary and that the full report is available via URL, providing adequate transparency for a read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences: the first defines the action, the second lists contents, and the third notes the paywall and return URL. Efficient, front-loaded, no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no output schema, the description comprehensively details report contents, paywall behavior, and return URL, making it complete for an AI agent to understand the tool's function.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, covering the only parameter 'company' with name/ticker examples. The description does not add new information beyond the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool retrieves a structured research report for a single company, listing key content sections (executive summary, Cyborg Score, etc.). It is easily distinguishable from siblings like compare_companies or get_cyborg_score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for a single-company report and mentions paywall behavior, but does not explicitly state when to use it versus siblings. However, sibling names are self-explanatory, making usage clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_cyborg_score: A

Retrieve just the Cyborg Score (1-10) for a company. The Cyborg Score is AskCyborg's proprietary rating, synthesized from hundreds of data points across business model, financials, leadership, competitive position, technology, marketing, and ESG. Use this when the user wants a quick rating without the full report context.

Parameters (JSON Schema)
company (required): Company name or ticker.
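
A minimal call with the shared `client`, for the quick-rating use case; the company value is illustrative.

```typescript
// Returns just the 1-10 Cyborg Score for the company.
const score = await client.callTool({
  name: "get_cyborg_score",
  arguments: { company: "Stripe" },
});
```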
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description carries the burden. It discloses that the score is proprietary and synthesized from hundreds of data points across multiple categories, giving insight into its composition. It implicitly indicates a read-only operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences efficiently deliver purpose, score composition, and usage guidance with no redundant phrasing. Every sentence serves a clear function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple retrieval tool with one parameter and no output schema, the description covers purpose, usage context, and the nature of the output (score range 1-10). It lacks explicit return format details but is sufficient for the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single parameter 'company', which is described in the schema. The description does not add additional parameter information beyond the schema, meeting the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the verb 'Retrieve', the resource 'Cyborg Score (1-10)', and the scope 'for a company'. It distinguishes itself from siblings by noting it provides a quick rating 'without the full report context', contrasting with get_company_report.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit usage context: 'Use this when the user wants a quick rating without the full report context.' This implies when to use and hints at alternatives, though it does not explicitly state when not to use or name sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_recent_developments: A

Retrieve the most recent material developments (news, deals, leadership changes, product launches, financial events) that AskCyborg's analyst panel flagged as decision-relevant for this company. Each entry is dated and concise. Use this for news catch-up before a meeting, for monitoring portfolio companies, or to spot recent strategic shifts.

Parameters (JSON Schema)
company (required): Company name, ticker, or slug.
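
An illustrative call with the shared `client`, e.g. for pre-meeting news catch-up; the ticker is hypothetical.

```typescript
// Returns dated, analyst-flagged developments for the company.
const developments = await client.callTool({
  name: "get_recent_developments",
  arguments: { company: "NVDA" },
});
```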
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses that entries are dated, concise, and analyst-flagged. No annotations are provided, so the description covers behavioral expectations adequately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences: one defining the action, one noting the output format, and one suggesting usage. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Sufficient for a simple tool with one parameter, and it mentions the output format (dated, concise). It could address the missing-data edge case, but that is not critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema fully describes the 'company' parameter (100% coverage). The description adds no extra semantics beyond restating the purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool retrieves recent material developments for a company, listing examples. Distinguishes from sibling tools like get_company_report or find_competitors.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases like news catch-up and monitoring, but doesn't mention when to avoid using it or recommend alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_top_insights: A

Retrieve just the top analyst-debate insights for a company — the punchiest, decision-relevant claims that AskCyborg's analyst panel surfaced after stress-testing the company. Faster and more focused than get_company_report when you just need 'what should I know about this company in 60 seconds'.

Parameters (JSON Schema)
company (required): Company name, ticker, or slug.
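
A quick illustrative call with the shared `client`, for the 60-second-summary use case the description names.

```typescript
// Returns only the top analyst-debate insights, faster than a full report.
const insights = await client.callTool({
  name: "get_top_insights",
  arguments: { company: "openai" },
});
```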
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden for behavioral traits. It mentions 'stress-testing' but does not disclose potential side effects, authentication needs, or what happens on invalid input (e.g., an unknown company). The tool is a simple read operation, but more detail would help.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, consisting of two focused sentences that efficiently convey purpose, value, and differentiation without waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 required parameter, no nested objects, no output schema), the description covers the essential purpose and usage. However, it does not mention the return format or any error handling, which would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes the 'company' parameter as 'Company name, ticker, or slug.' with 100% coverage, so the description adds no extra parameter semantics. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'top analyst-debate insights' for a company, distinguishing it from the sibling 'get_company_report' by emphasizing speed and focus. The verb 'retrieve' and resource specification are precise.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly tells when to use this tool ('when you just need what should I know about this company in 60 seconds') and contrasts it with the heavier 'get_company_report', providing clear differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_by_industry: A

Find companies in AskCyborg's corpus that operate in a specific industry. Returns up to N companies with their Cyborg Score and one-line strategic profile, ranked by Cyborg Score. Use this for sector mapping, comparable analysis, or to discover companies you didn't know existed in a space.

Parameters (JSON Schema)
industry (required): Industry name or keyword (e.g. 'semiconductor', 'fintech', 'biotech', 'enterprise SaaS', 'aerospace').
limit (optional, default 10): Max results to return (max 25).
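
An illustrative sector query with the shared `client`, raising the result cap to its documented maximum.

```typescript
// limit defaults to 10; 25 is the documented maximum.
const sector = await client.callTool({
  name: "search_by_industry",
  arguments: { industry: "fintech", limit: 25 },
});
```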
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It discloses output format (up to N companies, Cyborg Score, strategic profile, ranked) and implies read-only behavior, but lacks details on error conditions or limits beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, no filler. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple search tool without an output schema, the description adequately explains return values and ordering. It could mention empty-results handling, but it is complete overall.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds value by specifying default limit (10), providing industry examples, and noting ranking by Cyborg Score, which goes beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Find companies in AskCyborg's corpus that operate in a specific industry' with a specific verb and resource, and distinguishes from siblings like search_companies and find_competitors by focusing on industry-based retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides clear use cases ('sector mapping, comparable analysis, or to discover companies you didn't know existed'), but does not explicitly state when not to use this tool versus alternatives like search_companies.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_companies: A

Search AskCyborg's corpus of company research by name or industry keyword. Returns up to N matches with company name, Cyborg Score, one-line strategic profile, and the URL to the full preview report. Use this first when the user mentions a company you want to research, or to discover companies in a category.

Parameters (JSON Schema)
query (required): Company name (e.g. 'OpenAI', 'Stripe'), partial name, or industry keyword (e.g. 'semiconductor', 'biotech').
limit (optional, default 10): Max results to return (max 25).
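
A hypothetical first-touch lookup with the shared `client`, since the description recommends using this tool first.

```typescript
// query accepts a name, partial name, or industry keyword.
const matches = await client.callTool({
  name: "search_companies",
  arguments: { query: "semiconductor" },
});
```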
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that results are returned up to N matches with specific fields, but does not mention sorting order, pagination, or behavior when no matches are found. The description adds minimal behavioral context beyond the input schema, but is not misleading.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences long, each serving a distinct purpose: stating the action, describing the output, and providing usage guidance. It is front-loaded with the main purpose and contains no unnecessary words or repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 2 simple parameters and no output schema or annotations, the description covers the essential aspects: what the tool does, what it returns, when to use it, and example inputs. It does not cover error handling or edge cases, but it is sufficient for an agent to use the tool correctly in most scenarios.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with descriptions for both parameters. The description adds value by explaining that the query can be a company name, partial name, or industry keyword, and reinforces the limit parameter's default and maximum. It also clarifies the output fields, which is not present in the schema, making the tool easier to use.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to search for companies by name or industry keyword. It also specifies the returned information (company name, Cyborg Score, strategic profile, preview report URL) and explicitly says to use it first for company research, distinguishing it from sibling tools that likely provide more detailed or comparative information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Use this first when the user mentions a company you want to research, or to discover companies in a category.' It implies this is the entry point, but does not explicitly mention when not to use it or name specific alternatives, though the context suggests deeper tools exist for further details.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
