
Server Details

Look up how any brand surfaces in ChatGPT and Google AI Overviews. Brands, prompts, sources, niches & more.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Tool Descriptions: B

Average 3.4/5 across 6 of 6 tools scored.

Server Coherence: A

Disambiguation: 3/5

The set has some overlapping functionality: 'fetch' resolves IDs like brand:stripe, which overlaps with 'parse_get_brand' for brand details. Similarly, 'fetch' may also handle prompt IDs, overlapping with 'parse_get_prompt'. The alias 'search' for 'parse_search' adds confusion. However, most tools have distinct purposes.
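The overlap can be made concrete. Under the 'type:slug' ID convention described above, a client-side router (every helper name here is illustrative, not part of the server) would send a fetch ID and the corresponding dedicated tool to the same data:

```python
def route_fetch_id(fetch_id: str) -> tuple[str, str]:
    """Split a typed ID like 'brand:stripe' into (dedicated tool, argument).

    Hypothetical sketch: shows why 'fetch' duplicates the dedicated tools.
    """
    kind, _, slug = fetch_id.partition(":")
    if kind == "brand":
        return ("parse_get_brand", slug)   # same data as fetch("brand:...")
    if kind == "prompt":
        return ("parse_get_prompt", slug)  # same data as fetch("prompt:...")
    raise ValueError(f"unknown ID type: {kind!r}")

print(route_fetch_id("brand:stripe"))    # ('parse_get_brand', 'stripe')
print(route_fetch_id("prompt:best-crm"))  # ('parse_get_prompt', 'best-crm')
```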

Naming Consistency: 2/5

There are two naming conventions: 'parse_' prefix with verb_noun (e.g., parse_get_brand) and bare verbs 'fetch' and 'search'. This inconsistency makes it harder for agents to predict tool names. The alias 'search' also breaks the pattern.

Tool Count: 5/5

With 6 tools, the set is well-scoped for a marketing research server. Each tool serves a clear purpose (search, brand details, prompt details, stats, and compatibility aliases). No unnecessary bloat.

Completeness: 4/5

The server covers the main operations for the Parse index: searching, getting brand details, getting prompt details, and viewing stats. Minor gaps exist (e.g., no dedicated tool for citation sources beyond what's included in brand output), but overall it's sufficient for the domain.

Available Tools

6 tools
fetch (Fetch): Grade C

Compatibility alias that resolves fetch IDs like brand:stripe or prompt:best-crm into JSON-text results with human-readable text.

Parameters (JSON Schema):
id (required)
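For reference, a call to this tool would follow the standard MCP tools/call JSON-RPC shape; the request id and argument value below are examples only:

```python
import json

# Sketch of an MCP tools/call request for the 'fetch' tool. The envelope
# follows the MCP specification; the numeric id is arbitrary.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "fetch",
        "arguments": {"id": "brand:stripe"},  # the tool's only (required) parameter
    },
}
print(json.dumps(request, indent=2))
```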
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It implies a read operation but does not disclose side effects, error behavior, or prerequisites for using the IDs. The behavior is minimally described.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that front-loads the purpose. It has no unnecessary words, making it concise, though a brief note on the return structure would add clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description covers the basic action but omits return format details, error handling, and relationship to siblings. It is adequate but not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema coverage, the description adds meaning by giving example values (e.g., 'brand:stripe'). This helps interpret the 'id' parameter, though it lacks details on required format or constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool resolves fetch IDs into JSON-text results with human-readable text, and provides examples like 'brand:stripe' or 'prompt:best-crm'. However, it does not differentiate from sibling tools like parse_get_brand or parse_get_prompt, which likely serve similar purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool versus its siblings. The description only labels it as a 'compatibility alias', leaving the agent without context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

parse_get_brand (Get brand AI visibility brief): Grade A

Fetch a concise public marketing brief for one brand, including Parse score, strengths, weak spots, top prompts, citation sources, related brands, and next research questions.

Parameters (JSON Schema):
slug_or_id (required)
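The ambiguity the scores below point at can be sketched: the parameter name suggests either a brand slug or an internal id is accepted, but the schema never says which formats are valid, so any client-side check like this one is pure guesswork:

```python
def looks_like_slug(value: str) -> bool:
    """Heuristic only (an assumption, not documented by the server):
    treat lowercase alphanumerics joined by hyphens as slugs."""
    return bool(value) and value == value.lower() and value.replace("-", "").isalnum()

# Either form might be valid for 'slug_or_id'; the schema does not say.
assert looks_like_slug("stripe")
assert not looks_like_slug("Brand_42!")
```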
Behavior: 4/5

With no annotations, the description indicates it is a read-only operation ('Fetch'), and 'public' suggests no destructive potential. It lists the return contents, offering adequate transparency, though it could explicitly state it does not modify data.

Conciseness: 4/5

The description is a single, focused sentence that immediately specifies the action and output contents. It is concise, though it could benefit from being broken into multiple sentences for clarity.

Completeness: 3/5

For a simple tool with one parameter and no output schema, the description covers the main purpose and output contents. However, it lacks parameter explanation and usage context with siblings, making it minimally adequate.

Parameters: 1/5

The input schema has 0% description coverage and the tool description does not explain the parameter 'slug_or_id'. It merely implies the tool is for one brand but gives no hint that this parameter is the brand identifier or how to use it.

Purpose: 5/5

The description clearly states the action 'Fetch' and the resource 'public marketing brief for one brand'. It lists specific contents like Parse score, strengths, etc., making the tool's purpose distinct from siblings such as parse_get_prompt or parse_search.

Usage Guidelines: 3/5

The description implies usage for retrieving a brand's marketing brief but provides no explicit guidance on when to use this tool versus alternatives like parse_search or parse_get_stats.

parse_get_prompt (Get AI prompt detail): Grade B

Fetch one public organic prompt by slug when the user wants to inspect the exact AI-search question behind a result.

Parameters (JSON Schema):
slug (required)
Behavior: 2/5

No annotations are provided, so the description must fully convey behavior. It states the tool fetches data, implying a read operation, but lacks details on side effects, authentication requirements, error handling, or rate limits. This is insufficient for an agent to understand the full behavioral implications.

Conciseness: 4/5

The description is a single sentence with no redundancy. It conveys the core action and context efficiently, though it could be slightly expanded without losing conciseness.

Completeness: 3/5

Given the tool's simplicity (one required parameter, no output schema), the description is marginally adequate. It explains what the tool does and when to use it, but omits details about return format, possible errors, and pagination, making it minimally complete.

Parameters: 1/5

The input schema has 0% description coverage, and the description only mentions 'by slug' without explaining what a slug is, its format, or constraints. No additional semantic value is added beyond the parameter name.

Purpose: 5/5

The description clearly states the verb 'Fetch', the resource 'one public organic prompt', and the identifier 'by slug'. It also specifies the context 'when the user wants to inspect the exact AI-search question behind a result', distinguishing it from sibling tools like parse_search (searching prompts) or parse_get_brand (getting brand info).

Usage Guidelines: 4/5

The description includes a 'when' clause, indicating the appropriate context for use. While it does not explicitly list alternatives or when not to use it, the purpose is clear enough to avoid confusion with siblings.

parse_get_stats (Summarize Parse dataset scale): Grade A

Explain the public Parse index scale and freshness: tracked brands, organic prompts, and citation observations.

Parameters (JSON Schema):
No parameters
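Calling a zero-parameter tool is the degenerate case of the MCP tools/call shape: the arguments object is simply empty. A minimal sketch (request id arbitrary):

```python
import json

# Sketch of an MCP tools/call request for a tool that takes no parameters.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "parse_get_stats", "arguments": {}},
}
print(json.dumps(request))
```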

Behavior: 2/5

No annotations exist, and the description does not disclose behavioral traits such as read-only nature, permission requirements, or rate limits. It only states what the tool does, leaving the agent uninformed about side effects or constraints.

Conciseness: 5/5

The description is a single, front-loaded sentence that conveys the tool's purpose efficiently with no wasted words.

Completeness: 4/5

Given no parameters and no output schema, the description adequately covers the tool's function. However, it does not hint at the output format (e.g., text summary vs. structured data), which could help the agent understand what to expect.

Parameters: 4/5

The tool has zero parameters, and schema description coverage is 100%. The description adds no parameter details because none exist, but a baseline score of 4 is appropriate, as no additional value is needed.

Purpose: 5/5

The description clearly states the tool's purpose: explaining the public Parse index scale and freshness. It specifies the exact data points (tracked brands, organic prompts, citation observations), making it distinct from siblings like parse_get_brand or parse_search.

Usage Guidelines: 3/5

The description implies usage for obtaining a summary overview but lacks explicit guidance on when to use this tool versus alternatives like parse_get_brand or parse_get_prompt. No when-not-to-use or alternative references are provided.

