Parse
Server Details
Look up how any brand surfaces in ChatGPT and Google AI Overviews. Brands, prompts, sources, niches & more.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 3.4/5, with 6 of 6 tools scored.
The set has some overlapping functionality: 'fetch' resolves IDs like brand:stripe, which overlaps with 'parse_get_brand' for brand details. Similarly, 'fetch' may also handle prompt IDs, overlapping with 'parse_get_prompt'. The alias 'search' for 'parse_search' adds confusion. However, most tools have distinct purposes.
There are two naming conventions: 'parse_' prefix with verb_noun (e.g., parse_get_brand) and bare verbs 'fetch' and 'search'. This inconsistency makes it harder for agents to predict tool names. The alias 'search' also breaks the pattern.
With 6 tools, the set is well-scoped for a marketing research server. Each tool serves a clear purpose (search, brand details, prompt details, stats, and compatibility aliases). No unnecessary bloat.
The server covers the main operations for the Parse index: searching, getting brand details, getting prompt details, and viewing stats. Minor gaps exist (e.g., no dedicated tool for citation sources beyond what's included in brand output), but overall it's sufficient for the domain.
Available Tools
6 tools

fetch (Fetch), quality grade C
Compatibility alias that resolves fetch IDs like brand:stripe or prompt:best-crm into JSON-text results with human-readable text.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | | |
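For illustration, here is a minimal sketch of invoking this tool with the TypeScript MCP SDK over Streamable HTTP. The endpoint URL is hypothetical (the listing does not expose it), and the ID value is taken from the description above.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Hypothetical endpoint; substitute the server's real Streamable HTTP URL.
const transport = new StreamableHTTPClientTransport(new URL("https://parse.example.com/mcp"));
const client = new Client({ name: "parse-demo", version: "1.0.0" });
await client.connect(transport);

// Resolve a prefixed fetch ID into a JSON-text result with human-readable text.
const result = await client.callTool({
  name: "fetch",
  arguments: { id: "brand:stripe" }, // "prompt:best-crm" would work the same way
});
console.log(result.content);
```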
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It implies a read operation but does not disclose side effects, error behavior, or prerequisites for using the IDs. The behavior is minimally described.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that front-loads the purpose. It has no unnecessary words, making it concise, though a little more structure could add clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description covers the basic action but omits return format details, error handling, and relationship to siblings. It is adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 0% schema coverage, the description adds meaning by giving example values (e.g., 'brand:stripe'). This helps interpret the 'id' parameter, though it lacks details on required format or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool resolves fetch IDs into JSON-text results with human-readable text, and provides examples like 'brand:stripe' or 'prompt:best-crm'. However, it does not differentiate from sibling tools like parse_get_brand or parse_get_prompt, which likely serve similar purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is given on when to use this tool versus its siblings. The description only labels it as a 'compatibility alias', leaving the agent without context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
parse_get_brand (Get brand AI visibility brief), quality grade A
Fetch a concise public marketing brief for one brand, including Parse score, strengths, weak spots, top prompts, citation sources, related brands, and next research questions.
| Name | Required | Description | Default |
|---|---|---|---|
| slug_or_id | Yes | | |
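A minimal sketch, reusing the client from the fetch example above; the slug value "stripe" is an assumption derived from the "brand:stripe" example.

```typescript
// Reuses `client` from the earlier sketch; "stripe" is an assumed slug.
const brief = await client.callTool({
  name: "parse_get_brand",
  arguments: { slug_or_id: "stripe" },
});
console.log(brief.content); // Parse score, strengths, weak spots, top prompts, ...
```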
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description indicates it is a read-only operation ('Fetch'), and 'public' suggests no destructive potential. It lists the return contents, offering adequate transparency, though it could explicitly state it does not modify data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, focused sentence that immediately specifies the action and output contents. It is concise, though it could benefit from being broken into multiple sentences for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description covers the main purpose and output contents. However, it lacks parameter explanation and usage context with siblings, making it minimally adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage and the tool description does not explain the parameter 'slug_or_id'. It merely implies the tool is for one brand but gives no hint that this parameter is the brand identifier or how to use it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'Fetch' and the resource 'public marketing brief for one brand'. It lists specific contents like Parse score, strengths, etc., making the tool's purpose distinct from siblings such as parse_get_prompt or parse_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving a brand's marketing brief but provides no explicit guidance on when to use this tool versus alternatives like parse_search or parse_get_stats.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
parse_get_prompt (Get AI prompt detail), quality grade B
Fetch one public organic prompt by slug when the user wants to inspect the exact AI-search question behind a result.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | | |
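A minimal sketch, again reusing the client from the fetch example; "best-crm" is borrowed from the "prompt:best-crm" example above and may not be a live slug.

```typescript
// Reuses `client` from the earlier sketch; "best-crm" is an assumed slug.
const prompt = await client.callTool({
  name: "parse_get_prompt",
  arguments: { slug: "best-crm" },
});
console.log(prompt.content);
```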
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully convey behavior. It states the tool fetches data, implying a read operation, but lacks details on side effects, authentication requirements, error handling, or rate limits. This is insufficient for an agent to understand the full behavioral implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no redundancy. It conveys the core action and context efficiently, though it could be slightly expanded without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter, no output schema), the description is marginally adequate. It explains what the tool does and when to use it, but omits details about return format, possible errors, and pagination, making it minimally complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, and the description only mentions 'by slug' without explaining what a slug is, its format, or constraints. No additional semantic value is added beyond the parameter name.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Fetch', the resource 'one public organic prompt', and the identifier 'by slug'. It also specifies the context 'when the user wants to inspect the exact AI-search question behind a result', distinguishing it from sibling tools like parse_search (searching prompts) or parse_get_brand (getting brand info).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a 'when' clause, indicating the appropriate context for use. While it does not explicitly list alternatives or when not to use it, the purpose is clear enough to avoid confusion with siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
parse_get_stats (Summarize Parse dataset scale), quality grade A
Explain the public Parse index scale and freshness: tracked brands, organic prompts, and citation observations.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
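Since the tool takes no parameters, a call is trivial; a sketch reusing the earlier client:

```typescript
// Reuses `client` from the earlier sketch; no arguments are required.
const stats = await client.callTool({ name: "parse_get_stats", arguments: {} });
console.log(stats.content); // tracked brands, organic prompts, citation observations
```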
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, and the description does not disclose behavioral traits such as read-only nature, permission requirements, or rate limits. It only states what the tool does, leaving the agent uninformed about side effects or constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that conveys the tool's purpose efficiently with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description adequately covers the tool's function. However, it does not hint at the output format (e.g., text summary vs. structured data), which could help the agent understand what to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so schema description coverage is 100%. The description adds no parameter details because none exist; a baseline score of 4 is appropriate, as no additional value is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: explaining the public Parse index scale and freshness. It specifies the exact data points (tracked brands, organic prompts, citation observations), making it distinct from siblings like parse_get_brand or parse_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for obtaining a summary overview but lacks explicit guidance on when to use this tool versus alternatives like parse_get_brand or parse_get_prompt. No when-not-to-use or alternative references are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
parse_search (Search Parse AI visibility data), quality grade A
Find brands, organic AI prompts, citation sources, and market niches for marketer research. Use this first when the user names a brand, category, source, or AI visibility question.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| query | Yes | ||
| types | No |
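A sketch reusing the earlier client; the "types" values are inferred from the description (brands, prompts, sources, niches), and the exact enum is unverified.

```typescript
// Reuses `client` from the earlier sketch. The "types" filter values are assumptions.
const hits = await client.callTool({
  name: "parse_search",
  arguments: { query: "payment processors", types: ["brands"], limit: 5 },
});
console.log(hits.content);
```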
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It only states that the tool finds data; there is no mention of side effects, permissions, rate limits, or output format. It lacks behavioral details beyond the basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: the first states the purpose, the second gives a usage guideline. The description is efficient and front-loaded with key information, with no redundant words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a three-parameter search tool with no annotations and no output schema, the description is too brief. It is missing parameter explanations and result behavior, making it incomplete for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description does not compensate: it only loosely mentions the types enum values (brands, prompts, sources, niches) and does not explain limit, query, or types explicitly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Find' and the specific resources: brands, prompts, sources, and niches. The title adds the context of AI visibility data, and 'Use this first' distinguishes it from its siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use this first when the user names a brand, category, source, or AI visibility question.' This provides clear context, but it does not name alternatives or explicitly rule them out.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search (Search), quality grade B
Compatibility alias for parse_search. Use for clients that expect a generic search tool.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
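A sketch reusing the earlier client; note that, per the parameter table above, the alias exposes only "query" and "limit".

```typescript
// Reuses `client` from the earlier sketch. Unlike parse_search, the alias
// takes no "types" filter, per the parameter table above.
const generic = await client.callTool({
  name: "search",
  arguments: { query: "stripe", limit: 10 },
});
console.log(generic.content);
```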
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. The description only states it is an alias but does not disclose any behavioral traits (e.g., read-only, rate limits, side effects). This is insufficient for a tool that likely performs a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no redundancy. The first sentence conveys the core purpose, and the second provides usage context. It is appropriately sized and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple nature of the tool (2 parameters, no output schema), the description is incomplete. It lacks explanation of parameter purpose, return values, or any behavioral context. As an alias, more information would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description makes no mention of parameters. It does not explain the meaning of 'query' or 'limit', nor does it provide usage tips. The description fails to add any value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is an alias for parse_search and intended for clients expecting a generic search tool. The purpose is evident, though it does not explicitly differentiate from sibling tools beyond referencing the primary tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides usage guidance: 'Use for clients that expect a generic search tool.' This tells when to use it, though it does not explicitly mention when not to use or direct alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
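For illustration, a minimal sketch of serving the manifest from a Node process; the port and inline handler are assumptions, and any static hosting that answers on your domain works just as well.

```typescript
import { createServer } from "node:http";

// The manifest from above; the email must match your Glama account.
const glamaManifest = {
  $schema: "https://glama.ai/mcp/schemas/connector.json",
  maintainers: [{ email: "your-email@example.com" }],
};

createServer((req, res) => {
  if (req.url === "/.well-known/glama.json") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(glamaManifest));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080); // assumed port; serve over HTTPS on your domain in production
```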
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama cannot connect to the server. This can happen for several reasons:

- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.