neo-x402-mcp

Server Details

8-tool AI web intelligence suite: search, scrape, screenshot, SEO, docs, crypto, code.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions — Grade: B

Average 3/5 across 8 of 8 tools scored.

Server Coherence — Grade: A

Disambiguation: 4/5

Most tools have distinct purposes, but there is some potential overlap between pdf_analyze and convert_document for PDF handling, and between scrape and seo_analyze for web content analysis. However, their descriptions clarify different focuses, so confusion is minimal.

Naming Consistency: 3/5

The naming is mixed: all multi-word tools use snake_case, but word order varies between verb_noun (convert_document, code_review) and noun-first (crypto_price, pdf_analyze, seo_analyze), with bare single words elsewhere (scrape, screenshot). There is no consistent verb_noun pattern, but the names are generally readable and descriptive.

Tool Count: 4/5

With 8 tools, the count is reasonable for a utility-focused server. It covers a broad range of functions without being overwhelming, though it might feel slightly scattered across different domains like web, documents, and crypto.

Completeness: 3/5

The server lacks a clear unified domain, making completeness hard to assess. It offers various utilities but has gaps in coverage, such as no document editing or advanced crypto features beyond price checking, which limits workflow integration.

Available Tools

8 tools
code_review — Grade: C

AI-powered code review. Analyzes code for bugs, security issues, and style problems.

Parameters (JSON Schema):
  code — required
  language — optional, default: python

Output Schema:
  result — required
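Based on the parameter table above, a minimal MCP `tools/call` request for this tool might look like the following sketch. The envelope follows the JSON-RPC 2.0 convention MCP uses; the code sample and request id are illustrative, not from the server's documentation.

```python
import json

# Illustrative tools/call payload for code_review. Parameter names
# ("code", "language") come from the schema; the snippet is made up.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "code_review",
        "arguments": {
            "code": "def div(a, b):\n    return a / b",
            "language": "python",  # optional; schema default is "python"
        },
    },
}
print(json.dumps(request, indent=2))
```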
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool is 'AI-powered' and analyzes for specific issues, but doesn't describe response format, rate limits, authentication needs, or whether it's read-only or mutative. For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: a single, efficient sentence that states the core purpose without unnecessary details. Every word earns its place, making it easy to scan and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no annotations, but has an output schema), the description is minimally adequate. The output schema existence means return values needn't be explained, but the description lacks details on behavioral traits and parameter semantics, leaving it incomplete for effective use without additional context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning parameters are undocumented in the schema. The description doesn't add any meaning beyond what the schema provides—it doesn't explain what 'code' or 'language' parameters represent, their expected formats, or constraints. With 2 parameters and low coverage, the description fails to compensate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'AI-powered code review' with specific functions 'Analyzes code for bugs, security issues, and style problems.' It uses a specific verb ('analyzes') and resource ('code'), though it doesn't explicitly distinguish from sibling tools like 'pdf_analyze' or 'seo_analyze' which might share analytical functions but on different resources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, constraints, or compare it to sibling tools like 'pdf_analyze' or 'seo_analyze' that might be used for different types of analysis. The usage is implied from the purpose but lacks explicit context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_document — Grade: C

Convert PDF, DOCX, PPTX, XLSX files to Markdown or Text. Supports URLs or base64 uploads.

Parameters (JSON Schema):
  url — optional
  file_base64 — optional
  source_format — optional
  target_format — optional, default: markdown

Output Schema:
  result — required
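All four parameters are optional in the schema, so a caller presumably supplies exactly one of `url` or `file_base64`; that mutual exclusivity is an assumption, not documented. A sketch using URL input:

```python
import json

# Illustrative tools/call payload for convert_document, assuming URL input.
# Whether "url" and "file_base64" are mutually exclusive is undocumented;
# supplying only one of them here is an assumption.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "convert_document",
        "arguments": {
            "url": "https://example.com/report.pdf",
            "target_format": "markdown",  # optional; schema default
        },
    },
}
print(json.dumps(request, indent=2))
```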
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions support for URLs or base64 uploads, but lacks details on permissions, rate limits, error handling, or what the conversion process entails (e.g., formatting preservation). This is inadequate for a tool with mutation implications.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—two sentences that efficiently cover key functionality without wasted words. It's front-loaded with the core purpose, making it easy to scan and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (file conversion with 4 parameters) and no annotations, the description is incomplete. However, the presence of an output schema mitigates the need to explain return values. The description covers basic functionality but lacks details on behavior and parameters, leaving gaps for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It implies parameters for source formats and target formats, but doesn't explain the four parameters (url, file_base64, source_format, target_format) or their interactions (e.g., mutual exclusivity). The description adds minimal value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: converting specific file formats (PDF, DOCX, PPTX, XLSX) to Markdown or Text. It specifies the action (convert) and resources (file types), but doesn't explicitly differentiate from sibling tools like pdf_analyze, which might have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. It doesn't mention sibling tools like pdf_analyze or code_review that might handle similar content, nor does it specify prerequisites or exclusions for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

crypto_price — Grade: B

Get real-time cryptocurrency price, market cap, 24h change, and volume.

Parameters (JSON Schema):
  symbol — optional, default: BTC

Output Schema:
  result — required
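A minimal call sketch follows. Since `symbol` defaults to BTC, an empty arguments object would presumably also be valid; a ticker symbol is assumed to be the expected format, as the description never says how to identify the cryptocurrency.

```python
import json

# Illustrative tools/call payload for crypto_price. A ticker symbol
# ("ETH") is assumed; the schema only names the parameter and its
# default ("BTC"), not its format.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "crypto_price",
        "arguments": {
            "symbol": "ETH",  # optional; schema default is "BTC"
        },
    },
}
print(json.dumps(request, indent=2))
```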
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden but only mentions what data is returned, not behavioral traits like rate limits, authentication needs, data freshness guarantees, or error conditions. It lacks critical operational context for a real-time data tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action and lists key data points without unnecessary words. Every part earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (real-time data fetch), no annotations, and an output schema present, the description is minimally adequate. It covers the purpose but lacks behavioral details and parameter guidance, leaving gaps in operational understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the description doesn't mention the 'symbol' parameter at all. It implies cryptocurrency data but doesn't specify how to identify which cryptocurrency. Baseline is 3 since the schema covers the single parameter, but the description adds no value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get real-time cryptocurrency price') and resources ('cryptocurrency'), and lists the data points returned. It doesn't distinguish from siblings since they're unrelated tools, but the purpose is unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description states what it does but offers no context about prerequisites, limitations, or comparison to other tools for similar data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pdf_analyze — Grade: B

Download and analyze a PDF file. Returns metadata, summary, and key points.

Parameters (JSON Schema):
  url — required
  summary — optional

Output Schema:
  result — required
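A call sketch under stated assumptions: the `summary` parameter's type and meaning are undocumented, so a boolean toggle is assumed here; only `url` is actually required by the schema.

```python
import json

# Illustrative tools/call payload for pdf_analyze. The boolean value
# for "summary" is an assumption; the schema does not document its type.
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "pdf_analyze",
        "arguments": {
            "url": "https://example.com/whitepaper.pdf",  # required
            "summary": True,  # assumed boolean; undocumented
        },
    },
}
print(json.dumps(request, indent=2))
```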
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions downloading and analyzing, implying network operations and processing, but lacks details on permissions, rate limits, error handling, or what 'analyze' entails beyond metadata, summary, and key points. For a tool with potential complexity (download + analysis), this is insufficient, though it does hint at output types.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—two sentences that directly state the action and output. Every word earns its place with no redundancy or fluff. It's front-loaded with the core purpose, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (download + analysis), no annotations, and an output schema (which likely covers return values), the description is minimally complete. It states what the tool does and output types, but misses behavioral details and usage context. The output schema reduces the need to explain returns, but gaps in transparency and guidelines keep it at an adequate level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the schema provides no parameter details. The description adds minimal semantics: it implies 'url' is for downloading a PDF and mentions 'summary' as part of the analysis output, but doesn't explain parameter roles, defaults, or constraints. With 2 parameters and low coverage, this is baseline adequate but lacks compensation for the schema gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Download and analyze a PDF file' specifies the verb (download and analyze) and resource (PDF file). It distinguishes from siblings like 'convert_document' or 'seo_analyze' by focusing on PDF analysis. However, it doesn't explicitly differentiate from 'scrape' or 'web_search' in terms of PDF-specific handling, keeping it at 4 rather than 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a valid URL), exclusions (e.g., non-PDF files), or comparisons to sibling tools like 'scrape' for general web content or 'convert_document' for format conversion. This lack of contextual direction results in a minimal score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrape — Grade: C

Anti-bot web scraping with Cloudflare bypass. Returns page content as markdown, HTML, or JSON.

Parameters (JSON Schema):
  url — required
  selector — optional
  return_format — optional, default: markdown

Output Schema:
  result — required
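A call sketch follows. Treating `selector` as a CSS selector is an assumption (the schema leaves it undescribed), and the `return_format` values are inferred from the description's "markdown, HTML, or JSON".

```python
import json

# Illustrative tools/call payload for scrape. "selector" is assumed to
# be a CSS selector; the schema does not say.
request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
        "name": "scrape",
        "arguments": {
            "url": "https://example.com/blog",  # required
            "selector": "article",  # assumed CSS selector; optional
            "return_format": "markdown",  # optional; schema default
        },
    },
}
print(json.dumps(request, indent=2))
```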
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions 'Anti-bot web scraping with Cloudflare bypass' which suggests it handles protected sites, but doesn't disclose rate limits, authentication needs, potential costs, error conditions, or what happens with malformed URLs/selectors. For a web scraping tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with just two clauses that efficiently convey the core functionality: anti-bot scraping capability and output formats. Every word earns its place with zero wasted text, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (which should document return values), the description doesn't need to explain return formats in detail. However, for a web scraping tool with 3 parameters and no annotations, the description should provide more context about authentication, rate limits, and error handling. The anti-bot/Cloudflare bypass mention is useful but insufficient for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage for all 3 parameters, the description must compensate but provides minimal parameter information. It mentions 'Returns page content as markdown, HTML, or JSON' which hints at the 'return_format' parameter, but doesn't explain the 'url' requirement, 'selector' purpose, or format options. The description adds some value but doesn't adequately compensate for the complete lack of schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Anti-bot web scraping with Cloudflare bypass' and specifies it 'Returns page content as markdown, HTML, or JSON.' This identifies the verb (scrape with bypass), resource (web pages), and output formats. However, it doesn't explicitly differentiate from sibling tools like 'web_search' or 'screenshot' which might have overlapping web-related functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'web_search' or 'screenshot'. It mentions the anti-bot/Cloudflare bypass capability which implies usage for protected sites, but doesn't state when NOT to use it or compare it to other web-related tools in the server.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

screenshot — Grade: B

Take a full-page screenshot of any URL using headless Chromium. Returns base64 PNG.

Parameters (JSON Schema):
  url — required
  width — optional
  height — optional
  full_page — optional

Output Schema:
  result — required
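A call sketch under stated assumptions: `width` and `height` are assumed to be viewport dimensions in pixels, and `full_page` is assumed to be a boolean, since the schema documents none of this.

```python
import json

# Illustrative tools/call payload for screenshot. Pixel dimensions and
# a boolean full_page flag are assumptions; the schema documents neither.
request = {
    "jsonrpc": "2.0",
    "id": 6,
    "method": "tools/call",
    "params": {
        "name": "screenshot",
        "arguments": {
            "url": "https://example.com",  # required
            "width": 1280,   # assumed viewport width in pixels
            "height": 800,   # assumed viewport height in pixels
            "full_page": True,  # assumed boolean
        },
    },
}
print(json.dumps(request, indent=2))
```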
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool uses headless Chromium and returns base64 PNG, but lacks details on performance (e.g., timeouts, rate limits), error handling, or prerequisites (e.g., URL accessibility). This is inadequate for a tool with potential execution complexity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core functionality ('Take a full-page screenshot of any URL') and adds essential technical details ('using headless Chromium', 'Returns base64 PNG') without any wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no annotations) and the presence of an output schema (which covers return values), the description is minimally complete. It states the purpose and output format but lacks usage guidelines and sufficient behavioral context, making it adequate but with clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate, but it only implies the 'url' parameter without explaining others. It mentions 'full-page' but doesn't clarify the 'full_page' boolean or default dimensions. The description adds minimal value beyond the schema, resulting in a baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Take a full-page screenshot'), the resource ('any URL'), the method ('using headless Chromium'), and the output format ('base64 PNG'). It distinguishes itself from siblings like 'scrape' or 'web_search' by focusing on visual capture rather than data extraction or search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention scenarios where screenshotting is preferred over scraping for text or using other tools like 'pdf_analyze' for document analysis, leaving the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

seo_analyze — Grade: B

Analyze a webpage for SEO best practices. Returns score, issues, warnings, and recommendations.

Parameters (JSON Schema):
  url — required

Output Schema:
  result — required
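With a single required parameter, the call shape is simple; a sketch (the URL value is illustrative):

```python
import json

# Illustrative tools/call payload for seo_analyze; "url" is the only
# parameter, and it is required.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "seo_analyze",
        "arguments": {
            "url": "https://example.com",  # required
        },
    },
}
print(json.dumps(request, indent=2))
```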
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the return types ('score, issues, warnings, and recommendations'), which is helpful, but lacks details on permissions, rate limits, error handling, or whether the analysis is destructive. For a tool with no annotations, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences that efficiently state the purpose and output. It's front-loaded with the main action and wastes no words, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (which should cover return values), 1 parameter, and no annotations, the description is reasonably complete. It specifies the analysis domain and output types, but could improve by adding more behavioral context or usage guidelines to compensate for the lack of annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 1 parameter with 0% description coverage, so the schema provides no semantic details. The description doesn't add any parameter-specific information beyond implying the 'url' is for a webpage. It doesn't explain format constraints or usage, so it partially compensates but not fully.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Analyze a webpage for SEO best practices.' It specifies the verb ('analyze'), resource ('webpage'), and domain ('SEO best practices'). However, it doesn't explicitly differentiate from siblings like 'pdf_analyze' or 'web_search', which might also involve web content analysis, so it's not a perfect 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, exclusions, or compare to siblings such as 'scrape' or 'web_search' for similar web-related tasks. This leaves the agent without context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
