Server Details

Search the agentic web. 1,750+ sites scored by agent-readiness. 8 tools, including verify_mcp.

Status: Healthy
Transport: Streamable HTTP
Repository: unitedideas/nothumansearch-mcp
GitHub Stars: 0

Tool Descriptions (A)

Average 4.1/5 across 6 of 6 tools scored. Lowest: 3.4/5.

Server Coherence (A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: get_site_details retrieves detailed reports for a domain, get_stats provides index statistics, register_monitor sets up alerts, search_agents discovers agent-friendly services, submit_site adds new URLs to the index, and verify_mcp validates MCP server endpoints. The descriptions explicitly differentiate their functions, eliminating any ambiguity.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (e.g., get_site_details, register_monitor, search_agents, submit_site, verify_mcp), using snake_case throughout. The verbs accurately reflect the actions (get, register, search, submit, verify), making the set predictable and easy to understand.

Tool Count: 5/5

With 6 tools, the server is well-scoped for its purpose of indexing and evaluating agentic readiness of websites and APIs. Each tool serves a specific, necessary function—from querying and submitting data to monitoring and verification—without being overly sparse or bloated, fitting typical expectations for a specialized service.

Completeness: 5/5

The tool set provides comprehensive coverage for the domain of agentic readiness assessment: it supports querying detailed reports and statistics, discovering services, submitting new sites for indexing, setting up monitoring alerts, and verifying MCP server compliance. This covers the full lifecycle from discovery to maintenance, with no obvious gaps that would hinder agent workflows.

Available Tools (8 tools)
get_site_details: Get Site Agentic Readiness Report (A)

Get the full agentic readiness report for a specific domain: score, category, all 7 signal checks (llms.txt, ai-plugin.json, OpenAPI, structured API, MCP server, robots.txt AI rules, Schema.org), plus any cached llms.txt content and OpenAPI summary.

Parameters (JSON Schema)
  domain (required): Domain to look up (e.g. 'stripe.com'). Do not include scheme or path.
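Over the Streamable HTTP transport, invoking this tool means POSTing a JSON-RPC `tools/call` request. A minimal sketch of the payload, with an illustrative request id and domain:

```python
import json

# Illustrative JSON-RPC 2.0 payload for calling get_site_details over MCP.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_site_details",
        # Bare domain only -- the schema forbids scheme and path.
        "arguments": {"domain": "stripe.com"},
    },
}
print(json.dumps(request, indent=2))
```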
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool retrieves a report including cached content and summaries, suggesting it may fetch precomputed data. However, it lacks details on permissions, rate limits, data freshness, or error handling, which are important for a tool that likely queries external resources.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and efficiently lists all components in a single, dense sentence. Every part adds value without redundancy, making it highly concise and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (involves multiple signal checks and cached data) and lack of annotations or output schema, the description is moderately complete. It outlines what the report contains but does not cover behavioral aspects like response format, latency, or failure modes, which could be important for agentic use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single 'domain' parameter. The description adds value by specifying the domain is for a readiness report and listing what the report includes, but does not provide additional syntax or format details beyond the schema. With only one parameter, the baseline is high, but the description compensates slightly with context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get') and resource ('full agentic readiness report for a specific domain'), listing all components included (score, category, 7 signal checks, cached content). It distinguishes from sibling tools like 'get_stats' and 'search_agents' by focusing on a detailed domain report rather than aggregated statistics or agent searches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving comprehensive readiness data for a domain, but does not explicitly state when to use this tool versus alternatives like 'get_stats' (which might provide broader statistics) or 'search_agents' (which might find agents). No exclusions or prerequisites are mentioned, leaving usage context somewhat open-ended.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_stats: Get Index Stats (B)

Get current statistics for the Not Human Search index: total sites, average agentic score, top category.

Parameters (JSON Schema)
  No parameters.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states this is a 'Get' operation (implying read-only) and describes what statistics are returned, but doesn't mention important behavioral aspects like whether this requires authentication, has rate limits, returns real-time vs cached data, or what happens if the index is empty. For a tool with zero annotation coverage, this leaves significant gaps.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that immediately states the tool's purpose and enumerates the key statistics returned. Every word earns its place with no redundancy or unnecessary elaboration. The structure is front-loaded with the core functionality.


Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema, no annotations), the description provides adequate basic information about what statistics are returned. However, for a tool with zero annotation coverage, it should ideally mention more behavioral context (like whether this is a lightweight operation, authentication requirements, or data freshness). The absence of an output schema means the description should more fully describe the return format.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the schema already fully documents the parameter situation (none needed). The description appropriately doesn't discuss parameters since none exist, maintaining focus on what the tool returns rather than what it accepts. This meets the baseline expectation for zero-parameter tools.


Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get current statistics for the Not Human Search index' with specific metrics mentioned (total sites, average agentic score, top category). It distinguishes from siblings by focusing on index-level statistics rather than individual site details (get_site_details) or search functionality (search_agents). However, it doesn't explicitly contrast with siblings in the description text itself.


Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through the metrics it returns (index-level statistics), suggesting it should be used when needing overall index health/status information. However, there's no explicit guidance on when to use this tool versus alternatives like get_site_details or search_agents, nor any mention of prerequisites or limitations.


get_top_sites: Get Top Scored Sites (A)

Get the highest-scored agent-ready sites in the index, optionally filtered by category. Returns sites ranked by agentic readiness score (100 = perfect agent support). Use this to discover the most agent-ready services overall or in a specific domain like 'finance' or 'developer'.

Parameters (JSON Schema)
  limit (optional): Max results (default 10, max 50)
  category (optional): Filter by category (e.g. 'developer', 'finance', 'ai-tools'). Omit for all categories.
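Both parameters are optional, so the arguments object may be empty, partial, or full. A hedged sketch of building the call, omitting unset arguments rather than sending nulls (the clamp to 50 is an illustrative client-side guard, not server behavior):

```python
import json

def top_sites_call(limit=None, category=None, request_id=1):
    """Build a tools/call request for get_top_sites, omitting unset
    optional arguments rather than sending nulls."""
    args = {}
    if limit is not None:
        args["limit"] = min(limit, 50)  # schema caps limit at 50
    if category is not None:
        args["category"] = category
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "get_top_sites", "arguments": args},
    }

print(json.dumps(top_sites_call(limit=5, category="finance")))
```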
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it's a read operation (implied by 'get'), returns ranked results, and explains the scoring system (100 = perfect). However, it lacks details on rate limits, authentication needs, pagination, or error conditions, which would be valuable for a tool with no annotation coverage.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the core functionality with optional filtering, and the second explains usage context and scoring. Every sentence adds value without redundancy, making it front-loaded and appropriately sized for the tool's complexity.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is mostly complete. It covers purpose, usage, and scoring, but lacks output details (e.g., return format or structure) and behavioral constraints like rate limits. With no output schema, explaining return values would improve completeness, though the current description is adequate for basic use.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters (limit and category). The description adds marginal value by providing example categories ('finance', 'developer') and clarifying that omitting category returns all categories, but doesn't significantly enhance parameter meaning beyond what the schema provides.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('get', 'discover') and resources ('highest-scored agent-ready sites', 'most agent-ready services'), distinguishing it from siblings like get_site_details (specific site) or search_agents (agent search). It explains the ranking metric (agentic readiness score) and scope (overall or domain-specific).


Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('to discover the most agent-ready services overall or in a specific domain') and implies usage with optional category filtering. However, it doesn't explicitly state when NOT to use it or name specific alternatives among siblings (e.g., vs. search_agents or list_categories), missing full comparative guidance.


list_categories: List Index Categories (A)

List all categories in the Not Human Search index with site counts and average agentic scores. Use this to understand what kinds of agent-ready services exist before searching — e.g. discover that 'developer' has 400+ sites while 'health' has 50.

Parameters (JSON Schema)
  No parameters.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes what the tool returns (categories with site counts and average agentic scores) and its exploratory purpose, but lacks details on potential limitations like pagination, rate limits, or error conditions. The description adds value by explaining the tool's role in the workflow, but doesn't fully cover behavioral traits like performance or constraints.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by usage guidance and an example. Every sentence adds value: the first defines the tool, the second explains when to use it, and the third provides a concrete example. There is no wasted text, and the structure is logical and efficient.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple list operation with no parameters) and the absence of annotations and output schema, the description is reasonably complete. It explains what the tool does, when to use it, and what data to expect, though it could benefit from mentioning the format of the output (e.g., list of objects) or any default sorting. The lack of output schema means the description should ideally cover return values more explicitly, but it does adequately for a low-complexity tool.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, focusing instead on the tool's purpose and usage. This meets the baseline of 4 for tools with no parameters, as it avoids unnecessary parameter details.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('List all categories') and resources ('in the Not Human Search index'), and distinguishes it from siblings by focusing on categories rather than sites, agents, or other resources. It explicitly mentions the data returned ('site counts and average agentic scores'), making the purpose highly specific and differentiated.


Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Use this to understand what kinds of agent-ready services exist before searching') and includes a practical example ('e.g. discover that 'developer' has 400+ sites while 'health' has 50'). This clearly indicates it's for exploration and discovery prior to more targeted searches, distinguishing it from tools like search_agents or get_top_sites.


register_monitor: Monitor a Site's Agentic Readiness (A)

Register an email to get alerted when the indicated domain's agentic readiness score drops. Useful for agents tracking a dependency's agent-readiness health — e.g. an agent that relies on stripe.com's MCP surface wants to know the moment it regresses. Returns an unsubscribe URL. Multiple monitors per email allowed, one per domain.

Parameters (JSON Schema)
  email (required): Email address to receive alerts
  domain (required): Domain to monitor (no scheme, e.g. 'stripe.com')
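Both arguments are required, and the domain must be bare (no scheme). A sketch of a pre-flight check plus the call payload; the email regex is a rough illustration, not the server's actual validation, and the response's unsubscribe URL shape is not specified by this listing, so it is not modeled:

```python
import json
import re

# Illustrative values; a light client-side sanity check before sending.
email, domain = "alerts@example.com", "stripe.com"
assert re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email)
assert "://" not in domain and "/" not in domain  # no scheme, no path

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "register_monitor",
        "arguments": {"email": email, "domain": domain},
    },
}
print(json.dumps(request))
```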
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it registers for alerts, returns an unsubscribe URL, and allows multiple monitors per email (one per domain). However, it lacks details on alert frequency, conditions for triggering alerts, or error handling.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with every sentence adding value: the first states the purpose, the second provides usage context and an example, and the third covers behavioral details like return value and constraints. There is no wasted text.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no annotations, no output schema), the description is mostly complete. It covers purpose, usage, and key behaviors, but lacks details on output (beyond the unsubscribe URL mention) and potential errors or limitations, which could be important for agent invocation.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents both parameters (email and domain). The description adds minimal value beyond the schema by implying the domain format ('no scheme') and context for email use, but does not provide additional syntax or format details.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('register an email to get alerted') and resources ('domain's agentic readiness score'), distinguishing it from siblings like get_site_details or submit_site by focusing on monitoring and alerting rather than retrieval or submission.


Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('useful for agents tracking a dependency's agent-readiness health') and includes a concrete example ('e.g. an agent that relies on stripe.com's MCP surface'), but it does not explicitly state when not to use it or name specific alternatives among the sibling tools.


search_agents: Search the Agentic Web (A)

Search for websites, APIs, and services that AI agents can actually use. Results are ranked by agentic readiness score (0-100) based on llms.txt, OpenAPI specs, ai-plugin.json, structured APIs, and MCP server availability. Use this to discover payment APIs, job boards, data sources, or any web service your agent needs to call.

Parameters (JSON Schema)
  limit (optional): Max results (default 10, max 20)
  query (optional): Keyword query (e.g. 'payment API', 'weather data', 'job board')
  has_api (optional): Only return sites with a documented structured API
  category (optional): Filter by category
  min_score (optional): Minimum agentic readiness score, 0-100 (higher = more agent-ready)
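All five parameters are optional and combine as filters. A sketch of a filtered search call; the argument values are illustrative, not recommendations:

```python
import json

# A filtered search: keyword query plus constraints on the results.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_agents",
        "arguments": {
            "query": "payment API",
            "has_api": True,   # only sites with a documented structured API
            "min_score": 70,   # 0-100; higher = more agent-ready
            "limit": 10,       # default 10, max 20
        },
    },
}
print(json.dumps(request))
```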
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It explains that results are ranked by 'agentic readiness score (0-100)' based on specific criteria (llms.txt, OpenAPI specs, etc.), which adds useful context about ranking behavior. However, it does not disclose other behavioral traits like rate limits, authentication needs, or error handling, leaving gaps for a search tool.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: it starts with the core purpose, explains ranking criteria, and ends with usage examples. Every sentence earns its place without redundancy, making it efficient and easy to parse.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (search with multiple filters), no annotations, and no output schema, the description is fairly complete. It covers purpose, ranking methodology, and usage examples. However, it lacks details on output format (e.g., what fields are returned) and behavioral constraints like pagination or rate limits, which would enhance completeness for a search tool.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description adds some semantic context by mentioning 'agentic readiness score' which relates to the min_score parameter, but it does not provide additional meaning beyond what the schema specifies for parameters like query or category. Baseline 3 is appropriate as the schema does the heavy lifting.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search for websites, APIs, and services that AI agents can actually use.' It specifies the verb 'search' and resource 'websites, APIs, and services,' and distinguishes itself from sibling tools (get_site_details, get_stats) by focusing on discovery rather than detailed information retrieval or statistics.


Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: 'Use this to discover payment APIs, job boards, data sources, or any web service your agent needs to call.' It gives examples of use cases but does not explicitly state when not to use it or mention alternatives, such as using get_site_details for detailed information after discovery.


submit_site: Submit a Site for Indexing (A)

Submit a URL for NHS to crawl and score. Use when you discover an agent-first tool, API, or service that isn't in the index yet. NHS will fetch the site, check its 7 agentic signals (llms.txt, ai-plugin.json, OpenAPI, structured API, MCP server, robots.txt AI rules, Schema.org), compute a score, and add it to the index. The site becomes searchable within a few seconds if the crawl succeeds.

Parameters (JSON Schema)
  url (required): Full URL to submit (include scheme, e.g. 'https://example.com'). Homepage is best; NHS will check /.well-known/ paths, /robots.txt, /llms.txt, etc. relative to the site root.
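Unlike the domain-taking tools, this one requires a full URL with scheme. A hedged sketch of a client-side pre-flight check before submission (the helper name is hypothetical):

```python
from urllib.parse import urlparse

def submit_site_args(url: str) -> dict:
    """Validate that the URL carries a scheme and host before building
    the arguments object; NHS resolves /llms.txt, /robots.txt, and
    /.well-known/ paths against the site root, so a homepage works best."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError("submit_site needs a full URL, e.g. 'https://example.com'")
    return {"url": url}

print(submit_site_args("https://example.com"))
```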
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by explaining the crawl process, what signals NHS checks, the scoring computation, and the indexing outcome. It mentions the time frame ('within a few seconds') and success condition ('if the crawl succeeds'), though it doesn't detail potential failure modes or rate limits.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with two sentences that each earn their place: the first explains the core action and use case, the second details the process and outcome. There's no wasted text, and information is front-loaded appropriately.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (submission with crawling, scoring, and indexing) and no annotations or output schema, the description provides substantial context about the process and outcome. It explains what happens after submission but doesn't detail the scoring methodology or what 'agentic signals' specifically entail.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description doesn't add meaningful parameter information beyond what's already in the schema's description field, which already explains URL format requirements and best practices.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('submit for NHS to crawl and score') and identifies the resource ('URL'). It distinguishes from sibling tools like get_site_details, get_stats, and search_agents by focusing on submission rather than retrieval or search operations.


Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('when you discover an agent-first tool, API, or service that isn't in the index yet'). It provides clear context about the tool's purpose and distinguishes it from alternatives by focusing on initial submission rather than subsequent operations.


verify_mcp: Verify MCP Endpoint (A)

Actively probe any URL to check if it is a live, spec-compliant MCP server. Sends a JSON-RPC tools/list request and verifies a valid response. Use this before depending on a third-party MCP endpoint — manifests and documentation can claim MCP support without actually serving it. Returns {verified: true/false, endpoint, note}.

Parameters (JSON Schema)
  url (required): Full URL of the MCP endpoint to probe (include scheme, e.g. 'https://example.com/mcp').
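The probe itself can be sketched offline: build a JSON-RPC `tools/list` request and accept the endpoint only if a spec-shaped result comes back. The acceptance check below is an assumption about the minimum shape, not the server's exact logic:

```python
import json

# The request verify_mcp sends to the candidate endpoint.
probe = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

def looks_like_mcp(raw_body: str) -> bool:
    """Rough acceptance check: a valid tools/list response is JSON
    carrying a result object with a 'tools' array. Error payloads and
    non-JSON bodies (e.g. an HTML 404 page) fail."""
    try:
        reply = json.loads(raw_body)
    except json.JSONDecodeError:
        return False
    if not isinstance(reply, dict):
        return False
    result = reply.get("result")
    return isinstance(result, dict) and isinstance(result.get("tools"), list)

# Illustrative bodies, not real server output:
good = json.dumps({"jsonrpc": "2.0", "id": 1, "result": {"tools": []}})
print(looks_like_mcp(good), looks_like_mcp("<html>404</html>"))
```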
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by explaining the tool's behavior: it actively probes via a JSON-RPC tools/list request and returns a structured result with verification status. It doesn't mention error handling or rate limits, but covers the core operation adequately.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by implementation details and usage context, all in three efficient sentences with zero wasted words, making it highly concise and well-structured.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (probing external endpoints), no annotations, and no output schema, the description does a good job by explaining the verification process and return format. It could mention potential errors or timeouts, but it's largely complete for its purpose.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the 'url' parameter fully. The description adds no additional parameter details beyond what the schema provides, meeting the baseline score of 3 for high schema coverage.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('probe', 'check', 'verify') and resource ('URL', 'MCP server'), distinguishing it from sibling tools like get_site_details or register_monitor by focusing on endpoint verification rather than data retrieval or registration.


Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly states when to use this tool ('before depending on a third-party MCP endpoint') and provides context about why ('manifests and documentation can claim MCP support without actually serving it'), offering clear guidance on its intended scenario.

