Not Human Search
Server Details
Search the agentic web. 4,100+ sites and 11 tools, including check_url and verify_mcp for probe-before-use workflows.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: unitedideas/nothumansearch-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 6 of 6 tools scored. Lowest: 3.4/5.
Each tool has a clearly distinct purpose with no overlap: get_site_details retrieves detailed reports for a domain, get_stats provides index statistics, register_monitor sets up alerts, search_agents discovers agent-friendly services, submit_site adds new URLs to the index, and verify_mcp validates MCP server endpoints. The descriptions explicitly differentiate their functions, eliminating any ambiguity.
All tool names follow a consistent verb_noun pattern (e.g., get_site_details, register_monitor, search_agents, submit_site, verify_mcp), using snake_case throughout. The verbs accurately reflect the actions (get, register, search, submit, verify), making the set predictable and easy to understand.
With 6 tools, the server is well-scoped for its purpose of indexing and evaluating agentic readiness of websites and APIs. Each tool serves a specific, necessary function—from querying and submitting data to monitoring and verification—without being overly sparse or bloated, fitting typical expectations for a specialized service.
The tool set provides comprehensive coverage for the domain of agentic readiness assessment: it supports querying detailed reports and statistics, discovering services, submitting new sites for indexing, setting up monitoring alerts, and verifying MCP server compliance. This covers the full lifecycle from discovery to maintenance, with no obvious gaps that would hinder agent workflows.
Available Tools
11 tools

check_url: Check Any URL's Agentic Readiness (Grade: A)
On-demand agentic-readiness check for any URL. Runs the NHS 7-signal crawler live (llms.txt, ai-plugin.json, OpenAPI, structured API, MCP server, robots.txt AI rules, Schema.org) and returns a score 0-100 with per-signal breakdown. Use before calling an unfamiliar API to confirm it's agent-usable. Re-runnable without the submissions-table side-effect of submit_site — ideal for verify-before-use workflows.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Full URL or bare domain (e.g. 'stripe.com' or 'https://stripe.com'). Homepage works best — NHS probes the /.well-known/ paths, /robots.txt, /llms.txt relative to the site root. | |
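As an illustration, a minimal MCP tools/call payload for check_url might look like the sketch below. Only the tool name and the url argument come from the documentation above; the request id and any transport wiring are placeholders, not documented behavior.

```python
import json

# Hypothetical JSON-RPC 2.0 payload for invoking check_url over MCP.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_url",
        # A bare domain is accepted; NHS probes /.well-known/,
        # /robots.txt, and /llms.txt relative to the site root.
        "arguments": {"url": "stripe.com"},
    },
}

print(json.dumps(request, indent=2))
```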
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it's 'on-demand', 'runs live', returns specific outputs ('score 0-100 with per-signal breakdown'), and has no side-effects unlike submit_site. It could mention rate limits or authentication needs but covers most operational aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by operational details and usage guidance. Every sentence earns its place by adding distinct value: first defines the tool, second explains what it does, third provides usage context, and fourth distinguishes from alternatives. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (live crawling, scoring), no annotations, and no output schema, the description does well by explaining the 7 signals checked, the scoring range, and the verification workflow. It could specify the exact format of the 'per-signal breakdown' output to be fully complete, but covers most essential context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds minimal parameter semantics beyond the schema's detailed url description, mentioning 'Homepage works best' and what NHS probes, but doesn't significantly enhance understanding of the single parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('check', 'runs', 'returns') and resources ('URL's agentic-readiness', 'NHS 7-signal crawler'). It distinguishes from sibling submit_site by noting it's 're-runnable without the submissions-table side-effect', making the distinction explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Use before calling an unfamiliar API to confirm it's agent-usable') and when to use alternatives ('ideal for verify-before-use workflows' vs submit_site's side-effects). It also specifies the ideal context ('Homepage works best') and target use case.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_mcp_servers: Find MCP Servers (Grade: A)
List sites in the index that expose a live MCP server, ranked by agentic readiness. Use this when your agent needs to discover callable MCP endpoints for a domain ('payments', 'jobs', 'search') or overall. Pairs naturally with verify_mcp for a probe-before-use workflow.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default 10, max 20) | |
| query | No | Optional keyword to narrow results (e.g. 'payments', 'jobs', 'weather') | |
| category | No | Filter by category (e.g. 'developer', 'finance', 'ai-tools'). Omit for all categories. | |
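A sketch of assembling the optional arguments on the client side, clamping limit to the documented maximum of 20 and omitting unset filters entirely (the helper name is illustrative, not part of the server's API):

```python
def build_find_mcp_servers_args(query=None, category=None, limit=10):
    """Assemble arguments for find_mcp_servers per the table above.

    limit defaults to 10 and is capped at the documented max of 20;
    query and category are omitted when not provided.
    """
    args = {"limit": max(1, min(limit, 20))}
    if query is not None:
        args["query"] = query
    if category is not None:
        args["category"] = category
    return args

print(build_find_mcp_servers_args(query="payments", limit=50))
# limit is clamped to 20; category is omitted
```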
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it lists sites, ranks them by 'agentic readiness', and is used for discovery of live MCP servers. However, it lacks details on potential side effects, error handling, or performance characteristics like rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by usage guidance and workflow pairing. Every sentence adds value without redundancy, making it efficient and well-structured for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is largely complete. It covers purpose, usage, and workflow integration. However, it could benefit from more detail on output format or ranking criteria to fully compensate for the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all three parameters. The description adds minimal value beyond the schema, mentioning 'domain' as an example for the query parameter but not providing additional context or constraints. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('List sites', 'ranked by agentic readiness') and resources ('sites in the index that expose a live MCP server'). It distinguishes from siblings by specifying its unique focus on MCP server discovery rather than general site details, stats, or registration.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('when your agent needs to discover callable MCP endpoints for a domain or overall') and provides a clear alternative ('Pairs naturally with verify_mcp for a probe-before-use workflow'), distinguishing it from other tools like get_site_details or search_agents.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_site_details: Get Site Agentic Readiness Report (Grade: A)
Get the full agentic readiness report for a specific domain: score, category, all 7 signal checks (llms.txt, ai-plugin.json, OpenAPI, structured API, MCP server, robots.txt AI rules, Schema.org), plus any cached llms.txt content and OpenAPI summary.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Domain to look up (e.g. 'stripe.com'). Do not include scheme or path. | |
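Since the schema requires a bare domain with no scheme or path, an agent holding a full URL might normalize it first. A minimal sketch using the standard library (the function name is an assumption):

```python
from urllib.parse import urlparse

def to_bare_domain(url_or_domain: str) -> str:
    """Reduce a URL to the bare domain get_site_details expects
    (no scheme, no path, no port)."""
    # urlparse only populates netloc when a scheme is present,
    # so prepend one for bare inputs like 'stripe.com'.
    if "//" not in url_or_domain:
        url_or_domain = "https://" + url_or_domain
    host = urlparse(url_or_domain).netloc
    return host.split(":")[0].lower()

print(to_bare_domain("https://stripe.com/docs/api"))  # stripe.com
```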
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool retrieves a report including cached content and summaries, suggesting it may fetch precomputed data. However, it lacks details on permissions, rate limits, data freshness, or error handling, which are important for a tool that likely queries external resources.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and efficiently lists all components in a single, dense sentence. Every part adds value without redundancy, making it highly concise and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (involves multiple signal checks and cached data) and lack of annotations or output schema, the description is moderately complete. It outlines what the report contains but does not cover behavioral aspects like response format, latency, or failure modes, which could be important for agentic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single 'domain' parameter. The description adds value by specifying the domain is for a readiness report and listing what the report includes, but does not provide additional syntax or format details beyond the schema. With only one parameter, the baseline is high, but the description compensates slightly with context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get') and resource ('full agentic readiness report for a specific domain'), listing all components included (score, category, 7 signal checks, cached content). It distinguishes from sibling tools like 'get_stats' and 'search_agents' by focusing on a detailed domain report rather than aggregated statistics or agent searches.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving comprehensive readiness data for a domain, but does not explicitly state when to use this tool versus alternatives like 'get_stats' (which might provide broader statistics) or 'search_agents' (which might find agents). No exclusions or prerequisites are mentioned, leaving usage context somewhat open-ended.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_stats: Get Index Stats (Grade: B)
Current Not Human Search index stats: total sites, average agentic score, top category, sites added in the last 7 days, count of sites exposing an MCP server, and count scoring a perfect 100/100.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states this is a 'Get' operation (implying read-only) and describes what statistics are returned, but doesn't mention important behavioral aspects like whether this requires authentication, has rate limits, returns real-time vs cached data, or what happens if the index is empty. For a tool with zero annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that immediately states the tool's purpose and enumerates the key statistics returned. Every word earns its place with no redundancy or unnecessary elaboration. The structure is front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema, no annotations), the description provides adequate basic information about what statistics are returned. However, for a tool with zero annotation coverage, it should ideally mention more behavioral context (like whether this is a lightweight operation, authentication requirements, or data freshness). The absence of an output schema means the description should more fully describe the return format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema already fully documents the parameter situation (none needed). The description appropriately doesn't discuss parameters since none exist, maintaining focus on what the tool returns rather than what it accepts. This meets the baseline expectation for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get current statistics for the Not Human Search index' with specific metrics mentioned (total sites, average agentic score, top category). It distinguishes from siblings by focusing on index-level statistics rather than individual site details (get_site_details) or search functionality (search_agents). However, it doesn't explicitly contrast with siblings in the description text itself.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through the metrics it returns (index-level statistics), suggesting it should be used when needing overall index health/status information. However, there's no explicit guidance on when to use this tool versus alternatives like get_site_details or search_agents, nor any mention of prerequisites or limitations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_top_sites: Get Top Scored Sites (Grade: A)
Get the highest-scored agent-ready sites in the index, optionally filtered by category. Returns sites ranked by agentic readiness score (100 = perfect agent support). Use this to discover the most agent-ready services overall or in a specific domain like 'finance' or 'developer'.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default 10, max 50) | |
| category | No | Filter by category (e.g. 'developer', 'finance', 'ai-tools'). Omit for all categories. | |
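Because no output schema is published, an agent has to assume a result shape when post-processing. The records below (domain, score, category) are an assumption for illustration only; the scoring scale (0-100, 100 = perfect agent support) is from the description above.

```python
# Hypothetical get_top_sites response records; the actual output
# schema is not documented, so this shape is an assumption.
results = [
    {"domain": "stripe.com", "score": 100, "category": "finance"},
    {"domain": "example.dev", "score": 87, "category": "developer"},
    {"domain": "acme.io", "score": 62, "category": "developer"},
]

# Keep only sites with strong agent support (score >= 80 out of 100).
strong = [r["domain"] for r in results if r["score"] >= 80]
print(strong)
```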
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it's a read operation (implied by 'get'), returns ranked results, and explains the scoring system (100 = perfect). However, it lacks details on rate limits, authentication needs, pagination, or error conditions, which would be valuable for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the core functionality with optional filtering, and the second explains usage context and scoring. Every sentence adds value without redundancy, making it front-loaded and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is mostly complete. It covers purpose, usage, and scoring, but lacks output details (e.g., return format or structure) and behavioral constraints like rate limits. With no output schema, explaining return values would improve completeness, though the current description is adequate for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters (limit and category). The description adds marginal value by providing example categories ('finance', 'developer') and clarifying that omitting category returns all categories, but doesn't significantly enhance parameter meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('get', 'discover') and resources ('highest-scored agent-ready sites', 'most agent-ready services'), distinguishing it from siblings like get_site_details (specific site) or search_agents (agent search). It explains the ranking metric (agentic readiness score) and scope (overall or domain-specific).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('to discover the most agent-ready services overall or in a specific domain') and implies usage with optional category filtering. However, it doesn't explicitly state when NOT to use it or name specific alternatives among siblings (e.g., vs. search_agents or list_categories), missing full comparative guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_categories: List Index Categories (Grade: A)
List all categories in the Not Human Search index with site counts and average agentic scores. Use this to understand what kinds of agent-ready services exist before searching — counts are live, so the distribution shifts as the index grows.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes what the tool returns (categories with site counts and average agentic scores) and its exploratory purpose, but lacks details on potential limitations like pagination, rate limits, or error conditions. The description adds value by explaining the tool's role in the workflow, but doesn't fully cover behavioral traits like performance or constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by usage guidance and an example. Every sentence adds value: the first defines the tool, the second explains when to use it, and the third provides a concrete example. There is no wasted text, and the structure is logical and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (simple list operation with no parameters) and the absence of annotations and output schema, the description is reasonably complete. It explains what the tool does, when to use it, and what data to expect, though it could benefit from mentioning the format of the output (e.g., list of objects) or any default sorting. The lack of output schema means the description should ideally cover return values more explicitly, but it does adequately for a low-complexity tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, focusing instead on the tool's purpose and usage. This meets the baseline of 4 for tools with no parameters, as it avoids unnecessary parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('List all categories') and resources ('in the Not Human Search index'), and distinguishes it from siblings by focusing on categories rather than sites, agents, or other resources. It explicitly mentions the data returned ('site counts and average agentic scores'), making the purpose highly specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Use this to understand what kinds of agent-ready services exist before searching') and includes a practical example ('e.g. discover that 'developer' has 400+ sites while 'health' has 50'). This clearly indicates it's for exploration and discovery prior to more targeted searches, distinguishing it from tools like search_agents or get_top_sites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_additions: Recently Indexed Agent-First Sites (Grade: A)
List agent-ready sites newly added to the Not Human Search index, sorted newest first. Use this to discover what's just landed on the agentic web — new MCP servers, fresh llms.txt adopters, new OpenAPI publishers. Good for weekly agent digests or tracking ecosystem momentum.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | Look back window in days (default 7, max 90) | |
| limit | No | Max results (default 10, max 50) | |
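For the weekly-digest use case the description suggests, the look-back window can be derived from a week count while respecting the documented caps (days max 90, limit max 50). The helper below is a sketch, not part of the server's API:

```python
def weekly_digest_args(weeks: int = 1, limit: int = 10) -> dict:
    """Arguments for recent_additions covering the last N weeks.

    days defaults to 7 and is capped at the documented max of 90;
    limit is capped at 50.
    """
    return {
        "days": min(weeks * 7, 90),
        "limit": min(limit, 50),
    }

print(weekly_digest_args())          # {'days': 7, 'limit': 10}
print(weekly_digest_args(weeks=20))  # days capped at 90
```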
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses key behavioral traits: the tool lists sites (read operation), sorts by newest first, and focuses on 'agent-ready' sites. However, it doesn't mention potential limitations like rate limits, authentication needs, or what 'agent-ready' specifically means beyond the examples given.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose in the first sentence. Every sentence earns its place: first states what it does, second provides concrete examples of what it discovers, third gives specific use cases. Zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is reasonably complete. It explains the tool's purpose, sorting behavior, and use cases well. However, without annotations or output schema, it could benefit from more detail about return format or what constitutes 'agent-ready' beyond the examples.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema (days for look-back window, limit for max results). Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('list', 'discover', 'tracking') and resources ('agent-ready sites', 'Not Human Search index'). It distinguishes from siblings by focusing on 'newly added' sites sorted by recency, unlike find_mcp_servers (general search) or get_top_sites (popularity-based).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('weekly agent digests', 'tracking ecosystem momentum'), but doesn't explicitly state when NOT to use it or name specific alternatives among siblings. It implies usage for discovering recent additions rather than comprehensive searches.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_monitor: Monitor a Site's Agentic Readiness (Grade: A)
Register an email to get alerted when the indicated domain's agentic readiness score drops. Useful for agents tracking a dependency's agent-readiness health — e.g. an agent that relies on stripe.com's MCP surface wants to know the moment it regresses. Returns an unsubscribe URL. Multiple monitors per email allowed, one per domain.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Email address to receive alert | |
| domain | Yes | Domain to monitor (no scheme, e.g. 'stripe.com') | |
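Since registering a monitor has a side effect (it creates a subscription), an agent may want a cheap pre-flight check on both arguments before calling. A minimal sketch (the function is illustrative; the email regex is a deliberately loose sanity check, not full RFC 5322 validation):

```python
import re

def validate_monitor_args(email: str, domain: str) -> dict:
    """Pre-flight check before calling register_monitor.

    The domain must be bare (no scheme or path), per the table above.
    """
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        raise ValueError(f"implausible email: {email!r}")
    if "://" in domain or "/" in domain:
        raise ValueError(f"domain must be bare, e.g. 'stripe.com': {domain!r}")
    return {"email": email, "domain": domain}

print(validate_monitor_args("ops@example.com", "stripe.com"))
```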
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it registers for alerts, returns an unsubscribe URL, and allows multiple monitors per email (one per domain). However, it lacks details on alert frequency, conditions for triggering alerts, or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with every sentence adding value: the first states the purpose, the second provides usage context and an example, and the third covers behavioral details like return value and constraints. There is no wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no annotations, no output schema), the description is mostly complete. It covers purpose, usage, and key behaviors, but lacks details on output (beyond the unsubscribe URL mention) and potential errors or limitations, which could be important for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents both parameters (email and domain). The description adds minimal value beyond the schema by implying the domain format ('no scheme') and context for email use, but does not provide additional syntax or format details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('register an email to get alerted') and resources ('domain's agentic readiness score'), distinguishing it from siblings like get_site_details or submit_site by focusing on monitoring and alerting rather than retrieval or submission.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('useful for agents tracking a dependency's agent-readiness health') and includes a concrete example ('e.g. an agent that relies on stripe.com's MCP surface'), but it does not explicitly state when not to use it or name specific alternatives among the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_agents: Search the Agentic Web
Search for websites, APIs, and services that AI agents can actually use. Results are ranked by agentic readiness score (0-100) based on llms.txt, OpenAPI specs, ai-plugin.json, structured APIs, and MCP server availability. Use this to discover payment APIs, job boards, data sources, or any web service your agent needs to call.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (max 20) | 10 |
| query | No | Keyword query (e.g. 'payment API', 'weather data', 'job board') | |
| has_api | No | Only return sites with a documented structured API | |
| has_mcp | No | Only return sites that expose an MCP server | |
| category | No | Filter by category | |
| min_score | No | Minimum agentic readiness score 0-100 (higher = more agent-ready) | |
| has_openapi | No | Only return sites with a published OpenAPI / Swagger spec | |
| has_llms_txt | No | Only return sites that publish an llms.txt file (LLM-first site summary) |
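A call to search_agents combines the query with any subset of the optional filters. The sketch below builds a hypothetical JSON-RPC tools/call envelope, assuming standard MCP call semantics; argument names mirror the parameter table above, and omitted filters are simply left out:

```python
def search_agents_call(query, min_score=None, has_mcp=None, limit=10, request_id=1):
    # Assemble the arguments object; only include filters the caller set.
    args = {"query": query, "limit": limit}
    if min_score is not None:
        args["min_score"] = min_score
    if has_mcp is not None:
        args["has_mcp"] = has_mcp
    # Standard JSON-RPC 2.0 envelope for an MCP tools/call request.
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "search_agents", "arguments": args},
    }
```

Calling `search_agents_call("payment API", min_score=70, has_mcp=True)` produces a request that asks only for agent-ready payment services exposing an MCP server.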
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explains that results are ranked by 'agentic readiness score (0-100)' based on specific criteria (llms.txt, OpenAPI specs, etc.), which adds useful context about ranking behavior. However, it does not disclose other behavioral traits like rate limits, authentication needs, or error handling, leaving gaps for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: it starts with the core purpose, explains ranking criteria, and ends with usage examples. Every sentence earns its place without redundancy, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (search with multiple filters), no annotations, and no output schema, the description is fairly complete. It covers purpose, ranking methodology, and usage examples. However, it lacks details on output format (e.g., what fields are returned) and behavioral constraints like pagination or rate limits, which would enhance completeness for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 8 parameters thoroughly. The description adds some semantic context by mentioning 'agentic readiness score', which relates to the min_score parameter, but it does not provide additional meaning beyond what the schema specifies for parameters like query or category. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search for websites, APIs, and services that AI agents can actually use.' It specifies the verb 'search' and resource 'websites, APIs, and services,' and distinguishes itself from sibling tools (get_site_details, get_stats) by focusing on discovery rather than detailed information retrieval or statistics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: 'Use this to discover payment APIs, job boards, data sources, or any web service your agent needs to call.' It gives examples of use cases but does not explicitly state when not to use it or mention alternatives, such as using get_site_details for detailed information after discovery.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
submit_site: Submit a Site for Indexing
Submit a URL for NHS to crawl and score. Use when you discover an agent-first tool, API, or service that isn't in the index yet. NHS will fetch the site, check its 7 agentic signals (llms.txt, ai-plugin.json, OpenAPI, structured API, MCP server, robots.txt AI rules, Schema.org), compute a score, and add it to the index. The site becomes searchable within a few seconds if the crawl succeeds.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Full URL to submit (include scheme, e.g. 'https://example.com'). Homepage is best — NHS will check /.well-known/ paths, /robots.txt, /llms.txt, etc. relative to the site root. |
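Because the signal paths are checked relative to the site root regardless of the submitted page, the probe URLs can be derived from any URL on the site. A sketch of that resolution (the exact filenames NHS probes are an assumption based on the conventions the description names):

```python
from urllib.parse import urlparse

# Assumed locations for a few of the agentic signals listed in the
# description; the real crawler may check more or different paths.
SIGNAL_PATHS = ["/llms.txt", "/robots.txt", "/.well-known/ai-plugin.json"]

def signal_probe_urls(url: str) -> list:
    # Reduce the submitted URL to its site root, then resolve each
    # signal path against it, as the schema note describes.
    parsed = urlparse(url)
    root = f"{parsed.scheme}://{parsed.netloc}"
    return [root + path for path in SIGNAL_PATHS]
```

Submitting `https://example.com/blog/post` would thus still lead to `https://example.com/llms.txt` and the other root-relative paths being checked.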
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by explaining the crawl process, what signals NHS checks, the scoring computation, and the indexing outcome. It mentions the time frame ('within a few seconds') and success condition ('if the crawl succeeds'), though it doesn't detail potential failure modes or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with two sentences that each earn their place: the first explains the core action and use case, the second details the process and outcome. There's no wasted text, and information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (submission with crawling, scoring, and indexing) and no annotations or output schema, the description provides substantial context about the process and outcome. It explains what happens after submission but doesn't detail the scoring methodology or what 'agentic signals' specifically entail.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description doesn't add meaningful parameter information beyond what's already in the schema's description field, which already explains URL format requirements and best practices.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('submit for NHS to crawl and score') and identifies the resource ('URL'). It distinguishes from sibling tools like get_site_details, get_stats, and search_agents by focusing on submission rather than retrieval or search operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('when you discover an agent-first tool, API, or service that isn't in the index yet'). It provides clear context about the tool's purpose and distinguishes it from alternatives by focusing on initial submission rather than subsequent operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_mcp: Verify MCP Endpoint
Actively probe any URL to check if it is a live, spec-compliant MCP server. Sends a JSON-RPC tools/list request and verifies a valid response. Use this before depending on a third-party MCP endpoint — manifests and documentation can claim MCP support without actually serving it. Returns {verified: true/false, endpoint, note}.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Full URL of the MCP endpoint to probe (include scheme, e.g. 'https://example.com/mcp'). |
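The probe the description outlines — send a JSON-RPC tools/list request, verify a valid response — can be sketched without the transport layer. The request builder and the validity check below are assumptions based on JSON-RPC 2.0 and MCP conventions, not verify_mcp's actual implementation:

```python
def tools_list_request(request_id=1):
    # The JSON-RPC 2.0 request the description says verify_mcp sends.
    return {"jsonrpc": "2.0", "id": request_id, "method": "tools/list"}

def looks_like_mcp(body, request_id=1):
    # A reply counts as spec-compliant here if it echoes the request id
    # and carries a result.tools array; an error reply fails the check.
    if not isinstance(body, dict):
        return False
    result = body.get("result")
    return (
        body.get("jsonrpc") == "2.0"
        and body.get("id") == request_id
        and isinstance(result, dict)
        and isinstance(result.get("tools"), list)
    )
```

A manifest that merely claims MCP support would fail this check the moment its endpoint returns an error object or a non-JSON-RPC body, which is exactly the gap verify_mcp exists to close.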
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by explaining the tool's behavior: it actively probes via a JSON-RPC tools/list request and returns a structured result with verification status. It doesn't mention error handling or rate limits, but covers the core operation adequately.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by implementation details and usage context, all in three efficient sentences with zero wasted words, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (probing external endpoints), no annotations, and no output schema, the description does a good job by explaining the verification process and return format. It could mention potential errors or timeouts, but it's largely complete for its purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the 'url' parameter fully. The description adds no additional parameter details beyond what the schema provides, meeting the baseline score of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('probe', 'check', 'verify') and resource ('URL', 'MCP server'), distinguishing it from sibling tools like get_site_details or register_monitor by focusing on endpoint verification rather than data retrieval or registration.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly states when to use this tool ('before depending on a third-party MCP endpoint') and provides context about why ('manifests and documentation can claim MCP support without actually serving it'), offering clear guidance on its intended scenario.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.