Argus
Server Quality Checklist
- Disambiguation 4/5
Tools have distinct purposes, though expand_links and search_web both involve query-based discovery which could cause minor confusion. The separation between content extraction, URL recovery, and operational monitoring is clear.
- Naming Consistency 5/5
Strict verb_noun pattern throughout (expand_links, extract_content, recover_url, search_budgets, search_health, search_web, test_provider) with no deviations in casing or structure.
- Tool Count 4/5
Seven tools is appropriate for a web search broker with health monitoring: sufficient to cover search, extraction, recovery, and provider diagnostics without excess.
- Completeness 4/5
Covers core search operations, content extraction, dead URL recovery, and provider health/budget monitoring. Lacks explicit provider configuration or listing tools, but the monitoring and search lifecycle is reasonably complete.
Average 2.8/5 across the 7 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.1
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
This repository includes a glama.json configuration file.
- This server provides 7 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
- expand_links
Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full disclosure burden. It reveals nothing about side effects, rate limits, what constitutes 'related' links, or the structure/format of returned links despite the existence of an output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The single phrase is brief and front-loaded, but given the total lack of schema descriptions and annotations, this brevity leaves critical information gaps. 'For discovery' adds minimal value without elaborating what discovery entails.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having an output schema, the description fails to characterize return values. With zero schema descriptions on 2 parameters and no annotations, the description should explain the query-context relationship and expected link types, but provides none of this.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description must compensate for undocumented parameters. It only implicitly references 'query' but provides no type hints, format expectations, or explanation of what 'context' parameter does or when to use it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 3/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses 'Expand' as a verb and mentions 'query' and 'related links', giving a basic sense of the operation. However, 'expand' is semantically ambiguous—unclear whether it augments the query text or discovers external URLs. It fails to differentiate from sibling tools like search_web or extract_content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus search_web or extract_content. No mention of prerequisites, expected query formats, or when the optional 'context' parameter should be supplied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- test_provider
Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. While 'smoke-test' implies a read-only health check, the description does not confirm idempotency, error behavior, side effects, or what 'smoke-test' actually validates.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely brief at four words. While not verbose, it is under-specified rather than appropriately concise given the lack of schema documentation and annotations. Every word earns its place, but critical information is omitted.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having an output schema (reducing the need to document returns), the description fails to explain the two parameters or behavioral traits. For a tool with 0% schema coverage and no annotations, additional context is mandatory but missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, requiring the description to compensate. It implicitly references the 'provider' parameter but provides no information on valid values, format, or what the 'query' parameter represents (despite its specific default value 'argus').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 3/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'smoke-test' and identifies the resource 'provider', but leaves 'provider' ambiguous (API provider? search provider?) without domain context. It implicitly distinguishes from siblings (search/extract tools) by being a testing utility.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like search_web or extract_content, nor does it mention prerequisites or typical use cases (e.g., validation before production queries).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- recover_url
Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full behavioral burden. It fails to explain what 'recovery' entails (archived versions? redirect following? status checking?) and omits the read-only nature, auth requirements, and rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise (7 words) and front-loaded with action. Structure is efficient with zero redundancy, though undersized given the complexity (3 undocumented params, no annotations).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for complexity: 3 parameters with 0% schema coverage, no annotations, and ambiguous behavioral scope ('recover' is undefined). While output schema exists (reducing description burden for return values), critical gaps remain in parameter semantics and behavioral disclosure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%. While 'URL' in the description maps intuitively to the required `url` param, the optional `title` and `domain` parameters are completely undocumented—no explanation of their purpose or when to include them.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Recover') and resource ('URL') with specific scope ('dead, moved, or unavailable'). Lacks explicit differentiation from siblings like `expand_links` or `search_web`, but the 'dead/unavailable' qualifier provides implicit context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context ('dead, moved, or unavailable') but lacks explicit when-to-use guidance or alternative recommendations. No mention of when to prefer this over `search_web` or `expand_links`.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- search_web
Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. While it names the Argus broker, it omits critical behavioral context: it doesn't explain what the 'discovery' mode implies, what other modes exist, the purpose of session_id, or whether searches consume quota/rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence is front-loaded and wastes no words, earning high marks for efficiency. However, extreme brevity becomes a liability given the parameter complexity and lack of schema descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Insufficient for a 4-parameter tool with 0% schema coverage. While output schema exists (reducing need to describe returns), the description leaves critical operational parameters (mode, session_id, max_results) unexplained, requiring users to guess their semantics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate for undocumented parameters. It fails to explain the 'mode' options (beyond 'discovery' being the default), the session management behavior, or result format expectations, leaving all four parameters effectively undocumented beyond their titles.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (Search) and resource (the web) and identifies the implementation broker (Argus), which provides implementation context. However, it fails to distinguish from sibling search tools (search_budgets, search_health) to guide selection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this general web search versus specialized siblings (search_budgets, search_health), and does not mention prerequisites like API keys for the Argus broker.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- extract_content
Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions 'clean' text (indicating processing occurs) but fails to disclose the safety profile, rate limits, external dependencies, or what 'clean' specifically entails (HTML stripping vs article extraction).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence with no wasted words. However, given the lack of annotations and schema descriptions, the description is too terse to fully inform an agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Sufficient for a simple single-parameter tool with an output schema (which handles return documentation). However, gaps remain regarding parameter documentation and behavioral traits that the description should cover given 0% schema annotation coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. While the description implies the URL parameter ('from a URL'), it adds no details about expected format (http/https requirements), validation rules, or examples to compensate for the schema's lack of documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb 'Extract' and resource 'text content from a URL'. However, it does not explicitly differentiate from siblings like 'expand_links' or 'search_web' that also interact with URLs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like 'search_web' (which might also return content) or 'expand_links'. No prerequisites or error conditions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- search_budgets
Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, yet the description fails to disclose behavioral traits like read-only safety, rate limits, caching behavior, or authentication requirements. It only implies a read operation via 'Get'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded, no redundant information. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With zero parameters and output schema present, the description covers the basic intent but lacks domain context (what 'providers' refers to) and behavioral details expected when no annotations exist.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool takes no parameters, meeting the baseline expectation; schema coverage is trivially 100% for an empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States clear action (Get) and resource (budget status) with scope (all providers). Sufficiently distinguishes from siblings like search_health or test_provider by focusing on financial/resource budgets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives (e.g., checking budgets before calling search_web), no prerequisites, and no conditions for use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- search_health
Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Get' implies a read-only operation, the description does not confirm this explicitly, nor does it describe what constitutes 'health status' (availability, latency, rate limits, etc.), caching behavior, or whether the check is synchronous.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is exactly six words with zero redundancy. It is appropriately front-loaded with the action verb and efficiently conveys the scope ('all search providers'). Every word earns its place for this simple utility endpoint.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the existence of an output schema excuses the description from detailing return values, the description remains minimal. Given the presence of potentially overlapping sibling tools (particularly 'test_provider'), additional context distinguishing this bulk health check from individual provider testing would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, which per the evaluation rules establishes a baseline score of 4. With no parameters to document, there is no additional semantic guidance required beyond the implicit fact that this operation requires no inputs to retrieve health status for all providers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and clearly identifies the resource ('health status of all search providers'). However, it does not differentiate from the sibling tool 'test_provider', which may also relate to provider health/status checking, leaving ambiguity about which tool to use when.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'test_provider' or 'search_budgets'. There are no prerequisites, conditions, or explicit exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) of 1–5 across six weighted dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
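The weighting described above can be sketched as follows. This is an illustrative reconstruction from the published percentages, not Glama's actual implementation; function names and data layout are assumptions.

```python
# Illustrative sketch of the quality-score formula described above.

# Per-tool dimension weights from the published breakdown.
DIMENSION_WEIGHTS = {
    "purpose": 0.25,       # Purpose Clarity
    "usage": 0.20,         # Usage Guidelines
    "behavior": 0.20,      # Behavioral Transparency
    "parameters": 0.15,    # Parameter Semantics
    "conciseness": 0.10,   # Conciseness & Structure
    "completeness": 0.10,  # Contextual Completeness
}

def tool_tdqs(ratings):
    """Weighted per-tool definition quality score (1-5 scale).

    `ratings` maps each dimension name to its 1-5 rating."""
    return sum(DIMENSION_WEIGHTS[d] * ratings[d] for d in DIMENSION_WEIGHTS)

def overall_quality(tool_scores, coherence):
    """Combine per-tool TDQS values (70%) with server coherence (30%).

    The definition-quality component blends the mean (60%) with the
    minimum (40%), so one poorly described tool drags the score down."""
    definition_quality = 0.6 * (sum(tool_scores) / len(tool_scores)) \
        + 0.4 * min(tool_scores)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    """Map an overall score to a letter tier per the cutoffs above."""
    for letter, cutoff in [("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)]:
        if score >= cutoff:
            return letter
    return "F"
```

For example, with this server's coherence dimensions (4, 5, 4, 4, mean 4.25), a hypothetical server whose tools all scored a TDQS of 3.0 would land at 0.7 × 3.0 + 0.3 × 4.25 = 3.375, tier B.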
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/Khamel83/argus'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.