mcp-research
Server Quality Checklist
- Disambiguation 4/5
The three tools occupy distinct layers: `fetch_url` for specific URL retrieval, `web_search` for discovery/browse, and `research` for comprehensive synthesis. Some capability overlap exists (e.g., `web_search` can auto-fetch), but the intent-based separation is clear enough for correct selection.
- Naming Consistency 3/5
All tools use snake_case, but patterns vary: `fetch_url` follows verb_noun, `web_search` follows noun_verb, and `research` is an ambiguous single word. The inconsistency in verb placement and specificity reduces predictability.
- Tool Count 3/5
Three tools is borderline thin for a research domain. While it covers a basic pipeline (search → fetch → synthesize), it lacks granularity for intermediate steps or specialized operations, forcing agents to rely on high-level compound tools for everything.
- Completeness 3/5
Core information retrieval is covered, but notable gaps exist: no comparison/analysis of multiple sources, no citation extraction, no cache management, and no way to pause/resume or refine existing research sessions. Agents may hit dead ends when needing granular control.
Average 4/5 across 3 of 3 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 3 tools.
No known security issues or vulnerabilities reported.
Tool Scores
web_search
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Complements annotations well by disclosing the 3-tier cascade fallback mechanism and external dependencies (Brave, DuckDuckGo, scraper, Ollama). This helps the agent understand potential latency, failure modes, and that results may come from different sources. No contradiction with readOnlyHint/idempotentHint annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently organized with the operational summary front-loaded, followed by the Args block. Every sentence conveys essential information about the cascade mechanism or parameter semantics. The structured Args list is necessary given the lack of schema descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With output schema present, annotations covering safety profiles, and the description fully documenting parameters and behavioral mechanics (cascade, fetching), the definition is substantially complete. Minor gap: could mention rate limits or typical response time given the multi-provider approach.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Comprehensively compensates for 0% schema description coverage by documenting all 4 parameters in the Args block. Adds valuable constraints (1-20 range for max_results) and conditional dependencies (Ollama availability for summarize) that are absent from the schema's bare type definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Search) and resource (web) clearly. The 3-tier cascade detail (Brave → DuckDuckGo → scraper) precisely defines the implementation mechanism. However, it does not explicitly differentiate from sibling tools 'fetch_url' (direct URL fetching) or 'research' (likely comprehensive analysis), leaving ambiguity about when to select this versus alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus the 'research' or 'fetch_url' siblings. Does not explain when to enable 'summarize' or 'auto_fetch_top' flags versus making separate tool calls. No mention of prerequisites like internet connectivity given the openWorldHint.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
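To make the parameter semantics discussed above concrete, the sketch below shows what a call's arguments might look like. It is only an illustration: the names max_results, auto_fetch_top, and summarize are taken from the assessment text, while the query field name and all values are assumptions, not drawn from the actual schema.
# Hypothetical web_search arguments (Python); names and values are assumptions as noted above.
web_search_args = {
    "query": "model context protocol security best practices",  # assumed name for the search-term parameter
    "max_results": 10,        # the description documents a 1-20 range
    "auto_fetch_top": False,  # when enabled, top results are fetched automatically
    "summarize": False,       # summarization requires a reachable Ollama instance
}
print(web_search_args)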
fetch_url
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds substantial value beyond annotations: discloses SSRF protection, caching behavior, markdown conversion output format, and critical external dependency (Ollama availability) for summarization. These behavioral traits are not inferable from the structured annotations alone.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently front-loaded with core action and key behaviors (SSRF, cached). Args section follows clearly. No redundant information, though the docstring format is standard rather than exceptional.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive given complexity: documents all parameters despite poor schema coverage, mentions output format (markdown), acknowledges output schema exists so return values needn't be described. Could mention rate limits or cache TTL for a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description successfully documents all 3 parameters: url purpose, summarize's conditional behavior (Ollama dependency), and max_chars function. Deducted one point for not mentioning default values or valid ranges that appear in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (fetch) and resource (URL) with output format (markdown). However, lacks explicit differentiation from sibling tools 'web_search' and 'research' which could confuse agents on when to fetch a specific URL versus performing a search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions 'SSRF-protected' implying safety for user-provided URLs, which provides implicit usage guidance. However, lacks explicit when-to-use guidance comparing against siblings (web_search vs fetch_url) and doesn't mention prerequisites like network requirements beyond the Ollama reference.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
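For comparison, a similar hedged sketch of fetch_url arguments, using the three parameter names documented in the description (url, summarize, max_chars); the values and assumed defaults are illustrative only.
# Hypothetical fetch_url arguments (Python); values are illustrative assumptions.
fetch_url_args = {
    "url": "https://example.com/article",  # page to retrieve; output is converted to markdown
    "summarize": False,                    # only takes effect when Ollama is available
    "max_chars": 4000,                     # assumed truncation limit; the real default/range is not shown here
}
print(fetch_url_args)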
research
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, idempotent, openWorld), but the description adds valuable workflow context (the 4-step pipeline) and quantitative expectations for depth levels (2/5/10 pages). No contradictions with annotations detected.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with the core workflow summary, followed by structured Args documentation. Every element serves a purpose—no redundant text, no missing critical details. Well-structured for LLM parsing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of output schema and rich annotations, the description appropriately focuses on workflow orchestration and parameter semantics. It fully compensates for the schema's documentation gaps and adequately explains the compound operation's behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description carries full documentation weight and succeeds completely: it defines 'query' as the research question, explains 'depth' values with specific page counts, and clarifies 'context' usage for synthesis continuity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly defines the tool as a compound operation ('search → fetch top pages → summarize → synthesize') that distinguishes it from sibling tools fetch_url and web_search. It uses specific verbs and explains the multi-step pipeline distinctively.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'Compound research' framing and explicit workflow pipeline provide clear context that this is for comprehensive research synthesis versus simple fetch/search operations. While it lacks explicit 'when not to use' statements, the scope differentiation from implied siblings is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
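A matching sketch for research, using the parameter names from the description (query, depth, context). The accepted depth values are not shown in this excerpt, so the value below is a placeholder.
# Hypothetical research arguments (Python); the depth encoding is an assumption.
research_args = {
    "query": "current approaches to sandboxing MCP servers",  # the research question
    "depth": 2,     # placeholder; the description maps depth levels to roughly 2/5/10 fetched pages
    "context": "",  # optional prior findings carried into the synthesis step
}
print(research_args)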
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) from 1–5, computed as a weighted combination of six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
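As a rough illustration (a sketch of the formula described above, not Glama's actual implementation), the aggregation can be written as:
# Sketch of the published scoring formula; weights and cutoffs are as stated above.
DIMENSION_WEIGHTS = {
    "purpose": 0.25, "usage_guidelines": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tool_tdqs(dimension_scores):
    # Weighted average of the six per-dimension scores (each 1-5).
    return sum(DIMENSION_WEIGHTS[d] * dimension_scores[d] for d in DIMENSION_WEIGHTS)

def overall_quality(per_tool_tdqs, coherence):
    # Server-level definition quality: 60% mean TDQS + 40% minimum TDQS.
    definition_quality = 0.6 * (sum(per_tool_tdqs) / len(per_tool_tdqs)) + 0.4 * min(per_tool_tdqs)
    # Overall score: 70% definition quality + 30% server coherence.
    score = 0.7 * definition_quality + 0.3 * coherence
    for tier, cutoff in (("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)):
        if score >= cutoff:
            return score, tier
    return score, "F"
For example, three tools each scoring 4.0 with a coherence of 3.25 (the average of the four checklist dimensions above) would give overall_quality([4.0, 4.0, 4.0], 3.25) == (3.775, "A").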
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/MABAAM/Maibaamcrawler'
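The same endpoint can also be queried from a script; a minimal sketch follows (the response structure is not documented here, so it is simply pretty-printed):
# Fetch the server record from the MCP directory API and pretty-print the JSON response.
import json, urllib.request

url = "https://glama.ai/api/mcp/v1/servers/MABAAM/Maibaamcrawler"
with urllib.request.urlopen(url) as resp:
    print(json.dumps(json.load(resp), indent=2))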
If you have feedback or need assistance with the MCP directory API, please join our Discord server.