MCP DuckDuckGo Search Plugin
Server Quality Checklist
Latest release: v1.0.0
- Disambiguation 5/5
Each tool has a clearly distinct purpose: get_page_content extracts content from a specific URL, web_search performs general web searches, and suggest_related_searches provides autocomplete suggestions. There is no overlap in functionality, making tool selection straightforward for an agent.
- Naming Consistency 5/5
All tool names follow a consistent verb_noun pattern (get_page_content, suggest_related_searches, web_search) using snake_case. The naming is predictable and readable, with no deviations in style or convention.
- Tool Count 5/5
With 3 tools, the server is well-scoped for a DuckDuckGo search plugin. Each tool serves a distinct and essential function in the search workflow, from performing searches to extracting content and getting suggestions, with no unnecessary redundancy.
- Completeness 4/5
The tool surface covers core search operations effectively, including searching, content extraction, and related suggestions. A minor gap exists in advanced search features like filtering by date or region, but agents can work around this with the provided tools for most use cases.
Average 3/5 across 3 of 3 tools scored.
See the Tool Scores section below for per-tool breakdowns.
- No issues in the last 6 months
- No commit activity data available
- No stable releases found
- No critical vulnerability alerts
- No high-severity vulnerability alerts
- No code scanning findings
- CI status not available
This repository is licensed under the MIT License.
This repository includes a README.md file.
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
If you are the author, simply claim the server.
If the server belongs to an organization, first add glama.json to the root of your repository:

    {
      "$schema": "https://glama.ai/mcp/schemas/server.json",
      "maintainers": [
        "your-github-username"
      ]
    }

Then claim the server. Browse examples.
Add related servers to improve discoverability.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
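Since the weights are fully specified, the published per-tool and checklist scores are enough to reconstruct the overall score. The Python sketch below is a back-of-the-envelope check under stated assumptions (linear weighting, no intermediate rounding); it is not Glama's actual implementation:

```python
# Reconstruction of the scoring formula described above, using this
# server's published numbers. Aggregation order and rounding are assumptions.

# TDQS dimension weights (Purpose 25%, Usage 20%, Behavior 20%,
# Parameters 15%, Conciseness 10%, Completeness 10%).
WEIGHTS = {"purpose": 0.25, "usage": 0.20, "behavior": 0.20,
           "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10}

# Per-dimension scores from the Tool Scores section below.
TOOLS = {
    "get_page_content":         dict(purpose=4, usage=2, behavior=2, parameters=3, conciseness=4, completeness=3),
    "suggest_related_searches": dict(purpose=4, usage=2, behavior=2, parameters=3, conciseness=4, completeness=3),
    "web_search":               dict(purpose=4, usage=2, behavior=2, parameters=3, conciseness=5, completeness=3),
}

tdqs = {name: sum(WEIGHTS[d] * s for d, s in scores.items())
        for name, scores in TOOLS.items()}

# Server-level definition quality: 60% mean TDQS + 40% minimum TDQS.
definition_quality = 0.6 * sum(tdqs.values()) / len(tdqs) + 0.4 * min(tdqs.values())

# Server Coherence: equal-weight mean of the four checklist scores.
coherence = (5 + 5 + 5 + 4) / 4  # Disambiguation, Naming, Tool Count, Completeness

overall = 0.7 * definition_quality + 0.3 * coherence
print({n: round(v, 2) for n, v in tdqs.items()})  # ~2.95, 2.95, 3.05
print(round(overall, 2))                          # ~3.5, right at the A tier cutoff
```

Under these assumptions, the weak Behavior and Usage Guidelines scores drag every tool's TDQS below its raw average, while the strong coherence scores pull the overall result back up to roughly the A/B boundary.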
Tool Scores
get_page_content
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions what the tool returns ('page title, description, and main content'), which is helpful, but lacks critical details like error handling, rate limits, authentication needs, or performance characteristics. For a web-fetching tool, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise: two sentences that directly address purpose and return values, front-loaded with the core functionality. However, the second sentence could be integrated with the first for better flow, and stray whitespace slightly affects the structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return value documentation) and 100% schema coverage for the single parameter, the description provides adequate context for basic understanding. However, for a web content extraction tool with no annotations, it should ideally mention common constraints like URL validation, content type limitations, or network timeout behavior to be more complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'url' parameter clearly documented. The description adds no additional parameter semantics beyond what's in the schema. According to scoring rules, when schema_description_coverage is high (>80%), the baseline is 3 even with no param info in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('fetch and extract content from a web page') and identifies the resource ('web page'). It distinguishes from sibling tools like 'suggest_related_searches' and 'web_search' by focusing on content extraction rather than search or suggestions. However, it doesn't explicitly differentiate itself from potential similar tools not in the sibling list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'web_search' or 'suggest_related_searches'. There's no mention of prerequisites, constraints, or typical use cases. The agent must infer usage from the purpose alone, which is insufficient for optimal tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
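Both 2/5 dimensions above (Behavior and Usage Guidelines) can be fixed in the description alone. Below is a hypothetical rewrite sketched with the official Python SDK's FastMCP helper, where the docstring becomes the tool description. The behavioral claims in the docstring are placeholders; they would need to match the tool's real error handling and limits:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("duckduckgo")

@mcp.tool()
async def get_page_content(url: str) -> str:
    """Fetch a web page and extract its title, description, and main content.

    Use this when you already have a specific URL (e.g. from web_search
    results); use web_search instead when you only have a query.

    Read-only: performs a single outbound HTTP GET with no side effects.
    Returns an error message (rather than raising) on invalid URLs,
    non-HTML responses, or network timeouts. No authentication required.
    """
    ...  # extraction logic unchanged; only the description changes
```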
suggest_related_searches
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the API source (DuckDuckGo autocomplete) and that suggestions are based on real search data, but lacks critical details: authentication requirements, rate limits, error handling, response format (though output schema exists), or whether this is a read-only operation. For a tool with no annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two clear sentences. The first sentence states the core purpose, and the second adds valuable context about the data source. There's no unnecessary repetition or fluff. It could be slightly improved with more structured guidance, but it's efficiently written.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists (which handles return values), the description doesn't need to explain return values. However, for a tool with no annotations and two parameters, the description should provide more behavioral context about API limitations, error cases, or typical use patterns. The current description is minimally adequate but leaves room for improvement in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters ('query' and 'max_suggestions'). The description adds no additional parameter semantics beyond what's in the schema. According to scoring rules, when schema coverage is high (>80%), the baseline is 3 even with no param info in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get search suggestions from DuckDuckGo autocomplete API' specifies both the verb ('Get') and resource ('search suggestions'), and 'Returns suggestions based on what people actually search for' adds useful context about the data source. However, it doesn't explicitly differentiate itself from sibling tools like 'web_search' or 'get_page_content', which a 5 would require.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'web_search' or 'get_page_content'. There's no mention of use cases, prerequisites, or exclusions. The only implied usage is for obtaining search suggestions, but this is too vague for effective tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
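The 'no annotations provided' complaint also has a structural fix: the MCP spec defines tool annotations (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) precisely so that read-only behavior does not have to live in prose. A sketch, assuming a recent python-sdk where the tool decorator accepts annotations; the max_suggestions default shown is illustrative, not taken from this server's schema:

```python
from mcp.server.fastmcp import FastMCP
from mcp.types import ToolAnnotations

mcp = FastMCP("duckduckgo")

@mcp.tool(
    annotations=ToolAnnotations(
        readOnlyHint=True,    # never mutates anything
        idempotentHint=True,  # repeating the call causes no additional effect
        openWorldHint=True,   # calls out to an external service (DuckDuckGo)
    )
)
async def suggest_related_searches(query: str, max_suggestions: int = 10) -> list[str]:
    """Get search suggestions from the DuckDuckGo autocomplete API."""
    ...
```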
web_search
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the search engine (DuckDuckGo) and return format, but lacks critical details like rate limits, authentication needs, privacy implications, or error handling. For a tool with no annotation coverage, this leaves significant gaps in understanding its operational behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded: two sentences that directly state the tool's function and output. Every sentence earns its place with no wasted words, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (web search with two parameters), no annotations, and an output schema (implied by context signals), the description is minimally adequate. It covers the basic purpose and output format, but lacks usage guidelines and behavioral details that would be helpful for an AI agent, especially without annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters (query and max_results). The description adds no additional parameter semantics beyond what's in the schema, such as query formatting tips or result ordering. This meets the baseline for high schema coverage but doesn't enhance understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search the web using DuckDuckGo' specifies the action (search) and resource (web via DuckDuckGo). It distinguishes from sibling tools like 'get_page_content' (which fetches specific page content) and 'suggest_related_searches' (which suggests queries), but could be more explicit about this differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose web_search over get_page_content (e.g., for broad queries vs. specific URLs) or suggest_related_searches (e.g., for refining searches). There's no context about use cases or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
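The routing advice in the last point ('broad queries vs. specific URLs') translates directly into the kind of description that would score higher on Usage Guidelines. One hypothetical phrasing; the wording is illustrative, not the server's actual description:

```python
WEB_SEARCH_DESCRIPTION = """\
Search the web using DuckDuckGo; returns result titles, URLs, and snippets.

Use this for broad or exploratory queries. Use get_page_content instead
when you already have a specific URL to read, and suggest_related_searches
when you want to refine or expand a query rather than answer it.
"""
```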
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Two badge variants are available: a Card Badge and a Score Badge. Copy the embed snippet for either from the server page into your README.md.
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/gianlucamazza/mcp-duckduckgo'
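For reference, a minimal Python equivalent of the curl call above, assuming only that the endpoint returns JSON (the response fields are not documented on this page, so the sketch just pretty-prints whatever comes back):

```python
import json
from urllib.request import urlopen

URL = "https://glama.ai/api/mcp/v1/servers/gianlucamazza/mcp-duckduckgo"

with urlopen(URL) as resp:  # plain unauthenticated GET, matching the curl example
    server = json.load(resp)

print(json.dumps(server, indent=2))  # inspect the payload before relying on any field
```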
If you have feedback or need assistance with the MCP directory API, please join our Discord server.