searxng-mcp-server
Server Quality Checklist
- This repository includes a README.md file.
- This repository includes a LICENSE file.
- Latest release: v0.1.0
- No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value. Tip: use the "Try in Browser" feature on the server page to seed initial usage.
- Add a glama.json file to provide metadata about your server.
- This server provides 2 tools.
- No known security issues or vulnerabilities reported.
- This server has been verified by its author.
- Add related servers to improve discoverability.
Tool Scores
Tool: `search`

- Behavior: 2/5
  Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
  No annotations are provided, so the description carries the full disclosure burden. It fails to disclose what SearXNG aggregation entails or any mutation effects, and behavioral traits such as rate limits or latency are absent; the output schema documents the return structure, but not these.
  Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
  Is the description appropriately sized, front-loaded, and free of redundancy?
  Well structured, with the purpose front-loaded in the first line followed by a clear Args section. No redundant prose. The Args list is necessary given zero schema coverage, so it earns its place.
  Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 2/5
  Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
  Despite 7 parameters and 0% schema description coverage, the description lacks crucial domain context: it misses the critical relationship to `get_available_engines` for valid engine names, doesn't explain SearXNG's meta-search nature, and omits behavioral constraints. The output schema excuses the missing return-value documentation, but the operational context is insufficient.
  Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
  Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
  With 0% schema description coverage, the Args section compensates by documenting all 7 parameters, adding structural hints not present in the raw schema ('comma-separated'; the enum values day|month|year for `time_range` and 0, 1, 2 for `safe_search`). However, it lacks semantic detail on valid category strings, and never states that engine names must come from the sibling tool.
  Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
  Does the description clearly state what the tool does and how it differs from similar tools?
  States a specific verb (Search) and resource (SearXNG), clearly identifying this as a web search operation. It is implicitly distinguished from the sibling `get_available_engines` (this performs a search; that lists engines), though explicit differentiation would strengthen it further.
  Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
  Does the description explain when to use this tool, when not to, or what alternatives exist?
  Provides no guidance on when to use the tool versus alternatives, prerequisites, or when not to use it. Critically, it misses the relationship to `get_available_engines` for populating the `engines` parameter, leaving agents to guess valid values. A sketch of a stronger description follows this list.
  Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
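To make the gaps concrete, here is a hypothetical rewrite showing the kind of description these dimensions reward. The wording and behavioral claims below are illustrative assumptions, not the tool's actual docstring:

```text
Search the web through a SearXNG instance, a meta-search engine that
aggregates results from multiple upstream engines. Read-only: performs
no mutations. Results depend on which engines the instance enables.

Call get_available_engines first; the engines parameter takes a
comma-separated list of names returned by that tool. time_range accepts
day, month, or year; safe_search accepts 0, 1, or 2.
```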
Tool: `get_available_engines`

- Behavior: 2/5
  Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
  No annotations are provided, so the description carries the full disclosure burden. While 'Get' implies read-only behavior, the description omits details on caching, authentication requirements, whether results are user-specific or global, and what the output structure contains (though an output schema exists).
- Conciseness: 4/5
  Is the description appropriately sized, front-loaded, and free of redundancy?
  A single six-word sentence with no redundancy. Efficiently front-loaded, but arguably too minimal given the lack of annotations; contextual guidance could have been included without sacrificing clarity.
- Completeness: 3/5
  Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
  Adequate for a simple discovery tool with an existing output schema (which handles return-value documentation). However, it misses the opportunity to explain the relationship to the `search` sibling or to define what makes an engine 'available'. A minimally viable description.
- Parameters: 4/5
  Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
  Zero input parameters with 100% schema coverage (an empty object). Per the rubric, the baseline score is 4 for zero-parameter tools, since there are no semantics to clarify beyond the schema.
- Purpose: 4/5
  Does the description clearly state what the tool does and how it differs from similar tools?
  States a specific verb ('Get') and resource ('available search engines'), distinguishing it from the `search` sibling (which performs searches, whereas this one discovers them). However, 'information' is vague about what specific data is returned (names, capabilities, endpoints?).
- Usage Guidelines: 2/5
  Does the description explain when to use this tool, when not to, or what alternatives exist?
  Provides no guidance on when to use this tool versus the `search` sibling, nor on prerequisites for calling it. The description states what the tool does but not why or when an agent should invoke it.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Two embed styles are available: a card badge and a score badge. Copy the embed snippet for either from the server page into your README.md.
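As an illustration only, a README badge embed generally looks like the Markdown below. The URL shape is an assumption for this server, not the generated snippet; copy the real one from the server page:

```markdown
<!-- Hypothetical embed: copy the actual snippet from the Glama server page -->
[![searxng-mcp-server on Glama](https://glama.ai/mcp/servers/maccam912/searxng-mcp-server/badge)](https://glama.ai/mcp/servers/maccam912/searxng-mcp-server)
```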
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
```

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six weighted dimensions, yielding a Tool Definition Quality Score (TDQS): Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
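The arithmetic is easy to check. Below is a minimal Python sketch of the formula as stated above, applied to the two tools scored on this page; the Server Coherence value is a hypothetical placeholder, since it is not shown here:

```python
# Dimension weights from the rubric above.
WEIGHTS = {
    "purpose": 0.25,
    "usage": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tdqs(scores: dict[str, int]) -> float:
    """Tool Definition Quality Score: weighted mean of the six 1-5 dimension scores."""
    return sum(WEIGHTS[dim] * score for dim, score in scores.items())

def overall(tool_scores: list[dict[str, int]], coherence: float) -> float:
    """Overall = 70% definition quality (60% mean TDQS + 40% min TDQS) + 30% coherence."""
    per_tool = [tdqs(s) for s in tool_scores]
    definition_quality = 0.6 * (sum(per_tool) / len(per_tool)) + 0.4 * min(per_tool)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to its letter tier."""
    for letter, cutoff in (("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)):
        if score >= cutoff:
            return letter
    return "F"

# The two tools scored on this page.
search_tool = {"behavior": 2, "conciseness": 4, "completeness": 2,
               "parameters": 3, "purpose": 4, "usage": 2}
engines_tool = {"behavior": 2, "conciseness": 4, "completeness": 3,
                "parameters": 4, "purpose": 4, "usage": 2}

score = overall([search_tool, engines_tool], coherence=3.0)  # coherence is a placeholder
print(round(score, 2), tier(score))
```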
MCP directory API
We provide all the information about MCP servers via our MCP API.
```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/maccam912/searxng-mcp-server'
```
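The same endpoint can be consumed from code. A minimal Python sketch, assuming the response body is JSON (the exact fields are not documented on this page):

```python
import json
import urllib.request

# Fetch this server's entry from the MCP directory API (endpoint shown above).
URL = "https://glama.ai/api/mcp/v1/servers/maccam912/searxng-mcp-server"

with urllib.request.urlopen(URL) as resp:
    server = json.load(resp)  # assumed JSON object; field names not documented here

print(json.dumps(server, indent=2))
```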
If you have feedback or need assistance with the MCP directory API, please join our Discord server.