Kagi MCP Server

by apridachin

Server Quality Checklist

Profile completion: 50%

A complete profile improves this server's visibility in search results.
  • Latest release: v0.1.0

  • Disambiguation: 2/5

    The three tools have overlapping purposes that could cause confusion. 'enrich_news' and 'enrich_web' both enrich context with web content, differing mainly in focus (news vs general), which may not be clear to an agent. 'ask_fastgpt' also involves web content for answers, creating ambiguity in tool selection.

    Naming Consistency: 4/5

    The naming follows a consistent verb_noun pattern throughout (ask_fastgpt, enrich_news, enrich_web), which is predictable and readable. There are no deviations in style, making it easy to parse.

    Tool Count: 3/5

    With only 3 tools, the count feels thin for a web search and enrichment server, potentially limiting functionality. While not extreme, it may lack coverage for common operations like filtering or managing searches.

    Completeness: 2/5

    The tool set has significant gaps for a web content server. There are no tools for basic operations like searching without enrichment, filtering results, or handling different content types beyond news and general web. This could lead to agent failures when trying to perform common tasks.

  • Average 2.6/5 across 3 of 3 tools scored.

    See the Tool Scores section below for per-tool breakdowns.

    • No issues in the last 6 months
    • No commit activity data available
    • No stable releases found
    • No critical vulnerability alerts
    • No high-severity vulnerability alerts
    • No code scanning findings
    • CI status not available
  • This repository is licensed under MIT License.

  • This repository includes a README.md file.

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • If you are the author, simply claim the server.

    If the server belongs to an organization, first add glama.json to the root of your repository:

    {
      "$schema": "https://glama.ai/mcp/schemas/server.json",
      "maintainers": [
        "your-github-username"
      ]
    }

    Then claim the server. Browse examples.

  • Add related servers to improve discoverability.

How do I sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%); the weighted mean is the tool's TDQS (Tool Definition Quality Score). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
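
To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. The weights, thresholds, and per-dimension scores come from this page; the function names and code structure are illustrative, not Glama's actual implementation.

# Sketch of the scoring arithmetic described above. Weights and
# thresholds are from this page; the code itself is illustrative.
TDQS_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tool_tdqs(dims: dict[str, float]) -> float:
    """Per-tool TDQS: weighted mean of the six dimensions (1-5)."""
    return sum(TDQS_WEIGHTS[k] * dims[k] for k in TDQS_WEIGHTS)

def server_tdqs(tool_scores: list[float]) -> float:
    """Server-level definition quality: 60% mean + 40% minimum."""
    mean = sum(tool_scores) / len(tool_scores)
    return 0.6 * mean + 0.4 * min(tool_scores)

def overall(tool_scores: list[float], coherence: float) -> float:
    """Overall quality: 70% definition quality + 30% coherence."""
    return 0.7 * server_tdqs(tool_scores) + 0.3 * coherence

def tier(score: float) -> str:
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"

# Per-tool dimension scores from the Tool Scores section below, and
# coherence taken as the equal-weight mean of 2, 4, 3, and 2 above.
tools = [
    {"purpose": 3, "usage": 2, "behavior": 2, "parameters": 2,
     "conciseness": 4, "completeness": 2},  # enrich_web
    {"purpose": 3, "usage": 2, "behavior": 2, "parameters": 2,
     "conciseness": 5, "completeness": 2},  # enrich_news
    {"purpose": 4, "usage": 2, "behavior": 2, "parameters": 2,
     "conciseness": 4, "completeness": 2},  # ask_fastgpt
]
scores = [tool_tdqs(t) for t in tools]        # 2.45, 2.55, 2.70 (mean ~2.6)
print(tier(overall(scores, coherence=2.75)))  # ~2.59 -> "C"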

Tool Scores

  • enrich_web

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool enriches context with web content but fails to describe key behaviors such as how it sources content, potential rate limits, authentication needs, or what the output looks like. This leaves significant gaps for a tool that presumably performs web queries.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
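
    For comparison, the MCP spec defines behavioral annotation hints that can shift some of this burden off the description. A hedged sketch of what disclosure might look like for a read-only web tool, shown as a Python dict; the values are assumptions for illustration, not this server's actual metadata:

# Hypothetical annotations for a read-only web tool, using the hint
# fields defined by the MCP spec. Values here are assumptions.
annotations = {
    "readOnlyHint": True,      # performs web reads only, no writes
    "destructiveHint": False,  # nothing is deleted or overwritten
    "idempotentHint": True,    # repeating the same query is safe
    "openWorldHint": True,     # reaches out to the public web
}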

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with no wasted words. It's front-loaded with the core purpose, though it could be more structured by explicitly separating purpose from constraints. Overall, it's appropriately sized for the minimal information it conveys.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's complexity (web content enrichment), lack of annotations, no output schema, and low schema coverage, the description is incomplete. It doesn't explain the enrichment process, output format, or error handling, leaving the agent with insufficient information to use the tool effectively beyond a basic understanding.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema has 0% description coverage, so the description must compensate. It doesn't mention the 'query' parameter at all, providing no semantic meaning beyond what the schema's pattern hint suggests (1-3 words). This is inadequate for a tool with one required parameter, as the agent lacks context on what constitutes an effective query.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
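
    To make the coverage point concrete, here is a hedged sketch of an input schema whose single 'query' parameter actually carries semantics, shown as a Python dict; the wording and example value are hypothetical, not the tool's real schema:

# Illustrative schema with the 'query' parameter described; the
# description wording and example value are hypothetical.
input_schema = {
    "type": "object",
    "properties": {
        "query": {
            "type": "string",
            "description": "Short keyword search query, ideally 1-3 "
                           "words (e.g. 'rust async runtimes'); avoid "
                           "full sentences.",
        },
    },
    "required": ["query"],
}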

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description says the tool will 'enrich context with web content', which conveys a general purpose (verb + resource), but it is vague about what kind of enrichment occurs and how the tool differs from siblings like 'enrich_news'. The phrase 'focused on general, non-commercial web content' adds some differentiation but remains broad and non-specific.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no explicit guidance on when to use this tool versus alternatives like 'enrich_news' or 'ask_fastgpt'. It mentions 'general, non-commercial web content' which implies a context but doesn't specify use cases, exclusions, or prerequisites, leaving the agent with minimal direction.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
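
    For illustration, routing guidance of that kind could be folded directly into the description; a hedged example whose wording is hypothetical, grounded only in the three tools' stated focuses:

# Hypothetical description with explicit routing guidance;
# illustrative wording only, not the tool's actual description.
description = (
    "Enrich context with general, non-commercial web content. "
    "Use enrich_news instead for news and discussions; use ask_fastgpt "
    "when you need a synthesized answer with references rather than "
    "raw supporting content."
)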

  • enrich_news

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the content focus ('non-commercial news and discussions') which adds some context, but fails to describe critical behaviors: what 'enrich' means operationally, what format the enrichment takes, whether this is a read-only or write operation, potential rate limits, authentication needs, or error conditions. The description is insufficient for a tool with zero annotation coverage.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is extremely concise at just one sentence with zero wasted words. It's front-loaded with the core purpose and efficiently adds domain specificity. Every word earns its place, making this a model of brevity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool has no annotations, no output schema, and 0% schema description coverage, the description is incomplete. While concise, it fails to provide sufficient information about what the tool actually does operationally, what results to expect, or how to use it effectively. For a tool with such sparse structured data, the description should do much more heavy lifting.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, so the description must compensate for the undocumented parameter. The description mentions 'web content focused on non-commercial news and discussions' which implies the 'query' parameter should relate to this domain, but provides no specifics about what constitutes appropriate queries, expected formats, or how the query influences results. The single parameter remains poorly defined despite the description's attempt at context.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description says the tool will 'enrich context with web content', which conveys a general purpose but lacks specificity about what 'enrich' means operationally. The phrase 'focused on non-commercial news and discussions' adds some domain context, but it doesn't clearly distinguish this tool from its sibling 'enrich_web' or explain what makes it unique. The purpose is understandable but vague.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It doesn't mention the sibling tools 'ask_fastgpt' or 'enrich_web', nor does it provide any context about appropriate use cases, prerequisites, or limitations. The agent receives no help in selecting between available tools.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • ask_fastgpt

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It mentions that the tool searches the web and provides answers with references, which gives some behavioral context. However, it lacks details on permissions, rate limits, response format, or potential side effects, leaving significant gaps for a tool that interacts with external resources.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that front-loads the core functionality. It avoids unnecessary words, but could be slightly improved by structuring it to highlight key actions more clearly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the complexity of a web search tool with no annotations, no output schema, and low parameter coverage, the description is incomplete. It lacks details on error handling, response structure, or integration with siblings, making it inadequate for reliable agent use.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has one parameter ('query') with 0% description coverage. The description adds minimal semantics by implying the query is for web search, but it doesn't specify format, length, or examples. With low schema coverage, the description doesn't adequately compensate, leaving the parameter poorly documented.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose: 'Ask fastgpt to search web and give an answer with references.' It specifies the verb ('ask'), resource ('fastgpt'), and outcome ('answer with references'). However, it doesn't explicitly differentiate from sibling tools like 'enrich_news' or 'enrich_web' beyond implying web search functionality.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools or contexts where this tool is preferred, such as for general web queries versus news-specific enrichment. Usage is implied by the description but not explicitly stated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

kagi-search-mcp MCP server

Copy to your README.md:

Score Badge

kagi-search-mcp MCP server

Copy to your README.md:

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/apridachin/kagi-search-mcp'
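
The endpoint can also be called from code; here is a minimal sketch using only Python's standard library. The shape of the returned JSON is not documented on this page, so treat it as something to inspect rather than rely on:

# Minimal sketch: fetch this server's directory entry. The structure
# of the returned JSON is an assumption to inspect, not documented here.
import json
from urllib.request import urlopen

URL = "https://glama.ai/api/mcp/v1/servers/apridachin/kagi-search-mcp"

with urlopen(URL) as resp:
    server = json.load(resp)

print(json.dumps(server, indent=2))  # inspect the returned metadata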

If you have feedback or need assistance with the MCP directory API, please join our Discord server.