
Server Quality Checklist

Profile completion: 83%

A complete profile improves this server's visibility in search results.
  • Disambiguation: 4/5

    The two invoice tools (get_invoice vs get_arbitrum_invoice) serve the same purpose (payment generation) but for different networks, which could cause momentary hesitation, though the descriptions clarify the distinction. Search tools and report are clearly distinct from payment tools and each other.

    Naming Consistency: 4/5

    Follows a consistent verb_noun snake_case pattern (get_invoice, search_web, search_news). 'report' is slightly vaguer than ideal (e.g., submit_feedback), and 'get_arbitrum_invoice' adds a qualifier while 'get_invoice' implies Lightning, but overall conventions are predictable.

    Tool Count: 5/5

    Five tools is ideal for this scope: two payment initiation methods, two search variants (news vs web), and one feedback mechanism. No bloat, no obvious missing categories for a simple paid search API.

    Completeness: 4/5

    Covers the core lifecycle: generate invoice (both payment rails), execute search (both types), and provide feedback. Minor gaps include no tool to check payment status, karma level, or search history, but the pay-and-search flow is fully supported.

  • Average 3.6/5 across 5 of 5 tools scored. Lowest: 2.9/5.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.1.1

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 5 tools. View schema
  • No known security issues or vulnerabilities reported.

    Report a security issue

  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • search_web

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden. It successfully discloses the payment requirement (critical behavioral context not evident in the schema), but omits other essential behaviors such as what happens when payment is missing/invalid, rate limits, or whether the payment parameters are mutually exclusive.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely concise at two sentences with no filler. Every sentence earns its place by establishing function and payment mechanism. However, the brevity may be excessive given the tool's complexity (paid API, four parameters, 0% schema coverage).

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the combination of zero schema descriptions, no annotations, a paid-service model with cryptocurrency payments, and four parameters, the description is insufficient. It lacks explanation of payment flow logic, default behaviors, and result characteristics, although the presence of an output schema reduces the need for return-value documentation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, requiring the description to compensate. It adds semantic meaning for 'payment_hash' (Lightning) and 'tx_hash' (Arbitrum ETH), but completely ignores 'query' (despite being required) and 'max_results', leaving half the parameter set undocumented.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the core action ('Search the web') and implicitly distinguishes from sibling 'search_news' by specifying 'web' as the target resource. However, it lacks specificity about what search index or results format to expect.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While it mentions the available payment methods (Lightning or Arbitrum ETH), it fails to provide guidance on when to select this tool versus siblings like 'search_news', or when to use which payment option (payment_hash vs tx_hash). No prerequisites or exclusions are stated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • report

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full disclosure burden. It mentions 'Helps Giskard improve', indicating a side effect/feedback loop, but omits critical behavioral details: whether the operation is synchronous or idempotent, possible failure modes, and data retention implications.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely compact with zero redundant sentences. Front-loaded purpose statement followed by parameter documentation. Slightly informal structure (colon-separated param definitions rather than prose) is acceptable given brevity and clarity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a simple 2-parameter feedback tool with existing output schema. Missing explicit linkage to specific sibling search tools (search_web/search_news) and return value explanation, though output schema mitigates the latter gap.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Compensates effectively for 0% schema description coverage by inline documentation: defines 'useful' boolean semantics (helped vs didn't help) and 'note' purpose (missing features/positive feedback). Clearly marks note as optional, matching schema's default value.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action ('Report') and scope ('whether the search was useful') with clear resource context. Effectively distinguishes from data-retrieval siblings (get_invoice, search_news, etc.) by identifying this as a feedback mechanism rather than a query tool.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies usage context through 'the search' reference, suggesting use after search operations, but lacks explicit guidance on WHEN to invoke (e.g., 'Call after search_web/search_news') or prerequisites. No alternatives or exclusions mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • search_news

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Discloses the unique payment requirement (Lightning/Arbitrum) which is critical behavioral context. However, lacks details on rate limits, what 'recent' means temporally, error conditions for invalid payments, or whether the operation is idempotent.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely compact two-sentence structure with zero waste. Front-loaded with core purpose, immediately followed by critical payment context. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Has an output schema (per context signals), so return values need not be explained. However, with 4 parameters and 0% schema coverage, the description inadequately covers the search parameters ('query', 'max_results') and leaves ambiguity about whether payment is optional.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 0% description coverage, requiring heavy lifting from description. Successfully explains 'payment_hash' (Lightning) and 'tx_hash' (Arbitrum ETH) semantics, but completely omits 'query' (expected format/keywords) and 'max_results' (range/limitations), leaving significant gaps.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific verb ('Search') and resource ('recent news'), but fails to distinguish from sibling 'search_web' which may overlap in functionality. The payment method mention adds unique context but doesn't clarify scope differentiation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance on when to use this versus 'search_web' or other siblings. Does not clarify whether payment is mandatory (implied by 'Pay with') or optional (suggested by empty string defaults in schema), nor when to use Lightning versus Arbitrum ETH.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_arbitrum_invoice

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It successfully discloses the payment method (ETH/Arbitrum), but lacks disclosure of behavioral traits like whether this creates a new invoice, idempotency, expiration behavior, or side effects.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence with zero waste. Front-loaded with the action verb, immediately qualifies the payment method, and ends with the differentiating clause. Every word serves a purpose.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the zero-parameter complexity and presence of an output schema (covering return values), the description is nearly complete. It covers purpose and differentiation, though additional context on invoice lifecycle or expiration would strengthen it for a financial tool.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has zero parameters, which, per the scoring guidelines, establishes a baseline score of 4. The description appropriately does not invent parameter semantics where none exist.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description provides a specific verb ('Get'), clear resource ('payment info'), and precise scope ('ETH on Arbitrum'). It effectively distinguishes from sibling tool get_invoice by specifying the alternative payment rail ('instead of Lightning').

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implicitly indicates when to use this tool (when paying with ETH on Arbitrum) and contrasts it with the Lightning alternative. However, it does not explicitly name the sibling tool (get_invoice) or state explicit 'when-not' conditions, falling short of perfect guidance.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_invoice

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Discloses pricing tiers (3-10 sats), karma system mechanics, and identity requirements. Missing: invoice expiration, payment confirmation flow, or side effects of generating an invoice.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three dense sentences with zero waste: purpose front-loaded, followed by parameter semantics, then pricing table. Every word earns its place; no redundancy despite complex pricing logic.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Appropriate for tool complexity (1 param). Covers payment method differentiation, pricing behavior, and parameter meaning. Output schema exists so return values needn't be described. Minor gap: could clarify Lightning Network specifics or invoice expiration.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 0% description coverage. Description fully compensates by explaining agent_id as 'your identity in Giskard Marks', noting it's optional, and detailing the business logic impact (high karma = lower price with specific tier breakdown).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action (Get), resource (Lightning invoice), and workflow context (to pay before searching). Explicitly distinguishes from sibling get_arbitrum_invoice by specifying 'Lightning' (Bitcoin) vs Arbitrum (Ethereum L2).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies sequence ('before searching') suggesting prerequisite usage, but lacks explicit when/when-not guidance comparing it to get_arbitrum_invoice or stating conditions for choosing Lightning over other payment methods.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

giskard-search MCP server

Copy to your README.md:

Score Badge

giskard-search MCP server

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.
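Before committing glama.json, it can be worth checking that the file parses and has the expected shape. The sketch below is an unofficial convenience check (the 'maintainers' field comes from the snippet above; Glama performs its own validation against the published schema):

```python
import json

def check_glama_config(text: str) -> list[str]:
    """Return a list of problems found in a glama.json payload (empty = OK)."""
    problems = []
    try:
        config = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    maintainers = config.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("'maintainers' must be a non-empty list")
    elif not all(isinstance(m, str) and m for m in maintainers):
        problems.append("every maintainer must be a non-empty string")
    return problems

sample = """
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": ["your-github-username"]
}
"""
print(check_glama_config(sample))  # an empty list means the file looks well-formed
```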

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
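The weighting described above can be sketched as a small calculation. This is a reading of the published formula, not Glama's implementation; the dictionary keys and the sample tool scores are illustrative:

```python
# Tool Definition Quality dimension weights, per the description above.
TDQ_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_score(dims: dict[str, float]) -> float:
    """Weighted 1-5 score for a single tool (TDQS)."""
    return sum(TDQ_WEIGHTS[d] * dims[d] for d in TDQ_WEIGHTS)

def definition_quality(tool_scores: list[float]) -> float:
    """Server-level definition quality: 60% mean TDQS + 40% minimum TDQS."""
    mean = sum(tool_scores) / len(tool_scores)
    return 0.6 * mean + 0.4 * min(tool_scores)

def overall(dq: float, coherence: float) -> float:
    """Overall quality: 70% tool definition quality, 30% server coherence."""
    return 0.7 * dq + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to a letter tier (B and above is passing)."""
    for grade, cutoff in (("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)):
        if score >= cutoff:
            return grade
    return "F"

# Illustrative numbers only (not this server's actual sub-scores).
tools = [3.0, 3.1, 3.0, 4.3, 4.3]
coherence = (4 + 4 + 5 + 4) / 4  # the four coherence dimensions, equally weighted
score = overall(definition_quality(tools), coherence)
print(round(score, 2), tier(score))
```

Note how the 40% weight on the minimum TDQS means one poorly described tool drags the server score down even when the mean is healthy.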

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/giskard09/giskard-search'
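The same endpoint can be consumed from code. In this sketch the response field names ('name', 'tools') are assumptions about the payload shape, not a documented contract; check the actual API response and adjust:

```python
import json
from urllib.request import urlopen

API_URL = "https://glama.ai/api/mcp/v1/servers/giskard09/giskard-search"

def summarize_server(payload: dict) -> str:
    """Build a one-line summary from an API payload.

    The 'name' and 'tools' keys are assumed, not documented here.
    """
    name = payload.get("name", "<unknown>")
    tools = payload.get("tools", [])
    return f"{name}: {len(tools)} tool(s)"

# Live call (requires network access):
#   with urlopen(API_URL) as resp:
#       print(summarize_server(json.load(resp)))

# Offline demonstration with a mocked payload:
print(summarize_server({"name": "giskard-search", "tools": [{}] * 5}))
```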

If you have feedback or need assistance with the MCP directory API, please join our Discord server.