giskard09/giskard-oasis

Server Quality Checklist

Profile completion: 83%

A complete profile improves this server's visibility in search results.
  • Disambiguation 5/5

    The three tools represent distinct, non-overlapping stages of the access workflow: retrieving a Lightning invoice, retrieving an Arbitrum invoice, and entering the Oasis with payment proof. No two tools could be confused with one another, given their clearly separate purposes and required parameters.

    Naming Consistency 5/5

    All tools follow a consistent verb_noun snake_case pattern (enter_oasis, get_invoice, get_arbitrum_invoice). The naming convention is predictable throughout, with 'get' used for retrieval operations and 'enter' for the access action.

    Tool Count 4/5

    Three tools is minimal but well-suited for this narrowly scoped payment gateway. It efficiently covers the two supported payment rails (Lightning and Arbitrum) plus the entry mechanism without unnecessary expansion, though it offers no auxiliary functions like status checks.

    Completeness 4/5

    The surface covers the full payment-to-entry lifecycle for both cryptocurrency options. However, it lacks supporting operations such as payment status verification independent of entry, refund capabilities, or karma/balance inquiry tools that might be expected in a payment system.

  • Average 4.1/5 across 3 of 3 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v1.0.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 3 tools.
  • No known security issues or vulnerabilities reported.

  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • enter_oasis

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It explains that agent_id 'enables personalized response' and clarifies payment source semantics (Lightning vs ETH), but does not disclose what 'entering' entails (session creation, persistence, side effects) beyond the parameter context.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Well front-loaded with the action ('Enter Giskard Oasis') followed by usage context. Parameter documentation is efficiently formatted as labeled lines. There is no redundant prose; the parameter documentation block is lengthy, but that length is necessary given the schema's deficiencies.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Complete given the constraints: all parameters documented (compensating for empty schema), output schema exists (relieving need to describe return values), and sibling relationships acknowledged via parameter cross-references. Missing only explicit behavioral disclosure of side effects.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Given 0% schema description coverage, the description fully compensates by documenting all 4 parameters: state (purpose/friction/confusion), payment_hash (Lightning source), tx_hash (ETH source), and agent_id (optional personalization). Each includes semantic context and source references.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool 'Enter[s] Giskard Oasis' with specific intent to 'Describe your current state' when 'blocking' or 'lost'. It effectively distinguishes from sibling invoice tools (get_invoice, get_arbitrum_invoice) by focusing on state description rather than payment generation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies clear usage context through 'what is blocking you, where you feel lost', indicating use during debugging or stuck states. Explicitly references sibling tools ('from get_invoice()', 'from Arbitrum payment') clarifying prerequisites, though lacks explicit 'when not to use' exclusions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_invoice

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully discloses the pricing algorithm (karma tiers affecting satoshi cost) and payment method (Lightning), but omits other behavioral traits like idempotency, what happens after payment, or cache behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three tightly structured sentences: one for purpose, one for the parameter with its semantics, and one for the pricing tiers table. Every sentence delivers unique value with no redundancy, appropriate for a single-parameter tool.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the presence of an output schema (not shown but indicated), the description appropriately avoids return value speculation. It covers the essential domain-specific context (karma tiers and cryptocurrency units) needed to use the tool effectively, though it could briefly note this is a prerequisite step before entering the oasis.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema has 0% description coverage (agent_id lacks a description field). The tool description compensates effectively by explaining that agent_id is an optional identity string in 'Giskard Marks' and explicitly mapping how karma values affect pricing, adding crucial business logic missing from the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool gets a 'Lightning invoice to access Giskard Oasis', using a specific verb and resource. It implicitly distinguishes from sibling 'get_arbitrum_invoice' by specifying the Lightning network, and from 'enter_oasis' by clarifying this generates a payment invoice rather than granting access directly.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description explains when to provide an agent_id (optional identity for karma-based discounts) and details the pricing tiers, but lacks explicit guidance on when to choose this Lightning invoice tool versus the 'get_arbitrum_invoice' alternative or the relationship to 'enter_oasis'.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_arbitrum_invoice

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It adds domain context (Oasis access, the ETH/Arbitrum network) but omits behavioral traits: whether the call creates a new invoice or retrieves a cached one, expiration times, idempotency, and side effects. 'Get' implies a read-only operation, but invoice generation often implies state creation.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely efficient single sentence (12 words). Front-loaded with action verb, immediately followed by value proposition (ETH/Arbitrum) and sibling differentiation. No redundant or filler words.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Appropriate for a zero-parameter tool with existing output schema. Covers payment network, currency, access purpose, and alternative methods. Could enhance by clarifying whether this generates new payment requests or retrieves existing ones.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With zero parameters present, the rubric's baseline score of 4 applies. No parameters require semantic explanation, and the description correctly implies that no user input is needed to retrieve the invoice.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description provides the specific verb phrase 'Get payment info' and the clear resource context 'to access Oasis'. It explicitly specifies 'ETH on Arbitrum' and distinguishes the tool from its sibling via 'instead of Lightning', clearly positioning it against get_invoice.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies the alternative payment method through 'instead of Lightning', suggesting when to use this tool versus get_invoice. However, it lacks explicit 'when to use' guidance (e.g., 'use when the user prefers ETH over Lightning') and any mention of prerequisites.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge and Score Badge

Two badge variants are available for the giskard-oasis MCP server: a Card Badge and a Score Badge. To embed either, copy its snippet from the server page into your README.md.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
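
As a worked illustration, here is a minimal Python sketch of how these published weights combine. The weights, dimension names, and tier cutoffs come from the paragraphs above; the function names and data shapes are assumptions made for illustration, not Glama's actual implementation.

# Illustrative sketch only. Weights and cutoffs are taken from the
# description above; all names and data shapes are assumed.

TDQ_WEIGHTS = {
    "purpose": 0.25,          # Purpose Clarity
    "usage_guidelines": 0.20,
    "behavior": 0.20,         # Behavioral Transparency
    "parameters": 0.15,       # Parameter Semantics
    "conciseness": 0.10,      # Conciseness & Structure
    "completeness": 0.10,     # Contextual Completeness
}

def tool_tdqs(scores):
    # Weighted 1-5 Tool Definition Quality Score for one tool.
    return sum(TDQ_WEIGHTS[dim] * scores[dim] for dim in TDQ_WEIGHTS)

def overall_score(per_tool_scores, coherence):
    # Server-level definition quality: 60% mean TDQS + 40% minimum TDQS,
    # so a single poorly described tool drags the whole score down.
    tdqs = [tool_tdqs(s) for s in per_tool_scores]
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    # Overall quality: 70% definition quality + 30% server coherence.
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    # A (>=3.5), B (>=3.0), C (>=2.0), D (>=1.0), F (<1.0); B and above passes.
    for cutoff, grade in ((3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")):
        if score >= cutoff:
            return grade
    return "F"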

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/giskard09/giskard-oasis'
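
For example, the same request in Python; this is a minimal sketch that assumes the endpoint returns JSON and makes no assumptions about its field names:

import json
import urllib.request

url = "https://glama.ai/api/mcp/v1/servers/giskard09/giskard-oasis"
with urllib.request.urlopen(url) as resp:
    server = json.load(resp)

# Inspect the returned structure; consult the MCP API docs for the schema.
print(json.dumps(server, indent=2))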

If you have feedback or need assistance with the MCP directory API, please join our Discord server.