Glama

Server Details

Pay-per-call MCP tools via x402 USDC: ZAR prices, data extraction, Python sandbox, SA flights.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

6 tools
agent_example

POST /agents/agent_example/run — Single-turn Claude Sonnet inference endpoint. Input: {question: string, max_tokens: integer (default 1024)}. Output: {success, answer, usage: {input_tokens, output_tokens}, error}. No tool use or agentic loop — direct model call. Use for QA, summarisation, or classification tasks. Cost: $0.0100 USDC per call.
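
As a concrete illustration of the input contract above, here is a minimal client-side sketch that validates and builds the request body. `build_agent_request` is a hypothetical helper, not part of the service; the x402 USDC payment handshake itself is assumed to be handled by a separate x402-aware HTTP client.

```python
# Hypothetical client-side helper: builds the JSON body for
# POST /agents/agent_example/run per the documented input schema
# {question: string, max_tokens: integer (default 1024)}.
# The x402 payment flow is out of scope here.

def build_agent_request(question: str, max_tokens: int = 1024) -> dict:
    """Validate inputs and return the request body for the agent endpoint."""
    if not isinstance(question, str) or not question.strip():
        raise ValueError("question must be a non-empty string")
    if not isinstance(max_tokens, int) or max_tokens < 1:
        raise ValueError("max_tokens must be a positive integer")
    return {"question": question, "max_tokens": max_tokens}

body = build_agent_request("Summarise the x402 payment flow in one sentence.")
print(body["max_tokens"])  # 1024 -- the documented default was applied
```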

Parameters (JSON Schema)

question (required): The question or task for the agent to reason about
max_tokens (optional): Maximum tokens for the response (default: 1024)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it's a single-turn endpoint (no multi-turn conversations), specifies the exact HTTP method (POST), discloses cost information ($0.0100 USDC per call), and clarifies what the tool does NOT do (no tool use or agentic loops). However, it doesn't mention rate limits, authentication requirements, or error handling specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with zero wasted words. It front-loads the endpoint path and core functionality, then provides input/output format, usage guidelines, and cost information in a logical sequence. Every sentence earns its place by adding distinct value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 2 parameters, 100% schema coverage, and no output schema, the description provides excellent context about what the tool does, when to use it, and behavioral constraints. It covers the tool's purpose, limitations, and cost implications well. The main gap is the lack of output schema, but the description partially compensates by describing the output structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description mentions the parameters in parentheses but doesn't add meaningful semantic context beyond what the schema provides. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Single-turn Claude Sonnet inference endpoint'), the resource ('agents/agent_example/run'), and distinguishes it from alternatives by explicitly stating 'No tool use or agentic loop — direct model call'. It provides a verb+resource combination with clear scope limitations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Use for QA, summarisation, or classification tasks') and when not to use it ('No tool use or agentic loop — direct model call'), providing clear alternatives for different use cases. It also mentions cost implications which informs usage decisions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tool_compute_sandbox

POST /tools/tool_compute_sandbox/run — Executes Python 3.12 code in an isolated subprocess with a 5-second hard timeout. Input: {python_code: string, input_data: any (optional, bound as variable 'input_data')}. Output: {success, result, stdout (capped 50KB), execution_time_ms, error_type}. Return value: assign to 'result' variable. Pre-loaded: math, json, re, statistics, itertools, functools, collections, decimal, datetime, random, hashlib, base64. Blocked: import, open(), eval(), exec(), os, sys, network, class definitions, dunder attributes. error_type values: syntax_error | security_error | runtime_error | timeout_error. Cost: $0.1500 USDC per call.
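
The calling convention (pre-loaded modules, `input_data` binding, `result` return variable) can be illustrated with a local emulation. This is only a sketch of the contract, not the service's actual sandbox, and it uses `exec` locally purely for demonstration:

```python
import statistics

# Illustrative payload for POST /tools/tool_compute_sandbox/run: the snippet
# uses only a pre-loaded module (statistics), reads its input from the
# 'input_data' variable, and assigns its return value to 'result'.
payload = {
    "python_code": "result = {'median': statistics.median(input_data), 'n': len(input_data)}",
    "input_data": [3, 5, 7, 9],
}

# Local emulation of the documented convention (NOT the real sandbox, which
# runs server-side with a 5-second timeout and security filtering).
namespace = {"statistics": statistics, "input_data": payload["input_data"]}
exec(payload["python_code"], namespace)
print(namespace["result"])  # {'median': 6.0, 'n': 4}
```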

Parameters (JSON Schema)

python_code (required): Python 3.12 source code to execute. No import statements. Set 'result = <value>' to return a value. Pre-loaded modules are in scope: math, json, re, statistics, itertools, functools, collections, decimal, datetime, random, string, textwrap, hashlib, base64, struct, copy, pprint.
input_data (optional): JSON-serializable value (dict, list, str, number, bool, or null) passed as the variable 'input_data' inside the sandbox. Omit or pass null if the code has no external input.

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and excels at disclosing critical behavioral traits: hard timeout (5 seconds), security restrictions (blocked imports/functions), pre-loaded modules, output structure, error types, and cost per call. This provides comprehensive operational context beyond basic functionality.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is densely packed with essential information in a well-structured single paragraph. Every sentence serves a distinct purpose: execution method, input format, output structure, return mechanism, available libraries, restrictions, error handling, and cost. No wasted words.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex code execution tool with no annotations and no output schema, the description provides exceptional completeness. It covers execution environment, security model, input/output semantics, error handling, and cost - giving the agent everything needed to understand when and how to use this tool effectively.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds valuable context about parameter usage: it explains how input_data is bound as a variable, clarifies optionality, and provides guidance on returning values via the 'result' variable. This enhances understanding beyond the schema's technical specifications.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Executes Python 3.12 code in an isolated subprocess') and distinguishes it from siblings by specifying the exact runtime environment and constraints. It goes beyond a simple verb+resource to define the execution context.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool (for executing Python code with specific constraints) and mentions cost implications. However, it doesn't explicitly state when NOT to use it or compare it to alternative tools like 'tool_data_transformer' or 'tool_example' that might handle similar tasks differently.

tool_data_transformer

POST /tools/tool_data_transformer/run — Extracts structured JSON from raw text using a caller-supplied JSON Schema. Input: {raw_text: string, target_json_schema: object (JSON Schema draft-07)}. Output: {success, extracted_data, extraction_method, validation_passed, error}. extraction_method is one of: 'direct_parse', 'embedded_json', 'regex_extraction'. No LLM involved — pure parsing pipeline. Type coercion applied for integer/number/boolean fields. Works best with flat schemas; deeply nested structures extract less reliably via key-value pass. Cost: $0.0500 USDC per call.
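
The three-step fallback order named in `extraction_method` can be sketched as a toy pipeline. This is a re-implementation for illustration only, assuming the documented order; the service's actual parsing rules are not shown here:

```python
import json
import re

# Toy illustration of the documented fallback order: 'direct_parse', then
# 'embedded_json', then 'regex_extraction'. NOT the service's code.
def extract(raw_text: str):
    try:  # 1. direct_parse: the whole text is valid JSON
        return "direct_parse", json.loads(raw_text)
    except json.JSONDecodeError:
        pass
    m = re.search(r"\{.*\}", raw_text, re.DOTALL)  # 2. embedded_json
    if m:
        try:
            return "embedded_json", json.loads(m.group(0))
        except json.JSONDecodeError:
            pass
    # 3. regex_extraction: fall back to simple key: value pairs
    pairs = dict(re.findall(r"(\w+):\s*([^\n,]+)", raw_text))
    return "regex_extraction", pairs or None

method, data = extract('Order confirmed: {"name": "Thabo", "age": 34}')
print(method, data)  # embedded_json {'name': 'Thabo', 'age': 34}
```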

Parameters (JSON Schema)

raw_text (required): Unstructured or semi-structured text to extract data from. Examples: API response body, email content, log lines, form submissions, scraped web pages, or any text that contains the values you need.
target_json_schema (required): A JSON Schema object describing the expected output structure. Define 'properties' with a 'type' for each field you want extracted, and list required fields under 'required'. Example: {"type": "object", "properties": {"name": {"type": "string"}, "age": {"type": "integer"}}, "required": ["name", "age"]}

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and excels by disclosing key behavioral traits: cost ('$0.0500 USDC per call'), extraction methods ('direct_parse', 'embedded_json', 'regex_extraction'), type coercion ('Type coercion applied for integer/number/boolean fields'), reliability limitations ('deeply nested structures extract less reliably'), and output structure details. It fully informs the agent about operational characteristics.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by essential details in a logical flow: input format, output structure, methods, limitations, and cost. Every sentence earns its place with zero waste, making it highly efficient and easy to parse.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a data extraction tool with no annotations and no output schema, the description is complete enough. It explains the output structure, extraction methods, reliability constraints, and cost, compensating for the lack of structured output schema and providing all necessary context for effective use.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds minimal value beyond the schema by mentioning the parameters in the input format but does not provide additional syntax, format, or usage details. This meets the baseline for high schema coverage.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('extracts structured JSON from raw text') and resources ('caller-supplied JSON Schema'), distinguishing it from sibling tools like agent_example or tool_zar_prices. It specifies the exact transformation process without ambiguity.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool ('Works best with flat schemas; deeply nested structures extract less reliably via key-value pass') and mentions what it does not involve ('No LLM involved — pure parsing pipeline'). However, it does not explicitly state when not to use it or name specific alternatives among siblings.

tool_example

POST /tools/tool_example/run — Stateless text utility for testing x402 payment flows. Input: {text: string}. Output: {original, uppercase, word_count, char_count}. Cost: $0.0010 USDC per call.
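
Since the output fields are simple text statistics, a local mirror of the documented output shape is easy to sketch, which is handy for sanity-checking a paid response. `expected_output` is this example's own name, not part of the service:

```python
# Local mirror of tool_example's documented output shape
# {original, uppercase, word_count, char_count}. The real tool runs
# server-side behind the $0.0010 USDC x402 paywall.
def expected_output(text: str) -> dict:
    return {
        "original": text,
        "uppercase": text.upper(),
        "word_count": len(text.split()),
        "char_count": len(text),
    }

out = expected_output("hello x402 world")
print(out["uppercase"], out["word_count"], out["char_count"])  # HELLO X402 WORLD 3 16
```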

Parameters (JSON Schema)

text (required): Input text to process

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure and does this well. It reveals the tool is stateless (important behavioral trait), discloses the exact cost per call ($0.0010 USDC), and specifies the HTTP method (POST). It also describes the output structure, though no output schema exists. The only gap is lack of information about rate limits or error conditions.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in a single sentence that packs substantial information. It's front-loaded with the most important details (POST endpoint, purpose, input/output). However, the inclusion of the exact endpoint path ('POST /tools/tool_example/run') could be considered slightly verbose since this is typically metadata rather than descriptive content.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no annotations but complete schema coverage, the description provides excellent context. It covers purpose, behavioral traits (stateless, cost), input/output structure, and specific domain context. The main gap is the lack of an output schema, though the description compensates by documenting the return values. It doesn't address potential error cases or rate limits.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds minimal value beyond the schema by mentioning 'Input: {text: string}' which essentially repeats what the schema already documents. No additional semantic context about the text parameter is provided beyond what's in the schema description.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Stateless text utility for testing x402 payment flows') and distinguishes it from siblings by mentioning its testing focus and text processing functionality. It explicitly identifies the resource being processed (text) and the specific domain context (x402 payment flows).

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool ('for testing x402 payment flows'), but doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools. The testing context gives good guidance but lacks explicit exclusions or comparisons to other tools.

tool_sa_airport_oracle

POST /tools/sa-airport-oracle/run — Returns live flight status from ACSA (airports.co.za). Input: {airport_code: 'JNB'|'CPT'|'DUR', flight_number: string, request_type: 'arrival'|'departure'}. Output: {success, live_status, scheduled_time, estimated_time, actual_time, gate, carousel, terminal, flight_number, airport_code, request_type, error}. Coverage: JNB (O.R. Tambo), CPT (Cape Town Int'l), DUR (King Shaka). Data window: flights within 48 hours. Call GET /tools/sa-airport-oracle/health (free) first — if structure_valid=false, do not proceed. error_type values: 'stale_data' (do not retry), 'not found' (retry after 10-15 min), network error (retry once). flight_number is case-insensitive and normalised to uppercase internally. Read-only — no booking/ticketing. Cost: $0.1200 USDC per call.
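
The per-error retry guidance above can be captured in a small client-side policy function. The action names and the `network_error` label are this sketch's own assumptions (the description says "network error"); only the retry rules themselves come from the tool description:

```python
# Client-side retry policy following the documented error_type guidance:
# 'stale_data' -> do not retry; 'not found' -> retry after 10-15 min;
# network error -> retry once. Action names are illustrative.
def retry_action(error_type: str, attempt: int) -> str:
    if error_type == "stale_data":
        return "give_up"                    # do not retry
    if error_type == "not found":
        return "retry_after_10_15_min"      # flight may not be on the board yet
    if error_type == "network_error":       # assumed label for "network error"
        return "retry_once" if attempt == 0 else "give_up"
    return "give_up"

print(retry_action("not found", 0))  # retry_after_10_15_min
```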

Parameters (JSON Schema)

airport_code (required): IATA airport code. JNB=O.R. Tambo (Johannesburg), CPT=Cape Town, DUR=King Shaka (Durban).
request_type (required): Search the arrivals board or departures board.
flight_number (required): IATA flight number, e.g. 'SA322'. Case-insensitive.

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and excels by disclosing critical behavioral traits: cost ($0.1200 USDC per call), read-only nature, data window (flights within 48 hours), coverage details for airports, error handling strategies, and case-insensitive normalization of flight_number. It comprehensively covers operational constraints.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and structured efficiently, covering multiple aspects (input, output, coverage, data window, health check, error handling, cost) in a dense but clear manner. Every sentence adds critical information, though it could be slightly more streamlined.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a paid, external API tool with no annotations or output schema, the description is highly complete. It details input/output structures, operational constraints (cost, health check, error handling), and behavioral aspects, providing all necessary context for an agent to use the tool effectively.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by clarifying airport coverage (e.g., JNB=O.R. Tambo), specifying the data window (48 hours), and noting that flight_number is case-insensitive and normalized to uppercase, which enhances understanding beyond the schema's enum and type descriptions.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Returns live flight status') and resources (ACSA airports). It distinguishes itself from sibling tools by focusing on flight status retrieval, which is unique among the listed siblings like price tools, data transformers, and sandboxes.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage instructions: call the health endpoint first and check structure_valid, do not proceed if false. It also specifies when to retry based on error_type values (e.g., 'not found' retry after 10-15 min, network error retry once) and when not to retry ('stale_data').

tool_zar_prices

POST /tools/zar-prices/run — Returns live bid/ask/last prices for crypto/ZAR pairs. Input: {pair: 'BTC/ZAR'|'ETH/ZAR'|'SOL/ZAR'|'USDC/ZAR'|'all'}. Output: array of {exchange, pair, price, bid, ask, timestamp} objects. Sources: VALR (all 4 pairs), Luno (BTC/ZAR + ETH/ZAR only). SOL/ZAR and USDC/ZAR are VALR-only. Fetches all exchanges concurrently. Timestamps are ISO-8601 UTC. Cost: $0.0050 USDC per call.
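
A typical consumer of the documented output array is best-price selection across exchanges. The quotes below are made-up sample data in the documented shape, not real VALR or Luno prices:

```python
# Illustrative post-processing of the documented output shape: pick the
# best bid and ask for a pair across exchanges. Sample data only.
quotes = [
    {"exchange": "VALR", "pair": "BTC/ZAR", "price": 1250000,
     "bid": 1249500, "ask": 1250500, "timestamp": "2025-01-01T00:00:00Z"},
    {"exchange": "Luno", "pair": "BTC/ZAR", "price": 1251000,
     "bid": 1250000, "ask": 1251200, "timestamp": "2025-01-01T00:00:00Z"},
]

def best_quote(quotes: list, pair: str) -> dict:
    rows = [q for q in quotes if q["pair"] == pair]
    return {
        "best_bid": max(rows, key=lambda q: q["bid"]),  # highest buyer price
        "best_ask": min(rows, key=lambda q: q["ask"]),  # lowest seller price
    }

best = best_quote(quotes, "BTC/ZAR")
print(best["best_bid"]["exchange"], best["best_ask"]["exchange"])  # Luno VALR
```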

Parameters (JSON Schema)

pair (optional, default: 'all'): Trading pair to fetch. Use 'all' to fetch every supported pair concurrently.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does an excellent job disclosing behavioral traits: it specifies data sources (VALR, Luno), concurrency behavior ('Fetches all exchanges concurrently'), timestamp format ('ISO-8601 UTC'), and cost information ('$0.0050 USDC per call').

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core functionality, followed by essential details. Every sentence earns its place: endpoint, input format, output format, sources, availability details, concurrency, timestamp format, and cost.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (financial data from multiple exchanges), no annotations, and no output schema, the description provides complete context: it explains the return format ('array of {exchange, pair, price, bid, ask, timestamp} objects'), data sources, pair availability, concurrency behavior, timestamp format, and cost.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds some value by explaining what 'all' does ('fetch every supported pair concurrently') and listing specific pair availability, but doesn't provide significant additional parameter semantics beyond what's in the schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Returns live bid/ask/last prices') and resources ('crypto/ZAR pairs'), and distinguishes it from sibling tools by specifying its unique financial data retrieval function.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use specific pair values (e.g., 'SOL/ZAR and USDC/ZAR are VALR-only') and mentions fetching all exchanges concurrently, but doesn't explicitly state when to use this tool versus alternatives or when not to use it.

