Glama

Server Details

Weather, code search, currency & Solana trust scoring as MCP tools. Free, no API key needed.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL
Repository: baronsengir007/openclaw-agent-tools
GitHub Stars: 0
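With the Streamable HTTP transport, each JSON-RPC message is POSTed to the server's endpoint, and the client advertises via the Accept header that it can handle either a plain JSON response or an SSE stream. A minimal sketch of the request shape, assuming a placeholder endpoint URL and the 2025-03-26 protocol revision (the client name and version below are made up):

```python
import json

# Placeholder endpoint; the real URL is listed on the server's connector page.
MCP_URL = "https://example.com/mcp"

# Streamable HTTP: the client must accept both JSON and SSE responses.
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
}

# Standard MCP initialize request (JSON-RPC 2.0).
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

body = json.dumps(initialize)
print(body)
```

The same POST-per-message pattern applies to the tools/list and tools/call requests shown for the individual tools below.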

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 3.9/5 across 4 of 4 tools scored. Lowest: 3.2/5.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose targeting different domains: code search, currency conversion, wallet trust scoring, and weather. There is no overlap in functionality, making it easy for an agent to select the correct tool without confusion.

Naming Consistency: 5/5

All tool names follow a consistent 'agent_' prefix with descriptive suffixes (e.g., agent_code_search, agent_currency). This uniform pattern enhances readability and predictability across the toolset.

Tool Count: 4/5

With 4 tools, the count is reasonable for a general-purpose utility server, though it feels slightly thin for broader agent tasks. Each tool is well-defined, but more tools could enhance coverage without becoming overwhelming.

Completeness: 3/5

The tools cover diverse domains (code, finance, crypto, weather) but lack cohesion as a set for a specific purpose, making it hard to assess coverage. There are no obvious gaps within each domain, but the overall surface feels fragmented rather than complete for a unified workflow.

Available Tools

4 tools
agent_currency (Grade: A)

Convert between currencies or get current exchange rates. Returns conversion result, rate, and major currency rates. Powered by open.er-api.com (free, no API key required). Example: 'convert 100 USD to EUR' or 'EUR to JPY rate'.

Parameters (JSON Schema)

query (required): Currency conversion query. Examples: 'convert 100 USD to EUR', '50 GBP in JPY', 'USD to BTC rate'
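As an illustration, an MCP tools/call request for this tool might be shaped as follows (the request id is arbitrary, and the query string is taken from the tool's own examples):

```python
import json

# Hypothetical tools/call request for agent_currency (JSON-RPC 2.0 shape
# per the MCP spec; id chosen arbitrarily).
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "agent_currency",
        "arguments": {"query": "convert 100 USD to EUR"},
    },
}

print(json.dumps(call, indent=2))
```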
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about the data source (open.er-api.com), cost (free, no API key required), and return values (conversion result, rate, major currency rates). However, it doesn't mention rate limits, error handling, or data freshness, leaving some behavioral aspects unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and front-loaded, with every sentence earning its place. The first sentence states the core functionality, the second explains returns and data source, and the third provides clear examples—all without any wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description is mostly complete. It covers purpose, usage, data source, and return values. However, without an output schema, it could benefit from more detail about the response structure, preventing a perfect score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents the single 'query' parameter with examples. The description adds minimal value beyond what's in the schema, only reinforcing the same examples without providing additional syntax or format details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('convert between currencies', 'get current exchange rates') and resources (currencies). It distinguishes itself from sibling tools like code search, trust score, and weather by focusing exclusively on currency operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context with examples ('convert 100 USD to EUR' or 'EUR to JPY rate'), showing when to use this tool for currency conversion or rate queries. However, it doesn't explicitly state when NOT to use it or mention alternatives, which prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

agent_trust_score (Grade: A)

Get a trust score for a Solana wallet address. Queries on-chain data: transaction count, last activity, and SOL balance. Returns trust_score (0.0–1.0), tier (unknown/emerging/established/verified), and detailed signals. Useful before delegating tasks or payments to an agent wallet.

Parameters (JSON Schema)

wallet_address (required): Solana wallet address in base58 encoding (32–44 characters). Example: 9WzDXwBbmkg8ZTbNMqUxvQRAyrZzDsGYdLVL9zYtAWWM
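The schema constrains wallet_address to base58 text of 32–44 characters. A client-side sanity check along those lines can catch malformed input before spending a tool call; this is a sketch (the helper name is made up), not a substitute for on-chain validation:

```python
# Base58 alphabet used by Solana addresses (Bitcoin-style: no 0, O, I, or l).
BASE58_ALPHABET = set(
    "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"
)

def looks_like_solana_address(addr: str) -> bool:
    """Cheap format check matching the schema's stated constraint:
    base58 characters only, 32-44 characters long."""
    return 32 <= len(addr) <= 44 and all(c in BASE58_ALPHABET for c in addr)

# The schema's own example address passes the check.
print(looks_like_solana_address("9WzDXwBbmkg8ZTbNMqUxvQRAyrZzDsGYdLVL9zYtAWWM"))  # True
```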
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It explains what data is queried (transaction count, last activity, SOL balance) and the return format (trust_score, tier, detailed signals), but it lacks details on rate limits, error handling, or performance characteristics, leaving some behavioral aspects unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose and followed by key details like data sources and usage context. Every sentence adds value without redundancy, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description is mostly complete: it covers purpose, data sources, return values, and usage context. However, it could be enhanced by specifying output details like the meaning of tiers or signal types, which are not in an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, fully documenting the single required parameter (wallet_address). The description adds no additional parameter semantics beyond what the schema provides, so it meets the baseline score of 3 without compensating for any gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('trust score for a Solana wallet address'), and it distinguishes this from sibling tools like agent_code_search, agent_currency, and agent_weather by focusing on wallet trust assessment rather than code, currency, or weather queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool ('Useful before delegating tasks or payments to an agent wallet'), but it does not explicitly state when not to use it or name alternatives among the sibling tools, which are unrelated to wallet trust scoring.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

agent_weather (Grade: A)

Get real-time weather and 3-day forecast for any city worldwide. Returns current temperature, wind speed, precipitation, and conditions. Powered by OpenMeteo (free, no API key required). Example: 'weather in Amsterdam' or 'forecast for Tokyo'.

Parameters (JSON Schema)

query (required): City name or weather query. Examples: 'Amsterdam', 'weather in Tokyo', 'forecast for New York'
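A hypothetical tools/call request for this tool, covering both query styles the schema's examples allow, a bare city name and a phrased query (the helper name and request ids are illustrative):

```python
import json

def weather_call(request_id: int, query: str) -> dict:
    """Build a JSON-RPC 2.0 tools/call request for agent_weather."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "agent_weather", "arguments": {"query": query}},
    }

bare = weather_call(3, "Amsterdam")          # bare city name
phrased = weather_call(4, "forecast for Tokyo")  # phrased query

print(json.dumps(bare, indent=2))
```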
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a read-only operation (implied by 'Get'), discloses the data source ('Powered by OpenMeteo'), and notes no authentication requirements ('free, no API key required'). However, it lacks details on rate limits, error handling, or response format, leaving some gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by supporting details and examples, all in three efficient sentences. Every sentence adds value—explaining functionality, data returned, source, and usage—with zero waste, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter, no output schema, no annotations), the description is mostly complete: it covers purpose, usage, and behavioral aspects like data source and authentication. However, without an output schema, it could better explain the return format (e.g., structure of forecast data), leaving a minor gap in context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, documenting the single parameter 'query' with examples. The description adds minimal value beyond this, only reinforcing the parameter's purpose through the tool's examples. Since the schema does the heavy lifting, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get real-time weather and 3-day forecast') and resources ('for any city worldwide'), distinguishing it from sibling tools like currency conversion or code search. It explicitly mentions what data is returned (temperature, wind speed, etc.), making the function unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (for weather queries worldwide) and includes examples ('weather in Amsterdam', 'forecast for Tokyo'), but it does not explicitly state when not to use it or mention alternatives among the sibling tools. This gives good guidance but lacks exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
