openclaw-agent-tools
Server Details
Weather, code search, currency & Solana trust scoring as MCP tools. Free, no API key needed.
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | baronsengir007/openclaw-agent-tools |
| GitHub Stars | 0 |
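Because the server is reachable over Streamable HTTP, a standard MCP client can enumerate its tools with a plain JSON-RPC request. A minimal sketch of the `tools/list` request body (the `id` value is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {}
}
```

The response lists each tool's name, description, and input schema, matching the four tools documented below.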
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 4 of 4 tools scored. Lowest: 3.2/5.
Each tool has a clearly distinct purpose targeting different domains: code search, currency conversion, wallet trust scoring, and weather. There is no overlap in functionality, making it easy for an agent to select the correct tool without confusion.
All tool names follow a consistent 'agent_' prefix with descriptive suffixes (e.g., agent_code_search, agent_currency). This uniform pattern enhances readability and predictability across the toolset.
With 4 tools, the count is reasonable for a general-purpose utility server, though it feels slightly thin for broader agent tasks. Each tool is well-defined, but more tools could enhance coverage without becoming overwhelming.
The tools cover diverse domains (code, finance, crypto, weather) but lack cohesion as a set for a specific purpose, making it hard to assess coverage. There are no obvious gaps within each domain, but the overall surface feels fragmented rather than complete for a unified workflow.
Available Tools
4 tools

agent_code_search (B)
Search GitHub repositories by topic, language, or description. Returns top repositories with star counts, descriptions, and URLs. Useful for finding libraries, implementations, and examples. Example: 'MCP server Python' or 'agent framework TypeScript'.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query for GitHub repositories. Examples: 'MCP server Python', 'solana smart contract rust', 'react hooks typescript' | |
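Given the schema above, a `tools/call` request for this tool might look like the following sketch (the `id` value is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "agent_code_search",
    "arguments": { "query": "MCP server Python" }
  }
}
```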
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions that the tool returns top repositories with specific details, but it doesn't disclose behavioral traits such as rate limits, authentication needs, pagination, error handling, or what 'top' means (e.g., sorting by stars). This leaves gaps in understanding how the tool behaves beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core functionality and moving to usage examples. All sentences earn their place by explaining what the tool does and how it can be used, though it could be slightly more structured (e.g., separating purpose from examples).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search with one parameter) and no annotations or output schema, the description is somewhat complete but has gaps. It covers the purpose and basic usage but lacks details on behavioral aspects like limitations or response format. For a search tool without structured output, more context on return values would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single parameter 'query', with examples provided. The description adds minimal value beyond the schema by reiterating search criteria (topic, language, description) and giving example queries, but it doesn't provide additional syntax or format details. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches GitHub repositories by specific criteria (topic, language, description) and returns repository information with star counts, descriptions, and URLs. It provides a specific verb ('search') and resource ('GitHub repositories'), though it doesn't explicitly differentiate from sibling tools like agent_currency or agent_weather, which are unrelated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for finding libraries, implementations, and examples, and provides example queries, but it doesn't explicitly state when to use this tool versus alternatives or any exclusions. Since sibling tools are unrelated (currency, trust_score, weather), no direct comparison is needed, but guidance on context is limited to implied scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agent_currency (A)
Convert between currencies or get current exchange rates. Returns conversion result, rate, and major currency rates. Powered by open.er-api.com (free, no API key required). Example: 'convert 100 USD to EUR' or 'EUR to JPY rate'.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Currency conversion query. Examples: 'convert 100 USD to EUR', '50 GBP in JPY', 'USD to BTC rate' | |
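A sketch of a `tools/call` request for this tool, using one of the example queries from the schema:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "agent_currency",
    "arguments": { "query": "convert 100 USD to EUR" }
  }
}
```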
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about the data source (open.er-api.com), cost (free, no API key required), and return values (conversion result, rate, major currency rates). However, it doesn't mention rate limits, error handling, or data freshness, leaving some behavioral aspects unclear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise and front-loaded, with every sentence earning its place. The first sentence states the core functionality, the second explains returns and data source, and the third provides clear examples—all without any wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description is mostly complete. It covers purpose, usage, data source, and return values. However, without an output schema, it could benefit from more detail about the response structure, preventing a perfect score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents the single 'query' parameter with examples. The description adds minimal value beyond what's in the schema, only reinforcing the same examples without providing additional syntax or format details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('convert between currencies', 'get current exchange rates') and resources (currencies). It distinguishes itself from sibling tools like code search, trust score, and weather by focusing exclusively on currency operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context with examples ('convert 100 USD to EUR' or 'EUR to JPY rate'), showing when to use this tool for currency conversion or rate queries. However, it doesn't explicitly state when NOT to use it or mention alternatives, which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agent_trust_score (A)
Get a trust score for a Solana wallet address. Queries on-chain data: transaction count, last activity, and SOL balance. Returns trust_score (0.0–1.0), tier (unknown/emerging/established/verified), and detailed signals. Useful before delegating tasks or payments to an agent wallet.
| Name | Required | Description | Default |
|---|---|---|---|
| wallet_address | Yes | Solana wallet address in base58 encoding (32–44 characters). Example: 9WzDXwBbmkg8ZTbNMqUxvQRAyrZzDsGYdLVL9zYtAWWM | |
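A sketch of a `tools/call` request for this tool, using the example address from the schema:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "agent_trust_score",
    "arguments": {
      "wallet_address": "9WzDXwBbmkg8ZTbNMqUxvQRAyrZzDsGYdLVL9zYtAWWM"
    }
  }
}
```

Per the tool description, the result carries a `trust_score` between 0.0 and 1.0, a `tier` (unknown/emerging/established/verified), and detailed signals; the exact result field layout is not published in an output schema.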
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explains what data is queried (transaction count, last activity, SOL balance) and the return format (trust_score, tier, detailed signals), but it lacks details on rate limits, error handling, or performance characteristics, leaving some behavioral aspects unclear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose and followed by key details like data sources and usage context. Every sentence adds value without redundancy, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description is mostly complete: it covers purpose, data sources, return values, and usage context. However, it could be enhanced by specifying output details like the meaning of tiers or signal types, which are not in an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, fully documenting the single required parameter (wallet_address). The description adds no additional parameter semantics beyond what the schema provides, so it meets the baseline score of 3 without compensating for any gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('trust score for a Solana wallet address'), and it distinguishes this from sibling tools like agent_code_search, agent_currency, and agent_weather by focusing on wallet trust assessment rather than code, currency, or weather queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool ('Useful before delegating tasks or payments to an agent wallet'), but it does not explicitly state when not to use it or name alternatives among the sibling tools, which are unrelated to wallet trust scoring.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agent_weather (A)
Get real-time weather and 3-day forecast for any city worldwide. Returns current temperature, wind speed, precipitation, and conditions. Powered by OpenMeteo (free, no API key required). Example: 'weather in Amsterdam' or 'forecast for Tokyo'.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | City name or weather query. Examples: 'Amsterdam', 'weather in Tokyo', 'forecast for New York' | |
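A sketch of a `tools/call` request for this tool, using one of the example queries from the schema:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "agent_weather",
    "arguments": { "query": "weather in Amsterdam" }
  }
}
```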
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a read-only operation (implied by 'Get'), discloses the data source ('Powered by OpenMeteo'), and notes no authentication requirements ('free, no API key required'). However, it lacks details on rate limits, error handling, or response format, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by supporting details and examples, all in three efficient sentences. Every sentence adds value—explaining functionality, data returned, source, and usage—with zero waste, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, no output schema, no annotations), the description is mostly complete: it covers purpose, usage, and behavioral aspects like data source and authentication. However, without an output schema, it could better explain the return format (e.g., structure of forecast data), leaving a minor gap in context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting the single parameter 'query' with examples. The description adds minimal value beyond this, only reinforcing the parameter's purpose through the tool's examples. Since the schema does the heavy lifting, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get real-time weather and 3-day forecast') and resources ('for any city worldwide'), distinguishing it from sibling tools like currency conversion or code search. It explicitly mentions what data is returned (temperature, wind speed, etc.), making the function unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (for weather queries worldwide) and includes examples ('weather in Amsterdam', 'forecast for Tokyo'), but it does not explicitly state when not to use it or mention alternatives among the sibling tools. This gives good guidance but lacks exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.