get_latest_block
Retrieve the current block from the Cross-LLM MCP Server to access unified LLM API data across multiple providers.
Instructions
Get the latest block
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | | | |
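Because the tool takes no arguments, an MCP `tools/call` request needs only the tool name and an empty `arguments` object. A minimal sketch of the JSON-RPC payload a client would send (the transport and client setup are assumed, not shown):

```python
import json

# Hypothetical MCP "tools/call" request for get_latest_block.
# The tool declares zero parameters, so "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_latest_block",
        "arguments": {},
    },
}

print(json.dumps(request, indent=2))
```

Any non-empty `arguments` object would simply be ignored or rejected by a strict server, since the input schema defines no properties.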
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states a read operation ('Get') but doesn't disclose behavioral traits: whether this is a network call, whether it returns cached or real-time data, possible error conditions, or the response format. For a tool with zero annotation coverage, this leaves significant gaps in understanding how it behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It's front-loaded with the core action and resource, making it easy to parse. For a simple tool, this minimal structure is appropriate and earns full marks for conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is minimally adequate but incomplete. It lacks context on what 'latest block' means (e.g., blockchain context), return format, or error handling. With no annotations or output schema, the description should provide more behavioral context to be fully helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, so the empty schema is trivially fully covered. The description doesn't need to explain parameters, and it correctly implies no inputs are required. This meets the baseline for parameterless tools, though it could note the absence of parameters more explicitly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('latest block'), making the purpose immediately understandable. It doesn't distinguish from sibling tools like 'get_transaction', but for a simple read operation with no parameters, this is adequate. The description avoids tautology by specifying 'latest' rather than just restating the name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_transaction' or other siblings. It doesn't mention context, prerequisites, or exclusions. The agent must infer usage from the name alone, which is insufficient for optimal tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
We provide all the information about MCP servers via our MCP API.
```
curl -X GET 'https://glama.ai/api/mcp/v1/servers/JamesANZ/cross-llm-mcp'
```
If you have feedback or need assistance with the MCP directory API, please join our Discord server.