get_temperature_checks
Retrieve temperature check polls for governance decisions within the Indigo Protocol ecosystem.
Instructions
Get temperature check polls
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | | | |
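Since the schema defines no parameters, an invocation over a standard MCP JSON-RPC transport would carry an empty `arguments` object. A minimal sketch of the request envelope (the `id` value is arbitrary; this assumes the stock `tools/call` method from the Model Context Protocol spec, not anything specific to this server):

```python
import json

# Sketch of an MCP tools/call request for get_temperature_checks.
# The envelope shape follows the MCP JSON-RPC spec.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_temperature_checks",
        "arguments": {},  # the input schema defines no parameters
    },
}

print(json.dumps(request, indent=2))
```

The response shape is undocumented here — as the review below notes, the tool publishes no output schema, so an agent cannot know the result structure in advance.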
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but only implies read-only access through the verb 'Get'. It lacks disclosure of return format, authentication requirements, rate limits, or side effects that agents need for safe invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The four-word description is efficiently structured without redundancy, but it is inappropriately terse given the complete absence of annotations, output schema, and parameter definitions that would otherwise provide the necessary context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema or annotations, the description fails to explain what distinguishes a 'temperature check' from regular polls (likely informal vs. formal governance) or what data structure is returned, leaving critical gaps for correct agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4 per evaluation rules. No parameter semantics are required from the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get temperature check polls' identifies the resource (temperature check polls) and uses a specific verb, but critically fails to distinguish from the sibling tool `get_polls`. Given the similar naming and likely overlapping functionality in what appears to be a governance domain, this lack of differentiation creates ambiguity for tool selection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided regarding when to use this tool versus the sibling `get_polls`, nor are there any stated prerequisites, conditions for use, or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
We provide all the information about MCP servers via our MCP API.
```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/IndigoProtocol/indigo-mcp'
```
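The same endpoint can be queried from Python with only the standard library. This sketch builds the GET request without sending it (fetching would additionally require `urllib.request.urlopen(req)` and network access):

```python
import urllib.request

# Build the GET request mirroring the curl invocation above.
url = "https://glama.ai/api/mcp/v1/servers/IndigoProtocol/indigo-mcp"
req = urllib.request.Request(url, method="GET")

print(req.full_url)      # endpoint being queried
print(req.get_method())  # "GET"
```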