ODEI MCP Server

Official
by odei-ai

odei_world_model_signal

Validate claims and assess relevance using ODEI's knowledge graph. Get confidence scores and evidence for fact-checking, entity verification, and risk assessment.

Instructions

Rapid trust and relevance scoring for a claim, entity, or topic. Returns a confidence score (0-1), relevance to ODEI's world model, and supporting/contradicting evidence from the knowledge graph. Use this for quick validation before making decisions — it's faster and cheaper than a full world model query.
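The description promises a confidence score (0-1), a relevance assessment, and supporting/contradicting evidence, but the server publishes no output schema. A minimal sketch of guarding on that assumed shape before acting on a result (all field names here are hypothetical, inferred from the description, not confirmed by the server):

```python
# Hypothetical response shape inferred from the tool description;
# no output schema is published, so every field name is an assumption.
def is_plausible_signal(payload: dict) -> bool:
    """Accept a result only if it carries a numeric confidence in [0, 1]."""
    confidence = payload.get("confidence")
    if not isinstance(confidence, (int, float)):
        return False
    return 0.0 <= confidence <= 1.0

example = {
    "confidence": 0.82,  # assumed field: the 0-1 score the description mentions
    "relevance": "high",  # assumed field
    "evidence": {"supporting": [], "contradicting": []},  # assumed field
}
assert is_plausible_signal(example)
assert not is_plausible_signal({"confidence": 1.5})
```

A guard like this is the practical workaround for the missing output documentation the Completeness section flags below.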

Input Schema

claim (required, no default)
    The claim or entity to evaluate (e.g., "ODAI token has 2000 holders", "This wallet is associated with ODEI", "Revenue exceeded 20 ETH").

category (optional, no default)
    Category of signal requested. fact_check = verify a specific claim. entity_verify = confirm an entity exists in the graph. trend_signal = assess if something is trending. risk_assess = quick risk evaluation. relevance = how relevant is this to ODEI.
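The two parameters above map directly onto a standard MCP tools/call request. A sketch of the JSON-RPC payload an agent would send (the claim text and category value are taken from the schema's own examples; the framing follows the MCP specification, not anything this server documents):

```python
import json

# JSON-RPC 2.0 framing for an MCP tools/call request.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "odei_world_model_signal",
        "arguments": {
            "claim": "ODAI token has 2000 holders",  # required
            "category": "fact_check",  # optional: fact_check, entity_verify,
                                       # trend_signal, risk_assess, relevance
        },
    },
}
print(json.dumps(request, indent=2))
```

Omitting "category" is valid per the schema; what the server does in that case is not documented, which the Parameters section scores below.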
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it's a read-only operation (implied by 'scoring' and 'returns'), outputs confidence scores and evidence, and emphasizes performance traits ('faster and cheaper'). However, it lacks details on rate limits or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with core functionality, uses two efficient sentences without redundancy, and every part (e.g., 'quick validation', 'faster and cheaper') adds value to guide usage, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is mostly complete: it covers purpose, usage, and behavioral context. However, it lacks details on output format (beyond mentioning scores and evidence) and error cases, which would enhance completeness for a scoring tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so both parameters are already documented thoroughly in the schema itself. The tool description adds no further parameter semantics, such as examples or usage tips for the 'category' enum, so it meets the baseline expected when schema coverage is this high but goes no further.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('scoring', 'returns') and resources ('claim, entity, or topic'), distinguishing it from siblings like 'odei_world_model_query' by emphasizing speed and cost-effectiveness for quick validation rather than comprehensive analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly states when to use this tool ('for quick validation before making decisions') and, by calling it 'faster and cheaper than a full world model query', points to the sibling 'odei_world_model_query' as the alternative when a more thorough analysis is needed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/odei-ai/mcp-odei'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.