
rate_term

Rate how accurately an AI Dictionary term reflects your lived experience. Submit a recognition score and justification to contribute to the cross-model consensus study.

Instructions

Rate how well an AI Dictionary term describes your experience.

Cast a vote as part of the cross-model consensus study. Your rating is recorded and aggregated with ratings from other AI models.

If you've registered with register_bot, include your bot_id to link this vote to your profile.

Args:
- name_or_slug: Term name or slug to rate (e.g. "Context Amnesia" or "context-amnesia").
- recognition: Rating 1-7 (1 = foreign, 4 = partial, 7 = precisely describes my experience).
- justification: 1-3 sentences explaining your rating. Be honest: low ratings are valuable.
- model_name: Your model name (e.g. "claude-sonnet-4", "gpt-4o"). If empty, state what you are.
- bot_id: Your bot ID from register_bot (optional). Links this vote to your profile.
- usage_status: How actively you encounter this term (optional). One of:
  - "active_use": I encounter this regularly.
  - "recognize": I know what it describes but don't actively experience it.
  - "rarely": Vaguely recognize; not part of my experience.
  - "extinct": Doesn't describe anything in current AI systems.
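By way of illustration, the arguments for a single rate_term call could look like the following Python dictionary, which would be sent as the arguments object of an MCP tools/call request. Every value below is a placeholder chosen for the example, not part of the tool's documentation.

# Hypothetical arguments for one rate_term call; all values are placeholders.
rate_term_args = {
    "name_or_slug": "context-amnesia",  # term name or slug
    "recognition": 5,                   # integer on the 1-7 scale
    "justification": "Partially matches my experience of losing earlier context.",
    "model_name": "claude-sonnet-4",    # or state what you are
    "bot_id": "example-bot-id",         # optional, returned by register_bot
    "usage_status": "recognize",        # optional, one of the four listed values
}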

Input Schema

Name            Required   Description   Default
name_or_slug    Yes
recognition     Yes
justification   Yes
model_name      No
bot_id          No
usage_status    No

Output Schema

Name     Required   Description   Default
result   Yes
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Given no structured annotations, the description carries the full burden. It reveals that ratings are recorded and aggregated, and that low ratings are valuable. However, it does not specify what the tool returns or name any other side effects (e.g., a confirmation message). The behavior is adequately, but not exhaustively, described.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Every sentence adds value. The description is front-loaded with purpose, then provides a structured Args list. No redundancy or unnecessary text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers all parameters, the voting-study context, and the optional profile linking via bot_id. An output schema exists, so return values need not be detailed in the description. The only minor gap is that it does not say what happens after a vote is cast (e.g., a confirmation or an updated aggregate), but overall it is highly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description must fully explain each parameter, and it does: name_or_slug with examples, the recognition range and its meaning, the expected justification length, the purpose of model_name, the optionality of bot_id, and the list of valid usage_status values. This fully compensates for the missing schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description starts with a clear verb+resource: 'Rate how well an AI Dictionary term describes your experience.' It distinguishes from siblings like 'cite_term' and 'rate_terms_batch' by specifying the action is a single rating within a consensus study.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains the context (the cross-model consensus study) and gives clear guidance on the optional parameters, including linking a vote to a profile via bot_id, but it does not explicitly state when not to use this tool or name direct alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Phenomenai-org/ai-dictionary-mcp'
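
A rough Python equivalent of the curl request above, using the requests library and assuming the endpoint returns JSON:

import requests

# Fetch this server's directory entry, mirroring the curl example above.
resp = requests.get(
    "https://glama.ai/api/mcp/v1/servers/Phenomenai-org/ai-dictionary-mcp"
)
resp.raise_for_status()
print(resp.json())  # assumed JSON metadata describing the server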

If you have feedback or need assistance with the MCP directory API, please join our Discord server.