get_user
Retrieve user information from Carbon Voice by providing a user ID to access conversation data and contact details.
Instructions
Get a User by their ID.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | | |
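As a point of reference, here is a minimal sketch of how an agent-side client might call this tool over MCP using the TypeScript SDK. The launch command, client name, and user ID are illustrative placeholders, not values documented by this server.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main(): Promise<void> {
  // Placeholder launch command: point this at however you run cv-mcp-server locally.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "cv-mcp-server"],
  });

  const client = new Client(
    { name: "example-client", version: "1.0.0" },
    { capabilities: {} }
  );
  await client.connect(transport);

  // tools/call with the single required parameter from the input schema above.
  const result = await client.callTool({
    name: "get_user",
    arguments: { id: "usr_123" }, // placeholder user ID
  });

  console.log(JSON.stringify(result, null, 2));
  await client.close();
}

main().catch(console.error);
```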
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations provide readOnlyHint=true and destructiveHint=false, indicating a safe read operation. The description adds no behavioral context beyond this (e.g., error handling, permissions, or rate limits), but it does not contradict the annotations. With the annotations covering safety and the description adding minimal value, a baseline score of 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
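For context, the sketch below shows where such annotations sit in a tool registration with the TypeScript SDK's `registerTool` API; this is not the server's actual source, and the handler is a placeholder.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "example-server", version: "1.0.0" });

// Annotations carry machine-readable safety hints; behavioral context such as
// auth requirements, rate limits, or error modes belongs in the description.
server.registerTool(
  "get_user",
  {
    description: "Get a User by their ID.",
    inputSchema: { id: z.string() },
    annotations: { readOnlyHint: true, destructiveHint: false },
  },
  async ({ id }) => ({
    // Placeholder handler: a real implementation would call the Carbon Voice API.
    content: [{ type: "text", text: `Requested user ${id}` }],
  })
);
```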
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words, front-loading the core action. It's appropriately sized for a simple lookup tool, earning full marks for conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, no output schema), the annotations cover safety and the description clarifies the parameter. However, it lacks detail on return values (there is no output schema) and usage context, making it adequate but with gaps. A score of 3 reflects minimum viability for this simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description mentions 'by their ID', clarifying the single parameter 'id' as a user identifier. This adds meaning beyond the bare schema, though it doesn't spell out the expected format (e.g., string type). With only one parameter, the description partially compensates for the missing schema coverage, aligning with a baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
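One hedged sketch of how that 0% coverage could be closed: attach a description to the `id` parameter itself, for example with Zod's `.describe()` (the TypeScript SDK accepts Zod shapes as input schemas). The wording is illustrative, not the server's actual schema.

```typescript
import { z } from "zod";

// Illustrative wording only: the point is that the schema, not just the tool
// description, can state what the `id` value is and where an agent obtains it.
const getUserInputSchema = {
  id: z
    .string()
    .describe(
      "Carbon Voice user ID of the user to look up (assumed to be an opaque string)."
    ),
};
```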
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and the resource ('a User'), specifying lookup by ID. That implicitly separates it from siblings such as 'get_current_user' (no ID needed) and 'search_user' (search vs. direct lookup), but the lack of explicit sibling differentiation keeps the score at 4 instead of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers no guidance on when to use this tool versus alternatives like 'get_current_user' (for the current user) or 'search_user' (when the ID is unknown). It implies lookup by ID but doesn't specify context or exclusions, leaving gaps for an AI agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
We provide all the information about MCP servers via our MCP API.
```bash
curl -X GET 'https://glama.ai/api/mcp/v1/servers/PhononX/cv-mcp-server'
```
If you have feedback or need assistance with the MCP directory API, please join our Discord server.