Glama
MakingChatbots

Genesys Cloud MCP Server

sample_conversations_by_queue

Retrieve conversation analytics for a specific queue between two dates to get a representative sample of conversation IDs for reporting or investigation.

Instructions

Retrieves conversation analytics for a specific queue between two dates, returning a representative sample of conversation IDs. Useful for reporting, investigation, or summarisation.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| queueId | Yes | The UUID of the queue to filter conversations by (e.g. `00000000-0000-0000-0000-000000000000`) | — |
| startDate | Yes | The start date/time in ISO-8601 format (e.g. `2024-01-01T00:00:00Z`) | — |
| endDate | Yes | The end date/time in ISO-8601 format (e.g. `2024-01-07T23:59:59Z`) | — |
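A caller can sanity-check arguments against this schema before invoking the tool. The sketch below is illustrative: the function name is hypothetical, and the validation rules (UUID shape, ISO-8601 parseability, start before end) are inferred from the parameter descriptions rather than taken from the server's own code.

```typescript
// Hypothetical pre-flight validation for sample_conversations_by_queue arguments.
function buildSampleConversationsArgs(
  queueId: string,
  startDate: string,
  endDate: string
): { queueId: string; startDate: string; endDate: string } {
  // UUID shape, per the queueId example in the schema
  const uuidRe =
    /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
  if (!uuidRe.test(queueId)) {
    throw new Error("queueId must be a UUID");
  }
  // Both dates must parse as ISO-8601 timestamps
  const start = Date.parse(startDate);
  const end = Date.parse(endDate);
  if (Number.isNaN(start) || Number.isNaN(end)) {
    throw new Error("startDate and endDate must be ISO-8601 date/times");
  }
  // Assumed constraint: the window must be non-empty
  if (end <= start) {
    throw new Error("endDate must be after startDate");
  }
  return { queueId, startDate, endDate };
}
```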
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide only a title, so the description carries the full burden of behavioral disclosure. It adds value by specifying the output ('representative sample of conversation IDs') and use cases, but it does not cover critical behaviors such as rate limits, authentication requirements, pagination, or how the sample is selected (e.g., random vs. stratified). Without annotations, these gaps in transparency remain.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
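One way to close this gap would be for the server to ship MCP tool annotations alongside the description. The hint names below come from the MCP specification; the values are assumptions about how a read-only analytics query would typically behave, not confirmed behavior of this server.

```typescript
// Sketch: annotations that would disclose this tool's behavior to agents.
// Values are assumed for illustration, not taken from the actual server.
const annotations = {
  title: "Sample Conversations by Queue",
  readOnlyHint: true,     // assumed: queries analytics, modifies nothing
  destructiveHint: false, // assumed: no deletes or updates
  openWorldHint: true,    // assumed: calls the external Genesys Cloud API
};
```

With `readOnlyHint` set, an agent can call the tool without worrying about side effects, even before reading the prose description.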

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, with two sentences that efficiently convey purpose and utility. Every sentence earns its place by stating the action, inputs, output, and use cases without redundancy. Minor improvement could be made by integrating usage guidance more explicitly, but it remains well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema and minimal annotations, the description achieves basic completeness by covering purpose, inputs, and output type. However, it lacks detail on the return format (e.g., the structure of the sample), error handling, and behavioral constraints, all of which matter for a three-parameter analytics tool. It is adequate but has clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
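To illustrate the missing return-format documentation, a published output shape might look something like the following. This is purely hypothetical: the tool's actual response structure is not documented anywhere in the listing, and the field names here are invented for the example.

```typescript
// Hypothetical result shape the description could document.
// Nothing in the tool's published docs confirms this structure.
interface SampleConversationsResult {
  conversationIds: string[]; // the representative sample
  totalMatching?: number;    // size of the population the sample was drawn from
}
```

Even a one-sentence statement of the return shape in the description would let an agent plan its follow-up calls before invoking the tool.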

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for all three parameters (queueId, startDate, endDate). The tool description adds no parameter semantics beyond the schema, such as format constraints or how the parameters interact. The baseline score of 3 is appropriate: the schema documents the parameters adequately, but the description does not enhance understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
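As a sketch of what enriched parameter descriptions could look like: the added wording below (inclusive window boundaries, the ordering constraint between the dates) is assumed for illustration; only the parameter names and example formats come from the published schema.

```typescript
// Hypothetical enriched parameter descriptions for the input schema.
// The constraints stated here are illustrative assumptions.
const enrichedProperties = {
  queueId: {
    type: "string",
    description:
      "UUID of the queue to filter by, e.g. 00000000-0000-0000-0000-000000000000.",
  },
  startDate: {
    type: "string",
    description:
      "Start of the window in ISO-8601 format, e.g. 2024-01-01T00:00:00Z. Must be before endDate.",
  },
  endDate: {
    type: "string",
    description:
      "End of the window in ISO-8601 format, e.g. 2024-01-07T23:59:59Z.",
  },
};
```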

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('retrieves conversation analytics') and resources ('for a specific queue between two dates'), and distinguishes its output ('returning a representative sample of conversation IDs'). However, it does not explicitly differentiate from sibling tools like 'search_voice_conversations' or 'query_queue_volumes', which might have overlapping functions, preventing a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance by stating the tool is 'useful for reporting, investigation, or summarisation', which suggests contexts for use. However, it lacks explicit when-to-use vs. when-not-to-use instructions or named alternatives among sibling tools, such as when to choose this over 'search_voice_conversations' for sampling vs. full results.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
