messages_get_all
Retrieve all messages from ProPresenter to access and manage communication content for presentations and stage displays.
Instructions
Get a list of all messages
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | | | |
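Since the tool takes no arguments, invoking it reduces to the tool name alone. A minimal sketch of the JSON-RPC `tools/call` request an MCP client would send — the request framing follows the MCP specification, but this server's response shape is undocumented, so none is shown:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "messages_get_all",
    "arguments": {}
  }
}
```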
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. 'Get a list' implies a read operation, but it doesn't specify whether all messages are returned in a single response (a potential performance concern), whether pagination exists, what format the list takes, or what rate limits apply. The description provides minimal behavioral context beyond the basic operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
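One way to close this gap without lengthening the description is MCP tool annotations. A hedged sketch of what this tool's definition could declare, using annotation fields from the MCP specification — the values shown are assumptions about this server's behavior, not documented facts:

```json
{
  "name": "messages_get_all",
  "description": "Get a list of all messages",
  "annotations": {
    "readOnlyHint": true,
    "idempotentHint": true
  }
}
```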
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that gets straight to the point. While it could be more informative, it doesn't waste words or include unnecessary information. The structure is front-loaded with the core operation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations, no output schema, and multiple sibling tools in the same domain, the description is inadequate. It doesn't explain what 'messages' are in this system, how the list is structured, whether there are limitations on what's returned, or how this differs from other message-related tools. The context demands more explanation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool takes no parameters, so schema description coverage is trivially complete and no parameter documentation is needed. The description appropriately doesn't discuss parameters, earning a baseline score of 4 for this dimension since there's nothing to compensate for.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get a list of all messages' states a clear verb ('Get') and resource ('messages'), and the word 'all' conveys scope, but it doesn't explain what 'messages' refers to in this system or differentiate the tool from siblings like 'messages_get' or 'messages_clear'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided about when to use this tool versus alternatives like 'messages_get' (which presumably gets specific messages) or 'messages_clear' (which clears messages). The description doesn't mention prerequisites, limitations, or appropriate contexts for use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
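Taken together, these critiques point toward a fuller description. A sketch of one possible rewrite — the pagination behavior and the semantics of the sibling tools are assumptions, not documented facts about this server:

```json
{
  "name": "messages_get_all",
  "description": "List every message configured in the connected ProPresenter instance in a single, unpaginated response. Read-only; nothing is shown or cleared. Use messages_get to fetch one message by ID, or messages_clear to remove an active message from output."
}
```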
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/Marc416/propresenter-mcp'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.