rostro
Server Details
Turn any LLM multimodal: generate images, voices, videos, 3D models, music, and more.
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: francis-ros/rostro-mcp-server
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
5 tools

account: Fetch account info (B, Read-only, Idempotent)
Response includes their identity, current scopes/access, subscription, and credit balance.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| note | No | |
| scopes | No | What scopes the user has access to. |
| credits | No | The current number of credits remaining. |
| identity | Yes | The user's unique identifier. |
| username | Yes | The user's username. |
| subscription | No | None, Basic, Full, or Max. |
| credits_reset | No | When the credits next reset, in ISO 8601 timestamp format in UTC. |
| credits_reset_in | No | A human-readable version of when credits reset. |
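The output schema above is enough to build a small presenter for the response. A minimal sketch, assuming a response dict with exactly those field names (`summarize_account` is illustrative, not part of the server):

```python
def summarize_account(account):
    """Render a one-line status summary from the `account` tool's
    response (field names per the output schema above)."""
    # identity and username are the only required fields
    parts = [f"{account['username']} ({account['identity']})"]
    if account.get("subscription"):
        parts.append(f"plan: {account['subscription']}")
    if account.get("credits") is not None:
        parts.append(f"credits: {account['credits']}")
        # credits_reset_in is the human-readable form of credits_reset
        if account.get("credits_reset_in"):
            parts.append(f"resets {account['credits_reset_in']}")
    return ", ".join(parts)
```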
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish this is read-only, idempotent, and non-destructive. The description adds valuable context by specifying exactly which account fields are returned (identity, scopes/access, subscription, credit balance), which is information not present in the structured metadata.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence with no extraneous text. However, the passive phrasing ('Response includes...') front-loads the output rather than the action, which is slightly less scannable than starting with the verb (e.g., 'Fetch account details including...').
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has zero parameters and an output schema exists (per context signals), the description appropriately focuses on summarizing the return value contents rather than re-documenting the schema. It adequately covers the tool's simple scope.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, which per the baseline guidelines warrants a score of 4. With no arguments to document, the schema coverage is trivially complete and no additional parameter semantics are needed in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description indirectly describes the tool's purpose by listing what the 'Response includes' rather than stating the action explicitly (e.g., 'Retrieves account details'). While the fields listed (identity, scopes, subscription, credit) clarify the resource, there is no differentiation from sibling tools like 'check' that might overlap conceptually.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus siblings like 'check' or 'history'. There are no prerequisites, conditions, or exclusion criteria mentioned to help the agent decide if this is the correct tool for a given user request.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check: Check on generation (A, Read-only, Idempotent)
Continue to wait for a currently running generation. Usually only videos and 3D models take long enough to require this.
| Name | Required | Description | Default |
|---|---|---|---|
| generation_uuid | Yes | The generation UUID of the item to check on. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| cost | No | Cost in credits, if any. |
| info | No | The layer info, if this is part of a project. |
| note | No | A note, typically for the LLM. |
| uuid | No | The unique identity of the media. |
| error | No | If there was an error, the reason may be here. |
| prompt | No | Prompt metadata. |
| is_done | No | Whether this generation is completed. |
| user_id | No | |
| is_failed | No | Whether this generation is failed. |
| description | No | A generated description of the item. |
| generation_uuid | No | The unique identity of the generation, if any. |
| primary_media_url | No | The URL where the generated media is located. If this value is non-null, use it to show clickable link(s) in your response with a relevant call-to-action like this: [Watch the Video](put the primary_media_url here). |
| secondary_media_urls | No | If this generated a compound asset like a 3D model, the URLs of other components of the asset, like texture images, will be here. |
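The polling loop this tool implies can be sketched in a few lines. This is a sketch only: `wait_for_generation` and its arguments are hypothetical, and `check_tool` stands in for however your client invokes the MCP `check` tool:

```python
import time

def wait_for_generation(check_tool, generation_uuid, poll_interval=5.0, max_attempts=60):
    """Poll `check` until the generation finishes or fails.

    `check_tool` is any callable that takes a generation UUID and returns
    a dict shaped like the output schema above (is_done, is_failed, error,
    primary_media_url, ...).
    """
    for attempt in range(max_attempts):
        result = check_tool(generation_uuid)
        if result.get("is_failed"):
            raise RuntimeError(result.get("error") or "generation failed")
        if result.get("is_done"):
            return result  # primary_media_url should now be populated
        if attempt < max_attempts - 1:
            time.sleep(poll_interval)
    raise TimeoutError(f"generation {generation_uuid} did not finish in time")
```

Per the description, a loop like this is usually only needed for videos and 3D models; other media types tend to complete before the first `check` call.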
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent safety properties, so the description appropriately focuses on adding operational context: it clarifies this is a polling/waiting mechanism and specifies which generation types typically require extended waiting. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly constructed sentences with zero redundancy. The first sentence establishes the core action immediately; the second qualifies usage by content type. Every word serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (not shown but indicated), comprehensive annotations covering safety properties, and 100% parameter coverage, the description provides complete conceptual context without needing to specify return values or technical constraints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'generation_uuid' parameter, the baseline is 3. The description implies the UUID comes from a prior generation request but does not add syntax details or explicit sourcing guidance beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb phrase ('Continue to wait') combined with the resource ('currently running generation') to clearly define the tool's polling function. It effectively distinguishes from sibling 'imagine' (likely the creation tool) by implying this is a follow-up status check.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear contextual guidance by identifying specific content types that require polling ('videos and 3D models'), implying when the tool is necessary versus when results might be immediate. Lacks explicit workflow mapping (e.g., 'use after imagine'), but the usage context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete: Delete media (A, Destructive, Idempotent)
Delete the media with these uuids. Don't use this unless the user explicitly asks you to.
| Name | Required | Description | Default |
|---|---|---|---|
| uuids | No | A list of UUIDs to delete. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| failures | No | |
| successes | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare destructiveHint=true and idempotentHint=true, establishing the safety profile. The description adds valuable behavioral context regarding authorization requirements (explicit user consent) not present in the annotations. However, it omits details about deletion permanence, recovery options, or what the operation returns (though output schema is present).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. Front-loaded with the core action, followed immediately by the critical safety constraint. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter destructive operation where annotations and output schema handle the safety profile and return structure. The explicit consent requirement addresses the key missing behavioral gap. Could be improved by mentioning permanence of deletion, but not strictly necessary given destructiveHint annotation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% ('A list of UUIDs to delete'), so the structured documentation carries the full burden. The description references the parameter obliquely ('these uuids') but adds no syntax, format, or semantic details beyond the schema. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (Delete) and resource (media) with clear scoping mechanism (UUIDs). The sibling tools (account, check, history, imagine) perform entirely different functions, so the verb alone effectively distinguishes this tool, though the description doesn't explicitly contrast with them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-not guidance ('Don't use this unless the user explicitly asks you to'), which is critical for a destructive operation. However, it does not name specific alternative tools for non-destructive actions (e.g., if there's a 'soft delete' or 'archive' option among siblings).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
history: Fetch generation history (B, Read-only, Idempotent)
A unified endpoint for fetching multimedia asset generation history.
| Name | Required | Description | Default |
|---|---|---|---|
| asc | No | Whether to sort results in ascending order. Defaults to false. | |
| first | No | The number of results to fetch. | |
| uuids | No | If given, only these media UUIDs will be fetched. | |
| cursor | No | The cursor to use to fetch the next page. | |
| order_by | No | What field to order results by. Defaults to created_at. | created_at |
| media_types | No | What generation types to fetch. If given, other types will be excluded. | |
| generated_only | No | Whether to return only generated images or not. If false, uploads like source images for image-to-image will be included. | |
| generation_uuids | No | If given, only these generation UUIDs will be fetched. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | No | The results. |
| cursor | No | The cursor to use to fetch the next page. If not given, there are no more results. |
| previous_cursor | No | The cursor of the previous fetched page, if any. |
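The `cursor` field above drives standard cursor pagination: keep passing the returned cursor back in until it is absent. A sketch under that assumption, where `history_tool` stands in for however your client invokes the MCP `history` tool:

```python
def fetch_all_history(history_tool, page_size=50, **filters):
    """Collect every page of generation history by following cursors.

    `history_tool` accepts the input parameters above (first, cursor,
    media_types, ...) and returns a dict with `items` and, when more
    pages remain, a `cursor` for the next one.
    """
    items, cursor = [], None
    while True:
        page = history_tool(first=page_size, cursor=cursor, **filters)
        items.extend(page.get("items") or [])
        cursor = page.get("cursor")
        if not cursor:  # schema: no cursor means no more results
            break
    return items
```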
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish read-only, idempotent, non-destructive safety properties. The description adds the 'unified' concept, clarifying this aggregates multiple generation modalities. However, it omits behavioral details like pagination mechanics (despite the cursor parameter), rate limiting, or history retention periods.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is tightly constructed with zero redundancy. 'Unified' efficiently signals multi-type support, 'fetching' establishes the read operation, and 'multimedia asset generation history' precisely scopes the resource without wasting words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and comprehensive input schema documentation (100% coverage), the description adequately anchors the tool's purpose. However, for an 8-parameter tool with complex filtering capabilities (UUID arrays, media type filters, pagination), mentioning pagination behavior or filtering logic would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured documentation carries the semantic load. The description does not explicitly elaborate on parameter interactions (e.g., how uuids filters interact with media_types), meeting the baseline expectation for well-schematized tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as fetching 'multimedia asset generation history' with the term 'unified' hinting at comprehensive coverage across media types (supported by the GenerationType enum in schema). It effectively distinguishes this read operation from sibling tools like 'imagine' (likely creation) and 'delete' (removal).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus siblings like 'check' or 'account'. While 'unified' implicitly suggests use for broad historical queries across all generation types, there are no stated prerequisites, exclusions, or workflow guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
imagine: Generate media (A)
A unified endpoint for multimedia asset generation. For example, if the user asks for you to 'make', 'create', 'generate', or 'imagine' an image, song, video, speech, sound effect, or 3D model, use this tool.
| Name | Required | Description | Default |
|---|---|---|---|
| request | Yes | The request, either as an object (preferred) or a JSON-formatted string (tolerated for compatibility). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| cost | No | Cost in credits, if any. |
| info | No | The layer info, if this is part of a project. |
| note | No | A note, typically for the LLM. |
| uuid | No | The unique identity of the media. |
| error | No | If there was an error, the reason may be here. |
| prompt | No | Prompt metadata. |
| is_done | No | Whether this generation is completed. |
| user_id | No | |
| is_failed | No | Whether this generation is failed. |
| description | No | A generated description of the item. |
| generation_uuid | No | The unique identity of the generation, if any. |
| primary_media_url | No | The URL where the generated media is located. If this value is non-null, use it to show clickable link(s) in your response with a relevant call-to-action like this: [Watch the Video](put the primary_media_url here). |
| secondary_media_urls | No | If this generated a compound asset like a 3D model, the URLs of other components of the asset, like texture images, will be here. |
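The dual-format `request` parameter (object preferred, JSON string tolerated) is easy to normalize on the client side. A sketch; the payload's field names are hypothetical, since the tool's full input schema is not reproduced here:

```python
import json

def normalize_request(request):
    """Return the `imagine` request as a plain dict, accepting either an
    object (preferred) or a JSON-formatted string (tolerated for
    compatibility), as the `request` parameter documents."""
    if isinstance(request, str):
        return json.loads(request)
    return dict(request)

# Hypothetical payload; the real field names come from the tool's
# input schema.
example = {"media_type": "image", "prompt": "a lighthouse at dusk"}
```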
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=false and destructiveHint=false, establishing this is a safe write operation. The description adds context about it being a 'unified' endpoint handling multiple modalities, but does not disclose additional behavioral traits like generation latency, credit consumption, or async completion status.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficiently structured sentences where every clause earns its place. The first establishes scope ('unified endpoint'), the second provides actionable trigger words. No redundancy or unnecessary verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (9 generation types, 30+ parameters) and presence of output schema, the description appropriately focuses on high-level purpose and usage triggers rather than parameter minutiae. It successfully conveys the breadth of capabilities (multimedia) that the schema elaborates.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema itself documents all parameters comprehensively. The description adds value by explaining the polymorphic nature ('unified endpoint') that justifies the single 'request' parameter structure, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly defines the tool as a 'unified endpoint for multimedia asset generation' and lists specific media types (image, song, video, speech, sound effect, 3D model). This clearly distinguishes it from siblings like 'delete', 'account', or 'history' through its specific verb and resource scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit trigger keywords ('make', 'create', 'generate', 'imagine') for when to invoke the tool. However, it lacks explicit 'when not to use' guidance or named alternatives, though none of the siblings appear to be alternative media generation tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
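Before publishing, you can sanity-check the file locally. A minimal sketch, with the required fields assumed from the snippet above (`validate_glama_json` is illustrative, not a Glama utility):

```python
import json

def validate_glama_json(text):
    """Check that a glama.json document has the shape shown above:
    a non-empty maintainers list where each entry has an email."""
    data = json.loads(text)
    maintainers = data.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        raise ValueError("maintainers must be a non-empty list")
    for m in maintainers:
        if "@" not in m.get("email", ""):
            raise ValueError("each maintainer needs an email address")
    return data
```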
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail: every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control: enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management: store and rotate API keys and OAuth tokens in one place
- Change alerts: get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption: public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics: see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback: users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.