Glama

Server Details

Turn any LLM multimodal: generate images, voices, videos, 3D models, music, and more.

Status: Unhealthy
Last Tested
Transport: Streamable HTTP
URL
Repository: francis-ros/rostro-mcp-server
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

5 tools
account: Fetch account info (Grade: B)
Read-only, Idempotent

Response includes the user's identity, current scopes/access, subscription, and credit balance.

Parameters

No parameters

Output Schema

note (optional)
scopes (optional): What scopes the user has access to.
credits (optional): The current number of credits remaining.
identity (required): The user's unique identifier.
username (required): The user's username.
subscription (optional): None, Basic, Full, or Max.
credits_reset (optional): When the credits next reset, as an ISO 8601 timestamp in UTC.
credits_reset_in (optional): A human-readable version of when the credits reset.
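For orientation, a minimal sketch of what an `account` response might look like. All values here are invented; only the keys and the required/optional split follow the output schema above.

```python
# Illustrative shape of an `account` response. Values are invented examples;
# the keys mirror the documented output schema.
example_account = {
    "identity": "user_abc123",          # required
    "username": "alice",                # required
    "scopes": ["generate", "history"],  # optional
    "subscription": "Basic",            # optional: None, Basic, Full, or Max
    "credits": 120,                     # optional
    "credits_reset": "2025-01-01T00:00:00Z",  # optional, ISO 8601 in UTC
    "credits_reset_in": "in 3 days",    # optional, human-readable
}

# Per the schema, only identity and username are required.
required_fields = {"identity", "username"}
assert required_fields <= example_account.keys()
```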
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish this is read-only, idempotent, and non-destructive. The description adds valuable context by specifying exactly which account fields are returned (identity, scopes/access, subscription, credit balance), which is information not present in the structured metadata.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single efficient sentence with no extraneous text. However, the passive phrasing ('Response includes...') front-loads the output rather than the action, which is slightly less scannable than starting with the verb (e.g., 'Fetch account details including...').

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has zero parameters and an output schema exists (per context signals), the description appropriately focuses on summarizing the return value contents rather than re-documenting the schema. It adequately covers the tool's simple scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, which per the baseline guidelines warrants a score of 4. With no arguments to document, the schema coverage is trivially complete and no additional parameter semantics are needed in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description indirectly describes the tool's purpose by listing what the 'Response includes' rather than stating the action explicitly (e.g., 'Retrieves account details'). While the fields listed (identity, scopes, subscription, credit) clarify the resource, there is no differentiation from sibling tools like 'check' that might overlap conceptually.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus siblings like 'check' or 'history'. There are no prerequisites, conditions, or exclusion criteria mentioned to help the agent decide if this is the correct tool for a given user request.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check: Check on generation (Grade: A)
Read-only, Idempotent

Continue to wait for a currently running generation. Usually only videos and 3D models take long enough to require this.

Parameters

generation_uuid (required): The generation UUID of the item to check on.

Output Schema

cost (optional): Cost in credits, if any.
info (optional): The layer info, if this is part of a project.
note (optional): A note, typically for the LLM.
uuid (optional): The unique identity of the media.
error (optional): If there was an error, the reason may be here.
prompt (optional): Prompt metadata.
is_done (optional): Whether this generation is completed.
user_id (optional)
is_failed (optional): Whether this generation failed.
description (optional): A generated description of the item.
generation_uuid (optional): The unique identity of the generation, if any.
primary_media_url (optional): The URL where the generated media is located. If this value is non-null, use it to show clickable link(s) in your response with a relevant call-to-action, like this: [Watch the Video](put the primary_media_url here).
secondary_media_urls (optional): If this generated a compound asset like a 3D model, the URLs of other components of the asset, like texture images, will be here.
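Since `check` is a polling primitive, a client typically calls it in a loop until `is_done` or `is_failed` comes back true. A minimal sketch of that loop follows; `call_tool` is a hypothetical stand-in for an MCP client's `tools/call` request, and here it simulates a generation that completes on the second poll.

```python
import time

def call_tool(name, arguments):
    # Hypothetical stand-in for an MCP client's tools/call request.
    # Simulates a generation that finishes on the second poll.
    call_tool.polls = getattr(call_tool, "polls", 0) + 1
    done = call_tool.polls >= 2
    return {
        "is_done": done,
        "is_failed": False,
        "primary_media_url": "https://example.com/video.mp4" if done else None,
    }

def wait_for_generation(generation_uuid, max_polls=10, delay=0.0):
    """Poll the 'check' tool until the generation finishes or fails."""
    for _ in range(max_polls):
        result = call_tool("check", {"generation_uuid": generation_uuid})
        if result.get("is_failed"):
            raise RuntimeError(result.get("error") or "generation failed")
        if result.get("is_done"):
            return result
        time.sleep(delay)  # back off between polls in a real client
    raise TimeoutError("generation did not finish in time")

result = wait_for_generation("123e4567-e89b-12d3-a456-426614174000")
```

In a real client, `delay` would be a few seconds, since per the description only videos and 3D models usually run long enough to need polling at all.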
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent safety properties, so the description appropriately focuses on adding operational context: it clarifies this is a polling/waiting mechanism and specifies which generation types typically require extended waiting. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tightly constructed sentences with zero redundancy. The first sentence establishes the core action immediately; the second qualifies usage by content type. Every word serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (not shown but indicated), comprehensive annotations covering safety properties, and 100% parameter coverage, the description provides complete conceptual context without needing to specify return values or technical constraints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single 'generation_uuid' parameter, the baseline is 3. The description implies the UUID comes from a prior generation request but does not add syntax details or explicit sourcing guidance beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb phrase ('Continue to wait') combined with the resource ('currently running generation') to clearly define the tool's polling function. It effectively distinguishes from sibling 'imagine' (likely the creation tool) by implying this is a follow-up status check.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear contextual guidance by identifying specific content types that require polling ('videos and 3D models'), implying when the tool is necessary versus when results might be immediate. Lacks explicit workflow mapping (e.g., 'use after imagine'), but the usage context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete: Delete media (Grade: A)
Destructive, Idempotent

Delete the media with these uuids. Don't use this unless the user explicitly asks you to.

Parameters

uuids (optional): A list of UUIDs to delete.

Output Schema

failures (optional)
successes (optional)
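A minimal sketch of a `delete` call and the per-UUID result split the output schema implies. The UUIDs are invented examples, and `call_tool` is a hypothetical stand-in that reports every requested UUID as deleted.

```python
# Example payload for the destructive `delete` tool. Per its description,
# this should only be sent when the user has explicitly asked for deletion.
payload = {
    "uuids": [
        "123e4567-e89b-12d3-a456-426614174000",
        "123e4567-e89b-12d3-a456-426614174001",
    ]
}

def call_tool(name, arguments):
    # Hypothetical stand-in for an MCP client's tools/call request.
    # Simulates every requested UUID being deleted successfully.
    return {"successes": list(arguments["uuids"]), "failures": []}

result = call_tool("delete", payload)
```

Because the tool is idempotent, retrying the same payload after a partial failure should be safe; already-deleted UUIDs would simply not fail again.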
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare destructiveHint=true and idempotentHint=true, establishing the safety profile. The description adds valuable behavioral context regarding authorization requirements (explicit user consent) not present in the annotations. However, it omits details about deletion permanence, recovery options, or what the operation returns (though output schema is present).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. Front-loaded with the core action, followed immediately by the critical safety constraint. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter destructive operation where annotations and output schema handle the safety profile and return structure. The explicit consent requirement addresses the key missing behavioral gap. Could be improved by mentioning permanence of deletion, but not strictly necessary given destructiveHint annotation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% ('A list of UUIDs to delete'), so the structured documentation carries the full burden. The description references the parameter obliquely ('these uuids') but adds no syntax, format, or semantic details beyond the schema. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb (Delete) and resource (media) with clear scoping mechanism (UUIDs). The sibling tools (account, check, history, imagine) perform entirely different functions, so the verb alone effectively distinguishes this tool, though the description doesn't explicitly contrast with them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-not guidance ('Don't use this unless the user explicitly asks you to'), which is critical for a destructive operation. However, it does not name specific alternative tools for non-destructive actions (e.g., if there's a 'soft delete' or 'archive' option among siblings).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

history: Fetch generation history (Grade: B)
Read-only, Idempotent

A unified endpoint for fetching multimedia asset generation history.

Parameters

asc (optional, default: false): Whether to sort results in ascending order.
first (optional): The number of results to get.
uuids (optional): If given, only these media UUIDs will be fetched.
cursor (optional): The cursor to use to fetch the next page.
order_by (optional, default: created_at): What field to order results by.
media_types (optional): What generation types to fetch. If given, other types will be excluded.
generated_only (optional): Whether to return only generated images. If false, uploads like source images for image-to-image will be included.
generation_uuids (optional): If given, only these generation UUIDs will be fetched.

Output Schema

items (optional): The results.
cursor (optional): The cursor to use to fetch the next page. If not given, there are no more results.
previous_cursor (optional): The cursor of the previously fetched page, if any.
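The cursor contract above (keep passing `cursor` back until it comes back null) can be sketched as a simple pagination loop. `call_tool` is a hypothetical stand-in for an MCP client's `tools/call` request; here it simulates two pages of results.

```python
def call_tool(name, arguments):
    # Hypothetical stand-in for an MCP client's tools/call request.
    # Simulates two pages of history results.
    if arguments.get("cursor") is None:
        return {"items": ["a", "b"], "cursor": "page2"}
    return {"items": ["c"], "cursor": None}

def fetch_all_history(media_types=None, page_size=50):
    """Walk the history cursor until it comes back null (no more results)."""
    items, cursor = [], None
    while True:
        args = {"first": page_size}
        if media_types:
            args["media_types"] = media_types
        if cursor:
            args["cursor"] = cursor
        page = call_tool("history", args)
        items.extend(page.get("items", []))
        cursor = page.get("cursor")
        if not cursor:
            return items

all_items = fetch_all_history()
```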
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish read-only, idempotent, non-destructive safety properties. The description adds the 'unified' concept, clarifying this aggregates multiple generation modalities. However, it omits behavioral details like pagination mechanics (despite the cursor parameter), rate limiting, or history retention periods.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is tightly constructed with zero redundancy. 'Unified' efficiently signals multi-type support, 'fetching' establishes the read operation, and 'multimedia asset generation history' precisely scopes the resource without wasting words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and comprehensive input schema documentation (100% coverage), the description adequately anchors the tool's purpose. However, for an 8-parameter tool with complex filtering capabilities (UUID arrays, media type filters, pagination), mentioning pagination behavior or filtering logic would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the structured documentation carries the semantic load. The description does not explicitly elaborate on parameter interactions (e.g., how uuids filters interact with media_types), meeting the baseline expectation for well-schematized tools.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool as fetching 'multimedia asset generation history' with the term 'unified' hinting at comprehensive coverage across media types (supported by the GenerationType enum in schema). It effectively distinguishes this read operation from sibling tools like 'imagine' (likely creation) and 'delete' (removal).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus siblings like 'check' or 'account'. While 'unified' implicitly suggests use for broad historical queries across all generation types, there are no stated prerequisites, exclusions, or workflow guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

imagine: Generate media (Grade: A)

A unified endpoint for multimedia asset generation. For example, if the user asks for you to 'make', 'create', 'generate', or 'imagine' an image, song, video, speech, sound effect, or 3D model, use this tool.

Parameters

request (required): The request, either as an object (preferred) or a JSON-formatted string (tolerated for compatibility).

Output Schema

cost (optional): Cost in credits, if any.
info (optional): The layer info, if this is part of a project.
note (optional): A note, typically for the LLM.
uuid (optional): The unique identity of the media.
error (optional): If there was an error, the reason may be here.
prompt (optional): Prompt metadata.
is_done (optional): Whether this generation is completed.
user_id (optional)
is_failed (optional): Whether this generation failed.
description (optional): A generated description of the item.
generation_uuid (optional): The unique identity of the generation, if any.
primary_media_url (optional): The URL where the generated media is located. If this value is non-null, use it to show clickable link(s) in your response with a relevant call-to-action, like this: [Watch the Video](put the primary_media_url here).
secondary_media_urls (optional): If this generated a compound asset like a 3D model, the URLs of other components of the asset, like texture images, will be here.
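The `request` parameter is polymorphic: an object is preferred, and a JSON-formatted string is tolerated for compatibility. A sketch of the two equivalent forms follows; the field names inside `request` (`media_type`, `prompt`) are invented for illustration, so consult the tool's input schema for the real ones.

```python
import json

# Preferred form: the request as an object. The inner field names are
# assumptions for illustration only.
as_object = {
    "request": {
        "media_type": "image",
        "prompt": "a lighthouse at dusk, oil painting style",
    }
}

# Tolerated for compatibility: the same request as a JSON-formatted string.
as_string = {"request": json.dumps(as_object["request"])}

# Both forms decode to the same underlying request.
assert json.loads(as_string["request"]) == as_object["request"]
```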
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=false and destructiveHint=false, establishing this is a safe write operation. The description adds context about it being a 'unified' endpoint handling multiple modalities, but does not disclose additional behavioral traits like generation latency, credit consumption, or async completion status.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficiently structured sentences where every clause earns its place. The first establishes scope ('unified endpoint'), the second provides actionable trigger words. No redundancy or unnecessary verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high complexity (9 generation types, 30+ parameters) and presence of output schema, the description appropriately focuses on high-level purpose and usage triggers rather than parameter minutiae. It successfully conveys the breadth of capabilities (multimedia) that the schema elaborates.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema itself documents all parameters comprehensively. The description adds value by explaining the polymorphic nature ('unified endpoint') that justifies the single 'request' parameter structure, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly defines the tool as a 'unified endpoint for multimedia asset generation' and lists specific media types (image, song, video, speech, sound effect, 3D model). This clearly distinguishes it from siblings like 'delete', 'account', or 'history' through its specific verb and resource scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit trigger keywords ('make', 'create', 'generate', 'imagine') for when to invoke the tool. However, it lacks explicit 'when not to use' guidance or named alternatives, though none of the siblings appear to be alternative media generation tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

