Glama

Server Details

Generate game assets with AI: sprites, 3D models, animations, sound effects, music, and voices.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL
Repository
Ludo-AI/ludo-mcp
GitHub Stars
0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated.

Available Tools

21 tools
animateSprite

animateSprite. This endpoint consumes 5 credits per call.

Parameters (JSON Schema)
requestBody (required): Payload for generating an animated spritesheet from a static image. Input images can be provided either as base64 or as a URL. If the image was generated using Ludo, it should ideally be generated using the "sprite", "sprite-vfx" or "ui_asset" type.
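The schema accepts the input image either as base64 or as a URL. A hedged sketch of assembling such a payload; the field names `image_url`, `image_base64`, and `description` are assumptions, since the full schema is not shown on this page:

```python
def build_animate_sprite_payload(image_url=None, image_base64=None, description=""):
    """Build a requestBody dict with exactly one image source (URL or base64)."""
    if (image_url is None) == (image_base64 is None):
        raise ValueError("Provide exactly one of image_url or image_base64")
    payload = {"description": description}
    if image_url is not None:
        payload["image_url"] = image_url
    else:
        payload["image_base64"] = image_base64
    return payload

body = build_animate_sprite_payload(image_url="https://example.com/knight.png",
                                    description="idle breathing loop")
print(sorted(body))  # ['description', 'image_url']
```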
create3DModel

create3DModel. This endpoint consumes 3 credits per call.

Parameters (JSON Schema)
requestBody (required)
createImage

createImage. This endpoint consumes 0.5 credits per result.

Parameters (JSON Schema)
requestBody (required): Payload for generating an image from a text prompt
createMusic

createMusic. This endpoint consumes 4 credits per call.

Parameters (JSON Schema)
requestBody (required): Payload for generating music from a text description
createSoundEffect

createSoundEffect. This endpoint consumes 2 credits per call.

Parameters (JSON Schema)
requestBody (required): Payload for generating a sound effect from a text description
createSpeech

createSpeech. This endpoint consumes 1 credit per call.

Parameters (JSON Schema)
requestBody (required): Payload for text-to-speech generation using voice cloning
createSpeechPreset

createSpeechPreset. This endpoint consumes 1 credit per call.

Parameters (JSON Schema)
requestBody (required): Payload for text-to-speech generation using a voice preset
createVideo

createVideo. This endpoint consumes 5-15 credits depending on duration (3s: 5, 5s: 8, 8s: 12, 10s: 15).

Parameters (JSON Schema)
requestBody (required): Payload for generating a video from a source image and motion prompt
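The tiered pricing above maps duration directly to credits. A small helper capturing the four tiers stated in the description (the function itself is illustrative, not part of the API):

```python
# Credit cost per the listing: 3s costs 5, 5s costs 8, 8s costs 12, 10s costs 15.
CREDITS_BY_DURATION = {3: 5, 5: 8, 8: 12, 10: 15}

def video_credits(seconds: int) -> int:
    """Return the credit cost for a supported video duration."""
    try:
        return CREDITS_BY_DURATION[seconds]
    except KeyError:
        raise ValueError(f"Unsupported duration: {seconds}s") from None

assert video_credits(3) == 5
assert video_credits(10) == 15
```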
createVoice

createVoice. This endpoint consumes 1 credit per call.

Parameters (JSON Schema)
requestBody (required): Payload for generating a voice sample from a character description
editImage

editImage. This endpoint consumes 0.5 credits per result.

Parameters (JSON Schema)
requestBody (required): Payload for editing an existing image based on text instructions
generatePose

generatePose. This endpoint consumes 0.5 credits per result.

Parameters (JSON Schema)
requestBody (required): Payload for generating a new pose for an existing sprite
generateWithStyle

generateWithStyle. This endpoint consumes 0.5 credits per result.

Parameters (JSON Schema)
requestBody (required): Payload for generating new content while maintaining the visual style of a reference image
get3DModelResults

Retrieves recent API-generated 3D model results

Parameters (JSON Schema)
request_id (optional): Filter results by request_id
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full disclosure burden. It fails to explain what 'recent' means (pagination window?), whether this operation is idempotent, what fields are returned, or typical latency considerations. 'Retrieves' implies read-only safety, but this is never explicitly guaranteed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no redundancy and clear front-loading. However, given the lack of annotations and output schema, this level of brevity is insufficient; the description is concise but under-specified rather than efficiently informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Critical gaps remain: no output schema means the return structure is undefined, and the description doesn't compensate by describing result format or status states. The async workflow relationship (crucial for 3D generation which is typically long-running) is unexplained, and 'recent' lacks quantification.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description provides no additional semantic context for 'request_id' (e.g., 'obtained from create3DModel response'), nor does it explain the behavior when the parameter is omitted (returns all recent results). It relies entirely on the schema's 'Filter results by request_id' text.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Retrieves' and clearly identifies the resource as 'API-generated 3D model results', distinguishing it from siblings like getImageResults or getVideoResults. However, it loses a point for vagueness around 'recent' (time-based? last N?) and for not explicitly clarifying that this polls results from create3DModel.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives, or how it relates to the create3DModel workflow. While the sibling tools imply an async pattern (create then get results), the description doesn't state to 'use this after create3DModel to poll for completion' or warn against calling it before job submission.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
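The create-then-poll workflow the review keeps flagging can be sketched against a generic tool-call interface. The `call_tool` signature, and the assumption that an incomplete job returns an empty list, are mine, not from the server's documentation:

```python
import time

def poll_results(call_tool, request_id, attempts=10, delay=0.0):
    """Poll get3DModelResults with the request_id presumably returned
    by create3DModel, until a result appears or attempts run out."""
    for _ in range(attempts):
        results = call_tool("get3DModelResults", {"request_id": request_id})
        if results:  # assumed: empty while the job is still running
            return results
        time.sleep(delay)
    raise TimeoutError(f"No result for request {request_id!r}")

# Stub client that "completes" on the third call, to demonstrate the loop.
calls = {"n": 0}
def stub(name, args):
    calls["n"] += 1
    if calls["n"] < 3:
        return []
    return [{"request_id": args["request_id"], "status": "completed"}]

out = poll_results(stub, "req-42")
print(out[0]["status"])  # completed
```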

getAudioResults

Retrieves recent API-generated audio results

Parameters (JSON Schema)
request_id (optional): Filter results by request_id
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. While 'Retrieves' implies read-only access, the description fails to disclose what 'recent' means (retention policy), whether results are removed after retrieval, pagination behavior, or the expected response format when request_id is provided versus omitted.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at 5 words. Front-loaded with verb first. No wasted words or redundancy. However, brevity comes at the cost of missing contextual information, though that falls under completeness rather than conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple retrieval tool with one optional parameter and no output schema, the description is minimally viable but incomplete. It omits the relationship to creation tools (createMusic, etc.), lacks return value description, and does not explain the polling pattern implied by the tool name and siblings.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with the single parameter 'request_id' fully documented as 'Filter results by request_id'. The description adds no semantic detail beyond the schema, but baseline 3 applies due to complete schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Retrieves' and identifies resource 'API-generated audio results'. The term 'Audio' effectively distinguishes this from sibling result-fetching tools like getImageResults, getVideoResults, and get3DModelResults. However, it does not clarify what 'API-generated' refers to (i.e., results from createMusic/createSpeech siblings).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to invoke this tool versus alternatives, or that it should be used to poll for results after calling the createMusic/createSoundEffect/createSpeech/createVoice tools. The mention of 'recent' implies a time window but offers no specifics on retention periods.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getImageResults

Retrieves recent API-generated image results

Parameters (JSON Schema)
request_id (optional): Filter results by request_id
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'recent' implying time-window limits but fails to disclose whether results are ephemeral, how many are returned, if this is safe to poll repeatedly, or what structure the response takes.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence of four words is efficiently structured but crosses into underspecification. For an async polling tool, this brevity sacrifices necessary context about the creation-retrieval workflow and polling semantics.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Missing critical context expected for an async result-retrieval tool: relationship to createImage, polling behavior, output format/structure, and the meaning of 'recent'. No output schema exists to compensate, leaving the agent unaware of what constitutes a result object.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for the single optional parameter, establishing baseline 3. The description adds no semantic detail about the request_id (e.g., that it comes from createImage) or what happens when omitted (returns all recent?), but doesn't need to compensate for schema gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Retrieves' and specific resource 'API-generated image results' that distinguishes from sibling video/audio/3D result tools. However, 'recent' lacks temporal specificity and the async nature (polling for createImage jobs) is implied rather than explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use versus alternatives, or that this tool is designed for polling async generation results from createImage. The phrase 'API-generated' hints at the source but fails to describe the workflow prerequisite (needing a request_id from a prior creation job).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getSpriteResults

Retrieves recent API-generated spritesheet results

Parameters (JSON Schema)
request_id (optional): Filter results by request_id
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It only indicates this is a read operation ('Retrieves') but fails to clarify what 'recent' means (time window), whether results are deleted after retrieval, pagination behavior, or rate limiting considerations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with no redundancy. While appropriately concise, it may be overly minimal given the complete absence of annotations and output schema—leaving too much unsaid for a tool that likely implements an async job retrieval pattern.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for a result-retrieval tool with no annotations or output schema. The description fails to explain the relationship with animateSprite (the presumed generation tool), doesn't define the 'recent' time window, and omits expected return structure or status codes (pending vs completed spritesheets).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (the single optional parameter request_id is fully documented in the schema as 'Filter results by request_id'). The description adds no parameter-specific context, but with complete schema coverage, this meets the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Retrieves' and identifies the resource as 'API-generated spritesheet results', distinguishing it from sibling result-fetching tools like getImageResults or getVideoResults. However, it assumes familiarity with what distinguishes a 'spritesheet' from a regular image in this API context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives (e.g., getImageResults), nor does it mention the prerequisite relationship with animateSprite (the likely sibling that generates these results). The word 'recent' implies temporal filtering but doesn't explain the retrieval pattern.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getVideoResults

Retrieves recent API-generated video results

Parameters (JSON Schema)
request_id (optional): Filter results by request_id
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal detail. It mentions 'recent' results without defining the time window, does not clarify pagination behavior, and omits whether this is a blocking or polling operation, all critical for an asynchronous result-fetching tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

At four words, the description is extremely terse and front-loaded, but underspecified for a tool participating in a complex async workflow. While not wasteful, it sacrifices necessary behavioral context for brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool appears to be part of an asynchronous job system (implied by siblings like createVideo and get...Results pattern), the description inadequately explains the job lifecycle, polling mechanisms, or return value structure. For a result-fetching tool with no output schema, it should describe what 'results' contain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single optional parameter (request_id), the schema sufficiently documents inputs. The description adds no parameter-specific guidance, but baseline 3 is appropriate when the schema carries the full load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (Retrieves) and resource (API-generated video results), distinguishing it from sibling creation tools like createVideo. However, it does not explicitly differentiate from other result-retrieval siblings like getImageResults or getAudioResults, though the name makes this implicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it explain the typical workflow (e.g., that one likely calls createVideo first to generate a request_id, then polls this endpoint). Critical context for an async result-fetching pattern is missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

listAnimationPresets

Lists available animation presets for use with the transferMotion endpoint

Parameters (JSON Schema)
No parameters

removeBackground

removeBackground. This endpoint consumes 0.5 credits per result.

Parameters (JSON Schema)
requestBody (required): Payload for removing the background of an image
transferMotion

transferMotion. This endpoint consumes 8 credits per call.

Parameters (JSON Schema)
requestBody (required): Payload for transferring motion from a video onto a static sprite image, producing an animated spritesheet.
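Since listAnimationPresets exists specifically for transferMotion, the implied two-step flow might look like the sketch below. The `preset` and `image_url` field names and the `call_tool` interface are assumptions, not documented fields:

```python
def animate_with_preset(call_tool, sprite_url, preset_name):
    """List presets, validate the choice, then submit a transferMotion job."""
    presets = call_tool("listAnimationPresets", {})
    if preset_name not in presets:
        raise ValueError(f"Unknown preset: {preset_name}")
    return call_tool("transferMotion",
                     {"requestBody": {"image_url": sprite_url,
                                      "preset": preset_name}})

# Stub client demonstrating the flow without a live server.
def stub(name, args):
    if name == "listAnimationPresets":
        return ["walk", "run", "jump"]
    return {"request_id": "req-1", "credits_used": 8}

result = animate_with_preset(stub, "https://example.com/hero.png", "run")
print(result["credits_used"])  # 8
```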
validateApiKeyEndpoint

Validates an API key. Returns 200 if valid, 403 if invalid.

Parameters (JSON Schema)
No parameters
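Given the stated 200/403 contract, a client-side interpretation helper is straightforward. This helper is illustrative and not part of the server:

```python
def api_key_is_valid(status_code: int) -> bool:
    """Interpret validateApiKeyEndpoint's status: 200 valid, 403 invalid."""
    if status_code == 200:
        return True
    if status_code == 403:
        return False
    raise ValueError(f"Unexpected status code: {status_code}")

assert api_key_is_valid(200) is True
assert api_key_is_valid(403) is False
```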

