Server Details

Connect AI coding agents to Anima Playground, Figma, and your design system.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: AnimaApp/mcp-server-guide
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

6 tools
codegen-figma_to_code

Convert a Figma design to production-ready code.

This tool generates code from Figma designs, supporting multiple frameworks and styling options.

Authentication: Requires X-Figma-Token header with your Figma personal access token.

Inputs:

  • fileKey: Figma file key extracted from the URL. For example, from "https://figma.com/design/abc123XYZ/MyDesign", the fileKey is "abc123XYZ".

  • nodesId: Array of Figma node IDs to convert. Extract from the URL's node-id parameter, replacing "-" with ":". For example, from "?node-id=1-2", the nodeId is "1:2".

  • framework: Target framework (react, html). Detect from the user's project to match their existing stack.

  • styling: CSS approach (tailwind, plain_css). Detect from the user's project to match their existing styling system.

  • language: TypeScript or JavaScript. Detect from the user's project.

  • uiLibrary: Optional UI component library (mui, antd, shadcn). Detect from the user's project if they use one of the supported libraries.

  • assetsBaseUrl: Base path for assets in generated code
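The fileKey and nodesId extraction rules above can be sketched as a small helper. This is an illustrative TypeScript snippet, not part of the tool's API; the function name is hypothetical.

```typescript
// Hypothetical helper illustrating the extraction rules above; not part of the tool's API.
function parseFigmaUrl(url: string): { fileKey: string; nodesId: string[] } {
  const u = new URL(url);
  // The path looks like /design/<fileKey>/<name>: the fileKey is the second segment.
  const segments = u.pathname.split("/").filter(Boolean);
  const fileKey = segments[1] ?? "";
  // The URL's node-id parameter uses "-" where the tool expects ":".
  const rawNodeId = u.searchParams.get("node-id");
  const nodesId = rawNodeId ? [rawNodeId.replace(/-/g, ":")] : [];
  return { fileKey, nodesId };
}
```

For example, `parseFigmaUrl("https://figma.com/design/abc123XYZ/MyDesign?node-id=1-2")` yields fileKey "abc123XYZ" and nodesId ["1:2"].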

Returns:

  • files: Generated code files as a record of {path: {content, isBinary}}

  • assets: Array of {name, url} for images/assets that need to be downloaded from Figma

  • tokenUsage: Approximate token count for the generation

  • snapshotsUrls: Record of {nodeId: url} with screenshot URLs for each requested node

  • guidelines: Important instructions for using the generated code effectively

CRITICAL - Implementation Workflow: After calling this tool, you MUST:

  1. Download the snapshot images from snapshotsUrls - these are the visual reference of the original Figma design

  2. View/analyze the snapshot images to understand the exact visual appearance. Use BOTH the generated code AND the snapshots as inputs for your implementation

  3. Parse data-variant attributes from generated components → map to your component props

  4. Extract CSS variables from generated styles → use exact colors

  5. IMPORTANT: Follow the detailed guidelines provided in the tool response for accurate implementation

  6. Compare final implementation against snapshot for visual accuracy

Asset Handling: The generated code references assets at the assetsBaseUrl path. You must download the assets from the returned URLs and place them at your assetsBaseUrl location. For example, if assetsBaseUrl is "./assets" and an asset is named "logo.png", the code will reference "./assets/logo.png".
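A minimal sketch of that download-and-place step, assuming a Node.js environment and the { name, url } shape of the assets field; the helper names here are illustrative, not part of the tool.

```typescript
import { mkdir, writeFile } from "node:fs/promises";
import { join } from "node:path";

interface Asset { name: string; url: string; }

// How the generated code will reference an asset, given the assetsBaseUrl.
function referencedPath(assetsBaseUrl: string, name: string): string {
  return `${assetsBaseUrl.replace(/\/+$/, "")}/${name}`;
}

// Download every returned asset and place it where the generated code expects it.
async function downloadAssets(assets: Asset[], assetsBaseUrl: string): Promise<void> {
  await mkdir(assetsBaseUrl, { recursive: true });
  for (const asset of assets) {
    const res = await fetch(asset.url);
    if (!res.ok) throw new Error(`Failed to download ${asset.name}: ${res.status}`);
    await writeFile(join(assetsBaseUrl, asset.name), Buffer.from(await res.arrayBuffer()));
  }
}
```

With assetsBaseUrl "./assets", referencedPath("./assets", "logo.png") matches the "./assets/logo.png" reference the generated code emits.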

Parameters (JSON Schema)

  • fileKey (required): Figma file key extracted from the Figma URL. For example, from "figma.com/design/abc123XYZ/MyDesign", the file key is "abc123XYZ".

  • nodesId (required): Array of Figma node IDs to generate code for. You can find node IDs in the Figma URL after selecting elements, e.g., "0:1", "123:456".

  • styling (optional, default: tailwind): CSS styling approach for the generated code. Defaults to Tailwind.

  • language (optional, default: typescript): Programming language for the generated code. Defaults to TypeScript.

  • framework (optional, default: react): Target framework for the generated code. Defaults to React.

  • uiLibrary (optional): UI component library or code style to use. Options: "mui" (Material UI), "antd" (Ant Design), "shadcn" (shadcn/ui), "clean_react" (production-ready React with semantic HTML, accessibility, and interactivity - no UI library). If not specified, generates plain React/HTML.

  • assetsBaseUrl (optional, default: "./assets"): Base URL or path for assets in generated code. For example, "./assets" will produce paths like "./assets/logo.png".
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses authentication requirements (X-Figma-Token header), post-invocation workflow requirements (must download snapshots, parse data-variant attributes), and asset handling obligations. Does not mention rate limits or destructive potential, but covers the critical operational behaviors for this generation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear headers, but the 'Inputs' section largely duplicates schema descriptions (wasted space). However, the 'CRITICAL' workflow section and 'Asset Handling' are vital, front-loaded context that justify the length. Could be more concise by referencing schema details instead of restating them.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Excellent completeness for a complex 7-parameter generation tool without output schema. The description documents return values (files, assets, tokenUsage, snapshotsUrls, guidelines), authentication, and crucial post-processing workflow steps required for successful implementation. Distinguishes from siblings effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description adds crucial extraction semantics not in schema: specifically that nodeId requires replacing '-' with ':' in URL parameters, and provides URL extraction examples for fileKey. This guidance is essential for correct invocation and goes beyond schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a clear specific action ('Convert a Figma design to production-ready code') and distinguishes itself from siblings (playground-create, etc.) by focusing specifically on Figma-to-code generation rather than playground management.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides strong workflow guidance ('CRITICAL - Implementation Workflow' with 6 specific steps post-invocation) and detection heuristics for parameters ('Detect from the user's project'). Lacks explicit 'when not to use' alternatives, but the specific Figma-focused scope makes usage boundaries clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

design_system-get_files

Fetch multiple documentation files by their paths. Returns a JSON object mapping file paths to their content. Use this after getting the manifest to retrieve specific files like README.md, COMPONENTS.md, TROUBLESHOOTING.md or component documentation.
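The manifest-then-fetch sequence can be sketched as follows. callTool is a stand-in for whatever MCP client invocation your agent framework provides, and the "file"/"folder" values for the manifest's type field are an assumption, not documented here.

```typescript
// Sketch of the two-step flow; callTool is a stand-in for your MCP client,
// and the "file"/"folder" type values are an assumption, not documented.
type Manifest = Record<string, { description: string; type: string; tags?: string[] }>;
type CallTool = (name: string, args: Record<string, unknown>) => Promise<unknown>;

async function fetchAllDocs(callTool: CallTool, dsId: string): Promise<Record<string, string>> {
  // 1. Discover the documentation structure.
  const manifest = (await callTool("design_system-get_manifest", { dsId })) as Manifest;
  // 2. Fetch every file entry in a single get_files call.
  const filePaths = Object.entries(manifest)
    .filter(([, meta]) => meta.type === "file")
    .map(([path]) => path);
  return (await callTool("design_system-get_files", { dsId, filePaths })) as Record<string, string>;
}
```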

Parameters (JSON Schema)

  • dsId (required): The design system ID.

  • filePaths (required): Array of file paths to fetch (e.g., ["README.md", "COMPONENTS.md"]).
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses the return format ('JSON object mapping file paths to their content'), which is valuable behavioral information. However, it doesn't mention error handling, rate limits, authentication needs, or what happens with invalid file paths, leaving gaps for a tool that fetches multiple resources.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states purpose and return format, the second provides usage context and examples. Every phrase adds value with zero wasted words, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 2 parameters, 100% schema coverage, no output schema, and no annotations, the description is adequate but has gaps. It explains the return format and usage context, but doesn't cover error cases, performance implications, or how to handle large numbers of files. Given the complexity level, it's minimally complete but could be more robust.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds minimal value beyond the schema by providing example file paths like 'README.md' and 'COMPONENTS.md', but doesn't explain parameter interactions or constraints not in the schema. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Fetch multiple documentation files') and resource ('by their paths'), with specific examples of file types. It doesn't explicitly distinguish from sibling tools like 'design_system-get_manifest', but the focus on fetching files rather than a manifest provides implicit differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('after getting the manifest to retrieve specific files') and gives examples of appropriate file types. However, it doesn't explicitly state when NOT to use it or name alternatives among siblings, though the mention of 'manifest' suggests 'design_system-get_manifest' as a prerequisite.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

design_system-get_manifest

Get the manifest.json file describing the design system documentation structure. Returns a JSON object mapping file/folder paths to their metadata (description, type, optional tags/category).

Parameters (JSON Schema)

  • dsId (required): The design system ID.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the return format ('JSON object mapping file/folder paths to metadata'), which is helpful, but doesn't cover aspects like error handling, authentication needs, rate limits, or whether the operation is idempotent. It adequately explains what the tool does but lacks deeper behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core purpose and followed by return value details. Every word contributes meaning with zero waste, making it highly efficient and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single parameter, no output schema, no annotations), the description is mostly complete: it explains the purpose, return format, and distinguishes the resource. However, it could improve by addressing behavioral aspects like error cases or usage prerequisites, slightly reducing completeness for a tool with no structured safety or output information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema already documents the single parameter 'dsId' as 'The design system ID'. The description adds no additional parameter information beyond what the schema provides, maintaining the baseline score of 3 for adequate but non-enhancing coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get'), resource ('manifest.json file'), and purpose ('describing the design system documentation structure'), distinguishing it from siblings like 'design_system-get_files' which likely retrieves actual files rather than metadata.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving documentation structure metadata, but provides no explicit guidance on when to use this tool versus alternatives like 'design_system-get_files' or other siblings. The context is clear but lacks comparative or exclusionary advice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

playground-create

Create an Anima playground from a prompt, website URL, or Figma design.

Returns a playground URL where the generated code can be viewed and edited.

Input types (set via "type" field):

  • "p2c": Generate from a text prompt. Requires: prompt. Styling: tailwind, css, inline_styles.

  • "l2c": Convert a website URL to code. Requires: url. Styling: tailwind, inline_styles. uiLibrary: shadcn only.

  • "f2c": Convert Figma frames to code. Requires: fileKey, nodesId (also requires X-Figma-Token header). Styling: tailwind, plain_css, css_modules, inline_styles. uiLibrary: mui, antd, shadcn, clean_react.

Common fields: framework (react or html), styling (see per-type options above). React-only fields: language (typescript or javascript), uiLibrary (see per-type options above).

Returns: { success, sessionId, playgroundUrl }
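The per-type requirements above can be captured in a small pre-flight check. This is a hedged sketch; missingFields and the CreateArgs type are illustrative names, not part of the tool.

```typescript
// Illustrative pre-flight check for the per-type required fields described above.
interface CreateArgs {
  type: "p2c" | "l2c" | "f2c";
  prompt?: string;
  url?: string;
  fileKey?: string;
  nodesId?: string[];
  framework?: "react" | "html";
  styling?: string;
}

function missingFields(args: CreateArgs): string[] {
  const requiredByType: Record<CreateArgs["type"], (keyof CreateArgs)[]> = {
    p2c: ["prompt"],
    l2c: ["url"],
    f2c: ["fileKey", "nodesId"],
  };
  return requiredByType[args.type].filter((field) => args[field] === undefined);
}
```

For example, missingFields({ type: "f2c", fileKey: "abc123XYZ" }) reports that nodesId is still needed before calling the tool.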

Parameters (JSON Schema)

  • type (required): Generation type: p2c (prompt), l2c (website URL), f2c (Figma).

  • url (optional): [l2c only] Website URL to convert to code. Required when type is l2c.

  • prompt (optional): [p2c only] Text prompt describing the UI to generate. Required when type is p2c.

  • fileKey (optional): [f2c only] Figma file key. Required when type is f2c.

  • nodesId (optional): [f2c only] Figma node IDs to convert. Required when type is f2c.

  • styling (optional, default: tailwind): CSS styling. Valid values per type — p2c: tailwind, css, inline_styles. l2c: tailwind, inline_styles. f2c: tailwind, plain_css, css_modules, inline_styles.

  • language (optional): Programming language (react framework only). f2c: typescript or javascript. l2c: always typescript. Not used for p2c or html framework.

  • framework (optional, default: react): Target framework. Supported by all types.

  • uiLibrary (optional): UI component library (react framework only). l2c: only shadcn is supported. f2c: mui, antd, shadcn, or clean_react. Not used for p2c.

  • guidelines (optional): [p2c only] Additional coding guidelines for the generation.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the return structure ({success, sessionId, playgroundUrl}) and mentions the X-Figma-Token header requirement for f2c mode, but lacks information on rate limits, persistence duration, async processing delays, or error handling behaviors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with markdown headers and bullet points that group related constraints logically. The information density is appropriate for a complex polymorphic tool with 10 parameters, though the formatting could be slightly more compact.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high complexity (10 parameters, 3 distinct modes, conditional requirements) and lack of output schema, the description successfully covers the essential operational patterns. It documents the return payload structure and captures the cross-parameter validation rules (e.g., which fields apply to which type).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the schema already documents individual fields. The description adds significant value by organizing parameters into logical groups ('Input types', 'Common fields', 'React-only fields') and clarifying the conditional dependencies between the 'type' parameter and other fields (e.g., which styling options apply to which type).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific action ('Create an Anima playground') and identifies the three distinct input sources (prompt, website URL, Figma design). It clearly distinguishes this as a code generation/playground creation tool rather than a simple file upload or project management function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent documentation of the three mutually exclusive input modes (p2c, l2c, f2c) with specific requirements for each. However, it does not mention when to use the sibling tool 'codegen-figma_to_code' versus the f2c mode here, nor does it clarify when to use 'playground-publish' in the workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

playground-publish

Publish an Anima playground session to a live URL or as a design system npm package.

Requires a sessionId from a previously created playground (via playground-create).

Modes:

  • "webapp" (default): Deploys to a live URL. Returns { success, liveUrl, subdomain }.

  • "designSystem": Publishes as an npm package. Requires: packageName. Returns { success, packageUrl, packageName, packageVersion }.

Returns: { success, liveUrl, subdomain } for webapp mode, or { success, packageUrl, packageName, packageVersion } for designSystem mode.
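The two return shapes can be modeled as a discriminated union so a TypeScript caller can narrow on the fields present. The type and function names below are illustrative, not part of the tool.

```typescript
// Illustrative modeling of the two return shapes described above.
type PublishResult =
  | { success: boolean; liveUrl: string; subdomain: string }                                // webapp mode
  | { success: boolean; packageUrl: string; packageName: string; packageVersion: string };  // designSystem mode

function describeResult(result: PublishResult): string {
  // "liveUrl" is only present in webapp-mode results, so it discriminates the union.
  return "liveUrl" in result
    ? `Deployed to ${result.liveUrl}`
    : `Published ${result.packageName}@${result.packageVersion}`;
}
```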

Parameters (JSON Schema)

  • sessionId (required): The session ID of the playground to publish (returned by playground-create).

  • mode (optional, default: webapp): Deploy mode: "webapp" publishes to a live URL, "designSystem" publishes as an npm package.

  • packageName (optional): [designSystem mode only] NPM package name. Required when mode is "designSystem".

  • packageVersion (optional): [designSystem mode only] NPM package version. Required when mode is "designSystem".
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full behavioral disclosure burden. It effectively documents mode-specific return structures and prerequisites. However, it omits operational details like whether publishing overwrites existing deployments, idempotency, or resource persistence.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-organized with clear hierarchical structure (Modes and Returns sections) and front-loaded purpose statement. Minor redundancy occurs between the Modes bullet points and the Returns section, which essentially duplicate information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description appropriately compensates by documenting return value shapes for both modes. It adequately covers the 4-parameter input space and explains the prerequisite chain. Could be improved with side-effect disclosure typical of publishing operations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. The description adds significant value beyond the schema: it explains the origin of sessionId (from playground-create), documents full return value structures (compensating for missing output schema), and clarifies mode-specific requirements.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Publish') and resource ('Anima playground session'), clearly stating it deploys to either a 'live URL' or 'design system npm package'. It distinguishes from sibling 'playground-create' by explicitly noting it requires a sessionId from that tool, establishing the workflow sequence.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit sequencing guidance ('Requires a sessionId from a previously created playground via playground-create') and clearly delineates when to use each mode ('webapp' vs 'designSystem'). Lacks explicit mention of sibling alternatives like 'project-download_from_playground' for cases where local download is preferred over publishing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

project-download_from_playground

Download project files from an Anima playground session as a zip file. Provide the playground URL and receive a pre-signed download URL (valid for 10 minutes) for the zip file containing all project source files. Requires authentication, and the user must have access to the session.
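The session-ID extraction this description mentions happens server-side, but the URL shape implies it can be sketched like this (a hypothetical helper, assuming the https://dev.animaapp.com/chat/<session_id> pattern shown below):

```typescript
// Hypothetical helper assuming the /chat/<session_id> URL shape described for this tool.
function extractSessionId(playgroundUrl: string): string | null {
  const match = new URL(playgroundUrl).pathname.match(/^\/chat\/([^/]+)/);
  return match ? match[1] : null;
}
```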

ParametersJSON Schema
NameRequiredDescriptionDefault
playgroundUrlYesAnima playground URL (e.g., https://dev.animaapp.com/chat/<session_id>?...). The session ID will be extracted from this URL.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses return type (pre-signed URL), expiration constraint (valid 10 min), content details (zip of all project source files), and authorization requirements. Does not mention error cases or rate limits, but covers key behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences: purpose (sentence 1), input/output contract with expiration detail (sentence 2), prerequisites (sentence 3). No redundancy, front-loaded with purpose, every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive despite no output schema: explains pre-signed URL return value, 10-minute expiration, zip content, and auth barriers. With only one well-documented parameter and no output schema, the description provides sufficient context for invocation. Minor gap: could mention URL expiry handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed description and example for playgroundUrl. Description mentions 'Provide the playground URL' which adds minimal semantic value beyond the schema, but with complete schema documentation, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Download' + resource 'project files from an Anima playground session' + format 'as a zip file'. Distinguishes clearly from siblings: it downloads existing files vs codegen-figma_to_code (converts Figma), playground-create (creates), and playground-publish (publishes).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear prerequisites ('Requires authentication and user must have access to the session') and input requirements ('Provide the playground URL'). Lacks explicit contrast with siblings (e.g., 'use this after creating with playground-create'), but functional context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
