Fusión Studio AI — Branding Tools

Server Details

Brand audits, visual catalogs, AI proposals & checkout for LATAM SMBs. Full funnel via MCP.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: B)

Average 3.2/5 across 6 of 6 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a distinct and non-overlapping purpose: brand_audit analyzes brand presence, brand_book_status checks progress, brand_book_viewer provides full details, generate_proposal creates sales pitches, purchase_brand_book handles payments, and visual_catalog offers design resources. The descriptions clearly differentiate their functions, eliminating any ambiguity.

Naming Consistency: 5/5

All tool names follow a consistent snake_case pattern with clear verb_noun structures (e.g., brand_audit, generate_proposal, purchase_brand_book). The naming is predictable and readable throughout the set, with no deviations in style or convention.

Tool Count: 5/5

With 6 tools, this server is well-scoped for its branding domain, covering analysis, consultation, generation, purchasing, and resource access. Each tool earns its place without feeling excessive or insufficient for the intended workflows.

Completeness: 4/5

The toolset provides strong coverage for core branding workflows, including audit, proposal generation, and purchase. A minor gap exists in update or management tools for brand books (e.g., editing or deleting), but agents can likely work around this given the comprehensive viewer and status tools.

Available Tools

6 tools
brand_audit (Grade: C)

Analyzes a business's brand presence (website and/or Instagram) and returns a 1-100 score with identified brand leaks. Useful for evaluating the visual identity, coherence, and communication of any SME or startup. Powered by Fusión Studio AI.

Parameters (JSON Schema)
- industry (optional): Business industry or sector
- website_url (optional): Website URL
- company_name (required): Company or business name
- contact_name (optional): Contact name
- contact_email (optional): Contact email
- instagram_handle (optional): Instagram handle without the @
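
The parameter list above corresponds to a standard MCP `tools/call` invocation. A minimal sketch of the JSON-RPC payload an agent might send, assuming only the tool name and parameters listed here (all argument values are illustrative):

```python
import json

# JSON-RPC 2.0 envelope for an MCP tools/call request.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "brand_audit",
        "arguments": {
            "company_name": "Panadería Luna",      # required
            "website_url": "https://example.com",  # optional
            "instagram_handle": "panaderialuna",   # optional, no leading @
            "industry": "gastronomía",             # optional
        },
    },
}

print(json.dumps(payload, ensure_ascii=False, indent=2))
```

Only `company_name` is mandatory; an agent that also passes `website_url` or `instagram_handle` gives the audit more surface to score.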
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool returns a score and identifies brand leaks, but lacks details on authentication needs, rate limits, data handling, or what constitutes a 'brand leak.' For a tool analyzing external websites/social media without annotations, this is insufficient transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, stating the core functionality in the first sentence. The second sentence adds context about usefulness and target audience, while the third credits the AI provider. There's minimal redundancy, though the third sentence could be considered slightly extraneous.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 6 parameters, no annotations, and no output schema, the description is incomplete. It doesn't explain what the 1-100 score means, what 'brand leaks' are, how the analysis works, or what the output format looks like. The context signals indicate significant complexity that the description doesn't adequately address.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 6 parameters. The description implies the tool uses website_url and instagram_handle for analysis but doesn't add syntax or format details beyond what the schema provides. With high schema coverage, the baseline score of 3 is appropriate as the description adds minimal parameter context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: analyzing brand presence for a business (website and/or Instagram) and returning a score 1-100 with identified brand leaks. It specifies the target audience (SMEs or startups) and mentions identity, consistency, and communication aspects. However, it doesn't explicitly differentiate from sibling tools like brand_book_status or visual_catalog.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal usage guidance, stating it's 'useful for evaluating visual identity, consistency, and communication' but offers no explicit when-to-use rules, prerequisites, or comparisons to alternatives. No guidance on when to choose this over sibling tools like brand_book_status or visual_catalog is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

brand_book_status (Grade: C)

Checks the status of a Brand Book: stage, progress, next steps. Search by email or brand_book_id.

Parameters (JSON Schema)
- email (optional): Client's email
- brand_book_id (optional): Brand book ID
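
The schema marks both lookup keys optional, yet the description implies at least one is needed. A small client-side guard makes that interaction explicit (the assumption that the server rejects calls with neither key is mine, not documented):

```python
def status_arguments(email=None, brand_book_id=None):
    """Build arguments for brand_book_status, enforcing that at least
    one lookup key (email or brand_book_id) is present."""
    if not email and not brand_book_id:
        raise ValueError("provide email or brand_book_id")
    args = {}
    if email:
        args["email"] = email
    if brand_book_id:
        args["brand_book_id"] = brand_book_id
    return args
```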
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states this is a read operation ('Consulta'), implying it's non-destructive, but doesn't mention authentication requirements, rate limits, error conditions, or what the return format looks like. For a status-checking tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences that efficiently convey the tool's purpose and search parameters. It's front-loaded with the main functionality, though it could be slightly more structured by explicitly separating purpose from parameter guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's purpose (checking status with multiple data points) and the lack of annotations and output schema, the description is incomplete. It doesn't explain what 'etapa, progreso, próximos pasos' means in practice, what format the response takes, or potential error scenarios. For a status tool with no structured output documentation, this creates ambiguity for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters (email and brand_book_id) with their types and descriptions. The description adds that these are search parameters ('Busca por email o brand_book_id'), which provides some context about their usage, but doesn't add significant meaning beyond what the schema provides. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Consulta el estado de un Brand Book' (Check the status of a Brand Book) with specific details about what information is retrieved (etapa, progreso, próximos pasos). It uses a specific verb ('Consulta') and resource ('Brand Book'), though it doesn't explicitly differentiate from sibling tools like 'brand_book_viewer' which might have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions searching by email or brand_book_id, but doesn't specify prerequisites, context, or exclusions. For example, it doesn't clarify if this should be used instead of 'brand_book_viewer' or how it relates to other brand-related tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

brand_book_viewer (Grade: B)

Complete data for a Brand Book: strategy, visual identity (palette + typography), verbal identity (tone, vocabulary, copy). For an AI agent to use the brand identity as context.

Parameters (JSON Schema)
- section (optional, default: all): Section to query
- brand_book_id (required): Brand book ID
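
The Parameters evaluation for this tool mentions 'all', 'strategy', 'visual', and 'verbal' section values and a UUID-formatted ID. A hypothetical reconstruction of the input schema under that assumption, plus a tiny pre-call check:

```python
# Hypothetical reconstruction of brand_book_viewer's input schema.
# The section enum comes from the review notes, not from the server.
input_schema = {
    "type": "object",
    "properties": {
        "brand_book_id": {"type": "string", "description": "Brand book ID"},
        "section": {
            "type": "string",
            "enum": ["all", "strategy", "visual", "verbal"],
            "default": "all",
            "description": "Section to query",
        },
    },
    "required": ["brand_book_id"],
}

def validate(args):
    """Client-side sanity check before calling the tool."""
    if "brand_book_id" not in args:
        return False
    section = args.get("section", "all")
    return section in input_schema["properties"]["section"]["enum"]
```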
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. The description indicates this is a data retrieval tool ('Datos completos de un Brand Book'), implying it's likely a read-only operation, but doesn't explicitly state this or mention other behavioral traits like authentication requirements, rate limits, or error handling. It adds some context about the data structure but lacks comprehensive behavioral information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, with two sentences that efficiently convey the tool's purpose and usage context. The first sentence lists the data components, and the second explains the intended use case. There's no unnecessary repetition or fluff, though it could be slightly more structured for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description provides basic completeness by outlining the data retrieved and the AI context use case. However, it lacks details on return values, error conditions, or behavioral constraints, which are important for a tool with 2 parameters and no structured output documentation. It's minimally adequate but has clear gaps in contextual information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for both parameters ('section' and 'brand_book_id'). The description doesn't add any additional semantic information beyond what the schema provides, such as explaining the meaning of 'all', 'strategy', 'visual', 'verbal' sections or the UUID format. With high schema coverage, the baseline score of 3 is appropriate as the schema handles the parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to retrieve complete brand book data including strategy, visual identity, and verbal identity. It specifies the resource (brand book) and the scope of data returned. However, it doesn't explicitly differentiate from sibling tools like 'brand_book_status' or 'visual_catalog', which might provide overlapping or related functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating 'Para que un agente IA use la identidad de marca como contexto' (For an AI agent to use brand identity as context), suggesting this tool is for retrieving brand identity data for AI context. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'brand_book_status' or 'visual_catalog', nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_proposal (Grade: B)

Generates a personalized sales proposal based on a Brand Audit. Claude Opus analyzes the leaks and generates a pitch with specific solutions. Requires an audit_id from brand_audit.

Parameters (JSON Schema)
- audit_id (required): Audit ID (obtained from brand_audit)
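
Since `audit_id` must come from a prior `brand_audit` call, the two tools form a pipeline. A hedged sketch of that chain, with a stub standing in for a real MCP client (the `audit_id` field name in the audit result is an assumption; the server's actual response shape is undocumented):

```python
def run_funnel(call, company_name):
    """Walk the audit -> proposal funnel.

    `call(tool_name, arguments)` is any MCP client callable returning
    the tool's result as a dict.
    """
    audit = call("brand_audit", {"company_name": company_name})
    audit_id = audit["audit_id"]  # assumed result field
    return call("generate_proposal", {"audit_id": audit_id})

# Stub client so the sketch runs without a live server.
def fake_call(name, args):
    if name == "brand_audit":
        return {"audit_id": "a-1", "score": 62}
    return {"tool": name, "arguments": args}

proposal = run_funnel(fake_call, "Panadería Luna")
```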
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that 'Claude Opus analiza las fugas y genera un pitch con soluciones específicas' (Claude Opus analyzes leaks and generates a pitch with specific solutions), which adds some context about the analysis process. However, it lacks details on permissions needed, rate limits, whether the proposal is saved or temporary, or what the output format looks like (e.g., text, PDF). For a tool with no annotations, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, starting with the core purpose in the first sentence. The second sentence adds useful context about the analysis process, and the third specifies the required parameter. There's no wasted text, but it could be slightly more structured (e.g., separating usage notes).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (involves analysis and proposal generation), no annotations, and no output schema, the description is incomplete. It doesn't explain what the generated proposal contains (e.g., format, content), how it's delivered, or any error conditions. For a tool with no structured output information, more detail is needed to guide the agent effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the parameter 'audit_id' fully documented in the schema as a UUID from 'brand_audit'. The description adds minimal value beyond the schema by repeating that it requires 'audit_id de brand_audit'. Since the schema already covers this, the baseline score of 3 is appropriate, as the description doesn't provide additional semantic context (e.g., how the audit ID is obtained or validated).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Genera una propuesta de venta personalizada basada en un Brand Audit' (Generates a personalized sales proposal based on a Brand Audit). It specifies the verb (generate), resource (sales proposal), and input source (Brand Audit). However, it doesn't explicitly differentiate from sibling tools like 'brand_book_viewer' or 'visual_catalog', which might also involve brand-related outputs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by stating it requires an 'audit_id de brand_audit', suggesting it should be used after a brand audit is performed. However, it doesn't provide explicit guidance on when to use this tool versus alternatives (e.g., 'brand_book_viewer' for viewing results or 'purchase_brand_book' for purchases), nor does it specify exclusions or prerequisites beyond the audit ID.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

purchase_brand_book (Grade: A)

Generates a MercadoPago payment link to purchase a Professional Brand Book ($310.000 ARS). Requires an audit_id from brand_audit plus contact details.

Parameters (JSON Schema)
- audit_id (required): Audit ID
- contact_name (required): Buyer's full name
- contact_email (required): Buyer's email
- referral_code (optional): Referral code (10% discount)
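
The schema states the referral code grants a 10% discount on the $310.000 ARS price. A quick client-side sanity check of the expected totals (the server presumably computes the authoritative amount; the code value below is hypothetical):

```python
BASE_PRICE_ARS = 310_000   # from the tool description
REFERRAL_DISCOUNT = 0.10   # from the referral_code parameter

def expected_total(referral_code=None):
    """Expected checkout total in ARS; discount applies only with a code."""
    discount = REFERRAL_DISCOUNT if referral_code else 0.0
    return round(BASE_PRICE_ARS * (1 - discount))

print(expected_total(None))       # 310000
print(expected_total("AMIGO10"))  # 279000
```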
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it generates a payment link (implying an external transaction via MercadoPago), requires specific inputs (audit_id and contact data), and mentions a discount option via referral_code. However, it doesn't cover important aspects like whether this is a read-only or mutating operation, what happens after payment (e.g., delivery of the Brand Book), error handling, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose (generating a payment link) and includes essential details (price, prerequisites). Every word earns its place, with no redundant or vague phrasing.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (a payment transaction tool with no annotations and no output schema), the description is moderately complete. It covers the purpose, price, and prerequisites well, but lacks details on behavioral outcomes (e.g., what the generated link looks like, what happens post-payment) and doesn't compensate for the absence of annotations or output schema. It's adequate but has clear gaps for a financial tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema by mentioning that audit_id comes from brand_audit and contact data is required, but it doesn't provide additional semantic context (e.g., format expectations beyond what's in the schema or how referral_code affects the price). Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Genera un link de pago de MercadoPago') and the resource ('para comprar un Brand Book Profesional'), including the exact price ($310.000 ARS). It distinguishes itself from siblings like brand_audit (which likely creates the audit) and brand_book_status/viewer (which check status or view content) by focusing on payment generation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: after a brand_audit (as indicated by 'Requiere audit_id de brand_audit') and when contact data is available. It doesn't explicitly state when NOT to use it or name alternatives (e.g., no mention of whether generate_proposal might be a precursor or alternative), but the prerequisite is clearly specified.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

visual_catalog (Grade: C)

Queries Fusión Studio's curated catalog: 38 typographic pairings and 60 color palettes, categorized by personality and industry.

Parameters (JSON Schema)
- type (optional, default: both): Catalog type
- limit (optional): Maximum results per category
- industry (optional): Filter by industry: gastronomía, salud, tecnología, retail, etc.
- personality (optional): Filter by personality: elegante, moderno, cálido, profesional, etc.
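
The description does not say whether `limit` caps results overall or per category, but its schema text ("Máximo resultados por categoría") suggests per category. A sketch of that assumed behavior with invented catalog data:

```python
# Invented stand-in for the curated catalog (38 pairings, 60 palettes).
catalog = {
    "typography": [f"combo-{i}" for i in range(38)],
    "palettes": [f"palette-{i}" for i in range(60)],
}

def query(type="both", limit=None):
    """Assumed semantics: `limit` caps results per category.
    The parameter name `type` mirrors the tool's schema."""
    keys = ["typography", "palettes"] if type == "both" else [type]
    return {k: catalog[k][:limit] for k in keys}
```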
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool is for consultation (implying read-only access) and mentions curated content with specific counts (38 typographic combinations, 60 color palettes). However, it doesn't disclose important behavioral traits like whether results are paginated, if authentication is required, rate limits, or what the output format looks like (especially critical since there's no output schema).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise—a single sentence that efficiently conveys the core purpose and scope. It's front-loaded with the main action ('Consulta el catálogo') and includes relevant details (counts, categorizations). There's no wasted verbiage, though it could be slightly more structured by explicitly separating purpose from features.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (4 parameters, no output schema, no annotations), the description is incomplete. It adequately states what the tool does but fails to provide necessary context for effective use: no output format details, no behavioral constraints (e.g., pagination, auth), and no differentiation from sibling tools. For a catalog query tool with multiple filtering options, more guidance is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema—it doesn't explain how 'type', 'limit', 'industry', or 'personality' parameters affect the consultation process. With high schema coverage, the baseline is 3 even without param info in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Consulta el catálogo curado de Fusión Studio' (Consult the curated catalog of Fusión Studio) and specifies the content: '38 combinaciones tipográficas y 60 paletas de color' (38 typographic combinations and 60 color palettes). It distinguishes itself from siblings by focusing on catalog consultation rather than brand auditing, book generation, or purchasing. However, it doesn't explicitly mention the verb 'retrieve' or 'fetch', which would make it more specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions categorization by personality and industry but doesn't explain how this differs from sibling tools like brand_audit or brand_book_viewer. There are no explicit instructions on prerequisites, exclusions, or recommended contexts for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
