Glama
appian-design

Design System MCP Server

get-sail-guidance

Retrieve SAIL coding guidance and best practices for Appian's Aurora design system to implement components, layouts, and patterns correctly.

Instructions

Get SAIL coding guidance and best practices

Input Schema

Name        Required  Description                                             Default
technology  No        Technology or framework (e.g., 'sail', 'html', 'css')   (none)
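To make the schema concrete, here is a sketch of the JSON-RPC `tools/call` payload an MCP client might send to invoke this tool. The request framing follows the MCP specification; the `id` and the argument value are illustrative:

```typescript
// Illustrative tools/call payload for get-sail-guidance. Omitting the
// optional "technology" argument makes the server list every available
// coding guide instead of returning one specific guide.
const callPayload = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get-sail-guidance",
    arguments: { technology: "sail" },
  },
};

console.log(JSON.stringify(callPayload, null, 2));
```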

Implementation Reference

  • The tool 'get-sail-guidance' is defined and implemented here, accepting a 'technology' parameter and returning either a list of available guides or the specific content of a requested guide.
    server.tool(
      "get-sail-guidance",
      "Get SAIL coding guidance and best practices",
      {
        technology: z.string().describe("Technology or framework (e.g., 'sail', 'html', 'css')").optional(),
      },
      async ({ technology }) => {
        // If no technology specified, list available guides
        if (!technology) {
          const codingGuides = designSystemData['coding-guides'];
          const guides = Object.entries(codingGuides).map(
            ([key, guide]) => `${key}: ${guide.title} - ${guide.body}`
          );
          
          return {
            content: [
              {
                type: "text",
                text: `Available coding guides:\n\n${guides.join("\n\n")}\n\nUse get-component-details with category 'coding-guides' to access specific guides.`,
              },
            ],
          };
        }
        
        // Look for technology-specific guide
        const normalizedTech = technology.toLowerCase();
        const codingGuides = designSystemData['coding-guides'];
        
        // Check if there's a direct match or partial match
        let matchedGuide = null;
        let matchedKey = null;
        
        for (const [key, guide] of Object.entries(codingGuides)) {
          if (key.includes(normalizedTech) || guide.title.toLowerCase().includes(normalizedTech)) {
            matchedGuide = guide;
            matchedKey = key;
            break;
          }
        }
        
        if (!matchedGuide) {
          return {
            content: [
              {
                type: "text",
                text: `No coding guide found for "${technology}". Available guides: ${Object.keys(codingGuides).join(", ")}`,
              },
            ],
          };
        }
        
        // Fetch the full guide content
        const repoContent = await fetchRepoContent(matchedGuide.filePath);
        
        if (!repoContent) {
          return {
            content: [
              {
                type: "text",
                text: `Failed to fetch ${matchedGuide.title} guide. Basic info: ${matchedGuide.body}`,
              },
            ],
          };
        }
        
        return {
          content: [
            {
              type: "text",
              text: `# ${matchedGuide.title}\n\n${repoContent.content}`,
            },
          ],
        };
      }
    );

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves guidance but doesn't describe what the output looks like (e.g., text, structured data), whether it's read-only (implied by 'Get' but not explicit), or any constraints like rate limits or authentication needs. For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
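One way to close this gap would be to attach the optional annotation hints that the MCP tool specification defines. A sketch follows; the hint values are assumptions inferred from the implementation above, which only reads data and fetches guide content from a repository:

```typescript
// Hypothetical ToolAnnotations for get-sail-guidance. These hints are
// advisory metadata for clients, not enforced behavior.
const annotations = {
  title: "Get SAIL Coding Guidance",
  readOnlyHint: true,      // the tool never modifies any state
  destructiveHint: false,  // no destructive side effects
  idempotentHint: true,    // same input returns the same guide
  openWorldHint: true,     // guide content is fetched from an external repository
};

console.log(JSON.stringify(annotations));
```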

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—a single sentence that directly states the tool's purpose without unnecessary words. It's front-loaded with the core action and resource, making it easy to parse. Every word earns its place, with no redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, 100% schema coverage, no output schema), the description is incomplete. It doesn't explain what 'SAIL' is or what form the guidance takes (e.g., documentation snippets, examples), which is crucial for an agent to use it effectively. With no annotations and no output schema, the description should provide more context about the tool's behavior and output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no meaning beyond what the input schema provides. The schema has 100% description coverage for its single parameter ('technology'), clearly explaining it as 'Technology or framework (e.g., 'sail', 'html', 'css')'. Since the description doesn't mention parameters at all, it doesn't compensate or add value, but the high schema coverage justifies a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
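For illustration, the parameter description could encode the matching behavior that the implementation actually performs. The wording below is a hypothetical rewrite, not the server's current text:

```typescript
// Hypothetical richer description for the 'technology' parameter,
// surfacing the case-insensitive substring matching and the fallback
// behavior when the argument is omitted.
const technologyDescription =
  "Technology or framework to match against guide keys and titles " +
  "(case-insensitive substring match, e.g. 'sail', 'html', 'css'). " +
  "Omit to list all available coding guides.";

console.log(technologyDescription);
```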

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states what the tool does ('Get SAIL coding guidance and best practices'), which is clear but somewhat vague. It specifies the resource ('SAIL coding guidance and best practices') and the action ('Get'), but doesn't distinguish it from sibling tools like 'get-component-details' or 'search-design-system' that might also provide guidance-related information. The purpose is understandable but lacks specificity about what exactly 'SAIL' refers to or what form the guidance takes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get-component-details' or 'search-design-system', nor does it specify contexts where this tool is appropriate (e.g., for coding standards vs. component lookup). There's no indication of prerequisites or exclusions, leaving the agent to guess based on the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
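As a sketch of what explicit guidance could look like, the description might contrast the tool with its siblings. The phrasing is illustrative and assumes the sibling tool names mentioned in this review:

```typescript
// Hypothetical expanded description adding "use X vs. Y" guidance.
const improvedDescription =
  "Get SAIL coding guidance and best practices for Appian's Aurora " +
  "design system. Use this for coding standards and implementation " +
  "patterns; use get-component-details to look up a single component " +
  "and search-design-system for keyword search across the system.";

console.log(improvedDescription);
```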


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/appian-design/aurora-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.