Siri Shortcuts MCP Server
Server Quality Checklist
Latest release: v1.0.0
- Disambiguation: 5/5
Each tool has a clearly distinct purpose: listing shortcuts, opening them in the app, and running them with parameters. There is no overlap in functionality, making it easy for an agent to select the correct tool.
- Naming Consistency: 5/5
All tool names follow a consistent verb_noun pattern (list_shortcuts, open_shortcut, run_shortcut) with no deviations in style or convention, providing predictable and readable naming.
- Tool Count: 3/5
With only 3 tools, the set feels thin for a Siri Shortcuts server, as it lacks operations like creating, editing, or deleting shortcuts, which are core to managing shortcuts. However, the tools cover basic viewing and execution.
- Completeness: 2/5
The tool surface is significantly incomplete for the Siri Shortcuts domain, missing essential CRUD operations (e.g., create_shortcut, update_shortcut, delete_shortcut) and other management features, which will limit an agent's ability to manage shortcuts comprehensively (see the sketch below).
Average 3/5 across 3 of 3 tools scored.
See the Tool Scores section below for per-tool breakdowns.
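To make the coherence findings concrete, the current tool surface can be sketched as plain MCP-style tool declarations. This is an illustrative reconstruction from the listing above, not the server's actual source; the parameter descriptions are assumptions, and the commented names at the end are hypothetical additions that would close the completeness gap.

```typescript
// Illustrative reconstruction of the server's tool surface (not actual source).
// Shapes follow the MCP tool-definition convention: name, description, inputSchema.
interface ToolDef {
  name: string;
  description: string;
  inputSchema: {
    type: "object";
    properties: Record<string, unknown>;
    required?: string[];
  };
}

const tools: ToolDef[] = [
  {
    name: "list_shortcuts",
    description: "List all available Siri shortcuts.",
    inputSchema: { type: "object", properties: {} },
  },
  {
    name: "open_shortcut",
    description: "Open a shortcut in the Shortcuts app.",
    inputSchema: {
      type: "object",
      properties: {
        name: { type: "string", description: "Name of the shortcut to open." },
      },
      required: ["name"],
    },
  },
  {
    name: "run_shortcut",
    description:
      "Run a shortcut by name or UUID, with optional input and output parameters.",
    inputSchema: {
      type: "object",
      properties: {
        name: { type: "string", description: "Name or UUID of the shortcut to run." },
        input: { type: "string", description: "Optional input to pass to the shortcut." },
      },
      required: ["name"],
    },
  },
];

// Hypothetical verb_noun additions that would close the completeness gap:
// create_shortcut, update_shortcut, delete_shortcut.
```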
- 0 of 2 issues responded to in the last 6 months
- No commit activity data available
- Last stable release on
- No critical vulnerability alerts
- No high-severity vulnerability alerts
- No code scanning findings
- CI is failing
This repository is licensed under GPL 3.0.
This repository includes a README.md file.
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
If you are the author, simply claim the server.
If the server belongs to an organization, first add glama.json to the root of your repository:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
```

Then claim the server. Browse examples.
Add related servers to improve discoverability.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
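As a worked example, here is the formula above applied to this server's published numbers. The dimension scores are taken from the coherence list and the Tool Scores section below; the code is a minimal sketch of the described arithmetic, not Glama's actual implementation.

```typescript
// Minimal sketch of the published scoring formula (not Glama's actual code).
// TDQS weights: Purpose 25%, Usage 20%, Behavior 20%, Parameters 15%,
// Conciseness 10%, Completeness 10%.
type Dims = {
  purpose: number;
  usage: number;
  behavior: number;
  parameters: number;
  conciseness: number;
  completeness: number;
};

const tdqs = (d: Dims) =>
  0.25 * d.purpose + 0.2 * d.usage + 0.2 * d.behavior +
  0.15 * d.parameters + 0.1 * d.conciseness + 0.1 * d.completeness;

// Per-tool dimension scores from the Tool Scores section below.
const toolScores = [
  tdqs({ purpose: 4, usage: 2, behavior: 2, parameters: 3, conciseness: 5, completeness: 2 }), // open_shortcut -> 2.95
  tdqs({ purpose: 4, usage: 2, behavior: 2, parameters: 3, conciseness: 5, completeness: 2 }), // run_shortcut -> 2.95
  tdqs({ purpose: 4, usage: 2, behavior: 2, parameters: 4, conciseness: 5, completeness: 2 }), // list_shortcuts -> 3.10
];

const mean = toolScores.reduce((a, b) => a + b, 0) / toolScores.length; // 3.0
const min = Math.min(...toolScores); // 2.95
const definitionQuality = 0.6 * mean + 0.4 * min; // 2.98

// Coherence: Disambiguation 5, Naming Consistency 5, Tool Count 3, Completeness 2.
const coherence = (5 + 5 + 3 + 2) / 4; // 3.75

const overall = 0.7 * definitionQuality + 0.3 * coherence; // ~3.21 -> tier B (>= 3.0)
```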
Tool Scores
open_shortcut

- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure but offers minimal information. It states the action but doesn't explain what 'opening' entails (does it launch the app? display details? require specific permissions?), whether it's read-only or has side effects, or what happens if the shortcut doesn't exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: a single sentence that directly states the tool's purpose with zero wasted words. It's front-loaded and efficiently communicates the core functionality without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations, no output schema, and sibling tools that perform related but distinct operations, the description is insufficiently complete. It doesn't explain what 'opening' means operationally, how it differs from 'running', what the expected outcome is, or address potential error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'name' clearly documented in the schema. The description doesn't add any parameter-specific information beyond what's already in the structured data, so it earns the baseline score for high schema coverage without adding compensating value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Open') and target resource ('a shortcut in the Shortcuts app'), making the purpose immediately understandable. However, it doesn't explicitly differentiate itself from sibling tools like 'run_shortcut': both involve interacting with shortcuts but serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'run_shortcut' or 'list_shortcuts'. There's no mention of prerequisites, appropriate contexts, or what distinguishes 'opening' from 'running' a shortcut, leaving the agent with insufficient usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
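To make the critique concrete, here is one hypothetical rewrite of the open_shortcut definition that would address the low Behavior, Completeness, and Usage Guidelines scores. The wording is illustrative only, including the assumption that opening launches the Shortcuts app UI without executing anything.

```typescript
// Hypothetical improved definition for open_shortcut (illustrative only).
const openShortcut = {
  name: "open_shortcut",
  description:
    "Open a shortcut in the Shortcuts app for viewing or editing. " +
    "This launches the Shortcuts app UI; it does NOT execute the shortcut " +
    "(use run_shortcut for that). Read-only with respect to the shortcut " +
    "itself, but it switches the foreground app. Fails if no shortcut with " +
    "the given name exists; use list_shortcuts to discover valid names.",
  inputSchema: {
    type: "object",
    properties: {
      name: { type: "string", description: "Exact name of the shortcut to open." },
    },
    required: ["name"],
  },
} as const;
```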
run_shortcut

- Behavior: 2/5
With no annotations provided, the description carries the full burden but offers minimal behavioral insight. It mentions 'optional input and output parameters' but doesn't clarify execution effects (e.g., whether it's read-only, requires permissions, has side effects, or handles errors). This is inadequate for a tool that likely performs actions.
- Conciseness: 5/5
The description is a single, efficient sentence that front-loads the core action and key details. Every word earns its place with no redundancy, making it highly concise and well-structured for quick understanding.
- Completeness: 2/5
Given no annotations, no output schema, and a tool that performs an action ('run'), the description is incomplete. It lacks details on execution behavior, return values, error handling, or security implications, which are critical for an agent to use it correctly in context.
- Parameters: 3/5
Schema description coverage is 100%, so the schema fully documents both parameters. The description adds no meaning beyond implying that 'name' can be an identifier and that 'input' is optional, which is already covered in the schema. The baseline score of 3 is appropriate, as the schema does the heavy lifting.
- Purpose: 4/5
The description clearly states the action ('run') and resource ('shortcut'), specifying that it can be executed by name or UUID. It is distinct from siblings like 'list_shortcuts' (listing) and 'open_shortcut' (opening), though it doesn't explicitly contrast them. The purpose is specific but lacks explicit sibling differentiation.
- Usage Guidelines: 2/5
No guidance is provided on when to use this tool versus alternatives like 'list_shortcuts' or 'open_shortcut'. The description mentions optional input and output parameters but doesn't explain contexts or prerequisites for usage, leaving the agent to infer from tool names alone.
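A similar hypothetical rewrite for run_shortcut, focusing on the execution side effects and parameter semantics flagged above (the exact parameter shapes are assumptions):

```typescript
// Hypothetical improved definition for run_shortcut (illustrative only).
const runShortcut = {
  name: "run_shortcut",
  description:
    "Execute a shortcut by name or UUID and return its output, if any. " +
    "This runs the shortcut's actions, which may have side effects " +
    "(sending messages, modifying files, etc.) depending on the shortcut. " +
    "Prefer UUID when names are ambiguous. Use open_shortcut to inspect a " +
    "shortcut without executing it.",
  inputSchema: {
    type: "object",
    properties: {
      name: { type: "string", description: "Name or UUID of the shortcut to run." },
      input: {
        type: "string",
        description: "Optional input passed to the shortcut as its initial input.",
      },
    },
    required: ["name"],
  },
} as const;
```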
list_shortcuts

- Behavior: 2/5
With no annotations provided, the description carries the full burden for behavioral disclosure. It states the action 'List all available Siri shortcuts' but doesn't describe what 'available' means, whether it requires permissions, how results are returned, or any rate limits. For a tool with zero annotation coverage, this leaves significant behavioral gaps.
- Conciseness: 5/5
The description is a single, efficient sentence with zero waste: 'List all available Siri shortcuts.' It's front-loaded with the core action and resource, making it immediately understandable. No extraneous information or structural issues are present.
- Completeness: 2/5
Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is incomplete. It doesn't explain what 'available' means, how results are structured, or any behavioral constraints. For a listing tool with no structured support, more context about output format or scope would be needed for adequate completeness.
- Parameters: 4/5
The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the absence of inputs. The description doesn't need to compensate for parameter documentation, and it appropriately doesn't mention parameters. The baseline for a 0-parameter tool is 4, as the description avoids unnecessary parameter discussion.
- Purpose: 4/5
The description clearly states the verb 'List' and the resource 'Siri shortcuts' with the qualifier 'all available', making the purpose unambiguous. It distinguishes itself from siblings open_shortcut and run_shortcut by focusing on listing rather than execution. However, it doesn't specify output format or scope beyond 'available', leaving minor room for improvement.
- Usage Guidelines: 2/5
No explicit guidance is provided on when to use this tool versus alternatives like open_shortcut or run_shortcut. The description implies it's for listing shortcuts but doesn't clarify prerequisites, context, or exclusions. Usage is implied rather than explicitly stated, which could lead to confusion in tool selection.
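And for list_shortcuts, where the main gaps are output format and the meaning of 'available' (the stated return format here is an assumption for illustration):

```typescript
// Hypothetical improved definition for list_shortcuts (illustrative only).
const listShortcuts = {
  name: "list_shortcuts",
  description:
    "List all shortcuts currently installed in the Shortcuts app on this " +
    "machine. Read-only; no side effects. Returns one shortcut name per " +
    "line. Call this first to discover valid names for open_shortcut and " +
    "run_shortcut.",
  inputSchema: { type: "object", properties: {} },
} as const;
```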
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/dvcrn/mcp-server-siri-shortcuts'
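The same endpoint can be called from code. A minimal TypeScript sketch (the response shape isn't documented here, so the JSON is just pretty-printed):

```typescript
// Fetch this server's entry from the Glama MCP directory API.
const res = await fetch(
  "https://glama.ai/api/mcp/v1/servers/dvcrn/mcp-server-siri-shortcuts",
);
if (!res.ok) throw new Error(`Request failed: ${res.status} ${res.statusText}`);
console.log(JSON.stringify(await res.json(), null, 2));
```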
If you have feedback or need assistance with the MCP directory API, please join our Discord server.