Things MCP
Server Quality Checklist
- README.md: present.
- LICENSE: missing. MCP servers without a LICENSE cannot be installed.
- Latest release: v1.0.0
- Usage: no tool usage detected in the last 30 days.
- glama.json metadata: not provided.
- Tools: this server provides 20 tools.
- Security: no known issues or vulnerabilities reported.
Tool Scores
- Behavior: 1/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but offers none. It does not indicate whether the tool requires authentication, whether results are cached, or what data structure is returned, a gap made worse by the missing output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness: 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
At three words, the description is brief but not front-loaded with actionable information. Its extreme brevity is under-specification rather than efficient communication.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness: 1/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
It critically fails to differentiate this tool from its sibling 'things_get_area' (singular vs. plural). With no output schema and no annotations, the description should explain the return format and the domain-specific meaning of 'areas', but provides neither.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single 'max_results' parameter, which establishes the baseline score. The description adds no additional semantic context about the parameter's usage or implications.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose: 2/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get all areas' restates the tool name with minimal expansion. It fails to explain what constitutes an 'area' in the Things domain or to distinguish this tool from its sibling 'things_get_area' (singular).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines: 1/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus 'things_get_area' or other list-fetching siblings, nor on when to apply the 'max_results' parameter versus fetching all results.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
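As a sketch of what the critiques above ask for, here is one possible rewrite of the things_get_areas tool definition. The wording, the claim about the return format, and the 'Work'/'Home' examples are illustrative assumptions, not confirmed server behavior:

```python
# Hypothetical revision of the things_get_areas tool definition,
# addressing the purpose, completeness, and usage-guidance gaps above.
# The return-format claim and example area names are assumptions.
revised_tool = {
    "name": "things_get_areas",
    "description": (
        "List all areas (top-level organizational containers, e.g. 'Work', "
        "'Home') in Things. Read-only; returns an array of area objects "
        "with id and title. Use this to discover area ids, then call "
        "things_get_area to fetch the items inside one area. "
        "Not for projects; use things_get_projects for those."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "max_results": {
                "type": "integer",
                "minimum": 1,
                "description": "Limit number of results returned; omit to return all areas.",
            }
        },
    },
}

# Sanity checks: the description stays front-loaded with verb and
# resource, and names the confusable sibling tool explicitly.
assert revised_tool["description"].startswith("List all areas")
assert "things_get_area" in revised_tool["description"]
```

Note that this version also encodes the usage-guidance pattern ("then call X", "not for Y; use Z") directly in the description text, since no annotation field exists for it.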
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden but reveals almost nothing: it does not specify whether the operation is read-only (though 'Get' implies it), what happens if area_id is invalid, whether 'items' includes completed or archived to-dos, or whether pagination applies beyond max_results.
Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient at seven words, with no redundancy. However, given the lack of annotations and the ambiguity with its sibling, the brevity crosses into under-specification. The structure is front-loaded but insufficiently informative.
Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for safe invocation: no output schema, no annotations, undocumented area_id semantics, and an unexplained 'items' concept in the Things domain. It should clarify the relationship to sibling tools and disclose error handling for an invalid area_id.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50% (max_results is described; area_id is bare). The description adds context for area_id by referencing 'a specific area', establishing it as a container identifier, but offers no format guidance (UUID vs. slug) or examples. The baseline score is appropriate for partial schema coverage.
Purpose: 3/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a clear verb ('Get') and resource ('items in a specific area'), but uses vague terminology ('items' is undefined) and fails to explicitly distinguish this tool from its sibling 'things_get_areas' (which likely lists areas rather than the items within one). The scope is implied but not precise enough to prevent selection errors.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus siblings like 'things_get_project', 'things_get_areas', or 'things_get_inbox'. The agent has no signal whether an 'area' is a workspace, a location, or a project container, nor when to prefer this tool over other retrieval tools.
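One way to reach full parameter coverage for things_get_area is to describe area_id directly in the input schema. A hedged sketch follows; the identifier format and the error behavior stated in the text are assumptions, not observed server behavior:

```python
# Hypothetical input schema for things_get_area with 100% description
# coverage, including format guidance for area_id. The UUID-like format
# and the invalid-id error behavior are illustrative assumptions.
input_schema = {
    "type": "object",
    "properties": {
        "area_id": {
            "type": "string",
            "description": (
                "Identifier of the area, as returned by things_get_areas "
                "(e.g. a UUID-like string). An invalid id returns an "
                "error rather than an empty list."
            ),
        },
        "max_results": {
            "type": "integer",
            "description": "Limit number of results returned; omit for all items.",
        },
    },
    "required": ["area_id"],
}

# Every parameter now carries a description: coverage is 100%, not 50%.
described = [p for p in input_schema["properties"].values() if p.get("description")]
assert len(described) == len(input_schema["properties"])
```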
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It does not describe the return format, the structure of tag objects, whether results are cached, or pagination behavior. The claim 'Get all' is technically accurate (max_results defaults to all), but the interaction with the limit parameter is not explained.
Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The three-word description contains no redundancy or wasted language. However, it borders on under-specification rather than optimal conciseness, omitting critical context that would help an agent select and invoke the tool correctly.
Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of both annotations and an output schema, the description should explain what constitutes a 'tag' in this system and what data structure to expect. For a tool within a complex ecosystem of 20+ related tools, it provides insufficient context for proper agent selection.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single optional parameter (max_results), the schema fully documents the parameter semantics. The description adds no additional parameter context, but none is needed given the comprehensive schema coverage.
Purpose: 3/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the basic action ('Get') and resource ('tags'), but provides no context about what 'tags' represent in the Things ecosystem or how they relate to to-dos and projects. It does not differentiate this tool from the numerous sibling getter tools (things_get_todo_details, things_get_projects, etc.).
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. Given the large number of sibling 'get_' tools (things_get_inbox, things_get_anytime, etc.), the description fails to clarify use cases or prerequisites for retrieving tags.
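Several of the Behavior scores in this report hinge on the missing annotations. The MCP specification defines optional tool annotations that are advisory hints, not guarantees; for a read-only getter such as this tags tool they might look like the following sketch (the values reflect what the critiques imply, not verified server behavior):

```python
# MCP tool annotations (advisory hints defined by the spec) as they
# might be declared for a pure read-only getter. Values are assumed
# from the review's observations, not confirmed against the server.
annotations = {
    "readOnlyHint": True,       # does not modify Things data
    "destructiveHint": False,   # no destructive side effects
    "idempotentHint": True,     # repeated calls return the same tags
    "openWorldHint": False,     # operates only on the local Things database
}

assert annotations["readOnlyHint"] and not annotations["destructiveHint"]
```

Even with such hints in place, the spec treats them as untrusted metadata, so the description should still state the safety properties in prose.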
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. The phrase 'JSON API for full feature support' indicates this is the comprehensive method, but the description fails to disclose mutation semantics (partial vs. full replacement), safety characteristics, or side effects of the update operation.
Conciseness: 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is efficient, with no wasted words, but it is undersized for a 12-parameter mutation tool with complex nested structures (the 'items' array). While the sentence earns its place, the description lacks the elaboration this complexity level requires.
Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (12 parameters, a mutation operation, a sophisticated 'items' structure), zero annotations, and no output schema, a single-sentence description is grossly insufficient. Critical gaps remain around error handling, partial-update behavior, and return values.
Parameters: 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is only 42%, leaving seven parameters undocumented (including ambiguous fields like 'operation', 'completed', and 'canceled'). The description adds no parameter-specific context or syntax guidance, failing to compensate for the schema gaps.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Update'), the resource ('existing project'), and the domain (Things). The phrase 'existing project' effectively distinguishes this tool from its sibling 'things_add_project', though it could explicitly mention the creation alternative for maximum clarity.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
While 'existing project' implies a prerequisite, there is no explicit guidance on when to use this tool versus 'things_add_items_to_project' (which also adds items to projects) or 'things_add_project'. The description lacks when-not-to-use exclusions and prerequisite instructions beyond what is in the schema.
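A description that addresses the mutation-semantics and sibling-disambiguation gaps might read as follows. The partial-update claim is an assumption inferred from the critique, not confirmed server behavior:

```python
# Hypothetical rewrite of the project-update description, making
# mutation semantics and sibling boundaries explicit. The partial-update
# behavior is assumed, not verified against the server.
description = (
    "Update an existing project in Things. Performs a partial update: "
    "only the fields you pass are changed; omitted fields keep their "
    "current values. Requires the project id (from things_get_projects). "
    "To create a new project use things_add_project; to append to-dos "
    "to a project without editing it, use things_add_items_to_project."
)

# The rewrite names both confusable siblings the critique mentions.
assert "things_add_project" in description
assert "things_add_items_to_project" in description
```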
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to mention mutation characteristics, idempotency, error conditions (e.g., an invalid id), or reversibility. The phrase 'JSON API for full feature support' hints at capability but provides no actionable behavioral context.
Conciseness: 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is appropriately brief, but it wastes valuable descriptive space on implementation details ('JSON API') rather than usage guidance or behavioral traits. Front-loading is adequate, with the action verb first, though the latter half of the sentence provides minimal value to the agent.
Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a complex 13-parameter mutation tool with no output schema and no annotations. The description omits crucial context: that partial updates are supported (only 'id' and 'title' are required), how to obtain the 'id' (despite the schema noting this), and what 'full feature support' actually encompasses.
Parameters: 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is low at 46%, and the description does not compensate by explaining any parameters. Critical parameters like 'operation' and 'deadline' have no schema descriptions and remain undocumented. Given the schema gaps, the description should have clarified the 'id' lookup requirement or the partial-update behavior.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Update') and resource ('existing to-do in Things'), distinguishing it from sibling creation tools (things_add_todo) and project updates (things_update_project). However, 'using JSON API for full feature support' adds implementation detail rather than clarifying functional scope.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is given on when to use this tool versus alternatives, and no prerequisites are mentioned. While 'Update an existing to-do' implicitly contrasts with creating new to-dos, there is no mention of when to prefer this over other modification methods, or that the 'id' parameter requires fetching the to-do first.
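The Completeness critique notes that only 'id' and 'title' are required, which implies partial-update support. A minimal call payload might therefore look like the sketch below; the tool name, field values, and the keep-omitted-fields behavior are illustrative assumptions:

```python
# Minimal partial-update payload implied by the schema: only 'id' and
# 'title' are required, so omitted fields presumably keep their current
# values. The tool name and argument values are placeholders.
call = {
    "name": "things_update_todo",
    "arguments": {
        "id": "<todo-id from a prior get_* call>",  # must be fetched first
        "title": "Buy oat milk",
    },
}

# The payload satisfies the schema's stated required set.
assert {"id", "title"} <= set(call["arguments"])
```

If the description spelled out this minimal shape, an agent would not need to guess whether all 13 parameters must be supplied on every update.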
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It does not indicate whether the tool is read-only (though this is implied), what format the to-dos are returned in, or how results are sorted. There is no mention of the default all-results behavior when max_results is omitted.
Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-sentence description is extremely concise, with no redundant words. However, given the complete absence of annotations and an output schema, it may be undersized, leaving the agent to infer behavioral details that could have been included without significant bloat.
Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description is insufficient. It omits the definition of 'Anytime' (tasks without assigned dates), lacks a return value description, and provides no behavioral safeguards or pagination guidance beyond the single parameter description in the schema.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for max_results, the description appropriately relies on the schema for parameter documentation. It adds no parameter context, which is acceptable given the schema's completeness, meeting the baseline expectation.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb ('Get') and identifies the specific resource ('to-dos in Anytime'), distinguishing it from sibling tools like things_get_today or things_get_inbox. However, it assumes familiarity with the Things concept of 'Anytime' without explaining that this list contains tasks without specific deadlines.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use Anytime versus alternatives like things_get_someday (for deferred tasks), things_get_upcoming (for scheduled tasks), or things_get_today. The description fails to clarify that Anytime holds available, non-scheduled to-dos.
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It fails to indicate whether this is a read-only operation (though 'Get' implies it), what happens if the Inbox is empty, or whether results are paginated. The phrase 'all to-dos' suggests completeness but does not clarify the interaction with the max_results parameter.
Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
At five words, the description is extremely concise, front-loaded with the action verb, and wastes no space. However, given the lack of annotations and the many sibling tools, this brevity comes at the cost of necessary context, preventing a perfect score.
Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of annotations, the lack of an output schema, and a crowded namespace of 16 sibling tools with similar purposes, the description is insufficient. It does not explain the return structure or the Things concept of the Inbox (uncategorized items), and provides no selection criteria to help an agent choose this tool over things_get_list or other views.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the max_results parameter ('Limit number of results returned'). The description adds no parameter-specific context, but since the schema fully documents the optional limiting behavior, it meets the baseline expectation without adding value beyond the structured schema.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb ('Get') and specifies the resource ('to-dos in the Inbox'), which distinguishes it from siblings like things_get_today or things_get_anytime by naming the specific list. However, it lacks explicit differentiation from things_get_list and any explanation of what the Inbox represents in the Things workflow.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus its 16 sibling tools (e.g., things_get_today, things_get_anytime, things_get_someday). It does not indicate that the Inbox typically contains uncategorized or newly captured items, or when a user might prefer it over other list views.
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states 'Get all to-dos' but does not clarify the return format, pagination behavior, or whether results include completed items. The presence of max_results suggests result limiting, but the description does not explain truncation behavior.
Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that is front-loaded and efficient. However, its extreme brevity contributes to the lack of behavioral and contextual information needed for a tool with many siblings and no output schema.
Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 16+ sibling tools with overlapping functionality, no annotations, and no output schema, the description is insufficient. It omits critical context about its relationship to the specific list getters and provides no hint about the return structure.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50% (max_results has a description; list does not). The description adds 'by name', which loosely indicates that the list parameter accepts string identifiers, but does not explain the enum values (inbox, today, etc.) or their semantics. The baseline score is appropriate given the partially documented schema.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('to-dos') and identifies the targeting mechanism ('by name', referring to the list parameter). However, it fails to distinguish this generic getter from list-specific siblings like things_get_inbox or things_get_today, which duplicate its functionality for individual lists.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus the numerous specific list getters (things_get_inbox, things_get_today, etc.). The agent cannot determine whether to use the generic tool with list='inbox' or the specific things_get_inbox tool.
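The undocumented enum could be addressed in the schema itself. A hypothetical fragment for the 'list' parameter follows; the value set is inferred from the sibling tool names in this report, not read from the actual server schema:

```python
# Hypothetical schema fragment for the generic getter's 'list'
# parameter. The enum values are assumptions inferred from the sibling
# tool names (things_get_inbox, things_get_today, etc.).
list_param = {
    "type": "string",
    "enum": ["inbox", "today", "anytime", "upcoming", "someday", "logbook", "trash"],
    "description": (
        "Built-in Things list to read. Prefer the dedicated getter "
        "(e.g. things_get_inbox) when the target list is fixed at "
        "authoring time; use this generic tool only when the list name "
        "is chosen dynamically, e.g. from user input."
    ),
}

# The description now carries the "use X instead of Y when Z" guidance
# the Usage Guidelines critique asks for.
assert "inbox" in list_param["enum"]
assert "things_get_inbox" in list_param["description"]
```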
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden, yet it fails to disclose read-only safety, pagination behavior, default sorting (by completion date?), or whether returned items include full task details or summaries. 'Get' implies a read operation but lacks explicit safety confirmation.
Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at seven words, front-loaded with the key action and resource. However, given zero annotations and 19 sibling tools, the brevity leaves significant gaps in behavioral and usage context: efficient, but undersized for the complexity of the tool ecosystem.
Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a tool with no annotations and no output schema. The description omits the return value structure, a definition of the Logbook concept, sorting behavior, and the relationship to other 'get' operations. For a retrieval tool in a complex task management system, this leaves the agent under-informed about what data structure to expect.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for max_results, so the baseline score applies. The description adds no parameter context beyond the schema, but the schema adequately documents the optional limit; no extra credit is earned since the schema handles it entirely.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
A clear, specific verb ('Get') and resource ('completed to-dos'), with a specific container ('Logbook') that implicitly distinguishes this tool from siblings like things_get_inbox or things_get_trash. It could be improved by explicitly contrasting with active task lists or by defining the Logbook for users unfamiliar with Things terminology.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like things_get_today or things_get_someday, and no prerequisites are mentioned. The agent must infer that this tool retrieves historical completed tasks rather than current active work.
- Behavior2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to indicate safety properties (read-only, idempotent), error handling for invalid project_ids, or whether completed/cancelled to-dos are included. 'Get' implies read-only but this is not explicitly confirmed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness4/5Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief at seven words with no filler. However, given the lack of annotations and output schema, this conciseness comes at the cost of necessary behavioral context, making it slightly too terse rather than efficiently informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness3/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a two-parameter read operation, the description meets minimum viability by stating the core retrieval function. However, with no output schema and no annotations, it omits expected details about return structure, error states, and data scope (active vs. completed to-dos).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters3/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50% (max_results is described, project_id is not). The description partially compensates by referencing 'a specific project', implying the purpose of the undocumented project_id parameter. However, it omits any mention of the max_results pagination control.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose4/5Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'all to-dos' (specific verb + resource) within a project scope. It implicitly distinguishes from sibling 'things_get_projects' (plural) by requiring a specific project identifier and focusing on to-dos rather than project metadata.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines2/5Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'things_get_list' or 'things_get_inbox'. It lacks explicit prerequisites (e.g., needing a valid project_id from 'things_get_projects') and does not mention when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
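For contrast, a fuller definition could close the gaps noted above (the undocumented project_id, the missing behavioral and usage context). The sketch below is hypothetical: the description text, the UUID format, and the exact wording are illustrative assumptions, not taken from the actual server.

```json
{
  "name": "things_get_todos",
  "description": "Get all active to-dos in Things, optionally filtered to a specific project. Read-only; returns an array of to-do objects. Requires a valid project_id from things_get_projects when filtering. Prefer things_get_inbox or things_get_today for built-in list views.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "project_id": {
        "type": "string",
        "description": "UUID of the project to filter by, obtained from things_get_projects. Omit to return to-dos from all projects."
      },
      "max_results": {
        "type": "integer",
        "description": "Maximum number of to-dos to return. Returns all if omitted."
      }
    }
  }
}
```

A revision along these lines would address the Parameters, Behavior, and Usage Guidelines critiques in one description.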
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Get' implies a read-only operation, the description confirms nothing about safety, rate limits, or the return structure. It also does not clarify whether 'Someday' is a static list or dynamic query, or what fields are returned for each to-do.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at five words with no redundant or wasted content. However, it may be overly terse: it offers no context for the 'Someday' list concept, and a sentence more of explanation would improve clarity without sacrificing efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one optional parameter, no nested objects, no output schema), the description is minimally sufficient. It identifies the target resource, but for a tool returning unknown data structures without an output schema, it should ideally describe what constitutes a 'Someday' to-do or what data is returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'all' to-dos, which aligns with the max_results parameter's default behavior (returns all if not specified), but adds no additional semantic context about parameter usage, validation rules, or pagination behavior beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb ('Get') and specifies the exact resource ('to-dos in Someday'). It effectively distinguishes from siblings like things_get_today or things_get_inbox by naming the specific 'Someday' list. However, it assumes familiarity with the Things app taxonomy without clarifying what 'Someday' represents.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like things_get_anytime, things_get_upcoming, or things_get_today. It does not indicate whether Someday items are excluded from other lists or when this specific view is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Get' implies a read-only operation, the description does not specify the return format (since no output schema exists), pagination behavior, error handling when no to-dos exist, or any permission requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no extraneous words. It is appropriately front-loaded with the action and scope. However, given the lack of annotations and output schema, it borders on under-specification rather than optimal conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with one optional parameter, the description adequately identifies the target resource. However, given the absence of an output schema and annotations, the description should ideally describe the return structure or format to be complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single 'max_results' parameter. The description adds no additional parameter context, but the baseline score of 3 is appropriate given the schema already fully documents the optional limit parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('to-dos scheduled for Today'), clearly indicating it retrieves today's tasks. It implicitly distinguishes from siblings like things_get_upcoming or things_get_inbox by specifying the 'Today' scope, though it does not explicitly reference sibling alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like things_get_upcoming, things_get_inbox, or things_get_anytime. There are no stated prerequisites, exclusions, or conditions for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. While 'Navigate' implies a read-only UI operation, the description does not confirm side effects, error handling when IDs are invalid, or whether this opens the Things application externally.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-sentence description is appropriately front-loaded with no wasted words. However, given the complexity of the Things tool ecosystem (20+ siblings), it is slightly too terse to provide sufficient context for correct tool selection.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks guidance on distinguishing this from things_get_list and other list retrieval tools. Without an output schema or annotations, and with all parameters being optional, the description should explain the navigation behavior and expected outcomes to ensure correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema adequately documents all three parameters (id, query, filter). The description aligns with the schema by mentioning 'item or list' but adds no additional syntax details, format constraints, or usage examples beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Navigate') and resource ('item or list in Things'), clearly indicating a UI-focused operation. However, it does not explicitly distinguish from the numerous 'things_get_*' siblings which likely retrieve data without navigating the UI, potentially causing selection confusion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance is provided. Given the many sibling tools for retrieving lists (things_get_inbox, things_get_today, things_get_list), the description fails to clarify whether to use this for UI navigation versus data retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Get' implies a read-only operation, the description fails to define the temporal scope of 'upcoming' (e.g., includes today? next 7 days? all future?), mention pagination behavior, or describe the return format. Critical behavioral traits remain undocumented.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence where every word serves a purpose. It is front-loaded with the action verb, specifies the resource, and includes the distinguishing characteristic '(with dates)' without verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list-retrieval tool with one optional parameter and no output schema, the description is minimally adequate. However, given the rich ecosystem of sibling tools with overlapping concerns (various temporal views), the description should clarify the specific time window of 'upcoming' to ensure correct agent selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the max_results parameter. The description adds no explicit parameter guidance, but given the high schema coverage, it meets the baseline expectation. The description implies filtering/limiting capability through 'Get all' which aligns with the optional max_results parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('scheduled to-dos'), and the parenthetical '(with dates)' provides implicit differentiation from sibling tools like things_get_anytime or things_get_someday. However, it does not explicitly clarify what time range constitutes 'upcoming' versus things_get_today.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. Given the numerous sibling list-retrieval tools (things_get_today, things_get_anytime, things_get_someday, things_get_inbox), the absence of selection criteria forces the agent to guess which tool is appropriate for a given temporal query.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully explains the critical heading/todo grouping behavior ('todos that follow a heading...will appear grouped under it') which is essential for correct usage. However, it omits mutation side effects, error handling, or success indicators.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences with no waste. Front-loaded with the core action, followed by use-case guidance, then critical behavioral details about the items array structure. Appropriately sized for the complexity of the tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (10 parameters with nested array structure) and lack of annotations/output schema, the description adequately covers the most complex aspect (items array structure) but leaves significant gaps regarding standard parameters and post-creation behavior. Minimum viable for this complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is low at 40%. While the description adds valuable context for the complex 'items' parameter (explaining the heading/todo relationship and order dependency), it fails to compensate for the six undocumented parameters (title, notes, deadline, tags, completed, canceled) that lack schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (Creates) and resource (project in Things) and distinguishes from things_add_todo via emphasis on 'sections (headings) and todos' for complex structures. However, it does not explicitly contrast with sibling things_add_items_to_project which adds to existing projects vs. creating new ones.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context ('Perfect for complex projects with multiple phases, days, or categories') suggesting when to use this over simple todo creation. However, lacks explicit when-not-to-use guidance or naming of alternatives like things_add_items_to_project for modifying existing projects.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
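The order-dependent heading/todo grouping the description documents can be illustrated with a sketch of an items payload. The type and title field names are assumptions about the array's shape, not confirmed by the server's schema:

```json
{
  "title": "Website Redesign",
  "items": [
    { "type": "heading", "title": "Phase 1: Research" },
    { "type": "todo", "title": "Interview users" },
    { "type": "todo", "title": "Audit current site" },
    { "type": "heading", "title": "Phase 2: Design" },
    { "type": "todo", "title": "Draft wireframes" }
  ]
}
```

Under the documented grouping rule, the two research todos would appear under 'Phase 1: Research' and the wireframes todo under 'Phase 2: Design', because each todo follows its heading in the array.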
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden of behavioral disclosure. It specifies the 'active' filter (distinguishing from archived/trashed projects), implying a read-only operation. However, it omits details about the return format, pagination behavior with max_results, or what constitutes 'active' in the Things app context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at four words. No filler content, though the brevity approaches under-specification. Appropriately front-loaded for a simple list retrieval operation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter list tool, the description identifies what is retrieved ('active projects'). However, with no output schema provided, it should ideally specify that it returns an array/collection of project objects rather than leaving this implicit.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single 'max_results' parameter, which has a complete description in the schema. The description adds no parameter-specific context, meeting the baseline expectation when the schema is self-documenting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Get' with resource 'projects' and scope 'active'. The plural 'projects' distinguishes it from sibling 'things_get_project' (singular), and 'active' distinguishes from siblings like 'things_get_trash' or 'things_get_logbook'. However, it lacks explicit clarification about returning a collection vs. the singular variant.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus the singular 'things_get_project' or other project-related tools. No mention of prerequisites or when-not-to-use scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It specifies the 'deleted' state of the to-dos, but fails to indicate whether this is a read-only operation (presumed but not stated), what data structure is returned, or whether trashed items include metadata like deletion dates. It meets minimum disclosure by defining the trashed status.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. The most critical information (action and scope) is front-loaded, making it immediately scannable. No restructuring or trimming is needed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with one optional parameter and 100% schema coverage, the description is minimally adequate. However, given the absence of an output schema and annotations, the description could be improved by mentioning the return type (list of to-dos) or confirming the read-only nature of the operation. As written, it covers the essentials but leaves gaps in contextual richness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the 'max_results' parameter. The description mentions 'all' to-dos, which aligns with the parameter's default behavior, but adds no additional semantic context (e.g., performance implications of large result sets, pagination behavior). Baseline 3 is appropriate given the schema already fully documents the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the resource ('deleted to-dos in the Trash') and action ('Get'), effectively distinguishing this from sibling list tools like things_get_inbox or things_get_logbook by specifying the Trash scope. However, it uses the generic verb 'Get' rather than 'List' or 'Retrieve', and does not explicitly contrast with similar archival states (e.g., completed vs. deleted).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives (e.g., when to check Trash vs. Logbook for completed items). While 'Trash' implies a specific use case, there is no 'when to use' or 'when not to use' instruction to help the agent decide between the many available list-retrieval siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It explains checklist behavior ('individually checked off', 'visual progress feedback') but omits other critical behavioral traits like error handling, idempotency, or what happens when both list_id and list are provided.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, both earning their place. The first establishes purpose and key feature; the second provides specific usage guidance. No redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an 11-parameter creation tool with no annotations or output schema, the description is incomplete. While checklist functionality is well-covered, the lack of parameter documentation for deadline, completed flags, and heading leaves significant gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is low at 45%. The description elaborates on checklist usage patterns but fails to compensate for undocumented parameters like deadline (format?), completed/canceled (behavior when true?), or notes. Heavy focus on checklist_items leaves other parameters unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Add[s] a new to-do to Things' with a specific verb and resource. It distinguishes from things_add_project by noting checklists are for tasks that 'don't warrant a separate project,' though this differentiation could be more explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear guidance on when to use the checklist feature ('when task has multiple components'), but lacks explicit guidance on when to use this tool versus siblings like things_add_project or things_update_todo. No mention of prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
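A minimal call exercising the checklist feature might look like the following. The payload shape is an assumption based on the parameter names the review mentions (title, notes, checklist_items), not a confirmed example from the server:

```json
{
  "title": "Pack for conference",
  "notes": "Flight leaves 7am Thursday",
  "checklist_items": [
    "Laptop and charger",
    "Badge confirmation",
    "Business cards"
  ]
}
```

Per the description, each checklist item could then be individually checked off within the single to-do rather than spawning a separate project.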
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds value by disclosing returned fields (deadline, notes, status, etc.), hinting at output structure since no output schema exists. However, it omits other behavioral traits like error handling (404 if not found), read-only safety, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the action verb. Efficient structure with minimal waste, though 'etc.' is slightly vague regarding additional returned fields. Appropriately concise for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter read operation with no annotations, the description is adequate but has gaps. It partially compensates for the missing output schema by listing example fields, though it could clarify that it returns a single object versus a collection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for the single 'id' parameter. The description does not mention parameters explicitly, but with complete schema documentation, the baseline score of 3 is appropriate as the schema sufficiently carries the semantic load.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Get' with clear resource 'detailed information about a specific to-do'. The phrase 'specific to-do' effectively distinguishes this from sibling list operations (things_get_today, things_get_inbox) and mutation operations (things_update_todo, things_add_todo).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'specific to-do' implies usage when an ID is known versus listing operations, but lacks explicit guidance on when to use this versus list views or prerequisite steps (e.g., obtaining an ID first). No explicit alternatives or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the burden of behavioral disclosure. It successfully explains the visual grouping behavior ('headings act as visual separators for the todos that follow them'), but lacks disclosure about safety characteristics, error handling, or whether this operation is idempotent or destructive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. The first establishes the action and scope immediately, while the second provides essential structural context about how the array items relate to each other visually.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that this is a mutation tool with no output schema and no annotations, the description adequately explains the input structure but leaves gaps around the return value, error conditions, and the purpose of the 'operation' parameter which lacks schema documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 67% (id and items are well-documented, operation is not). The description adds semantic context about the flat array structure and heading behavior, but doesn't compensate for the undocumented 'operation' parameter or add details beyond what the schema provides for 'id' and 'items'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (Add) and resources (todos and headings) with context (existing project). It effectively distinguishes from sibling tools like things_add_project (creates projects) and things_add_todo (likely creates standalone todos) by specifying 'existing project' and 'items'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5 — Does the description explain when to use this tool, when not to, or what alternatives exist?
While the main description doesn't explicitly list alternatives, it clearly scopes the tool to 'existing' projects, distinguishing it from things_add_project. The schema description for the 'id' parameter provides explicit workflow guidance stating 'You MUST use things_get_projects to find the correct ID first,' which helps prevent common usage errors.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
Then, authenticate using GitHub.
Browse examples.
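Before committing glama.json, it can be worth confirming that the file parses and actually lists a maintainer. The following is a minimal, hypothetical pre-push check (the `validate_glama` helper is not part of any Glama tooling — it is an illustration only):

```python
import json

# Example glama.json content, matching the structure shown above.
GLAMA_JSON = """
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": ["your-github-username"]
}
"""

def validate_glama(text):
    """Parse glama.json text and verify it names at least one maintainer."""
    data = json.loads(text)  # raises ValueError if the JSON is malformed
    maintainers = data.get("maintainers", [])
    if not isinstance(maintainers, list) or not maintainers:
        raise ValueError("maintainers must be a non-empty list")
    return data

config = validate_glama(GLAMA_JSON)
print(config["maintainers"])
```

A check like this catches the most common claim failure (a typo that makes the JSON unparseable) before Glama's scanner ever sees the file.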
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
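The weighting described above can be sketched as a short calculation. This is an illustrative reconstruction from the published percentages, not Glama's actual implementation; dimension names are shorthand:

```python
# Per-tool dimension weights, as stated in the scoring description.
DIM_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tool_tdqs(scores):
    """Weighted 1-5 Tool Definition Quality Score for one tool."""
    return sum(DIM_WEIGHTS[d] * s for d, s in scores.items())

def overall_score(per_tool_scores, coherence):
    tdqs = [tool_tdqs(s) for s in per_tool_scores]
    # 60% mean + 40% minimum: one badly described tool drags the whole server down.
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    for threshold, label in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= threshold:
            return label
    return "F"

# Example: one perfectly described tool, one mediocre one, coherence 4.0.
tools = [{d: 5 for d in DIM_WEIGHTS}, {d: 3 for d in DIM_WEIGHTS}]
score = overall_score(tools, 4.0)
print(round(score, 2), tier(score))  # 3.72 A
```

Note how the minimum term works: the mean TDQS here is 4.0, but the weaker tool's 3.0 pulls definition quality down to 3.6 before coherence is blended in.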
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/hildersantos/things-mcp'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.