prts-mcp
Server Quality Checklist
- Disambiguation 4/5
Most tools have distinct purposes (operator data, story/event listing, reading content, searching), but some overlap exists: read_activity and read_story both handle story content, and get_operator_archives vs get_operator_basic_info could be confusing without careful reading of their descriptions. The descriptions help clarify, but minor ambiguity remains.
- Naming Consistency 4/5
Naming follows a consistent verb_noun pattern (e.g., get_operator_archives, list_stories, read_activity) with clear actions. However, there are minor deviations: search_prts uses 'search' instead of a more specific verb like 'find', and read_prts_page includes 'prts' in the noun, which slightly breaks the pattern compared to others like read_story.
- Tool Count 5/5
With 9 tools, the count is well-scoped for a server focused on Arknights game data (operators, stories, wiki content). Each tool serves a clear purpose, covering key aspects like operator info retrieval, story management, and wiki access without being overly broad or sparse.
- Completeness 4/5
The tool set provides good coverage for accessing operator data, story events, and wiki content, with clear workflows (e.g., list_story_events -> list_stories -> read_story). Minor gaps exist: there's no tool for updating or deleting data (though likely not needed for read-only access), and operator tools lack a search or listing function, requiring exact names. Overall, it supports core use cases effectively.
Average 3.4/5 across 9 of 9 tools scored.
See the tool scores section below for per-tool breakdowns.
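The story-reading workflow noted in the Completeness item (list_story_events -> list_stories -> read_story) can be sketched as a chain of tool calls. This is a minimal sketch: `call_tool` stands in for whatever MCP client invocation the agent uses, and the result field names ("events", "event_id", "stories", "story_key") are assumptions, not the server's documented output schema.

```python
# Hypothetical chaining of the three story tools. `call_tool(name, args)`
# is a placeholder for the agent's MCP client; field names are guesses.

def read_first_chapter(call_tool):
    # 1. Discover events (the category filter is optional per the description).
    events = call_tool("list_story_events", {"category": "activities"})["events"]
    event_id = events[0]["event_id"]  # e.g. "act31side"

    # 2. List chapters for that event, in official order.
    stories = call_tool("list_stories", {"event_id": event_id})["stories"]
    story_key = stories[0]["story_key"]

    # 3. Read the first chapter's full dialogue.
    return call_tool("read_story", {"story_key": story_key,
                                    "include_narration": True})
```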
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.1
Tools from this server were used 14 times in the last 30 days.
This repository includes a glama.json configuration file.
This server provides 9 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Tool Scores
get_operator_archives
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. The description only states what the tool does (retrieves archive materials) but doesn't disclose any behavioral traits such as whether this is a read-only operation, potential rate limits, authentication requirements, error conditions, or what happens with invalid operator names. For a tool with zero annotation coverage, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—a single sentence that efficiently communicates the core functionality and input requirement. There's no wasted language, and the key information (what it does and the input format) is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there's an output schema (which presumably documents return values), the description doesn't need to explain output structure. However, for a tool with no annotations and minimal parameter documentation, the description is incomplete. It covers basic purpose and input format but lacks behavioral context, usage differentiation from siblings, and error handling information that would be helpful for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds some semantic context for the single parameter: it clarifies that 'operator_name' should be the Chinese name as used in-game (e.g., '阿米娅'). However, with 0% schema description coverage, the schema provides no parameter documentation. The description compensates partially by explaining the expected format, but doesn't provide examples beyond one, list valid operators, or explain handling of edge cases.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: '获取指定干员的档案资料' (get specified operator's archive materials). It specifies the resource (operator archives) and the action (get/retrieve). However, it doesn't distinguish this from sibling tools like get_operator_basic_info or get_operator_voicelines, which likely retrieve different aspects of operator data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance: it specifies that operator names should be in Chinese as used in-game (e.g., '阿米娅'). However, it offers no guidance on when to use this tool versus alternatives like get_operator_basic_info or list_stories, nor does it mention any prerequisites or exclusions. The guidance is limited to input format only.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
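One way to close the Behavior gap scored above is to ship MCP tool annotations alongside a description that discloses read-only behavior and error handling. The annotation field names (readOnlyHint, idempotentHint, openWorldHint) come from the MCP specification; the description wording and schema text below are illustrative suggestions, not the server's actual definitions.

```python
# Sketch of an annotated MCP tool definition for get_operator_archives.
# Annotation keys follow the MCP spec; all wording here is illustrative.
get_operator_archives_tool = {
    "name": "get_operator_archives",
    "description": (
        "Read-only: retrieves an operator's archive (lore) entries. "
        "operator_name must be the in-game Chinese name (e.g. '阿米娅'); "
        "an unknown name returns an error rather than a fuzzy match. "
        "Use get_operator_basic_info for profession/rarity/tags instead."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "operator_name": {
                "type": "string",
                "description": "In-game Chinese operator name, e.g. '阿米娅'.",
            }
        },
        "required": ["operator_name"],
    },
    "annotations": {
        "readOnlyHint": True,    # no side effects on the world
        "idempotentHint": True,  # repeated calls return the same data
        "openWorldHint": True,   # queries the external PRTS wiki
    },
}
```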
read_prts_page
- Behavior 2/5
No annotations are provided, so the description carries full burden. It states it reads '纯文本内容' (plain text content), which implies a read-only operation, but doesn't disclose behavioral traits like authentication needs, rate limits, error conditions, or what happens with invalid page titles. For a tool with zero annotation coverage, this is insufficient.
- Conciseness 5/5
The description is a single, efficient sentence in Chinese that directly states the tool's function without unnecessary words. It's appropriately sized and front-loaded with the core action.
- Completeness 3/5
Given 1 parameter, no annotations, and an output schema exists (so return values are documented elsewhere), the description is minimally complete. It covers the basic purpose but lacks usage guidelines and behavioral details, making it adequate but with clear gaps for a read operation tool.
- Parameters 3/5
Schema description coverage is 0%, so the description must compensate. It implies the parameter is for specifying a wiki entry ('指定词条'), which aligns with the 'page_title' parameter. However, it doesn't add details beyond what's obvious from the parameter name, such as format examples or constraints. With 1 parameter and low schema coverage, baseline 3 is appropriate as it minimally addresses the parameter.
- Purpose 4/5
The description clearly states the action ('读取' meaning 'read') and resource ('PRTS 维基指定词条的纯文本内容' meaning 'plain text content of specified PRTS wiki entry'). It distinguishes from siblings like 'search_prts' (searching) and 'read_story' (reading story content). However, it doesn't explicitly differentiate from 'read_activity' which might have similar reading functionality.
- Usage Guidelines 2/5
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to use 'read_prts_page' versus 'search_prts' (for searching) or 'read_story' (for story content). No prerequisites, exclusions, or comparative context is provided.
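The missing "use X instead of Y when Z" guidance could be added directly to the description string. Below is a candidate rewrite for read_prts_page that encodes the routing between tools described in this review; the wording is illustrative, not the server's actual text.

```python
# Candidate replacement description for read_prts_page, encoding explicit
# usage guidance relative to its sibling tools. Wording is illustrative.
READ_PRTS_PAGE_DESC = (
    "Read the plain-text content of a named PRTS wiki entry. Requires the "
    "exact page title; when the title is unknown, call search_prts first "
    "and pick a title from its results. For story scripts use read_story "
    "instead; this tool returns wiki prose, not dialogue."
)
```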
search_prts
- Behavior 2/5
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the return format ('词条标题和摘要' meaning entry titles and summaries) which is helpful, but lacks critical details like whether this is a read-only operation, potential rate limits, authentication requirements, or how results are sorted/ranked. For a search tool with zero annotation coverage, this leaves significant behavioral gaps.
- Conciseness 5/5
The description is extremely concise - a single sentence that states the action, target, and return format with zero wasted words. It's front-loaded with the core purpose and doesn't include unnecessary elaboration.
- Completeness 3/5
Given the tool has an output schema (which should document return values), the description doesn't need to explain return format details. However, for a search tool with 2 parameters and no annotations, the description is minimally adequate - it states what the tool does but lacks behavioral context, usage guidance, and parameter semantics that would make it more complete.
- Parameters 3/5
Schema description coverage is 0%, so the schema provides no parameter descriptions. The tool description mentions searching but doesn't explain what the 'query' parameter should contain (e.g., keywords, exact titles) or the meaning of 'limit' beyond what's obvious from the name. It adds minimal semantic value beyond what can be inferred from parameter names alone.
- Purpose 4/5
The description clearly states the action ('搜索' meaning search) and resource ('PRTS 明日方舟中文维基词条' meaning PRTS Arknights Chinese wiki entries), making the purpose immediately understandable. It distinguishes from siblings like 'read_prts_page' by focusing on search rather than direct page reading. However, it doesn't explicitly differentiate from potential search-like siblings that might exist.
- Usage Guidelines 2/5
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose search_prts over read_prts_page for finding content, nor does it specify prerequisites or constraints. The agent must infer usage from the tool name and description alone.
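The 0% schema description coverage flagged for search_prts could be fixed in the input schema itself rather than the tool description. A sketch follows; the default of 10 and the minimum/maximum bounds are assumptions about reasonable values, not the server's actual constraints.

```python
# JSON Schema fragment with per-parameter descriptions for search_prts.
# Numeric defaults and bounds are illustrative assumptions.
search_prts_input_schema = {
    "type": "object",
    "properties": {
        "query": {
            "type": "string",
            "description": "Keywords or an approximate page title to search "
                           "the PRTS wiki for, e.g. an operator or event name.",
        },
        "limit": {
            "type": "integer",
            "description": "Maximum number of results to return.",
            "default": 10,
            "minimum": 1,
            "maximum": 50,
        },
    },
    "required": ["query"],
}
```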
get_operator_basic_info
- Behavior 2/5
No annotations are provided, so the description carries the full burden of behavioral disclosure. While it indicates this is a read operation ('获取' - get/retrieve), it doesn't disclose important behavioral traits such as error handling (what happens if the operator name isn't found), response format, rate limits, authentication requirements, or whether this is a real-time query versus cached data. The description is minimal and lacks behavioral context.
- Conciseness 4/5
The description is appropriately concise - a single sentence that efficiently communicates the tool's purpose and key usage requirement. It's front-loaded with the main functionality, followed by the important input format specification. There's no wasted text, though it could potentially benefit from slightly more structure or separation of concerns.
- Completeness 3/5
Given that there's an output schema (which presumably documents the return values), the description doesn't need to explain return values. However, for a tool with no annotations and minimal schema documentation, the description should do more to compensate. It adequately covers the basic purpose and input format but lacks behavioral context, error handling information, and differentiation from sibling tools. The description is minimally viable but has clear gaps.
- Parameters 3/5
The input schema has 0% description coverage, so the description must compensate. It adds meaning by specifying that the operator_name parameter should be in Chinese as used in-game, providing an example ('阿米娅'). However, it doesn't explain the parameter's purpose beyond what's obvious from the name, nor does it provide guidance on valid values, character limits, or formatting beyond language. The description adds some value but doesn't fully compensate for the schema's lack of documentation.
- Purpose 4/5
The description clearly states the tool's purpose: retrieving basic information about a specific operator, listing the types of information included (profession, rarity, faction, recruitment tags, talents). It uses a specific verb ('获取' - get/retrieve) and resource ('干员的基本信息' - operator's basic information). However, it doesn't explicitly differentiate from sibling tools like get_operator_archives or get_operator_voicelines, which likely provide different types of operator information.
- Usage Guidelines 3/5
The description provides implied usage guidance by specifying that operator names should be in Chinese as used in-game (e.g., '阿米娅'). However, it doesn't explicitly state when to use this tool versus alternatives like get_operator_archives or search_prts, nor does it mention any prerequisites or exclusions. The guidance is limited to input format requirements.
get_operator_voicelines
- Behavior 2/5
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the language requirement (Chinese names) but doesn't describe what the tool returns (voice line format, structure), whether it's read-only or has side effects, error conditions, or performance characteristics. The description provides minimal behavioral context beyond the basic operation.
- Conciseness 5/5
The description is extremely concise (one sentence) and front-loaded with the core purpose. Every word serves a purpose: stating the action, specifying the resource, and providing critical input guidance. There's zero waste or redundancy.
- Completeness 3/5
Given the tool's moderate complexity (single parameter but specific format requirements), no annotations, and the presence of an output schema, the description is minimally adequate. It covers the core operation and parameter semantics but lacks behavioral context about what voice lines are returned, their structure, or any limitations. The output schema existence reduces but doesn't eliminate the need for some behavioral description.
- Parameters 4/5
The description adds crucial semantic information beyond the schema: it specifies that 'operator_name' must be '游戏内中文名' (in-game Chinese name) like '阿米娅'. With 0% schema description coverage and only 1 parameter, this guidance significantly compensates for the schema's lack of documentation, though it doesn't cover all possible edge cases or validation rules.
- Purpose 4/5
The description clearly states the tool's purpose: '获取指定干员的语音记录' (get voice lines for a specified operator). It specifies the verb (get) and resource (operator voice lines), but doesn't explicitly differentiate from siblings like 'get_operator_basic_info' or 'get_operator_archives' that might provide different operator data.
- Usage Guidelines 3/5
The description implies usage context by specifying '使用游戏内中文名' (use in-game Chinese name), which helps guide parameter input. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'get_operator_archives' or 'get_operator_basic_info', leaving the agent to infer based on tool names alone.
list_stories
- Behavior 2/5
No annotations are provided, so the description carries the full burden. It mentions the output is ordered officially, which adds some behavioral context, but fails to disclose critical traits such as whether this is a read-only operation, potential rate limits, authentication needs, error handling, or pagination. For a tool with no annotations, this leaves significant gaps in understanding its behavior.
- Conciseness 5/5
The description is appropriately sized and front-loaded: the first sentence states the core purpose clearly, and the second provides essential parameter guidance. There is no wasted text, and the structure efficiently conveys necessary information in two concise sentences.
- Completeness 4/5
Given the tool has an output schema (which handles return values), one parameter with low schema coverage, and no annotations, the description is reasonably complete. It covers the purpose, parameter semantics, and hints at usage, but could improve by adding more behavioral details (e.g., safety, performance) to fully compensate for the lack of annotations.
- Parameters 4/5
The description adds meaningful semantics beyond the input schema, which has 0% description coverage. It explains that 'event_id' is an activity ID like 'act31side' and specifies it can be obtained from 'list_story_events'. This clarifies the parameter's purpose and source, compensating well for the lack of schema descriptions, though it doesn't detail format constraints beyond the example.
- Purpose 4/5
The description clearly states the tool's purpose: '列出指定活动的所有剧情章节' (list all story chapters for a specified event). It specifies the verb (list), resource (story chapters), and scope (for a specified event, ordered officially). However, it doesn't explicitly differentiate from siblings like 'read_story' or 'list_story_events', which could handle similar story-related data.
- Usage Guidelines 3/5
The description implies usage by specifying that 'event_id' can be obtained from 'list_story_events', suggesting a workflow. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'read_story' or 'list_story_events', nor does it mention any exclusions or prerequisites beyond the parameter source.
read_story
- Behavior 2/5
No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes what the tool does (reads dialogue) and includes a parameter for narration inclusion, but it doesn't cover important behavioral aspects such as error handling, response format (though an output schema exists), rate limits, authentication needs, or whether it's a read-only operation. The description is functional but lacks depth for safe and effective use.
- Conciseness 5/5
The description is highly concise and well-structured. It starts with a clear purpose statement, followed by a bullet-point list of parameters with brief explanations. Every sentence adds value without redundancy, making it easy to scan and understand quickly.
- Completeness 4/5
Given the tool's moderate complexity (2 parameters, no annotations, but with an output schema), the description is reasonably complete. It covers the purpose and parameters adequately, and since an output schema exists, it doesn't need to explain return values. However, it could improve by addressing behavioral aspects like error cases or usage context relative to siblings.
- Parameters 4/5
The description adds meaningful semantics beyond the input schema, which has 0% description coverage. It explains that 'story_key' is a chapter key with an example format and references 'list_stories' as a source, and clarifies that 'include_narration' controls whether narration and scene descriptions are included. This compensates well for the schema's lack of descriptions, though it doesn't detail all possible values or constraints.
- Purpose 4/5
The description clearly states the tool's purpose: '读取单章剧情的完整台词' (read the complete dialogue of a single story chapter). It specifies the verb ('读取' - read) and resource ('单章剧情' - single story chapter), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'list_stories' or 'read_activity', which could provide overlapping functionality.
- Usage Guidelines 3/5
The description provides some implied usage guidance by mentioning that 'story_key' can be obtained from 'list_stories', suggesting a workflow. However, it doesn't explicitly state when to use this tool versus alternatives like 'list_story_events' or 'read_activity', nor does it provide exclusions or clear context for choosing this specific tool.
list_story_events
- Behavior 3/5
No annotations are provided, so the description carries the full burden. It implies this is a read-only operation (listing events) and mentions optional filtering by category. However, it doesn't disclose behavioral traits like rate limits, authentication needs, pagination, or what happens when no events match. The description adds some context but lacks comprehensive behavioral details.
- Conciseness 5/5
The description is appropriately sized and front-loaded: the first sentence states the purpose, followed by a clear parameter explanation and usage suggestion. Every sentence adds value without redundancy, making it efficient and well-structured.
- Completeness 4/5
Given the tool's low complexity (1 optional parameter) and the presence of an output schema, the description is mostly complete. It covers the purpose, parameter semantics, and usage guidelines. However, without annotations, it could benefit from more behavioral context (e.g., response format hints), though the output schema mitigates this gap.
- Parameters 4/5
The input schema has 1 parameter with 0% description coverage. The description compensates by explaining the 'category' parameter: it's optional, with values 'main' for main story chapters and 'activities' for event stories (including collaborations), and if not provided, returns all events. This adds meaningful semantics beyond the schema, though it doesn't cover edge cases like invalid categories.
- Purpose 4/5
The description clearly states the tool's purpose: '列出明日方舟剧情活动列表' (list Arknights story events). It specifies the resource (story events) and the action (list). However, it doesn't explicitly differentiate from sibling tools like 'list_stories' beyond an implied usage suggestion.
- Usage Guidelines 4/5
The description provides clear usage guidance: '建议先调用本工具了解活动全貌,再用 list_stories 查询具体章节' (suggest calling this tool first to understand the overall events, then use list_stories to query specific chapters). It mentions an alternative (list_stories) and suggests a workflow, but doesn't explicitly state when NOT to use this tool or compare with other siblings like 'read_activity'.
read_activity
- Behavior 3/5
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context: it mentions that activity text may be large (implying potential performance considerations) and explains pagination behavior (page parameter for batch retrieval, default returns all chapters). However, it lacks details on permissions, rate limits, error handling, or response format, leaving gaps for a tool with pagination and filtering parameters.
- Conciseness 4/5
The description is well-structured and appropriately sized: it starts with the core purpose, adds usage context, and then details parameters in a clear Args section. Every sentence earns its place, though minor redundancy exists (e.g., '单次活动文本量可能较大' (a single activity's text volume may be large) could be integrated more tightly). It's front-loaded with key information, making it efficient for an agent to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, pagination, filtering), the lack of annotations, and the presence of an output schema (which covers return values), the description is largely complete. It covers purpose, usage, and parameter semantics well. However, it lacks behavioral details such as error cases or performance limits, which would be helpful despite the output schema. For a read-only tool with an output schema, it exceeds the minimum viable level but is not fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate fully. It does so effectively: it explains all 4 parameters in the Args section, clarifying event_id format (e.g., 'act31side'), include_narration default and purpose, page usage (starts at 1, optional for full retrieval), and page_size default. This adds essential meaning beyond the bare schema, making parameters understandable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
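The parameter semantics praised above can be made concrete with a hypothetical call. The argument names and example values below are taken from the review's quotes of the Args section; the surrounding call shape is an assumption, since the actual request format depends on the MCP client.

```python
# Hypothetical arguments for the tool reviewed above. Names, the example
# event_id 'act31side', and the pagination semantics come from the review;
# the dict wrapper itself is illustrative, not the tool's real wire format.
arguments = {
    "event_id": "act31side",    # event identifier, e.g. 'act31side'
    "include_narration": True,  # whether to include narration lines
    "page": 1,                  # pagination starts at 1; omit to fetch all chapters
    "page_size": 20,            # chapters per page
}

def paged(args: dict) -> bool:
    """True when the caller requested a specific page rather than everything."""
    return "page" in args
```

An agent that omits `page` would, per the quoted description, receive all chapters in one response.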
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: '读取整个活动的完整剧情台词(按官方章节顺序合并)' (read the complete script/dialogue of an entire activity in official chapter order). It specifies the resource (activity story/dialogue) and the verb (read/retrieve), distinguishing it from siblings like list_stories or read_story by focusing on full activity content rather than listing or reading individual stories.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: '适合需要了解完整活动故事的场景' (suitable for scenarios needing to understand the complete activity story). It also mentions that activity text volume may be large, suggesting pagination usage. However, it does not explicitly state when not to use it or name specific alternatives among siblings, such as read_story for individual stories.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
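The weighting described above can be sketched in Python. The formula weights and tier cutoffs are taken from the preceding paragraphs; the per-tool TDQS values and coherence sub-scores below are made-up illustrations, not this server's actual numbers.

```python
# Sketch of the published quality-score formula. Weights and tier cutoffs
# follow the description above; the sample inputs are hypothetical.

def definition_quality(tdqs_scores):
    """Server-level definition quality: 60% mean TDQS + 40% minimum TDQS."""
    mean = sum(tdqs_scores) / len(tdqs_scores)
    return 0.6 * mean + 0.4 * min(tdqs_scores)

def overall_score(tdqs_scores, coherence):
    """Overall: 70% Tool Definition Quality + 30% Server Coherence."""
    return 0.7 * definition_quality(tdqs_scores) + 0.3 * coherence

def tier(score):
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"

# Hypothetical server: eight well-described tools and one weak one.
tools = [4.2, 4.0, 3.8, 4.1, 3.9, 4.0, 3.7, 4.3, 2.0]
coherence = (4 + 4 + 5 + 4) / 4  # Disambiguation, Naming, Count, Completeness

score = overall_score(tools, coherence)
print(round(score, 2), tier(score))
```

Note how the 40% minimum-TDQS term works: the single 2.0 tool drags definition quality well below the mean, which is exactly the "one poorly described tool pulls the score down" behavior described above.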
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/3aKHP/prts-mcp'
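The same request can be made from Python with only the standard library. The endpoint path is copied from the curl example above; the shape of the JSON response body is an assumption (the API documentation defines it), so the network call is left commented out.

```python
# Minimal Python equivalent of the curl example, stdlib only.
# The base URL and server path come from the curl command; the JSON
# response structure is not documented here and is assumed.
import json
import urllib.request

BASE = "https://glama.ai/api/mcp/v1"

def server_url(server_id: str, slug: str) -> str:
    """Build the per-server endpoint URL."""
    return f"{BASE}/servers/{server_id}/{slug}"

def fetch_server(server_id: str, slug: str) -> dict:
    """GET the server record and decode the JSON body."""
    with urllib.request.urlopen(server_url(server_id, slug)) as resp:
        return json.load(resp)

url = server_url("3aKHP", "prts-mcp")
# fetch_server("3aKHP", "prts-mcp")  # requires network access
```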
If you have feedback or need assistance with the MCP directory API, please join our Discord server.