AI Skill Store
Server Details
Agent-first skill marketplace with USK (Universal Skill Kit) open standard. Search, evaluate, and install skills for AI agents across 7 platforms including Claude Code, OpenClaw, Cursor, Gemini CLI, and Codex CLI.
- Status
- Healthy
- Last Tested
- Transport
- Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 10 of 10 tools scored.
Each tool has a clearly distinct purpose with no significant overlap. Tools like check_vetting_status, download_skill, get_install_guide, and get_skill all target different aspects of skill management, while list_categories and list_platforms serve distinct informational roles. The descriptions clearly differentiate their functions, preventing agent misselection.
All tool names follow a consistent verb_noun pattern using snake_case (e.g., check_vetting_status, download_skill, get_install_guide). The naming convention is predictable throughout the set, with verbs like 'check', 'download', 'get', 'list', 'register', 'search', and 'upload' consistently applied to their respective nouns.
With 10 tools, the count is well-scoped for an AI skill store server. Each tool serves a clear purpose in the skill lifecycle—from registration and upload to search, download, and installation guidance—without being excessive or insufficient for the domain's needs.
The tool set provides comprehensive coverage for the AI skill store domain, including developer registration, skill upload and vetting, search and retrieval, download and installation, and platform/category listing. There are no obvious gaps; agents can perform full CRUD-like operations and navigate the skill lifecycle seamlessly.
Available Tools
10 tools

check_vetting_status (Grade A)
Checks the security vetting status of an uploaded skill.
Requires the version_id returned by upload_skill and an API key.
Args:
version_id: skill version ID (the version_id or vetting_job_id from the upload_skill result)
api_key: developer API key (only the skill owner can query the status)
Returns:
Vetting status message

| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
| version_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
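As a sketch of how an agent would invoke this tool (MCP tool invocations travel as JSON-RPC 2.0 `tools/call` requests; the `version_id` and `api_key` values below are placeholders, not real identifiers):

```python
import json

# MCP tool invocations use the JSON-RPC 2.0 "tools/call" method,
# with the tool name and its arguments carried in "params".
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_vetting_status",
        "arguments": {
            # version_id comes from a prior upload_skill result;
            # both values here are placeholders.
            "version_id": "<version_id-from-upload_skill>",
            "api_key": "<developer-api-key>",
        },
    },
}
print(json.dumps(request, indent=2))
```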
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that an API key is required and that only skill owners can query the status (implying authentication needs), and mentions the dependency on upload_skill results. However, it lacks details on rate limits, error handling, or the specific states that might be returned (though the output schema may cover returns).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with the purpose stated first, followed by prerequisites and parameter explanations. It uses bullet-like sections (Args, Returns) for structure, though the formatting could be slightly more streamlined (e.g., integrating Args into the flow).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 2 parameters, no annotations, 0% schema coverage, and an output schema (which handles return values), the description is mostly complete. It covers purpose, prerequisites, and parameter semantics well, but could add more behavioral context (e.g., error cases or state details) to be fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate fully. It adds crucial meaning beyond the schema: version_id is explained as coming from upload_skill results or being a vetting_job_id, and api_key is clarified as a developer key only usable by skill owners. This provides essential context not in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('check security vetting status') and resource ('uploaded skill'), distinguishing it from siblings like upload_skill (which uploads) or get_skill (which retrieves skill details). It explicitly mentions 'vetting status', which is unique among the listed tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: after upload_skill (stating it requires version_id from that result) and for checking vetting status. However, it does not explicitly state when not to use it or name alternatives (e.g., vs. get_skill for general skill info), though the context implies it's specific to vetting.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
download_skill (Grade B)
Downloads a skill file. If platform is specified, you receive a package converted for that platform.
Args:
skill_id: ID of the skill to download
platform: platform (OpenClaw, ClaudeCode, ClaudeCodeAgentSkill, CustomAgent, Cursor, GeminiCLI, CodexCLI). If omitted, the original .skill file is downloaded.
save_dir: directory path to save to (if omitted, saves to a temporary directory)
Returns:
Saved file path, or an error message

| Name | Required | Description | Default |
|---|---|---|---|
| platform | No | | |
| save_dir | No | | |
| skill_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
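The optional parameters can simply be omitted from the arguments object. A minimal sketch (the `tool_call` helper and the `skill_id` value are hypothetical placeholders; only the JSON-RPC `tools/call` envelope follows the MCP shape):

```python
import json

def tool_call(name, arguments, request_id=1):
    """Hypothetical helper: wrap a tool call in an MCP JSON-RPC envelope."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Omitting "platform" downloads the original .skill file;
# specifying it requests a package converted for that platform.
original = tool_call("download_skill", {"skill_id": "<skill_id>"})
converted = tool_call("download_skill", {
    "skill_id": "<skill_id>",
    "platform": "ClaudeCode",   # one of the seven supported platforms
    "save_dir": "/tmp/skills",  # optional; server falls back to a temp dir
})
print(json.dumps(converted, indent=2))
```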
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions file storage behavior (saves to the specified or a temporary directory) and returns a file path or error, which adds some context. However, it lacks details on permissions, rate limits, side effects (e.g., whether it modifies server state), or error conditions beyond a generic mention, making behavioral traits incomplete for a download operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized: a brief purpose statement, key behavior note, and clear sections for Args and Returns. Each sentence adds value without redundancy. However, the Korean text might reduce clarity for non-Korean agents, and some details could be more integrated, but overall it's efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters with 0% schema coverage and an output schema (implied by Returns), the description adds parameter semantics and return information, compensating partially. However, for a download tool with no annotations, it lacks error handling specifics, file format details, or platform conversion implications, making it adequate but with gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaningful semantics: 'skill_id' is for the skill to download, 'platform' specifies conversion with enum-like options and default behavior, and 'save_dir' indicates storage path with default to temp. This clarifies beyond schema titles, though it doesn't detail format constraints or validation rules for parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: '스킬 파일을 다운로드합니다' (downloads skill files). It specifies the verb (download) and resource (skill files), and distinguishes from siblings like 'get_skill' (which likely retrieves metadata) by focusing on file download. However, it doesn't explicitly contrast with 'upload_skill' or other siblings beyond the core action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context: download skill files, optionally converted for a platform. It mentions that leaving 'platform' empty downloads the original file, providing some guidance. However, it lacks explicit when-to-use vs. alternatives (e.g., when to use this over 'get_skill' for metadata), prerequisites, or exclusions, leaving usage partially inferred.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_install_guide (Grade B)
Explains how to install a skill on a specific platform.
Args:
skill_id: skill ID
platform: platform name - 'OpenClaw' | 'ClaudeCode' | 'ClaudeCodeAgentSkill' | 'CustomAgent' | 'Cursor' | 'GeminiCLI' | 'CodexCLI'
Returns:
Step-by-step installation guide string

| Name | Required | Description | Default |
|---|---|---|---|
| platform | No | | OpenClaw |
| skill_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
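Since platform is an enumerated value with a default of OpenClaw, a client can validate arguments before making the call. A sketch of such a pre-check (client-side only; the helper is hypothetical and the server performs its own validation):

```python
# Enumerated platform values from the tool description; validating
# client-side avoids a failed round trip for an obvious typo.
PLATFORMS = {"OpenClaw", "ClaudeCode", "ClaudeCodeAgentSkill",
             "CustomAgent", "Cursor", "GeminiCLI", "CodexCLI"}

def install_guide_args(skill_id, platform="OpenClaw"):
    """Build arguments for get_install_guide, mirroring its default."""
    if platform not in PLATFORMS:
        raise ValueError(f"unknown platform: {platform!r}")
    return {"skill_id": skill_id, "platform": platform}

args = install_guide_args("<skill_id>", "GeminiCLI")
```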
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool '안내합니다' (guides), implying it's informational/read-only, but doesn't clarify if it requires authentication, has rate limits, or what happens if inputs are invalid. The description lacks details on error handling, response format beyond '단계별 설치 가이드 문자열' (step-by-step installation guide string), or performance characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by clear 'Args' and 'Returns' sections. Every sentence earns its place by defining parameters and output without redundancy. The structure is efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, one required), no annotations, and an output schema present (implied by 'Returns'), the description is reasonably complete. It covers purpose, parameters, and return value. However, it lacks behavioral context like error cases or usage guidelines, which would be beneficial since annotations are absent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains that 'skill_id' is a 스킬 ID (skill ID) and 'platform' is a 플랫폼 이름 (platform name) with enumerated values like 'OpenClaw' and 'ClaudeCode'. This compensates well for the schema's lack of descriptions, though it doesn't detail format constraints or examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: '특정 플랫폼에 스킬을 설치하는 방법을 안내합니다' (guides how to install a skill on a specific platform). It specifies the verb '안내합니다' (guides) and resource '스킬' (skill) with context '특정 플랫폼에' (on a specific platform). However, it doesn't explicitly differentiate from sibling tools like 'get_skill' or 'download_skill', which might provide related but different functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'download_skill' (which might download the skill itself) or 'get_skill' (which might retrieve skill details), nor does it specify prerequisites or exclusions. The usage context is implied but not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_skill (Grade B)
Retrieves detailed information about a specific skill.
Args:
skill_id: skill ID (the skill_id from a search_skills result)
Returns:
JSON string with the skill's details

| Name | Required | Description | Default |
|---|---|---|---|
| skill_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
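The description implies a two-step workflow: search first, then fetch details. A sketch of the chaining (the search-result shape below is assumed for illustration, not taken from the server's actual schema):

```python
import json

# Assumed shape of a search_skills result, for illustration only:
# the real response may differ, but the description states that
# get_skill's skill_id comes from a search_skills result.
search_result = json.loads(
    '{"skills": [{"skill_id": "skill_123", "name": "example"}]}'
)

skill_id = search_result["skills"][0]["skill_id"]
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_skill", "arguments": {"skill_id": skill_id}},
}
```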
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states this is a retrieval operation ('조회합니다'), which implies read-only behavior, but doesn't disclose any behavioral traits like authentication needs, rate limits, error conditions, or what happens if the skill_id is invalid. For a tool with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves beyond its basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and well-structured with clear sections for purpose, arguments, and returns. Each sentence earns its place: the first states the purpose, the second explains the parameter with useful context, and the third describes the return value. It's front-loaded with the main purpose and avoids unnecessary verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter read operation), no annotations, but with an output schema (which handles return values), the description is minimally adequate. It covers the purpose, parameter semantics, and return type at a high level, but lacks behavioral details like error handling or performance characteristics. The presence of an output schema means the description doesn't need to explain return values in detail, but other gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaningful context for the single parameter: it explains that skill_id is '스킬 ID' and specifically notes it comes from 'search_skills 결과의 skill_id' (skill_id from search_skills results). This provides practical guidance on how to obtain the parameter value. With 0% schema description coverage and only one parameter, the description effectively compensates by clarifying the parameter's origin and purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('조회합니다' - retrieve/look up) and resource ('특정 스킬의 상세 정보' - detailed information of a specific skill). It distinguishes this from siblings like search_skills (which searches) and download_skill (which downloads), though it doesn't explicitly name these alternatives. The purpose is specific but could be more explicit about differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning that skill_id comes from 'search_skills 결과의 skill_id' (skill_id from search_skills results), suggesting this tool should be used after search_skills to get detailed information for a specific skill. However, it doesn't provide explicit guidance on when to use this versus alternatives like get_skill_schema or get_install_guide, nor does it state any exclusions or prerequisites beyond the implied workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_skill_schema (Grade B)
Retrieves the full schema an agent needs to invoke a skill.
Returns the interface, input/output schemas, permissions, capability tags, and more.
Args:
skill_id: skill ID
Returns:
Skill invocation schema information

| Name | Required | Description | Default |
|---|---|---|---|
| skill_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
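An agent would typically parse the returned schema and check permissions before invoking the skill. A sketch of that consumption step (the field names `interface`, `permissions`, and `capabilities` are assumed from the description, not confirmed against the server's output):

```python
import json

# Assumed result shape, for illustration: the tool description says the
# schema includes interfaces, I/O schemas, permissions, and capability tags.
raw_result = (
    '{"interface": {"inputs": {}, "outputs": {}},'
    ' "permissions": ["network"], "capabilities": ["summarize"]}'
)

schema = json.loads(raw_result)
# Check the declared permissions before deciding to invoke the skill.
needs_network = "network" in schema.get("permissions", [])
```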
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns schema information including interfaces, I/O schemas, permissions, and capability tags, which adds useful context about the return format. However, it doesn't describe error conditions (e.g., invalid skill_id), rate limits, authentication requirements, or whether it's a read-only operation. For a tool with no annotations, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized: it starts with the core purpose, lists return details, and includes Args/Returns sections. Each sentence adds value without redundancy. However, the Args and Returns sections are brief and could be more integrated, slightly reducing efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (context signals indicate true), the description doesn't need to detail return values extensively. It mentions key components like interfaces and permissions, which is helpful. With 1 parameter and no annotations, the description covers the basics but lacks error handling and usage guidelines. For a simple retrieval tool, this is mostly complete but has minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter (skill_id) with 0% description coverage, meaning the schema provides no semantic information. The description adds a brief note in the Args section ('스킬 ID'), which clarifies the parameter's purpose but doesn't specify format (e.g., string pattern), validation rules, or examples. This partially compensates for the schema gap but is minimal, aligning with the baseline 3 for moderate value addition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: '에이전트가 스킬을 호출하기 위한 전체 스키마를 조회합니다' (retrieves the complete schema for invoking a skill). It specifies the verb '조회합니다' (retrieves) and resource '스킬 스키마' (skill schema), and distinguishes from siblings like get_skill (which likely returns skill metadata) or download_skill (which likely downloads skill code). However, it doesn't explicitly differentiate from get_install_guide or other schema-related tools, keeping it at 4.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a valid skill_id), exclusions, or compare it to siblings like get_skill or search_skills. The agent must infer usage from the purpose alone, which is insufficient for optimal tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_categories (Grade B)
Returns the complete list of categories in the AI Skill Store.
Returns:
Category list string

| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states that the tool returns a category list, which implies a read-only operation, but it doesn't disclose any behavioral traits such as rate limits, authentication requirements, or potential side effects. For a tool with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, with the main purpose stated clearly in the first sentence. The second sentence adds minimal but relevant information about the return type. There is no wasted text, making it efficient, though it could be slightly improved by integrating the return information more seamlessly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no annotations, but has an output schema), the description is minimally adequate. It states what the tool does and the return type, but lacks behavioral context and usage guidelines. The output schema likely covers return values, so the description doesn't need to explain those, but overall completeness is limited due to missing behavioral and usage details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and the schema description coverage is 100%, so there are no parameters to document. The description doesn't need to add parameter semantics beyond what the schema provides. A baseline score of 4 is appropriate as the description doesn't contradict the schema and the lack of parameters is straightforward.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'AI Skill Store의 전체 카테고리 목록을 반환합니다' (returns the complete category list of the AI Skill Store). It specifies the verb '반환합니다' (returns) and the resource '카테고리 목록' (category list), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'list_platforms' or 'search_skills', which is why it doesn't score a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, context for usage, or comparisons with sibling tools like 'search_skills' or 'list_platforms'. The absence of usage guidelines leaves the agent without direction on when this tool is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_platforms (Grade B)
Returns the list of platforms supported by the AI Skill Store.
Returns:
Platform list string

| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It only states what the tool does (returns a platform list) without mentioning any behavioral traits such as whether it's read-only, if there are rate limits, authentication needs, or what format the return value takes. For a tool with no annotations, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, with the main purpose stated clearly in the first sentence. The second sentence adds minimal but relevant information about the return type. There's no wasted text, though it could be slightly more structured for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has 0 parameters, 100% schema coverage, and an output schema exists (as indicated by context signals), the description is somewhat complete for its simplicity. However, it lacks details on behavioral aspects and usage context, which are important even for simple tools. The output schema likely covers return values, so the description doesn't need to explain those, but overall completeness is adequate with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and the schema description coverage is 100% (as there are no parameters to describe). The description doesn't need to add parameter semantics, so it meets the baseline for this case. No additional value is required beyond what's already covered by the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'AI Skill Store가 지원하는 플랫폼 목록을 반환합니다' (Returns the list of platforms supported by AI Skill Store). This specifies the verb (returns) and resource (platform list) with context about the AI Skill Store. However, it doesn't explicitly differentiate from sibling tools like list_categories, which is why it doesn't reach a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There are sibling tools like list_categories and search_skills that might be relevant for related queries, but the description doesn't mention any context, exclusions, or alternatives for usage. This leaves the agent with minimal direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_developer (Grade A)
Registers an AI Skill Store developer account.
An API key is issued after email verification (for security, it is not issued immediately).
Args:
username: the username to use (alphanumeric, at least 3 characters, must be unique)
email: email address for verification (required; a verification link will be sent)
Returns:
Registration result message. The API key can be obtained after email verification.

| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | | |
| username | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
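The username constraints (alphanumeric, at least 3 characters) can be pre-checked on the client; uniqueness can only be enforced server-side. A sketch of such a pre-check (the helper and regex are illustrative, not part of this server):

```python
import re

# Mirrors the documented constraint: alphanumeric, >= 3 characters.
# Uniqueness cannot be checked locally; the server enforces it.
USERNAME_RE = re.compile(r"^[A-Za-z0-9]{3,}$")

def registration_args(username, email):
    """Build arguments for register_developer after a local sanity check."""
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("username must be alphanumeric and at least 3 characters")
    return {"username": username, "email": email}

args = registration_args("devuser1", "dev@example.com")
```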
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: email verification is required, API key issuance is delayed for security, and it returns a registration result message. This covers authentication needs and process flow, though it lacks details on rate limits or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the main purpose, followed by key behavioral details and parameter explanations. Every sentence adds value without redundancy, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (account registration with verification), no annotations, and an output schema present, the description is largely complete. It covers the purpose, process, parameters, and return value adequately. A minor gap is the lack of explicit error cases or prerequisites, but the output schema likely handles return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains that 'username' must be alphanumeric, at least 3 characters, and unique, and that 'email' is required for sending a verification link. This compensates fully for the schema's lack of documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: registering a developer account for the AI Skill Store. It specifies the action (register) and resource (developer account), but does not explicitly differentiate itself from sibling tools like 'upload_skill' or 'check_vetting_status', which are related but distinct operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning email verification and API key issuance, suggesting this is for initial developer setup. However, it does not provide explicit guidance on when to use this tool versus alternatives (e.g., no mention of prerequisites like checking vetting status first) or when not to use it (e.g., if already registered).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_skills (A)
Searches for skills in the AI Skill Store.
When capability or platform is specified, agent-optimized search (sorted by popularity) is used.
Args:
query: search keyword (skill name or description); leave empty for the full list
capability: search by capability tag (e.g., web_search, text_summarization, code_generation)
platform: only skills compatible with a specific platform (OpenClaw, ClaudeCode, ClaudeCodeAgentSkill, Cursor, GeminiCLI, CodexCLI)
min_trust: minimum trust level (verified > community > sandbox)
category: category filter (applies only when agent-optimized search is not used)
sort: sort order, only when agent-optimized search is not used (newest | downloads | rating)
limit: number of results (default 20, max 50)
Returns:
Skill list as a string

| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | | newest |
| limit | No | | |
| query | No | | |
| category | No | | |
| platform | No | | |
| min_trust | No | | |
| capability | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
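The parameter rules for search_skills (capability or platform switches on agent-optimized search, category and sort are ignored in that mode, limit is capped at 50) can be mirrored client-side before the call is made. A minimal sketch, using a hypothetical helper that assembles the arguments dict:

```python
def build_search_args(query="", capability=None, platform=None,
                      min_trust=None, category=None, sort="newest", limit=20):
    """Assemble search_skills arguments per the documented rules:
    capability/platform trigger agent-optimized search, in which case
    category and sort are ignored; limit defaults to 20, max 50."""
    args = {"query": query, "limit": min(limit, 50)}
    agent_search = capability is not None or platform is not None
    if capability:
        args["capability"] = capability
    if platform:
        args["platform"] = platform
    if min_trust:
        args["min_trust"] = min_trust
    if not agent_search:
        # category and sort only apply outside agent-optimized search
        if category:
            args["category"] = category
        args["sort"] = sort
    return args
```

Dropping the inapplicable parameters locally keeps the request unambiguous even if the server would silently ignore them.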
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the search behavior (agent-optimized vs regular search), sorting logic, and default/limit values, which is helpful. However, it doesn't mention potential rate limits, authentication requirements, error conditions, or pagination behavior for results beyond the limit. The description adds meaningful context but leaves gaps in operational transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear purpose statement, behavioral notes, and a formatted parameter section. Every sentence adds value, though the Korean text might require translation for some agents. The structure is efficient but could be slightly more front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 parameters with 0% schema coverage and an output schema present (which handles return values), the description does an excellent job explaining parameter semantics and search behavior. It covers the main functionality thoroughly. The only minor gap is lack of explicit guidance on when to choose this tool over sibling alternatives, but overall it's quite complete for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must fully compensate. It provides excellent parameter semantics: it explains what each parameter does, gives examples for capability and platform, clarifies dependencies (category/sort only apply without agent search), and specifies defaults and limits. This goes well beyond what the bare schema provides and makes all 7 parameters understandable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for skills in the AI Skill Store, specifying the resource (skills) and action (search). It distinguishes from siblings like 'get_skill' (retrieve specific skill) or 'list_categories' (list categories only), but doesn't explicitly contrast with all siblings like 'check_vetting_status' or 'download_skill'. The purpose is specific but sibling differentiation is incomplete.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use certain parameters: it explains that specifying capability or platform triggers 'agent-optimized search (popularity sorting)', and notes that category and sort parameters only apply when agent search is not used. However, it doesn't explicitly state when to use this tool versus alternatives like 'get_skill' for retrieving a specific skill or 'list_categories' for browsing categories without searching.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
upload_skill (B)
Uploads a skill file to the AI Skill Store. An API key is required.
The owner is verified automatically on the server via the API key.
Args:
file_path: absolute path of the .skill file to upload
api_key: developer API key
Returns:
Upload result message

| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
| file_path | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
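Because upload_skill expects an absolute path to a .skill file plus a developer API key, a client can fail fast before making the network call. A small pre-flight sketch (the helper and its error messages are illustrative, not part of the server):

```python
import os

def validate_upload_args(file_path: str, api_key: str) -> dict:
    """Pre-flight check for upload_skill arguments: the description
    requires an absolute path to a .skill file and a developer API key."""
    if not os.path.isabs(file_path):
        raise ValueError("file_path must be an absolute path")
    if not file_path.endswith(".skill"):
        raise ValueError("file_path must point to a .skill file")
    if not api_key:
        raise ValueError("api_key is required")
    return {"file_path": file_path, "api_key": api_key}
```

Catching these problems locally avoids burning a round trip on an upload the server would reject anyway.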
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that an API key is needed for authentication and that ownership is verified automatically, adding useful context. However, it lacks details on rate limits, error handling, or what the upload entails (e.g., file size limits, overwrite behavior), leaving gaps in behavioral understanding.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with three sentences and a structured Args/Returns section, making it easy to scan. It avoids unnecessary fluff, though the translation to English could be slightly polished for clarity without adding bulk.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which covers return values) and no annotations, the description provides basic purpose and parameter info but lacks depth. For a mutation tool with authentication needs, it should include more on error cases, success conditions, or integration context to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaning by explaining that 'file_path' is the absolute path to a .skill file and 'api_key' is a developer API key, which clarifies parameter purposes beyond the schema's bare titles. However, it does not detail format constraints or examples for the parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('upload') and resource ('skill file to AI Skill Store'), making the purpose evident. It distinguishes from siblings like 'download_skill' or 'search_skills' by specifying upload functionality. However, it lacks explicit differentiation in terms of scope or constraints compared to similar tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions that an API key is required, which provides some context for prerequisites, but it does not specify when to use this tool versus alternatives like 'register_developer' or 'check_vetting_status'. No explicit guidance on scenarios, exclusions, or comparisons with sibling tools is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
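Generating the well-known file takes only a few lines in most stacks. A hedged sketch in Python, assuming a hypothetical `public/` directory is what your web server serves at the domain root:

```python
import json
from pathlib import Path

# Hypothetical web root; whatever directory your server exposes at
# https://your-domain/ must contain .well-known/glama.json.
webroot = Path("public")
well_known = webroot / ".well-known"
well_known.mkdir(parents=True, exist_ok=True)

manifest = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}
(well_known / "glama.json").write_text(json.dumps(manifest, indent=2))
```

Remember to replace the placeholder email with the one tied to your Glama account, or verification will fail.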
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!