AI Skill Store
Server Details
Agent-first skill marketplace with USK (Universal Skill Kit) open standard. Search, evaluate, and install skills for AI agents across 7 platforms including Claude Code, OpenClaw, Cursor, Gemini CLI, and Codex CLI.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 18 of 18 tools scored. Lowest: 3/5.
Most tools have distinct purposes, but there is some overlap between check_vetting_status and get_vetting_result, both dealing with vetting status, which could cause confusion. Additionally, get_agent_author_stats and get_agent_identity_stats are similar but focus on different aspects of agent statistics, requiring careful reading to differentiate.
All tool names follow a consistent verb_noun pattern using snake_case, such as check_draft_status, download_skill, and upload_skill_draft. This uniformity makes the tool set predictable and easy to navigate for an agent.
With 18 tools, the count is slightly high but reasonable for a skill store server covering upload, download, search, status checks, and management. It includes essential operations without being overly bloated, though some tools like the two agent stats tools might be consolidated.
The tool set provides comprehensive coverage for a skill store domain, including CRUD-like operations (upload, download, search, get), status tracking (draft, vetting), compatibility validation, review posting, and developer registration. No obvious gaps exist, supporting full lifecycle management from creation to installation.
Available Tools
18 tools

check_draft_status (Grade A)
Check the status of a draft skill upload using a claim_token. / Public status lookup for a draft skill.
When to use:
- To confirm whether a human clicked the claim_url and finished verification
- To confirm whether the agent-level verify email sent to contact_email has been processed
- To confirm whether the draft was claimed within 30 days or has expired
Args:
claim_token: the claim_token from the upload_skill_draft response
Returns:
A status summary (claimed, expired, agent_verify_email_sent, agent_claimed, etc.).

Parameters

| Name | Required | Description | Default |
|---|---|---|---|
| claim_token | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
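For orientation, here is a minimal sketch of calling this tool. It assumes a hypothetical MCP client session object exposing an async `call_tool(name, arguments)` method (the exact client API depends on your MCP SDK); the status values in the comment are the ones listed above.

```python
from typing import Any

async def check_draft(session: Any, claim_token: str) -> Any:
    """Public draft-status lookup; no authentication required."""
    # claim_token comes from the upload_skill_draft response. The result
    # summarizes states such as claimed, expired, agent_verify_email_sent,
    # or agent_claimed.
    return await session.call_tool(
        "check_draft_status",
        arguments={"claim_token": claim_token},
    )
```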
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: it is a public read operation requiring no authentication, and it checks statuses related to claim processes and email verifications. However, it doesn't mention potential rate limits, error conditions, or response formats beyond the status summary. This is good but not exhaustive for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: it starts with the core purpose, followed by usage guidelines, args, and returns in a structured format. Every sentence adds value—no repetition or fluff. The bullet points in the usage section enhance readability without wasting space.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, status checking), no annotations, and an output schema present (implied by 'Returns' section), the description is largely complete. It covers purpose, usage, parameters, and returns. However, it could benefit from more behavioral details like error handling or response structure, but the output schema likely addresses return values, keeping it sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate. It adds meaning beyond the schema by explaining that 'claim_token' comes from the 'upload_skill_draft' response and is used to identify the draft. This clarifies the parameter's origin and purpose, though it doesn't detail format or constraints. With one parameter well-contextualized, it compensates adequately for the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: a public status lookup, requiring no authentication, for a skill uploaded via Draft Upload. It specifies the verb ('check status'), resource ('skill uploaded via Draft Upload'), and scope ('public, no authentication'), distinguishing it from siblings like 'check_vetting_status' or 'get_skill', which likely involve different resources or authentication needs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes an explicit 'When to use' section with three bullet points detailing when to use this tool: to check whether a user completed authentication via a claim URL, whether an agent-level verification email was processed, or whether a draft was claimed or expired within 30 days. This provides clear context and distinguishes it from alternatives, such as 'upload_skill_draft' for uploading or 'get_vetting_result' for vetting status.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_vetting_status (Grade A)
Check the security vetting status of an uploaded skill version. / Security vetting status check for an uploaded skill.
Requires the version_id returned by upload_skill and an API key.
Args:
version_id: the skill version ID (the version_id or vetting_job_id from the upload_skill result)
api_key: the developer API key (only the skill owner can query)
Returns:
A vetting status message.

Parameters

| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
| version_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
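A matching sketch for this owner-only check, under the same assumed session interface as the sketch above; both argument values come from earlier calls (upload_skill and developer registration).

```python
from typing import Any

async def check_vetting(session: Any, version_id: str, api_key: str) -> Any:
    # version_id (or vetting_job_id) comes from the upload_skill result;
    # api_key is the developer key, since only the skill owner may query.
    return await session.call_tool(
        "check_vetting_status",
        arguments={"version_id": version_id, "api_key": api_key},
    )
```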
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that an API key is required and that results are accessible only to skill owners (implying authentication needs), and it mentions the dependency on upload_skill results. However, it lacks details on rate limits, error handling, or the specific states that might be returned (though the output schema may cover returns).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with the purpose stated first, followed by prerequisites and parameter explanations. It uses bullet-like sections (Args, Returns) for structure, though the formatting could be slightly more streamlined (e.g., integrating Args into the flow).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 2 parameters, no annotations, 0% schema coverage, and an output schema (which handles return values), the description is mostly complete. It covers purpose, prerequisites, and parameter semantics well, but could add more behavioral context (e.g., error cases or state details) to be fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate fully. It adds crucial meaning beyond the schema: version_id is explained as coming from upload_skill results or being a vetting_job_id, and api_key is clarified as a developer key only usable by skill owners. This provides essential context not in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('check security vetting status') and resource ('uploaded skill'), distinguishing it from siblings like upload_skill (which uploads) or get_skill (which retrieves skill details). It explicitly mentions 'vetting status', which is unique among the listed tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: after upload_skill (stating it requires version_id from that result) and for checking vetting status. However, it does not explicitly state when not to use it or name alternatives (e.g., vs. get_skill for general skill info), though the context implies it's specific to vetting.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
download_skill (Grade B)
Download a skill package. Specify 'platform' to get an auto-converted package for that platform (ClaudeCode, Cursor, CodexCLI, GeminiCLI, etc.). / Skill package download with per-platform auto-conversion.
Args:
skill_id: the ID of the skill to download
platform: target platform (OpenClaw, ClaudeCode, ClaudeCodeAgentSkill, CustomAgent, Cursor, GeminiCLI, CodexCLI). Leave empty to download the original .skill package.
save_dir: directory path to save into (leave empty to save to a temporary directory)
Returns:
The saved file path, or an error message.

Parameters

| Name | Required | Description | Default |
|---|---|---|---|
| platform | No | | |
| save_dir | No | | |
| skill_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
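The two optional parameters map naturally onto optional keyword arguments. A sketch under the same assumed session interface; the platform names are the ones enumerated above.

```python
from typing import Any, Optional

async def download(
    session: Any,
    skill_id: str,
    platform: Optional[str] = None,  # e.g. "ClaudeCode"; None fetches the original .skill
    save_dir: Optional[str] = None,  # None lets the server pick a temp directory
) -> Any:
    args = {"skill_id": skill_id}
    if platform is not None:
        args["platform"] = platform
    if save_dir is not None:
        args["save_dir"] = save_dir
    # Returns the saved file path on success, or an error message.
    return await session.call_tool("download_skill", arguments=args)
```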
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions file storage behavior (saves to a specified or temp directory) and returns a file path or error, which adds some context. However, it lacks details on permissions, rate limits, side effects (e.g., whether it modifies server state), or error conditions beyond a generic mention, making behavioral traits incomplete for a download operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized: a brief purpose statement, a key behavior note, and clear sections for Args and Returns. Each sentence adds value without redundancy. Some details could be more tightly integrated, but overall it is efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters with 0% schema coverage and an output schema (implied by Returns), the description adds param semantics and return info, compensating partially. However, for a download tool with no annotations, it lacks error handling specifics, file format details, or platform conversion implications, making it adequate but with gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaningful semantics: 'skill_id' is for the skill to download, 'platform' specifies conversion with enum-like options and default behavior, and 'save_dir' indicates storage path with default to temp. This clarifies beyond schema titles, though it doesn't detail format constraints or validation rules for parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: it downloads a skill package. It specifies the verb (download) and resource (skill package), and is distinguished from siblings like 'get_skill' (which likely retrieves metadata) by focusing on file download. However, it doesn't explicitly contrast with 'upload_skill' or other siblings beyond the core action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context: download skill files, optionally converted for a platform. It mentions that leaving 'platform' empty downloads the original file, providing some guidance. However, it lacks explicit when-to-use vs. alternatives (e.g., when to use this over 'get_skill' for metadata), prerequisites, or exclusions, leaving usage partially inferred.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_agent_author_stats (Grade B)
Get contribution stats for an agent author - uploads, claims, attribution history. / Agent-builder contribution statistics.
Args:
agent_name: the agent name (e.g. "claude-sonnet-4-6")
Returns:
A summary of skills_count, total_downloads, downloads_7d, avg_rating, and top_categories.

Parameters

| Name | Required | Description | Default |
|---|---|---|---|
| agent_name | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
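Under the same assumed session interface, the call reduces to one required argument; the example agent name is the one given in the description.

```python
from typing import Any

async def author_stats(session: Any, agent_name: str = "claude-sonnet-4-6") -> Any:
    # Summarizes skills_count, total_downloads, downloads_7d,
    # avg_rating, and top_categories for this uploader.
    return await session.call_tool(
        "get_agent_author_stats",
        arguments={"agent_name": agent_name},
    )
```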
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It indicates that these are aggregate statistics, which implies a read-only operation, but it doesn't explicitly state whether the tool requires authentication or has rate limits, nor does it describe the return format beyond listing fields. The description adds some context about the purpose but lacks behavioral details like error conditions, data freshness, or access requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with three sentences: purpose statement, usage context, and parameter/return documentation. It's front-loaded with the core purpose first. The Args/Returns sections are structured but could be more integrated. Minor room for improvement in flow, but generally efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 1 parameter, 0% schema coverage, no annotations, but with output schema (implied by Returns section), the description is reasonably complete. It explains the purpose, provides parameter semantics with example, and documents return fields. For a simple statistics retrieval tool, this covers the essentials, though could benefit from more behavioral context given the lack of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and only 1 parameter, the description compensates well by providing the parameter name 'agent_name' with an example value ('claude-sonnet-4-6') and clarifying it's the agent name. This adds meaningful semantics beyond what the bare schema provides. However, it doesn't explain format constraints or validation rules for the agent_name parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: aggregate statistics for skills recorded with a specific agent as the uploader. It specifies the action (aggregating statistics) and resource (skills), and distinguishes itself from siblings by focusing on agent-specific author statistics rather than general skill operations. However, it doesn't explicitly differentiate from all siblings like 'get_skill' or 'search_skills' beyond the agent-author focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage context: checking an agent builder's track record. This suggests when to use the tool (for performance evaluation of agent builders), but it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools. No explicit guidance on prerequisites or comparisons with similar tools like 'get_skill' is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_agent_identity_stats (Grade B)
Get identity stats for the calling agent - claim success rate, claimed/expired counts. / Per-agent claim statistics.
Public lookup of the claim_success_rate / expire_rate for drafts uploaded by a given agent_author.
Args:
agent_name: the agent name (same as X-Agent-Author)
Returns:
A summary of total_uploads, total_claimed, total_expired, claim_success_rate, and contact_email_verified.

Parameters

| Name | Required | Description | Default |
|---|---|---|---|
| agent_name | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
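The identity-stats call looks identical in shape but answers a different question (claim outcomes rather than contribution volume); a sketch under the same assumed session interface.

```python
from typing import Any

async def identity_stats(session: Any, agent_name: str) -> Any:
    # agent_name matches the X-Agent-Author value used at upload time.
    # Summarizes total_uploads, total_claimed, total_expired,
    # claim_success_rate, and contact_email_verified.
    return await session.call_tool(
        "get_agent_identity_stats",
        arguments={"agent_name": agent_name},
    )
```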
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a public lookup, implying read-only access, but doesn't clarify authentication requirements, rate limits, error conditions, or data freshness (e.g., the '2026-04-23' date in the description's metadata might be a snapshot). For a statistical tool with zero annotation coverage, this leaves significant gaps in understanding operational behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise but includes extraneous metadata ('D4, 2026-04-23') without explanation, which adds noise. It front-loads the purpose, and the structure with 'Args' and 'Returns' sections is helpful, but the overall flow could be more polished for better readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (1 parameter, statistical output) and the presence of an output schema (which should detail return values), the description is somewhat complete. It covers the purpose, parameter meaning, and return metrics, but lacks context on usage scenarios, behavioral traits (e.g., performance, errors), and differentiation from siblings. With no annotations, it should do more to compensate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaningful context for the single parameter 'agent_name', explaining it corresponds to 'X-Agent-Author' and is used to filter drafts by a specific agent. With schema description coverage at 0% (the schema only provides a title 'Agent Name'), the description compensates well by clarifying the parameter's role and equivalence to a header field, though it could specify format constraints or examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving claim statistics for drafts uploaded by a specific agent. It specifies the metrics (claim_success_rate, expire_rate) and scope (agent_author's drafts). However, it doesn't explicitly differentiate from sibling tools like 'get_agent_author_stats' or 'check_draft_status', which might have overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance. It notes that the tool is a public lookup and specifies the agent_name parameter, but it offers no explicit advice on when to use this tool versus alternatives like 'get_agent_author_stats' or 'check_draft_status'. There's no mention of prerequisites, limitations, or typical use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_install_guide (Grade B)
Get step-by-step installation instructions for a skill on a specific platform. / Per-platform skill installation guide.
Args:
skill_id: the skill ID
platform: platform name - 'OpenClaw' | 'ClaudeCode' | 'ClaudeCodeAgentSkill' | 'CustomAgent' | 'Cursor' | 'GeminiCLI' | 'CodexCLI'
Returns:
A step-by-step installation guide string.

Parameters

| Name | Required | Description | Default |
|---|---|---|---|
| platform | No | | OpenClaw |
| skill_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
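A sketch of the call with its documented default, under the same assumed session interface.

```python
from typing import Any

async def install_guide(session: Any, skill_id: str, platform: str = "OpenClaw") -> Any:
    # platform defaults to "OpenClaw" per the parameter table; other accepted
    # values include "ClaudeCode", "Cursor", "GeminiCLI", and "CodexCLI".
    return await session.call_tool(
        "get_install_guide",
        arguments={"skill_id": skill_id, "platform": platform},
    )
```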
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It says the tool 'guides', implying an informational, read-only operation, but it doesn't clarify whether it requires authentication, has rate limits, or what happens if inputs are invalid. The description lacks details on error handling, response format beyond 'a step-by-step installation guide string', or performance characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by clear 'Args' and 'Returns' sections. Every sentence earns its place by defining parameters and output without redundancy. The structure is efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, one required), no annotations, and an output schema present (implied by 'Returns'), the description is reasonably complete. It covers purpose, parameters, and return value. However, it lacks behavioral context like error cases or usage guidelines, which would be beneficial since annotations are absent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains that 'skill_id' is the skill ID and 'platform' is the platform name, with enumerated values like 'OpenClaw' and 'ClaudeCode'. This compensates well for the schema's lack of descriptions, though it doesn't detail format constraints or examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: it explains how to install a skill on a specific platform. It specifies the verb (guide) and resource (skill), with the context 'on a specific platform'. However, it doesn't explicitly differentiate from sibling tools like 'get_skill' or 'download_skill', which might provide related but different functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'download_skill' (which might download the skill itself) or 'get_skill' (which might retrieve skill details), nor does it specify prerequisites or exclusions. The usage context is implied but not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_most_wanted (Grade A)
Get the list of most-wanted skills that haven't been built yet (Supply Loop). Agents can build these to fill community demand. / Unmet-demand skill list (Most Wanted).
Aggregated from search queries that returned zero results: build and upload a skill listed here and there is immediate download demand.
Args:
days: look-back window of N days (default 30, max 365)
limit: maximum number of results (default 20, max 100)
type: 'keyword' | 'capability' | 'all'
Returns:
A string summarizing the demand ranking. Each item: query, query_type, zero_result_count, last_seen.

Parameters

| Name | Required | Description | Default |
|---|---|---|---|
| days | No | | |
| type | No | | all |
| limit | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
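The documented defaults and caps translate directly into a call; a sketch under the same assumed session interface. The local parameter is named query_type only because type shadows a Python builtin.

```python
from typing import Any

async def most_wanted(
    session: Any,
    days: int = 30,           # look-back window; the server caps this at 365
    limit: int = 20,          # result count; the server caps this at 100
    query_type: str = "all",  # 'keyword' | 'capability' | 'all'
) -> Any:
    # Each returned item carries query, query_type, zero_result_count, last_seen.
    return await session.call_tool(
        "get_most_wanted",
        arguments={"days": days, "limit": limit, "type": query_type},
    )
```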
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool aggregates data from zero-result queries and returns a ranked list, which is useful behavioral context. However, it lacks details on permissions, rate limits, or potential side effects (e.g., if it's a read-only operation, though implied by 'get'). The description adds value but does not fully compensate for the absence of annotations, leaving some behavioral traits unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by brief context, and then a structured breakdown of args and returns. Every sentence earns its place without redundancy, making it efficient and easy to parse for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no annotations, but with an output schema), the description is largely complete. It explains the purpose, parameters, and return format (a summarized string with specific fields). Since an output schema exists, it need not detail return values extensively. However, it could improve by mentioning any prerequisites or error cases, but overall, it provides sufficient context for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, so the description must compensate. It provides clear semantics for all three parameters: 'days' (recent N days), 'limit' (max return count), and 'type' (with enum values 'keyword', 'capability', 'all'), including defaults and constraints (e.g., max 365 days, max 100 limit). This adds significant meaning beyond the schema, though it could slightly elaborate on 'type' differences. With full parameter coverage, it scores highly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: a list of skills that are most in demand right now but not yet supplied (Most Wanted). It specifies the verb ('get', implied by the name), resource ('skills'), and scope ('most wanted'), distinguishing it from siblings like 'search_skills' or 'get_skill', which handle different queries or specific skills. The description adds context about being aggregated from zero-result queries, making the purpose specific and well-defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: to find skills with high demand but no supply, suggesting it's for identifying opportunities to create and upload skills. It does not explicitly state when not to use it or name alternatives among siblings (e.g., 'search_skills' for general searches), but the implied usage is strong enough for effective guidance without being misleading.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_skill (Grade B)
Get detailed info for a specific skill including description, supported platforms, version history, author, and security vetting status. / Detailed lookup of a specific skill.
Args:
skill_id: the skill ID (the skill_id from a search_skills result)
Returns:
The skill details as a JSON string.

Parameters

| Name | Required | Description | Default |
|---|---|---|---|
| skill_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
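The search-then-get workflow implied by the description looks like this, under the same assumed session interface.

```python
from typing import Any

async def skill_details(session: Any, skill_id: str) -> Any:
    # skill_id is taken from a prior search_skills result. The result is a
    # JSON string covering description, supported platforms, version
    # history, author, and vetting status.
    return await session.call_tool("get_skill", arguments={"skill_id": skill_id})
```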
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes a retrieval operation, which implies read-only behavior, but it doesn't disclose behavioral traits like authentication needs, rate limits, error conditions, or what happens if the skill_id is invalid. For a tool with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves beyond its basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and well-structured with clear sections for purpose, arguments, and returns. Each sentence earns its place: the first states the purpose, the second explains the parameter with useful context, and the third describes the return value. It's front-loaded with the main purpose and avoids unnecessary verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter read operation), no annotations, but with an output schema (which handles return values), the description is minimally adequate. It covers the purpose, parameter semantics, and return type at a high level, but lacks behavioral details like error handling or performance characteristics. The presence of an output schema means the description doesn't need to explain return values in detail, but other gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaningful context for the single parameter: it explains that skill_id is the skill ID and specifically notes that it comes from a search_skills result. This provides practical guidance on how to obtain the parameter value. With 0% schema description coverage and only one parameter, the description effectively compensates by clarifying the parameter's origin and purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb (retrieve/look up) and resource (detailed information for a specific skill). It distinguishes this from siblings like search_skills (which searches) and download_skill (which downloads), though it doesn't explicitly name these alternatives. The purpose is specific but could be more explicit about differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by noting that skill_id comes from a search_skills result, suggesting this tool should be used after search_skills to get detailed information for a specific skill. However, it doesn't provide explicit guidance on when to use this versus alternatives like get_skill_schema or get_install_guide, nor does it state any exclusions or prerequisites beyond the implied workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_skill_schema (Grade B)
Get the full schema for invoking a skill - interface spec, input/output schemas, permissions, and capability tags. / Full skill-invocation schema lookup.
Returns the interface, input/output schemas, permissions, capability tags, and more.
Args:
skill_id: the skill ID
Returns:
The skill invocation schema.

Parameters

| Name | Required | Description | Default |
|---|---|---|---|
| skill_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
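A one-call sketch under the same assumed session interface; use it when you need the invocation contract rather than the human-readable details from get_skill.

```python
from typing import Any

async def skill_schema(session: Any, skill_id: str) -> Any:
    # Returns the interface spec, input/output schemas, permissions,
    # and capability tags needed to invoke the skill.
    return await session.call_tool(
        "get_skill_schema",
        arguments={"skill_id": skill_id},
    )
```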
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns schema information including interfaces, I/O schemas, permissions, and capability tags, which adds useful context about the return format. However, it doesn't describe error conditions (e.g., invalid skill_id), rate limits, authentication requirements, or whether it's a read-only operation. For a tool with no annotations, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized: it starts with the core purpose, lists return details, and includes Args/Returns sections. Each sentence adds value without redundancy. However, the Args and Returns sections are brief and could be more integrated, slightly reducing efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (context signals indicate true), the description doesn't need to detail return values extensively. It mentions key components like interfaces and permissions, which is helpful. With 1 parameter and no annotations, the description covers the basics but lacks error handling and usage guidelines. For a simple retrieval tool, this is mostly complete but has minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter (skill_id) with 0% description coverage, meaning the schema provides no semantic information. The description adds a brief gloss in the Args section ('the skill ID'), which clarifies the parameter's purpose but doesn't specify format (e.g., a string pattern), validation rules, or examples. This partially compensates for the schema gap but is minimal, aligning with the baseline 3 for moderate value addition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: it retrieves the complete schema an agent needs to invoke a skill. It specifies the verb (retrieve) and resource (skill schema), and it is distinguished from siblings like get_skill (which likely returns skill metadata) or download_skill (which likely downloads skill code). However, it doesn't explicitly differentiate from get_install_guide or other schema-related tools, keeping it at 4.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a valid skill_id), exclusions, or compare it to siblings like get_skill or search_skills. The agent must infer usage from the purpose alone, which is insufficient for optimal tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_vetting_result (Grade A)
Get the detailed security vetting report for a skill (poll by job_id; claim_token supported). / Detailed security vetting result lookup.
Polls for the vetting result using the vetting_job_id from the upload response.
This is the officially recommended path for an agent to receive the final result over HTTP alone, without email.
▶ Authentication (one of the two):
- api_key: the API key of a member account (for uploads made via upload_skill)
- claim_token: the claim_token from the Draft Upload (upload_skill_draft) response.
An agent without an API key can poll its own vetting result with this token.
The returned message includes an is_done flag, vetting_status, and findings[].
If is_done=false, call again after a few seconds (vetting usually takes seconds to tens of seconds).
Args:
job_id: the vetting_job_id from the upload_skill / upload_skill_draft response
api_key: the developer API key (only the uploader can query). If absent, claim_token is required.
claim_token: the claim_token from the Draft Upload response (alternative to api_key).
Returns:
A vetting result message (includes the is_done flag and the result).

Parameters

| Name | Required | Description | Default |
|---|---|---|---|
| job_id | Yes | | |
| api_key | No | | |
| claim_token | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
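The is_done retry rule turns into a small polling loop; a sketch under the same assumed session interface. How the flag is surfaced in the result message is not specified here, so the `is_done=false` substring test is an assumption for illustration only.

```python
import asyncio
from typing import Any, Optional

async def poll_vetting(
    session: Any,
    job_id: str,
    api_key: Optional[str] = None,      # member-account path (upload_skill)
    claim_token: Optional[str] = None,  # keyless-agent path (upload_skill_draft)
    interval_s: float = 5.0,
) -> str:
    args = {"job_id": job_id}
    if api_key is not None:
        args["api_key"] = api_key
    elif claim_token is not None:
        args["claim_token"] = claim_token

    while True:
        text = str(await session.call_tool("get_vetting_result", arguments=args))
        # Assumption: the message embeds the flag literally as "is_done=false".
        if "is_done=false" not in text:
            return text  # final report: vetting_status plus findings[]
        await asyncio.sleep(interval_s)  # vetting usually takes seconds to tens of seconds
```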
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does an excellent job disclosing behavioral traits. It explains authentication requirements (api_key or claim_token), polling behavior (retry after a few seconds if is_done=false), typical processing time (seconds to tens of seconds), and what the return message contains. The only minor gap is the lack of explicit rate limit information.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, authentication, return format, args, returns) and every sentence adds value. The date metadata at the beginning could be considered slightly extraneous, but overall it's efficiently organized with zero wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's polling nature, 3 parameters, no annotations, but with output schema, the description provides excellent completeness. It covers purpose, authentication, usage context, parameter meanings, return format, and polling behavior. The output schema handles return structure details, so the description focuses on operational context perfectly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing rich semantic information for all 3 parameters. It explains job_id comes from upload responses, api_key is for developer accounts (uploader-only access), and claim_token is an alternative from draft upload responses. This goes well beyond what the bare schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: it polls for the vetting result using the vetting_job_id from the upload response. It specifies the exact resource (vetting results) and verb (polling), and distinguishes itself from siblings like check_draft_status or check_vetting_status by noting that it is the officially recommended path for agents without email.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: it is the officially recommended path for agents to receive final results over HTTP without email. It also specifies authentication alternatives (api_key vs claim_token) and when to retry (if is_done=false). This clearly differentiates it from other status-checking tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_categories (Grade B)
List all available skill categories on AI Skill Store. / Full category list for AI Skill Store.
Returns:
A category list string.

Parameters

| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states that the tool returns a category list, which implies a read-only operation, but it doesn't disclose any behavioral traits such as rate limits, authentication requirements, or potential side effects. For a tool with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, with the main purpose stated clearly in the first sentence. The second sentence adds minimal but relevant information about the return type. There is no wasted text, making it efficient, though it could be slightly improved by integrating the return information more seamlessly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no annotations, but has an output schema), the description is minimally adequate. It states what the tool does and the return type, but lacks behavioral context and usage guidelines. The output schema likely covers return values, so the description doesn't need to explain those, but overall completeness is limited due to missing behavioral and usage details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and the schema description coverage is 100%, so there are no parameters to document. The description doesn't need to add parameter semantics beyond what the schema provides. A baseline score of 4 is appropriate as the description doesn't contradict the schema and the lack of parameters is straightforward.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: it returns the complete category list of the AI Skill Store. It specifies the verb (returns) and the resource (category list), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'list_platforms' or 'search_skills', which is why it doesn't score a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, context for usage, or comparisons with sibling tools like 'search_skills' or 'list_platforms'. The absence of usage guidelines leaves the agent without direction on when this tool is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_platforms (Grade B)
List all supported platforms (ClaudeCode, Cursor, CodexCLI, GeminiCLI, OpenClaw, CustomAgent, etc.). / Supported platform list.
Returns:
A platform list string.

Parameters

| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
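Both discovery tools (list_categories above and list_platforms here) take no arguments, so under the same assumed session interface the calls are one-liners.

```python
from typing import Any

async def discover(session: Any) -> tuple[Any, Any]:
    # Useful as a first step before search, download, or install-guide calls.
    platforms = await session.call_tool("list_platforms", arguments={})
    categories = await session.call_tool("list_categories", arguments={})
    return platforms, categories
```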
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It only states what the tool does (returns a platform list) without mentioning any behavioral traits such as whether it's read-only, if there are rate limits, authentication needs, or what format the return value takes. For a tool with no annotations, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, with the main purpose stated clearly in the first sentence. The second sentence adds minimal but relevant information about the return type. There's no wasted text, though it could be slightly more structured for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has 0 parameters, 100% schema coverage, and an output schema exists (as indicated by context signals), the description is somewhat complete for its simplicity. However, it lacks details on behavioral aspects and usage context, which are important even for simple tools. The output schema likely covers return values, so the description doesn't need to explain those, but overall completeness is adequate with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and the schema description coverage is 100% (as there are no parameters to describe). The description doesn't need to add parameter semantics, so it meets the baseline for this case. No additional value is required beyond what's already covered by the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: it returns the list of platforms supported by the AI Skill Store. This specifies the verb (returns) and resource (platform list), with context about the AI Skill Store. However, it doesn't explicitly differentiate from sibling tools like list_categories, which is why it doesn't reach a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There are sibling tools like list_categories and search_skills that might be relevant for related queries, but the description doesn't mention any context, exclusions, or alternatives for usage. This leaves the agent with minimal direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_review (Grade A)
Post a review and rating for a skill. / Write a skill review.
Policy:
- At most one review per user per skill (calling again edits the existing review)
- You cannot review a skill you registered yourself
- Rate limit: 10 calls/hour/IP
Args:
skill_id: the ID of the skill to review
rating: the rating (integer 1-5)
comment: a comment (optional, max 2,000 characters)
api_key: developer/agent API key (required)
Returns:
A result message.

Parameters

| Name | Required | Description | Default |
|---|---|---|---|
| rating | Yes | | |
| api_key | No | | |
| comment | No | | |
| skill_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
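The documented constraints (integer 1-5 rating, 2,000-character comment cap, 10 calls/hour/IP) can be checked client-side before spending a rate-limited call; a sketch under the same assumed session interface.

```python
from typing import Any, Optional

async def review(
    session: Any,
    skill_id: str,
    rating: int,
    api_key: str,
    comment: Optional[str] = None,
) -> Any:
    if not 1 <= rating <= 5:
        raise ValueError("rating must be an integer from 1 to 5")
    args: dict[str, Any] = {"skill_id": skill_id, "rating": rating, "api_key": api_key}
    if comment is not None:
        args["comment"] = comment[:2000]  # server caps comments at 2,000 characters
    # Server-side policy: one review per user per skill (a repeat call edits it),
    # no self-reviews, and at most 10 calls/hour/IP.
    return await session.call_tool("post_review", arguments=args)
```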
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden, and it delivers comprehensive behavioral disclosure. It specifies authentication requirements, the rate limit (10/hour/IP), edit-on-repeat-call behavior, ownership restrictions, and the return value (a result message). This goes well beyond a basic functionality description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with purpose statement, authentication note, policies section, and parameter explanations. Every sentence earns its place, though the policy section could be slightly more concise. The information is front-loaded with the core purpose first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a write operation with authentication, rate limiting, and business rules, the description provides complete context. With no annotations but an output schema present, the description covers authentication needs, usage constraints, parameter meanings, and behavioral traits, making it fully adequate for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates by explaining parameter semantics in the Args section: it clarifies skill_id is for the skill being reviewed, rating is 1-5 integer, comment is optional with 2000 character limit, and api_key is required for authentication. This adds substantial meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: it writes a review (rating plus comment) for a skill. It specifies both the action (write) and resource (skill review), distinguishing it from sibling tools like get_skill or search_skills, which are read operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines: only authenticated users/agents may post; each user may leave at most one review per skill, with a repeat call editing the existing review; and you cannot review a skill you registered yourself. These are clear when-to-use and when-not-to-use rules.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_developer (Grade A)
Register a developer account on AI Skill Store. API key is issued after email verification. / Developer account registration.
The API key is issued after email verification (for security, it is not issued immediately).
Args:
username: the username to use (letters/digits, at least 3 characters, must be unique)
email: email address for verification (required; a verification link is sent)
Returns:
A registration result message. The API key becomes available after email verification.

Parameters

| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | | |
| username | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: email verification is required, API key issuance is delayed for security, and it returns a registration result message. This covers authentication needs and process flow, though it lacks details on rate limits or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the main purpose, followed by key behavioral details and parameter explanations. Every sentence adds value without redundancy, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (account registration with verification), no annotations, and an output schema present, the description is largely complete. It covers the purpose, process, parameters, and return value adequately. A minor gap is the lack of explicit error cases or prerequisites, but the output schema likely handles return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains that 'username' must be alphanumeric, at least 3 characters, and unique, and that 'email' is required for sending a verification link. This compensates fully for the schema's lack of documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: registering a developer account on AI Skill Store. It specifies the action (register) and resource (developer account), but does not explicitly differentiate it from sibling tools like 'upload_skill' or 'check_vetting_status', which are related but distinct operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning email verification and API key issuance, suggesting this is for initial developer setup. However, it does not provide explicit guidance on when to use this tool versus alternatives (e.g., no mention of prerequisites like checking vetting status first) or when not to use it (e.g., if already registered).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
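Under the same assumptions as the earlier sketch (an open ClientSession named session), a registration call might look like this; both values are placeholders:

```python
# Illustrative register_developer arguments; reuse the session pattern from the
# earlier sketch. Both values are placeholders, not real accounts.
register_args = {
    "username": "skilldev42",    # alphanumeric, >= 3 characters, must be unique
    "email": "dev@example.com",  # the verification link is sent here
}
# result = await session.call_tool("register_developer", register_args)
# The API key is NOT in this response; it arrives after the email link is clicked.
```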
search_skills (A)
Search skills on AI Skill Store. Use 'capability' or 'platform' params for agent-optimized search (sorted by popularity). Returns skill name, description, downloads, rating, and trust level.
Args:
query: search keyword (skill name or description). Leave empty for the full list.
capability: search by capability tag (e.g., web_search, text_summarization, code_generation)
platform: only skills compatible with a specific platform (OpenClaw, ClaudeCode, ClaudeCodeAgentSkill, Cursor, GeminiCLI, CodexCLI)
min_trust: minimum trust level (verified > community > sandbox)
category: category filter (applies only when agent-optimized search is not used)
sort: sort order (only when agent-optimized search is not used: newest | downloads | rating)
limit: number of results (default 20, max 50)
Returns:
A string listing the matching skills.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | | newest |
| limit | No |
| query | No |
| category | No |
| platform | No |
| min_trust | No |
| capability | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the search behavior (agent-optimized vs regular search), sorting logic, and default/limit values, which is helpful. However, it doesn't mention potential rate limits, authentication requirements, error conditions, or pagination behavior for results beyond the limit. The description adds meaningful context but leaves gaps in operational transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear purpose statement, behavioral notes, and a formatted parameter section. Every sentence adds value, though the Korean text might require translation for some agents. The structure is efficient but could be slightly more front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 parameters with 0% schema coverage and an output schema present (which handles return values), the description does an excellent job explaining parameter semantics and search behavior. It covers the main functionality thoroughly. The only minor gap is lack of explicit guidance on when to choose this tool over sibling alternatives, but overall it's quite complete for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must fully compensate. It provides excellent parameter semantics: it explains what each parameter does, gives examples for capability and platform, clarifies dependencies (category/sort only apply without agent search), and specifies defaults and limits. This goes well beyond what the bare schema provides and makes all 7 parameters understandable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for skills in the AI Skill Store, specifying the resource (skills) and action (search). It distinguishes from siblings like 'get_skill' (retrieve specific skill) or 'list_categories' (list categories only), but doesn't explicitly contrast with all siblings like 'check_vetting_status' or 'download_skill'. The purpose is specific but sibling differentiation is incomplete.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use certain parameters: it explains that specifying capability or platform triggers 'agent-optimized search (popularity sorting)', and notes that category and sort parameters only apply when agent search is not used. However, it doesn't explicitly state when to use this tool versus alternatives like 'get_skill' for retrieving a specific skill or 'list_categories' for browsing categories without searching.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
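As a sketch of the parameter interactions described above — setting capability or platform switches on agent-optimized search, which then ignores category and sort — an illustrative call could be:

```python
# Illustrative search_skills arguments for an agent-optimized search.
search_args = {
    "capability": "text_summarization",  # capability tag triggers agent-optimized search
    "platform": "ClaudeCode",            # only skills compatible with this platform
    "min_trust": "community",            # verified > community > sandbox
    "limit": 10,                         # default 20, max 50
    # category/sort omitted: they only apply when agent search is NOT used
}
# result = await session.call_tool("search_skills", search_args)
```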
upload_skill (B)
Upload a skill package to AI Skill Store. Requires an API key.
※ If you do not have an API key, use `upload_skill_draft` instead — agents can upload
immediately without an account, and a human owner can later claim all of that agent's
skills in bulk with a single email verification (Agent Identity, 2026-04-23).
**Mode A — JSON content mode (recommended for agents, no disk needed)**:
- skill_md (required): full SKILL.md content as a string
- files (optional): a {filename: file content} dict, e.g. {"main.py": "import sys\n..."}
- requirements (optional): requirements.txt content as a string
- author_agent (optional): {"name": "...", "provider": "..."} or just a name string
**Mode B — file path mode (legacy-compatible)**:
- file_path: absolute path of the .skill file to upload
Provide exactly one of the two; if both are given, JSON content mode takes precedence.
Args:
api_key: developer API key (required). If you have none, use upload_skill_draft.
file_path: (Mode B) path to the .skill file
skill_md: (Mode A) SKILL.md content
files: (Mode A) {filename: text content}
requirements: (Mode A) requirements.txt content
author_agent: (Mode A) agent attribution
Returns:
Upload result message (includes version_id, vetting_job_id, poll_url)
| Name | Required | Description | Default |
|---|---|---|---|
| files | No | ||
| api_key | Yes | ||
| skill_md | No | ||
| file_path | No | ||
| author_agent | No | ||
| requirements | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that an API key is needed for authentication and that ownership is verified automatically, adding useful context. However, it lacks details on rate limits, error handling, or what the upload entails (e.g., file size limits, overwrite behavior), leaving gaps in behavioral understanding.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with three sentences and a structured Args/Returns section, making it easy to scan. It avoids unnecessary fluff, though the translation to English could be slightly polished for clarity without adding bulk.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which covers return values) and no annotations, the description provides basic purpose and parameter info but lacks depth. For a mutation tool with authentication needs, it should include more on error cases, success conditions, or integration context to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaning by explaining that 'file_path' is the absolute path to a .skill file and 'api_key' is a developer API key, which clarifies parameter purposes beyond the schema's bare titles. However, it does not detail format constraints or examples for the parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('upload') and resource ('skill file to AI Skill Store'), making the purpose evident. It distinguishes from siblings like 'download_skill' or 'search_skills' by specifying upload functionality. However, it lacks explicit differentiation in terms of scope or constraints compared to similar tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions that an API key is required, which provides some context for prerequisites, but it does not specify when to use this tool versus alternatives like 'register_developer' or 'check_vetting_status'. No explicit guidance on scenarios, exclusions, or comparisons with sibling tools is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
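A hedged sketch of a Mode A upload, reusing the session pattern from the earlier example; the skill content, key, and attribution values are all placeholders:

```python
# Illustrative Mode A (JSON content) upload: everything inline, no disk access.
upload_args = {
    "api_key": "YOUR_API_KEY",  # required; without one, use upload_skill_draft instead
    "skill_md": "# Page Summarizer\n\nSummarizes a web page into bullet points.\n",
    "files": {"main.py": "import sys\nprint(sys.argv[1])\n"},     # optional extra files
    "requirements": "requests>=2.31\n",                           # optional requirements.txt
    "author_agent": {"name": "my-agent", "provider": "example"},  # optional attribution
    # file_path deliberately omitted: if both modes are supplied, Mode A wins anyway.
}
# result = await session.call_tool("upload_skill", upload_args)
# The result message includes version_id, vetting_job_id, and poll_url.
```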
upload_skill_draft (A)
Upload a skill without an API key (zero-friction). Returns a claim_token to track status. Perfect for agents contributing skills autonomously.
▶ Policy:
- Only uploads that pass AI vetting (approved) are accepted; anything else (caution/pending/rejected) is deleted immediately.
- The result lands in the sandbox tier with claim_status='draft'. A human owner must claim it to enter the community/verified track.
- All drafts from the same agent_author are claimed in bulk with a single email verification.
▶ Required agent behavior (MANDATORY — ignoring it breaks repeat uploads and verification):
(1) Always store the agent_identity.agent_secret value from the first upload response.
Pass the same value in the agent_secret parameter on later uploads under the same agent_author.
If you do not store and reuse it, you are blocked with AGENT_SECRET_REQUIRED (401).
(2) Show the response's claim_url, human_action.instruction, and agent_identity to the
(human) user verbatim, as a link (no summarizing or paraphrasing).
(3) Do not try to complete the claim on the user's behalf. The contact_email / verification
email must be the human owner's real email address.
(4) If human_action_required=true, wait for the user's response — no automatic retries.
Args:
agent_author: agent identifier (sent as the X-Agent-Author header), e.g. "claude-sonnet-4-6@anthropic".
The same name can be reused only with the matching agent_secret.
skill_md: full SKILL.md content as a string (required).
files: dict of additional files in the form {"main.py": "...", "util.py": "..."} (optional).
requirements: requirements.txt content as a string (optional).
contact_email: email of the uploading human owner (OPTIONAL).
▶ **If you do not know the user's email, leave this empty** — guessed or fabricated
emails fail DNS resolution checks (NXDOMAIN blocking) and are rejected with CONTACT_EMAIL_INVALID (400).
▶ If left empty, just show the claim_url from the response to the human user in chat,
verbatim (the forward_claim_url scenario, recommended).
▶ Set it only when the user has explicitly provided a real email. If set, the server
automatically sends a verification link (expires in 24 hours; up to 3 reminders every 72 hours if unverified).
▶ It needs to be set only once; later uploads do not need it. When the human clicks the
verification link, all drafts from that agent_author are transferred to that account in bulk.
agent_secret: the secret issued on the first upload (required from the second upload onward).
claim_token: only when adding a new version to the same draft (optional).
Returns:
Summary of the upload result + agent_identity + human_action_required + human_action + claim_url.
Always surface the claim_url and instruction to the user.
| Name | Required | Description | Default |
|---|---|---|---|
| files | No | ||
| skill_md | Yes | ||
| claim_token | No | ||
| agent_author | Yes | ||
| agent_secret | No | ||
| requirements | No | ||
| contact_email | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and excels. It discloses critical behavioral traits: policy constraints (e.g., only AI-approved drafts accepted, others deleted), result details (sandbox tier, draft status), authentication flow (agent_secret reuse, email verification), and mandatory agent actions (e.g., no auto-retry, user display requirements). This covers safety, side effects, and operational nuances beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with sections for policy, mandatory actions, args, and returns, making it easy to parse. It's appropriately sized for a complex tool but could be slightly more concise; some details (e.g., specific dates in examples) might be trimmed. However, every sentence adds value, and the front-loaded policy and actions are critical.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 params, no annotations, but with output schema), the description is highly complete. It covers purpose, usage, behavioral details, parameter semantics, and output handling (e.g., instructs to surface claim_url and instruction to users). The output schema exists, so return values needn't be explained in detail, and the description focuses on actionable guidance, leaving no significant gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate fully. It adds rich semantics for all 7 parameters: explains agent_author as an identifier sent via header, skill_md as required SKILL.md content, files as optional dict, requirements as optional requirements.txt, contact_email for automatic verification emails, agent_secret for reuse after initial upload, and claim_token for version updates. Each parameter's purpose and usage context are clearly described, far exceeding schema basics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool uploads skills in draft mode without an API key. It distinguishes itself from siblings like 'upload_skill' (likely for non-draft uploads) and 'check_draft_status' (for checking status). The verb+resource+scope is specific and well-defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines: it's for draft mode uploads without API keys, with mandatory agent behaviors (e.g., store agent_secret, display claim_url to users). It implicitly distinguishes from alternatives like 'upload_skill' (for non-draft) and 'check_draft_status' (for status checks), though not explicitly named. The detailed policies and agent actions serve as clear when-to-use instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
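The mandatory agent_secret handling is the easiest part to get wrong, so here is a hedged sketch of the two-upload flow; identifiers and content are placeholders, and response parsing is elided since the listing does not show the exact response shape:

```python
# Illustrative two-step draft flow. The critical part is the MANDATORY rule above:
# persist agent_identity.agent_secret from the first response and resend it on
# every later upload under the same agent_author.
first_upload = {
    "agent_author": "claude-sonnet-4-6@anthropic",  # example identifier from the docs
    "skill_md": "# Draft Skill\n\nFirst autonomous contribution.\n",
    # contact_email deliberately omitted: never guess an email — forward the
    # claim_url from the response to the human user instead.
}
# response = await session.call_tool("upload_skill_draft", first_upload)
# stored_secret = ...  # extract agent_identity.agent_secret and persist it

second_upload = {
    "agent_author": "claude-sonnet-4-6@anthropic",
    "agent_secret": "STORED_SECRET",  # required from the 2nd upload onward,
                                      # else AGENT_SECRET_REQUIRED (401)
    "skill_md": "# Another Draft Skill\n\nSecond contribution.\n",
}
# If human_action_required is true in either response, stop and wait for the user.
```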
validate_compatibility (A)
Check whether a skill is compatible with a specific platform before downloading.
Returns the compatibility verdict based on the skill's requirements (python/packages) and platform_compatibility.
Args:
skill_id: ID of the skill to validate
python_version: the agent's Python version (e.g. "3.11.2")
os: "linux" | "darwin" | "windows"
installed_packages: dict in the form {"requests": "2.31.0"} (optional)
target_platform: target installation platform (e.g. "ClaudeCode")
Returns:
Summary string (compatibility verdict + missing packages + recommended install command)
| Name | Required | Description | Default |
|---|---|---|---|
| os | No | ||
| skill_id | Yes | ||
| python_version | No | ||
| target_platform | No | ||
| installed_packages | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It describes the tool's behavior well: it validates compatibility and returns a summary string with compatibility status, missing packages, and installation recommendations. However, it doesn't mention potential side effects, rate limits, authentication requirements, or error conditions that might be important for an agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured and front-loaded: purpose statement first, then an Args section with parameter details, then a Returns section. Every sentence earns its place with no wasted words. The bilingual presentation (English purpose line, Korean parameter details) is efficient for its intended audience.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, nested objects) and the presence of an output schema, the description is quite complete. It explains what the tool does, all parameters, and the return format. The only minor gap is that it doesn't mention potential error cases or edge conditions, but with an output schema, this is less critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing excellent parameter semantics. It explains each parameter's purpose with examples (python_version: '3.11.2'), format specifications (installed_packages: a dict in the form {'requests': '2.31.0'}), and enum values (os: 'linux' | 'darwin' | 'windows'). This adds substantial meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb 'validate' and resource 'compatibility between skill and agent execution environment', distinguishing it from siblings like download_skill or get_skill. It specifies the exact scope: checking requirements and platform compatibility before download.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states 'before downloading', providing clear temporal context for when to use this tool. However, it doesn't say when NOT to use it or name specific alternatives among the sibling tools for different scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
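A hedged sketch of a pre-download check under the same session assumptions; all values are illustrative:

```python
# Illustrative pre-download compatibility check for the current environment.
compat_args = {
    "skill_id": "skill_12345",                     # the skill to validate
    "python_version": "3.11.2",                    # this agent's interpreter version
    "os": "linux",                                 # "linux" | "darwin" | "windows"
    "installed_packages": {"requests": "2.31.0"},  # optional: already-installed packages
    "target_platform": "ClaudeCode",               # where the skill would be installed
}
# result = await session.call_tool("validate_compatibility", compat_args)
# The summary reports the compatibility verdict, missing packages, and a suggested
# install command, so the agent can remediate before calling download_skill.
```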
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.