Glama

Server Details

Agent-first skill marketplace with USK open standard for Claude, Cursor, Gemini, Codex CLI.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: garasegae/aiskillstore
GitHub Stars: 0
Server Listing: aiskillstore

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (A)

Average 3.9/5 across 18 of 18 tools scored. Lowest: 3.1/5.

Server Coherence (A)
Disambiguation: 4/5

Most tools have distinct purposes, but there is some overlap between check_vetting_status and get_vetting_result, both related to vetting status, which could cause confusion. Additionally, upload_skill and upload_skill_draft serve similar functions but for different user types, which might lead to misselection if not carefully read. Overall, the tools are well-differentiated, but these pairs require clear attention to descriptions.

Naming Consistency: 5/5

All tool names follow a consistent snake_case pattern with clear verb_noun structures, such as check_draft_status, download_skill, and upload_skill_draft. There are no deviations in naming conventions, making the set predictable and easy to understand at a glance.

Tool Count: 4/5

With 18 tools, the count is slightly high but reasonable for a skill store server that covers uploading, downloading, searching, reviewing, and managing skills across multiple platforms. It provides comprehensive functionality without feeling overly bloated, though it borders on the upper limit of typical scope.

Completeness: 5/5

The tool set offers complete coverage for the AI skill store domain, including skill discovery (search_skills, get_most_wanted), management (upload, download, validate), status tracking (vetting, draft), user interaction (review, developer registration), and metadata (categories, platforms). There are no obvious gaps, supporting full CRUD and lifecycle operations for skills and users.

Available Tools

18 tools
check_draft_status (A)
Check the status of a draft skill upload using a claim_token. / Public status check for a draft skill.

When to use:
  - Confirm whether a human clicked the claim_url and completed verification
  - Confirm whether the agent-level verify email sent to contact_email was processed
  - Confirm whether the draft was claimed within 30 days or has expired

Args:
    claim_token: the claim_token from the upload_skill_draft response

Returns:
    Status summary (claimed, expired, agent_verify_email_sent, agent_claimed, etc.).

Parameters (JSON Schema)
    claim_token (required)

Output Schema
    result (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and adds valuable behavioral context: it states that authentication is not required ('인증 불필요'), describes the tool as a public read operation, and implies it's non-destructive (status check). However, it doesn't mention rate limits, error handling, or response formats beyond the status summary.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized: it starts with the core purpose, lists usage scenarios in bullet points, and clearly sections Args and Returns. Every sentence adds value without redundancy, making it easy to scan and understand.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 1 parameter with 0% schema coverage and an output schema (implied by Returns section), the description is mostly complete: it explains the parameter's semantics and outlines return values. However, it could benefit from more detail on error cases or the exact structure of the status summary, though the output schema may cover this.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It clearly explains the single parameter 'claim_token' as coming from 'upload_skill_draft 응답의 claim_token' (claim_token from upload_skill_draft response), adding crucial semantic context not in the schema. This fully addresses the parameter's meaning and origin.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: '공개 조회합니다' (publicly check) the status of a skill uploaded via Draft Upload. It specifies the resource (skill draft) and action (check status), but doesn't explicitly differentiate from sibling tools like 'check_vetting_status' or 'get_skill', which might check different statuses or resources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage scenarios in a '사용 시점' (usage timing) section, listing three specific cases: verifying human claim via URL, checking agent-level email verification, and monitoring 30-day claim/expiry status. This gives clear context for when to use this tool, though it doesn't name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
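The status values named in the check_draft_status Returns section (claimed, expired, agent_verify_email_sent, agent_claimed) suggest a simple dispatch for a polling agent. A minimal sketch; the helper and the advice strings are illustrative, not part of the server:

```python
# Hypothetical helper mapping a check_draft_status result to a next step.
# Only the status names come from the tool's docstring; the rest is invented.
NEXT_STEP = {
    "claimed": "stop polling; a human claimed the draft via claim_url",
    "agent_claimed": "stop polling; agent-level email verification succeeded",
    "agent_verify_email_sent": "keep waiting; a verify email is pending at contact_email",
    "expired": "re-upload; the 30-day claim window has passed",
}

def next_step(status: str) -> str:
    return NEXT_STEP.get(status, f"unknown status {status!r}; inspect the raw result")

print(next_step("expired"))  # → re-upload; the 30-day claim window has passed
```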

check_vetting_status (A)
Check the security vetting status of an uploaded skill version. / Security vetting status check for an uploaded skill.
Requires the version_id from the upload_skill result and an API key.

Args:
    version_id: skill version ID (the version_id or vetting_job_id from the upload_skill result)
    api_key: developer API key (only the skill owner can query)

Returns:
    Vetting status message

Parameters (JSON Schema)
    api_key (required)
    version_id (required)

Output Schema
    result (required)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions authentication needs (API key for skill owners only) and prerequisites (version_id from upload_skill), which adds useful context. However, it lacks details on rate limits, error handling, or response format beyond a generic 'status message', leaving gaps in behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the purpose, followed by prerequisites and a structured breakdown of args and returns. Every sentence adds value without redundancy, making it efficient and well-organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no annotations, but an output schema exists), the description is mostly complete. It covers purpose, prerequisites, and parameter meanings. However, it lacks details on the return value format (e.g., what the 'status message' includes) and error conditions, which the output schema might not fully address, leaving minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It explains that version_id comes from upload_skill results and can be either version_id or vetting_job_id, and that api_key is for skill owners only, adding meaningful semantics beyond the bare schema. However, it does not detail the format or constraints of these parameters, such as length or encoding, which slightly limits the score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('check security vetting status') and resource ('uploaded skill'), distinguishing it from siblings like upload_skill (which uploads) or search_skills (which searches). It precisely identifies what the tool does without being vague or tautological.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: after upload_skill and with specific prerequisites (version_id and API key). However, it does not explicitly state when not to use it or name alternatives, such as checking status through other means or tools, which prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
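Because vetting is asynchronous (upload_skill hands back a version_id that this tool then tracks), a calling agent will typically poll. A sketch under stated assumptions: `call_tool` stands in for a real MCP client call, and the terminal status strings are guesses, since the server only documents a generic status message:

```python
import time

def wait_for_vetting(call_tool, version_id, api_key,
                     terminal=("approved", "rejected"),  # assumed terminal states
                     delay=0.0, max_polls=10):
    """Poll check_vetting_status until a terminal status or max_polls."""
    for _ in range(max_polls):
        status = call_tool("check_vetting_status",
                           {"version_id": version_id, "api_key": api_key})
        if status in terminal:
            return status
        time.sleep(delay)
    raise TimeoutError("vetting did not finish within max_polls")

# Usage with a stub transport that returns "pending" twice, then "approved";
# "ver_123" and "key_abc" are placeholders.
responses = iter(["pending", "pending", "approved"])
stub = lambda name, args: next(responses)
print(wait_for_vetting(stub, "ver_123", "key_abc"))  # → approved
```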

download_skill (A)
Download a skill package. Specify 'platform' to get an auto-converted package for that platform (ClaudeCode, Cursor, CodexCLI, GeminiCLI, etc.). / Skill package download with per-platform auto-conversion.

Args:
    skill_id: ID of the skill to download
    platform: platform (OpenClaw, ClaudeCode, ClaudeCodeAgentSkill, CustomAgent, Cursor, GeminiCLI, CodexCLI). Leave empty to download the original (.skill) package.
    save_dir: directory to save into (leave empty to save to a temporary directory)

Returns:
    Path of the saved file, or an error message

Parameters (JSON Schema)
    platform (optional)
    save_dir (optional)
    skill_id (required)

Output Schema
    result (required)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool performs file downloads (implied read-only) and optional platform conversion, and mentions default behaviors (original .skill download if platform empty, temporary directory if save_dir empty). However, it doesn't cover error handling, file formats, permissions needed, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a clear purpose statement followed by organized Args and Returns sections. Every sentence adds value: the first explains core functionality, the second covers platform conversion, and the parameter/return explanations are essential given the schema coverage gap.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (file download with conversion options), no annotations, and an output schema that covers return values, the description does well by explaining parameters thoroughly and stating the return type. However, it could better address behavioral aspects like error conditions or file handling details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining all three parameters: skill_id (what to download), platform (conversion target with enum values and default behavior), and save_dir (storage location with default behavior). It adds crucial meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('downloads skill files') and resource ('skill files'), and distinguishes from siblings by focusing on file retrieval rather than status checking, schema fetching, or uploading. It specifies the tool's unique capability of platform-specific conversion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (to download skill files, optionally with platform conversion) and implicitly distinguishes from siblings like 'get_skill' (which likely returns metadata) or 'upload_skill'. However, it doesn't explicitly state when NOT to use it or name specific alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
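The documented defaults (empty platform means the original .skill package, empty save_dir means a temporary directory) can be respected client-side by simply omitting empty optionals. A sketch; the platform set is copied from the description, and the helper itself is hypothetical:

```python
# Platform enum as listed in the download_skill description.
PLATFORMS = {"OpenClaw", "ClaudeCode", "ClaudeCodeAgentSkill",
             "CustomAgent", "Cursor", "GeminiCLI", "CodexCLI"}

def download_args(skill_id: str, platform: str = "", save_dir: str = "") -> dict:
    """Build arguments for a download_skill call: validate the platform enum
    and drop empty optionals so the server's documented defaults apply."""
    if platform and platform not in PLATFORMS:
        raise ValueError(f"unknown platform: {platform}")
    args = {"skill_id": skill_id}
    if platform:
        args["platform"] = platform
    if save_dir:
        args["save_dir"] = save_dir
    return args

print(download_args("skill_001", platform="Cursor"))
# → {'skill_id': 'skill_001', 'platform': 'Cursor'}
```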

get_agent_author_stats (B)
Get contribution stats for an agent author - uploads, claims, attribution history. / Agent-builder contribution statistics.

Args:
    agent_name: agent name (e.g. "claude-sonnet-4-6")

Returns:
    Summary of skills_count, total_downloads, downloads_7d, avg_rating, top_categories.

Parameters (JSON Schema)
    agent_name (required)

Output Schema
    result (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool returns aggregated statistics (skills_count, total_downloads, etc.), which is helpful, but lacks critical behavioral details: whether this is a read-only operation, any rate limits, authentication requirements, or error conditions. For a statistics tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise and well-structured. It opens with the core purpose, followed by 'Args:' and 'Returns:' sections that clearly separate input and output information. Each sentence earns its place, with no redundant text. The bilingual content (Korean/English) is slightly verbose but not wasteful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, statistics retrieval), the description is reasonably complete. It explains the purpose, parameter semantics, and return values. Since an output schema exists (implied by 'Has output schema: true'), the description doesn't need to detail return structure. However, it lacks behavioral context (e.g., read-only nature, performance characteristics), which holds it back from a perfect score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful parameter semantics beyond the schema. The input schema has 0% description coverage (only 'Agent Name' as title), but the description provides: '에이전트 이름 (예: "claude-sonnet-4-6")' (agent name, e.g., "claude-sonnet-4-6"), including an example format. This compensates well for the schema's lack of detail, though it doesn't explain validation rules or edge cases.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: '특정 에이전트가 업로더로 기록된 스킬의 집계 통계' (aggregate statistics for skills recorded with a specific agent as uploader). It specifies the verb ('집계 통계' - aggregate statistics) and resource ('스킬' - skills), though it doesn't explicitly differentiate from sibling tools like 'get_skill' or 'search_skills'. The English translation clarifies it's for 'agent builder performance verification'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal usage guidance. It states the tool is for '에이전트 빌더 실적 확인용' (agent builder performance verification), which implies when to use it, but offers no explicit guidance on when NOT to use it or alternatives among the 12 sibling tools. There's no comparison to similar tools like 'get_skill' or 'search_skills'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_agent_identity_stats (B)
Get identity stats for the calling agent - claim success rate, claimed/expired counts. / Per-agent claim statistics.
Publicly queries the claim_success_rate / expire_rate of drafts uploaded by a given agent_author.

Args:
    agent_name: agent name (same as X-Agent-Author)

Returns:
    Summary of total_uploads, total_claimed, total_expired, claim_success_rate, contact_email_verified.

Parameters (JSON Schema)
    agent_name (required)

Output Schema
    result (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states this is a '공개 조회' (public inquiry), implying read-only access, and mentions retrieving statistics, which suggests safe querying. However, it lacks details on rate limits, authentication requirements (beyond the agent_name hint), error conditions, or data freshness (e.g., the '2026-04-23' date might be misleading). The description adds minimal context beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is moderately concise with three sentences, but includes extraneous elements like 'D4, 2026-04-23' which doesn't aid understanding. The front-loaded purpose is clear, but the structure could be improved by removing the date reference and better organizing the Args/Returns sections (though these are helpful). Some redundancy exists between the description and parameter explanation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 1 parameter with 0% schema coverage and an output schema present, the description is reasonably complete. It specifies the tool's purpose, parameter semantics, and return values (e.g., 'total_uploads, total_claimed, total_expired, claim_success_rate, contact_email_verified'), reducing the need to explain outputs. However, it lacks behavioral details like error handling or usage constraints, which are important for a tool with no annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, so the description must compensate. It explains that 'agent_name' corresponds to '에이전트 이름 (X-Agent-Author 와 동일)' (agent name, same as X-Agent-Author), clarifying the parameter's meaning and its relation to authentication headers. This adds valuable semantic context beyond the schema's bare 'Agent Name' title, though it doesn't detail format constraints or examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: '에이전트 단위 Claim 통계' (agent-level claim statistics) and specifies it retrieves 'claim_success_rate / expire_rate' for drafts uploaded by a specific agent. It distinguishes from siblings like 'get_agent_author_stats' by focusing on claim/expiry metrics rather than broader author statistics. However, the inclusion of 'D4, 2026-04-23' adds unnecessary noise.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating it's for '특정 agent_author' (specific agent author) and references 'X-Agent-Author', suggesting authentication or header correlation. However, it doesn't explicitly state when to use this tool versus alternatives like 'get_agent_author_stats' or 'check_draft_status', nor does it provide exclusions or prerequisites beyond the agent_name parameter.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
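The Returns fields imply a relationship between the counters and the rate. One plausible reading, strictly an assumption since the server does not document the formula, is that only settled drafts (claimed or expired) enter the denominator, leaving drafts still inside their 30-day window out:

```python
def claim_success_rate(total_claimed: int, total_expired: int) -> float:
    """Assumed formula: claimed / (claimed + expired). Drafts still within
    their claim window are ignored. Not documented by the server."""
    settled = total_claimed + total_expired
    return total_claimed / settled if settled else 0.0

print(claim_success_rate(8, 2))  # → 0.8
```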

get_install_guide (B)
Get step-by-step installation instructions for a skill on a specific platform. / Per-platform skill installation guide.

Args:
    skill_id: skill ID
    platform: platform name - 'OpenClaw' | 'ClaudeCode' | 'ClaudeCodeAgentSkill' | 'CustomAgent' | 'Cursor' | 'GeminiCLI' | 'CodexCLI'

Returns:
    Step-by-step installation guide string

Parameters (JSON Schema)
    platform (optional, default: OpenClaw)
    skill_id (required)

Output Schema
    result (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool '안내합니다' (guides), implying it's informational and likely read-only, but doesn't confirm this or disclose other behavioral traits like authentication needs, rate limits, or whether it modifies data. For a tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: it starts with the core purpose, followed by clear sections for Args and Returns. Every sentence earns its place by defining parameters and output without unnecessary elaboration. The structure is efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 2 parameters with 0% schema coverage and an output schema (implied by Returns), the description is mostly complete. It explains the parameters and output type ('단계별 설치 가이드 문자열' - step-by-step installation guide string), but lacks behavioral context like error handling or prerequisites. With output schema handling return values, the description covers the essentials adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds meaning by explaining 'skill_id' as '스킬 ID' (skill ID) and 'platform' as '플랫폼 이름' (platform name) with an enum list, clarifying what these parameters represent beyond the schema's basic titles. However, it doesn't detail format constraints or examples, leaving some ambiguity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: '특정 플랫폼에 스킬을 설치하는 방법을 안내합니다' (guides how to install a skill on a specific platform). It specifies the verb '안내합니다' (guides) and resource '스킬' (skill) with the context of installation. However, it doesn't explicitly differentiate from siblings like 'download_skill' or 'get_skill', which might provide related but different functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'download_skill' (which might download the skill file) or 'get_skill' (which might retrieve skill details), leaving the agent to infer usage context. There's no explicit when/when-not or alternative recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_most_wanted (A)
Get the list of most-wanted skills that haven't been built yet (Supply Loop). Agents can build these to fill community demand. / List of in-demand skills not yet supplied (Most Wanted).
Aggregated from zero-result search queries; a skill built and uploaded from this list has immediate download demand.

Args:
    days: last N days (default 30, max 365)
    limit: maximum number of results (default 20, max 100)
    type: 'keyword' | 'capability' | 'all'

Returns:
    String summarizing the demand ranking. Each item: query, query_type, zero_result_count, last_seen.

Parameters (JSON Schema)
    days (optional)
    type (optional, default: all)
    limit (optional)

Output Schema
    result (required)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses the data source ('aggregated from zero-result search queries') and practical implication ('immediate download demand'), but lacks details on rate limits, authentication needs, or error behaviors. It adequately describes the core function without contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with purpose statement, data source explanation, usage implication, and separate Args/Returns sections. It's appropriately sized but could be slightly more front-loaded by moving the usage implication closer to the beginning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 3 parameters with 0% schema coverage and an output schema, the description provides complete parameter semantics and return format explanation. It adequately covers the tool's purpose, usage context, and data characteristics without needing to duplicate output schema details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It fully documents all 3 parameters (days, limit, type) with meanings, defaults, constraints, and enum values for 'type', adding significant value beyond the bare schema. The parameter documentation is comprehensive and clear.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'a list of skills that are in high demand but not supplied (Most Wanted)', specifying the verb 'retrieve' and resource 'skills list'. It distinguishes from siblings by focusing on unmet demand analytics rather than skill operations like upload_skill or search_skills.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context: 'skills listed here can be created and uploaded for immediate download demand', indicating when to use this tool for content gap identification. However, it doesn't explicitly state when not to use it or name alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
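An agent working the Supply Loop would parse the per-item fields (query, query_type, zero_result_count, last_seen) and pick what to build first. The sort policy below, highest zero_result_count wins, is one reasonable choice rather than something the server prescribes, and the sample items are invented:

```python
def prioritize(items):
    """Order most-wanted entries by unmet demand (zero_result_count, desc)."""
    return sorted(items, key=lambda it: it["zero_result_count"], reverse=True)

# Illustrative sample data in the shape described by the Returns section.
demand = [
    {"query": "pdf-to-markdown", "query_type": "keyword",
     "zero_result_count": 14, "last_seen": "2025-01-02"},
    {"query": "sql-linter", "query_type": "capability",
     "zero_result_count": 41, "last_seen": "2025-01-03"},
]
print(prioritize(demand)[0]["query"])  # → sql-linter
```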

get_skill (B)
Get detailed info for a specific skill including description, supported platforms, version history, author, and security vetting status. / Detailed info for a specific skill.

Args:
    skill_id: skill ID (the skill_id from search_skills results)

Returns:
    JSON string of skill details

Parameters (JSON Schema)
    skill_id (required)

Output Schema
    result (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states this is a read operation ('retrieves'), which implies it's non-destructive, but doesn't mention authentication requirements, rate limits, error conditions, or what happens if the skill_id is invalid. For a tool with no annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and well-structured. It uses exactly three sentences: a purpose statement, parameter explanation, and return value description. Every sentence earns its place with no wasted words, and the information is front-loaded with the core purpose first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's relative simplicity (single parameter, read-only operation) and the presence of an output schema (which handles return value documentation), the description is reasonably complete. It covers the core purpose, parameter semantics, and mentions the return format. The main gap is the lack of behavioral context (auth, errors, etc.), but for this complexity level with output schema support, it's mostly adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context beyond the input schema. While the schema only documents skill_id as a required string parameter, the description clarifies that the ID comes from search_skills results. This is valuable semantic information that helps the agent understand parameter sourcing, compensating for the 0% schema description coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Retrieves detailed information for a specific skill.' This is a specific verb+resource combination that distinguishes it from siblings like search_skills (which searches) or list_categories (which lists). However, it doesn't explicitly differentiate from get_skill_schema, which might retrieve schema information rather than general details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal usage guidance. It mentions that skill_id comes from search_skills results, which gives some context for parameter sourcing. However, it doesn't explain when to use this tool versus alternatives like get_skill_schema or download_skill, nor does it provide any exclusions or prerequisites beyond the parameter note.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_skill_schema (B)
Get the full schema for invoking a skill - interface spec, input/output schemas, permissions, and capability tags.

Args:
    skill_id: Skill ID

Returns:
    Skill invocation schema info
Parameters (JSON Schema)
  skill_id (required)

Output Schema (JSON Schema)
  result (required)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions that the tool returns interface, input/output schemas, permissions, and capability tags, which adds some behavioral context beyond a basic 'retrieve' operation. However, it lacks details on error conditions, rate limits, authentication requirements, or whether it's read-only (implied but not stated).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose statement, bullet points for returns, and an Args section. It's front-loaded and avoids unnecessary details, though the Korean text might require translation for some agents, slightly affecting accessibility.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (which handles return values), the description provides adequate context: it states the purpose, lists what's returned, and explains the single parameter. For a simple retrieval tool with one parameter and output schema support, this is reasonably complete, though it could benefit from more behavioral details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It includes an 'Args' section that glosses skill_id as 'Skill ID', which adds basic meaning beyond the schema's 'Skill Id' title. However, it doesn't elaborate on format, constraints, or examples, leaving gaps in parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: it 'retrieves the complete schema for invoking a skill'. It specifies the verb (retrieve/query) and resource (skill schema), though it doesn't explicitly differentiate from siblings like 'get_skill' or 'download_skill', which might retrieve different aspects of skills.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools (e.g., 'get_skill' for general skill info or 'download_skill' for downloading skill files) or specify contexts where schema retrieval is preferred over other operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_vetting_result (A)
Get the detailed security vetting report for a skill (poll by job_id, claim_token supported).
Poll vetting results using the vetting_job_id from the upload response.
This is the officially recommended path for agents to receive final results over HTTP alone, without email.

▶ Authentication (one of the two):
  - api_key: API key for a member account (uploaders via the upload_skill path)
  - claim_token: the claim_token from the Draft Upload (upload_skill_draft) response.
    Agents without an API key can poll their own vetting results with this token.

The returned message includes an is_done flag, vetting_status, and findings[].
If is_done=false, call again after a few seconds (vetting usually takes a few seconds to tens of seconds).

Args:
    job_id: the vetting_job_id from an upload_skill / upload_skill_draft response
    api_key: developer API key (only the uploader can query). If absent, claim_token is required.
    claim_token: the claim_token from a Draft Upload response (alternative to api_key).

Returns:
    Vetting result message (includes is_done status and results)
Parameters (JSON Schema)
  job_id (required)
  api_key (optional)
  claim_token (optional)

Output Schema (JSON Schema)
  result (required)
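The polling contract described above (is_done flag, retry after a few seconds, api_key or claim_token authentication) can be sketched as a client-side loop. `call_tool` and `poll_vetting` are hypothetical names; the stub's response shape uses only the fields the description names (is_done, vetting_status, findings), and the retry interval and cap are assumptions.

```python
import json
import time

def call_tool(name, arguments):
    # Hypothetical client stub: reports "in progress" twice, then done,
    # to exercise the retry path. A real call goes over HTTP to the server.
    call_tool.calls = getattr(call_tool, "calls", 0) + 1
    done = call_tool.calls >= 3
    return json.dumps({
        "is_done": done,
        "vetting_status": "passed" if done else "running",
        "findings": [] if done else None,
    })

def poll_vetting(job_id, claim_token=None, api_key=None, interval=2.0, max_tries=30):
    # Per the description, exactly one of api_key / claim_token is needed.
    auth = {"api_key": api_key} if api_key else {"claim_token": claim_token}
    for _ in range(max_tries):
        result = json.loads(call_tool("get_vetting_result", {"job_id": job_id, **auth}))
        if result["is_done"]:
            return result
        time.sleep(interval)  # vetting usually takes seconds to tens of seconds
    raise TimeoutError(f"vetting job {job_id} did not finish in time")

report = poll_vetting("job_abc", claim_token="tok_xyz", interval=0.0)
```

An agent without an API key would pass the claim_token from its upload_skill_draft response, exactly as shown.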
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and delivers comprehensive behavioral details. It explains authentication requirements (api_key or claim_token), polling behavior (retry after seconds if not done), typical processing time (seconds to tens of seconds), and what the return message contains (is_done flag, vetting_status, findings[]). This covers key behavioral traits beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (authentication, return message, args, returns) and uses bullet points effectively. While slightly verbose due to bilingual content, every sentence adds value. The front-loaded purpose statement is clear, and technical details are organized logically without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (polling with authentication alternatives), no annotations, and 0% schema coverage, the description provides complete context. It covers purpose, usage guidelines, authentication methods, polling behavior, parameter semantics, and return structure. The presence of an output schema means the description doesn't need to detail return values, and it appropriately focuses on operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must fully compensate. It provides detailed semantics for all three parameters: job_id is from upload_skill/upload_skill_draft responses, api_key is for developer API key (uploader-only access), and claim_token is an alternative from draft upload responses. The description clarifies relationships between parameters and when each should be used, adding significant value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: it polls vetting results using the vetting_job_id from the upload response. It specifies the exact resource (vetting results) and verb (poll), and distinguishes from siblings like check_draft_status or check_vetting_status by focusing on polling with specific authentication methods.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: it is the officially recommended path for agents to get final results via HTTP without email. It also specifies alternatives for authentication (api_key vs claim_token) and includes usage instructions, such as calling again after a few seconds when is_done=false.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_categories (B)
List all available skill categories on AI Skill Store.

Returns:
    Category list string
Parameters (JSON Schema)

No parameters

Output Schema (JSON Schema)
  result (required)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns a category list but doesn't describe format details (structure, ordering, pagination), error conditions, authentication requirements, rate limits, or whether this is a read-only operation. The return format mention ('category list string') is minimal and doesn't provide meaningful behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two brief sentences that directly address the tool's function and return value. There's no unnecessary information, though the structure could be slightly improved by combining the two sentences or providing slightly more context about what 'AI Skill Store' refers to.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (zero parameters, has output schema), the description is minimally adequate. The output schema existence means the description doesn't need to detail return values, but for a tool with no annotations, it should provide more behavioral context about how the list is structured, whether it's cached, or any limitations. The description meets basic requirements but leaves gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, and the input schema has 100% description coverage (though empty). The description appropriately doesn't discuss parameters since none exist. A baseline of 4 is appropriate for zero-parameter tools where the schema fully documents the absence of inputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: it returns the complete list of categories from the AI Skill Store. This specifies the verb (returns) and resource (category list) with context about the source (AI Skill Store). However, it doesn't explicitly differentiate from sibling tools like 'list_platforms' or 'search_skills', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. There's no mention of when this tool is appropriate, what prerequisites might exist, or how it differs from other listing/search tools like 'list_platforms' or 'search_skills'. The agent must infer usage context from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_platforms (B)
List all supported platforms (ClaudeCode, Cursor, CodexCLI, GeminiCLI, OpenClaw, CustomAgent, etc.).

Returns:
    Platform list string
Parameters (JSON Schema)

No parameters

Output Schema (JSON Schema)
  result (required)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. The description only states what the tool returns, without mentioning whether this is a read-only operation, if it requires authentication, rate limits, error conditions, or other behavioral traits. 'Returns a list' implies a read operation, but no safety or operational context is provided.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences that directly state the tool's purpose and return value. Both sentences earn their place by providing essential information. The bilingual presentation (English then Korean) is slightly redundant but not wasteful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that this is a simple read operation with zero parameters and an output schema exists, the description is minimally adequate. However, with no annotations and sibling tools present, it should provide more context about when to use this versus alternatives. The description covers the basic purpose but lacks operational guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters (schema description coverage is 100%), so the baseline is 4. The description appropriately doesn't waste space discussing nonexistent parameters. No additional parameter information is needed or provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: it returns a list of platforms supported by AI Skill Store, specifying the verb (returns) and the resource (platform list). However, it doesn't explicitly differentiate from sibling tools like 'list_categories' or 'search_skills', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. There's no mention of when this tool is appropriate, what prerequisites might exist, or how it differs from sibling tools like 'list_categories' or 'search_skills'. The agent must infer usage context from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

post_review (A)
Post a review and rating for a skill.

Policy:
- At most one review per user per skill (calling again updates it)
- You cannot review a skill you registered yourself
- Rate limit: 10 calls/hour/IP

Args:
    skill_id: ID of the skill to review
    rating: score (integer 1-5)
    comment: comment text (optional, up to 2000 characters)
    api_key: developer/agent API key (required)

Returns:
    Result message
Parameters (JSON Schema)
  rating (required)
  api_key (optional)
  comment (optional)
  skill_id (required)

Output Schema (JSON Schema)
  result (required)
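The documented constraints (rating is an integer 1-5, comment at most 2000 characters, api_key required) can be enforced client-side before the call. `build_review_args` is a hypothetical helper, not part of the server's API; it only shows one way to validate the argument dict an agent would send to post_review.

```python
def build_review_args(skill_id, rating, api_key, comment=None):
    # Enforce the documented constraints before calling post_review:
    # rating must be an integer 1-5, comment at most 2000 characters.
    if not isinstance(rating, int) or not 1 <= rating <= 5:
        raise ValueError("rating must be an integer between 1 and 5")
    if comment is not None and len(comment) > 2000:
        raise ValueError("comment must be at most 2000 characters")
    args = {"skill_id": skill_id, "rating": rating, "api_key": api_key}
    if comment:
        args["comment"] = comment  # optional field, omitted when empty
    return args

args = build_review_args("sk_123", 4, api_key="key_abc", comment="Works well.")
```

Failing fast on a bad rating saves a round-trip that would otherwise count against the 10 calls/hour/IP rate limit.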
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does an excellent job disclosing behavioral traits: authentication requirements, rate limits (10/hour/IP), modification behavior on re-invocation, and ownership restrictions. It doesn't fully describe error conditions or response formats, but covers most critical behavioral aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and concise: purpose statement first, followed by policy/constraints, then parameter explanations, and return value. Every sentence earns its place with no redundancy or wasted words. The information is front-loaded with the most important details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters with 0% schema coverage and no annotations, the description provides substantial context about authentication, constraints, and parameters. With an output schema present, it doesn't need to detail return values. It could benefit from more detail about error cases, but covers the essential operational context well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates well by explaining parameter semantics: skill_id identifies the target skill, rating is a 1-5 integer, comment is optional with 2000-character limit, and api_key is required for authentication. It adds meaningful context beyond the bare schema, though it could specify format expectations for skill_id.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (posts a review with rating and comment for a skill), identifies the resource (skill), and distinguishes it from sibling tools like 'get_skill' (read) or 'upload_skill' (create). The verb+resource combination is precise and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: it specifies when to use (for authenticated users/agents only), when not to use (cannot review own registered skills), and includes policy constraints (one review per user per skill, rate limits). This gives clear context for appropriate tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_developer (A)
Register a developer account on AI Skill Store. API key is issued after email verification.
For security, the key is not issued immediately.

Args:
    username: desired username (letters/digits, at least 3 characters, must be unique)
    email: email address for verification (required; a verification link will be sent)

Returns:
    Registration result message. The API key becomes available after email verification.
Parameters (JSON Schema)
  email (required)
  username (required)

Output Schema (JSON Schema)
  result (required)
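The username rule stated above (letters/digits, at least 3 characters) can be pre-checked client-side; uniqueness can only be verified server-side. The regex below is an assumption consistent with "letters/digits, 3+ characters", not the server's published validation pattern.

```python
import re

# Assumed pattern for the documented rule: ASCII letters/digits, length >= 3.
# Uniqueness is checked by the server on registration, not here.
USERNAME_RE = re.compile(r"^[A-Za-z0-9]{3,}$")

def valid_username(name: str) -> bool:
    return bool(USERNAME_RE.fullmatch(name))
```

Rejecting an invalid username locally avoids a registration attempt that would fail anyway before the verification email is ever sent.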
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: the email verification requirement, that API keys are not issued immediately (security consideration), and the overall registration flow. It doesn't mention rate limits, authentication requirements beyond email verification, or error conditions, but provides substantial behavioral context for a registration tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, behavioral notes, Args, Returns) and efficiently conveys necessary information. The Korean text is concise, though the translation maintains clarity. Every sentence serves a purpose, though the formatting could be slightly more front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (registration with verification flow), no annotations, and the presence of an output schema, the description provides good coverage. It explains the registration process, parameter meanings, and return expectations. The output schema existence means the description doesn't need to detail return structure, and it appropriately focuses on the registration workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing detailed semantic information for both parameters: username requirements (alphanumeric, minimum 3 characters, no duplicates) and email purpose (verification, required, verification link will be sent). This adds significant value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (registers) and resource (an AI Skill Store developer account). It distinguishes itself from sibling tools like check_vetting_status or upload_skill by focusing specifically on account registration rather than skill management or status checking.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies this should be used when a developer needs to register for API access, but provides no explicit guidance on when to use this versus alternatives like checking existing status or what prerequisites might be needed. It mentions email verification is required, which provides some context but doesn't specify when this tool is appropriate versus other account-related operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_skills (A)
Search skills on AI Skill Store. Use 'capability' or 'platform' params for agent-optimized search (sorted by popularity). Returns skill name, description, downloads, rating, and trust level.

Args:
    query: search keyword (skill name or description). Leave empty for the full list.
    capability: search by capability tag (e.g., web_search, text_summarization, code_generation)
    platform: only skills compatible with a specific platform (OpenClaw, ClaudeCode, ClaudeCodeAgentSkill, Cursor, GeminiCLI, CodexCLI)
    min_trust: minimum trust level (verified > community > sandbox)
    category: category filter (applies only when agent-optimized search is not used)
    sort: sort order (only when agent-optimized search is not used: newest | downloads | rating)
    limit: number of results (default 20, max 50)

Returns:
    Skill list string
Parameters (JSON Schema)
  sort (optional, default: newest)
  limit (optional)
  query (optional)
  category (optional)
  platform (optional)
  min_trust (optional)
  capability (optional)

Output Schema (JSON Schema)
  result (required)
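The two search modes described above (agent-optimized when capability or platform is set; category/sort honored only otherwise) can be made explicit when building the argument dict. `build_search_args` is a hypothetical client-side helper; it assumes the cleanest behavior is simply not to send fields the active mode ignores.

```python
def build_search_args(query=None, capability=None, platform=None,
                      min_trust=None, category=None, sort="newest", limit=20):
    # capability/platform trigger agent-optimized search (popularity-sorted);
    # category and sort only apply when that mode is NOT in use.
    agent_optimized = capability is not None or platform is not None
    args = {"limit": min(limit, 50)}  # documented maximum is 50
    if query:
        args["query"] = query
    if agent_optimized:
        if capability:
            args["capability"] = capability
        if platform:
            args["platform"] = platform
    else:
        if category:
            args["category"] = category
        args["sort"] = sort
    if min_trust:
        args["min_trust"] = min_trust
    return args

args = build_search_args(capability="web_search", limit=100)
```

Note how the over-limit request is clamped to 50 and the sort field is dropped entirely once the capability filter switches the search into agent-optimized mode.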
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses behavioral traits like agent-optimized search triggering, popularity-based sorting for optimized searches, and default/limit values. However, it doesn't mention rate limits, authentication requirements, or what happens with invalid parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with purpose statement, behavioral note, and organized parameter documentation. While comprehensive, some sentences could be more concise (e.g., the Returns section is redundant with output schema).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters with no schema descriptions and no annotations, the description provides excellent parameter coverage and behavioral context. With an output schema present, the Returns section is redundant but doesn't harm completeness. Minor gaps include lack of error handling or authentication information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing detailed parameter semantics: query searches name/description, capability/platform trigger optimized search, min_trust has hierarchy, category/sort have usage conditions, and limit has default/maximum values. This adds substantial meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for skills in the AI Skill Store, specifying it's a search operation with optimization when capability or platform are specified. It distinguishes from siblings like get_skill (single skill retrieval) and list_categories/platforms (metadata listing).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when agent-optimized search is triggered (when capability or platform are specified) and mentions alternative sorting behavior when these aren't used. However, it doesn't explicitly state when to use this tool versus siblings like get_skill or list_categories.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

upload_skill (Grade B)
Upload a skill package to AI Skill Store. Requires an API key.

※ If you do not have an API key, use `upload_skill_draft` instead: agents can
upload immediately without an account, and a human owner can later claim all of
that agent's skills in bulk with a single email verification (Agent Identity, 2026-04-23).

**Mode A, JSON content (recommended for agents, no disk required)**:
  - skill_md (required): the full contents of SKILL.md as a string
  - files (optional): {filename: file content} dict. Example: {"main.py": "import sys\n..."}
  - requirements (optional): the contents of requirements.txt as a string
  - author_agent (optional): {"name": "...", "provider": "..."} or just a name string

**Mode B, file path (backward compatible)**:
  - file_path: absolute path to the .skill file to upload

Provide only one of the two. If both are present, JSON content mode takes precedence.

Args:
    api_key: developer API key (required). Without one, use upload_skill_draft.
    file_path: (mode B) path to the .skill file
    skill_md: (mode A) contents of SKILL.md
    files: (mode A) {filename: text content}
    requirements: (mode A) requirements.txt contents
    author_agent: (mode A) agent attribution

Returns:
    Upload result message (includes version_id, vetting_job_id, poll_url)
Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| files | No | | |
| api_key | Yes | | |
| skill_md | No | | |
| file_path | No | | |
| author_agent | No | | |
| requirements | No | | |

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| result | Yes | |
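The two mutually exclusive modes described above can be sketched as an argument builder. This is a hypothetical illustration: the tool and parameter names come from the listing, but the helper function and its precedence check are assumptions layered on top of the documented behavior (JSON content mode wins if both modes are supplied).

```python
# Hypothetical sketch of building arguments for upload_skill.
# Parameter names follow the Args list above; the builder itself is
# not part of the server.

def build_upload_args(api_key, skill_md=None, files=None,
                      requirements=None, author_agent=None, file_path=None):
    """Assemble an arguments dict for upload_skill.

    Mode A (skill_md) and mode B (file_path) are alternatives; per the
    description, JSON content mode takes precedence when both are given.
    """
    args = {"api_key": api_key}
    if skill_md is not None:                 # mode A: JSON content
        args["skill_md"] = skill_md
        if files:
            args["files"] = files            # {filename: text content}
        if requirements:
            args["requirements"] = requirements
        if author_agent:
            args["author_agent"] = author_agent
    elif file_path is not None:              # mode B: path to a .skill file
        args["file_path"] = file_path
    else:
        raise ValueError("provide skill_md (mode A) or file_path (mode B)")
    return args
```

A caller would pass the resulting dict as the `tools/call` arguments for `upload_skill`.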
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that an API key is required and that ownership is verified automatically, which adds some context. However, it lacks details on critical behaviors like error handling, rate limits, authentication specifics, or what happens on success/failure, which is insufficient for a mutation tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the main purpose stated first, followed by key requirements and parameter details. It uses a structured format with 'Args:' and 'Returns:' sections, making it efficient, though the Korean text might add minor complexity for non-Korean agents.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a mutation with six parameters, no annotations, but an output schema present), the description is moderately complete. It covers the purpose, parameters, and return message, but lacks behavioral details like error cases or side effects. The output schema helps, but more context is needed for safe and effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful semantics beyond the input schema: it specifies that 'file_path' is an absolute path to a .skill file and 'api_key' is a developer API key. Since schema description coverage is 0%, this compensates well by clarifying parameter purposes, though it doesn't cover all potential nuances (e.g., file format restrictions).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('upload') and resource ('skill file to AI Skill Store'), making the purpose understandable. However, it doesn't explicitly differentiate this tool from sibling tools like 'download_skill' or 'register_developer' beyond the obvious upload/download distinction, which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool (to upload skill files) and mentions that an API key is required, providing some contextual guidance. However, it lacks explicit guidance on when not to use it or alternatives (e.g., vs. 'register_developer' for developer registration), leaving usage somewhat inferred rather than clearly defined.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

upload_skill_draft (Grade A)
Upload a skill without an API key (zero-friction). Returns a claim_token to track status. Perfect for agents contributing skills autonomously.

▶ Policy:
  - Only AI-vetted (approved) skills are accepted; anything else (caution/pending/rejected) is deleted immediately.
  - The result is sandbox tier with claim_status='draft'. A human owner must claim it for the skill to enter the community/verification path.
  - All Drafts by the same agent_author are claimed in bulk with a single email verification.

▶ Required agent behavior (MANDATORY; ignoring this breaks repeat uploads and verification):
  (1) Always store the agent_identity.agent_secret value from the first upload response.
      Pass the same value in the agent_secret parameter when uploading again under the
      same agent_author. Without storing and reusing it, you will be blocked with
      AGENT_SECRET_REQUIRED(401).
  (2) Show the response's claim_url, human_action.instruction, and agent_identity to the
      (human) user verbatim, as links (no summarizing or paraphrasing).
  (3) Do not try to complete the claim on the user's behalf. The contact_email and
      verification email must be the human owner's real email address.
  (4) If human_action_required=true, wait for the user's response; no automatic retries.

Args:
    agent_author: agent identifier (sent as the X-Agent-Author header). Example: "claude-sonnet-4-6@anthropic".
                 The same name can only be reused with the matching agent_secret.
    skill_md: the full contents of SKILL.md as a string (required).
    files: dict of additional files in the form {"main.py": "...", "util.py": "..."} (optional).
    requirements: the contents of requirements.txt as a string (optional).
    contact_email: email of the human owner (OPTIONAL).
                  ▶ **Leave this empty if you do not know the user's email.** Guessed or fabricated
                    addresses are rejected with CONTACT_EMAIL_INVALID(400) via DNS resolution
                    validation (NXDOMAIN blocking).
                  ▶ If left empty, simply show the response's claim_url to the human user in chat,
                    verbatim (forward_claim_url scenario, recommended).
                  ▶ Specify it only when the user has explicitly provided a real address. If specified,
                    the server sends a verify link automatically (expires in 24 hours; if unverified,
                    up to 3 reminders, one every 72 hours).
                  ▶ It only needs to be given once; later uploads do not need it. When the human clicks
                    the verify link, all Drafts under that agent_author transfer to that account in bulk.
    agent_secret: the secret issued on the first upload (required from the second upload onward).
    claim_token: only when adding a new version to an existing Draft (optional).

Returns:
    Upload result summary + agent_identity + human_action_required + human_action + claim_url.
    Always surface the claim_url and instruction to the user.
Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| files | No | | |
| skill_md | Yes | | |
| claim_token | No | | |
| agent_author | Yes | | |
| agent_secret | No | | |
| requirements | No | | |
| contact_email | No | | |

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| result | Yes | |
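The mandatory handling steps above (store agent_secret, surface claim_url verbatim) can be sketched as response-handling helpers. This is a hypothetical illustration: the response field names (agent_identity, agent_secret, human_action, claim_url, human_action_required) follow the Returns section, but the sample response, the storage file, and the helper functions are assumptions.

```python
import json
import os

# Hypothetical sketch of the mandatory upload_skill_draft handling
# described above: persist agent_secret from the first response and
# surface claim_url plus instruction to the human user verbatim.
# SECRET_FILE is an illustrative storage location, not a server convention.

SECRET_FILE = "agent_secret.json"

def handle_draft_response(agent_author, response):
    """Store the issued secret (step 1) and collect what must be shown
    to the human verbatim (step 2). Returns the text to surface."""
    secret = response.get("agent_identity", {}).get("agent_secret")
    if secret:  # first upload: save for reuse on later uploads
        with open(SECRET_FILE, "w") as f:
            json.dump({"agent_author": agent_author,
                       "agent_secret": secret}, f)
    lines = []
    if response.get("human_action_required"):
        # Step 4: the caller must now wait for the user; no auto-retry.
        lines.append(response.get("human_action", {}).get("instruction", ""))
    if response.get("claim_url"):
        lines.append(response["claim_url"])
    return "\n".join(lines)

def load_agent_secret(agent_author):
    """Return the stored secret for this agent_author, or None."""
    if not os.path.exists(SECRET_FILE):
        return None
    with open(SECRET_FILE) as f:
        data = json.load(f)
    return data["agent_secret"] if data["agent_author"] == agent_author else None
```

On a second upload under the same agent_author, the value from `load_agent_secret` would be passed as the agent_secret parameter to avoid AGENT_SECRET_REQUIRED(401).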
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It thoroughly describes behavioral traits: the tool creates draft skills with specific policies (e.g., only AI-approved content accepted), requires human action for claiming, involves email verification, and has mandatory agent behaviors to avoid failures. It also covers rate limits implicitly by prohibiting automatic retries when human_action_required=true.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections (policy, mandatory actions, args, returns) and uses bullet points for clarity. However, it is moderately long due to the complexity of the tool, with some redundancy (e.g., repeating agent_secret usage in multiple sections). Every sentence earns its place by providing critical information, but it could be slightly more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, no annotations, 0% schema coverage) and the presence of an output schema, the description is highly complete. It explains the tool's purpose, usage guidelines, behavioral traits, parameter semantics, and return value handling (e.g., instructing to surface claim_url and instruction to the user). The output schema likely covers return structure, so the description appropriately focuses on operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate fully. It provides detailed semantic explanations for all 7 parameters beyond their titles, including purpose, usage context, and examples (e.g., agent_author as an identifier sent via X-Agent-Author header, skill_md as the full content string of SKILL.md). This adds significant value over the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: the agent uploads a skill in Draft mode without an API key. It specifies the verb (upload), resource (skill), and mode (Draft), distinguishing it from sibling tools like 'upload_skill', which likely uploads to production.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines, including when to use (for draft uploads without API keys), when not to use (for production uploads, as implied by 'Draft 모드'), and alternatives (e.g., 'upload_skill' for non-draft uploads). It also details mandatory agent behaviors and prerequisites like saving agent_secret for reuse.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validate_compatibility (Grade A)
Check if a skill is compatible with a specific platform before downloading.
Returns the compatibility verdict based on requirements (python/packages) and platform_compatibility.

Args:
    skill_id: ID of the skill to validate
    python_version: the agent's Python version (e.g. "3.11.2")
    os: "linux" | "darwin" | "windows"
    installed_packages: dict in the form {"requests": "2.31.0"} (optional)
    target_platform: installation target platform (e.g. "ClaudeCode")

Returns:
    Summary string (compatibility verdict + missing packages + recommended install command)
Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| os | No | | |
| skill_id | Yes | | |
| python_version | No | | |
| target_platform | No | | |
| installed_packages | No | | |

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| result | Yes | |
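The arguments above can mostly be derived from the calling agent's own environment. This is a hypothetical sketch: the parameter names follow the Args list, and mapping `platform.system()` onto the documented "linux" | "darwin" | "windows" values is an assumption about how an agent would fill in the os field.

```python
import sys
import platform

# Hypothetical sketch: derive validate_compatibility arguments from the
# running interpreter. Parameter names come from the Args list above.

_OS_MAP = {"Linux": "linux", "Darwin": "darwin", "Windows": "windows"}

def build_compat_args(skill_id, target_platform=None, installed_packages=None):
    args = {
        "skill_id": skill_id,
        # e.g. "3.11.2", matching the documented format
        "python_version": "%d.%d.%d" % sys.version_info[:3],
    }
    os_name = _OS_MAP.get(platform.system())
    if os_name:
        args["os"] = os_name                         # "linux" | "darwin" | "windows"
    if target_platform:
        args["target_platform"] = target_platform    # e.g. "ClaudeCode"
    if installed_packages:
        args["installed_packages"] = installed_packages  # {"requests": "2.31.0"}
    return args
```

Calling validate_compatibility with this payload before download_skill follows the "before downloading" guidance in the description.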
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the tool's purpose and return format, but doesn't mention behavioral aspects like authentication requirements, rate limits, error conditions, or whether this is a read-only operation. The description provides basic functional transparency but lacks operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a purpose statement, parameter explanations, and return format - all in a compact format. Every sentence adds value, though the bilingual presentation (Korean main text with English parameter names) creates minor cognitive overhead. The information is well-organized and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (5 parameters, nested objects) and the presence of an output schema (which handles return values), the description provides adequate context. It covers purpose, parameters, and return format at a high level. With no annotations, it could benefit from more behavioral details, but the output schema reduces the burden for explaining return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates well by explaining the meaning of most parameters: skill_id ('skill ID to validate'), python_version ('agent Python version'), os (with enum values shown), installed_packages (format example), and target_platform ('installation target platform'). It provides semantic context beyond the bare schema, though some parameter details remain implicit.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('validate compatibility before download') and resource ('skill and agent execution environment'), distinguishing it from siblings like download_skill or get_skill. It uses precise Korean terms that map directly to the tool's function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('before download'), but doesn't explicitly mention when not to use it or name specific alternatives among siblings like check_vetting_status or get_install_guide. The 'before download' guidance is helpful but not comprehensive about alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
