Glama

AI Skill Store

Server Details

Agent-first skill marketplace with USK open standard for Claude, Cursor, Gemini, Codex CLI.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: garasegae/aiskillstore
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.6/5 across 10 of 10 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap. For example, check_vetting_status handles vetting status, download_skill retrieves files, get_install_guide provides installation instructions, and get_skill fetches skill details. The tools cover different aspects of the skill lifecycle without ambiguity.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case, such as check_vetting_status, download_skill, get_install_guide, and search_skills. This uniformity makes the tool set predictable and easy to navigate for agents.

Tool Count: 5/5

With 10 tools, the server is well-scoped for managing an AI skill store, covering essential operations like registration, upload, download, search, and information retrieval. Each tool serves a necessary function without redundancy, fitting the domain's needs appropriately.

Completeness: 4/5

The tool set provides comprehensive coverage for the AI skill store domain, including developer registration, skill upload/download, search, and detailed information retrieval. A minor gap exists in update or delete operations for skills, but core workflows are fully supported, allowing agents to work effectively.

Available Tools

10 tools
check_vetting_status (Grade A)

Checks the security vetting status of an uploaded skill.
Requires the version_id from the upload_skill result and an API key.

Args:
    version_id: skill version ID (the version_id or vetting_job_id from the upload_skill result)
    api_key: developer API key (only the skill owner can query)

Returns:
    vetting status message

Parameters (JSON Schema):
    api_key (required)
    version_id (required)

Output Schema:
    result (required)
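As an MCP tool, a call to check_vetting_status travels as a JSON-RPC tools/call request. A minimal sketch of the payload an MCP client would send; the version_id and api_key values are hypothetical placeholders:

```python
import json

# Sketch of an MCP tools/call request for check_vetting_status.
# The version_id and api_key values are placeholders, not real credentials.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_vetting_status",
        "arguments": {
            "version_id": "ver_123",   # from an earlier upload_skill result
            "api_key": "dev_key_abc",  # developer API key (skill owner only)
        },
    },
}
print(json.dumps(request, indent=2))
```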
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions authentication needs (API key for skill owners only) and prerequisites (version_id from upload_skill), which adds useful context. However, it lacks details on rate limits, error handling, or response format beyond a generic 'status message', leaving gaps in behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the purpose, followed by prerequisites and a structured breakdown of args and returns. Every sentence adds value without redundancy, making it efficient and well-organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no annotations, but an output schema exists), the description is mostly complete. It covers purpose, prerequisites, and parameter meanings. However, it lacks details on the return value format (e.g., what the 'status message' includes) and error conditions, which the output schema might not fully address, leaving minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It explains that version_id comes from upload_skill results and can be either version_id or vetting_job_id, and that api_key is for skill owners only, adding meaningful semantics beyond the bare schema. However, it does not detail the format or constraints of these parameters, such as length or encoding, which slightly limits the score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('check security vetting status') and resource ('uploaded skill'), distinguishing it from siblings like upload_skill (which uploads) or search_skills (which searches). It precisely identifies what the tool does without being vague or tautological.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: after upload_skill and with specific prerequisites (version_id and API key). However, it does not explicitly state when not to use it or name alternatives, such as checking status through other means or tools, which prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

download_skill (Grade A)

Downloads a skill file. If platform is specified, you receive a package converted for that platform.

Args:
    skill_id: ID of the skill to download
    platform: platform (OpenClaw, ClaudeCode, ClaudeCodeAgentSkill, CustomAgent, Cursor, GeminiCLI, CodexCLI). Leave empty to download the original (.skill).
    save_dir: directory path for saving (leave empty to save to a temporary directory)

Returns:
    path of the saved file, or an error message

Parameters (JSON Schema):
    platform (optional)
    save_dir (optional)
    skill_id (required)

Output Schema:
    result (required)
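The documented defaults (empty platform → original .skill, empty save_dir → temporary directory) can be mirrored in a client-side wrapper before issuing the call. A sketch under those assumptions; build_download_args is a hypothetical helper, not part of the server:

```python
import tempfile

# Platform enum as documented in the tool description above.
PLATFORMS = {"OpenClaw", "ClaudeCode", "ClaudeCodeAgentSkill",
             "CustomAgent", "Cursor", "GeminiCLI", "CodexCLI"}

def build_download_args(skill_id, platform=None, save_dir=None):
    """Assemble download_skill arguments, applying the documented defaults."""
    if platform is not None and platform not in PLATFORMS:
        raise ValueError(f"unknown platform: {platform}")
    args = {"skill_id": skill_id}
    if platform:  # omitted -> server returns the original .skill package
        args["platform"] = platform
    # Empty save_dir -> save to a temporary directory, as documented.
    args["save_dir"] = save_dir or tempfile.mkdtemp()
    return args

args = build_download_args("skill_001", platform="Cursor")
```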
Behavior: 3/5

With no annotations provided, the description carries the full burden. It discloses that the tool performs file downloads (implied read-only) and optional platform conversion, and mentions default behaviors (original .skill download if platform empty, temporary directory if save_dir empty). However, it doesn't cover error handling, file formats, permissions needed, or rate limits.

Conciseness: 5/5

The description is efficiently structured with a clear purpose statement followed by organized Args and Returns sections. Every sentence adds value: the first explains core functionality, the second covers platform conversion, and the parameter/return explanations are essential given the schema coverage gap.

Completeness: 4/5

Given the tool's moderate complexity (file download with conversion options), no annotations, and an output schema that covers return values, the description does well by explaining parameters thoroughly and stating the return type. However, it could better address behavioral aspects like error conditions or file handling details.

Parameters: 5/5

With 0% schema description coverage, the description fully compensates by explaining all three parameters: skill_id (what to download), platform (conversion target with enum values and default behavior), and save_dir (storage location with default behavior). It adds crucial meaning beyond the bare schema.

Purpose: 5/5

The description clearly states the specific action ('downloads skill files') and resource ('skill files'), and distinguishes from siblings by focusing on file retrieval rather than status checking, schema fetching, or uploading. It specifies the tool's unique capability of platform-specific conversion.

Usage Guidelines: 4/5

The description provides clear context for when to use this tool (to download skill files, optionally with platform conversion) and implicitly distinguishes from siblings like 'get_skill' (which likely returns metadata) or 'upload_skill'. However, it doesn't explicitly state when NOT to use it or name specific alternatives.

get_install_guide (Grade B)

Explains how to install a skill on a specific platform.

Args:
    skill_id: skill ID
    platform: platform name - 'OpenClaw' | 'ClaudeCode' | 'ClaudeCodeAgentSkill' | 'CustomAgent' | 'Cursor' | 'GeminiCLI' | 'CodexCLI'

Returns:
    step-by-step installation guide string

Parameters (JSON Schema):
    platform (optional, default: OpenClaw)
    skill_id (required)

Output Schema:
    result (required)
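Because platform is an enum with a documented default of OpenClaw, a client can validate the value before issuing the call. A hedged sketch; install_guide_args is a hypothetical helper, and the enum list comes from the description above:

```python
# Platform enum as listed in the get_install_guide description.
VALID_PLATFORMS = ("OpenClaw", "ClaudeCode", "ClaudeCodeAgentSkill",
                   "CustomAgent", "Cursor", "GeminiCLI", "CodexCLI")

def install_guide_args(skill_id, platform="OpenClaw"):
    # "OpenClaw" mirrors the schema default shown above.
    if platform not in VALID_PLATFORMS:
        raise ValueError(
            f"platform must be one of {VALID_PLATFORMS}, got {platform!r}")
    return {"skill_id": skill_id, "platform": platform}

args = install_guide_args("skill_001", "GeminiCLI")
```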
Behavior: 2/5

No annotations are provided, so the description carries the full burden. It states that the tool 'explains' installation, implying it's informational and likely read-only, but doesn't confirm this or disclose other behavioral traits like authentication needs, rate limits, or whether it modifies data. For a tool with zero annotation coverage, this is a significant gap in transparency.

Conciseness: 5/5

The description is appropriately sized and front-loaded: it starts with the core purpose, followed by clear sections for Args and Returns. Every sentence earns its place by defining parameters and output without unnecessary elaboration. The structure is efficient and easy to parse.

Completeness: 4/5

Given 2 parameters with 0% schema coverage and an output schema (implied by Returns), the description is mostly complete. It explains the parameters and output type (a step-by-step installation guide string), but lacks behavioral context like error handling or prerequisites. With the output schema handling return values, the description covers the essentials adequately.

Parameters: 4/5

Schema description coverage is 0%, so the description must compensate. It adds meaning by explaining skill_id as the skill ID and platform as the platform name with an enum list, clarifying what these parameters represent beyond the schema's basic titles. However, it doesn't detail format constraints or examples, leaving some ambiguity.

Purpose: 4/5

The description clearly states the tool's purpose: it explains how to install a skill on a specific platform, specifying the verb ('explains') and resource ('skill') in the context of installation. However, it doesn't explicitly differentiate from siblings like 'download_skill' or 'get_skill', which might provide related but different functionality.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'download_skill' (which might download the skill file) or 'get_skill' (which might retrieve skill details), leaving the agent to infer usage context. There's no explicit when/when-not or alternative recommendations.

get_skill (Grade B)

Retrieves detailed information about a specific skill.

Args:
    skill_id: skill ID (the skill_id from search_skills results)

Returns:
    skill detail JSON string

Parameters (JSON Schema):
    skill_id (required)

Output Schema:
    result (required)
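Since skill_id comes from search_skills results and the tool returns a JSON string, a typical flow is search, pick an ID, fetch, parse. A sketch using stand-in responses; the field names inside the JSON are assumptions, since only the outer result field is documented:

```python
import json

# Hypothetical search_skills result (field names are illustrative guesses).
search_result = json.dumps([{"skill_id": "skill_001", "name": "web-summarizer"}])

skills = json.loads(search_result)
skill_id = skills[0]["skill_id"]  # this ID feeds into get_skill

get_skill_call = {
    "name": "get_skill",
    "arguments": {"skill_id": skill_id},
}

# get_skill returns skill details as a JSON string, so the agent parses again.
detail_json = json.dumps({"skill_id": skill_id, "rating": 4.5})  # stand-in response
detail = json.loads(detail_json)
```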
Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states this is a read operation ('retrieves'), which implies it's non-destructive, but doesn't mention authentication requirements, rate limits, error conditions, or what happens if the skill_id is invalid. For a tool with no annotation coverage, this leaves significant behavioral gaps.

Conciseness: 5/5

The description is extremely concise and well-structured. It uses exactly three sentences: a purpose statement, parameter explanation, and return value description. Every sentence earns its place with no wasted words, and the information is front-loaded with the core purpose first.

Completeness: 4/5

Given the tool's relative simplicity (single parameter, read-only operation) and the presence of an output schema (which handles return value documentation), the description is reasonably complete. It covers the core purpose, parameter semantics, and mentions the return format. The main gap is the lack of behavioral context (auth, errors, etc.), but for this complexity level with output schema support, it's mostly adequate.

Parameters: 4/5

The description adds meaningful context beyond the input schema. While the schema only documents skill_id as a required string parameter, the description explains that it is the skill_id from search_skills results, clarifying how the ID is sourced. This is valuable semantic information that compensates for the 0% schema description coverage.

Purpose: 4/5

The description clearly states the tool's purpose: it retrieves detailed information for a specific skill. This is a specific verb+resource combination that distinguishes it from siblings like search_skills (which searches) or list_categories (which lists). However, it doesn't explicitly differentiate from get_skill_schema, which might retrieve schema information rather than general details.

Usage Guidelines: 2/5

The description provides minimal usage guidance. It mentions that skill_id comes from search_skills results, which gives some context for parameter sourcing. However, it doesn't explain when to use this tool versus alternatives like get_skill_schema or download_skill, nor does it provide any exclusions or prerequisites beyond the parameter note.

get_skill_schema (Grade B)

Retrieves the full schema an agent needs to invoke a skill.
Returns the interface, input/output schemas, permissions, capability tags, and more.

Args:
    skill_id: skill ID

Returns:
    skill invocation schema information

Parameters (JSON Schema):
    skill_id (required)

Output Schema:
    result (required)
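The description says the result bundles the interface, I/O schemas, permissions, and capability tags. A sketch of how an agent might unpack such a response; every key name here is an illustrative guess, since the output schema only declares a single result field:

```python
import json

# Stand-in for a get_skill_schema result; all key names are hypothetical.
result = json.dumps({
    "interface": {"entrypoint": "run"},
    "input_schema": {"type": "object"},
    "output_schema": {"type": "string"},
    "permissions": ["network"],
    "capability_tags": ["web_search"],
})

schema = json.loads(result)
# An agent could gate tool use on the advertised capability tags.
can_search_web = "web_search" in schema["capability_tags"]
```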
Behavior: 2/5

No annotations are provided, so the description carries the full burden. It mentions that the tool returns interface, input/output schemas, permissions, and capability tags, which adds some behavioral context beyond a basic 'retrieve' operation. However, it lacks details on error conditions, rate limits, authentication requirements, or whether it's read-only (implied but not stated).

Conciseness: 4/5

The description is well-structured with a clear purpose statement, bullet points for returns, and an Args section. It's front-loaded and avoids unnecessary details, though the original Korean text might require translation for some agents, slightly affecting accessibility.

Completeness: 4/5

Given the tool has an output schema (which handles return values), the description provides adequate context: it states the purpose, lists what's returned, and explains the single parameter. For a simple retrieval tool with one parameter and output schema support, this is reasonably complete, though it could benefit from more behavioral details.

Parameters: 3/5

Schema description coverage is 0%, so the description must compensate. It includes an 'Args' section that explains skill_id as the skill ID, which adds basic meaning beyond the schema's 'Skill Id' title. However, it doesn't elaborate on format, constraints, or examples, leaving gaps in parameter understanding.

Purpose: 4/5

The description clearly states the tool's purpose: it retrieves the complete schema an agent needs to invoke a skill. It specifies the verb (retrieve/query) and resource (skill schema), though it doesn't explicitly differentiate from siblings like 'get_skill' or 'download_skill', which might retrieve different aspects of skills.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools (e.g., 'get_skill' for general skill info or 'download_skill' for downloading skill files) or specify contexts where schema retrieval is preferred over other operations.

list_categories (Grade B)

Returns the full list of categories in the AI Skill Store.

Returns:
    category list string

Parameters (JSON Schema):
    No parameters

Output Schema:
    result (required)
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns a category list but doesn't describe format details (structure, ordering, pagination), error conditions, authentication requirements, rate limits, or whether this is a read-only operation. The return format mention ('category list string') is minimal and doesn't provide meaningful behavioral context.

Conciseness: 4/5

The description is appropriately concise with two brief sentences that directly address the tool's function and return value. There's no unnecessary information, though the structure could be slightly improved by combining the two sentences or providing slightly more context about what 'AI Skill Store' refers to.

Completeness: 3/5

Given the tool's simplicity (zero parameters, has output schema), the description is minimally adequate. The output schema's existence means the description doesn't need to detail return values, but for a tool with no annotations, it should provide more behavioral context about how the list is structured, whether it's cached, or any limitations. The description meets basic requirements but leaves gaps.

Parameters: 4/5

The tool has zero parameters, and the input schema has 100% description coverage (though empty). The description appropriately doesn't discuss parameters since none exist. A baseline of 4 is appropriate for zero-parameter tools where the schema fully documents the absence of inputs.

Purpose: 4/5

The description clearly states the tool's purpose: it returns the complete list of categories from the AI Skill Store. This specifies the verb (returns) and resource (category list) with context about the source (AI Skill Store). However, it doesn't explicitly differentiate from sibling tools like 'list_platforms' or 'search_skills', which prevents a perfect score.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. There's no mention of when this tool is appropriate, what prerequisites might exist, or how it differs from other listing/search tools like 'list_platforms' or 'search_skills'. The agent must infer usage context from the tool name alone.

list_platforms (Grade B)

Returns the list of platforms supported by the AI Skill Store.

Returns:
    platform list string

Parameters (JSON Schema):
    No parameters

Output Schema:
    result (required)
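Both list tools take no arguments, so the MCP tools/call payload reduces to a tool name and an empty arguments object. A minimal sketch; list_call is a hypothetical helper:

```python
def list_call(tool_name):
    # No-parameter tools still send an (empty) arguments object in tools/call.
    return {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": {}},
    }

platforms_req = list_call("list_platforms")
categories_req = list_call("list_categories")
```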
Behavior: 2/5

No annotations are provided, so the description carries the full burden for behavioral disclosure. The description only states what the tool returns, without mentioning whether this is a read-only operation, if it requires authentication, rate limits, error conditions, or other behavioral traits. 'Returns a list' implies a read operation, but no safety or operational context is provided.

Conciseness: 4/5

The description is appropriately concise with two sentences that directly state the tool's purpose and return value. Both sentences earn their place by providing essential information. The bilingual presentation (Korean then English) is slightly redundant but not wasteful.

Completeness: 3/5

Given that this is a simple read operation with zero parameters and an output schema exists, the description is minimally adequate. However, with no annotations and sibling tools present, it should provide more context about when to use this versus alternatives. The description covers the basic purpose but lacks operational guidance.

Parameters: 4/5

The tool has zero parameters (schema description coverage is 100%), so the baseline is 4. The description appropriately doesn't waste space discussing nonexistent parameters. No additional parameter information is needed or provided.

Purpose: 4/5

The description clearly states the tool's purpose: it returns a list of platforms supported by the AI Skill Store, specifying the verb ('returns') and resource ('platform list'). However, it doesn't explicitly differentiate from sibling tools like 'list_categories' or 'search_skills', which prevents a perfect score.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. There's no mention of when this tool is appropriate, what prerequisites might exist, or how it differs from sibling tools like 'list_categories' or 'search_skills'. The agent must infer usage context from the tool name alone.

register_developer (Grade A)

Registers an AI Skill Store developer account.
An API key is issued after email verification (for security, it is not issued immediately).

Args:
    username: desired username (letters/digits, at least 3 characters, must be unique)
    email: email address for verification (required - a verification link is sent)

Returns:
    registration result message. The API key can be retrieved after email verification.

Parameters (JSON Schema):
    email (required)
    username (required)

Output Schema:
    result (required)
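The documented username constraint (letters/digits, at least 3 characters) can be pre-checked client-side before calling register_developer; uniqueness can only be verified server-side. A sketch of that pre-check; registration_args is a hypothetical helper:

```python
import re

# Mirrors the documented constraint: alphanumeric, at least 3 characters.
USERNAME_RE = re.compile(r"[A-Za-z0-9]{3,}")

def registration_args(username, email):
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("username must be alphanumeric and at least 3 characters")
    if "@" not in email:
        raise ValueError("a deliverable email is required for the verification link")
    return {"username": username, "email": email}

args = registration_args("devkim", "dev@example.com")
```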
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: the email verification requirement, that API keys are not issued immediately (security consideration), and the overall registration flow. It doesn't mention rate limits, authentication requirements beyond email verification, or error conditions, but provides substantial behavioral context for a registration tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, behavioral notes, Args, Returns) and efficiently conveys necessary information. The Korean text is concise, though the translation maintains clarity. Every sentence serves a purpose, though the formatting could be slightly more front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (registration with verification flow), no annotations, and the presence of an output schema, the description provides good coverage. It explains the registration process, parameter meanings, and return expectations. The output schema existence means the description doesn't need to detail return structure, and it appropriately focuses on the registration workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing detailed semantic information for both parameters: username requirements (alphanumeric, minimum 3 characters, no duplicates) and email purpose (verification, required, verification link will be sent). This adds significant value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
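The constraints the review cites (alphanumeric username, minimum 3 characters, email used for a verification link) can be mirrored client-side before calling the tool. A minimal sketch — the helper name `build_registration_args` and the exact regex rules are assumptions drawn from the review text, not from the server's schema:

```python
import re

# Client-side validation mirroring the documented registration constraints:
# username must be alphanumeric and at least 3 characters; email must look
# like an address that can receive a verification link.
USERNAME_RE = re.compile(r"^[A-Za-z0-9]{3,}$")
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def build_registration_args(username: str, email: str) -> dict:
    """Return an arguments dict for the tool call, or raise ValueError."""
    if not USERNAME_RE.match(username):
        raise ValueError("username must be alphanumeric, min 3 characters")
    if not EMAIL_RE.match(email):
        raise ValueError("email must be a valid address")
    return {"username": username, "email": email}
```

Validating locally avoids a round trip that the server would reject anyway, since the description notes duplicates and short usernames are refused.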

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('등록합니다' - registers) and resource ('AI Skill Store 개발자 계정' - AI Skill Store developer account). It distinguishes itself from sibling tools like check_vetting_status or upload_skill by focusing specifically on account registration rather than skill management or status checking.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies this should be used when a developer needs to register for API access, but provides no explicit guidance on when to use this versus alternatives like checking existing status or what prerequisites might be needed. It mentions email verification is required, which provides some context but doesn't specify when this tool is appropriate versus other account-related operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_skills (Grade: A)
Searches for skills in the AI Skill Store.
If capability or platform is specified, agent-optimized search (sorted by popularity) is used.

Args:
    query: Search keyword (skill name or description). Leave empty for the full list.
    capability: Search by capability tag (e.g., web_search, text_summarization, code_generation)
    platform: Only skills compatible with a specific platform (OpenClaw, ClaudeCode, ClaudeCodeAgentSkill, Cursor, GeminiCLI, CodexCLI)
    min_trust: Minimum trust level (verified > community > sandbox)
    category: Category filter (applied only when agent-optimized search is not used)
    sort: Sort order (only when agent-optimized search is not used: newest | downloads | rating)
    limit: Number of results (default 20, max 50)

Returns:
    Skill list as a string
Parameters (JSON Schema)

Name        Required  Description  Default
sort        No        -            newest
limit       No        -            -
query       No        -            -
category    No        -            -
platform    No        -            -
min_trust   No        -            -
capability  No        -            -

Output Schema (JSON Schema)

Name    Required  Description
result  Yes       -
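The documented parameter semantics — category and sort are ignored when capability or platform triggers agent-optimized search, and limit defaults to 20 with a cap of 50 — can be encoded in a small payload builder. A sketch under those assumptions; `build_search_args` is a hypothetical client-side helper, not part of the server:

```python
def build_search_args(query="", capability=None, platform=None,
                      min_trust=None, category=None, sort="newest",
                      limit=20):
    """Build a search_skills argument dict per the documented semantics.

    When capability or platform is set, agent-optimized search applies
    and category/sort are ignored, so they are dropped from the payload.
    limit defaults to 20 and is capped at the documented maximum of 50.
    """
    agent_optimized = capability is not None or platform is not None
    args = {"query": query, "limit": max(1, min(limit, 50))}
    if capability:
        args["capability"] = capability
    if platform:
        args["platform"] = platform
    if min_trust:
        args["min_trust"] = min_trust
    if not agent_optimized:
        args["category"] = category
        args["sort"] = sort
    # Drop unset optional fields rather than sending nulls.
    return {k: v for k, v in args.items() if v is not None}
```

For example, `build_search_args(capability="web_search", limit=100)` clamps limit to 50 and omits sort/category, matching the conditional behavior the description spells out.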
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses behavioral traits like agent-optimized search triggering, popularity-based sorting for optimized searches, and default/limit values. However, it doesn't mention rate limits, authentication requirements, or what happens with invalid parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a purpose statement, a behavioral note, and organized parameter documentation. While comprehensive, some sentences could be more concise (e.g., the Returns section is redundant with the output schema).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters with no schema descriptions and no annotations, the description provides excellent parameter coverage and behavioral context. With an output schema present, the Returns section is redundant but doesn't harm completeness. Minor gaps include lack of error handling or authentication information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing detailed parameter semantics: query searches name/description, capability/platform trigger optimized search, min_trust has hierarchy, category/sort have usage conditions, and limit has default/maximum values. This adds substantial meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for skills in the AI Skill Store, specifying it's a search operation with optimization when capability or platform are specified. It distinguishes from siblings like get_skill (single skill retrieval) and list_categories/platforms (metadata listing).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when agent-optimized search is triggered (when capability or platform are specified) and mentions alternative sorting behavior when these aren't used. However, it doesn't explicitly state when to use this tool versus siblings like get_skill or list_categories.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

upload_skill (Grade: B)
Uploads a skill file to the AI Skill Store. Requires an API key.
The owner is verified automatically on the server via the API key.

Args:
    file_path: Absolute path to the .skill file to upload
    api_key: Developer API key

Returns:
    Upload result message
Parameters (JSON Schema)

Name       Required  Description  Default
api_key    Yes       -            -
file_path  Yes       -            -

Output Schema (JSON Schema)

Name    Required  Description
result  Yes       -
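The contract above (absolute path to a .skill file, required API key, ownership resolved server-side) suggests a preflight check before invoking the tool. A minimal sketch, assuming those are the only client-enforceable rules; `build_upload_args` is a hypothetical helper:

```python
from pathlib import Path

def build_upload_args(file_path: str, api_key: str) -> dict:
    """Preflight checks mirroring the documented upload_skill contract.

    file_path must be an absolute path to a .skill file, and api_key
    must be present (ownership is verified server-side from the key).
    """
    p = Path(file_path)
    if not p.is_absolute():
        raise ValueError("file_path must be an absolute path")
    if p.suffix != ".skill":
        raise ValueError("file_path must point to a .skill file")
    if not api_key:
        raise ValueError("api_key is required")
    return {"file_path": str(p), "api_key": api_key}
```

Failing fast on a relative path or wrong extension spares a server round trip, which matters here since the description discloses no error-handling behavior for invalid input.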
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that an API key is required and that ownership is verified automatically, which adds some context. However, it lacks details on critical behaviors like error handling, rate limits, authentication specifics, or what happens on success/failure, which is insufficient for a mutation tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the main purpose stated first, followed by key requirements and parameter details. It uses a structured format with 'Args:' and 'Returns:' sections, making it efficient, though the Korean text might add minor complexity for non-Korean agents.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a mutation with 2 parameters, no annotations, but an output schema exists), the description is moderately complete. It covers the purpose, parameters, and return message, but lacks behavioral details like error cases or side effects. The output schema helps, but more context is needed for safe and effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful semantics beyond the input schema: it specifies that 'file_path' is an absolute path to a .skill file and 'api_key' is a developer API key. Since schema description coverage is 0%, this compensates well by clarifying parameter purposes, though it doesn't cover all potential nuances (e.g., file format restrictions).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('upload') and resource ('skill file to AI Skill Store'), making the purpose understandable. However, it doesn't explicitly differentiate this tool from sibling tools like 'download_skill' or 'register_developer' beyond the obvious upload/download distinction, which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool (to upload skill files) and mentions that an API key is required, providing some contextual guidance. However, it lacks explicit guidance on when not to use it or alternatives (e.g., vs. 'register_developer' for developer registration), leaving usage somewhat inferred rather than clearly defined.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
