BytesAgain AI Skills Search
Server Details
Search 60,000+ AI agent skills via MCP. Supports 7 languages (EN/ZH/JA/KO/DE/FR/ES). Free, no auth required.
- Status: Healthy
- Transport: Streamable HTTP
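For clients connecting directly over Streamable HTTP, a server entry typically looks like the sketch below. This is illustrative only: the "bytesagain" key is an arbitrary label, the URL is a placeholder (the listing does not display the endpoint), and exact config keys vary by MCP client.

```json
{
  "mcpServers": {
    "bytesagain": {
      "type": "http",
      "url": "https://<bytesagain-endpoint>/mcp"
    }
  }
}
```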
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 11 of 11 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes (search, detail, ranking, pipeline, community requests, deals), but generate_usecase, get_workflow, and run_pipeline all involve generating task solutions with skill recommendations, which could cause confusion. Descriptions help differentiate, but some overlap remains.
Tool names consistently use snake_case with a verb_noun pattern (e.g., search_skills, get_skill, submit_request); popular_skills is the lone adjective_noun exception, a minor deviation. Overall the pattern is clear and predictable.
11 tools are well-scoped for a skill search and recommendation platform. Each tool serves a clear purpose without redundancy, covering search, details, rankings, generation, community, and even promotional deals.
The tool set covers the core domain thoroughly: multiple search modes, skill details, popularity, scoring, use cases, community requests (list and submit), and workflow generation. The inclusion of an AliExpress deals tool is a slight departure but doesn't create gaps.
Available Tools
14 tools

evaluate_skill (Grade A)
Full skill evaluation lab: static analysis (23 patterns: sensitive file reads, remote exec, reverse shells, obfuscation); Docker sandbox execution (install the skill, run with strace, monitor syscalls, file access, network); AI evaluation (strengths, weaknesses, risks, quality grade, verified capabilities). Returns: safety_score, risk_level, execution_report, evaluation (summary + verified_capabilities + strengths + weaknesses + risks + quality_grade + recommendation).
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | ClawHub skill slug to evaluate. Example: "shell", "invoice-pdf". | |
| test_input | No | Test input to pass to the skill. | "hello world" |
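Over Streamable HTTP, an invocation is a standard MCP JSON-RPC tools/call request. A minimal sketch for this tool, reusing the "invoice-pdf" slug from the example above; omitting test_input should fall back to the documented "hello world" default:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "evaluate_skill",
    "arguments": { "slug": "invoice-pdf", "test_input": "hello world" }
  }
}
```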
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses behavior: static analysis with 23 patterns, Docker sandbox with strace, and AI evaluation. It also lists all return fields, ensuring the agent understands what will happen.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections for evaluation types and return fields. First sentence is an effective summary. Slightly dense but each part adds value, earning a high score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description fully covers inputs, processing phases, and return fields. It provides all necessary context for an agent to understand the tool's full scope.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema descriptions already cover both parameters. The description adds an example for slug and mentions test_input's default, but no significant additional meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a 'Full skill evaluation lab' and enumerates three specific evaluation components (static analysis, Docker sandbox, AI evaluation), which distinguishes it from siblings like scan_skill.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus siblings (e.g., scan_skill or get_skill). The description implies comprehensiveness but does not provide comparative context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_usecase (Grade A)
Generate a use case for a given topic or goal. The process: 1) search 60,000+ AI skills by keyword, 2) AI-score top results for relevance, 3) select best 5 skills for the task, 4) generate structured use case with skill recommendations. Use when a user describes a task and wants a curated AI skill stack.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Task or goal in natural language. Example: "automate invoice processing", "write social media content", "analyze customer feedback". | |
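A hedged example call, using one of the documented sample queries:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "generate_usecase",
    "arguments": { "query": "automate invoice processing" }
  }
}
```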
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description reveals the internal pipeline (search, score, select, generate), which is useful behavioral context. No annotations exist, so description carries the burden and does well, though it does not mention side effects or limitations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise paragraph with numbered steps and a usage guideline. Every sentence adds value, with no redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has one parameter and no output schema; description explains the process and usage well. Missing output format details but still adequate for the complexity.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description adds examples for the query parameter, exceeding the baseline. The process description also clarifies how the query is used.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool generates a use case for a given topic, with a detailed multi-step process that distinguishes it from siblings like search_skills or search_use_cases.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use when a user describes a task and wants a curated AI skill stack.' Provides clear context but does not explicitly exclude alternatives.
get_deals (Grade A)
Get active AliExpress Summer Refresh Savings coupon codes — up to 60% off, valid May 10-15. Returns region-specific coupon codes with discount thresholds and validity dates. Supported regions: global codes, US, Brazil, Korea, France, Spain, Germany. Each region has its own set of coupon codes. Use when the user asks about AliExpress discounts, coupon codes, or promotions.
| Name | Required | Description | Default |
|---|---|---|---|
| region | No | Region filter: "global", "us", "br", "kr", "fr", "es", "de", or "all". | "all" |
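For example, requesting only the Korea codes; the region value comes straight from the schema above, and the response shape is not published, so expect the region-specific codes described in the text:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_deals",
    "arguments": { "region": "kr" }
  }
}
```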
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses that the tool returns region-specific coupon codes with thresholds and validity dates and lists supported regions, but omits details like whether the data is cached, rate limits, or error handling. This is adequate but minimal.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: two sentences plus a usage line. It front-loads the main purpose and region details without any unnecessary words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one optional parameter, the description provides enough context: what it does, what it returns (coupon codes with thresholds and dates), supported regions, and when to use. No output schema is needed as the return type is sufficiently described.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers 100% of parameters with a clear description for 'region'. The description adds value by listing the region options explicitly and stating the default value 'all', enhancing the schema's description.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get active AliExpress Summer Refresh Savings coupon codes', with a specific verb and resource. It distinguishes itself from sibling tools (get_skill, get_workflow, etc.), which are unrelated, making the purpose unambiguous.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use when the user asks about AliExpress discounts, coupon codes, or promotions', providing clear context for when to invoke. It does not mention when not to use, but sibling tools are unrelated so no confusion.
get_skill (Grade C)
Fetch full details for one skill by slug. Call AFTER search_skills or popular_skills when a user selects a specific result — do NOT batch-call for every item. Returns: name, description, category, tags, version, author, downloads, stars, install_command, homepage_url, repo_url. Error lifecycle: slug not found → {error: "Skill not found"} → fall back to search_skills with related keyword. Never guess slugs; only use slugs from prior tool results.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Exact slug string from a prior search_skills or popular_skills result. Format: lowercase, hyphen-separated (e.g. "chart-generator"). Never guess or modify slugs. | |
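A sketch of the lookup using the schema's example slug. Per the error lifecycle above, a bad slug returns an error object rather than a protocol failure:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_skill",
    "arguments": { "slug": "chart-generator" }
  }
}
```

If the result payload is {"error": "Skill not found"}, the description says to fall back to search_skills with a related keyword rather than retry with a guessed slug.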
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Get full details' implies a read operation, it doesn't specify whether this requires authentication, has rate limits, returns structured data, or handles errors. For a tool with zero annotation coverage, this represents significant gaps in behavioral context.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that communicates the core purpose without unnecessary words. It's appropriately sized for a simple lookup tool and front-loads the essential information effectively.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations, no output schema, and 0% schema description coverage, the description is inadequate. It doesn't explain what 'full details' includes, how to interpret the slug parameter, or what format the response takes. The agent would need to guess about the tool's behavior and output.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions the single parameter ('by slug') but provides no additional semantic context beyond what's implied by the parameter name. With 0% schema description coverage and only one parameter, this meets the baseline expectation but doesn't add meaningful value beyond the minimal schema information.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get full details') and target resource ('for a specific skill by slug'), providing a specific verb+resource combination. However, it doesn't distinguish this tool from its siblings (popular_skills and search_skills), which would require explicit differentiation to earn a 5.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus its siblings (popular_skills and search_skills). It doesn't mention alternatives, exclusions, or contextual prerequisites, leaving the agent with insufficient information to make informed selection decisions.
get_workflow (Grade A)
Return a complete agent-ready workflow for a user goal, including who it is for, common blockers, skill selection standards, recommended steps, tested skill-stack candidates, prompt for the user agent, and upgrade path. Use this when the user asks how to solve a problem or what skill stack their agent should use. Prefer this over raw search when the user arrives with a business/task problem.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | User goal or blocker. Example: "upgrade AI website SEO", "ecommerce product listing agent", "improve my agent workflow". | |
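Example invocation with one of the documented sample goals:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "get_workflow",
    "arguments": { "query": "ecommerce product listing agent" }
  }
}
```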
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description implies a read operation but does not explicitly state it is non-destructive, nor does it mention permissions, rate limits, or other behavioral traits.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is one long sentence but efficiently packs all key elements. It is front-loaded with the main action and omits extraneous details.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a single parameter and no output schema, the description thoroughly lists what the workflow includes. It covers purpose and usage guidelines, though it lacks specifics on return format.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters with descriptions, but the tool description adds valuable examples and context interpreting 'query' as user goal or blocker, going beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with a specific verb 'Return' and resource 'workflow', listing concrete components. It clearly distinguishes from sibling tools focused on skills and use cases.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use this when the user asks how to solve a problem or what skill stack their agent should use' and advises to prefer it over raw search for business/task problems.
install_stack (Grade A)
Return a curated skill stack (bundle) for bulk pre-installation. Each stack groups 5-15 skills for a common use case. Returns: stack name, description, skills with slugs, install commands. Available stacks: developer-starter, content-creator, data-analyst, crypto-trader, devops-engineer, ai-agent-developer, security-auditor, homework-helper, startup-founders, marketing-team.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | Stack name. Available: developer-starter, content-creator, data-analyst, crypto-trader, devops-engineer, ai-agent-developer, security-auditor, homework-helper, startup-founders, marketing-team. | "developer-starter" |
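A sketch requesting a specific bundle; calling with empty arguments should return the "developer-starter" default per the schema:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "install_stack",
    "arguments": { "name": "data-analyst" }
  }
}
```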
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the tool as returning information without side effects, but does not explicitly declare read-only or non-destructive behavior. The return content is listed, which adds some transparency.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, consisting of two sentences and a list. It front-loads the purpose and is free of unnecessary details.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one optional parameter, no output schema), the description fully covers the purpose, return content, and available options. It is complete for an AI agent to understand and use correctly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds meaning beyond the schema by explaining that stacks group 5-15 skills for common use cases, providing context for the parameter's default and available options.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Return a curated skill stack (bundle) for bulk pre-installation' with specific verb and resource. It distinguishes from sibling tools by focusing on pre-defined stacks rather than individual skills or other operations.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for bulk pre-installation but does not explicitly state when to use this tool versus alternatives like search_skills or get_skill. No exclusion or when-not instructions are provided.
list_requests (Grade A)
Get recent skill requests from the BytesAgain community wall, newest first. Returns id, title, request text, platform, budget, nickname, view_count, and created_at. Contact info is excluded for privacy. Optionally filter by keyword in title or request text.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of requests to return. Max: 50. | 20 |
| query | No | Optional keyword to filter requests by title or content. | |
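Example of a filtered listing; the "pdf" keyword is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "list_requests",
    "arguments": { "limit": 10, "query": "pdf" }
  }
}
```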
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries full burden. It discloses that contact info is excluded for privacy and describes returned fields. However, it does not specify behavioral details such as time range for 'recent', pagination, authentication requirements, or rate limits. The transparency is adequate but incomplete.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences, front-loading the core purpose. Every sentence adds value: first introduces tool, second details output fields and filtering. No redundancy. Could be slightly more structured (e.g., listing fields in a way that matches output), but overall efficient.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description compensates by listing fields. It mentions filtering but lacks guidance on ordering (only 'newest first', not configurable). No comparative usage advice versus siblings or details on limits beyond schema. Completeness is sufficient but not thorough for seamless context.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds context that the 'query' parameter filters by title or content, which aligns with schema. It also lists return fields, helping agents understand the data structure, but does not substantially enhance parameter semantics beyond schema explanation.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get recent skill requests'), the resource ('BytesAgain community wall'), and the ordering ('newest first'). It lists returned fields, which adds specificity. The tool name 'list_requests' is differentiated from siblings like 'search_skills' and 'popular_skills' by focusing on requests rather than skills, making selection unambiguous.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for fetching recent requests with optional keyword filtering but does not explicitly state when to use this tool versus alternatives like 'search_use_cases' or 'search_skills'. No exclusion criteria or context for selection are provided, so guidance is minimal.
popular_skills (Grade C)
Return top N AI agent skills ranked by download count. Use for discovery or onboarding when user has no specific task in mind (e.g. "show me popular skills", "what can I do with this"). Do NOT use when user describes a specific task — use search_skills instead. Returns: slug, name, description, category, downloads, stars. On database error returns empty list — do not retry. Default limit 20, max 50. Follow up with get_skill only if user requests details on a specific result.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | How many top skills to return. Max: 50. Use 5-10 for quick recommendations, 20-50 for browsing. | 20 |
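A quick-recommendation call following the schema's own sizing advice:

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "popular_skills",
    "arguments": { "limit": 5 }
  }
}
```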
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It implies a read-only operation ('Get'), but lacks details on permissions, rate limits, pagination, or what the return format looks like. This is a significant gap for a tool with zero annotation coverage.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, making it easy to parse quickly.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete. It doesn't explain behavioral traits like safety or return values, and while the schema covers parameters well, the overall context for agent usage is insufficient, especially compared to sibling tools.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'limit' parameter fully documented. The description doesn't add any parameter details beyond what the schema provides, so it meets the baseline score of 3 for high schema coverage without extra value.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('top skills by download count'), making it immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_skill' or 'search_skills', which prevents a perfect score.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus its siblings ('get_skill' and 'search_skills'), nor does it mention any prerequisites or exclusions. It simply states what the tool does without contextual usage advice.
run_pipeline (Grade A)
Full automated content pipeline. Given a topic, it:
1) Discovers relevant skills from the 60,000+ database (search + rank)
2) Scores each on 6 dimensions (downloads, stars, category relevance, description quality, source diversity, name match)
3) Uses AI to select the best 5-8 skills that genuinely fit the topic
4) Generates a structured use case (title, description, skill stack with reasons)
5) Writes a full 800+ word article in markdown
6) Creates 3 tweet drafts promoting the use case
7) Saves the article to the Supabase posts table and the use case to the use_cases table
8) Generates a 1792x1024 cover image and uploads it to Supabase Storage
9) Returns everything in one response
Set publish=true to auto-publish; set publish=false (default) for draft-only.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Topic for the full pipeline. Example: "automate invoice processing", "write code documentation". | |
| publish | No | Auto-publish to the live site (sets status=published). | false (draft) |
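Given the database writes in steps 7-8, the sketch below keeps the documented draft default; set publish to true only when auto-publishing is intended:

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "run_pipeline",
    "arguments": { "query": "write code documentation", "publish": false }
  }
}
```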
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure responsibility. It details all side effects: database saves, image generation, content creation, and return of multiple artifacts. It does not mention error handling or rate limits, but the level of transparency is high.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is detailed with a numbered list of 9 steps, which is helpful but somewhat lengthy. While every step adds information, a more condensed description could convey the core purpose faster. Still, the structure is clear and easy to follow.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity and no output schema, the description mentions that it 'Returns everything in one response' and lists what is created, but it does not specify the response structure or format. It also omits potential errors or prerequisites (e.g., API access). More detail on output would improve completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already covers both parameters with clear descriptions. The description adds value by explaining the default behavior of publish (false for draft) and the action when set to true (auto-publish). This enhances understanding beyond the schema's technical specification.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Full automated content pipeline' and lists 9 specific steps, making the purpose unmistakable. It distinguishes from sibling tools by being a composite workflow that integrates multiple steps (skill discovery, scoring, generation, publishing) in one call.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Given a topic' and explains the publish parameter behavior. It implies when to use this pipeline (for complete content generation) versus individual sibling tools. However, it does not explicitly state when not to use it or recommend alternatives for partial tasks.
scan_skill (Grade A)
Security scanner for AI agent skills. Fetches the skill's script, runs static analysis checking 30+ dangerous patterns: sensitive file reads (.env, .ssh, $HOME), remote code execution (curl|bash, base64 decode + exec), obfuscation signals, reverse shells, credential leaks, affiliate link abuse. Uses pattern matching for fast results and optional DeepSeek AI for deeper review. Returns safety score 0-100, flagged violations, and recommendations.
| Name | Required | Description | Default |
|---|---|---|---|
| deep | No | Run DeepSeek AI analysis on the script content for deeper inspection. | true |
| slug | Yes | ClawHub skill slug to scan. Example: "shell", "task-planner". | |
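Example of a fast, pattern-matching-only scan; deep defaults to true, so it is disabled explicitly here to skip the DeepSeek pass:

```json
{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "tools/call",
  "params": {
    "name": "scan_skill",
    "arguments": { "slug": "shell", "deep": false }
  }
}
```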
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses key behaviors: fetches script, runs static analysis, optional DeepSeek AI, returns safety score and violations. Does not mention side effects, but tool is read-only.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with purpose, then details. Every sentence adds value; no fluff or repetition.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low parameter count, high schema coverage, and no output schema, the description fully explains what the tool does and its return values (safety score, violations, recommendations). Complete for a scanning tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters described. Description merely repeats schema info for 'deep' parameter, adding no new meaning. Baseline score of 3 applies.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it's a security scanner for AI agent skills, with specific actions: fetches script, runs static analysis on 30+ patterns. Distinguishes from siblings like evaluate_skill or score_skills by focusing on security.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for security scanning of skills, but does not explicitly state when not to use it or mention alternatives like evaluate_skill for general evaluation. Good context but lacks exclusions.
score_skills (Grade A)
Six-dimension skill scoring engine. Given a topic, searches skills and scores each on: downloads (25pts), stars (15pts), category relevance to topic (20pts, AI-evaluated), description quality (15pts), source diversity (15pts), name match (10pts). Use when you want to see how well skills rank for a task. Returns scored list sorted by total.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of skills to return scored. Max: 50. | 20 |
| query | Yes | Topic or task to score skills against. Example: "email automation", "data analysis". | |
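Example ranking call using a documented sample topic:

```json
{
  "jsonrpc": "2.0",
  "id": 11,
  "method": "tools/call",
  "params": {
    "name": "score_skills",
    "arguments": { "query": "email automation", "limit": 10 }
  }
}
```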
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses the six dimensions and that one is AI-evaluated, but does not mention how skills are searched, rate limits, or if results are cached. Moderate transparency.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, concise and front-loaded with purpose, then detailing scoring criteria and return format. No redundant information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description explains the scoring dimensions and general return format. However, it does not specify the exact output structure (e.g., property names) or behavior when no skills are found. Lacks complete detail for a moderately complex tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with clear descriptions for both parameters (query and limit). Description does not add significant parameter-level detail beyond schema, so baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a scoring engine for skills on six dimensions, given a topic. It distinguishes from siblings like search_skills (likely without scoring) and popular_skills (likely just popularity).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use when you want to see how well skills rank for a task', which is clear guidance. Does not mention alternatives or when not to use, but the context of siblings provides that.
search_skills (Grade A)
Search 60,000+ AI agent skills from the BytesAgain platform. 3 main directions: Skill Search (60,000+ skills, 7 languages), Use Cases (1,000+ real-world AI workflows), Request Wall (community skill requests). Supports 7 languages: English, Chinese (中文), Japanese (日本語), Korean (한국어), German, French, and Spanish. Returns skills with slug, name, description, category, tags, downloads, stars, source, and source_url. Results are ranked by relevance (full-text score), then by download count. Use when the user wants to find or discover skills for a specific task or topic. Example queries: "email automation", "邮件自动化", "data analysis", "메일 자동화".
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results. Max: 50. | 10 |
| query | No | Search keyword in any supported language. Example: "data analysis" or "数据分析". | |
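Since queries may be in any supported language, a sketch using the Chinese sample from the schema:

```json
{
  "jsonrpc": "2.0",
  "id": 12,
  "method": "tools/call",
  "params": {
    "name": "search_skills",
    "arguments": { "query": "数据分析", "limit": 10 }
  }
}
```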
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the scale ('60,000+ AI agent skills') and language support, which adds useful context beyond basic functionality. However, it lacks details on behavioral traits such as rate limits, authentication needs, pagination, or response format, leaving gaps for a search tool.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core functionality ('Search 60,000+ AI agent skills by keyword') and adds essential context (language support). Every word earns its place with no redundancy or waste.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search with parameters), no annotations, and no output schema, the description is incomplete. It covers purpose and scope but lacks details on behavioral traits, output format, or error handling. It is adequate as a minimum viable description but has clear gaps for effective agent use.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the two parameters ('limit' and 'query'). The description does not add any parameter-specific details beyond what the schema provides, such as search syntax or language-specific query handling. Baseline 3 is appropriate when the schema does the heavy lifting.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search') and resource ('60,000+ AI agent skills'), distinguishing it from sibling tools like 'get_skill' (likely retrieves a specific skill) and 'popular_skills' (likely lists trending skills). It specifies the scope (keyword-based search) and supported languages, making the purpose explicit and differentiated.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for keyword-based searches across multiple languages, but does not explicitly state when to use this tool versus alternatives like 'get_skill' or 'popular_skills'. It provides context (searching by keyword) but lacks explicit guidance on exclusions or comparisons to siblings.
search_use_cases (Grade A)
Search 1,000+ AI agent use-cases by task or goal description. Use-cases describe real-world workflows like "write a weekly report", "automate email replies", or "analyze sales data". Each use-case links to a dedicated page listing the best AI skills for that task. Use this tool when: (1) user describes a goal or workflow rather than a tool name, (2) user asks "how do I use AI for X", (3) you want to show what tasks AI can help with. Returns use-case slug, title, description, and page URL. Combine with search_skills to find specific tools for each use-case.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of use-cases to return. Max: 30. | 10 |
| query | Yes | Task or goal in natural language. Example: "write job descriptions", "automate social media", "analyze financial data". | |
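Example goal-oriented lookup, which can then be chained with search_skills as the description suggests:

```json
{
  "jsonrpc": "2.0",
  "id": 13,
  "method": "tools/call",
  "params": {
    "name": "search_use_cases",
    "arguments": { "query": "automate social media", "limit": 5 }
  }
}
```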
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the return format (use-case slug, title, description, and page URL) and mentions the dataset size (1,000+ use-cases), which adds useful context. However, it lacks details on authentication needs, rate limits, or error handling, leaving some behavioral gaps for a search tool.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and concise, with three sentences that each serve a clear purpose: stating the tool's function, providing usage guidelines, and explaining the return value and integration with other tools. There is no wasted text, and key information is front-loaded effectively.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search functionality with two parameters) and no output schema, the description is mostly complete. It explains the purpose, usage, return format, and integration with sibling tools. However, without annotations or output schema, it could benefit from more behavioral details like pagination or error cases, slightly reducing completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('query' and 'limit') with descriptions and examples. The description does not add any additional parameter semantics beyond what the schema provides, such as query formatting tips or limit implications, resulting in a baseline score of 3.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: searching through 1,000+ AI agent use-cases by task or goal description. It specifies the resource (use-cases), the action (search), and distinguishes it from sibling tools by mentioning it's for finding workflows rather than specific skills, with examples like 'write a weekly report' or 'automate email replies'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: (1) when users describe a goal or workflow, (2) when users ask 'how do I use AI for X', and (3) when wanting to show what tasks AI can help with. It also mentions combining with 'search_skills' as an alternative for finding specific tools, clearly differentiating usage from sibling tools.
submit_request (Grade A)
Submit a new skill request to the BytesAgain community wall. Use when a user asks to publish a request for an AI skill they need. Creates a public entry on the requests wall. Sends notification to site admin. Input: title (one-line summary), request (10-800 chars), platform (optional), budget (optional), contact (required — email/TG for follow-up, kept private), nickname (optional display name).
| Name | Required | Description | Default |
|---|---|---|---|
| title | No | One-line summary of the requested skill. | |
| budget | No | Budget for the request, e.g. "$50" or "议价" (negotiable). | |
| contact | Yes | Contact info (email/TG). Kept private, not shown publicly. | |
| request | Yes | Detailed description of the skill needed: features, use case, and requirements. 10-800 characters. | |
| nickname | No | Display name shown publicly on the wall. | |
| platform | No | Target AI platform: OpenClaw, Claude Desktop, Cursor, Codex CLI, Copilot, Gemini CLI, or Other. | |
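A sketch of a complete submission; every value is illustrative, and note this call has side effects (it creates a public wall entry and notifies the site admin):

```json
{
  "jsonrpc": "2.0",
  "id": 14,
  "method": "tools/call",
  "params": {
    "name": "submit_request",
    "arguments": {
      "title": "Invoice OCR skill",
      "request": "Need a skill that extracts line items from PDF invoices and exports CSV.",
      "platform": "Claude Desktop",
      "budget": "$50",
      "contact": "user@example.com",
      "nickname": "acme-dev"
    }
  }
}
```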
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description bears full burden. It discloses creation of public entry, admin notification, and that contact info is kept private. Could mention if any side effects or rate limits exist, but core behaviors are covered.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences plus a succinct list of inputs. Every sentence serves a purpose, no waste. Front-loaded with the action.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 6 parameters, no output schema, and no annotations, the description covers purpose, usage, parameter details, and key behaviors (privacy, notification). Could elaborate on return value, but overall sufficient.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description restates parameter roles (e.g., 'title (one-line summary)') but adds no new meaning beyond what the schema already provides. No improvement over structured data.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Submit a new skill request to the BytesAgain community wall' with specific verb and resource. It further explains the action creates a public entry and notifies admin, distinguishing it from sibling tools like list_requests or search_skills.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage context: 'Use when a user asks to publish a request for an AI skill they need.' Does not exclude scenarios or name alternatives, but it's sufficiently clear for an agent.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server successfully. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.