The AgentSkills MCP server brings Anthropic's Agent Skills to any MCP-compatible agent through a Progressive Disclosure architecture that efficiently manages context windows by loading capabilities only when needed.
Core Capabilities:
Discover Available Skills - Load metadata (names and descriptions) for all skills in your configured directory at startup to understand available capabilities
Load Skill Instructions - Dynamically access full instructions from SKILL.md files for specific skills when triggered
Access Reference Files - Read supporting documentation and reference files (e.g., forms.md, reference.md, ooxml.md) from within skills for detailed guidance
Execute Shell Commands - Run shell commands in a skill's directory to execute its bundled scripts and tools
Multiple Transport Modes - Communicate via stdio (local), SSE (Server-Sent Events), or HTTP (RESTful) protocols for flexible deployment
Key Benefits:
Universal compatibility with any MCP-compatible agent, not just Claude
Automatic detection and parsing of skills from customizable directories
Progressive context loading preserves context window space
Compatible with Anthropic's official Agent Skills format and community-created skills
Supports streaming deep-research capabilities inspired by LangChain's open_deep_research, providing real-time, comprehensive analysis through HTTP endpoints.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type "@" followed by the MCP server name and your instructions, e.g., "@AgentSkills MCP analyze the latest financial report for Alibaba".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
AgentSkills MCP: Bringing Anthropic's Agent Skills to Any MCP-compatible Agent
📖 Project Overview
Agent Skills is a capability recently introduced by Anthropic. By packaging specialized expertise into modular resources, it lets Claude transform on demand into a “tailored expert” for any scenario. AgentSkills MCP, built on the FlowLLM framework, unlocks Claude’s Agent Skills for any MCP-compatible agent. It implements the Progressive Disclosure architecture proposed in Anthropic’s Agent Skills engineering blog, enabling agents to load skills only when needed and thereby make efficient use of limited context windows.
💡 Why Choose AgentSkills MCP?
✅ Zero-Code Configuration: one-command install (pip install mcp-agentskills)
✅ Out-of-the-Box: uses the official Skill format, fully compatible with Anthropic’s Agent Skills
✅ MCP Support: multiple transports (stdio/SSE/HTTP), works with any MCP-compatible agent
✅ Flexible Skill Path: custom skill directories with automatic detection, parsing, and loading
🔥 Latest Updates
[2025-12] 🎉 Released mcp-agentskills v0.1.1
🚀 Quick Start
Installation
Install AgentSkills MCP with pip:
pip install mcp-agentskills
Or with uv:
uv pip install mcp-agentskills
Or install from source:
git clone https://github.com/zouyingcao/agentskills-mcp.git
cd agentskills-mcp
conda create -n agentskills-mcp python==3.10
conda activate agentskills-mcp
pip install -e .
Load Skills
Create a directory to store Skills, like:
mkdir skills
Clone Skills from open-source GitHub repositories, e.g.:
https://github.com/anthropics/skills
https://github.com/ComposioHQ/awesome-claude-skills
Add the collected Skills to the directory created in step 1. Each Skill is a folder containing a SKILL.md file.
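A minimal SKILL.md follows Anthropic's format: YAML frontmatter with a name and description, followed by the full instructions. The skill contents below are illustrative only:

```
---
name: pdf
description: Extract text, tables, and form fields from PDF files
---

# PDF Skill

When the user asks to process a PDF:
1. Read forms.md for guidance on filling form fields.
2. Use the bundled scripts for text extraction.
```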
Run
Add the following configuration to your MCP client to launch the server over stdio:
{
"mcpServers": {
"agentskills-mcp": {
"command": "uvx",
"args": [
"agentskills-mcp",
"config=default",
"mcp.transport=stdio",
"metadata.skill_dir=\"./skills\""
],
"env": {
"FLOW_LLM_API_KEY": "xxx",
"FLOW_LLM_BASE_URL": "https://dashscope.aliyuncs.com/compatible-mode/v1"
}
}
}
}
- Step 1: Configure Environment Variables
Copy example.env to .env and fill in your API key:
cp example.env .env
# Edit the .env file and fill in your API key
- Step 2: Start the Server
Start the AgentSkills MCP server with SSE transport:
agentskills-mcp \
config=default \
mcp.transport=sse \
mcp.host=0.0.0.0 \
mcp.port=8001 \
metadata.skill_dir="./skills"
The service will be available at: http://0.0.0.0:8001/sse
- Step 3: Connect from MCP Client
Add this configuration to your MCP client (Cursor, Gemini Code, Cline, etc.) to connect to the remote SSE server:
{
"mcpServers": {
"agentskills-mcp": {
"type": "sse",
"url": "http://0.0.0.0:8001/sse"
}
}
}
You can also use the FastMCP Python client to access the server directly:
import asyncio
from fastmcp import Client
async def main():
async with Client("http://0.0.0.0:8001/sse") as client:
tools = await client.list_tools()
for tool in tools:
print(tool)
result = await client.call_tool(
name="load_skill",
arguments={
"skill_name": "pdf"
}
)
print(result)
asyncio.run(main())
One-Command Test
python tests/run_project_sse.py <path/to/skills>
or
python tests/run_project_http.py <path/to/skills>
Demo
After starting the AgentSkills MCP server with the SSE transport, you can run the demo:
# Enable Agent Skills for the Qwen model.
# Since Qwen supports function calling, you can implement Agent Skills by passing the MCP tools registered by the AgentSkills MCP service to the tools parameter.
cd tests
python run_skill_agent.py
🔧 MCP Tools
This service provides four tools to support Agent Skills:
load_skill_metadata_op — Loads the names and descriptions of all Skills into the agent context at startup (always called)
load_skill_op — When a specific skill is needed, loads the SKILL.md content by skill name (invoked when triggering the Skill)
read_reference_file_op — Reads specific files from a skill, such as scripts or reference documents (on demand)
run_shell_command_op — Executes shell commands to run executable scripts included in the skill (on demand)
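To illustrate the progressive-disclosure idea behind load_skill_metadata_op, here is a minimal, hypothetical sketch of the startup step: scan a skills directory and collect only the name and description from each SKILL.md's YAML frontmatter, leaving the full instructions out of the context window until a skill is actually triggered. The parsing below handles flat `key: value` frontmatter only and is not the server's actual implementation.

```python
import re
import tempfile
from pathlib import Path

# Matches a YAML frontmatter block at the top of a SKILL.md file.
FRONTMATTER = re.compile(r"^---\s*\n(.*?)\n---\s*\n", re.DOTALL)

def load_skill_metadata(skill_dir):
    """Collect {name: description} from every SKILL.md under skill_dir."""
    metadata = {}
    for skill_md in Path(skill_dir).glob("*/SKILL.md"):
        match = FRONTMATTER.match(skill_md.read_text(encoding="utf-8"))
        if not match:
            continue  # skip skills without valid frontmatter
        fields = {}
        for line in match.group(1).splitlines():
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
        # Fall back to the folder name if no explicit name is given.
        metadata[fields.get("name", skill_md.parent.name)] = fields.get("description", "")
    return metadata

# Demonstrate on a throwaway skills directory with one sample skill.
root = Path(tempfile.mkdtemp())
(root / "pdf").mkdir()
(root / "pdf" / "SKILL.md").write_text(
    "---\nname: pdf\ndescription: Extract text and tables from PDF files\n---\n\n# PDF skill\n..."
)
skills = load_skill_metadata(root)
print(skills)  # {'pdf': 'Extract text and tables from PDF files'}
```

Only this small metadata dictionary needs to sit in the agent's context at startup; the full SKILL.md body is fetched later via load_skill_op.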
For detailed parameters and usage examples, see the documentation.
⚙️ Server Configuration Parameters
Parameter | Description | Example
--- | --- | ---
config | Configuration files to load (comma-separated). Default: default | config=default
mcp.transport | Transport mode: stdio, sse, or http | mcp.transport=sse
mcp.host | Host address (for sse/http transport only) | mcp.host=0.0.0.0
mcp.port | Port number (for sse/http transport only) | mcp.port=8001
metadata.skill_dir | Skills directory (required) | metadata.skill_dir="./skills"
For the full set of available options and defaults, refer to default.yaml.
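By analogy with the SSE example above, the HTTP transport would be started with the same parameters (assuming mcp.transport=http is accepted; check default.yaml for the exact option names):

```
agentskills-mcp \
    config=default \
    mcp.transport=http \
    mcp.host=0.0.0.0 \
    mcp.port=8001 \
    metadata.skill_dir="./skills"
```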
Environment Variables
Variable Name | Required | Description
--- | --- | ---
FLOW_LLM_API_KEY | ✅ Yes | API key for an OpenAI-compatible LLM service
FLOW_LLM_BASE_URL | ✅ Yes | Base URL for an OpenAI-compatible LLM service
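A filled-in .env might look like this (the key below is a placeholder; the base URL is the DashScope-compatible endpoint used in the stdio example above):

```
FLOW_LLM_API_KEY=sk-xxxx
FLOW_LLM_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
```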
🤝 Contributing
We welcome community contributions! To get started:
Install the package in development mode:
pip install -e .
Install pre-commit hooks:
pip install pre-commit
pre-commit run --all-files
Submit a pull request with your changes.
📚 Learn More
⚖️ License
This project is licensed under the Apache License 2.0 — see LICENSE for details.