The AgentSkills MCP server brings Anthropic's Agent Skills to any MCP-compatible agent through a Progressive Disclosure architecture that efficiently manages context windows by loading capabilities only when needed.
Core Capabilities:
Discover Available Skills - Load metadata (names and descriptions) for all skills in your configured directory at startup to understand available capabilities
Load Skill Instructions - Dynamically access full instructions from SKILL.md files for specific skills when triggered
Access Reference Files - Read supporting documentation and reference files (e.g., forms.md, reference.md, ooxml.md) from within skills for detailed guidance
Execute Shell Commands - Run scripts and tools via shell commands, executed within the skill's directory context
Multiple Transport Modes - Communicate via stdio (local), SSE (Server-Sent Events), or HTTP (RESTful) protocols for flexible deployment
Key Benefits:
Universal compatibility with any MCP-compatible agent, not just Claude
Automatic detection and parsing of skills from customizable directories
Progressive context loading preserves context window space
Compatible with Anthropic's official Agent Skills format and community-created skills
Streaming deep-research support inspired by LangChain's open_deep_research, providing real-time, comprehensive analysis through HTTP endpoints
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@AgentSkills MCP analyze the latest financial report for Alibaba"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
AgentSkills MCP: Bringing Anthropic's Agent Skills to Any MCP-compatible Agent
📖 Project Overview
Agent Skills is a capability recently introduced by Anthropic. By packaging specialized expertise into modular resources, it lets Claude transform on demand into a “tailored expert” for any scenario. AgentSkills MCP, built on the FlowLLM framework, brings Claude’s Agent Skills to any MCP-compatible agent. It implements the Progressive Disclosure architecture described in Anthropic’s official Agent Skills engineering blog, enabling agents to load skills only when needed and thereby make efficient use of limited context windows.
💡 Why Choose AgentSkills MCP?
✅ Zero-Code Configuration: one-command install (`pip install mcp-agentskills`)
✅ Out-of-the-Box: uses the official Skill format and is fully compatible with Anthropic’s Agent Skills
✅ MCP Support: multiple transports (stdio/SSE/HTTP), works with any MCP-compatible agent
✅ Flexible Skill Path: custom skill directories with automatic detection, parsing, and loading
🔥 Latest Updates
[2025-12] 🎉 Released mcp-agentskills v0.1.1
🚀 Quick Start
Installation
Install AgentSkills MCP with pip:
Or with uv:
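The two install commands referenced above (the package name follows the `pip install mcp-agentskills` command mentioned earlier; the uv variant assumes uv's standard pip-compatible interface):

```shell
# Install from PyPI with pip
pip install mcp-agentskills

# Or with uv's pip-compatible interface
uv pip install mcp-agentskills
```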
Load Skills
1. Create a directory to store Skills.
2. Collect Skills, e.g., by cloning them from open-source GitHub repositories.
3. Add the collected Skills into the directory created in step 1. Each Skill is a folder containing a SKILL.md file.
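The steps above can be sketched in a shell as follows. The directory name `~/agent-skills` and the repository URL are illustrative placeholders, not values prescribed by this project:

```shell
# Step 1: create a directory to hold your Skills
mkdir -p ~/agent-skills

# Step 2: collect Skills, e.g. by cloning an open-source repository
# (illustrative URL; substitute the repository you actually want)
# git clone https://github.com/anthropics/skills.git /tmp/skills

# Step 3: copy each Skill folder (containing a SKILL.md) into the directory
# cp -r /tmp/skills/<skill-name> ~/agent-skills/
```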
Run
- Step 1: Configure Environment Variables
Copy example.env to .env and fill in your API key:
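A hypothetical `.env` sketch follows; the variable names below are placeholders for illustration only, so use the exact names found in `example.env`:

```
# Placeholder variable names - check example.env for the real ones
LLM_API_KEY=sk-...
LLM_BASE_URL=https://api.example.com/v1
```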
- Step 2: Start the Server
Start the AgentSkills MCP server with SSE transport:
The service will be available at: http://0.0.0.0:8001/sse
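A hedged sketch of the launch command. The entrypoint name and flag syntax below are assumptions, not the verified CLI; consult the project documentation for the actual invocation:

```shell
# Hypothetical invocation; adjust entrypoint and flags to the real CLI
mcp-agentskills transport=sse host=0.0.0.0 port=8001
```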
- Step 3: Connect from MCP Client
Add this configuration to your MCP client (Cursor, Gemini Code, Cline, etc.) to connect to the remote SSE server:
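As an example, an SSE entry in a typical MCP client configuration might look like the following. The `mcpServers` key is the convention used by Cursor and similar clients, and the server name `AgentSkills` is arbitrary:

```json
{
  "mcpServers": {
    "AgentSkills": {
      "url": "http://0.0.0.0:8001/sse"
    }
  }
}
```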
You can also use the FastMCP Python client to directly access the server:
One-Command Test
Demo
After starting the AgentSkills MCP server with the SSE transport, you can run the demo:
🔧 MCP Tools
This service provides four tools to support Agent Skills:
load_skill_metadata_op — Loads the names and descriptions of all Skills into the agent context at startup (always called)
load_skill_op — When a specific skill is needed, loads the SKILL.md content by skill name (invoked when triggering the Skill)
read_reference_file_op — Reads specific files from a skill, such as scripts or reference documents (on demand)
run_shell_command_op — Executes shell commands to run executable scripts included in the skill (on demand)
For detailed parameters and usage examples, see the documentation.
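The progressive-disclosure flow behind the first two tools can be sketched in plain Python. This is an illustrative stand-in, not the server's actual implementation: `load_skill_metadata` mirrors what `load_skill_metadata_op` exposes (only names and descriptions at startup), while `load_skill` mirrors `load_skill_op` (the full SKILL.md body, read only when a skill is triggered):

```python
from pathlib import Path


def load_skill_metadata(skills_dir: str) -> dict[str, str]:
    """Return {skill_name: description} from each skill's SKILL.md frontmatter.

    Only this lightweight metadata enters the agent context at startup.
    """
    metadata = {}
    for skill_md in Path(skills_dir).glob("*/SKILL.md"):
        name, description = skill_md.parent.name, ""
        text = skill_md.read_text(encoding="utf-8")
        if text.startswith("---"):  # minimal YAML-frontmatter scan
            for line in text.split("---", 2)[1].splitlines():
                key, _, value = line.partition(":")
                if key.strip() == "name":
                    name = value.strip()
                elif key.strip() == "description":
                    description = value.strip()
        metadata[name] = description
    return metadata


def load_skill(skills_dir: str, skill_name: str) -> str:
    """Load the full SKILL.md only when the skill is actually needed."""
    return (Path(skills_dir) / skill_name / "SKILL.md").read_text(encoding="utf-8")
```

The design point is that the startup call reads only the small frontmatter of every skill, while the (potentially large) instruction body stays out of the context window until a specific skill is requested by name.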
⚙️ Server Configuration Parameters
| Parameter | Description | Example |
| --- | --- | --- |
| | Configuration files to load (comma-separated); see default.yaml for the default | |
| | Transport mode: `stdio`, `sse`, or `http` | |
| | Host address (for sse/http transport only) | |
| | Port number (for sse/http transport only) | |
| | Skills directory (required) | |
For the full set of available options and defaults, refer to default.yaml.
Environment Variables
| Variable Name | Required | Description |
| --- | --- | --- |
| | ✅ Yes | API key for an OpenAI-compatible LLM service |
| | ✅ Yes | Base URL for an OpenAI-compatible LLM service |
🤝 Contributing
We welcome community contributions! To get started:
Install the package in development mode:
Install pre-commit hooks:
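The development-setup steps above typically correspond to commands like the following; these are assumed standard commands, not verified against this repository:

```shell
# Editable install from a local checkout of the repository
pip install -e .

# Install the repository's pre-commit hooks
pre-commit install
```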
Submit a pull request with your changes.
📚 Learn More
⚖️ License
This project is licensed under the Apache License 2.0 — see LICENSE for details.