# MCPMake

An MCP (Model Context Protocol) server for managing and running Python scripts with LLM-extracted schemas - like `make`, but smarter.

## Features

- **Automatic Schema Extraction**: Uses LLMs (Claude Sonnet 4 or GPT-4.1) to analyze Python scripts and extract argument schemas
- **Script Registry**: Store and manage multiple scripts with metadata
- **Input Validation**: Validates arguments against a JSON Schema before execution
- **Execution History**: Tracks all script runs with full output logs
- **Environment Variables**: Pass custom env vars per execution
- **Flexible Execution**: Custom Python interpreters, timeouts, and output truncation
- **Update & Re-analyze**: Refresh script schemas when code changes

## Installation

```bash
# Clone or navigate to the project directory
cd mcpmake

# Install in development mode
pip install -e .
```

## Configuration

### Set up API keys

You'll need an API key for either Anthropic or OpenAI (or both):

```bash
export ANTHROPIC_API_KEY="your-key-here"
# or
export OPENAI_API_KEY="your-key-here"
```

### Add to MCP settings

Add the server to your MCP client configuration (e.g., Claude Desktop):

```json
{
  "mcpServers": {
    "mcpmake": {
      "command": "python",
      "args": ["-m", "mcpmake.server"],
      "env": {
        "ANTHROPIC_API_KEY": "your-key-here"
      }
    }
  }
}
```

## Usage

### 1. Register a Script

```python
# Register a Python script with automatic schema extraction
register_script(
    name="data_processor",
    path="/path/to/script.py",
    description="Processes data files",  # optional, auto-generated if omitted
    python_path="/usr/bin/python3",      # optional
    timeout_seconds=240,                 # optional, default 240
    min_lines=1,                         # optional, default 1
    llm_provider="anthropic"             # optional, "anthropic" or "openai"
)
```

### 2. List Scripts

```python
list_scripts()
# Shows all registered scripts with descriptions
```

### 3. Get Script Info

```python
get_script_info(name="data_processor")
# Shows detailed schema, path, recent runs, etc.
```

### 4. Run a Script

```python
run_script(
    name="data_processor",
    args={
        "input_file": "data.csv",
        "output_dir": "/tmp/output",
        "verbose": True
    },
    env_vars={                       # optional
        "API_KEY": "secret123"
    },
    python_path="/usr/bin/python3",  # optional, overrides default
    timeout=300,                     # optional, overrides default
    output_lines=100                 # optional, default 100
)
```

### 5. View Run History

```python
get_run_history(
    name="data_processor",  # optional, shows all scripts if omitted
    limit=10                # optional, default 10
)
```

### 6. Update Script Schema

```python
# Re-analyze the script after code changes
update_script(
    name="data_processor",
    llm_provider="anthropic"  # optional
)
```

### 7. Delete Script

```python
delete_script(name="data_processor")
```

## Data Storage

MCPMake stores data in `~/.mcpmake/`:

```
~/.mcpmake/
├── scripts.json    # Script registry and metadata
├── history.jsonl   # Execution history log
└── outputs/        # Full script outputs
    ├── script1_timestamp.log
    └── script2_timestamp.log
```

## How It Works

1. **Registration**: When you register a script, MCPMake:
   - Reads the script file
   - Sends it to an LLM (Claude Sonnet 4 or GPT-4.1)
   - Extracts a JSON Schema describing the script's arguments
   - Extracts a description from docstrings/comments
   - Stores everything in `scripts.json`
2. **Execution**: When you run a script, MCPMake does the following (sketched below):
   - Validates your arguments against the stored JSON Schema
   - Checks that the script file still exists
   - Builds command-line arguments from your input
   - Runs the script with the specified Python interpreter and env vars
   - Captures stdout/stderr with timeout protection
   - Saves the full output to a log file
   - Returns truncated output (first N lines)
   - Logs execution details to history
3. **History**: All runs are logged with:
   - Timestamp, arguments, and exit code
   - Execution time
   - Full output file path
   - Environment variables used
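To make the execution step concrete, here is a minimal sketch of that flow in plain Python using `jsonschema` and `subprocess`. It is illustrative only: the function name `run_registered_script`, the `entry` dictionary layout, and the flag-building convention are assumptions for this sketch, not MCPMake's actual internals.

```python
# Minimal sketch of the execution flow described above.
# NOTE: the function name, the "entry" layout, and the flag-building rules
# are assumptions for illustration; they are not MCPMake's actual internals.
import os
import subprocess
from pathlib import Path

from jsonschema import ValidationError, validate


def run_registered_script(entry: dict, args: dict, env_vars: dict | None = None,
                          timeout: int = 240, output_lines: int = 100) -> str:
    """Validate args against the stored schema, build a CLI call, and run it."""
    # 1. Validate the caller's arguments against the schema extracted at registration.
    try:
        validate(instance=args, schema=entry["schema"])
    except ValidationError as exc:
        return f"Argument validation failed: {exc.message}"

    # 2. Check that the script file still exists.
    script_path = Path(entry["path"])
    if not script_path.exists():
        return f"Script not found: {script_path}"

    # 3. Build command-line flags from the validated arguments
    #    (snake_case keys become --kebab-case flags; true booleans become bare flags).
    cmd = [entry.get("python_path", "python3"), str(script_path)]
    for key, value in args.items():
        flag = "--" + key.replace("_", "-")
        if isinstance(value, bool):
            if value:
                cmd.append(flag)
        else:
            cmd.extend([flag, str(value)])

    # 4. Run with merged env vars and timeout protection, capturing stdout/stderr.
    env = {**os.environ, **(env_vars or {})}
    try:
        result = subprocess.run(cmd, capture_output=True, text=True,
                                timeout=timeout, env=env)
    except subprocess.TimeoutExpired:
        return f"Script timed out after {timeout} seconds"

    # 5. Return a truncated view of the output; the full output would be
    #    written to a log file under ~/.mcpmake/outputs/.
    output = result.stdout + result.stderr
    return "\n".join(output.splitlines()[:output_lines])
```

Mapping `snake_case` argument names to `--kebab-case` flags here simply mirrors the argparse and click examples below; the real server may translate arguments differently.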
## Example Python Scripts

MCPMake works best with scripts that use:

### argparse

```python
import argparse

parser = argparse.ArgumentParser(description="Process data files")
parser.add_argument("--input-file", required=True, help="Input CSV file")
parser.add_argument("--output-dir", required=True, help="Output directory")
parser.add_argument("--verbose", action="store_true", help="Verbose output")
args = parser.parse_args()
```

### click

```python
import click

@click.command()
@click.option("--input-file", required=True, help="Input CSV file")
@click.option("--output-dir", required=True, help="Output directory")
@click.option("--verbose", is_flag=True, help="Verbose output")
def main(input_file, output_dir, verbose):
    pass
```

### Simple functions

```python
def main(input_file: str, output_dir: str, verbose: bool = False):
    """
    Process data files.

    Args:
        input_file: Path to input CSV file
        output_dir: Output directory path
        verbose: Enable verbose logging
    """
    pass
```

## Requirements

- Python 3.10+
- MCP SDK
- Anthropic SDK (for Claude)
- OpenAI SDK (for GPT)
- jsonschema

## License

MIT
