MCPMake

by shex1627

An MCP (Model Context Protocol) server for managing and running Python scripts with LLM-extracted schemas - like make, but smarter.

Features

  • Automatic Schema Extraction: Uses LLMs (Claude Sonnet 4 or GPT-4.1) to analyze Python scripts and extract argument schemas

  • Script Registry: Store and manage multiple scripts with metadata

  • Input Validation: Validates arguments against JSON Schema before execution

  • Execution History: Tracks all script runs with full output logs

  • Environment Variables: Pass custom env vars per execution

  • Flexible Execution: Custom Python interpreters, timeouts, and output truncation

  • Update & Re-analyze: Refresh script schemas when code changes

Installation

# Clone or navigate to the project directory
cd mcpmake

# Install in development mode
pip install -e .

Configuration

Set up API keys

You'll need an API key for either Anthropic or OpenAI (or both):

export ANTHROPIC_API_KEY="your-key-here"
# or
export OPENAI_API_KEY="your-key-here"

Add to MCP settings

Add the server to your MCP client configuration (e.g., Claude Desktop):

{
  "mcpServers": {
    "mcpmake": {
      "command": "python",
      "args": ["-m", "mcpmake.server"],
      "env": {
        "ANTHROPIC_API_KEY": "your-key-here"
      }
    }
  }
}

Usage

1. Register a Script

# Register a Python script with automatic schema extraction
register_script(
    name="data_processor",
    path="/path/to/script.py",
    description="Processes data files",  # optional, auto-generated if omitted
    python_path="/usr/bin/python3",      # optional
    timeout_seconds=240,                 # optional, default 240
    min_lines=1,                         # optional, default 1
    llm_provider="anthropic"             # optional, "anthropic" or "openai"
)

2. List Scripts

list_scripts() # Shows all registered scripts with descriptions

3. Get Script Info

get_script_info(name="data_processor") # Shows detailed schema, path, recent runs, etc.

4. Run a Script

run_script(
    name="data_processor",
    args={
        "input_file": "data.csv",
        "output_dir": "/tmp/output",
        "verbose": true
    },
    env_vars={                        # optional
        "API_KEY": "secret123"
    },
    python_path="/usr/bin/python3",   # optional, overrides default
    timeout=300,                      # optional, overrides default
    output_lines=100                  # optional, default 100
)

5. View Run History

get_run_history(
    name="data_processor",  # optional, shows all scripts if omitted
    limit=10                # optional, default 10
)

6. Update Script Schema

# Re-analyze script after code changes
update_script(
    name="data_processor",
    llm_provider="anthropic"  # optional
)

7. Delete Script

delete_script(name="data_processor")

Data Storage

MCPMake stores data in ~/.mcpmake/:

~/.mcpmake/
├── scripts.json      # Script registry and metadata
├── history.jsonl     # Execution history log
└── outputs/          # Full script outputs
    ├── script1_timestamp.log
    └── script2_timestamp.log
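
Because history.jsonl is a JSON Lines file, recent runs can also be inspected outside the server with a short Python snippet. The record fields used below (timestamp, name, exit_code) are assumptions about the log layout, not a documented format; check your own file for the actual keys.

# Print the last 10 run records from the history log.
# Field names are assumed for illustration; inspect your own history.jsonl.
import json
from pathlib import Path

history_path = Path.home() / ".mcpmake" / "history.jsonl"
for line in history_path.read_text().splitlines()[-10:]:
    record = json.loads(line)
    print(record.get("timestamp"), record.get("name"), record.get("exit_code"))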

How It Works

  1. Registration: When you register a script, MCPMake:

    • Reads the script file

    • Sends it to an LLM (Claude Sonnet 4 or GPT-4.1)

    • Extracts a JSON Schema describing the script's arguments

    • Extracts a description from docstrings/comments

    • Stores everything in scripts.json

  2. Execution: When you run a script, MCPMake (see the sketch after this list):

    • Validates your arguments against the stored JSON Schema

    • Checks if the script file still exists

    • Builds command-line arguments from your input

    • Runs the script with specified Python interpreter and env vars

    • Captures stdout/stderr with timeout protection

    • Saves full output to a log file

    • Returns truncated output (first N lines)

    • Logs execution details to history

  3. History: All runs are logged with:

    • Timestamp, arguments, exit code

    • Execution time

    • Full output file path

    • Environment variables used
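
The execution steps above map onto a small amount of Python. The sketch below is only an illustration, assuming jsonschema for validation and subprocess for running the script; the function names and the registry entry fields (schema, path, python_path) are hypothetical and do not mirror MCPMake's actual code.

# Illustrative sketch only -- the names and registry layout here are
# hypothetical, not MCPMake's actual implementation.
import os
import subprocess
from jsonschema import ValidationError, validate

def build_cli_args(args: dict) -> list[str]:
    # {"input_file": "data.csv", "verbose": True} -> ["--input-file", "data.csv", "--verbose"]
    cli = []
    for key, value in args.items():
        flag = "--" + key.replace("_", "-")
        if isinstance(value, bool):
            if value:
                cli.append(flag)
        else:
            cli.extend([flag, str(value)])
    return cli

def run_registered_script(entry: dict, args: dict, env_vars: dict | None = None,
                          timeout: int = 240, output_lines: int = 100) -> str:
    # Validate arguments against the stored JSON Schema
    try:
        validate(instance=args, schema=entry["schema"])
    except ValidationError as exc:
        return f"Argument validation failed: {exc.message}"

    # Check that the script file still exists
    if not os.path.exists(entry["path"]):
        return f"Script not found: {entry['path']}"

    # Run with the chosen interpreter, merged env vars, and timeout protection
    cmd = [entry["python_path"], entry["path"], *build_cli_args(args)]
    env = {**os.environ, **(env_vars or {})}
    try:
        result = subprocess.run(cmd, capture_output=True, text=True,
                                timeout=timeout, env=env)
    except subprocess.TimeoutExpired:
        return f"Script timed out after {timeout} seconds"

    # Return truncated output; the full output would be written to a log file
    output = result.stdout + result.stderr
    return "\n".join(output.splitlines()[:output_lines])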

Example Python Scripts

MCPMake works best with scripts that use:

argparse

import argparse

parser = argparse.ArgumentParser(description="Process data files")
parser.add_argument("--input-file", required=True, help="Input CSV file")
parser.add_argument("--output-dir", required=True, help="Output directory")
parser.add_argument("--verbose", action="store_true", help="Verbose output")
args = parser.parse_args()

click

import click

@click.command()
@click.option("--input-file", required=True, help="Input CSV file")
@click.option("--output-dir", required=True, help="Output directory")
@click.option("--verbose", is_flag=True, help="Verbose output")
def main(input_file, output_dir, verbose):
    pass

Simple functions

def main(input_file: str, output_dir: str, verbose: bool = False):
    """
    Process data files.

    Args:
        input_file: Path to input CSV file
        output_dir: Output directory path
        verbose: Enable verbose logging
    """
    pass
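
For a function like the one above, the LLM-extracted argument schema stored in scripts.json would look roughly like the JSON below. The exact fields depend on the model's analysis, so treat this as an illustration rather than a guaranteed output.

{
  "type": "object",
  "properties": {
    "input_file": {"type": "string", "description": "Path to input CSV file"},
    "output_dir": {"type": "string", "description": "Output directory path"},
    "verbose": {"type": "boolean", "description": "Enable verbose logging", "default": false}
  },
  "required": ["input_file", "output_dir"]
}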

Requirements

  • Python 3.10+

  • MCP SDK

  • Anthropic SDK (for Claude)

  • OpenAI SDK (for GPT)

  • jsonschema

License

MIT


