by fkesheh

execute_python_code

Execute Python code directly to run scripts, import reusable skill libraries, and automatically manage dependencies and environment variables without creating temporary files.

Instructions

Execute Python code directly without requiring a script file.

RECOMMENDATION: Prefer Python over bash/shell scripts for better portability, error handling, and maintainability.

IMPORTANT: Use this tool instead of creating temporary script files when you need to run quick Python code.

✅ SUPPORTS PEP 723 INLINE DEPENDENCIES - just like run_skill_script!

FEATURES:

  • PEP 723 inline dependencies: Include dependencies directly in code using /// script comments (auto-detected and installed)

  • Dependency aggregation: When importing from skills, their PEP 723 dependencies are automatically merged into your code

  • Skill file imports: Reference files from skills using namespace format (skill_name:path/to/file.py)

  • Automatic dependency installation: Code with PEP 723 metadata is run with 'uv run'

  • Environment variable loading: Automatically loads .env files from all referenced skills

  • Clean execution: Temporary file is automatically cleaned up after execution

PARAMETERS:

  • code: Python code to execute (can include PEP 723 dependencies)

  • skill_references: Optional list of skill files to make available for import. Format: ["calculator:utils.py", "weather:api/client.py"]. The skill directories will be added to PYTHONPATH, and environment variables from each skill's .env file will be loaded

  • timeout: Optional timeout in seconds (defaults to 30 seconds if not specified)
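The namespace format splits on the first colon only, so file paths inside a skill can themselves contain slashes. A minimal sketch of that parsing (the function name `parse_skill_reference` is illustrative, not part of the tool's API):

```python
def parse_skill_reference(ref: str) -> tuple[str, str]:
    """Split 'skill_name:path/to/file.py' into (skill_name, file_path)."""
    if ":" not in ref:
        # Mirrors the server's error for malformed references.
        raise ValueError(
            f"Invalid skill reference format: '{ref}'. "
            "Expected 'skill_name:path/to/file.py'"
        )
    skill_name, file_path = ref.split(":", 1)  # split on the FIRST colon only
    return skill_name, file_path

print(parse_skill_reference("weather:api/client.py"))  # → ('weather', 'api/client.py')
```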

CROSS-SKILL IMPORTS - BUILD REUSABLE LIBRARIES: Create utility skills once, import them anywhere! Perfect for:

  • Math/statistics libraries (calculator:stats.py)

  • API clients (weather:api_client.py)

  • Data processors (etl:transformers.py)

  • Common utilities (helpers:string_utils.py)

AUTOMATIC DEPENDENCY AGGREGATION: When you reference skill files, their PEP 723 dependencies are automatically collected and merged into your code! You don't need to redeclare dependencies - just reference the modules and their deps are included automatically.

Example - library module with deps:

```python
# data-processor:json_fetcher.py
# /// script
# dependencies = ["requests>=2.31.0"]
# ///
import requests

def fetch_json(url):
    return requests.get(url).json()
```

Your code - NO need to declare requests!

```json
{
  "code": "from json_fetcher import fetch_json\ndata = fetch_json('https://api.example.com')\nprint(data)",
  "skill_references": ["data-processor:json_fetcher.py"]
}
```

Dependencies from json_fetcher.py are automatically aggregated!

Import from single skill:

```json
{
  "code": "from math_utils import add, multiply\nprint(add(10, 20))",
  "skill_references": ["calculator:math_utils.py"]
}
```

Import from multiple skills:

```json
{
  "code": "from math_utils import add\nfrom stats_utils import mean\nfrom converters import celsius_to_fahrenheit\n\nresult = add(10, 20)\navg = mean([10, 20, 30])\ntemp = celsius_to_fahrenheit(25)\nprint(f'Sum: {result}, Avg: {avg}, Temp: {temp}F')",
  "skill_references": ["calculator:math_utils.py", "calculator:stats_utils.py", "calculator:converters.py"]
}
```

Import from subdirectories:

```json
{
  "code": "from advanced.calculus import derivative_at_point\ndef f(x): return x**2\nprint(derivative_at_point(f, 5))",
  "skill_references": ["calculator:advanced/calculus.py"]
}
```
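Subdirectory imports like `advanced.calculus` work because the skill's root directory, not the subdirectory, is what lands on PYTHONPATH. A sketch of the path join, mirroring the server's logic (the `/skills/...` paths are made-up examples):

```python
import os

python_paths = ["/skills/calculator", "/skills/weather"]  # hypothetical skill roots

# Referenced skill directories are prepended so their modules are found first.
existing_path = os.environ.get("PYTHONPATH", "")
new_paths = ":".join(python_paths)
merged = f"{new_paths}:{existing_path}" if existing_path else new_paths
print(merged)
```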

ENVIRONMENT VARIABLES FROM REFERENCED SKILLS: When you import from a skill, its environment variables are automatically loaded:

```json
{
  "code": "from api_client import fetch_weather\ndata = fetch_weather('London')\nprint(data)",
  "skill_references": ["weather:api_client.py"]
}
```

If weather:api_client.py uses API_KEY from its .env file, it will be available automatically!
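The .env loading can be pictured as a simple key=value parse. This is an illustrative stand-in for the server's `EnvironmentService.load_skill_env`, not its actual code:

```python
def parse_env_text(text: str) -> dict[str, str]:
    """Parse simple KEY=value lines, skipping blanks and # comments."""
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip("'\"")  # drop surrounding quotes
    return env

sample = "# weather skill secrets\nAPI_KEY=abc123\n"
print(parse_env_text(sample))  # → {'API_KEY': 'abc123'}
```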

EXAMPLE WITH PEP 723 DEPENDENCIES:

```json
{
  "code": "# /// script\n# dependencies = [\n# \"requests>=2.31.0\",\n# \"pandas\",\n# ]\n# ///\n\nimport requests\nimport pandas as pd\n\nresponse = requests.get('https://api.example.com/data')\ndf = pd.DataFrame(response.json())\nprint(df.head())"
}
```
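Whether a snippet runs under `uv run` or a plain interpreter comes down to a marker check on the code text. A sketch of that dispatch, mirroring the check shown in the implementation below (`python3` stands in for the server's configured DEFAULT_PYTHON_INTERPRETER):

```python
def pick_command(code: str, temp_file: str) -> list[str]:
    """Choose the launcher: PEP 723 metadata routes execution through uv."""
    has_deps = "# /// script" in code or "# /// pyproject" in code
    if has_deps:
        return ["uv", "run", temp_file]
    return ["python3", temp_file]

print(pick_command("# /// script\n# dependencies = ['requests']\n# ///\nimport requests", "/tmp/snippet.py"))
```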

WHY PYTHON OVER BASH/JS:

  • Better error handling and debugging

  • Rich standard library

  • Cross-platform compatibility

  • Easier to read and maintain

  • Strong typing support

  • Better dependency management

RETURNS: Execution result with:

  • Exit code (0 = success, non-zero = failure)

  • STDOUT (standard output)

  • STDERR (error output)
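The exit-code/stdout/stderr triple is exactly what `subprocess.run` returns; a self-contained sketch of capturing it the way the tool does:

```python
import subprocess
import sys

# Run a trivial snippet through a fresh interpreter and capture everything.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from the snippet')"],
    capture_output=True,
    text=True,
    timeout=30,
)
print(result.returncode)      # 0 means success
print(result.stdout.strip())
```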

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| code | Yes | Python code to execute (can include PEP 723 dependencies) | |
| skill_references | No | Optional list of skill files to import using namespace format (e.g., 'calculator:utils.py') | |
| timeout | No | Optional timeout in seconds | 30 |

Implementation Reference

  • Registration of the 'execute_python_code' MCP tool within ScriptTools.get_script_tools(), specifying name, comprehensive description, and inputSchema from ExecutePythonCodeInput.model_json_schema().
````python
types.Tool(
    name="execute_python_code",
    description="""Execute Python code directly without requiring a script file.

RECOMMENDATION: Prefer Python over bash/shell scripts for better portability, error handling, and maintainability.

IMPORTANT: Use this tool instead of creating temporary script files when you need to run quick Python code.

✅ SUPPORTS PEP 723 INLINE DEPENDENCIES - just like run_skill_script!

FEATURES:
- **PEP 723 inline dependencies**: Include dependencies directly in code using /// script comments (auto-detected and installed)
- **Dependency aggregation**: When importing from skills, their PEP 723 dependencies are automatically merged into your code
- Skill file imports: Reference files from skills using namespace format (skill_name:path/to/file.py)
- Automatic dependency installation: Code with PEP 723 metadata is run with 'uv run'
- Environment variable loading: Automatically loads .env files from all referenced skills
- Clean execution: Temporary file is automatically cleaned up after execution

PARAMETERS:
- code: Python code to execute (can include PEP 723 dependencies)
- skill_references: Optional list of skill files to make available for import
  Format: ["calculator:utils.py", "weather:api/client.py"]
  The skill directories will be added to PYTHONPATH
  Environment variables from each skill's .env file will be loaded
- timeout: Optional timeout in seconds (defaults to 30 seconds if not specified)

CROSS-SKILL IMPORTS - BUILD REUSABLE LIBRARIES:
Create utility skills once, import them anywhere! Perfect for:
- Math/statistics libraries (calculator:stats.py)
- API clients (weather:api_client.py)
- Data processors (etl:transformers.py)
- Common utilities (helpers:string_utils.py)

AUTOMATIC DEPENDENCY AGGREGATION:
When you reference skill files, their PEP 723 dependencies are automatically collected and merged into your code! You don't need to redeclare dependencies - just reference the modules and their deps are included automatically.

Example - library module with deps:
```python
# data-processor:json_fetcher.py
# /// script
# dependencies = ["requests>=2.31.0"]
# ///
import requests

def fetch_json(url):
    return requests.get(url).json()
```

Your code - NO need to declare requests!
```json
{
  "code": "from json_fetcher import fetch_json\\ndata = fetch_json('https://api.example.com')\\nprint(data)",
  "skill_references": ["data-processor:json_fetcher.py"]
}
```

Dependencies from json_fetcher.py are automatically aggregated!

Import from single skill:
```json
{
  "code": "from math_utils import add, multiply\\nprint(add(10, 20))",
  "skill_references": ["calculator:math_utils.py"]
}
```

Import from multiple skills:
```json
{
  "code": "from math_utils import add\\nfrom stats_utils import mean\\nfrom converters import celsius_to_fahrenheit\\n\\nresult = add(10, 20)\\navg = mean([10, 20, 30])\\ntemp = celsius_to_fahrenheit(25)\\nprint(f'Sum: {result}, Avg: {avg}, Temp: {temp}F')",
  "skill_references": ["calculator:math_utils.py", "calculator:stats_utils.py", "calculator:converters.py"]
}
```

Import from subdirectories:
```json
{
  "code": "from advanced.calculus import derivative_at_point\\ndef f(x): return x**2\\nprint(derivative_at_point(f, 5))",
  "skill_references": ["calculator:advanced/calculus.py"]
}
```

ENVIRONMENT VARIABLES FROM REFERENCED SKILLS:
When you import from a skill, its environment variables are automatically loaded:
```json
{
  "code": "from api_client import fetch_weather\\ndata = fetch_weather('London')\\nprint(data)",
  "skill_references": ["weather:api_client.py"]
}
```

If weather:api_client.py uses API_KEY from its .env file, it will be available automatically!

EXAMPLE WITH PEP 723 DEPENDENCIES:
```json
{
  "code": "# /// script\\n# dependencies = [\\n# \\"requests>=2.31.0\\",\\n# \\"pandas\\",\\n# ]\\n# ///\\n\\nimport requests\\nimport pandas as pd\\n\\nresponse = requests.get('https://api.example.com/data')\\ndf = pd.DataFrame(response.json())\\nprint(df.head())"
}
```

WHY PYTHON OVER BASH/JS:
- Better error handling and debugging
- Rich standard library
- Cross-platform compatibility
- Easier to read and maintain
- Strong typing support
- Better dependency management

RETURNS: Execution result with:
- Exit code (0 = success, non-zero = failure)
- STDOUT (standard output)
- STDERR (error output)""",
    inputSchema=ExecutePythonCodeInput.model_json_schema(),
),
````
  • Primary MCP handler for 'execute_python_code': receives validated input, calls ScriptService for execution, formats stdout/stderr/exit_code into TextContent response.
```python
async def execute_python_code(
    input_data: ExecutePythonCodeInput,
) -> list[types.TextContent]:
    """Execute Python code directly."""
    try:
        result = await ScriptService.execute_python_code(
            input_data.code,
            input_data.skill_references,
            input_data.timeout,
        )
        output = "Python Code Execution\n"
        output += f"Exit code: {result.exit_code}\n\n"
        if result.stdout:
            output += f"STDOUT:\n{result.stdout}\n"
        if result.stderr:
            output += f"STDERR:\n{result.stderr}\n"
        if not result.stdout and not result.stderr:
            output += "(No output)\n"
        return [types.TextContent(type="text", text=output)]
    except SkillMCPException as e:
        return [types.TextContent(type="text", text=f"Error: {str(e)}")]
    except Exception as e:
        return [types.TextContent(type="text", text=f"Error executing code: {str(e)}")]
```
  • Core execution logic for 'execute_python_code': parses skill_references, aggregates and merges PEP 723 dependencies, creates temp file, executes via 'uv run' or python with proper env/PYTHONPATH, handles timeout and output truncation.
```python
async def execute_python_code(
    code: str,
    skill_references: Optional[List[str]] = None,
    timeout: Optional[int] = None,
) -> ScriptResult:
    """
    Execute Python code directly without requiring a script file.

    Supports PEP 723 inline dependencies using /// script comments.
    Can reference skill files using namespace format (skill_name:path/to/file.py).
    Automatically aggregates PEP 723 dependencies from referenced skill files.

    Args:
        code: Python code to execute (can include PEP 723 dependencies)
        skill_references: List of skill files in namespace format to make available,
            e.g., ["calculator:utils.py", "weather:api/client.py"]
        timeout: Optional timeout in seconds (defaults to SCRIPT_TIMEOUT_SECONDS
            if not specified)

    Returns:
        ScriptResult with stdout, stderr, and exit code

    Raises:
        ScriptExecutionError: If execution fails
    """
    # Use provided timeout or fall back to default
    script_timeout = timeout if timeout is not None else SCRIPT_TIMEOUT_SECONDS
    temp_file = None
    try:
        # Parse skill references and collect dependencies
        env = os.environ.copy()
        python_paths: List[str] = []
        aggregated_deps: List[str] = []
        processed_skills: set[str] = set()  # Track which skills we've processed

        if skill_references:
            for ref in skill_references:
                # Parse namespace format: skill_name:path/to/file.py
                if ":" not in ref:
                    raise ScriptExecutionError(
                        f"Invalid skill reference format: '{ref}'. "
                        "Expected 'skill_name:path/to/file.py'"
                    )
                skill_name, file_path = ref.split(":", 1)
                skill_dir = SKILLS_DIR / skill_name
                if not skill_dir.exists():
                    raise SkillNotFoundError(f"Skill '{skill_name}' does not exist")

                # Add skill directory to PYTHONPATH
                if str(skill_dir) not in python_paths:
                    python_paths.append(str(skill_dir))

                # Load environment variables from this skill (only once per skill)
                if skill_name not in processed_skills:
                    try:
                        skill_env = EnvironmentService.load_skill_env(skill_name)
                        env.update(skill_env)
                        processed_skills.add(skill_name)
                    except Exception:
                        # If we can't load env vars, just skip (skill may not have .env file)
                        processed_skills.add(skill_name)

                # Read the referenced file and extract its dependencies
                ref_file_path = skill_dir / file_path
                if ref_file_path.exists() and ref_file_path.is_file():
                    try:
                        ref_content = ref_file_path.read_text(encoding="utf-8")
                        ref_deps = extract_pep723_dependencies(ref_content)
                        aggregated_deps.extend(ref_deps)
                    except Exception:
                        # If we can't read or parse the file, just skip dependency extraction
                        pass

        # Merge aggregated dependencies into the code
        if aggregated_deps:
            code = merge_dependencies(code, aggregated_deps)

        # Create temporary Python file with merged code
        with tempfile.NamedTemporaryFile(
            mode="w", suffix=".py", delete=False, encoding="utf-8"
        ) as f:
            f.write(code)
            temp_file = Path(f.name)

        # Add paths to PYTHONPATH
        if python_paths:
            existing_path = env.get("PYTHONPATH", "")
            new_paths = ":".join(python_paths)
            env["PYTHONPATH"] = f"{new_paths}:{existing_path}" if existing_path else new_paths

        # Check if code has PEP 723 dependencies (same logic as has_uv_dependencies)
        has_deps = "# /// script" in code or "# /// pyproject" in code

        # Build command
        if has_deps:
            cmd = ["uv", "run", str(temp_file)]
        else:
            cmd = [DEFAULT_PYTHON_INTERPRETER, str(temp_file)]

        # Execute
        result = subprocess.run(
            cmd,
            capture_output=True,
            text=True,
            env=env,
            timeout=script_timeout,
        )

        # Truncate output if needed
        stdout = result.stdout
        if len(stdout) > MAX_OUTPUT_SIZE:
            stdout = stdout[:MAX_OUTPUT_SIZE] + "\n... (output truncated)"
        stderr = result.stderr
        if len(stderr) > MAX_OUTPUT_SIZE:
            stderr = stderr[:MAX_OUTPUT_SIZE] + "\n... (output truncated)"

        return ScriptResult(result.returncode, stdout, stderr)
    except subprocess.TimeoutExpired:
        raise ScriptExecutionError(f"Code execution timed out ({script_timeout} seconds)")
    except (SkillNotFoundError, ScriptExecutionError):
        raise
    except Exception as e:
        raise ScriptExecutionError(f"Failed to execute code: {str(e)}")
    finally:
        # Clean up temporary file
        if temp_file and temp_file.exists():
            try:
                temp_file.unlink()
            except Exception:
                pass
```
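The `merge_dependencies` helper called by the core execution logic is not shown on this page. A hedged sketch of what such a merge could look like, assuming it simply prepends a PEP 723 block when the user code has none (this is an assumption about its behavior, not the actual implementation):

```python
def merge_dependencies(code: str, deps: list[str]) -> str:
    """Illustrative merge: ensure `code` carries a PEP 723 block listing `deps`.

    ASSUMPTION: the real helper also merges into an existing block; this sketch
    leaves code that already has metadata untouched.
    """
    unique_deps = list(dict.fromkeys(deps))  # dedupe while preserving order
    if "# /// script" in code:
        return code
    header_lines = ["# /// script", "# dependencies = ["]
    header_lines += [f'#     "{dep}",' for dep in unique_deps]
    header_lines += ["# ]", "# ///", ""]
    return "\n".join(header_lines) + code

merged = merge_dependencies("import requests\n", ["requests>=2.31.0", "requests>=2.31.0"])
print(merged)
```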
  • Pydantic BaseModel defining input schema for 'execute_python_code' tool: required 'code' str, optional 'skill_references' list and 'timeout' int.
```python
class ExecutePythonCodeInput(BaseModel):
    """Input for executing Python code directly."""

    code: str = Field(description="Python code to execute (can include PEP 723 dependencies)")
    skill_references: Optional[List[str]] = Field(
        default=None,
        description="Optional list of skill files to import using namespace format (e.g., 'calculator:utils.py')",
    )
    timeout: Optional[int] = Field(
        default=None,
        description="Optional timeout in seconds (defaults to 30 seconds if not specified)",
    )
```
  • Helper function to parse and extract PEP 723 dependencies from Python code or referenced skill files for automatic aggregation.
```python
def extract_pep723_dependencies(content: str) -> List[str]:
    """
    Extract dependencies from PEP 723 metadata in code.

    Args:
        content: Python code or file content

    Returns:
        List of dependency strings (e.g., ["requests>=2.31.0", "pandas"])
    """
    # Match PEP 723 script block
    pattern = r"#\s*///\s*script\s*\n(.*?)#\s*///\s*$"
    match = re.search(pattern, content, re.MULTILINE | re.DOTALL)
    if not match:
        return []
    metadata_block = match.group(1)

    # Extract dependencies array
    deps_pattern = r"dependencies\s*=\s*\[(.*?)\]"
    deps_match = re.search(deps_pattern, metadata_block, re.DOTALL)
    if not deps_match:
        return []
    deps_content = deps_match.group(1)

    # Extract individual dependency strings
    dep_strings = re.findall(r'["\']([^"\']+)["\']', deps_content)
    return dep_strings
```
