run_script

Execute Python scripts for data analytics tasks, displaying output and optionally storing DataFrames in memory without modifying original data.

Instructions

Python Script Execution Tool

Purpose: Execute Python scripts for specific data analytics tasks.

Allowed Actions
1. Print Results: Output will be displayed as the script’s stdout.
2. [Optional] Save DataFrames: Store DataFrames in memory for future use by specifying a save_to_memory name.

Prohibited Actions
1. Overwriting Original DataFrames: Do not modify existing DataFrames; they must remain intact for future tasks.
2. Creating Charts: Chart generation is not permitted.
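
For example, a script that stays within these rules prints its results and builds new DataFrames instead of mutating loaded ones. Everything below is illustrative: df_sales and its columns are hypothetical names standing in for a DataFrame loaded earlier via load_csv.

    # Allowed: print results; they are returned as the script's stdout.
    print(df_sales.describe())

    # Allowed: build a NEW DataFrame rather than modifying df_sales in place.
    monthly_summary = (
        df_sales.assign(month=pd.to_datetime(df_sales["date"]).dt.to_period("M"))
        .groupby("month", as_index=False)["revenue"]
        .sum()
    )
    print(monthly_summary.head())
    # Passing save_to_memory=["monthly_summary"] in the tool call would persist this DataFrame.

    # Prohibited: df_sales["revenue"] = df_sales["revenue"] * 1.1   (overwrites original data)
    # Prohibited: df_sales.plot()                                   (chart generation)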

Input Schema

Name            Required  Description                                            Default
script          Yes       Python script to execute                               (none)
save_to_memory  No        Names of DataFrames to store in memory for later use   None
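
Putting the two fields together, the arguments for a run_script call might be shaped like this (the script body and the DataFrame names are illustrative, not taken from the server):

    arguments = {
        "script": (
            "top10 = df_sales.nlargest(10, 'revenue')\n"
            "print(top10)\n"
        ),
        # Only variables the script actually defines under these names get saved.
        "save_to_memory": ["top10"],
    }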

Implementation Reference

  • Core handler function that executes the user-provided Python script with exec() in a controlled environment, exposes the loaded DataFrames and common data science libraries (pandas, NumPy, SciPy, scikit-learn, statsmodels) to the script, captures stdout as the output, and optionally saves new DataFrames to memory.
    def safe_eval(self, script: str, save_to_memory: Optional[List[str]] = None):
        """Safely run a script; return the printed result if valid, otherwise raise an error."""
        # First expose the loaded DataFrames from self.data to the script as local variables.
        local_dict = {df_name: df for df_name, df in self.data.items()}

        # Execute the script while capturing stdout; any exception is surfaced as an MCP error.
        try:
            stdout_capture = StringIO()
            old_stdout = sys.stdout
            sys.stdout = stdout_capture
            self.notes.append(f"Running script: \n{script}")
            # pylint: disable=exec-used
            exec(
                script,
                {'pd': pd, 'np': np, 'scipy': scipy, 'sklearn': sklearn, 'statsmodels': sm},
                local_dict,
            )
            std_out_script = stdout_capture.getvalue()
        except Exception as e:
            raise McpError(INTERNAL_ERROR, f"Error running script: {str(e)}") from e
        finally:
            # Restore stdout so later output is not swallowed by the capture buffer.
            sys.stdout = old_stdout

        # Persist any DataFrames the script created under the requested names.
        if save_to_memory:
            for df_name in save_to_memory:
                self.notes.append(f"Saving dataframe '{df_name}' to memory")
                self.data[df_name] = local_dict.get(df_name)

        output = std_out_script if std_out_script else "No output"
        self.notes.append(f"Result: {output}")
        return [TextContent(type="text", text=f"print out result: {output}")]
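    # Rough usage sketch (not part of the server code): `runner` is assumed to be a
    # ScriptRunner instance whose `data` dict already holds a DataFrame named "df_raw".
    result = runner.safe_eval(
        script="clean = df_raw.dropna()\nprint(clean.shape)",
        save_to_memory=["clean"],   # the script's local `clean` is stored as runner.data["clean"]
    )
    print(result[0].text)           # "print out result: " followed by the captured stdout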
  • Pydantic model defining the input schema for the run_script tool: requires a 'script' string and optional 'save_to_memory' list of DataFrame names to persist.
    class RunScript(BaseModel):
        script: str
        save_to_memory: Optional[List[str]] = None
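    # Illustrative only (not server code): incoming arguments validate against this model
    # with Pydantic v2, and only `script` is required in the generated JSON schema.
    args = RunScript.model_validate({"script": "print(df_1.head())"})
    assert args.save_to_memory is None                      # optional field defaults to None
    assert RunScript.model_json_schema()["required"] == ["script"]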
  • Registration of the 'run_script' tool in the MCP server's list_tools handler, specifying name, description, and input schema.
    Tool(
        name=DataExplorationTools.RUN_SCRIPT,
        description=RUN_SCRIPT_TOOL_DESCRIPTION,
        inputSchema=RunScript.model_json_schema(),
    )
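    # Sketch of the surrounding list_tools handler; the decorator follows the standard MCP
    # Python server API, and the handler name plus the load_csv placeholder are assumptions.
    @server.list_tools()
    async def handle_list_tools() -> list[Tool]:
        return [
            # ... the load_csv tool entry sits alongside this one ...
            Tool(
                name=DataExplorationTools.RUN_SCRIPT,
                description=RUN_SCRIPT_TOOL_DESCRIPTION,
                inputSchema=RunScript.model_json_schema(),
            ),
        ]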
  • Enum defining the tool name constant DataExplorationTools.RUN_SCRIPT = 'run_script'.
    class DataExplorationTools(str, Enum):
        LOAD_CSV = "load_csv"
        RUN_SCRIPT = "run_script"
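    # Because the enum subclasses str, members compare equal to the plain tool-name strings
    # the client sends, which is what the name check in the dispatch below relies on.
    assert DataExplorationTools.RUN_SCRIPT == "run_script"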
  • Dispatch logic in the main @server.call_tool() handler that extracts arguments and invokes the ScriptRunner.safe_eval method for run_script.
    elif name == DataExplorationTools.RUN_SCRIPT:
        script = arguments.get("script")
        save_to_memory = arguments.get("save_to_memory")
        return script_runner.safe_eval(script, save_to_memory)
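
End to end, an MCP client invokes the tool by name with arguments matching the input schema. The following is a minimal sketch using the Python client SDK; it assumes `session` is an initialized ClientSession connected to this server and that df_1 is a previously loaded DataFrame.

    result = await session.call_tool(
        "run_script",
        arguments={
            "script": "summary = df_1.describe()\nprint(summary)",
            "save_to_memory": ["summary"],
        },
    )
    print(result.content[0].text)   # the captured stdout, prefixed with "print out result:"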
