
run_script

Execute Python scripts for data analytics tasks, including data processing, visualization with matplotlib/plotly, and saving results for further analysis.

Instructions

Python Script Execution Tool

Purpose: Execute Python scripts for specific data analytics tasks.

Allowed Actions

1. Print Results: Output will be displayed as the script's stdout.
2. [Optional] Save DataFrames: Store DataFrames in memory for future use by specifying a save_to_memory name.
3. Create Charts: Use matplotlib.pyplot or plotly.graph_objects to create and save charts to an absolute path.

Prohibited Actions

1. Overwriting Original DataFrames: Do not modify existing DataFrames; preserve their integrity for future tasks.
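The allowed and prohibited actions above can be illustrated with a hypothetical script body one might pass to run_script. In the server, DataFrames loaded earlier are available by name; here the `sales` DataFrame is built locally so the sketch is self-contained, and the names `sales` and `totals` are illustrative only.

```python
import pandas as pd

# On the server this DataFrame would already be loaded; built here for the demo.
sales = pd.DataFrame({
    "region": ["north", "south", "north"],
    "revenue": [100, 200, 150],
})

# Allowed action 1: print results -- stdout is returned to the caller.
totals = sales.groupby("region")["revenue"].sum().reset_index()
print(totals)

# Allowed action 2: a derived DataFrame like `totals` can be persisted by
# passing save_to_memory=["totals"].
# Prohibited: reassigning or mutating `sales` itself.
```

Note that the script only creates new names; the original `sales` DataFrame is left untouched, as the prohibited-actions rule requires.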

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| script | Yes | Python code to execute | — |
| save_to_memory | No | Names of DataFrames to persist in memory | None |
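A hypothetical arguments payload matching this schema might look like the following; the script text and the `summary` name are assumptions for illustration.

```python
# Arguments for a run_script tool call (illustrative values).
arguments = {
    "script": "summary = sales.describe()\nprint(summary)",
    "save_to_memory": ["summary"],  # optional; omit to discard results
}
```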

Implementation Reference

  • Core handler function that safely executes the provided Python script using exec(), with access to data analysis libraries (pandas, numpy, etc.), loaded dataframes, and plotting libraries. Captures stdout output and handles errors, optionally saving results to memory.
    ```python
    def safe_eval(self, script: str, save_to_memory: Optional[List[str]] = None):
        """Safely run a script; return the result if valid, otherwise the error message."""
        # First expose the loaded dataframes, plus plotting helpers, as locals
        local_dict = {
            **{df_name: df for df_name, df in self.data.items()},
            'plt': plt,  # matplotlib.pyplot
            'go': go,    # plotly.graph_objects
            'os': os,    # os for path manipulation
        }
        # Execute the script; if there is an error, return the error message
        stdout_capture = StringIO()
        old_stdout = sys.stdout  # store the original stdout
        try:
            sys.stdout = stdout_capture  # redirect stdout
            self.notes.append(f"Running script: \n{script}")
            # pylint: disable=exec-used
            exec(
                script,
                {'pd': pd, 'np': np, 'scipy': scipy, 'sklearn': sklearn, 'statsmodels': sm},
                local_dict,
            )
            std_out_script = stdout_capture.getvalue()
        except Exception as e:
            error_message = f"Error running script: {str(e)}"
            self.notes.append(f"ERROR: {error_message}")
            return [TextContent(type="text", text=f"Error: {error_message}")]
        finally:
            sys.stdout = old_stdout  # restore original stdout

        # Persist any requested dataframes for use by later tasks
        if save_to_memory:
            for df_name in save_to_memory:
                self.notes.append(f"Saving dataframe '{df_name}' to memory")
                self.data[df_name] = local_dict.get(df_name)

        output = std_out_script if std_out_script else "No output"
        self.notes.append(f"Result: {output}")
        return [TextContent(type="text", text=f"print out result: {output}")]
    ```
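The core of safe_eval is a redirect-exec-restore pattern: swap sys.stdout for a buffer, run the script with exec(), and restore stdout in a finally block so an exception cannot leave stdout hijacked. A minimal self-contained sketch of just that pattern (the `run_with_capture` name is hypothetical):

```python
import sys
from io import StringIO

def run_with_capture(script: str, env: dict) -> str:
    """Redirect sys.stdout around exec() and always restore it."""
    buffer = StringIO()
    original_stdout = sys.stdout
    try:
        sys.stdout = buffer
        exec(script, env)  # pylint: disable=exec-used
    finally:
        sys.stdout = original_stdout  # restore even if the script raised
    return buffer.getvalue()

captured = run_with_capture("print(x * 2)", {"x": 21})
```

Here `captured` holds whatever the script printed, which is exactly what safe_eval returns to the caller as the tool's output.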
  • Pydantic schema for run_script tool input: required 'script' (the Python code to execute) and optional 'save_to_memory' list of DataFrame names to persist in memory.
    ```python
    class RunScript(BaseModel):
        script: str
        save_to_memory: Optional[List[str]] = None
    ```
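A quick check of the schema's behavior, assuming Pydantic v2 (the registration code below calls model_json_schema, which is a v2 API): `script` is required, while `save_to_memory` defaults to None when the caller omits it.

```python
from typing import List, Optional
from pydantic import BaseModel

class RunScript(BaseModel):
    script: str
    save_to_memory: Optional[List[str]] = None

# A request with only the required field validates fine.
request = RunScript(script="print('hello')")
schema = RunScript.model_json_schema()
```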
  • Registration of the 'run_script' tool in the MCP server's list_tools handler, specifying name, description, and input schema.
    ```python
    Tool(
        name=DataExplorationTools.RUN_SCRIPT,
        description=RUN_SCRIPT_TOOL_DESCRIPTION,
        inputSchema=RunScript.model_json_schema(),
    ),
    ```
  • Dispatch handler in the MCP call_tool method that invokes the safe_eval function for the 'run_script' tool.
    ```python
    elif name == DataExplorationTools.RUN_SCRIPT:
        return script_runner.safe_eval(
            arguments.get("script"), arguments.get("save_to_memory")
        )
    ```
  • Enum defining the tool name constant RUN_SCRIPT = 'run_script' used throughout the code.
    ```python
    class DataExplorationTools(str, Enum):
        LOAD_FILE = "load_file"
        RUN_SCRIPT = "run_script"
    ```