run_script

Execute Python scripts for data analytics tasks, enabling statistical analysis, chart creation with matplotlib/plotly, and storing results for further processing.

Instructions

Python Script Execution Tool

Purpose: Execute Python scripts for specific data analytics tasks.

Allowed Actions

1. Print Results: Output will be displayed as the script's stdout.
2. Save DataFrames (optional): Store DataFrames in memory for future use by specifying a save_to_memory name.
3. Create Charts: Use matplotlib.pyplot or plotly.graph_objects to create charts and save them to an absolute path.
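A sketch of a script that exercises these actions, assuming the tool pre-injects pandas as `pd` (the data and the `summary` name are made-up examples):

```python
import pandas as pd  # pd is pre-injected by the tool; imported here so the snippet runs standalone

# Hypothetical data; in practice `df` would be one of the pre-loaded DataFrames.
df = pd.DataFrame({"price": [10.0, 12.5, 9.8, 15.2]})

# 1. Print results: anything written to stdout is returned to the caller.
summary = df["price"].describe()
print(summary)

# 2. Save DataFrames: listing "summary" in save_to_memory would persist it for later tasks.
# 3. Create charts: plt.savefig("/abs/path/chart.png") would save a figure to an absolute path.
```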

Prohibited Actions

1. Overwriting Original DataFrames: Do not modify existing DataFrames; this preserves their integrity for future tasks.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| script | Yes | The Python script to execute. | — |
| save_to_memory | No | Names of DataFrames created by the script to keep in memory for later tasks. | None |
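Built and serialized in Python, the arguments a client might send could look like this (the script body and DataFrame name are hypothetical; only the two field names come from the schema):

```python
import json

# "script" and "save_to_memory" are the two fields defined by the input schema.
arguments = {
    "script": "summary = df.describe()\nprint(summary)",
    "save_to_memory": ["summary"],
}
payload = json.dumps(arguments)
```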

Implementation Reference

  • Core handler function that safely executes the provided Python script using exec(), with access to globals like pd, np, etc., and loaded dataframes. Captures stdout and handles errors.
    ```python
    def safe_eval(self, script: str, save_to_memory: Optional[List[str]] = None):
        """Safely run a script; return the result if valid, otherwise the error message."""
        # Expose the loaded DataFrames plus plotting/path helpers as the script's locals.
        local_dict = {
            **{df_name: df for df_name, df in self.data.items()},
            'plt': plt,  # matplotlib
            'go': go,    # plotly graph_objects
            'os': os,    # path manipulation
        }
        stdout_capture = StringIO()
        old_stdout = sys.stdout  # store the original stdout
        try:
            sys.stdout = stdout_capture  # redirect stdout
            self.notes.append(f"Running script: \n{script}")
            # pylint: disable=exec-used
            exec(
                script,
                {'pd': pd, 'np': np, 'scipy': scipy, 'sklearn': sklearn, 'statsmodels': sm},
                local_dict,
            )
            std_out_script = stdout_capture.getvalue()
        except Exception as e:
            error_message = f"Error running script: {str(e)}"
            self.notes.append(f"ERROR: {error_message}")
            return [TextContent(type="text", text=f"Error: {error_message}")]
        finally:
            sys.stdout = old_stdout  # restore original stdout

        # Persist any DataFrames the caller asked to keep.
        if save_to_memory:
            for df_name in save_to_memory:
                self.notes.append(f"Saving dataframe '{df_name}' to memory")
                self.data[df_name] = local_dict.get(df_name)

        output = std_out_script if std_out_script else "No output"
        self.notes.append(f"Result: {output}")
        return [TextContent(type="text", text=f"print out result: {output}")]
    ```
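The core pattern here — restricted globals for `exec()` plus stdout redirection — can be sketched in isolation (function and variable names are hypothetical, not part of the server's API):

```python
import sys
from io import StringIO

def run_sandboxed(script: str, local_vars: dict) -> str:
    """Minimal sketch of safe_eval's capture pattern."""
    capture, old_stdout = StringIO(), sys.stdout
    sys.stdout = capture          # capture the script's print() output
    try:
        # NOTE: exec() is not a true sandbox; a hostile script can still
        # escape these restricted globals.
        exec(script, {"min": min, "max": max}, local_vars)
    finally:
        sys.stdout = old_stdout   # always restore stdout, even on error
    return capture.getvalue()

out = run_sandboxed("print(max(rows))", {"rows": [3, 1, 2]})  # → "3\n"
```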
  • Pydantic input schema for the run_script tool: required 'script' (str) and optional 'save_to_memory' (list of str for saving resulting DataFrames).
    ```python
    class RunScript(BaseModel):
        script: str
        save_to_memory: Optional[List[str]] = None
    ```
  • Registers the 'run_script' tool in the MCP server's list_tools() handler, providing name, description, and input schema.
    ```python
    Tool(
        name=DataExplorationTools.RUN_SCRIPT,
        description=RUN_SCRIPT_TOOL_DESCRIPTION,
        inputSchema=RunScript.model_json_schema(),
    ),
    ```
  • Enum value defining the tool name as 'run_script' in DataExplorationTools.
    ```python
    RUN_SCRIPT = "run_script"
    ```
  • Dispatch in the MCP call_tool handler that invokes the safe_eval method for the run_script tool.
    ```python
    elif name == DataExplorationTools.RUN_SCRIPT:
        return script_runner.safe_eval(
            arguments.get("script"),
            arguments.get("save_to_memory"),
        )
    ```
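The name-to-handler dispatch above can be sketched with a plain lookup table (all names here are hypothetical simplifications, not the server's actual code):

```python
# Hypothetical, stripped-down version of an MCP call_tool dispatch.
HANDLERS = {
    "run_script": lambda args: f"ran: {args['script']}",
}

def call_tool(name: str, arguments: dict):
    handler = HANDLERS.get(name)
    if handler is None:
        raise ValueError(f"Unknown tool: {name}")
    return handler(arguments)

result = call_tool("run_script", {"script": "print(1)"})  # → "ran: print(1)"
```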


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/OuchiniKaeru/mcp_data_analyzer'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.