# format_data_for_tool
Analyze user requirements and raw data to produce correctly formatted parameters for aerospace tools.
## Instructions
Helps format data into the correct shape for a specific aerospace-mcp tool.
Uses GPT-5-Medium to analyze the user's requirements and raw data, then provides the correctly formatted parameters for the specified tool.
Args:
- `tool_name`: Name of the aerospace-mcp tool to format data for
- `user_requirements`: Description of what the user wants to accomplish
- `raw_data`: Any raw data that needs to be formatted (optional)
Returns: Formatted JSON string with the correct parameters for the tool, or a JSON error object if the tool is not found or LLM call fails.
Raises: No exceptions are raised directly; errors are returned as formatted strings or JSON error objects.
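Because the tool never raises, callers must inspect the returned string to tell success from failure. The following is a minimal sketch (not part of aerospace-mcp itself) of a caller-side check that distinguishes the three documented outcomes: a plain `Error: ...` string, a JSON object with an `"error"` field, and a valid parameter object:

```python
import json

def classify_result(result: str) -> str:
    """Classify the string returned by format_data_for_tool.

    Sketch of the documented error contract: failures come back as plain
    "Error: ..." strings or as JSON objects with an "error" field.
    """
    if result.startswith("Error:"):
        return "plain-error"
    try:
        parsed = json.loads(result)
    except json.JSONDecodeError:
        return "invalid"
    return "json-error" if isinstance(parsed, dict) and "error" in parsed else "ok"

classify_result("Error: OPENAI_API_KEY environment variable not set. Cannot use agent tools.")  # "plain-error"
classify_result('{"error": "destination airport is missing"}')  # "json-error"
classify_result('{"tool_name": "x"}')  # "ok"
```

The `"invalid"` branch should be unreachable in practice, since the handler validates its own output with `json.loads` before returning, but defensive callers may still want it.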
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| tool_name | Yes | Name of the aerospace-mcp tool to format data for | |
| user_requirements | Yes | Description of what the user wants to accomplish | |
| raw_data | No | Any raw data that needs to be formatted | `""` |
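The input schema maps directly onto a flat keyword payload. As a hypothetical example (the tool name and requirement text below are invented for illustration, not taken from the aerospace-mcp tool list):

```python
# Hypothetical invocation payload matching the input schema above.
# "plan_flight" and the requirement text are illustrative placeholders.
payload = {
    "tool_name": "plan_flight",
    "user_requirements": "Plan a route from San Jose to New York",
    "raw_data": "",  # optional; defaults to the empty string
}
```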
## Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| result | Yes | Formatted JSON string with the tool parameters, or a JSON error object | |
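Note that `result` is a JSON *string*, not a decoded object, so callers decode it before passing the parameters on. A short sketch, with hypothetical parameter names:

```python
import json

# The tool returns a JSON-encoded string; decode before use.
# "origin"/"destination" are hypothetical parameter names for illustration.
result = '{"origin": "KSFO", "destination": "KJFK"}'
params = json.loads(result)

# Error responses arrive the same way, as an object with an "error" field.
error_result = '{"error": "need a departure airport"}'
error_obj = json.loads(error_result)
```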
## Implementation Reference
- aerospace_mcp/tools/agents.py:182-264 (handler): The main handler for `format_data_for_tool`. It checks whether LLM tools are enabled, looks up the tool reference in `AEROSPACE_TOOLS`, validates that the OpenAI API key is set, builds a prompt from the tool's schema, calls GPT-5-Medium via LiteLLM, and returns formatted JSON parameters or an error.

```python
def format_data_for_tool(
    tool_name: str,
    user_requirements: str,
    raw_data: str = "",
) -> str:
    """
    Help format data in the correct format for a specific aerospace-mcp tool.

    Uses GPT-5-Medium to analyze the user's requirements and raw data, then
    provides the correctly formatted parameters for the specified tool.

    Args:
        tool_name: Name of the aerospace-mcp tool to format data for
        user_requirements: Description of what the user wants to accomplish
        raw_data: Any raw data that needs to be formatted (optional)

    Returns:
        Formatted JSON string with the correct parameters for the tool, or a
        JSON error object if the tool is not found or the LLM call fails.

    Raises:
        No exceptions are raised directly; errors are returned as formatted
        strings or JSON error objects.
    """
    # Check if LLM tools are enabled
    if not LLM_TOOLS_ENABLED:
        return "Error: LLM agent tools are disabled. Set LLM_TOOLS_ENABLED=true to enable them."

    # Find the tool reference first
    tool_ref = None
    for tool in AEROSPACE_TOOLS:
        if tool.name == tool_name:
            tool_ref = tool
            break

    if not tool_ref:
        available_tools = [t.name for t in AEROSPACE_TOOLS]
        return f"Error: Tool '{tool_name}' not found. Available tools: {', '.join(available_tools)}"

    if "OPENAI_API_KEY" not in os.environ:
        return "Error: OPENAI_API_KEY environment variable not set. Cannot use agent tools."

    # Build the prompt for GPT-5-Medium
    system_prompt = f"""You are a data formatting assistant for aerospace-mcp tools.
Your job is to help format data correctly for the '{tool_name}' tool.

Tool Information:
- Name: {tool_ref.name}
- Description: {tool_ref.description}
- Parameters: {json.dumps(tool_ref.parameters, indent=2)}
- Examples: {json.dumps(tool_ref.examples, indent=2)}

User Requirements: {user_requirements}

Raw Data (if provided): {raw_data}

Please provide ONLY a valid JSON object with the correctly formatted parameters for this tool.
Do not include any explanation or additional text - just the JSON object that can be directly used as input to the tool.

If the user's requirements are unclear or insufficient data is provided, return a JSON object with an "error" field explaining what additional information is needed."""

    try:
        # Call GPT-5-Medium via LiteLLM
        response = litellm.completion(
            model="gpt-5-medium",
            messages=[
                {"role": "system", "content": system_prompt},
                {
                    "role": "user",
                    "content": f"Format data for {tool_name}: {user_requirements}",
                },
            ],
            temperature=0.1,
            max_tokens=1000,
        )

        formatted_result = response.choices[0].message.content.strip()

        # Validate it's valid JSON
        try:
            json.loads(formatted_result)
            return formatted_result
        except json.JSONDecodeError:
            return f'{{"error": "Failed to generate valid JSON format. Raw response: {formatted_result}"}}'

    except Exception as e:
        return f'{{"error": "Failed to format data: {str(e)}"}}'
```

- aerospace_mcp/fastmcp_server.py:247-248 (registration): Registration of `format_data_for_tool` as an MCP tool via `mcp.tool()` in the FastMCP server.
```python
mcp.tool(format_data_for_tool)
mcp.tool(select_aerospace_tool)
```

- aerospace_mcp/cli.py:164-165 (registration): Registration of `format_data_for_tool` in the CLI tool registry dict, mapping the string name to the function reference.
```python
"format_data_for_tool": format_data_for_tool,
"select_aerospace_tool": select_aerospace_tool,
```

- aerospace_mcp/tools/agents.py:40-46 (schema): The `ToolReference` Pydantic model used to define each tool's schema (name, description, parameters, examples). The `AEROSPACE_TOOLS` list provides the reference data used by `format_data_for_tool`.
```python
class ToolReference(BaseModel):
    """Reference to an aerospace-mcp tool with its schema."""

    name: str
    description: str
    parameters: dict[str, Any]
    examples: list[str] = []
```

- aerospace_mcp/tools/agents.py:25-38 (helper): The `LLM_TOOLS_ENABLED` flag and environment variable checks that control whether the agent tools (including `format_data_for_tool`) are operational.
```python
# Check if LLM tools are enabled via environment variable
LLM_TOOLS_ENABLED = os.environ.get("LLM_TOOLS_ENABLED", "false").lower() == "true"

# Configure LiteLLM for OpenAI GPT-5-Medium
litellm.set_verbose = False

# Log status of LLM tools
if not LLM_TOOLS_ENABLED:
    logger.info("LLM tools disabled via LLM_TOOLS_ENABLED environment variable.")
elif "OPENAI_API_KEY" not in os.environ:
    logger.warning(
        "LLM_TOOLS_ENABLED=true but OPENAI_API_KEY not set. Agent tools will not function without it."
    )
```
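The gating logic above is strict: only the literal string `"true"` (case-insensitive) enables the agent tools, so values like `"1"` or `"yes"` leave them disabled. A minimal standalone sketch of that check, written as a pure function over a dict for testability (the real code reads `os.environ` at module import time):

```python
def llm_tools_enabled(env: dict) -> bool:
    """Mirror of the module-level flag: only "true" (any case) enables tools."""
    return env.get("LLM_TOOLS_ENABLED", "false").lower() == "true"

llm_tools_enabled({"LLM_TOOLS_ENABLED": "True"})  # True
llm_tools_enabled({"LLM_TOOLS_ENABLED": "1"})     # False - "1" is not accepted
llm_tools_enabled({})                             # False - defaults to "false"
```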