Glama

format_data_for_tool

Analyze user requirements and raw data to produce correctly formatted parameters for aerospace tools.

Instructions

Format raw data into correct parameters for a specific aerospace-mcp tool.

Uses GPT-5-Medium to analyze the user's requirements and raw data, then provides the correctly formatted parameters for the specified tool.

Args:
- tool_name: Name of the aerospace-mcp tool to format data for
- user_requirements: Description of what the user wants to accomplish
- raw_data: Any raw data that needs to be formatted (optional)

Returns: Formatted JSON string with the correct parameters for the tool, or a JSON error object if the tool is not found or LLM call fails.

Raises: No exceptions are raised directly; errors are returned as formatted strings or JSON error objects.
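Because failures are returned as strings or JSON error objects rather than raised, a caller should parse every result and check for an error key. A minimal sketch of that check (the helper name is illustrative, not part of the server):

```python
import json

def parse_tool_result(result: str) -> dict:
    """Normalize format_data_for_tool output: plain-string errors
    (e.g. "Error: ...") become JSON-style error objects."""
    try:
        payload = json.loads(result)
    except json.JSONDecodeError:
        # Plain-string errors such as "Error: Tool 'x' not found." land here
        return {"error": result}
    if isinstance(payload, dict):
        return payload
    return {"error": f"Unexpected payload type: {type(payload).__name__}"}

print(parse_tool_result("Error: Tool 'foo' not found."))
print(parse_tool_result('{"origin": "SFO", "destination": "JFK"}'))
```

Checking for an `"error"` key in the parsed result covers both failure paths the docstring describes.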

Input Schema

Name               Required  Description  Default
tool_name          Yes
user_requirements  Yes
raw_data           No

Output Schema

Name    Required  Description  Default
result  Yes

Implementation Reference

  • The main handler function for the 'format_data_for_tool' tool. It checks if LLM tools are enabled, looks up the tool reference from AEROSPACE_TOOLS, validates the OpenAI API key, builds a prompt with tool schema info, calls GPT-5-Medium via LiteLLM, and returns formatted JSON parameters or an error.
    def format_data_for_tool(
        tool_name: str, user_requirements: str, raw_data: str = ""
    ) -> str:
        """
        Format raw data into correct parameters for a specific aerospace-mcp tool.

        Uses GPT-5-Medium to analyze the user's requirements and raw data, then provides
        the correctly formatted parameters for the specified tool.

        Args:
            tool_name: Name of the aerospace-mcp tool to format data for
            user_requirements: Description of what the user wants to accomplish
            raw_data: Any raw data that needs to be formatted (optional)

        Returns:
            Formatted JSON string with the correct parameters for the tool,
            or a JSON error object if the tool is not found or the LLM call fails.

        Raises:
            No exceptions are raised directly; errors are returned as formatted strings
            or JSON error objects.
        """
        # Check if LLM tools are enabled
        if not LLM_TOOLS_ENABLED:
            return "Error: LLM agent tools are disabled. Set LLM_TOOLS_ENABLED=true to enable them."

        # Find the tool reference first
        tool_ref = None
        for tool in AEROSPACE_TOOLS:
            if tool.name == tool_name:
                tool_ref = tool
                break

        if not tool_ref:
            available_tools = [t.name for t in AEROSPACE_TOOLS]
            return f"Error: Tool '{tool_name}' not found. Available tools: {', '.join(available_tools)}"

        if "OPENAI_API_KEY" not in os.environ:
            return "Error: OPENAI_API_KEY environment variable not set. Cannot use agent tools."

        # Build the prompt for GPT-5-Medium
        system_prompt = f"""You are a data formatting assistant for aerospace-mcp tools. Your job is to help format data correctly for the '{tool_name}' tool.

    Tool Information:
    - Name: {tool_ref.name}
    - Description: {tool_ref.description}
    - Parameters: {json.dumps(tool_ref.parameters, indent=2)}
    - Examples: {json.dumps(tool_ref.examples, indent=2)}

    User Requirements: {user_requirements}

    Raw Data (if provided): {raw_data}

    Please provide ONLY a valid JSON object with the correctly formatted parameters for this tool. Do not include any explanation or additional text - just the JSON object that can be directly used as input to the tool.

    If the user's requirements are unclear or insufficient data is provided, return a JSON object with an "error" field explaining what additional information is needed."""

        try:
            # Call GPT-5-Medium via LiteLLM
            response = litellm.completion(
                model="gpt-5-medium",
                messages=[
                    {"role": "system", "content": system_prompt},
                    {
                        "role": "user",
                        "content": f"Format data for {tool_name}: {user_requirements}",
                    },
                ],
                temperature=0.1,
                max_tokens=1000,
            )

            formatted_result = response.choices[0].message.content.strip()

            # Validate it's valid JSON before passing it back
            try:
                json.loads(formatted_result)
                return formatted_result
            except json.JSONDecodeError:
                # json.dumps keeps the error object valid even when the raw
                # response contains quotes that would break naive interpolation
                return json.dumps(
                    {
                        "error": "Failed to generate valid JSON format. "
                        f"Raw response: {formatted_result}"
                    }
                )

        except Exception as e:
            return json.dumps({"error": f"Failed to format data: {e}"})
  • Registration of format_data_for_tool as an MCP tool via mcp.tool() in the FastMCP server.
    mcp.tool(format_data_for_tool)
    mcp.tool(select_aerospace_tool)
  • Registration of format_data_for_tool in the CLI tool registry dict, mapping the string name to the function reference.
    "format_data_for_tool": format_data_for_tool,
    "select_aerospace_tool": select_aerospace_tool,
  • The ToolReference model (Pydantic BaseModel) used to define each tool's schema (name, description, parameters, examples). AEROSPACE_TOOLS list provides the reference data used by format_data_for_tool.
    class ToolReference(BaseModel):
        """Reference to an aerospace-mcp tool with its schema."""
    
        name: str
        description: str
        parameters: dict[str, Any]
        examples: list[str] = []
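Each entry in AEROSPACE_TOOLS is a ToolReference instance. The dependency-free sketch below mirrors the same shape with a stdlib dataclass; the tool name and field values are hypothetical, for illustration only:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class ToolReferenceSketch:
    """Stdlib stand-in mirroring the Pydantic ToolReference fields."""
    name: str
    description: str
    parameters: dict[str, Any]
    examples: list[str] = field(default_factory=list)

# Hypothetical registry entry, shaped like an AEROSPACE_TOOLS element
great_circle = ToolReferenceSketch(
    name="great_circle_distance",
    description="Compute the great-circle distance between two airports.",
    parameters={"origin": "IATA code (str)", "destination": "IATA code (str)"},
    examples=['{"origin": "SFO", "destination": "JFK"}'],
)
print(great_circle.name)
```

The `parameters` and `examples` fields are exactly what format_data_for_tool interpolates into its system prompt, so richer entries give the LLM more to work with.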
  • LLM_TOOLS_ENABLED flag and environment variable checks that control whether the agent tools (including format_data_for_tool) are operational.
    # Check if LLM tools are enabled via environment variable
    LLM_TOOLS_ENABLED = os.environ.get("LLM_TOOLS_ENABLED", "false").lower() == "true"
    
    # Configure LiteLLM for OpenAI GPT-5-Medium
    litellm.set_verbose = False
    
    # Log status of LLM tools
    if not LLM_TOOLS_ENABLED:
        logger.info("LLM tools disabled via LLM_TOOLS_ENABLED environment variable.")
    elif "OPENAI_API_KEY" not in os.environ:
        logger.warning(
            "LLM_TOOLS_ENABLED=true but OPENAI_API_KEY not set. Agent tools will not function without it."
        )
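Assuming a POSIX shell, enabling the agent tools therefore takes two environment variables (the key value is a placeholder):

```shell
# Opt in to the LLM agent tools; both variables are read at server start-up
export LLM_TOOLS_ENABLED=true     # comparison is lowercased, so "True" also works
export OPENAI_API_KEY="sk-..."    # placeholder; substitute your real key
echo "$LLM_TOOLS_ENABLED"
```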
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description reveals key behaviors: uses GPT-5-Medium for formatting, returns JSON strings or error objects, and does not raise exceptions directly. It does not mention potential latency or costs of the LLM call, but covers the core functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear docstring format (Args/Returns/Raises). It is mostly concise, though the opening sentence is somewhat redundant with the docstring.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, the description covers the tool's purpose, parameters, and return format adequately for an agent to decide when to use it. It explains the meta-tool nature relative to sibling tools, though more details on LLM behavior could enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description's Args section adds meaning beyond the schema: tool_name as the target tool name, user_requirements as user intent, raw_data as optional raw data. This compensates for the 0% schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool formats data for another aerospace-mcp tool using GPT-5-Medium. It specifies the verb 'format' and the resource 'data for a specific aerospace-mcp tool', distinguishing it from siblings that perform direct calculations or analyses.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when raw data or requirements need formatting for an aerospace tool, but does not explicitly state when not to use it or provide alternatives. Context suggests it is a helper tool, but no direct exclusion criteria are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
