
terraform-cloud-mcp

get_apply_logs

Retrieve detailed logs from Terraform Cloud apply operations to monitor resource changes and identify errors during infrastructure deployment.

Instructions

Retrieve logs from an apply.

Gets the raw log output from a Terraform Cloud apply operation, providing detailed information about resource changes and any errors.

API endpoint: Uses the log-read-url from GET /applies/{apply_id}

Args: apply_id: The ID of the apply to retrieve logs for (format: "apply-xxxxxxxx")

Returns: The raw logs from the apply operation. The redirect to the log file is automatically followed.

See: docs/tools/apply.md for reference documentation
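Since the tool returns raw log text, a caller typically post-processes it to surface failures. A minimal sketch of that, assuming error lines contain the marker "Error:" (the sample log content below is illustrative, not real Terraform output):

```python
def extract_error_lines(raw_logs: str) -> list[str]:
    """Return log lines that look like Terraform errors."""
    return [line for line in raw_logs.splitlines() if "Error:" in line]

# Illustrative stand-in for logs returned by get_apply_logs
sample_logs = """\
random_pet.server: Creating...
random_pet.server: Creation complete after 0s
Error: Invalid provider configuration
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
"""

print(extract_error_lines(sample_logs))  # prints the single "Error:" line
```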

Input Schema

| Name     | Required | Description | Default |
|----------|----------|-------------|---------|
| apply_id | Yes      |             |         |

Implementation Reference

  • The core handler function for the get_apply_logs tool. It validates the apply_id using ApplyRequest model, fetches apply details to extract the log-read-url, and retrieves the logs via API request.
    @handle_api_errors
    async def get_apply_logs(apply_id: str) -> APIResponse:
        """Retrieve logs from an apply.
    
        Gets the raw log output from a Terraform Cloud apply operation,
        providing detailed information about resource changes and any errors.
    
        API endpoint: Uses the log-read-url from GET /applies/{apply_id}
    
        Args:
            apply_id: The ID of the apply to retrieve logs for (format: "apply-xxxxxxxx")
    
        Returns:
            The raw logs from the apply operation. The redirect to the log file
            is automatically followed.
    
        See:
            docs/tools/apply.md for reference documentation
        """
        # Validate parameters using existing model
        params = ApplyRequest(apply_id=apply_id)
    
        # First get apply details to get the log URL
        apply_details = await api_request(f"applies/{params.apply_id}")
    
        # Extract log read URL
        log_read_url = (
            apply_details.get("data", {}).get("attributes", {}).get("log-read-url")
        )
        if not log_read_url:
            return {"error": "No log URL available for this apply"}
    
        # Use the enhanced api_request to fetch logs from the external URL
        return await api_request(log_read_url, external_url=True, accept_text=True)
  • Pydantic input schema model ApplyRequest used to validate the apply_id parameter in the handler.
    class ApplyRequest(APIRequest):
        """Request model for retrieving an apply.
    
        Used to validate the apply ID parameter for API requests.
    
        Reference: https://developer.hashicorp.com/terraform/cloud-docs/api-docs/applies#show-an-apply
    
        See:
            docs/models/apply.md for reference
        """
    
        apply_id: str = Field(
            ...,
            # No alias needed as field name matches API parameter
            description="The ID of the apply to retrieve",
            pattern=r"^apply-[a-zA-Z0-9]{16}$",  # Standard apply ID pattern
        )
  • MCP tool registration for the get_apply_logs function using FastMCP's tool decorator.
    mcp.tool()(applies.get_apply_logs)
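The log-URL extraction step in the handler above can be exercised against a minimal stand-in for the GET /applies/{apply_id} response body (the ID and URL below are illustrative, not real values):

```python
# Minimal stand-in for the JSON body returned by GET /applies/{apply_id}
sample_response = {
    "data": {
        "id": "apply-47MBvjwzBG8YKc2v",
        "type": "applies",
        "attributes": {
            "status": "finished",
            "log-read-url": "https://archivist.terraform.io/v1/object/example",
        },
    }
}

# Same chained-.get() extraction the handler performs; missing keys
# fall through to None instead of raising KeyError
log_read_url = (
    sample_response.get("data", {}).get("attributes", {}).get("log-read-url")
)
print(log_read_url)
```

The chained `.get()` calls are what let the handler return its "No log URL available" error instead of crashing when the attribute is absent.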
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool follows redirects automatically and uses a specific API endpoint, which is useful behavioral context. However, it doesn't mention authentication requirements, rate limits, error handling, or whether this is a read-only operation (though 'Retrieve' implies it).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, API endpoint, Args, Returns, See). Every sentence adds value: the first states the purpose, the second elaborates on content, the third specifies the endpoint, and the parameter/return sections provide essential usage details without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single parameter and no output schema, the description provides good coverage: it explains what the tool does, what parameter it needs, what it returns, and includes a reference for further documentation. The main gap is lack of explicit behavioral details like authentication or error handling, but overall it's quite complete for this tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It provides the parameter name ('apply_id'), clarifies its purpose ('The ID of the apply to retrieve logs for'), and specifies the expected format ('apply-xxxxxxxx'). This adds substantial meaning beyond the bare schema, though it doesn't explain where to obtain this ID.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
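The "apply-xxxxxxxx" format mentioned in the description is enforced more strictly by the ApplyRequest model's regex. A sketch of that check, assuming the pattern from the model above ("apply-" plus exactly 16 alphanumeric characters; the sample IDs are illustrative):

```python
import re

# Same pattern the ApplyRequest Pydantic model enforces
APPLY_ID_PATTERN = re.compile(r"^apply-[a-zA-Z0-9]{16}$")

def is_valid_apply_id(apply_id: str) -> bool:
    """Check an apply ID against the schema's format constraint."""
    return APPLY_ID_PATTERN.match(apply_id) is not None

print(is_valid_apply_id("apply-47MBvjwzBG8YKc2v"))  # True
print(is_valid_apply_id("apply-short"))             # False: too few characters
```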

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Retrieve logs'), target resource ('from an apply'), and scope ('raw log output from a Terraform Cloud apply operation'). It distinguishes itself from sibling tools like 'get_apply_details' by focusing on logs rather than general apply metadata.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Gets the raw log output... providing detailed information about resource changes and any errors'), but does not explicitly mention when not to use it or name alternatives. It implies usage for debugging apply operations but lacks explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
