
terraform-cloud-mcp

get_assessment_log_output

Retrieve detailed execution logs and error information from Terraform Cloud assessment results to analyze infrastructure operations.

Instructions

Retrieve logs from an assessment result.

Gets the raw log output from a Terraform Cloud assessment operation, providing detailed information about the execution and any errors.

API endpoint: GET /api/v2/assessment-results/{assessment_result_id}/log-output

Args: assessment_result_id: The ID of the assessment result to retrieve logs for (format: "asmtres-xxxxxxxx")

Returns: The raw logs from the assessment operation. The redirect to the log file is automatically followed.

Note: This endpoint requires admin-level access to the workspace and cannot be accessed with organization tokens.

See: docs/tools/assessment_results.md for reference documentation
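For orientation, the endpoint above can also be exercised directly over HTTP. The sketch below (Python standard library only) builds such a request; the base URL and token are placeholders, and the Accept header mirrors the tool's plain-text log behavior:

```python
import urllib.request

API_BASE = "https://app.terraform.io/api/v2"  # Terraform Cloud API root

def build_log_request(assessment_result_id: str, token: str) -> urllib.request.Request:
    """Build a GET request for an assessment result's log output.

    The API answers with a redirect to the actual log file; urllib
    follows redirects automatically, matching the behavior noted above.
    """
    url = f"{API_BASE}/assessment-results/{assessment_result_id}/log-output"
    return urllib.request.Request(
        url,
        headers={
            # Must be a user or team token with admin access to the
            # workspace; organization tokens are rejected.
            "Authorization": f"Bearer {token}",
            "Accept": "text/plain",
        },
    )

# Example (placeholder ID and token):
# req = build_log_request("asmtres-xxxxxxxx", "YOUR_TFC_TOKEN")
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```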

Input Schema

Name: assessment_result_id
Required: Yes
Description: (none)
Default: (none)

Implementation Reference

  • The core handler function implementing the tool logic: it validates input using AssessmentOutputRequest, then calls the Terraform Cloud API to fetch log output from /assessment-results/{id}/log-output, requesting a plain-text response.
    @handle_api_errors
    async def get_assessment_log_output(assessment_result_id: str) -> APIResponse:
        """Retrieve logs from an assessment result.
    
        Gets the raw log output from a Terraform Cloud assessment operation,
        providing detailed information about the execution and any errors.
    
        API endpoint: GET /api/v2/assessment-results/{assessment_result_id}/log-output
    
        Args:
            assessment_result_id: The ID of the assessment result to retrieve logs for (format: "asmtres-xxxxxxxx")
    
        Returns:
            The raw logs from the assessment operation. The redirect to the log file
            is automatically followed.
    
        Note:
            This endpoint requires admin level access to the workspace and cannot be accessed
            with organization tokens.
    
        See:
            docs/tools/assessment_results.md for reference documentation
        """
        # Validate parameters
        params = AssessmentOutputRequest(assessment_result_id=assessment_result_id)
    
        # Make API request with text acceptance for the logs
        return await api_request(
            f"assessment-results/{params.assessment_result_id}/log-output", accept_text=True
        )
  • Pydantic models for input validation: AssessmentResultRequest defines the assessment_result_id with regex pattern, AssessmentOutputRequest subclasses it for output endpoints like logs.
    class AssessmentResultRequest(APIRequest):
        """Request model for retrieving assessment result details.
    
        Used to validate the assessment result ID parameter for API requests.
    
        Reference: https://developer.hashicorp.com/terraform/cloud-docs/api-docs/assessment-results#show-assessment-result
    
        See:
            docs/models/assessment_result.md for reference
        """
    
        assessment_result_id: str = Field(
            ...,
            # No alias needed as field name matches API parameter
            description="The ID of the assessment result to retrieve",
            pattern=r"^asmtres-[a-zA-Z0-9]{8,}$",  # Standard assessment result ID pattern
        )
    
    
    class AssessmentOutputRequest(AssessmentResultRequest):
        """Request model for retrieving assessment result outputs.
    
        Extends the base AssessmentResultRequest for specialized outputs like
        JSON plan, schema, and log output.
    
        Reference: https://developer.hashicorp.com/terraform/cloud-docs/api-docs/assessment-results#retrieve-the-json-output-from-the-assessment-execution
    
        See:
            docs/models/assessment_result.md for reference
        """
    
        pass  # Uses the same validation as the parent class
  • Tool registration in the FastMCP server: the mcp.tool() decorator is applied to the imported assessment_results.get_assessment_log_output function.
    mcp.tool()(assessment_results.get_assessment_log_output)
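The @handle_api_errors decorator is referenced but not shown in the excerpt above. A minimal sketch of what such a wrapper could look like (hypothetical, not the project's actual implementation) is:

```python
import functools

def handle_api_errors(func):
    """Hypothetical sketch: wrap an async tool handler so failures come
    back as a structured error payload instead of an unhandled exception
    surfacing to the MCP layer."""
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        try:
            return await func(*args, **kwargs)
        except Exception as exc:  # a real implementation would likely narrow this
            return {"error": str(exc)}
    return wrapper
```

Returning an error object rather than raising lets the agent inspect what went wrong (for example, a 403 from using an organization token) and adjust its next call.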
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds context beyond basic functionality: it specifies that the redirect to the log file is automatically followed, mentions admin-level access requirements, and notes it cannot be accessed with organization tokens. This provides useful behavioral insights like authentication needs and operational details, though it could benefit from mentioning rate limits or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It starts with a clear purpose statement, provides details in sections (Args, Returns, Note, See), and avoids unnecessary fluff. Every sentence adds value, such as the API endpoint reference and access note, making it efficient and front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (a single-parameter read operation), the lack of annotations, and the absence of an output schema, the description does a solid job. It explains what the tool does, the parameter, the return value, the access requirements, and provides a reference. It could be more complete by detailing the log format or error cases, but for a tool with minimal structured data, it covers the key aspects adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must compensate. It adds meaningful semantics for the single parameter: 'assessment_result_id: The ID of the assessment result to retrieve logs for (format: "asmtres-xxxxxxxx")'. This clarifies the parameter's purpose and format, which is not covered in the schema. Since there's only one parameter and the description fully documents it, this earns a high score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
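The quoted format constraint can be checked locally with the same regular expression the Pydantic model enforces; the sketch below uses only the standard library:

```python
import re

# Same pattern as in the AssessmentResultRequest model shown earlier.
ASSESSMENT_ID_PATTERN = re.compile(r"^asmtres-[a-zA-Z0-9]{8,}$")

def is_valid_assessment_id(value: str) -> bool:
    """Return True when the value matches the expected ID shape."""
    return ASSESSMENT_ID_PATTERN.fullmatch(value) is not None

# is_valid_assessment_id("asmtres-UG5rE9L1")  -> True
# is_valid_assessment_id("run-abc12345")      -> False
```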

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Retrieve logs from an assessment result' and 'Gets the raw log output from a Terraform Cloud assessment operation'. It specifies the resource (assessment logs) and action (retrieve/get), but doesn't explicitly differentiate from sibling tools like 'get_apply_logs' or 'get_plan_logs', which is why it doesn't reach a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage context: it mentions that this is for Terraform Cloud assessment operations and includes a note about admin access requirements. However, it doesn't explicitly state when to use this tool versus alternatives like 'get_assessment_json_output' or other log-retrieval tools in the sibling list, leaving usage somewhat implied rather than clearly guided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
