jamesbrink

MCP Server for Coroot

get_application_rca

Analyze application issues to identify root causes of incidents, performance degradation, or failures using AI-powered insights.

Instructions

Get AI-powered root cause analysis for application issues.

Analyzes application problems and provides insights into the root causes of incidents, performance degradation, or failures.

Args:
    project_id: Project ID
    app_id: Application ID (format: namespace/kind/name)

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| project_id | Yes | | |
| app_id | Yes | | |

Output Schema


No arguments

Implementation Reference

• MCP tool handler function for get_application_rca. This is the entry point for the tool execution in the FastMCP server.

```python
@mcp.tool()
async def get_application_rca(
    project_id: str,
    app_id: str,
) -> dict[str, Any]:
    """Get AI-powered root cause analysis for application issues.

    Analyzes application problems and provides insights into the root
    causes of incidents, performance degradation, or failures.

    Args:
        project_id: Project ID
        app_id: Application ID (format: namespace/kind/name)
    """
    return await get_application_rca_impl(project_id, app_id)  # type: ignore[no-any-return]
```
• Implementation function that calls the CorootClient's get_application_rca method and wraps the response.

```python
async def get_application_rca_impl(
    project_id: str,
    app_id: str,
) -> dict[str, Any]:
    """Get root cause analysis for an application."""
    rca = await get_client().get_application_rca(project_id, app_id)
    return {
        "success": True,
        "rca": rca,
    }
```
• CorootClient method that performs the actual HTTP GET request to the Coroot API's RCA endpoint.

```python
async def get_application_rca(self, project_id: str, app_id: str) -> dict[str, Any]:
    """Get root cause analysis for an application.

    Args:
        project_id: Project ID.
        app_id: Application ID (format: namespace/kind/name).

    Returns:
        Root cause analysis results.
    """
    # URL-encode the app_id since it contains slashes
    from urllib.parse import quote

    encoded_app_id = quote(app_id, safe="")

    response = await self._request(
        "GET", f"/api/project/{project_id}/app/{encoded_app_id}/rca"
    )
    data: dict[str, Any] = response.json()
    return data
```
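Because `app_id` contains slashes, `quote(..., safe="")` percent-encodes them as `%2F` so the whole ID occupies a single path segment of the request URL. A quick standard-library check (the `app_id` and project values are hypothetical examples):

```python
from urllib.parse import quote

app_id = "default/Deployment/my-app"  # hypothetical namespace/kind/name ID
encoded = quote(app_id, safe="")
print(encoded)  # → default%2FDeployment%2Fmy-app
print(f"/api/project/my-project/app/{encoded}/rca")
# → /api/project/my-project/app/default%2FDeployment%2Fmy-app/rca
```

With the default `safe="/"`, `quote` would leave the slashes intact and the path would gain two extra segments, so passing `safe=""` here is essential.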
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but only states what the tool does, not how it behaves. It doesn't disclose whether this is a read-only operation, whether it requires specific permissions, what latency to expect, any rate limits, or what 'AI-powered' entails in practice.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a clear purpose statement, elaboration, and parameter documentation in three focused sections. Every sentence adds value without redundancy, and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 2 parameters with good semantic coverage in the description and an output schema (which handles return values), the description is reasonably complete. However, as a diagnostic tool with no annotations, it could better explain behavioral aspects like analysis depth or result format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the description's Args section provides clear semantics for both parameters (project_id and app_id), including format guidance for app_id ('namespace/kind/name'). This compensates well for the schema's lack of descriptions, though it doesn't explain parameter constraints or examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get AI-powered root cause analysis') and resources ('for application issues'), distinguishing it from siblings like get_application_logs or get_application_profiling by focusing on root cause analysis rather than raw data retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('Analyzes application problems... incidents, performance degradation, or failures') but doesn't explicitly state when to use this tool versus alternatives like get_incident or get_application_traces. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
