
MCP Server for Coroot

get_application_profiling

Retrieve CPU and memory profiling data with flame graphs to identify performance bottlenecks and optimization opportunities in applications.

Instructions

Get CPU and memory profiling data for an application.

Retrieves profiling data including flame graphs for CPU usage and memory allocation patterns to help identify performance bottlenecks and optimization opportunities.

⚠️ WARNING: This endpoint can return extremely large responses (180k+ tokens) for applications with extensive profiling data. Consider using time filters to limit the response size to specific time windows.

Args:
    project_id: Project ID
    app_id: Application ID (format: namespace/kind/name)
    from_timestamp: Start timestamp (optional, strongly recommended)
    to_timestamp: End timestamp (optional, strongly recommended)
    query: Search query (optional)
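As a hedged illustration of the time filters recommended above, a caller might bound the request to a recent window before invoking the tool (the project and application IDs below are placeholders, not real values):

```python
import time

# Hypothetical example: limit profiling data to the last hour, as the
# large-response warning above recommends. IDs are placeholders.
to_ts = int(time.time())   # end of the window (now)
from_ts = to_ts - 3600     # start of the window (one hour ago)

arguments = {
    "project_id": "my-project",               # placeholder project ID
    "app_id": "default/Deployment/frontend",  # namespace/kind/name
    "from_timestamp": from_ts,
    "to_timestamp": to_ts,
}
```

Narrowing the window bounds the amount of profiling data the server has to serialize, which keeps the response well under the 180k-token worst case.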

Input Schema

Name            Required
project_id      Yes
app_id          Yes
from_timestamp  No
to_timestamp    No
query           No

Output Schema

No arguments

Implementation Reference

  • MCP tool handler for 'get_application_profiling'. Decorated with @mcp.tool() for registration, includes docstring serving as schema description, and delegates to the implementation.
    @mcp.tool()
    async def get_application_profiling(
        project_id: str,
        app_id: str,
        from_timestamp: int | None = None,
        to_timestamp: int | None = None,
        query: str | None = None,
    ) -> dict[str, Any]:
        """Get CPU and memory profiling data for an application.
    
        Retrieves profiling data including flame graphs for CPU usage
        and memory allocation patterns to help identify performance
        bottlenecks and optimization opportunities.
    
        ⚠️ WARNING: This endpoint can return extremely large responses (180k+ tokens)
        for applications with extensive profiling data. Consider using time filters
        to limit the response size to specific time windows.
    
        Args:
            project_id: Project ID
            app_id: Application ID (format: namespace/kind/name)
            from_timestamp: Start timestamp (optional, strongly recommended)
            to_timestamp: End timestamp (optional, strongly recommended)
            query: Search query (optional)
        """
        return await get_application_profiling_impl(  # type: ignore[no-any-return]
            project_id, app_id, from_timestamp, to_timestamp, query
        )
  • Implementation helper function decorated with @handle_errors that calls CorootClient.get_application_profiling and formats the response.
    @handle_errors
    async def get_application_profiling_impl(
        project_id: str,
        app_id: str,
        from_timestamp: int | None = None,
        to_timestamp: int | None = None,
        query: str | None = None,
    ) -> dict[str, Any]:
        """Get profiling data for an application."""
        profiling = await get_client().get_application_profiling(
            project_id, app_id, from_timestamp, to_timestamp, query
        )
        return {
            "success": True,
            "profiling": profiling,
        }
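The @handle_errors decorator itself is not shown on this page. A minimal sketch of what such a decorator might do, assuming it converts exceptions from the wrapped coroutine into a structured failure payload mirroring the {"success": ...} shape above (the real implementation may differ):

```python
import asyncio
import functools
from typing import Any, Awaitable, Callable

# Hypothetical sketch of an error-handling decorator like @handle_errors;
# the actual implementation in this server may differ. It catches exceptions
# raised by the wrapped coroutine and turns them into a failure response.
def handle_errors(
    func: Callable[..., Awaitable[dict[str, Any]]],
) -> Callable[..., Awaitable[dict[str, Any]]]:
    @functools.wraps(func)
    async def wrapper(*args: Any, **kwargs: Any) -> dict[str, Any]:
        try:
            return await func(*args, **kwargs)
        except Exception as exc:
            return {"success": False, "error": str(exc)}
    return wrapper

@handle_errors
async def _demo() -> dict[str, Any]:
    raise ValueError("upstream API error")

result = asyncio.run(_demo())
print(result)  # {'success': False, 'error': 'upstream API error'}
```

Keeping error handling in a decorator lets every tool implementation return the same envelope without repeating try/except blocks.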
  • CorootClient method that performs the HTTP GET request to the Coroot API endpoint for application profiling data, handling URL encoding and parameters.
    async def get_application_profiling(
        self,
        project_id: str,
        app_id: str,
        from_timestamp: int | None = None,
        to_timestamp: int | None = None,
        query: str | None = None,
    ) -> dict[str, Any]:
        """Get profiling data for an application.
    
        Args:
            project_id: Project ID.
            app_id: Application ID (format: namespace/kind/name).
            from_timestamp: Start timestamp.
            to_timestamp: End timestamp.
            query: Search query.
    
        Returns:
            Profiling data and flame graphs.
        """
        # URL encode the app_id since it contains slashes
        from urllib.parse import quote
    
        encoded_app_id = quote(app_id, safe="")
    
        params = {}
        # Compare against None so a timestamp of 0 (the Unix epoch) is kept.
        if from_timestamp is not None:
            params["from"] = str(from_timestamp)
        if to_timestamp is not None:
            params["to"] = str(to_timestamp)
        if query:
            params["query"] = query
    
        response = await self._request(
            "GET",
            f"/api/project/{project_id}/app/{encoded_app_id}/profiling",
            params=params,
        )
        data: dict[str, Any] = response.json()
        return data
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds context beyond basic functionality by warning about 'extremely large responses (180k+ tokens)' and recommending time filters to manage size. It also implies read-only behavior through 'Get' and 'Retrieves', though it does not detail rate limits or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose, followed by details and warnings, then parameter explanations. Every sentence adds value, such as the warning about response size and parameter recommendations, with no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, 0% schema coverage, no annotations) and the presence of an output schema, the description is mostly complete. It covers purpose, usage, behavioral warnings, and parameter semantics adequately. However, it could improve by detailing output structure or error handling, though the output schema mitigates some of this need.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds meaningful semantics by explaining parameters in the 'Args' section, such as 'Application ID (format: namespace/kind/name)' and noting that timestamps are 'optional, strongly recommended'. This clarifies usage beyond the schema's basic types, though it could provide more on query syntax or timestamp formats.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get', 'Retrieves') and resources ('CPU and memory profiling data for an application'), including details like 'flame graphs' and 'performance bottlenecks'. It distinguishes itself from sibling tools like get_application_logs or get_application_traces by focusing on profiling data rather than logs or traces.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('to help identify performance bottlenecks and optimization opportunities') and includes a warning about large responses with a recommendation to use time filters. However, it does not explicitly mention when not to use it or name specific alternatives among siblings, such as get_application_logs for logs instead of profiling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
