Server Configuration

Describes the environment variables used to configure the server. All are optional.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| SCALENE_TIMEOUT | No | Timeout in seconds for profiling sessions. | |
| SCALENE_MALLOC_THRESHOLD | No | Override default malloc threshold in bytes for reporting. | |
| SCALENE_PYTHON_EXECUTABLE | No | The Python executable to use for profiling (e.g., python3.11). | |
| SCALENE_CPU_PERCENT_THRESHOLD | No | Override default CPU percentage threshold for reporting high-activity lines. | |
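
These variables are set in the environment of the server process, typically via the MCP client's server configuration. A minimal sketch of such an entry (the server name, launch command, and values are illustrative assumptions, not taken from this listing):

{
  "mcpServers": {
    "scalene-mcp": {
      "command": "uvx",
      "args": ["scalene-mcp"],
      "env": {
        "SCALENE_TIMEOUT": "300",
        "SCALENE_PYTHON_EXECUTABLE": "python3.11",
        "SCALENE_CPU_PERCENT_THRESHOLD": "1",
        "SCALENE_MALLOC_THRESHOLD": "100"
      }
    }
  }
}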

Capabilities

Features and capabilities supported by this server

| Capability | Details |
| --- | --- |
| tools | {"listChanged": true} |
| prompts | {"listChanged": false} |
| resources | {"subscribe": false, "listChanged": false} |
| experimental | {"tasks": {"list": {}, "cancel": {}, "requests": {"tools": {"call": {}}, "prompts": {"get": {}}, "resources": {"read": {}}}}} |

Tools

Functions exposed to the LLM to take actions

get_project_root

Get the detected project root and structure type.

Returns: {root, type, markers_found}
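
For illustration only, a result might look like the following; all values, including the type label, are assumptions rather than documented output:

{
  "root": "/home/user/myproject",
  "type": "python",
  "markers_found": ["pyproject.toml", ".git"]
}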

list_project_files

List project files matching pattern, relative to project root.

Args:
  pattern: Glob pattern (*.py, src/**, etc.)
  max_depth: Maximum directory depth to search
  exclude_patterns: Comma-separated patterns to exclude

Returns: [relative_path, ...] sorted alphabetically
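
A sketch of a tools/call arguments object for list_project_files; the specific values are illustrative assumptions:

{
  "name": "list_project_files",
  "arguments": {
    "pattern": "src/**",
    "max_depth": 4,
    "exclude_patterns": "tests,__pycache__"
  }
}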

set_project_context

Explicitly set the project root (overrides auto-detection).

Use this if auto-detection fails or gives wrong path.

Args: project_root: Absolute path to project root

Returns: {project_root, status}
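
A minimal sketch of the arguments for set_project_context; the path is a placeholder, not a real value:

{
  "name": "set_project_context",
  "arguments": {
    "project_root": "/home/user/myproject"
  }
}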

profile

Profile Python code using Scalene.

Args:
  type: "script" (profile a file) or "code" (profile code snippet)
  script_path: Required if type="script". Path to Python script
  code: Required if type="code". Python code to execute
  cpu_only: Skip memory/GPU profiling
  include_memory: Profile memory allocations
  include_gpu: Profile GPU usage (requires NVIDIA GPU)
  reduced_profile: Show only lines >1% CPU or >100 allocations
  profile_only: Comma-separated paths to include (e.g., "myapp")
  profile_exclude: Comma-separated paths to exclude (e.g., "test,vendor")
  use_virtual_time: Measure CPU time excluding I/O wait
  cpu_percent_threshold: Minimum CPU % to report
  malloc_threshold: Minimum allocation bytes to report
  script_args: Command-line arguments for the script

Returns: {profile_id, summary, text_summary}
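
A sketch of a profile call that profiles a script with memory profiling and a reduced profile; the script path and filter value are illustrative assumptions:

{
  "name": "profile",
  "arguments": {
    "type": "script",
    "script_path": "src/main.py",
    "include_memory": true,
    "reduced_profile": true,
    "profile_only": "myapp"
  }
}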

analyze

Analyze profiling data with flexible analysis types.

Args:
  profile_id: Profile ID from profile()
  metric_type: "all", "cpu", "memory", "gpu", "bottlenecks", "leaks", "file", "functions", "recommendations"
  top_n: Number of items to return (for rankings)
  cpu_threshold: Minimum CPU % to flag bottleneck
  memory_threshold_mb: Minimum MB to flag bottleneck
  filename: Required if metric_type="file", file to analyze

Returns: {metric_type, data, summary}; structure varies by metric_type
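
A sketch of an analyze call that asks for the top CPU bottlenecks from an earlier profile; the ID is a placeholder and the threshold values are assumptions:

{
  "name": "analyze",
  "arguments": {
    "profile_id": "<id returned by profile>",
    "metric_type": "bottlenecks",
    "top_n": 5,
    "cpu_threshold": 10.0
  }
}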

compare_profiles

Compare two profiles to measure optimization impact.

Args:
  before_id: Profile ID from original code
  after_id: Profile ID from optimized code

Returns: {runtime_change_pct, memory_change_pct, improvements, regressions, summary_text}
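
A sketch of a compare_profiles call; both IDs are placeholders for values returned by earlier profile calls:

{
  "name": "compare_profiles",
  "arguments": {
    "before_id": "<id of the original run>",
    "after_id": "<id of the optimized run>"
  }
}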

list_profiles

List all captured profiles in this session.

Returns: [profile_id, ...]

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ptmorris05/scalene-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.