bpftrace MCP Server: generate eBPF programs to trace the Linux kernel

A minimal MCP (Model Context Protocol) server that provides AI assistants with access to bpftrace kernel tracing capabilities.

Now implemented in Rust using the rmcp crate for better performance and type safety. The Python implementation is still available in the git history.

[Demo: bpftrace MCP Server]

Features

  • list_probes: List available bpftrace probes with optional filtering
  • list_helpers: Get information about bpftrace helper functions
  • exec_program: Execute bpftrace programs with buffered output
  • get_result: Retrieve execution results asynchronously
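Taken together, these tools support a list → execute → fetch workflow. The sketch below uses the same Python-style pseudocode as the Usage Examples later in this README; the exact invocation syntax depends on your MCP client, and the sys_enter_openat probe is only an illustrative choice.

# Hypothetical end-to-end flow built from the tools listed above.
# 1. Discover probes related to the openat syscall
probes = await list_probes(filter="syscalls:*openat*")

# 2. Start a short trace; exec_program returns an execution ID rather than streaming output
result = await exec_program(
    'tracepoint:syscalls:sys_enter_openat { printf("%s\\n", comm); }',
    timeout=10
)
exec_id = result["execution_id"]

# 3. Fetch the buffered output after the trace has had time to run
output = await get_result(exec_id)
print(output["output"])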

Installation

Prerequisites

  1. Install Rust (if not already installed):

     curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

  2. Ensure bpftrace is installed:

     sudo apt-get install bpftrace   # Ubuntu/Debian
     # or
     sudo dnf install bpftrace       # Fedora

  3. Build the server:

     cargo build --release

Quick Setup

Use our automated setup scripts:

  • Claude Desktop: ./setup/setup_claude.sh
  • Claude Code: ./setup/setup_claude_code.sh

For detailed setup instructions and manual configuration, see setup/SETUP.md.

Running the Server

Direct Execution

./target/release/bpftrace-mcp-server

Through Cargo

cargo run --release

Manual Configuration

For manual setup instructions for Claude Desktop or Claude Code, see setup/SETUP.md.

Usage Examples

List System Call Probes

await list_probes(filter="syscalls:*read*")

Get BPF System Information

info = await bpf_info()
# Returns system info, kernel helpers, features, map types, and probe types

Execute a Simple Trace

result = await exec_program(
    'tracepoint:syscalls:sys_enter_open { printf("%s\\n", comm); }',
    timeout=10
)
exec_id = result["execution_id"]

Get Results

output = await get_result(exec_id)
print(output["output"])
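Because output is buffered rather than streamed, a client typically polls get_result until the trace completes or its timeout expires. A minimal polling sketch, assuming the same call style as above; the completion check against the returned output field is only illustrative, since the actual result schema may expose a dedicated status field:

import asyncio

output = None
for _ in range(15):                  # poll for up to ~15 s for a trace started with timeout=10
    output = await get_result(exec_id)
    if output["output"]:             # illustrative check; real completion signaling may differ
        break
    await asyncio.sleep(1)

print(output["output"] if output else "no output collected")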

Security Notes

  • The server requires sudo access for bpftrace
  • Password Handling: Create a .env file with your sudo password:
    echo "BPFTRACE_PASSWD=your_sudo_password" > .env
  • Alternative: Configure passwordless sudo for bpftrace:
    sudo visudo
    # Add: your_username ALL=(ALL) NOPASSWD: /usr/bin/bpftrace
  • No script validation: the server trusts the AI client to generate safe scripts
  • Resource limits: 60-second maximum execution time and a 10,000-line output buffer
  • See SECURITY.md for detailed security configuration
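For the .env approach, one plausible mechanism is piping the password to sudo's standard input with sudo -S. The Python sketch below is only a conceptual illustration of that pattern, not the Rust server's actual code, and the .env parsing is deliberately simplistic:

import asyncio

# Read BPFTRACE_PASSWD from a local .env file (illustrative, no extra dependencies)
passwd = None
for line in open(".env"):
    if line.startswith("BPFTRACE_PASSWD="):
        passwd = line.split("=", 1)[1].strip()

async def run_with_sudo(script: str) -> str:
    # `sudo -S` reads the password from stdin instead of the terminal
    proc = await asyncio.create_subprocess_exec(
        "sudo", "-S", "bpftrace", "-e", script,
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT,
    )
    out, _ = await proc.communicate(input=(passwd + "\n").encode())
    return out.decode()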

Architecture

The Rust server uses:

  • Tokio async runtime for concurrent operations
  • Subprocess management for bpftrace execution
  • DashMap for thread-safe in-memory buffering
  • Automatic cleanup of old buffers
  • rmcp crate for MCP protocol implementation
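The sketch below illustrates this buffering model conceptually. It is Python pseudocode, not the Rust implementation; in the real server, Tokio tasks and a DashMap play the roles of the background task and dictionary shown here, and the limits mirror the 60-second / 10,000-line figures from the Security Notes:

import asyncio, uuid

buffers = {}   # execution_id -> list of output lines (DashMap's role in the Rust server)

async def run_bpftrace(script: str, timeout: int = 60) -> str:
    """Spawn bpftrace, buffer its stdout in the background, return an execution ID."""
    exec_id = str(uuid.uuid4())
    buffers[exec_id] = []

    async def collect():
        proc = await asyncio.create_subprocess_exec(
            "sudo", "bpftrace", "-e", script,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.STDOUT,
        )

        async def read_lines():
            async for line in proc.stdout:
                if len(buffers[exec_id]) < 10_000:    # cap the buffer at 10k lines
                    buffers[exec_id].append(line.decode())

        try:
            await asyncio.wait_for(read_lines(), timeout=min(timeout, 60))   # 60 s hard cap
        except asyncio.TimeoutError:
            proc.kill()
        await proc.wait()

    asyncio.create_task(collect())   # run in the background, like a spawned Tokio task
    return exec_id

def get_result(exec_id: str) -> str:
    return "".join(buffers.get(exec_id, []))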

Limitations

  • No real-time streaming (use get_result to poll)
  • Simple password handling (improve for production)
  • No persistent storage of executions
  • Basic error handling

Documentation

Future Enhancements

  • Add SSE transport for real-time streaming
  • Implement proper authentication
  • Add script validation and sandboxing
  • Support for saving/loading trace sessions
  • Integration with eBPF programs
