
This repository is supported by Linux.do

Colab MCP

Local-first MCP server for controlling Google Colab as a development, shell, file, and training runtime.

Chinese documentation: README.zh-CN.md

What This Provides

This fork extends the upstream static Colab MCP baseline with:

  • notebook editing, batch execution, status polling, and output reading

  • controlled Edge browser startup and Colab frontend MCP repair

  • explicit Python runtime connection with connect_runtime

  • MCP-native runtime accelerator switching, including T4 GPU selection

  • Colab Terminal-backed shell commands and background jobs

  • local-path file upload/download tools

  • runtime file list/stat/delete/directory tools

  • GPU checks, snapshots, and monitor jobs

  • environment variable helpers and runtime restart/shutdown tools

Project Layout

  • src/colab_mcp/: Python MCP server, Colab session proxy, runtime/browser tools.

  • tests/: unit tests for proxy, tool behavior, and websocket server behavior.

  • docs/: usage, tool inventory, examples, troubleshooting, and API conventions.

  • TODO.md: implementation checklist and follow-up notes.

Setup

uv sync --dev
uv run pytest

Run the MCP server:

uv run colab-mcp

MCP client configuration:

{
  "mcpServers": {
    "colab-mcp": {
      "command": "uv",
      "args": ["run", "--directory", "<path-to-colab_mcp>", "colab-mcp"],
      "startup_timeout_sec": 120,
      "env": {
        "COLAB_MCP_EDGE_CDP_PORT": "9333",
        "COLAB_MCP_EDGE_URL_CONTAINS": "colab.research.google.com"
      }
    }
  }
}

Replace <path-to-colab_mcp> with the absolute path to your local checkout.

Runtime Readiness

Do not assume the Python runtime is ready just because the browser MCP is connected. Shell, file, GPU, and training tools require a real Colab runtime and terminal socket.
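One way to enforce this in a client is a small readiness gate before any runtime-dependent tool call. This is an illustrative sketch only: `call_tool` is a hypothetical client helper, and the result fields follow the contract described below rather than a documented payload schema.

```python
def runtime_ready(call_tool) -> bool:
    """Return True only when the Python runtime itself reports ready,
    not merely when the browser MCP connection is up."""
    info = call_tool("check_runtime")
    return bool(info.get("ok")) and info.get("status") == "ok"


def ensure_runtime(call_tool, wait_seconds: int = 180) -> None:
    """Reconnect the browser MCP and the Python runtime if needed."""
    if not runtime_ready(call_tool):
        call_tool("open_colab_browser_connection")
        call_tool("connect_runtime", waitSeconds=wait_seconds)
```

Calling `ensure_runtime` before shell, file, GPU, or training tools avoids acting on a runtime that only looks connected at the browser layer.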

Result Contract

Tools use structured status fields:

  • ok=true, status="ok": operation completed without known warnings.

  • ok=false, status="warning": prerequisite missing, partial/no-op result, or recoverable state. Read warnings and recommendedNextActions.

  • ok=false, status="error": execution failed, timed out, returned a non-zero exit code, or hit an explicit tool error. Inspect error, stdout, stderr, and recommendedNextActions before retrying.

Do not continue a workflow after warning or error without handling the returned instructions.
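A minimal client-side sketch of this contract, assuming only the fields listed above (the `check_result` helper is not part of this server):

```python
def check_result(result: dict) -> dict:
    """Apply the result contract: pass through on ok, stop on
    warning or error so the caller must handle the instructions."""
    if result.get("ok") and result.get("status") == "ok":
        return result
    if result.get("status") == "warning":
        raise RuntimeError(
            f"warning: {result.get('warnings')}; "
            f"next: {result.get('recommendedNextActions')}"
        )
    # status == "error" or anything unexpected
    raise RuntimeError(
        f"error: {result.get('error')}; stderr: {result.get('stderr')}"
    )


ok = check_result({"ok": True, "status": "ok", "stdout": "done"})
```

Raising on both `warning` and `error` is one design choice; a client could instead branch on `recommendedNextActions` and retry automatically.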

Browser And Runtime Flow

There are two separate connection layers:

  1. Browser/frontend MCP connection.

  2. Colab Python runtime connection.

For normal runtime work:

open_colab_browser_connection()
connect_runtime(waitSeconds=180)

For GPU work:

open_colab_browser_connection()
set_runtime_accelerator(accelerator="T4 GPU", apply=true)
open_colab_browser_connection()
connect_runtime(waitSeconds=180)
check_gpu()

Changing the accelerator can restart or disconnect the runtime. Always reconnect the browser MCP and then call connect_runtime before shell, file, GPU, or training tools.

Common Training Flow

All core training steps can run through MCP tools:

open_colab_browser_connection()
set_runtime_accelerator(accelerator="T4 GPU", apply=true)
open_colab_browser_connection()
connect_runtime(waitSeconds=180)
check_gpu()
upload_local_file(localPath="<path-to-train.py>", path="/content/train.py", overwrite=true)
start_background_command(
  name="train",
  command="python /content/train.py",
  logPath="/content/train.log",
  cwd="/content"
)
watch_background_command(name="train", lines=100)
download_file_to_local(path="/content/model.pt", localPath="<path-to-model.pt>", overwrite=true)
shutdown_runtime(reason="training finished")

Use start_background_command for training and other long jobs. Do not use run_shell_command for training; it is intended for short, bounded commands.

Release Or Close The Runtime Instance

Always release the Colab runtime when work is finished, cancelled, or no longer needs CPU/GPU resources:

shutdown_runtime(reason="training finished")

Recommended completion flow:

stat_file(path="/content/model.pt")
download_file_to_local(path="/content/model.pt", localPath="<path-to-model.pt>", overwrite=true)
shutdown_runtime(reason="training finished")

Recommended cancellation flow:

check_background_command(name="train")
stop_background_command(name="train")
shutdown_runtime(reason="training cancelled")

Important notes:

  • Download weights, logs, and other artifacts before shutdown. Files under /content can be lost after the runtime is released.

  • shutdown_runtime releases/disconnects the active Colab CPU/GPU runtime instance. It does not uninstall this MCP server and does not close the browser tab.

  • After shutdown, future runtime work must reconnect with open_colab_browser_connection() and connect_runtime(waitSeconds=180).

Local File Transfer

Prefer local-path tools when moving files between this machine and Colab:

upload_local_file(localPath="<local-file>", path="/content/input.txt", overwrite=true)
download_file_to_local(path="/content/output.txt", localPath="<local-output>", overwrite=true)

Base64 tools remain available for clients that already have content in memory:

upload_file(path="/content/input.txt", contentBase64="...")
download_file(path="/content/output.txt")
upload_file_chunk(...)
download_file_chunk(...)
complete_upload(...)
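For large in-memory payloads, the chunked tools can be driven like this. The sketch below is an assumption about the flow, not the real schema: `call_tool` is a hypothetical client helper, and the `chunkIndex` parameter name is illustrative; check the tool inventory under docs/ for the actual parameters.

```python
import base64


def iter_base64_chunks(data: bytes, chunk_size: int = 256 * 1024):
    """Yield base64-encoded chunks of `data` in order."""
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        yield base64.b64encode(chunk).decode("ascii")


def upload_in_chunks(call_tool, local_bytes: bytes, remote_path: str) -> None:
    """Send each chunk, then finalize with complete_upload."""
    for index, chunk in enumerate(iter_base64_chunks(local_bytes)):
        call_tool("upload_file_chunk", path=remote_path,
                  chunkIndex=index, contentBase64=chunk)
    call_tool("complete_upload", path=remote_path)
```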

Tool Groups

Connection:

  • get_connection_info

Browser, runtime, and accelerator:

  • open_colab_browser_connection

  • connect_runtime

  • check_runtime

  • restart_runtime

  • shutdown_runtime

  • set_runtime_accelerator

Shell and long jobs:

  • run_shell_command

  • start_background_command

  • check_background_command

  • watch_background_command

  • list_background_commands

  • stop_background_command

  • tail_file

Runtime files:

  • upload_local_file

  • download_file_to_local

  • upload_file

  • download_file

  • upload_file_chunk

  • complete_upload

  • download_file_chunk

  • stat_file

  • list_files

  • make_directory

  • delete_file

GPU and resource monitoring:

  • check_gpu

  • resource_usage_snapshot

  • sample_gpu_usage

  • start_gpu_monitor

  • read_gpu_monitor

  • stop_gpu_monitor

Notebook cells:

  • get_cells

  • get_cell

  • add_code_cell

  • add_text_cell

  • update_cell

  • patch_cell

  • delete_cell

  • move_cell

  • find_cells

  • replace_cells

  • run_code_cell

  • run_code_cells

  • run_cell_range

  • run_all_cells

  • cancel_queued_cells

  • get_cell_status

  • wait_for_cells

  • read_cell_outputs

  • watch_cell_outputs

Environment:

  • set_env_vars

  • get_env_vars

  • unset_env_vars

  • load_env_file

  • rerun_env_setup_cells

Notebook import/export:

  • import_notebook

  • export_notebook

  • upload_notebook (alias for import_notebook)

  • download_notebook (alias for export_notebook)

Browser Control

open_colab_browser_connection can start a dedicated Microsoft Edge instance, open Colab, and connect the Colab frontend to the local MCP server.

Defaults:

  • CDP port: 9333, override with COLAB_MCP_EDGE_CDP_PORT

  • Edge profile: ~/.codex/edge-colab-mcp-profile, override with COLAB_MCP_EDGE_PROFILE

  • Edge executable: auto-detected, override with COLAB_MCP_EDGE_PATH
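These defaults can be overridden in the environment before starting the server; the port and paths below are illustrative values, not requirements:

```shell
export COLAB_MCP_EDGE_CDP_PORT=9444
export COLAB_MCP_EDGE_PROFILE="$HOME/.codex/edge-colab-mcp-profile"
export COLAB_MCP_EDGE_PATH=/usr/bin/microsoft-edge
uv run colab-mcp
```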

colabctl remains available for diagnostics and manual repair:

uv run colabctl status
uv run colabctl connect
uv run colabctl smoke-mcp
uv run colabctl set-accelerator --accelerator GPU

Normal automation should prefer MCP tools over colabctl.

Verification

Run local tests:

uv run pytest
uv run python -m compileall src

The current flow has been validated with a full MCP-only ResCNN smoke test:

  • selected T4 through MCP

  • connected the Python runtime through MCP

  • confirmed Tesla T4 with check_gpu

  • uploaded a local training script

  • trained with start_background_command

  • watched logs with watch_background_command

  • downloaded weights with download_file_to_local

  • shut down the runtime with shutdown_runtime

Acknowledgements

Thanks to the linux.do laoyou for their support. "Laoyou" is the community's own friendly term for respected peers.

Privacy Notes

The README uses placeholders such as <path-to-colab_mcp> and does not include local usernames, personal checkout paths, tokens, or runtime artifacts. Generated training artifacts should stay under artifacts/, which is ignored by git.
