pcq
pcq is the contract for agent-run ML experiments. This repository hosts the contract specification under
spec/ and the reference Python implementation under src/pcq/. Install the reference implementation: uv add pcq (Apache-2.0).
The contract turns a project with cq.yaml into a reproducible experiment
unit. The reference Python implementation loads config, resolves output
paths, captures metrics, writes standard artifacts, finalizes run evidence,
and exposes JSON/JSONL/MCP surfaces that coding agents, CI jobs, notebooks,
and services can consume. See spec/IMPLEMENTATIONS.md
for the registered implementation list (Python reference + CQ Go production
worker today) and the procedure for adding yours.
pcq is not a training framework, model zoo, adapter matrix, or CQ-only
client. Use PyTorch, Hugging Face Trainer, Lightning, sklearn, TabPFN, PyCaret,
XGBoost, shell scripts, remote jobs, or project-local research code. The
contract is the integration layer.
pcq does not operate the model.
pcq operates the experiment boundary.
Docs: SITE | INTRODUCTION | V4_DIRECTION | VISION | AGENT_OPERABILITY | RUN_RECORD | AGENT_OPERATING_GUIDE | CHANGELOG
Contract specification (single source of truth):
spec/INDEX.md |
SPEC |
CQ_YAML_RUNTIME_CONTRACT |
JSON_CONTRACTS |
STRICTNESS |
CQ_MCP_SPEC |
VERSIONING |
CONFORMANCE |
schemas/ (auto-exported via scripts/export_schemas.py)
Case studies (external evidence): mnist-dogfood | tabular-dogfood | mcp-dogfood | cq-worker-dogfood
Agent-readable site files: llms.txt, llms-full.txt, agent-manifest.json.
Identity
pcq = open-source experiment evidence/control library
cq = managed execution + orchestration + dashboard + agent loop
The CQ service is one managed consumer of the contract. pcq remains useful without
CQ: locally, in CI, in notebooks, and inside third-party orchestrators.
Why pcq
Framework-neutral — keep the training stack that fits the problem.
Agent-readable — use JSON/JSONL instead of terminal scraping.
Agent-verifiable — validate source, config, environment, metrics, artifacts, and run records.
Agent-operable — run, observe, validate, describe, compare, lineage, and iterate through stable commands.
Service-ready — CQ can consume the same contract for managed execution and automatic experiment loops.
Installation
uv add pcq
# Optional — to expose pcq as MCP tools to agent runtimes:
uv add 'pcq[mcp]'
In pyproject.toml:
[project]
dependencies = ["pcq"] # core only
# or:
dependencies = ["pcq[mcp]"] # core + Model Context Protocol server
Docker (MCP server only)
A minimal container image is also published; it packages
pcq[mcp] from PyPI and runs pcq mcp serve on stdio.
docker build -t pcq .
docker run -i --rm pcq # MCP client attaches to stdin/stdout
The image is intentionally scoped to the MCP server surface. For
pcq run, pcq describe-run, pcq agent install and other CLI
subcommands, install pcq directly with uv add pcq instead.
For a tag, branch, or private fork:
[tool.uv.sources]
pcq = { git = "https://github.com/playidea-lab/pcq.git", tag = "v4.1.0" }
The PyPI distribution, import name, CLI command, GitHub repository, runtime
workspace, and JSON contract namespace are all pcq. Runtime contract names
from CQ remain stable: cq.yaml, CQ_CONFIG_JSON, and cq://.
Minimal Contract
cq.yaml declares the run:
name: sklearn-baseline
cmd: uv run python train.py
configs:
  output_dir: output
  seed: 42
strictness: 3
monitor: eval_acc
mode: max
metrics:
  - epoch
  - eval_acc
artifacts:
  - output/
inputs: {}
train.py can use any framework:
import pickle
import pcq
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
cfg = pcq.config()
out = pcq.output_dir()
pcq.seed_everything(cfg.get("seed", 42))
x, y = load_iris(return_X_y=True)
x_train, x_eval, y_train, y_eval = train_test_split(
    x,
    y,
    test_size=0.25,
    random_state=int(cfg.get("seed", 42)),
    stratify=y,
)
model = RandomForestClassifier(random_state=int(cfg.get("seed", 42)))
model.fit(x_train, y_train)
eval_acc = float(model.score(x_eval, y_eval))
with (out / "model.pkl").open("wb") as f:
    pickle.dump(model, f)
history = [{"epoch": 0, "eval_acc": eval_acc}]
pcq.log(**history[-1])
pcq.save_all(history=history, artifacts={"model": "model.pkl"})
No sklearn adapter is required. The same pattern works for HF Trainer, Lightning, XGBoost, TabPFN, PyCaret, shell commands, or custom code.
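Because the contract lives at the cmd level, the training entry point need not even be Python. As an illustrative sketch only (the script name and metric here are hypothetical, and the key layout mirrors the cq.yaml example above), a shell-driven project could declare:

```yaml
name: shell-baseline
cmd: bash train.sh        # hypothetical script; any command that writes metrics works
configs:
  output_dir: output
  seed: 42
metrics:
  - eval_acc
artifacts:
  - output/
```

The script itself is then responsible for emitting the declared metrics and artifacts into output_dir, exactly as train.py does in the Python example.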
Agent Command Surface
Read and validate the project:
pcq resolve --json
pcq inspect . --json
pcq validate . --strictness 2 --json
Run the project:
pcq run --path . --json
pcq run --path . --jsonl
pcq run --path . --events output/events.jsonl --json
Validate and summarize outputs:
pcq validate-run output --strictness 3 --json
pcq describe-run output --json
pcq compare-runs old_output new_output --json
pcq lineage output --json
Iterate:
pcq apply-plan experiment.plan.json --json
Agent rule: prefer JSON/JSONL surfaces over scraping human output. pcq
reports facts; the agent or service chooses policy.
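The JSONL event surface is plain line-delimited JSON, so an agent can consume it with the standard library alone. A minimal sketch, assuming only one-JSON-object-per-line framing; the event field names below are illustrative, not the contract (see JSON_CONTRACTS for the real schema):

```python
import json

def read_events(path):
    """Parse a line-delimited JSON event stream: one object per non-blank line."""
    events = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:  # tolerate blank lines between events
                events.append(json.loads(line))
    return events

# Hypothetical stream for illustration; real event fields come from the spec.
with open("events.jsonl", "w") as f:
    f.write('{"event": "run_started", "name": "sklearn-baseline"}\n')
    f.write('{"event": "metric", "epoch": 0, "eval_acc": 0.95}\n')

events = read_events("events.jsonl")
print(len(events), events[-1]["event"])
```

The same loop works whether the stream comes from --events output/events.jsonl on disk or from the --jsonl run surface on stdout.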
Standard Artifacts
A completed run should produce:
config.json | metrics.json | manifest.json | run_summary.json | run_record.json | validation_report.json
run_record.json is the canonical completion object. It combines execution,
source, environment, input identity, metric schema, artifact manifest, agent
provenance, validation, and summary evidence.
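Since run_record.json is plain JSON, downstream tooling can sanity-check it without pcq installed. A minimal sketch; the top-level key names here are assumptions inferred from the evidence list above, not the contract (the authoritative schema lives under spec/schemas/):

```python
import json

# Assumed evidence sections; the authoritative list lives in the spec schemas.
EXPECTED_SECTIONS = {"execution", "source", "environment", "validation", "summary"}

def missing_sections(record: dict) -> set:
    """Return which expected evidence sections are absent from a run record."""
    return EXPECTED_SECTIONS - record.keys()

# Hypothetical record for illustration only.
record = json.loads(
    '{"execution": {}, "source": {}, "environment": {}, '
    '"validation": {}, "summary": {}}'
)
print(missing_sections(record))
```

A check like this belongs in the agent or CI policy layer; pcq itself performs the canonical validation via pcq validate-run.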
Agent Runtime Assets
pcq can install its canonical agent instructions and skill into a project.
Package installation itself never mutates project agent files.
pcq agent install --target codex --path .
pcq agent install --target claude --path .
pcq agent install --target both --path . --dry-run --json
pcq agent status --target both --path . --json
To also wire the project for MCP-aware agents (Claude Code, Codex), install
pcq[mcp] and pass --mcp:
uv add 'pcq[mcp]'
pcq agent install --target claude --path . --mcp # writes .mcp.json
pcq mcp serve # stdio (default)
This exposes 14 mcp__pcq__* tools (resolve_project, validate_run,
describe_run, compare_runs, ...) so agents call pcq directly without
subprocess parsing. See MCP Integration.
v4 Direction
v4 clarifies the product boundary:
contract-first workflow, not a 3-tier training API
project-local training code, not built-in production catalogs
contract scripts, not framework adapters
run evidence validation, not recipe ownership
JSON/JSONL facts, not prose parsing
See pcq v4 Direction.
Development
uv run ruff check src/ tests/ scripts/
uv run python -m compileall src/pcq
uv run pytest tests/ -q
bash scripts/release-smoke.sh
License
Apache-2.0.