# CI-1T MCP Server

**Version:** 1.7.0 · **Last Updated:** February 27, 2026 · **License:** Proprietary
MCP (Model Context Protocol) server for the CI-1T prediction stability engine. Lets AI agents — Claude Desktop, Cursor, Windsurf, VS Code Copilot, and any MCP-compatible client — evaluate model stability, manage fleet sessions, and control API keys directly.
One credential. One env var. That's it.
## Tools (20) + Resources (1)

| Tool | Description | Auth |
|---|---|---|
| `evaluate` | Evaluate prediction stability (floats or Q0.16) | API key |
| `fleet_evaluate` | Fleet-wide multi-node evaluation (floats or Q0.16) | API key |
| `probe` | Probe any LLM for instability (3x same prompt). BYOM mode: bring your own model via OpenAI-compatible API | API key or BYOM |
| `health` | Check CI-1T engine status | API key |
|  | Create a persistent fleet session | API key |
|  | Submit a scoring round | API key |
|  | Get session state (read-only) | API key |
|  | List active fleet sessions | API key |
|  | Delete a fleet session | API key |
|  | List user's API keys | API key |
|  | Generate and register a new API key | API key |
|  | Delete an API key by ID | API key |
| `invoices` | Get billing history (Stripe) | API key |
| `onboarding` | Welcome guide + setup instructions | None |
| `interpret_scores` | Statistical breakdown of scores | None |
| `convert_scores` | Convert between floats and Q0.16 | None |
| `generate_config` | Integration boilerplate for any framework | None |
| `compare_windows` | Compare baseline vs recent episodes for drift detection | None |
| `alert_check` | Check episodes against custom thresholds, return alerts | None |
| `visualize` | Interactive HTML visualization of evaluate results | None |
| Resource | URI | Description |
|---|---|---|
| `tools_guide` | `ci1t://tools-guide` | Full usage guide: response schemas, chaining patterns, fleet workflow, thresholds, example pipelines |
## Onboarding

New users get guided setup automatically. If no API key is configured:

- The startup log prints a hint: "Create a free account at collapseindex.org — 1,000 free credits on signup"
- `onboarding` returns a full welcome guide with account status, setup steps, config examples, available tools, and pricing
- Auth-guarded tools return a friendly error with specific setup instructions instead of a raw 401
- Utility tools (`interpret_scores`, `convert_scores`, `generate_config`) always work — no auth, no credits

Every new account gets 1,000 free credits (no credit card required), enough for 1,000 evaluation episodes.
## Setup

### Environment Variables

| Variable | Required | Description |
|---|---|---|
| `CI1T_API_KEY` | Yes | Your CI-1T API key |
|  | No | API base URL (default: —) |
### Claude Desktop

Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "ci1t": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "collapseindex/ci1t-mcp"],
      "env": {
        "CI1T_API_KEY": "ci_your_key_here"
      }
    }
  }
}
```

### Cursor / Windsurf
Add to `.cursor/mcp.json` or equivalent:

```json
{
  "mcpServers": {
    "ci1t": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "collapseindex/ci1t-mcp"],
      "env": {
        "CI1T_API_KEY": "ci_your_key_here"
      }
    }
  }
}
```

### VS Code (GitHub Copilot)
Add to `.vscode/mcp.json`:

```json
{
  "servers": {
    "ci1t": {
      "type": "stdio",
      "command": "docker",
      "args": ["run", "-i", "--rm", "collapseindex/ci1t-mcp"],
      "env": {
        "CI1T_API_KEY": "ci_your_key_here"
      }
    }
  }
}
```

### Run from source (no Docker)
```shell
git clone https://github.com/collapseindex/ci1t-mcp.git
cd ci1t-mcp
npm install
npm run build
# Set env var and run
CI1T_API_KEY=ci_xxx node dist/index.js
```

### Build Docker Image

```shell
docker build -t collapseindex/ci1t-mcp .
```

## Example Usage
Once connected, an AI agent can:

**"Evaluate these prediction scores: 45000, 32000, 51000, 48000, 29000, 55000"**

The agent calls `evaluate` with `scores: [45000, 32000, 51000, 48000, 29000, 55000]` and gets back stability metrics per episode, including credits used and remaining.

**"Create a fleet session with 4 nodes named GPT-4, Claude, Gemini, Llama"**

**"List my API keys"**

**"Probe this prompt for stability: What is the capital of France?"**

**"Probe my local Ollama llama3 model with: What is the meaning of life?"**

The agent calls `probe` in BYOM mode — sends the prompt 3x to `http://localhost:11434/v1` and scores the responses locally. No CI-1T credits used.
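A BYOM probe call might pass arguments along these lines. The `base_url`, `model`, and optional `model_api_key` parameter names are documented in the v1.7.0 changelog; the overall argument shape shown here is an assumption.

```typescript
// Illustrative BYOM probe arguments. The parameter names base_url,
// model, and model_api_key are documented; the exact shape is assumed.
const byomProbeArgs = {
  prompt: "What is the meaning of life?",
  base_url: "http://localhost:11434/v1", // local Ollama, OpenAI-compatible
  model: "llama3",
  // model_api_key: "...", // optional, for authenticated endpoints
};

console.log(byomProbeArgs.model);
```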
**"Interpret these scores: 0.12, 0.45, 0.88, 0.03, 0.67"**

The agent calls `interpret_scores` locally (no API call, no credits) and returns mean, std, min/max, and normalized values. For full stability classification, use `evaluate`.

**"Convert these probabilities to Q0.16: 0.5, 0.95, 0.01"**
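The conversion can also be sketched locally. This is a minimal illustration assuming Q0.16 maps [0, 1] onto the integer range [0, 65535]; the `toQ16` name appears in the v1.6.1 changelog, but this is not the tool's actual implementation.

```typescript
// Minimal float <-> Q0.16 conversion sketch. Assumes [0, 1] maps onto
// the integer range [0, 65535]; not the actual convert_scores code.
const Q16_MAX = 65535;

function toQ16(p: number): number {
  const clamped = Math.min(1, Math.max(0, p)); // clamp to [0, 1]
  return Math.round(clamped * Q16_MAX);
}

function fromQ16(q: number): number {
  return q / Q16_MAX;
}

console.log([0.5, 0.95, 0.01].map(toQ16)); // [32768, 62258, 655]
```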
**"Generate a FastAPI integration for CI-1T with guardrail pattern"**
## CI-1T Quick Reference

| Metric | Description |
|---|---|
| CI (Collapse Index) | Primary stability metric (Q0.16: 0–65535). Lower = more stable |
| AL (Authority Level) | Engine trust level for the model (0–4) |
| Ghost | Model appears stable but may be silently wrong |
| Warn / Fault | Threshold and hard-failure flags |

Classification labels (Stable / Drift / Flip / Collapse) are determined by the engine. Use the `evaluate` tool to get exact classifications — thresholds are configurable via the API.
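As a rough illustration of how an agent could apply such thresholds client-side, the sketch below flags episodes whose CI exceeds a custom limit. The field names are assumptions based on the metrics table above; this is not the `alert_check` implementation.

```typescript
// Hypothetical threshold check in the spirit of alert_check.
// Field names are assumptions based on the quick-reference table.
interface Episode {
  ci: number;     // Collapse Index, Q0.16 (0-65535), lower = more stable
  al: number;     // Authority Level (0-4)
  ghost: boolean; // "appears stable but may be silently wrong" flag
}

function flagHighCI(episodes: Episode[], ciLimit: number): number[] {
  // Return the indices of episodes whose CI exceeds the limit.
  return episodes
    .map((e, i) => (e.ci > ciLimit ? i : -1))
    .filter((i) => i >= 0);
}

const episodes: Episode[] = [
  { ci: 1200, al: 4, ghost: false },
  { ci: 61000, al: 1, ghost: true },
];
console.log(flagHighCI(episodes, 50000)); // [1]
```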
## Architecture

```
┌──────────────────────┐     stdio      ┌───────────────────────┐
│  Claude Desktop /    │◄──────────────►│   ci1t-mcp server     │
│  Cursor / VS Code    │                │   (Node.js / Docker)  │
└──────────────────────┘                └──────────┬────────────┘
                                                   │ HTTPS
                                                   │ X-API-Key
                                    ┌──────────────┼──────────────┐
                                    │              │              │
                               ┌────▼───┐    ┌─────▼────┐   ┌─────▼─────┐
                               │Evaluate│    │Fleet API │   │Dashboard  │
                               │  API   │    │Sessions  │   │API Keys   │
                               │        │    │          │   │Billing    │
                               └────────┘    └──────────┘   └───────────┘
                                         collapseindex.org
```

## Changelog
### v1.7.0 (2026-02-27)

**BYOM Probe:**

- `probe` tool now supports Bring Your Own Model mode
- Provide `base_url` + `model` (+ optional `model_api_key`) to probe any OpenAI-compatible endpoint directly
- Works with local models (Ollama, LM Studio, vLLM) and remote APIs (OpenAI, Anthropic, Together, etc.)
- BYOM mode runs entirely locally — no CI-1T auth needed, no credits consumed
- Default mode unchanged (routes through CI-1T backend, costs 1 credit)
- Local similarity scoring: Jaccard, length ratio, and character fingerprint cosine similarity
- 20 tools + 1 resource
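One of the three local similarity signals, Jaccard similarity over word sets, can be sketched as follows. This is illustrative only; the shipped scorer also combines length ratio and character-fingerprint cosine similarity.

```typescript
// Jaccard similarity over lowercase word sets: |A ∩ B| / |A ∪ B|.
// Illustrative sketch, not the shipped scoring code.
function jaccard(a: string, b: string): number {
  const setA = new Set(a.toLowerCase().split(/\s+/).filter(Boolean));
  const setB = new Set(b.toLowerCase().split(/\s+/).filter(Boolean));
  if (setA.size === 0 && setB.size === 0) return 1; // two empty responses match
  let inter = 0;
  for (const w of setA) if (setB.has(w)) inter++;
  return inter / (setA.size + setB.size - inter);
}

console.log(jaccard("Paris is the capital", "The capital is Paris")); // 1
console.log(jaccard("Paris", "Lyon")); // 0
```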
### v1.6.1 (2026-02-27)

- SEC-01 (Critical): API key generation now uses `crypto.randomBytes()` instead of `Math.random()`
- SEC-02 (High): Visualization title is HTML-escaped to prevent XSS
- SEC-03 (Medium): `toQ16()` decimal heuristic prevents integer arrays `[0, 1]` from being misclassified as floats
- SEC-04 (Medium): Score arrays capped at 10,000 per stream, 16 nodes max on fleet tools
- SEC-05 (Medium): Source maps disabled in production build
- SEC-06 (Low): Fixed template literal bug in `compare_windows` severity message
- SEC-07 (Low): Visualization temp files auto-cleaned after 1 hour
- SEC-08 (Low): Header version comment updated
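The SEC-03 decimal heuristic can be sketched roughly as follows. This is an assumption about the general shape of the check, not the exact shipped code.

```typescript
// Rough sketch of a decimal heuristic: only arrays containing a
// fractional value are treated as floats, so integer arrays like
// [0, 1] stay classified as Q0.16. Not the exact shipped logic.
function looksLikeFloats(scores: number[]): boolean {
  return scores.some((s) => !Number.isInteger(s));
}

console.log(looksLikeFloats([0.12, 0.45, 0.88])); // true
console.log(looksLikeFloats([0, 1])); // false (the SEC-03 case)
```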
### v1.6.0 (2026-02-27)

- AI Discoverability: All 20 tool descriptions now include response schemas and chaining hints
- `tools_guide` MCP resource (`ci1t://tools-guide`): comprehensive usage guide with response schemas, chaining patterns, fleet session workflow, classification thresholds, and example pipelines
- Agents can now read the resource for full context beyond individual tool descriptions
- 20 tools + 1 resource
### v1.5.0 (2026-02-27)

- `compare_windows` tool: compare baseline vs recent episodes — drift delta, trend direction, degradation detection
- `alert_check` tool: check episodes against custom thresholds (CI, EMA, AL, ghost, fault) with severity levels
- Both tools are local computation — no API call, no auth, no credits
- 20 tools total
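A drift comparison in the spirit of `compare_windows` might look like this sketch. The field names and the trend rule here are assumptions; the real tool's output differs.

```typescript
// Compare mean CI of a baseline window against a recent window.
// Higher CI means less stable, so a positive delta suggests degradation.
// Illustrative only; not the compare_windows implementation.
function driftDelta(baseline: number[], recent: number[]) {
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const delta = mean(recent) - mean(baseline);
  return { delta, degrading: delta > 0 };
}

console.log(driftDelta([1000, 1200, 1100], [5000, 5200, 5100]));
// { delta: 4000, degrading: true }
```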
### v1.4.0 (2026-02-27)

- `visualize` tool: generates self-contained interactive HTML with Canvas 2D bar charts
- Fixed sidebar layout matching CI-1T Lab dashboard style (KPIs, legend, stats in sidebar)
- EMA Trend + Authority Level charts side-by-side
- Adaptive bar sizing, hover tooltips, color-coded classifications
- Links to collapseindex.org in sidebar
- 18 tools total
v1.3.0 (2026-02-27)
Single credential: All tools now use
CI1T_API_KEY— no Bearer token neededRemoved
CI1T_TOKENenv var entirelyBackend auth unified: all API routes accept X-API-Key (resolves user via key hash)
Simpler config: one env var to set, one credential to manage
17 tools total
### v1.2.0 (2026-02-27)

- `onboarding` tool: welcome guide with account status, setup steps, config examples, pricing, and available tools
- Auth guards on all credentialed tools — returns a structured onboarding message instead of failing at the API level
- Enhanced startup log: new-user hint when no credentials are configured
- 17 tools total
### v1.1.0 (2026-02-27)

- 3 new utility tools: `interpret_scores`, `convert_scores`, `generate_config` (local, no auth, no credits)
- `evaluate` and `fleet_evaluate` now auto-detect floats (0–1) vs Q0.16 (0–65535) — no manual conversion needed
- Dashboard parity: all Ask AI tools now available via MCP
### v1.0.0 (2026-02-25)

- Complete rewrite from Python to TypeScript
- 13 tools: evaluate, fleet_evaluate, probe, health, fleet session CRUD, API key CRUD, invoices
- Docker image distribution
- stdio transport for Claude Desktop, Cursor, VS Code
- Dual auth: API key (X-API-Key) for evaluate, Bearer token for dashboard
© 2026 Collapse Index Labs™ — Alex Kwon
collapseindex.org · ask@collapseindex.org