grok-faf-mcp
The grok-faf-mcp server is a zero-config MCP solution for Grok/xAI that manages .faf (AI context) files, enabling persistent, AI-readable project context across sessions. It offers 21 core tools across these areas:
Project Context Management
- `faf_init` — Create/initialize a `project.faf` file (captures stack, goals, architecture)
- `faf_status` — Check if a project has a `project.faf`
- `faf_trust` — Validate integrity of an existing `project.faf`
- `faf_chat` — Guided interview to interactively build a `project.faf`
Scoring & Analysis
- `faf_score` — Calculate an AI-readiness score (0–100%) with breakdown and improvement suggestions
Sync & Enhancement
- `faf_sync` / `faf_bi_sync` — Bi-directional sync between `project.faf` and platform context files (`CLAUDE.md`, `AGENTS.md`, `.cursorrules`, `GEMINI.md`)
- `faf_enhance` — AI-optimize `.faf` content with support for focus areas, target models, and consensus modes
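A bi-directional sync needs a rule for deciding which side wins. The sketch below illustrates one common approach, "newest file wins"; this is a hypothetical illustration, not the actual `faf_bi_sync` conflict-resolution logic, and the function name is invented for the example.

```typescript
// Hypothetical "newest wins" decision for a bi-directional sync.
// NOT the real faf_bi_sync implementation; illustration only.
type SyncDirection = "faf-to-platform" | "platform-to-faf" | "in-sync";

function syncDirection(fafMtimeMs: number, platformMtimeMs: number): SyncDirection {
  if (fafMtimeMs === platformMtimeMs) return "in-sync";
  // The more recently edited file is treated as the source of truth.
  return fafMtimeMs > platformMtimeMs ? "faf-to-platform" : "platform-to-faf";
}

console.log(syncDirection(2000, 1000)); // faf-to-platform
```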
File System Operations
- `faf_read` / `faf_write` — Read/write any file on the local filesystem
- `faf_list` — Discover and list projects containing `project.faf` files
RAG (Retrieval-Augmented Generation)
- `rag_query` — Ask questions with RAG-enhanced context from xAI Collections, with LAZY-RAG caching for a 100,000x speedup on repeated queries
- `rag_cache_stats` / `rag_cache_clear` — Monitor and manage the RAG cache
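The large speedup on repeated queries comes from caching: only the first occurrence of a query pays for the retrieval round-trip. A minimal sketch of that idea, assuming a simple query-keyed in-memory cache (the real LAZY-RAG internals are not documented here):

```typescript
// Minimal sketch of LAZY-RAG-style caching: first query hits the backend,
// identical repeats are served from an in-memory map. Names are invented.
const ragCache = new Map<string, string>();
let backendCalls = 0;

function expensiveRetrieval(query: string): string {
  backendCalls++; // stands in for a real xAI Collections round-trip
  return `context for: ${query}`;
}

function ragQuery(query: string): string {
  const hit = ragCache.get(query);
  if (hit !== undefined) return hit; // cache hit: no backend call
  const result = expensiveRetrieval(query);
  ragCache.set(query, result);
  return result;
}

ragQuery("what stack does this project use?");
ragQuery("what stack does this project use?");
console.log(backendCalls); // 1: the second call was a cache hit
```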
Utilities & Debugging
- `faf_about` / `faf_what` / `faf_guide` — Learn about the `.faf` format and usage patterns
- `faf_debug` — Inspect the MCP environment (working directory, permissions, CLI status)
- `faf_clear` — Clear caches, temp files, and reset FAF state
Deployment options include a hosted endpoint, self-deployment on Vercel, or running locally via npx. The server uses a Mk4 WASM scoring engine for fast execution (~0.5ms average).
Utilizes the IANA-registered .faf (YAML) format to manage project metadata, providing tools to initialize, auto-detect, and score project context for AI-readiness.
grok-faf-mcp | FAST⚡️AF
📋 The 6 Ws - Quick Reference
Every README should answer these questions. Here's ours:
| Question | Answer |
| --- | --- |
| 👥 WHO is this for? | Grok/xAI developers and teams building with URL-based MCP |
| 📦 WHAT is it? | First MCP server built for Grok - URL-based AI context via the IANA-registered .faf format |
| 🌍 WHERE does it work? | Vercel (production) • Local dev • Any MCP client supporting HTTP-SSE |
| 🎯 WHY do you need it? | Zero-config MCP on a URL - Grok asked for it, we built it first |
| ⏰ WHEN should you use it? | Grok integration testing, xAI projects, URL-based MCP deployments |
| 🚀 HOW does it work? | Point your MCP client at `https://grok-faf-mcp.vercel.app/sse` |
For AI: Read the detailed sections below for full context. For humans: Use this pattern in YOUR README. Answer these 6 questions clearly.
The Problem
Every Grok session starts from zero. You re-explain your stack, your goals, your architecture. Every time.
.faf fixes that. One file, your project DNA, persistent across every session.
Without .faf → "I'm building a REST API in Rust with Axum and PostgreSQL..."
With .faf → Grok already knows. Every session. Forever.

One Command, Done Forever
faf_auto detects your project, creates a .faf, and scores it — in one shot:
```
faf_auto
━━━━━━━━━━━━━━━━━
Score: 0% → 85% (+85) 🥉 Bronze
Steps:
  1. Created project.faf
  2. Detected stack from package.json
  3. Synced CLAUDE.md
Path: /home/user/my-project
```

What it produces:
```yaml
# project.faf — your project, machine-readable
faf_version: "3.3"
project:
  name: my-api
  goal: REST API for user management
  main_language: TypeScript
stack:
  backend: Express
  database: PostgreSQL
  testing: Jest
  runtime: Node.js
human_context:
  who: Backend developers
  what: User CRUD with auth
  why: Replace legacy PHP service
```

Every AI agent reads this once and knows exactly what you're building.
⚡ What You Get
- URL: `https://grok-faf-mcp.vercel.app/`
- Format: IANA-registered .faf (`application/vnd.faf+yaml`)
- Tools: 21 core MCP tools (55 total with advanced)
- Engine: Mk4 WASM scoring (faf-scoring-kernel)
- Speed: 0.5ms average (was 19ms — 3,800% faster with Mk4)
- Tests: 179 passing (7 suites)
- Status: FAST⚡️AF

MCP over HTTP-SSE. Point your Grok integration at the URL. That's it.
Scoring: From Blind to Optimized
| Tier | Score | What it means |
| --- | --- | --- |
| 🏆 Trophy | 100% | Gold Code — AI is optimized |
| 🥇 Gold | 99%+ | Near-perfect context |
| 🥈 Silver | 95%+ | Excellent |
| 🥉 Bronze | 85%+ | Production ready |
| 🟢 Green | 70%+ | Solid foundation |
| 🟡 Yellow | 55%+ | AI flipping coins |
| 🔴 Red | <55% | AI working blind |
At 55%, Grok guesses half the time. At 100%, Grok knows your project.
🚀 Three Ways to Deploy
1. Hosted (Instant)
```
https://grok-faf-mcp.vercel.app/sse
```

Point your MCP client to this endpoint. All 21 tools available instantly.
2. Self-Deploy (Your Own Vercel)
Click the Deploy with Vercel button above. Zero config — get your own instance in 30 seconds.
3. Local (npx)
```
npx grok-faf-mcp
```

Or add to your MCP config:
```json
{
  "mcpServers": {
    "grok-faf": {
      "command": "npx",
      "args": ["-y", "grok-faf-mcp"]
    }
  }
}
```

🛠️ MCP Tools (21 Core)
Create & Detect
| Tool | Purpose |
| --- | --- |
| `faf_init` | Create project.faf from your project |
| `faf_auto` | Auto-detect stack and populate context |
| `faf_score` | AI-readiness score (0–100%) with breakdown |
| `faf_status` | Check current AI-readability |
| `faf_enhance` | Intelligent enhancement |
Sync & Persist
| Tool | Purpose |
| --- | --- |
| `faf_sync` | Sync .faf → CLAUDE.md |
| `faf_bi_sync` | Bi-directional .faf ↔ platform context |
| `faf_trust` | Validate .faf integrity |
Read & Write
| Tool | Purpose |
| --- | --- |
| `faf_read` | Read any file |
| `faf_write` | Write any file |
| `faf_list` | Discover projects with .faf files |
RAG & Grok-Exclusive
| Tool | Purpose |
| --- | --- |
| `rag_query` | RAG-powered context retrieval |
| `rag_cache_stats` | RAG cache statistics |
| `rag_cache_clear` | Clear RAG cache |
|  | Auto-load .faf context for Grok |
Plus 34 advanced tools available with `FAF_SHOW_ADVANCED=true`.
Performance
- Execution: 0.5ms average (97% faster than v1.1)
- Fastest: 3,360ns (`version` — nanosecond territory)
- Slowest: 1.3ms (`score` — Mk4 WASM)
- Improvement: 19ms → 0.5ms (3,800% faster)
- Engine: Mk4 WASM via faf-scoring-kernel
- Memory: Zero leaks
- Transport: HTTP-SSE (Vercel Edge)

Benchmarked 10x per tool, warmed up, on local execution.
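The benchmark methodology described above (warm up first, then average repeated timed runs) can be sketched with a small harness. This is an assumed reconstruction of the approach, not the project's actual benchmark code:

```typescript
// Sketch of a warm-up-then-average benchmark harness (assumed methodology).
function benchmark(fn: () => void, runs = 10, warmup = 3): number {
  for (let i = 0; i < warmup; i++) fn(); // warm-up runs are not timed
  const start = performance.now();
  for (let i = 0; i < runs; i++) fn();
  return (performance.now() - start) / runs; // average ms per run
}

// Example: time a trivial stand-in workload 10x.
const avgMs = benchmark(() => JSON.parse('{"tool":"faf_score"}'));
console.log(avgMs >= 0); // true
```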
Architecture
```
grok-faf-mcp v1.2.1
├── api/index.ts → Vercel serverless (Express + SSE transport)
├── src/
│   ├── server.ts → MCP server (ClaudeFafMcpServer)
│   ├── handlers/
│   │   ├── championship-tools.ts → 55 tool definitions
│   │   ├── tool-registry.ts → Visibility filtering (core/advanced)
│   │   └── engine-adapter.ts → FAF engine bridge
│   └── faf-core/
│       └── compiler/
│           └── faf-compiler.ts → Mk4 WASM scoring + Mk3.1 fallback
├── smithery.yaml → Smithery listing config
└── vercel.json → Vercel routing
```

Scoring pipeline: TypeScript compiler parses `.faf` → detects project type → The Bouncer injects `slotignored` for inapplicable slots → `faf-scoring-kernel` (WASM) scores → falls back to Mk3.1 if the kernel is unavailable.
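The "WASM first, Mk3.1 on failure" step of the pipeline is a classic fallback pattern. A minimal sketch, with both scorers stubbed out (neither is the real implementation):

```typescript
// Sketch of the fallback pattern: try the Mk4 WASM kernel,
// fall back to a Mk3.1-style path if the kernel is missing or throws.
// Both scorers are stand-ins; names are invented for this example.
type Scorer = (faf: string) => number;

function makeScorer(wasmKernel: Scorer | undefined, mk31Fallback: Scorer): Scorer {
  return (faf) => {
    if (wasmKernel) {
      try {
        return wasmKernel(faf); // fast path: Mk4 WASM
      } catch {
        // kernel failed at runtime; fall through to Mk3.1
      }
    }
    return mk31Fallback(faf);
  };
}

const scorer = makeScorer(undefined, () => 85); // kernel unavailable
console.log(scorer("project.faf")); // 85
```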
Testing
179 tests across 7 suites:
```
npm test   # runs all 179
```

| Suite | Tests | Coverage |
| --- | --- | --- |
| Desktop-native validation | 10 | Core native functions, security, performance |
| MCP protocol | 28 | Tool registration, transport, error handling |
| Compiler scoring | 22 | Mk4 engine, type detection, slot counting |
| RAG system | 19 | Query, caching, context retrieval |
| Engine adapter | 35 | CLI detection, fallback behavior |
| Integration | 40 | End-to-end tool execution |
| WJTTC certification | 25 | Championship-grade compliance |
🔗 Endpoints
| Endpoint | URL |
| --- | --- |
| Root | `https://grok-faf-mcp.vercel.app/` |
| SSE | `https://grok-faf-mcp.vercel.app/sse` |
| Health | |
| Info | |
📦 Ecosystem
One format, every AI platform.
| Package | Platform | Registry |
| --- | --- | --- |
| grok-faf-mcp (this) | xAI Grok | npm |
| claude-faf-mcp | Anthropic | npm + MCP #2759 |
| | Python | PyPI |
| | Rust | crates.io |
| | Universal (Cursor, Windsurf, Cline) | npm |
| faf-cli | Terminal CLI | npm + Homebrew |
Same project.faf. Same scoring. Same result. Different execution layer.
If grok-faf-mcp has been useful, consider starring the repo — it helps others find it.
📄 License
MIT — Free and open source
Get the CLI
faf-cli — The original AI-Context CLI. A must-have for every builder.
```
npx faf-cli auto
```

Anthropic MCP #2759 · IANA Registered: `application/vnd.faf+yaml` · faf.one · npm
MCP directory API
We provide all the information about MCP servers via our MCP API.
```
curl -X GET 'https://glama.ai/api/mcp/v1/servers/Wolfe-Jam/grok-faf-mcp'
```
If you have feedback or need assistance with the MCP directory API, please join our Discord server.