standards-mcp
Your coding standards, queryable by AI.
Every team writes a coding standards document. Nobody reads it. AI agents definitely don't — they can't fit a 20,000-token governance doc into their context window alongside the actual work.
standards-mcp flips this. Instead of a document that sits in a wiki, your standards become a live data store that agents query at task time. An agent starting a database migration calls one tool and gets exactly the 28 rules that apply — in ~700 tokens, in 0.1ms.
The 95% of rules about accessibility, internationalisation, and infrastructure? Not loaded. Not wasting tokens. Not creating noise.
npx standards-mcp init  # 28 universal starter rules
npx standards-mcp serve # MCP server, works with any AI tool

Works with Claude Code, Cursor, Windsurf, Copilot — anything that speaks MCP.
The Problem: Token Economics
AI agents have finite context windows. Every token spent on rules is a token not spent on the actual work. Here's what existing approaches cost:
| Approach | How standards reach the agent | Token cost | What the agent gets |
|---|---|---|---|
| Wiki/Confluence | Agent can't access it | 0 (useless) | Nothing |
| Paste into system prompt | Full doc in every session | ~20,000 | Everything, whether relevant or not |
| CLAUDE.md / .cursorrules | Subset baked into config | ~2,000-5,000 | Whatever you manually curated |
| ESLint/Roslyn | Runs separately, agent sees output | ~500/violation | Only what's already broken |
| standards-mcp | Agent queries what it needs | ~700 for a task, ~15 per rule | Exactly the rules for this task |
The insight: don't load the document, query the index. An agent starting a database migration doesn't need your accessibility rules, your i18n rules, or your infrastructure rules. It needs DB, SEC, and ERR. That's 28 rules at ~15 tokens each instead of 200 rules at ~100 tokens each.
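The arithmetic, as a quick sketch (figures taken from the paragraph above):

```javascript
// Rough token math for the two approaches described above.
const fullDoc   = 200 * 100; // ~200 rules at ~100 tokens each
const taskSlice = 28 * 15;   // ~28 relevant rules at ~15 tokens each

console.log(fullDoc);   // 20000
console.log(taskSlice); // 420
console.log(((taskSlice / fullDoc) * 100).toFixed(1) + '%'); // "2.1%"
```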
How It Compares
Every code quality tool has addressable rules. Here's how they work and where they fall short for AI agents:
| System | Rule ID (example) | Readable? | Queryable by agent? | Progressive detail? | Fix guidance? | Token-aware? |
|---|---|---|---|---|---|---|
| ESLint | no-eval | Yes | No — runs as linter, agent sees violations after the fact | No — one level of detail | --fix for some rules | No |
| Roslyn | CA1001 | No — must look up | No — compiled into .NET analyzer | Yes — message → docs → code fix | CodeFixProvider | No |
| SonarQube | S1172 | No | No — server-side scanner | Yes — summary → detail → remediation | Remediation guidance | No |
| Pylint | C0103 | No | No — CLI linter | No | No | No |
| CWE/OWASP | CWE-89 | No | No — reference databases | Yes — summary → description → examples | Advisory only | No |
| standards-mcp | SEC.INJECT.SQL | Yes — self-describing | Yes — MCP tool call | Yes — 3 tiers (15 → 375 → 300 tokens) | Yes — typed fix providers with validation | Yes — designed for it |
The key differences:
Existing tools are linters. They run on code that already exists and report violations after the fact. The agent writes the code, the linter finds the problems, the agent fixes them. Two passes minimum.
standards-mcp is a pre-flight check. The agent loads the rules before writing code. The rules are in working memory during development, not discovered in a separate scan pass. The agent writes correct code the first time because it knows the rules while writing.
Existing tools are human-first. ESLint's docs are web pages. Roslyn's CodeFixProvider is a C# class. SonarQube's remediation is a paragraph on a dashboard. None of these are designed for an AI agent to consume programmatically at task time.
standards-mcp is agent-first. Every response is JSON. Token budgets are measured. Progressive disclosure means the agent loads 15 tokens per rule at task start, 375 tokens for the one rule it needs detail on, and 300 tokens for the fix — not 20,000 tokens of everything.
The Three-Token Index
Every rule has a self-describing address: DOMAIN.CONCERN.RULE
SEC.INJECT.SQL → Security > Injection > SQL
DB.MONEY.INT → Database > Money > Integer storage
ERR.CATCH.EMPTY → Errors > Catch > No empty blocks

Compare: CA1001 tells you nothing. CWE-89 tells you nothing. SEC.INJECT.SQL tells you the domain, the concern, and the rule — in 3 tokens, without a lookup. This matters because:

Agents cite them inline: "Store discount as integer cents (DB.MONEY.INT)".
Git history becomes searchable: git log --grep="DB.MONEY.INT" finds every commit that touched money storage.
No lookup needed: the symbol IS the documentation.
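Because the format is fixed, a symbol decomposes without any lookup table. A minimal sketch — the parseSymbol helper is illustrative, not part of the package:

```javascript
// Split a self-describing DOMAIN.CONCERN.RULE symbol into its parts.
// Illustrative helper only — not an API of standards-mcp.
function parseSymbol(symbol) {
  const [domain, concern, rule] = symbol.split('.');
  return { domain, concern, rule };
}

console.log(parseSymbol('SEC.INJECT.SQL'));
// { domain: 'SEC', concern: 'INJECT', rule: 'SQL' }
```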
Exceptions Are Tracked, Not Hidden
When an agent can't follow a rule, it registers a tracked exception:
{
"id": "EXC-2026-0001",
"symbol": "FILE.SIZE.HARD",
"justification": "Generated ORM schema cannot be split",
"file": "src/generated/schema.ts",
"ticket": "PROJ-442"
}

Exceptions live in .standards-exceptions.json — version-controlled, reviewable in PRs, auditable. Not // TODO: fix this later — a tracked deviation with a justification.
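Because the exceptions file is plain JSON, auditing it is trivial. A sketch, with the example entry above inlined rather than read from disk; the exceptionsFor helper is illustrative:

```javascript
// Tracked exceptions, shaped like .standards-exceptions.json entries.
const exceptions = [
  {
    id: 'EXC-2026-0001',
    symbol: 'FILE.SIZE.HARD',
    justification: 'Generated ORM schema cannot be split',
    file: 'src/generated/schema.ts',
    ticket: 'PROJ-442'
  }
];

// List every tracked deviation from a given rule.
const exceptionsFor = (symbol, list) => list.filter(e => e.symbol === symbol);

console.log(exceptionsFor('FILE.SIZE.HARD', exceptions).length); // 1
```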
Model-Agnostic, Vendor-Agnostic
It's an MCP server. Claude, GPT, Gemini, Llama — if it can call MCP tools, it can query your standards. Switch models, keep your rules.
Quickstart
npx standards-mcp init

This creates standards.json with 28 rules that are near-universally agreed upon — no SELECT *, no empty catch blocks, no secrets in source, parameterised queries only. Edit it to match your team.
Add to your MCP client:
Claude Code (.mcp.json):
{
"mcpServers": {
"standards": {
"type": "stdio",
"command": "npx",
"args": ["-y", "standards-mcp", "serve"]
}
}
}

Cursor (MCP settings):
{
"mcpServers": {
"standards": {
"command": "npx",
"args": ["-y", "standards-mcp", "serve"]
}
}
}

Done. Your agent now has 6 tools.
The 6 Tools
| Tool | Purpose | Tokens |
|---|---|---|
| standards_for_task | Rules for a task type (database, frontend, etc.) | ~700 |
| standards_lookup | Resolve symbols to full rules — supports progressive disclosure | ~15-375/rule |
| | All rules for domain(s) | ~200/domain |
| standards_search | Free-text search, filterable by tags | varies |
| | Complete symbol table | ~1,300 |
| | Register a tracked exception | ~100 |
The one agents should call first
standards_for_task({ task_type: "database" })

Returns all rules relevant to database work — one-liners only, ~700 tokens. The agent works with these in context and cites violations by symbol.
When the agent needs more detail
standards_lookup({ symbols: ["DB.MONEY.INT"], detail_level: "full" })

Returns rationale, failure modes, good/bad examples, fix guidance, and suppression scenarios. ~375 tokens for one rule.
When the agent needs to fix a violation
standards_lookup({ symbols: ["DB.MONEY.INT"], detail_level: "fix" })

Returns step-by-step fix instructions, validation checks (grep patterns to verify the fix worked), blast radius, and conflict warnings. ~300 tokens for one rule.
Progressive Disclosure
The system has three tiers of information. Agents load only what they need, when they need it.
Tier 1: One-liner — "No floats for money" — ~15 tokens; loaded at task start via standards_for_task.
Tier 2: Detailed docs — why, how, examples, when to suppress — ~375 tokens; loaded on-demand via standards_lookup(detail_level="full").
Tier 3: Fix provider — step-by-step fix, validation, scope — ~300 tokens; loaded on-demand via standards_lookup(detail_level="fix").

Typical task: ~1,700 tokens total (one ~700-token task load + one ~375-token full lookup + two ~300-token fix lookups). Under 0.9% of a 200K context window.
Fix Types
Every rule's fix provider is classified by how much judgment it requires:
| Type | Determinism | Agent behaviour |
|---|---|---|
| mechanical | Deterministic, safe to auto-apply | Apply without asking |
| | Deterministic, multi-line | Apply, note in commit |
| | Requires judgment | Propose, explain trade-offs |
| | Requires human review | Flag, do not auto-fix |
Validation
Every fix provider includes machine-verifiable validation:
{
"validation": {
"grep_absent": ["DECIMAL.*(?:price|cost|total|amount)"],
"grep_present": ["_cents\\s", "INTEGER.*(?:price|cost|total)"],
"assertion": "No DECIMAL/FLOAT columns for monetary values"
}
}

Agents can run the grep patterns after applying a fix to verify it worked — no human review needed for mechanical fixes.
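A sketch of what that check could look like in practice — the passes helper is illustrative; the regex patterns are simplified from the example above:

```javascript
// Verify a fix against a rule's validation block:
// no grep_absent pattern may match, every grep_present pattern must match.
const validation = {
  grep_absent: ['DECIMAL.*(?:price|cost|total|amount)'],
  grep_present: ['_cents\\s']
};

function passes(code, v) {
  const absentOk  = v.grep_absent.every(p => !new RegExp(p).test(code));
  const presentOk = v.grep_present.every(p => new RegExp(p).test(code));
  return absentOk && presentOk;
}

console.log(passes('price_cents INTEGER NOT NULL', validation)); // true
console.log(passes('price DECIMAL(10,2)', validation));          // false
```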
Tags
Rules are tagged for cross-cutting queries:
standards_search({ query: ".*", tags: ["security"] })

Returns all rules tagged "security" regardless of domain. 24 tags across the starter set: financial, security, owasp, readability, testability, etc.
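Conceptually, tag filtering is just an intersection over each rule's tag list. A sketch with made-up in-memory data (the real data lives in standards.json; withTags is illustrative):

```javascript
// Rules carry tag arrays, as in the standards format.
const rules = [
  { symbol: 'SEC.INJECT.SQL', tags: ['security', 'owasp', 'injection'] },
  { symbol: 'DB.MONEY.INT',   tags: ['financial'] }
];

// Keep rules that carry every requested tag, regardless of domain.
const withTags = (list, tags) =>
  list.filter(r => tags.every(t => r.tags.includes(t)));

console.log(withTags(rules, ['security']).map(r => r.symbol)); // [ 'SEC.INJECT.SQL' ]
```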
Standards Format
{
"version": "2.1.0",
"domains": {
"SEC": {
"name": "Security",
"description": "Injection prevention, secrets, auth",
"rules": {
"SEC.INJECT.SQL": {
"severity": "error",
"applicability": "enforced",
"rule": "Parameterised queries only — no string concatenation",
"detail": {
"rationale": "SQL injection is consistently #1-#3 in OWASP Top 10...",
"failure_modes": ["Full database dump via UNION injection", "Auth bypass via ' OR 1=1"],
"fix_guidance": ["Replace interpolation with parameterised placeholders..."],
"examples": [{ "label": "JS", "bad": "query(`...${id}`)", "good": "query('...$1', [id])" }],
"tags": ["security", "owasp", "injection"],
"fix": {
"type": "mechanical",
"steps": [{ "action": "Replace template literal", "from": "query(`...${id}`)", "to": "query('...$1', [id])" }],
"validation": { "grep_absent": ["query.*`.*\\$\\{"], "assertion": "No interpolated SQL" }
}
}
}
}
}
},
"task_routing": {
"database": ["FILE", "NAME", "SEC", "DB", "TEST", "VCS"],
"frontend": ["FILE", "NAME", "ERR", "TEST", "VCS"]
}
}

The detail object is optional on every rule. Rules without it work fine — agents just get the one-liner. Add detail incrementally to the rules that matter most to your team.
Severity: error (must fix) or warning (should fix).
Applicability: enforced (active), aspirational (visible but not blocking), not_applicable (hidden from task/domain queries).
Task routing: maps task types to relevant domains. Add your own task types — they're just strings mapped to domain arrays.
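How routing could resolve at query time, sketched against a miniature in-memory standards object (the real server reads standards.json; rulesForTask is illustrative, not the package's code):

```javascript
// Miniature standards object following the format above.
const standards = {
  domains: {
    SEC: { rules: { 'SEC.INJECT.SQL':  { rule: 'Parameterised queries only' } } },
    DB:  { rules: { 'DB.MONEY.INT':    { rule: 'No floats for money' } } },
    ERR: { rules: { 'ERR.CATCH.EMPTY': { rule: 'No empty catch blocks' } } }
  },
  task_routing: { database: ['SEC', 'DB', 'ERR'] }
};

// Resolve a task type to its routed domains, then flatten their rules.
function rulesForTask(taskType, std) {
  const domains = std.task_routing[taskType] || [];
  return domains.flatMap(d =>
    Object.entries(std.domains[d].rules).map(([symbol, r]) => ({ symbol, rule: r.rule }))
  );
}

console.log(rulesForTask('database', standards).length); // 3
```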
Customisation
Add a rule
"DB.QUERY.TENANT": {
"severity": "error",
"applicability": "enforced",
"rule": "Multi-tenant queries must include tenant filter"
}

Add a domain
"A11Y": {
"name": "Accessibility",
"description": "WCAG 2.2 AA compliance",
"rules": {
"A11Y.HTML.SEMANTIC": {
"severity": "error",
"applicability": "enforced",
"rule": "Use correct semantic HTML elements"
}
}
}

Then add "A11Y" to the relevant task types in task_routing.
Validate
npx standards-mcp validate

standards.json v1.0.0
28 rules across 8 domains
FILE File Organisation 3 rules
SEC Security 5 rules
DB Database 4 rules
...
6 task types: database, api_endpoint, frontend, backend, refactor, bugfix
Applicability: 28 enforced, 0 aspirational, 0 not_applicable
Valid.

Tell Your Agent to Use It
Add to your project instructions (CLAUDE.md, .cursorrules, etc.):
Before editing any file, call standards_for_task with the appropriate task type.
Cite relevant symbols in your approach.

Architecture
standards.json ──→ standards-mcp server ──→ Any MCP client
 (your rules)        (6 query tools)        (Claude, Cursor, etc.)
                            │
                 .standards-exceptions.json
                    (tracked exceptions)

No database. No build step. No config files beyond standards.json. One JSON file in, six tools out.
CLI
npx standards-mcp init [--force] # Scaffold standards.json
npx standards-mcp serve [--standards <path>] # Start MCP server
npx standards-mcp validate [<path>]          # Validate and report

Config discovery: --standards flag → $STANDARDS_MCP_PATH env → ./standards.json → ./.standards/standards.json
Programmatic Use
const { StandardsStore } = require('standards-mcp/src/store');
const store = new StandardsStore('./standards.json');
store.forTask('database'); // rules for database work
store.lookup(['SEC.INJECT.SQL']); // resolve a symbol
store.search('password'); // free-text search
store.allSymbols();              // { count: 28, symbols: [{s, v, a}] }

License
MIT