# mk-spec-master
Spec-driven testing over MCP. Turn Linear / JIRA / GitHub Issues / Notion / Figma / Markdown specs into runnable scenarios, hand off to any test runner via mk-qa-master, and keep a live spec ↔ test coverage matrix.

🟢 Alpha — v0.3 complete. 15 tools + 6 adapters. Full design in `docs/prd.md`. Next stop: v1.0 (docs hardening, integration recipes, production-ready).
## What this is
An MCP server that turns specs — Linear tickets, JIRA stories, GitHub Issues, Notion pages, Figma annotations, plain Markdown — into structured test scenarios, hands them to any test runner (via mk-qa-master or directly), and maintains a live spec ↔ test coverage matrix.
Sibling to mk-qa-master in the mk-* family of opinionated AI-QA MCPs.
## What this is NOT

| It's not | Use this instead |
|---|---|
| A spec editor | Linear / JIRA / Notion / Markdown — keep writing specs where you already do |
| A test runner | `mk-qa-master` (or any runner it hands off to) |
| An issue tracker UI | Linear / JIRA / Notion's native interface |
| A spec → code generator | GitHub Spec Kit, AWS Kiro |
| An LLM | Leverages your AI client (Claude / Cursor / Codex / Gemini) for the reasoning |
mk-spec-master sits between your spec source and your test runner — purely about the spec ↔ test link, the coverage matrix that lives on top, and the quality coach that grades both.
## Tool surface (15 tools)

Grouped by role; each group is a layer in the spec → test → coverage → coach loop.
### Meta — orientation (1)

| Tool | Purpose |
|---|---|
| `get_spec_source_info` | Active adapter + all available adapters. Call first so the AI knows whether to expect Linear / JIRA / Notion / Figma / Markdown semantics |
### Discovery — find and load specs (3)

| Tool | Purpose |
|---|---|
| `list_specs` | Enumerate specs from the active source (filter by status / label / limit) |
| `fetch_spec` | Pull a single spec's full content by id |
| `parse_spec` | Heuristic AC extraction (en + zh-TW + zh-CN headings supported); accepts … |
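The heading-based AC extraction can be pictured with a small sketch. This is illustrative only — the shipped parser's rules are richer, and the function and heading lists here are assumptions, not mk-spec-master's actual code:

```python
import re

# Hypothetical sketch of heading-based AC extraction (not the shipped parser).
# Recognizes an English or Chinese "Acceptance Criteria" heading, then collects
# the bullet items that follow until the next heading.
AC_HEADINGS = re.compile(
    r"^#{1,6}\s*(acceptance criteria|驗收標準|验收标准)\s*$",
    re.IGNORECASE | re.MULTILINE,
)

def extract_ac(spec_markdown: str) -> list[str]:
    """Return the bullet items under the first AC heading, or []."""
    match = AC_HEADINGS.search(spec_markdown)
    if not match:
        return []
    items = []
    for line in spec_markdown[match.end():].splitlines():
        stripped = line.strip()
        if stripped.startswith("#"):          # next heading ends the section
            break
        if stripped.startswith(("-", "*")):
            items.append(stripped.lstrip("-* ").strip())
    return items
```

The same shape generalizes to whatever heading aliases an adapter emits; only the regex alternation changes.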
### Generation — specs → testable artifacts (2)

| Tool | Purpose |
|---|---|
| `extract_scenarios` | AC → scenarios with happy / edge / error classification (negation-aware) and best-effort Given/When/Then split |
| `generate_test_plan` | One-shot fetch + parse + extract → markdown plan ready to feed to mk-qa-master |
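"Negation-aware" classification can be sketched in a few lines. The keyword lists and function name below are assumptions for illustration, not the tool's real heuristic:

```python
# Toy happy/edge/error classifier with negation awareness
# (the shipped heuristic is richer; these cue lists are made up).
ERROR_CUES = ("invalid", "fail", "error", "reject", "expired")
EDGE_CUES = ("empty", "maximum", "minimum", "boundary", "concurrent")
NEGATIONS = ("not ", "never ", "without ")

def classify_ac(ac: str) -> str:
    text = ac.lower()
    negated = any(n in text for n in NEGATIONS)
    if any(cue in text for cue in ERROR_CUES):
        # "does not fail" is a happy-path assertion, not an error case
        return "happy" if negated else "error"
    if any(cue in text for cue in EDGE_CUES):
        return "edge"
    return "happy"
```

For example, "An invalid code shows an error" classifies as `error`, while "Checkout does not fail with one item" stays `happy` because the error cue is negated.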
### Coverage & drift — the traceability layer (4)

| Tool | Purpose |
|---|---|
| `link_test_to_spec` | Record that a test verifies a spec (writes to the traceability index) |
| `auto_link_tests` | Scan a test directory for `@spec: <ID>` tags and sync the index |
| `get_coverage_matrix` | Spec × test grid — answers "which specs have no tests" in one call |
| `get_drift_report` | Re-fetch each linked spec, recompute `ac_hash`, compare. Buckets into fresh / drifted / unknown / stranded |
### Coach — quality + prioritization (3)

| Tool | Purpose |
|---|---|
| `analyze_spec_quality` | Heuristic findings on vague language, implementation-leak AC, unclear role refs (the differentiator vs Kiro / Spec Kit) |
| `propose_spec_improvements` | Takes analyze output → PM-facing markdown with concrete rewrites |
| `get_optimization_plan` | Three-layer prioritized plan: coverage gaps (L1) + spec quality (L2) + process drift (L3). The "what should we fix next" tool |
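The quality findings in the sample output below (evidence like `fast`, `redis`) suggest a word-list heuristic. A toy version, with invented word lists and shapes — not the shipped analyzer:

```python
# Toy spec-quality heuristics (word lists are illustrative assumptions).
VAGUE = {"fast", "easy", "intuitive", "user-friendly", "quickly"}
IMPLEMENTATION_LEAK = {"redis", "postgres", "kafka", "sql", "cache"}

def spec_findings(ac_items: list[str]) -> list[dict]:
    """Flag vague wording and implementation details leaking into AC."""
    findings = []
    for i, ac in enumerate(ac_items, start=1):
        words = {w.strip(".,") for w in ac.lower().split()}
        for word in sorted(words & VAGUE):
            findings.append({"ac": f"ac-{i}", "kind": "vague", "evidence": word})
        for word in sorted(words & IMPLEMENTATION_LEAK):
            findings.append({"ac": f"ac-{i}", "kind": "implementation-leak", "evidence": word})
    return findings
```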
### Knowledge — domain methodology (2)

| Tool | Purpose |
|---|---|
| … | Create the spec-knowledge file … |
| `get_spec_context` | Read the spec-knowledge file (with built-in fallback). Optional … |
## Adapter status

| Source | Status | Auth |
|---|---|---|
| Local Markdown files | ✅ since 0.1.0 | none |
| GitHub Issues via … | ✅ since 0.1.0 | … |
| Linear API (GraphQL) | ✅ since 0.2.2 | … |
| JIRA Cloud (REST v3, ADF → markdown) | ✅ since 0.2.3 | … |
| Notion databases (REST v1, blocks → markdown) | ✅ since 0.3.0 | … |
| Figma file frames (TEXT nodes + comments → markdown) | ✅ since 0.3.1 | … |
## Common workflows
Four patterns cover ~90% of real use. Each is one sentence to the AI client; the tools chain automatically.
### 1. Spec → test → run → coverage (the main loop)

"Fetch LIN-123 from Linear, extract scenarios, generate Playwright tests with mk-qa-master, run them, and update the coverage matrix."

Chains: `fetch_spec` → `parse_spec` → `extract_scenarios` → `mk-qa-master.generate_test` (×N) → `link_test_to_spec` (×N) → `mk-qa-master.run_tests` → `get_coverage_matrix`.
### 2. Spec health check

"Review every in-progress spec for quality issues and give me a prioritized improvement plan."

Chains: `list_specs(status="in-progress")` → `analyze_spec_quality` → `propose_spec_improvements` → `get_optimization_plan`.
### 3. Rebuild traceability after a refactor

"Sync the spec ↔ test index from the test source — I just renamed a bunch of files."

Chains: `auto_link_tests` → `get_coverage_matrix`. Tests need `@spec: <ID>` docstring tags for auto-link to work; both comment-above-function and docstring-inside placements are supported.
### 4. Session warmup

"Before we work on specs today: load the spec-knowledge methodology and tell me which source is active."

Chains: `get_spec_source_info` → `get_spec_context`. Cheap; sets the methodology + adapter context for everything that follows.
## Sample output

`get_optimization_plan` markdown (excerpt):

```markdown
# Optimization plan

_Coverage matrix: 23 spec(s) tracked, 4 untested._
_Spec quality: 23 spec(s) analyzed, 17 finding(s)._
_Drift: 2 drifted, 0 stranded, 5 without ac_hash._

## 🔴 Layer 1 — Coverage gaps

**Specs with zero tests** (ranked first — every business risk lives here):

- `LIN-204` — Apply promo code at checkout
- `LIN-211` — Refund flow

## 🟡 Layer 2 — Spec quality

### `LIN-098` — Checkout latency (score: 80/100, findings: 4)

- 🟡 `ac-1`: Quantify (e.g., 'response within 200 ms') (evidence: `fast`)
- 🔴 `ac-3`: Rewrite to describe what the user observes (evidence: `redis`)

## 🔵 Layer 3 — Process drift

**Drifted** (spec changed since link — review affected tests):

- `LIN-123` — Apply discount at checkout · 4 test(s) potentially stale
```

`get_coverage_matrix` markdown (excerpt):

```markdown
# Coverage matrix

- Specs tracked: 23
- Specs shown (min_tests=0): 23
- Specs with zero tests: 4

| Spec      | Title                        | Tests | Last status |
|-----------|------------------------------|------:|-------------|
| `LIN-204` | Apply promo code at checkout |     0 | —           |
| `LIN-123` | Apply discount at checkout   |     4 | passed      |
```

## Install

```shell
uvx mk-spec-master    # or: pip install mk-spec-master
```

Add to your MCP client config:
```json
{
  "mcpServers": {
    "mk-spec-master": {
      "command": "uvx",
      "args": ["mk-spec-master"],
      "env": {
        "SPEC_SOURCE": "markdown_local",
        "SPEC_PROJECT_ROOT": "/path/to/your/project"
      }
    }
  }
}
```

Then in Claude / Cursor / Codex / Gemini CLI:
"Use mk-spec-master to parse SPEC-001, extract scenarios, and hand them to mk-qa-master so we can generate Playwright tests."
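With `SPEC_SOURCE=markdown_local`, a spec file could look something like the sketch below. The exact frontmatter schema is an assumption (field names `id`, `title`, `status` are inferred from the tool descriptions — check `docs/prd.md` for the real one):

```markdown
---
id: SPEC-001
title: Apply discount at checkout
status: in-progress
---

## Acceptance Criteria

- Eligible carts show the discounted total
- Expired codes are rejected with an error message
```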
## Why this is missing from the ecosystem

| Tool | Lock-in | What we do differently |
|---|---|---|
| AWS Kiro | AWS IDE only, proprietary | MCP-native, multi-client, open source |
| Jama Connect MCP | $50k+/year, enterprise-only | SMB / indie / AI-native segment |
| GitHub Spec Kit | Spec → code; runtime test coverage out of scope | We add runtime test coverage |
| testomat.io / JIRA MCPs | Single source (JIRA), SaaS lock-in | Multi-source, file-based index, no lock-in |

See `docs/prd.md` §4 for the full positioning.
## Walkthrough — spec → test → coverage (long form)
Given a Linear ticket LIN-123 "Apply discount at checkout" with 4 acceptance criteria:
You: "Use mk-spec-master to fetch LIN-123, extract scenarios, generate Playwright tests with mk-qa-master, run them, and report coverage."

The AI client chains:

```text
mk-spec-master.fetch_spec("LIN-123")
mk-spec-master.parse_spec(spec_id="LIN-123")           → 4 AC + ac_hash
mk-spec-master.extract_scenarios(...)                  → 1 happy + 3 error
mk-spec-master.generate_test_plan(spec_id="LIN-123")
for scenario in plan:
    mk-qa-master.generate_test(business_context=scenario.gherkin)
    mk-spec-master.link_test_to_spec(spec_id="LIN-123", test_node_id=..., ac_hash=...)
mk-qa-master.run_tests
mk-spec-master.get_coverage_matrix
```

The traceability index now records all 4 links with their AC hashes. Next sprint, when the spec changes, `get_drift_report` flags every test whose linked spec has moved — re-run the chain only for those tests.
## Status

| Milestone | Target | Status |
|---|---|---|
| v0.1 (MVP — markdown_local + github_issues, 7 tools) | June 2026 | ✅ Shipped |
| v0.2 (Linear, JIRA, coverage matrix, spec-quality coach, drift report) | Aug 2026 | ✅ Complete (0.2.3) |
| v0.3 (Notion, Figma, auto-link, optimization plan) | Oct 2026 | ✅ Complete (0.3.3) |
| v1.0 (production-ready, docs, integration recipes) | Q4 2026 | ⬜ |
## Family

- `mk-qa-master` — AI Test Master (AI 測試大師), the test-runner sibling. Tests run via mk-qa-master; coverage is tracked here.
- More `mk-*` MCPs are in design (`mk-perf-master`, `mk-a11y-master`).
## License

MIT © 2026 Jack Kao — see LICENSE (Chinese translation for reference: LICENSE.zh-TW.md; the English version is authoritative).
Plain-English version: personal use, commercial use, modification, redistribution — all allowed. The only requirement is that you keep the copyright and license notice in your copy. No warranty: if it breaks something in production, you can't come after the author.
If this saved you time, a coffee goes a long way. ☕