
Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| SPEC_SOURCE | No | The adapter to use (e.g., markdown_local, linear, jira, notion, figma, github_issues). Required. | |
| SPEC_PROJECT_ROOT | No | Root path of the project. The traceability index is stored at SPEC_PROJECT_ROOT/.mk-spec-master/index.json. Required. | |
| SPEC_PROJECT_KEY | No | Project key or identifier (e.g., team key for Linear, project key for JIRA, database ID for Notion, file key for Figma). Optional for Linear and JIRA, required for Notion and Figma. | |
| LINEAR_API_KEY | No | API key for Linear adapter. Required if SPEC_SOURCE=linear. | |
| JIRA_BASE_URL | No | JIRA base URL (e.g., https://your-domain.atlassian.net). Required if SPEC_SOURCE=jira. | |
| JIRA_EMAIL | No | Email for JIRA authentication. Required if SPEC_SOURCE=jira. | |
| JIRA_API_TOKEN | No | API token for JIRA authentication. Required if SPEC_SOURCE=jira. | |
| NOTION_TOKEN | No | API token for Notion adapter. Required if SPEC_SOURCE=notion. | |
| FIGMA_TOKEN | No | API token for Figma adapter. Required if SPEC_SOURCE=figma. | |
| GITHUB_TOKEN | No | GitHub token for GitHub Issues adapter. Optional if gh CLI is authenticated. | |
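
A minimal configuration for the local-markdown adapter might look like the following. The project path and owner/repo values are placeholders, not values from this server's docs:

```shell
# Minimal setup for the markdown_local adapter (path is a placeholder).
export SPEC_SOURCE=markdown_local
export SPEC_PROJECT_ROOT="$HOME/projects/my-app"

# Alternative: the GitHub Issues adapter. SPEC_PROJECT_KEY carries the
# owner/repo pair; GITHUB_TOKEN is optional if the gh CLI is authenticated.
# export SPEC_SOURCE=github_issues
# export SPEC_PROJECT_KEY=my-org/my-app
```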

Capabilities

Features and capabilities supported by this server

| Capability | Details |
| ---------- | ------- |
| tools | `{"listChanged": false}` |
| experimental | `{}` |

Tools

Functions exposed to the LLM to take actions

get_spec_source_info

Return the active spec source (selected via SPEC_SOURCE env var) plus all adapters built into this server. Call first in any session so the AI knows whether to expect markdown / GitHub / (future) Linear / JIRA / Notion semantics. Returns {active, available, version}.

list_specs

Enumerate specs from the active source. For markdown_local this globs SPEC_PROJECT_ROOT/specs/*.md and reads YAML frontmatter; for github_issues it queries the configured owner/repo (set via SPEC_PROJECT_KEY). Optional filters: status (string — adapter-specific: 'in-progress' for markdown, 'open'|'closed'|'all' for GitHub), label (string), limit (int, default 50). Returns {source, count, specs[]}.

fetch_spec

Pull a single spec by id from the active source. For markdown_local the id is either the id: field in frontmatter or the filename stem; for github_issues it's the issue number as a string. Returns the full Spec record {id, title, body, url, status, labels, metadata}. Pair with parse_spec to extract structured acceptance criteria.

parse_spec

Extract structured acceptance criteria from a spec body. Looks for headings matching 'Acceptance criteria' / 'AC' / '驗收條件' / '驗收標準' (case-insensitive, en + zh-TW + zh-CN) and pulls numbered or bulleted items beneath. Pass spec_id to use the active adapter, or raw_text to parse ad-hoc spec text without going through any source. Returns {spec_id, title, acceptance_criteria[], roles[], preconditions[], _meta}. Roles + preconditions are placeholders in v0.1 — filled by the v0.2 spec-quality coach.
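
The heading-plus-items heuristic described above might look like the following sketch. This is illustrative only, not the server's implementation, and the regexes are assumptions:

```python
import re

# Hypothetical sketch of the heuristic parse_spec describes: find an
# "Acceptance criteria"-style heading (English or Chinese) and collect
# the numbered or bulleted items beneath it.
AC_HEADING = re.compile(
    r"^#+\s*(acceptance criteria|ac|驗收條件|驗收標準)\s*$",
    re.IGNORECASE | re.MULTILINE,
)
ITEM = re.compile(r"^\s*(?:[-*]|\d+[.)])\s+(.+)$")

def extract_acceptance_criteria(body: str) -> list[str]:
    match = AC_HEADING.search(body)
    if not match:
        return []
    criteria = []
    for line in body[match.end():].splitlines():
        if line.lstrip().startswith("#"):  # next heading ends the section
            break
        m = ITEM.match(line)
        if m:
            criteria.append(m.group(1).strip())
    return criteria
```

So a body with a `## Acceptance criteria` heading followed by two bullets yields those two strings, and anything after the next heading is ignored.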

extract_scenarios

Turn parsed acceptance criteria into testable scenarios. Each scenario is classified as happy / edge / error via keyword heuristics, and split into Given / When / Then where possible. Pass the acceptance_criteria array returned by parse_spec. Returns {count, scenarios[]} where each scenario has {id, ac_id, title, kind, given, when, then}. Best paired with generate_test_plan for a markdown handoff to mk-qa-master.
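
A minimal sketch of the keyword classification and Given/When/Then split, assuming illustrative keyword lists (the server's actual heuristics are not documented here):

```python
import re

# Illustrative keyword lists; the server's real lists are unknown.
ERROR_WORDS = ("fail", "error", "invalid", "reject", "denied")
EDGE_WORDS = ("empty", "maximum", "minimum", "boundary", "timeout")

def classify(criterion: str) -> str:
    """Bucket a criterion as happy / edge / error by keyword."""
    text = criterion.lower()
    if any(w in text for w in ERROR_WORDS):
        return "error"
    if any(w in text for w in EDGE_WORDS):
        return "edge"
    return "happy"

# Best-effort split of "Given X, when Y, then Z" style criteria.
GWT = re.compile(
    r"given\s+(?P<given>.+?),\s*when\s+(?P<when>.+?),\s*then\s+(?P<then>.+)",
    re.IGNORECASE,
)

def split_gwt(criterion: str) -> dict:
    m = GWT.search(criterion)
    if m:
        return m.groupdict()
    return {"given": "", "when": "", "then": criterion}
```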

generate_test_plan

One-shot: fetch + parse + extract for a spec, then emit a markdown test plan with a business_context block per scenario ready to hand to mk-qa-master.generate_test(business_context=...). The AI client typically reads this plan, loops the scenarios, and calls mk-qa-master once per scenario. Set target_runner to hint the desired output (pytest / jest / cypress / go / maestro). Returns {spec_id, target_runner, scenario_count, markdown, scenarios[]}.

link_test_to_spec

Record that a test verifies a spec. Writes into SPEC_PROJECT_ROOT/.mk-spec-master/index.json (data ownership stays with the user). Re-linking the same node_id updates the timestamp instead of duplicating. Call this right after mk-qa-master.generate_test returns a node_id so the coverage matrix stays current. Pass spec_title / spec_source / spec_url (typically already known from earlier fetch_spec) to cache them into the index so get_coverage_matrix can render titles without re-fetching from the source. Pass ac_hash (from parse_spec._meta.ac_hash) to enable drift detection via get_drift_report. Returns {action: 'added'|'updated', spec_id, test_node_id, total_links_for_spec}.
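
The re-link-updates-timestamp behaviour could be sketched as below. The index file layout is an assumption; only the path and the added/updated semantics come from the description above:

```python
import json
import time
from pathlib import Path

def link_test_to_spec(project_root: str, spec_id: str, test_node_id: str) -> str:
    """Sketch of the idempotent link write: re-linking the same
    (spec_id, test_node_id) pair updates the timestamp, not a new row."""
    index_path = Path(project_root) / ".mk-spec-master" / "index.json"
    index_path.parent.mkdir(parents=True, exist_ok=True)
    index = json.loads(index_path.read_text()) if index_path.exists() else {"links": []}
    now = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    for link in index["links"]:
        if link["spec_id"] == spec_id and link["test_node_id"] == test_node_id:
            link["linked_at"] = now
            action = "updated"
            break
    else:
        index["links"].append(
            {"spec_id": spec_id, "test_node_id": test_node_id, "linked_at": now}
        )
        action = "added"
    index_path.write_text(json.dumps(index, indent=2))
    return action
```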

get_coverage_matrix

Snapshot of every spec ↔ test link recorded in the local index. Returns both structured rows and a ready-to-paste markdown table — call this when a user asks 'what's tested' or 'which specs have no tests'. Filters: min_tests (default 0; leave at 0 to surface untested specs, set to 1 to hide them) and include_orphans (default true). Returns {specs_total, specs_shown, specs_untested, orphan_count, rows[], markdown}.
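
The markdown half of that output could be built roughly like this. A sketch over assumed row shapes; it only covers specs that have links, whereas the real tool also knows about untested specs from the index:

```python
def coverage_markdown(links: list[dict], min_tests: int = 0) -> str:
    """Group recorded links by spec and render a markdown table."""
    by_spec: dict[str, list[str]] = {}
    for link in links:
        by_spec.setdefault(link["spec_id"], []).append(link["test_node_id"])
    lines = ["| Spec | Tests | Count |", "| --- | --- | --- |"]
    for spec_id, tests in sorted(by_spec.items()):
        if len(tests) >= min_tests:
            lines.append(f"| {spec_id} | {', '.join(tests)} | {len(tests)} |")
    return "\n".join(lines)
```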

get_drift_report

For every spec in the index that has a stored ac_hash, fetch the live spec via the active adapter and recompute its ac_hash to detect drift. Buckets the results into fresh (no drift), drifted (linked tests may be stale), unknown (no hash stored — re-link with ac_hash from parse_spec._meta.ac_hash to enable), and stranded (spec_id can no longer be fetched — deleted, closed, or source mismatch). Use when a user asks 'has anything changed' / 'what's out of sync' / 'is my test suite still aligned with specs'. Optional spec_id narrows the check to one spec. Returns counts + per-bucket details + markdown summary.
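
The ac_hash comparison behind the four buckets might work as in this sketch. The normalization and hash length are assumptions; only the bucket names come from the description above:

```python
import hashlib

def ac_hash(acceptance_criteria: list[str]) -> str:
    """Hash the normalized AC list so later fetches compare cheaply."""
    normalized = "\n".join(c.strip().lower() for c in acceptance_criteria)
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:12]

def drift_bucket(stored_hash, live_criteria):
    """Bucket one spec: fresh / drifted / unknown / stranded."""
    if stored_hash is None:
        return "unknown"    # never linked with an ac_hash
    if live_criteria is None:
        return "stranded"   # spec can no longer be fetched
    return "fresh" if ac_hash(live_criteria) == stored_hash else "drifted"
```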

analyze_spec_quality

Run heuristic checks against a spec's body: vague language without measurable thresholds (fast / easy / intuitive / 現代 'modern' / 順暢 'smooth' ...), implementation-detail leakage in AC ('uses Redis', '透過 X 服務' 'via service X'), and references to roles ('logged-in user', '管理員' 'administrator') without a Preconditions section. Pass spec_id for one spec, raw_text to analyze a freeform draft, or neither to sweep every spec from the active source. Returns {source, specs_analyzed, total_findings, results[]}. Each result has {spec_id, title, ac_count, score (0–100), findings[]} where each finding carries severity (info / warn / error), evidence, and a suggested rewrite. Pair with propose_spec_improvements for the markdown coach plan.

propose_spec_improvements

Take analyze_spec_quality output and produce a PM-facing markdown coach plan grouping findings by spec and issue type, with concrete rewrite suggestions per finding. If analysis is not provided, runs analyze_spec_quality inline with the remaining arguments. Use this when a user says 'how do I improve this spec' or 'review my PRD'. Returns {markdown, actions[]}.

auto_link_tests

Scan a directory of test files for @spec: <ID> tags in docstrings or comments and call link_test_to_spec for each (test, spec) pair found. Supports Python (def test_*), JS/TS (it('...') / test('...')), and Go (func TestX(t *testing.T)). For each tag the nearest preceding test function within 30 lines is treated as the owner; test_node_id is <relative-path>::<test-name>. Use when a user says 'rebuild the spec coverage' or 'sync test → spec links after the refactor'. Set dry_run: true to preview without writing. Returns {test_dir, files_scanned, tags_found, links_added, links_updated, skipped, discoveries[], markdown, dry_run}.

get_optimization_plan

Three-layer coach output that integrates coverage / quality / drift signals into one prioritized markdown plan. Layer 1 surfaces untested + thin-coverage specs; Layer 2 ranks specs by severity-weighted quality findings; Layer 3 surfaces drifted + stranded specs. Use this when a user asks 'what should we fix next' / 'show me the weekly plan' / 'review the suite'. Toggle layers via include_coverage / include_quality / include_drift booleans (all default true). top_n caps per-layer detail rows (default 10). Returns {specs_total, *_count, *[], markdown}.

init_spec_knowledge

Create SPEC_PROJECT_ROOT/spec-knowledge.md from a starter template. The file carries spec methodology (EARS, INVEST, AC quality rules) plus TODO sections for the team's domain rules / actors / glossary. Other mk-spec-master tools lean on this indirectly via get_spec_context. Idempotent — refuses to overwrite an existing file unless overwrite=true. Optional project_name labels the file.

get_spec_context

Read SPEC_PROJECT_ROOT/spec-knowledge.md (or fall back to built-in defaults if missing). Call near the start of a session so the same methodology + domain glossary colours every spec interpretation that follows. Optional section filters to a single heading (partial-match, case-insensitive) — e.g. section='actors' returns just the actors block. Returns {source: 'file'|'builtin', content, ...}.

get_spec_history

Return the last N snapshots archived by get_optimization_plan plus trend deltas (current vs ~7 days ago, vs ~30 days ago) for spec count, untested, quality findings, drift, stranded, and unknown-hash specs. Use when a user asks 'are we improving' / 'show me the trend' / 'how did we do this month'. Requires at least 2 snapshots for trend; degrades gracefully with fewer. Returns {snapshots_total, snapshots[], trend[], markdown}.

get_drift_signature

Scan the recent snapshot history for chronic problems: same spec_id repeatedly appearing in drifted / unknown / low-quality buckets. Specs flagged as 'unstable' (drifts every cycle), 'chronic_low_quality' (vague every cycle), or 'chronic_unhashed' (never gets a hash recorded). Use when a user asks 'which specs keep causing trouble' / 'what's the long-running pain'. Args: window (snapshots to scan, default 5), threshold (min recurrence to flag, default 3). Returns {ready, snapshots_scanned, chronic[], markdown}.

get_telemetry

Aggregate the tool-usage log written by this server. Surfaces: which tools are called most, which fail most (error rate), p50 / p95 latency per tool, and which declared tools have never been called in the window (dead surface). Records contain only tool name + timing + ok flag — argument values are never logged. Use when a user asks 'what's the AI actually using' / 'which tools are slow' / 'which tools are unused'. Args: days (window, default 30), include_inactive (bool, default true). Returns {records_total, window_days, tools[], inactive[], markdown}.

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/kao273183/mk-spec-master'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.