Server Configuration

Describes the environment variables required to run the server.

Name | Required | Description | Default
JIRA_EMAIL | No | JIRA email. |
PLAN_SOURCE | No | The source adapter: markdown_local, linear, jira, or notion. |
NOTION_TOKEN | No | Notion token. |
JIRA_BASE_URL | No | JIRA base URL. |
JIRA_API_TOKEN | No | JIRA API token. |
LINEAR_API_KEY | No | Linear API key. |
PLAN_PROJECT_KEY | No | Optional project key or database ID. |
PLAN_PROJECT_ROOT | No | Root path for the markdown_local adapter. |
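
A minimal launch sketch in Python, showing how a client might wire these variables before starting the server. The executable name, paths, and commented credential values are assumptions, not part of this server's documentation.

import os
import subprocess

# Pick the adapter and export the credentials that adapter needs.
env = dict(os.environ)
env.update({
    "PLAN_SOURCE": "markdown_local",           # markdown_local | linear | jira | notion
    "PLAN_PROJECT_ROOT": "/path/to/project",   # only read by the markdown_local adapter
    # "LINEAR_API_KEY": "...",                                              # linear
    # "JIRA_BASE_URL": "...", "JIRA_EMAIL": "...", "JIRA_API_TOKEN": "...", # jira
    # "NOTION_TOKEN": "...", "PLAN_PROJECT_KEY": "<notion database id>",    # notion
})

# "mk-plan-master" as an executable name is a guess; use whatever entry point
# your MCP client configuration launches.
subprocess.run(["mk-plan-master"], env=env)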

Capabilities

Features and capabilities supported by this server

Capability | Details
tools | { "listChanged": false }
experimental | {}

Tools

Functions exposed to the LLM to take actions

get_plan_source_info

Return the active initiative source (selected via PLAN_SOURCE env var) plus all adapters built into this server. Call first in any session so the AI knows whether to expect markdown / Linear / JIRA / Notion semantics. Returns {active, available, version}.

list_initiatives

Enumerate product initiatives from the active source. For markdown_local this globs PLAN_PROJECT_ROOT/initiatives/*.md and reads YAML-ish frontmatter; for linear it queries the GraphQL API for issues in triage / backlog / unstarted state types; for jira it runs JQL filtered to statusCategory='To Do'; for notion it queries the database and filters to status in (Triage / Backlog / Idea). Optional filters: status (string — adapter-specific), label (string — single label match), limit (int, default 50). Returns {source, count, initiatives[]}.
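
A rough Python equivalent of the markdown_local path described above, assuming PyYAML for the frontmatter and assuming the frontmatter carries status and labels fields; the server's actual parser and field names may differ.

import glob
import os
import yaml  # PyYAML; the server's "YAML-ish" reader may be lighter than this

def list_markdown_initiatives(project_root: str, label: str | None = None, limit: int = 50):
    # Glob PLAN_PROJECT_ROOT/initiatives/*.md and read each file's frontmatter.
    results = []
    for path in sorted(glob.glob(os.path.join(project_root, "initiatives", "*.md"))):
        text = open(path, encoding="utf-8").read()
        meta = {}
        if text.startswith("---"):
            _, frontmatter, _body = text.split("---", 2)
            meta = yaml.safe_load(frontmatter) or {}
        if label and label not in (meta.get("labels") or []):
            continue
        results.append({"id": os.path.splitext(os.path.basename(path))[0], **meta})
        if len(results) >= limit:
            break
    return {"source": "markdown_local", "count": len(results), "initiatives": results}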

fetch_initiative

Pull a single initiative by id from the active source. Returns the full Initiative record {id, source, title, body, url, status, labels, raw_metadata}. raw_metadata holds scoring inputs (reach / impact / confidence / effort / okr) plus any source-specific fields. Pair with score_initiative to get a RICE / Impact-Effort rank.
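
The record shape, sketched as a Python dataclass for reference; the field list comes from the description above, the types are assumptions.

from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class Initiative:
    id: str
    source: str                # markdown_local | linear | jira | notion
    title: str
    body: str
    url: Optional[str]
    status: Optional[str]
    labels: list[str] = field(default_factory=list)
    # Scoring inputs (reach / impact / confidence / effort / okr) plus any
    # source-specific fields land here.
    raw_metadata: dict[str, Any] = field(default_factory=dict)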

add_initiative

Write a new markdown_local initiative into PLAN_PROJECT_ROOT/initiatives/.md. Use this to capture an idea you (the AI client) already gathered via WebFetch / chat summary / customer-call notes — plan-master deliberately does NOT crawl URLs; you summarize, this tool persists. Only works when PLAN_SOURCE=markdown_local; for Linear / JIRA / Notion, create the issue in that platform instead. If id is omitted, auto-generates IDEA-NNN. Returns {id, written_to, source, overwritten, next_step_hint}. Typical chain: add_initiative -> score_initiative -> generate_spec_draft -> mk-spec-master.parse_spec.

analyze_initiative

Force a senior-PM analysis SOP on one initiative BEFORE scoring. Returns the initiative body + a structured checklist (target users / competition / market signal / risks / MVP scope / out-of-scope / RICE rationale) the AI client must fill in inline as its response. Loads plan-knowledge.md context if present. The tool does NOT call an LLM — it scaffolds the prompt so the AI doesn't shortcut into a shallow read. Use this WHENEVER an idea originates from chat / WebFetch and lacks a thorough product analysis. After filling the checklist, call add_initiative(overwrite=true) with the enriched body, then score_initiative. Framework options: 'default' (7 sections), 'lite' (4 sections), 'lean_canvas' (9 blocks). Returns {initiative, framework, methodology_context, analysis_checklist, instructions, next_step_hint}.

score_initiative

Score one initiative with RICE or Impact-Effort. Pass initiative_id to score a source-resolved record (RICE inputs are read from raw_metadata) or raw_text + overrides for an ad-hoc score without a source record. method = 'rice' (default) or 'impact_effort'. overrides = {reach, impact, confidence, effort} — any subset; takes precedence over what was in the source. RICE tier thresholds: P0 > 25, P1 10..25, P2 3..10, P3 < 3. Every call with initiative_id appends a scored decision to the index at PLAN_PROJECT_ROOT/.mk-plan-master/index.json. Returns {initiative_id, method, score, breakdown, tier, rationale, stored}.
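
A worked sketch of the RICE arithmetic and the documented tier cut-offs; how the server treats scores that land exactly on a boundary is an assumption.

def rice_score(reach: float, impact: float, confidence: float, effort: float) -> dict:
    # RICE = (reach * impact * confidence) / effort
    score = (reach * impact * confidence) / effort
    if score > 25:
        tier = "P0"
    elif score >= 10:
        tier = "P1"
    elif score >= 3:
        tier = "P2"
    else:
        tier = "P3"
    return {"score": round(score, 2), "tier": tier}

# e.g. reach=400, impact=2, confidence=0.8, effort=20 person-weeks
# -> (400 * 2 * 0.8) / 20 = 32.0 -> P0
print(rice_score(400, 2, 0.8, 20))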

rank_backlog

Score every initiative the active adapter exposes and return the top-N descending. Pure arithmetic, no LLM call — the rationale string is generated from the breakdown so the output stays deterministic. Optional filters mirror list_initiatives: status, label. method defaults to 'rice'; limit defaults to 10. Auto-archives a snapshot to .mk-plan-master/history/.json so get_planning_history / get_decision_signature can compute trend deltas across cycles (debounced to 5 minutes by default). Returns {method, count, ranking[]}.

generate_spec_draft

Produce a markdown spec draft for one initiative, shaped so mk-spec-master.parse_spec(raw_text=...) can ingest it verbatim. Three templates: 'default' (title / source / OKR / context / acceptance criteria / out-of-scope), 'lite' (title / context / acceptance criteria), 'detailed' (default + risks + dependencies + estimated effort). Appends a spec_generated decision to the index. Returns {markdown, suggested_filename, template_used, ready_for_mk_spec_master, next_step_hint}.

generate_roadmap

Pack the ranked backlog into a quarterly roadmap markdown, respecting an engineering capacity envelope (in engineer-months × 4 person-weeks) minus a buffer (default 20%). Uses a greedy score-per-effort packer — items with the highest RICE-per-pw ratio land first. Output is split into P0 commitments / P1 commitments / P2 stretch / Deferred / Capacity summary. Required: capacity_engineer_months (float), period (str like 'Q3 2026'). Optional: okr (str — pinned at top), method (default 'rice'), buffer_pct (default 20). Returns {markdown, scheduled[], deferred[], capacity_used_pw, capacity_total_pw, buffer_pw, method, period}.
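
A sketch of the greedy score-per-effort packing described above, assuming each ranked item carries a score and a person-week effort estimate under the hypothetical keys 'score' and 'effort_pw'; the capacity arithmetic follows the description (engineer-months * 4, minus the buffer).

def pack_roadmap(ranked: list[dict], capacity_engineer_months: float, buffer_pct: float = 20.0) -> dict:
    capacity_total_pw = capacity_engineer_months * 4
    buffer_pw = capacity_total_pw * buffer_pct / 100
    budget_pw = capacity_total_pw - buffer_pw

    scheduled, deferred, used_pw = [], [], 0.0
    # Highest score per person-week first, so dense wins land before big bets.
    for item in sorted(ranked, key=lambda i: i["score"] / i["effort_pw"], reverse=True):
        if used_pw + item["effort_pw"] <= budget_pw:
            scheduled.append(item)
            used_pw += item["effort_pw"]
        else:
            deferred.append(item)
    return {"scheduled": scheduled, "deferred": deferred,
            "capacity_used_pw": used_pw, "capacity_total_pw": capacity_total_pw,
            "buffer_pw": buffer_pw}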

analyze_roadmap_balance

Classify the top-N ranked initiatives into feature / tech_debt / strategic / unlabeled buckets by label, then surface ratio + score-share + a terse heuristic advisory. Use when a user asks 'is the roadmap balanced' / 'are we starving tech debt' / 'do we have any strategic bets'. Label vocabularies are configurable: feature_labels (default ['feature', 'product']), tech_debt_labels (default ['tech-debt', 'refactor', 'infra']), strategic_labels (default ['strategic', 'bet', 'moonshot']). Returns {method, totals, ratio_pct, score_share_pct, advisory}.
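
A sketch of the bucketing logic, assuming each ranked entry carries 'labels' and 'score'; the default vocabularies are the ones listed above, and the first matching bucket wins.

def classify_balance(ranking: list[dict],
                     feature_labels=("feature", "product"),
                     tech_debt_labels=("tech-debt", "refactor", "infra"),
                     strategic_labels=("strategic", "bet", "moonshot")) -> dict:
    buckets = {"feature": [], "tech_debt": [], "strategic": [], "unlabeled": []}
    for item in ranking:
        labels = set(item.get("labels", []))
        if labels & set(feature_labels):
            buckets["feature"].append(item)
        elif labels & set(tech_debt_labels):
            buckets["tech_debt"].append(item)
        elif labels & set(strategic_labels):
            buckets["strategic"].append(item)
        else:
            buckets["unlabeled"].append(item)
    total_score = sum(i["score"] for i in ranking) or 1.0
    return {
        "totals": {k: len(v) for k, v in buckets.items()},
        "ratio_pct": {k: round(100 * len(v) / max(len(ranking), 1), 1) for k, v in buckets.items()},
        "score_share_pct": {k: round(100 * sum(i["score"] for i in v) / total_score, 1)
                            for k, v in buckets.items()},
    }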

init_plan_knowledge

Create PLAN_PROJECT_ROOT/plan-knowledge.md from a starter template. The file carries methodology (RICE, WSJF, Impact-Effort, OKR mapping, INVEST, personas / job-stories, decision-log convention) plus TODO sections for active OKRs / personas / strategic bets / tech-debt zones / glossary / roadmap rhythm. Other mk-plan-master tools lean on this indirectly via get_plan_context. Idempotent — refuses to overwrite an existing file unless overwrite=true. Optional project_name labels the file. Override location via the PLAN_KNOWLEDGE_FILE env var.

get_plan_context

Read PLAN_PROJECT_ROOT/plan-knowledge.md (or fall back to built-in defaults if missing). Call near the start of a planning session so the same methodology + domain glossary colours every scoring decision that follows. Optional section filters to a single heading (partial-match, case-insensitive) — e.g. section='RICE' returns just the RICE block. Returns {source: 'file'|'builtin', content, ...}.

get_planning_history

Return trend deltas (current vs ~7 days ago / vs window_days ago) for the top-10 RICE-ranked backlog snapshots archived by rank_backlog. Surfaces churn (entries added/dropped) plus the average score of the current top-10. Use when a user asks 'are we improving' / 'show me the trend' / 'is the same idea always at the top'. Returns {snapshots_count, trend_7d, trend_30d, summary}.

get_decision_signature

Scan history + index for chronic patterns: ghost initiatives (appear in top-10 in >50% of snapshots but never spec_generated), score whiplash (RICE swings >50% between snapshots → bad data quality), orphan OKRs (OKRs in the index with zero initiatives in the current top-10). Use when a user asks 'which ideas keep getting punted' / 'why does this score keep moving' / 'which OKR has no execution'. Args: window_days (default 30). Returns {ghost_initiatives, score_whiplash, orphan_okrs, summary}.
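
One of the three heuristics, score whiplash, sketched in Python; it assumes snapshots reduce to {initiative_id: score} maps in chronological order, which is a simplification of whatever rank_backlog actually archives.

def find_score_whiplash(snapshots: list[dict], threshold_pct: float = 50.0) -> dict:
    # Flag initiatives whose score swings more than threshold_pct between
    # consecutive snapshots.
    flagged = {}
    for prev, curr in zip(snapshots, snapshots[1:]):
        for initiative_id, new_score in curr.items():
            old_score = prev.get(initiative_id)
            if not old_score:
                continue
            swing_pct = abs(new_score - old_score) / old_score * 100
            if swing_pct > threshold_pct:
                flagged.setdefault(initiative_id, []).append(round(swing_pct, 1))
    return flagged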

get_telemetry

Aggregate the tool-usage log written by this server. Surfaces: which tools are called most, which fail most (error rate), p50 / p95 / p99 latency, and which declared tools have never been called in the window (dead surface). Records contain only tool name + timing + ok flag — argument values are never logged. Use when a user asks 'what's the AI actually using' / 'which tools are slow' / 'which tools are unused'. Args: window_days (default 7). Returns {calls_total, calls_by_tool, error_rate_pct, p50_ms, p95_ms, p99_ms, top_tools[], dead_tools[]}.
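
A minimal version of the latency summary, using the standard-library percentile helper; the server's exact interpolation method and per-tool grouping are assumptions.

import statistics

def summarize_latency(durations_ms: list[float]) -> dict:
    if len(durations_ms) < 2:
        return {"p50_ms": None, "p95_ms": None, "p99_ms": None}
    q = statistics.quantiles(durations_ms, n=100)  # q[i] approximates the (i+1)-th percentile
    return {"p50_ms": round(q[49], 1), "p95_ms": round(q[94], 1), "p99_ms": round(q[98], 1)}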

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/kao273183/mk-plan-master'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.