Glama

fetch_initiative

Pull an initiative record by ID with scoring inputs for reach, impact, confidence, effort, and okr. Use with score_initiative to rank by RICE or Impact-Effort.

Instructions

Pull a single initiative by id from the active source. Returns the full Initiative record {id, source, title, body, url, status, labels, raw_metadata}. raw_metadata holds scoring inputs (reach / impact / confidence / effort / okr) plus any source-specific fields. Pair with score_initiative to get a RICE / Impact-Effort rank.
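For illustration, a returned Initiative record might look like the following. All field values here are invented, and the source name "markdown_local" is an assumption about the default adapter's name:

```python
# Hypothetical Initiative record, shaped like the fields listed above.
# Every value is illustrative, not taken from a real source.
record = {
    "id": "init-42",
    "source": "markdown_local",
    "title": "Improve onboarding flow",
    "body": "Reduce drop-off during signup.",
    "url": "initiatives/init-42.md",
    "status": "proposed",
    "labels": ["growth"],
    "raw_metadata": {
        "reach": 5000,        # scoring inputs consumed by score_initiative
        "impact": 2,
        "confidence": 0.8,
        "effort": 3,
        "okr": "Q3-activation",
    },
}
```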

Input Schema

Name            Required  Description  Default
initiative_id   Yes       —            —

Implementation Reference

  • The main tool handler function: extracts initiative_id from arguments, resolves the active source adapter, calls source.fetch(), and returns the full Initiative record (converted to dict) or a structured error.
    def fetch_initiative_tool(arguments: dict) -> dict[str, Any]:
        initiative_id = arguments.get("initiative_id") or arguments.get("id")
        if not initiative_id:
            return _error(
                "initiative_id is required",
                retryable=False,
                hint="Pass initiative_id (or id) — typically discovered via list_initiatives.",
            )
    
        try:
            source = get_source(SOURCE_NAME)
        except ValueError as exc:
            return _error(
                str(exc),
                retryable=False,
                hint="Set PLAN_SOURCE to one of the available adapters.",
            )
    
        try:
            initiative = source.fetch(str(initiative_id))
        except ValueError as exc:
            return _error(
                str(exc),
                retryable=False,
                hint="Run list_initiatives first to confirm the id exists in the active source.",
            )
        except Exception as exc:
            return _error(
                f"{type(exc).__name__}: {exc}",
                retryable=True,
                hint="Transient adapter error — retry, then check adapter credentials / network.",
            )
    
        return asdict(initiative)
  • Tool registration with inputSchema — defines 'initiative_id' as a required string property and includes the description of what the tool returns.
    Tool(
        name="fetch_initiative",
        description=(
            "Pull a single initiative by id from the active source. Returns "
            "the full Initiative record {id, source, title, body, url, status, "
            "labels, raw_metadata}. raw_metadata holds scoring inputs (reach / "
            "impact / confidence / effort / okr) plus any source-specific "
            "fields. Pair with score_initiative to get a RICE / Impact-Effort rank."
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "initiative_id": {"type": "string"},
            },
            "required": ["initiative_id"],
        },
    )
  • Dispatch table mapping 'fetch_initiative' string to the fetch_initiative_tool function.
    _DISPATCH: dict[str, Callable[[dict], dict]] = {
        "get_plan_source_info": initiatives_tools.get_plan_source_info_tool,
        "list_initiatives": initiatives_tools.list_initiatives_tool,
        "fetch_initiative": initiatives_tools.fetch_initiative_tool,
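The server's call-tool entry point presumably routes requests through this table. A hedged sketch of that routing (the handler name, the stand-in table, and the unknown-tool error shape are all assumptions for illustration):

```python
from typing import Callable

# Stand-in dispatch table; the real one maps every registered tool name.
_DISPATCH: dict[str, Callable[[dict], dict]] = {
    "fetch_initiative": lambda args: {"id": args.get("initiative_id")},
}

def handle_call_tool(name: str, arguments: dict) -> dict:
    # Look up the registered handler; unknown names return a structured
    # error rather than raising, matching the tools' error-dict style.
    handler = _DISPATCH.get(name)
    if handler is None:
        return {"error": f"unknown tool: {name}", "retryable": False}
    return handler(arguments)
```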
  • Internal _fetch_initiative helper used by the scoring tool — also calls get_source + source.fetch, re-used from the scoring module.
    def _fetch_initiative(initiative_id: str) -> Initiative:
        source = get_source(SOURCE_NAME)
        return source.fetch(initiative_id)
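The scoring tool itself is not excerpted here. The RICE computation it presumably runs over `raw_metadata` can be sketched as follows; the formula is the standard RICE score, but the function name and the default/missing-field handling are assumptions:

```python
def rice_score(raw_metadata: dict) -> float:
    # Standard RICE: (reach * impact * confidence) / effort.
    # Missing fields default to 0 (or 1 for effort) — an assumption,
    # not necessarily what score_initiative does.
    reach = float(raw_metadata.get("reach", 0))
    impact = float(raw_metadata.get("impact", 0))
    confidence = float(raw_metadata.get("confidence", 0))
    effort = float(raw_metadata.get("effort", 1)) or 1.0
    return (reach * impact * confidence) / effort
```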
  • MarkdownLocalAdapter.fetch() — the default adapter's implementation that parses frontmatter from .md files and returns the full Initiative dataclass with raw_metadata.
    def fetch(self, initiative_id: str) -> Initiative:
        for path in _initiative_files():
            meta, body = _parse_frontmatter(path.read_text(encoding="utf-8"))
            if _initiative_id_from(meta, path) != initiative_id:
                continue
    
            labels = meta.get("labels") or []
            if isinstance(labels, str):
                labels = [labels]
    
            raw_metadata = {
                k: v for k, v in meta.items() if k not in _RESERVED_KEYS
            }
    
            return Initiative(
                id=initiative_id,
                source=self.name,
                title=str(meta.get("title") or initiative_id),
                body=body.strip(),
                url=str(path),
                status=str(meta.get("status") or ""),
                labels=list(labels),
                raw_metadata=raw_metadata,
            )
    
        raise ValueError(
            f"initiative_id={initiative_id!r} not found under {INITIATIVES_DIR}"
        )
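For illustration, an initiative file this adapter could parse might look like the sample below. The parser sketch is an assumption (a minimal `---`-delimited splitter); the real `_parse_frontmatter` helper likely uses a proper YAML parser:

```python
# Hypothetical initiative file with frontmatter scoring inputs.
SAMPLE = """---
id: init-42
title: Improve onboarding flow
status: proposed
reach: 5000
impact: 2
---
Reduce drop-off during signup.
"""

def parse_frontmatter(text: str) -> tuple[dict, str]:
    # Minimal sketch: split off the "---"-delimited frontmatter block and
    # parse "key: value" lines. Values stay strings; no nested YAML support.
    _, front, body = text.split("---", 2)
    meta = {}
    for line in front.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body
```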
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It correctly indicates a read operation ('Pull') and lists the return fields, but omits details on error handling, auth requirements, and data freshness. The description is adequate but not rich.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with action and return structure. Every sentence adds value: what it does, what it returns, and how it pairs with another tool. No excessive wording.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema, the description compensates by listing the return fields, and it pairs with score_initiative for context. However, missing usage guidelines and parameter details reduce completeness for independent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description adds minimal value: 'by id' merely restates the parameter name. No information on format, source of ID, or constraints beyond the schema's type string.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The verb 'Pull' and the resource 'initiative by id' are specific and clear. The description distinguishes the tool from its siblings by mentioning the pairing with score_initiative and by implying it is a single-record fetch, unlike list_initiatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides context by mentioning the pairing with score_initiative for ranking, but it does not explicitly state when to use this tool versus alternatives like list_initiatives or add_initiative. It also lacks when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
