paulieb89

PyP6Xer MCP Server

pyp6xer_earned_value

Read-only · Idempotent

Calculate Earned Value Management metrics from Primavera P6 XER data. Provides BCWS, BCWP, ACWP, SPI, CPI, CV, SV, EAC, VAC.

Instructions

Calculate Earned Value Management (EVM) metrics.

Metrics:

  • BCWS (PV): Budgeted Cost of Work Scheduled = total budgeted cost × duration %

  • BCWP (EV): Budgeted Cost of Work Performed = sum of (budget × % complete) per task

  • ACWP (AC): Actual Cost of Work Performed = sum of actual costs

  • SPI: Schedule Performance Index = EV / PV

  • CPI: Cost Performance Index = EV / AC

  • CV: Cost Variance = EV - AC

  • SV: Schedule Variance = EV - PV

  • EAC: Estimate at Completion = BAC / CPI

  • VAC: Variance at Completion = BAC - EAC
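The formulas above can be sanity-checked with a small worked example (all figures here are hypothetical, not taken from any real XER file):

```python
# Hypothetical project: BAC = 1000, 40% of the schedule elapsed,
# 350 worth of work earned, 400 actually spent.
bac = 1000.0
bcws = bac * 0.40          # PV: planned value at 40% duration = 400.0
bcwp = 350.0               # EV: earned value
acwp = 400.0               # AC: actual cost

spi = bcwp / bcws          # 0.875 -> behind schedule
cpi = bcwp / acwp          # 0.875 -> over budget
cv = bcwp - acwp           # -50.0
sv = bcwp - bcws           # -50.0
eac = bac / cpi            # ~1142.86: projected total cost at completion
vac = bac - eac            # ~-142.86: projected overrun

print(round(spi, 3), round(cpi, 3), round(eac, 2), round(vac, 2))
```

Both indices below 1.0 signal trouble: the project has earned less value than planned (SPI) and paid more than the value earned (CPI), so EAC projects a cost overrun.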

Input Schema

Name       Required  Description                                                                      Default
cache_key  No        Cache key identifying the loaded XER file (set when calling pyp6xer_load_file)   default
proj_id    No        Project ID or short name; uses first project if omitted                          None

Output Schema

Name    Required  Description
result  Yes       JSON string containing the computed EVM metrics
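Since `result` is a JSON string, callers typically parse it before use. A hypothetical payload, mirroring the keys emitted by the handler shown in the Implementation Reference:

```python
import json

# Hypothetical `result` string, shaped like the handler's json.dumps(...) output.
result = """{
  "data_date": "2024-01-31",
  "BAC": 1000.0,
  "BCWS_PV": 400.0,
  "BCWP_EV": 350.0,
  "ACWP_AC": 400.0,
  "SPI": 0.875,
  "CPI": 0.875,
  "CV": -50.0,
  "SV": -50.0,
  "EAC": 1142.86,
  "VAC": -142.86,
  "interpretation": {"SPI": "Behind schedule", "CPI": "Over budget"}
}"""

metrics = json.loads(result)
print(metrics["SPI"], metrics["interpretation"]["SPI"])
```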

Implementation Reference

  • The handler function that calculates Earned Value Management (EVM) metrics: BCWS (PV), BCWP (EV), ACWP (AC), SPI, CPI, CV, SV, EAC, VAC.
    def pyp6xer_earned_value(
        cache_key: Annotated[str, Field(description="Cache key identifying the loaded XER file (set when calling pyp6xer_load_file)")] = "default",
        proj_id: Annotated[str | None, Field(description="Project ID or short name; uses first project if omitted")] = None,
        ctx: Context = None,
    ) -> str:
        """Calculate Earned Value Management (EVM) metrics.
    
        Metrics:
        - BCWS (PV): Budgeted Cost of Work Scheduled = total budgeted cost × duration %
        - BCWP (EV): Budgeted Cost of Work Performed = sum of (budget × % complete) per task
        - ACWP (AC): Actual Cost of Work Performed = sum of actual costs
        - SPI: Schedule Performance Index = EV / PV
        - CPI: Cost Performance Index = EV / AC
        - CV:  Cost Variance = EV - AC
        - SV:  Schedule Variance = EV - PV
        - EAC: Estimate at Completion = BAC / CPI
        - VAC: Variance at Completion = BAC - EAC
        """
        xer = _get_xer(ctx, cache_key)
        proj = _get_project(xer, proj_id)
        tasks = proj.tasks if proj_id else list(xer.tasks.values())
    
        bac = sum(t.budgeted_cost for t in tasks)  # Budget at Completion
        acwp = sum(t.actual_cost for t in tasks)
        bcwp = sum(t.budgeted_cost * t.percent_complete for t in tasks)
        bcws = bac * proj.duration_percent  # simplified PV
    
        spi = round(bcwp / bcws, 3) if bcws else None
        cpi = round(bcwp / acwp, 3) if acwp else None
        eac = round(bac / cpi, 2) if cpi else None
        vac = round(bac - eac, 2) if eac else None
    
        return json.dumps({
            "data_date": _fmt_date(proj.data_date),
            "BAC": round(bac, 2),
            "BCWS_PV": round(bcws, 2),
            "BCWP_EV": round(bcwp, 2),
            "ACWP_AC": round(acwp, 2),
            "SPI": spi,
            "CPI": cpi,
            "CV": round(bcwp - acwp, 2),
            "SV": round(bcwp - bcws, 2),
            "EAC": eac,
            "VAC": vac,
            "interpretation": {
                "SPI": (
                    "On schedule" if spi and spi >= 1.0
                    else "Behind schedule" if spi else "N/A"
                ),
                "CPI": (
                    "Under budget" if cpi and cpi >= 1.0
                    else "Over budget" if cpi else "N/A"
                ),
            },
        }, indent=2)
  • Tool registration with FastMCP decorator defining the tool name 'pyp6xer_earned_value' and its parameters (cache_key, proj_id, ctx with Pydantic annotations).
    @mcp.tool(annotations=ToolAnnotations(readOnlyHint=True, destructiveHint=False, idempotentHint=True, openWorldHint=False))
    def pyp6xer_earned_value(
        cache_key: Annotated[str, Field(description="Cache key identifying the loaded XER file (set when calling pyp6xer_load_file)")] = "default",
        proj_id: Annotated[str | None, Field(description="Project ID or short name; uses first project if omitted")] = None,
        ctx: Context = None,
    ) -> str:
  • server.py:1355-1357 (registration)
    Registration via the @mcp.tool decorator: registers the function as an MCP tool with read-only, non-destructive, and idempotent hints (openWorldHint is set to False).
    @mcp.tool(annotations=ToolAnnotations(readOnlyHint=True, destructiveHint=False, idempotentHint=True, openWorldHint=False))
    def pyp6xer_earned_value(
  • The pyp6xer_generate_report helper reuses the same earned value calculation logic (lines 1490-1495) for monthly report generation.
    bac  = sum(t.budgeted_cost for t in tasks)
    acwp = sum(t.actual_cost for t in tasks)
    bcwp = sum(t.budgeted_cost * t.percent_complete for t in tasks)  # 0.0–1.0 scale
    bcws = bac * proj.duration_percent
    spi  = round(bcwp / bcws, 3) if bcws else None
    cpi  = round(bcwp / acwp, 3) if acwp else None
    
    return json.dumps({
        "report_type": "monthly_progress",
        "project": {
            "short_name": proj.short_name,
            "name": proj.name,
            "data_date": _fmt_date(proj.data_date),
            "planned_finish": _fmt_date(proj.finish_date),
        },
        "progress": {
            "total_activities": len(tasks),
            "not_started": not_started,
            "in_progress": in_progress,
            "completed": completed,
            "weighted_percent_complete": round(weighted_pct * 100, 1),
            "milestones_total": len(milestones),
            "milestones_complete": ms_done,
        },
        "health": {
            "score": score,
            "rating": (
                "Excellent" if score >= 85 else
                "Good" if score >= 70 else
                "Fair" if score >= 55 else "Poor"
            ),
            "issues": issues,
        },
        "critical_path": {
            "count": len(critical),
            "pct_of_total": round(len(critical) / len(tasks) * 100, 1) if tasks else 0,
        },
        "slipping_activities": slipping[:10],
        "slipping_count": len(slipping),
        "earned_value": {
            "BAC": round(bac, 2),
            "SPI": spi,
            "CPI": cpi,
            "ACWP": round(acwp, 2),
            "BCWP": round(bcwp, 2),
        },
        "cost_summary": {
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly and idempotent. The description adds formulas and metric definitions, which provide some behavioral context beyond annotations. However, it does not disclose any side effects or data dependencies beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is structured as a list of formulas, which is easy to parse. It is not overly verbose, but the formula details could be slightly more condensed. Still, it earns its place by providing precise metric definitions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given there is an output schema (not shown), the description explains the computed metrics and their formulas, which is helpful for an agent. It covers the core purpose, but could mention that budgets and progress data must be present in the loaded XER file.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers both parameters with clear descriptions (cache_key for loaded file, proj_id for project selection). The tool description does not add further meaning about parameter usage or constraints, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The tool name and description clearly state it calculates EVM metrics. The description lists all key metrics with formulas, distinguishing it clearly from sibling analysis tools like critical_path or float_analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly state when to use this tool over alternatives, nor does it specify prerequisites or exclusions. The context hints at requiring a loaded XER file via cache_key, but no direct guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
