paulieb89

PyP6Xer MCP Server

pyp6xer_load_file

Read-only · Idempotent

Load a Primavera P6 XER file into the analysis cache from a local path, an HTTP/HTTPS URL, or a base64-encoded string, so multiple schedules can be open for analysis simultaneously.

Instructions

Load a Primavera P6 XER file into the analysis cache.

Accepts a local file path, an HTTP/HTTPS URL, or a base64-encoded string of the file's binary content. The loaded data is stored under cache_key so multiple schedules can be open simultaneously.
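For the base64 route, a client can prepare the file_content argument as shown below. This is only a sketch: the cache key name and the sample file bytes are hypothetical, and a real XER export would be read from disk rather than generated.

```python
import base64
import os
import tempfile

# Stand-in XER file for illustration; a real export begins with an ERMHDR line.
sample = b"ERMHDR\t19.12\n%T\tPROJECT\n"
with tempfile.NamedTemporaryFile(suffix=".xer", delete=False) as f:
    f.write(sample)
    path = f.name

with open(path, "rb") as f:
    encoded_b64 = base64.b64encode(f.read()).decode("ascii")
os.unlink(path)

# Arguments an MCP client could pass to pyp6xer_load_file:
arguments = {
    "cache_key": "baseline",      # optional; defaults to "default"
    "file_content": encoded_b64,  # use instead of file_path for direct uploads
}
```

Passing both file_path and file_content is unnecessary; per the handler below, file_content takes precedence when present.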

Input Schema

• cache_key (optional, default "default"): Cache key identifying the loaded XER file (set when calling pyp6xer_load_file)
• file_path (optional): Local file path or HTTP/HTTPS URL to the XER file
• file_content (optional): Base64-encoded XER file bytes (for direct uploads from Claude/ChatGPT)

Output Schema

• result (required)
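The result value is a JSON string. A sketch of its shape follows: the keys mirror the result dict built in the handler shown under Implementation Reference, but every value here is purely illustrative.

```python
import json

# Illustrative shape of the returned JSON string; all values are hypothetical.
example = {
    "status": "loaded",
    "cache_key": "default",
    "source": "/data/project.xer",
    "projects": ["DEMO – Demo Project"],
    "total_activities": 120,
    "total_relationships": 240,
    "total_resources": 15,
}
print(json.dumps(example, indent=2))
```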

Implementation Reference

  • The pyp6xer_load_file tool function definition (handler). It loads a Primavera P6 XER file from a local path, HTTP/HTTPS URL, or base64-encoded content, parses it, stores it in the shared cache, and returns a summary of the loaded data.
    @mcp.tool(annotations=ToolAnnotations(readOnlyHint=True, destructiveHint=False, idempotentHint=True, openWorldHint=False))
    def pyp6xer_load_file(
        cache_key: Annotated[str, Field(description="Cache key identifying the loaded XER file (set when calling pyp6xer_load_file)")] = "default",
        file_path: Annotated[str | None, Field(description="Local file path or HTTP/HTTPS URL to the XER file")] = None,
        file_content: Annotated[str | None, Field(description="Base64-encoded XER file bytes (for direct uploads from Claude/ChatGPT)")] = None,
        ctx: Context = None,
    ) -> str:
        """Load a Primavera P6 XER file into the analysis cache.
    
        Accepts a local file path, an HTTP/HTTPS URL, or a base64-encoded string
        of the file's binary content. The loaded data is stored under cache_key
        so multiple schedules can be open simultaneously.
    
        Args:
            cache_key: Identifier for this file in the cache (default: "default").
            file_path: Local path (e.g. "/data/project.xer") or URL
                       (e.g. "https://example.com/project.xer").
            file_content: Base64-encoded XER file bytes (for direct uploads).
        """
        source, xer, raw_text = _load_xer_content(file_path, file_content)
        header, table_order, raw_tables = _parse_raw_tables(raw_text)
    
        cache = ctx.lifespan_context["cache"]
        cache[cache_key] = {
            "xer": xer,
            "raw_tables": raw_tables,
            "table_order": table_order,
            "header": header,
            "source": source,
        }
    
        proj_names = [f"{p.short_name} – {p.name}" for p in xer.projects.values()]
        result = {
            "status": "loaded",
            "cache_key": cache_key,
            "source": source,
            "projects": proj_names,
            "total_activities": len(xer.tasks),
            "total_relationships": len(xer.relationships),
            "total_resources": len(xer.resources),
        }
        return json.dumps(result, indent=2)
  • server.py:299-300 (registration)
The @mcp.tool decorator used to register tools with annotations marking them as read-only, non-destructive, idempotent, and not open-world; note the excerpt captures the same decorator applied to an adjacent tool definition.
    @mcp.tool(annotations=ToolAnnotations(readOnlyHint=True, destructiveHint=False, idempotentHint=True, openWorldHint=False))
    def pyp6xer_get_activity_schema() -> str:
  • The _load_xer_content helper function used by pyp6xer_load_file to handle decoding from local path, URL, or base64 content.
    def _load_xer_content(file_path: str | None, file_content: str | None) -> tuple[str, Xer, str]:
        """Load XER from path/URL/base64. Returns (source_label, Xer, raw_text)."""
        if file_content:
            raw_bytes = base64.b64decode(file_content)
            text = raw_bytes.decode(Xer.CODEC, errors="replace")
            xer = Xer(text)
            return "base64_upload", xer, text
    
        if not file_path:
            raise ValueError("Provide either file_path or file_content.")
    
        if file_path.startswith(("http://", "https://")):
            response = httpx.get(file_path, timeout=60, follow_redirects=True)
            response.raise_for_status()
            text = response.content.decode(Xer.CODEC, errors="replace")
            xer = Xer(text)
            return file_path, xer, text
    
        with open(file_path, "rb") as f:
            raw_bytes = f.read()
        text = raw_bytes.decode(Xer.CODEC, errors="replace")
        xer = Xer(text)
        return file_path, xer, text
  • The _parse_raw_tables helper function that parses raw XER table data preserving column order for round-trip write support.
    def _parse_raw_tables(content: str) -> tuple[str, list[str], dict]:
        """
        Parse XER content, preserving column order per table for round-trip write support.
    
        Returns:
            header      - ERMHDR line
            table_order - ordered list of table names
            raw_tables  - {table_name: {"cols": [...], "rows": [dict]}}
        """
        sections = content.split("%T\t")
        header = sections.pop(0).strip()
        table_order: list[str] = []
        raw_tables: dict = {}
    
        for section in sections:
            lines = [ln for ln in section.splitlines() if ln.strip()]
            if not lines:
                continue
            name = lines[0].strip()
            if len(lines) < 2:
                table_order.append(name)
                raw_tables[name] = {"cols": [], "rows": []}
                continue
            cols = lines[1].strip().split("\t")[1:]  # skip %F prefix
            rows = []
            for line in lines[2:]:
                if line.startswith("%R"):
                    vals = line.strip().split("\t")[1:]
                    # Pad if fewer values than columns
                    while len(vals) < len(cols):
                        vals.append("")
                    rows.append(dict(zip(cols, vals)))
            table_order.append(name)
            raw_tables[name] = {"cols": cols, "rows": rows}
    
        return header, table_order, raw_tables
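To see the round-trip structure the parser produces, the same logic can be exercised on a tiny fragment. The function below is a minimal copy of the _parse_raw_tables helper above; the table and field names in the sample are illustrative, not a complete XER file.

```python
def parse_raw_tables(content: str):
    """Minimal copy of the _parse_raw_tables helper, for demonstration."""
    sections = content.split("%T\t")
    header = sections.pop(0).strip()
    table_order, raw_tables = [], {}
    for section in sections:
        lines = [ln for ln in section.splitlines() if ln.strip()]
        if not lines:
            continue
        name = lines[0].strip()
        if len(lines) < 2:
            table_order.append(name)
            raw_tables[name] = {"cols": [], "rows": []}
            continue
        cols = lines[1].strip().split("\t")[1:]   # skip the %F prefix
        rows = []
        for line in lines[2:]:
            if line.startswith("%R"):
                vals = line.strip().split("\t")[1:]
                while len(vals) < len(cols):      # pad short rows
                    vals.append("")
                rows.append(dict(zip(cols, vals)))
        table_order.append(name)
        raw_tables[name] = {"cols": cols, "rows": rows}
    return header, table_order, raw_tables

# Illustrative two-table fragment; field names are examples only.
sample = (
    "ERMHDR\t19.12\t2024-01-01\n"
    "%T\tPROJECT\n"
    "%F\tproj_id\tproj_short_name\n"
    "%R\t1\tDEMO\n"
    "%T\tTASK\n"
    "%F\ttask_id\ttask_code\ttask_name\n"
    "%R\t10\tA1000\tMobilize\n"
    "%R\t11\tA1010\n"
    "%E\n"
)

header, table_order, raw_tables = parse_raw_tables(sample)
```

The short second TASK row is padded with an empty task_name, which is what lets raw_tables round-trip back to tab-delimited lines without losing column alignment.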
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations (readOnlyHint, destructiveHint, idempotentHint) align with the description. The description adds context on multiple input sources and cache storage, which is helpful beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, no unnecessary words. Front-loaded with main action, followed by input options and caching behavior. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Output schema exists, so return values need not be described. The description covers all input methods and caching, making it complete for a load tool. No gaps identified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage. The description adds value by grouping parameters and explaining purpose (e.g., 'for direct uploads from Claude/ChatGPT' for file_content). It clarifies the relationship between cache_key and simultaneous schedules.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool loads a Primavera P6 XER file into an analysis cache, specifying three input methods. It distinguishes itself from sibling tools which handle other operations like listing, updating, or analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use the tool (to load an XER file) and mentions cache_key for simultaneous schedules. It does not explicitly contrast with alternatives, but context implies this is the initial loading step before analysis tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/paulieb89/pyp6xer-mcp'