
Example plans generated with PlanExe

What is PlanExe?

PlanExe is an open-source planning tool for AI agents. It turns a single plain-English goal statement into a 40-page strategic plan in roughly 15 minutes, using local or cloud models. It accelerates outline creation, but it is not a silver bullet for polished plans.

Typical output contains:

  • Executive summary

  • Gantt chart

  • Governance structure

  • Role descriptions

  • Stakeholder maps

  • Risk registers

  • SWOT analyses

PlanExe produces well-structured, domain-aware output: correct terminology, logical task sequencing, and coherent sections. For technical topics (engineering programs, regulated industries), it often gets the vocabulary and structure right. Think of it as a first-draft scaffold that gives you something concrete to critique and refine.

However, the output has consistent weaknesses that matter: budgets are assumed rather than derived, timeline estimates are not grounded in real resource constraints, risk mitigations tend toward generic advice, and legal/regulatory details are plausible-sounding but unverified. The output should be treated as a structured starting point, not a deliverable. How much work it saves depends heavily on the project. For brainstorming or a first outline, it can save hours. For a client-ready plan, expect significant rework on every number, timeline, and risk section.


Model Context Protocol (MCP)

PlanExe exposes an MCP server for AI agents at https://mcp.planexe.org/

You need an MCP-compatible client such as Claude, Cursor, Codex, LM Studio, Windsurf, OpenClaw, or Antigravity.

The tool workflow

  1. example_plans (optional, preview what PlanExe output looks like)

  2. example_prompts

  3. model_profiles (optional, helps choose model_profile)

  4. non-tool step: draft/approve prompt

  5. plan_create

  6. plan_status (poll every 5 minutes until done)

  7. plan_retry (optional, only if the plan failed)

  8. download the result via plan_file_info

Concurrency note: each plan_create call returns a new plan_id. The server does not cap per-client concurrency, so clients should track their own parallel plans.
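The workflow above can be sketched as a polling loop. This is an illustrative sketch only: `call_tool` stands in for whatever your MCP client library exposes, and the argument and result shapes (`prompt`, `plan_id`, `status`) are assumptions; only the tool names and the 5-minute poll interval come from the list above.

```python
import time

POLL_SECONDS = 300  # the workflow suggests polling plan_status every 5 minutes


def run_plan(call_tool, prompt, poll_seconds=POLL_SECONDS, max_polls=48):
    """Create a plan, poll until done, retry once on failure, return file info.

    `call_tool(name, args)` is a hypothetical stand-in for your MCP client's
    tool-invocation method; the payload keys below are illustrative guesses.
    """
    plan_id = call_tool("plan_create", {"prompt": prompt})["plan_id"]
    retried = False
    for _ in range(max_polls):
        status = call_tool("plan_status", {"plan_id": plan_id})["status"]
        if status == "done":
            # Step 8: fetch download details for the finished plan.
            return call_tool("plan_file_info", {"plan_id": plan_id})
        if status == "failed":
            if retried:
                raise RuntimeError(f"plan {plan_id} failed twice")
            call_tool("plan_retry", {"plan_id": plan_id})
            retried = True
        time.sleep(poll_seconds)
    raise TimeoutError(f"plan {plan_id} did not finish")
```

Because each plan_create returns a fresh plan_id, running several of these loops concurrently is how a client would track its own parallel plans.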

Option A: Remote MCP (fastest path)

Prerequisites

  • An API key from home.planexe.org (sent as the X-API-Key header)

Use this endpoint configuration directly in your MCP client:

{
  "mcpServers": {
    "planexe": {
      "url": "https://mcp.planexe.org/mcp",
      "headers": {
        "X-API-Key": "pex_your_api_key_here"
      }
    }
  }
}

Option B: Run MCP server locally with Docker

Prerequisites

  • Docker

  • OpenRouter account

  • Create a PlanExe .env file with OPENROUTER_API_KEY.

Start the full stack:

docker compose up --build

Make sure you can create plans in the web interface before proceeding to MCP.

Then connect your client to:

  • http://localhost:8001/mcp

With the local Docker defaults, auth is disabled in docker-compose.yml.
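By analogy with the Option A configuration, a local client entry might look like the following; since auth is disabled by default, no X-API-Key header is needed:

```json
{
  "mcpServers": {
    "planexe": {
      "url": "http://localhost:8001/mcp"
    }
  }
}
```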



Running the Pipeline via CLI (no frontend)

If you have a local Python environment set up and want to invoke the pipeline directly — without the Flask UI — use the planexe CLI script at the repo root.

# Create a plan from a text prompt
./planexe create_plan \
    --plan-text "Small coffee shop in Copenhagen, Denmark" \
    --output-dir ./planexe-outputs/1984-12-31/MyCoffeeShop_v1

# Or: read the plan prompt from a file
./planexe create_plan \
    --plan-file my_plan.txt \
    --output-dir ./planexe-outputs/1984-12-31/MyCoffeeShop_v1

The script creates the run directory if it does not exist and writes:

  • start_time.json — {"server_iso_utc": "<current UTC ISO timestamp>"}

  • plan.txt — the plain-text plan prompt
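A minimal sketch of that setup step, assuming the file names and the server_iso_utc field described above (the function name and path handling are illustrative, not the actual implementation):

```python
import json
from datetime import datetime, timezone
from pathlib import Path


def write_run_files(output_dir, plan_text):
    """Create the run directory and the two files the CLI writes:
    start_time.json with a UTC ISO timestamp, and plan.txt with the prompt."""
    run_dir = Path(output_dir)
    run_dir.mkdir(parents=True, exist_ok=True)  # created if it does not exist
    start = {"server_iso_utc": datetime.now(timezone.utc).isoformat()}
    (run_dir / "start_time.json").write_text(json.dumps(start))
    (run_dir / "plan.txt").write_text(plan_text)
```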


For AI Agents

PlanExe is designed as infrastructure for AI agents. If you are an AI agent reading this:

  • Discovery: Read public/llms.txt for service metadata, MCP tools, and endpoints.

  • MCP endpoint: https://mcp.planexe.org/mcp — connect with an API key from home.planexe.org.

  • Workflow: Call example_prompts to learn the expected prompt format, draft a detailed prompt (~300-800 words of flowing prose), then call plan_create.

  • Agent guide: See docs/mcp/autonomous_agent_guide.md for the complete autonomous workflow.

  • Key outputs in zip: wbs_level1.json (work packages), wbs_level2.json (tasks), pre_project_assessment.json (feasibility).
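Once a plan zip is downloaded, an agent might pull out the key artifacts named above. The file names come from the list; everything else (zip layout, helper name) is an assumption for illustration:

```python
import json
import zipfile

# Key output files per the list above.
KEY_FILES = ["wbs_level1.json", "wbs_level2.json", "pre_project_assessment.json"]


def extract_key_outputs(zip_path):
    """Return {filename: parsed JSON} for the key artifacts found in the zip.

    Members are matched by basename, since the layout inside the archive
    is not specified here.
    """
    results = {}
    with zipfile.ZipFile(zip_path) as zf:
        by_basename = {name.rsplit("/", 1)[-1]: name for name in zf.namelist()}
        for key in KEY_FILES:
            if key in by_basename:
                results[key] = json.loads(zf.read(by_basename[key]))
    return results
```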


Prerequisite: Docker with Docker Compose installed; you only need basic Docker knowledge. No local Python setup is required because everything runs in containers.

Follow these steps

  1. Clone the repo and enter it:

git clone https://github.com/PlanExeOrg/PlanExe.git
cd PlanExe
  2. Provide an LLM provider. Copy .env.docker-example to .env and fill in OPENROUTER_API_KEY with your key from OpenRouter. The containers mount .env and llm_config/; pick a model profile there. For host-side Ollama, use the docker-ollama-llama3.1 entry and ensure Ollama is listening on http://host.docker.internal:11434.

  3. Start the stack (first run builds the images):

docker compose up worker_plan frontend_multi_user

The worker listens on http://localhost:8000 and the UI comes up on http://localhost:5001 after the Postgres and worker healthchecks pass.

  4. Open http://localhost:5001 in your browser, create an account (or log in with the admin credentials from .env), enter your idea, and watch progress with:

docker compose logs -f worker_plan

Outputs are written to run/ on the host (mounted into both containers).

  5. Stop with Ctrl+C (or docker compose down). Rebuild after code/dependency changes:

docker compose build --no-cache worker_plan frontend_multi_user

For compose tips, alternate ports, or troubleshooting, see docs/docker.md or docker-compose.md.

Configuration

Config A: Run a model in the cloud using a paid provider. Follow the instructions for OpenRouter.

Config B: Run models locally on a high-end computer. Follow the instructions for either Ollama or LM Studio. When using host-side tools with Docker, point the model URL at the host (for example http://host.docker.internal:11434 for Ollama).

Recommendation: Config A; it is the most straightforward path to getting PlanExe working reliably.


For help or feedback, join the PlanExe Discord.
