

analysis-mcp

A FastMCP server for critical thinking and multi-perspective analysis of current affairs.

Uses the LLM-Orchestrator Pattern: Tools return structured prompts for the calling LLM to execute, ensuring provider-agnostic operation and full transparency.

🧠 How It Works

  1. You ask the LLM to analyze something

  2. LLM calls MCP tool to get a structured analysis plan

  3. Tool returns {trace_id, outline, next_prompt}

  4. LLM executes the next_prompt using its own model

  5. Result combines tool planning + LLM generation

This pattern means:

  • ✅ Works with any LLM provider (Claude, GPT, local models)

  • ✅ No API keys needed in the MCP server

  • ✅ Full traceability via trace_id

  • ✅ Composable with other MCP tools
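
To make the flow concrete, here is a minimal sketch of a tool written in this style. It assumes the fastmcp package; the function body, prompt text, and trace-id format are illustrative and mirror the deconstruct_claim example below, not the server's actual implementation.

import time
import uuid

from fastmcp import FastMCP

mcp = FastMCP("analysis-mcp")

@mcp.tool()
def deconstruct_claim(claim: str) -> dict:
    """Plan a claim deconstruction; the calling LLM executes the returned prompt."""
    trace_id = f"deconstruct-{int(time.time() * 1000)}-{uuid.uuid4().hex[:6]}"
    outline = {
        "claim": claim,
        "analysis_sections": ["assumptions", "evidence", "implications", "hidden_premises"],
    }
    next_prompt = (
        "TASK: Deconstruct the following claim...\n\n"
        f"CONTENT:\n{claim}\n\n"
        "INSTRUCTIONS:\n1. List all implicit and explicit ASSUMPTIONS..."
    )
    # The server never generates the analysis itself; it only returns the plan.
    return {"trace_id": trace_id, "outline": outline, "next_prompt": next_prompt}

if __name__ == "__main__":
    mcp.run()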

Features

Cognitive Tools:

  • deconstruct_claim - Plan a claim deconstruction (returns structured prompt)

  • compare_positions - Plan multi-perspective analysis (returns structured prompt)

  • apply_lens - Plan lens-based analysis through 9 analytical frameworks

  • get_trace - Retrieve previous analysis plans for iteration

9 Analytical Lenses: historical, economic, geopolitical, psychological, technological, sociocultural, philosophical, systems, media

Quick Start with Claude Desktop

  1. Install via uvx (recommended):

Edit your Claude Desktop config file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json

  • Windows: %APPDATA%\Claude\claude_desktop_config.json

Add this to the mcpServers section:

{ "mcpServers": { "analysis-mcp": { "command": "uvx", "args": [ "git+https://github.com/YOUR_USERNAME/analysis_mcp", "analysis-mcp" ] } } }
  2. Restart Claude Desktop

  3. Verify installation: Look for the 🔌 icon in Claude Desktop showing the analysis-mcp server is connected

Alternative: Local Development Installation

If you want to modify the code or run it locally:

# Clone the repo
git clone https://github.com/YOUR_USERNAME/analysis_mcp.git
cd analysis_mcp

# Create virtual environment
python -m venv .venv
source .venv/bin/activate   # On Windows: .venv\Scripts\activate

# Install in editable mode
pip install -e ".[dev]"

# Run tests
pytest -v

# Run server directly (for testing)
python -m analysis_mcp.server

For local development in Claude Desktop, update your config to point to the local path:

{ "mcpServers": { "analysis-mcp": { "command": "python", "args": [ "-m", "analysis_mcp.server" ], "cwd": "/absolute/path/to/analysis_mcp", "env": { "PYTHONPATH": "/absolute/path/to/analysis_mcp/src" } } } }

Usage Examples

Once connected to Claude Desktop, you can use these tools:

Example 1: Deconstruct a claim

Analyze this claim: "AI will replace all human jobs within 10 years"

The tool returns:

{ "trace_id": "deconstruct-1730830000000-abc123", "outline": { "claim": "AI will replace all human jobs within 10 years", "analysis_sections": ["assumptions", "evidence", "implications", "hidden_premises"] }, "next_prompt": "TASK: Deconstruct the following claim...\n\nCONTENT:\nAI will replace all human jobs within 10 years\n\nINSTRUCTIONS:\n1. List all implicit and explicit ASSUMPTIONS..." }

Claude then executes the next_prompt and provides the full analysis.

Example 2: Compare perspectives

Compare progressive vs conservative perspectives on universal basic income

Example 3: Apply analytical lens

Apply an economic lens to analyze: "Federal Reserve raises interest rates"

Example 4: Recall previous analysis

Retrieve trace deconstruct-1730830000000-abc123

🔄 LLM-Orchestrator Pattern Details

Each tool is a prompt compiler, not a generator:

User → LLM:      "Analyze X"
LLM → MCP Tool:  deconstruct_claim("X")
Tool → LLM:      {trace_id, outline, next_prompt}
LLM → LLM:       executes next_prompt
LLM → User:      final analysis

Why this pattern?

  • Server never calls external APIs (no cost, no keys)

  • Uses the calling LLM's provider & model

  • Deterministic planning + flexible generation

  • Easy to trace, log, and iterate

  • Composable with other tools

Available Lenses

  • historical - Compare to precedents and patterns

  • economic - Analyze resource flows and incentives

  • geopolitical - Examine power balances and strategy

  • psychological - Identify biases and manipulation

  • technological - Explore tech's role and impact

  • sociocultural - Analyze identity and narratives

  • philosophical - Apply ethical frameworks

  • systems - Map feedback loops and leverage points

  • media - Deconstruct framing and agenda-setting
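
Internally, lens-based planning can be as simple as a mapping from lens name to analytical focus that gets spliced into the next_prompt. The sketch below is illustrative only and paraphrases the list above; LENS_FOCUS and build_lens_prompt are hypothetical names, not the server's actual prompt templates.

# Illustrative only: one possible way apply_lens could parameterize its plan.
LENS_FOCUS = {
    "historical": "compare to precedents and patterns",
    "economic": "analyze resource flows and incentives",
    "geopolitical": "examine power balances and strategy",
    "psychological": "identify biases and manipulation",
    "technological": "explore technology's role and impact",
    "sociocultural": "analyze identity and narratives",
    "philosophical": "apply ethical frameworks",
    "systems": "map feedback loops and leverage points",
    "media": "deconstruct framing and agenda-setting",
}

def build_lens_prompt(lens: str, content: str) -> str:
    """Compose the prompt fragment a lens plan could hand back to the calling LLM."""
    focus = LENS_FOCUS[lens]
    return f"TASK: Apply the {lens} lens ({focus}) to the following content.\n\nCONTENT:\n{content}"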

Trace Storage

Analysis plans are logged to ~/.analysis_mcp/traces/ as JSON files. Each trace contains:

  • trace_id - Unique identifier

  • tool - Which tool was called

  • input - Original parameters

  • outline - Structured analysis plan

  • next_prompt - The prompt for LLM execution

  • timestamp - When it was created

Use get_trace(trace_id) to retrieve any previous analysis plan.
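
Put together, a stored trace file might look like this (illustrative only, combining the fields above with the deconstruct_claim example; the exact timestamp format and value are assumptions):

{
  "trace_id": "deconstruct-1730830000000-abc123",
  "tool": "deconstruct_claim",
  "input": {"claim": "AI will replace all human jobs within 10 years"},
  "outline": {
    "claim": "AI will replace all human jobs within 10 years",
    "analysis_sections": ["assumptions", "evidence", "implications", "hidden_premises"]
  },
  "next_prompt": "TASK: Deconstruct the following claim...",
  "timestamp": "2024-11-05T18:06:40Z"
}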

Troubleshooting

Server not connecting?

  • Verify uv is installed (it provides the uvx command): pip install uv

  • Check Claude Desktop logs (Help → View Logs)

  • Ensure your config JSON is valid

Tools not appearing?

  • Restart Claude Desktop after config changes

  • Check the 🔌 icon shows "analysis-mcp" as connected

Contributing

Pull requests welcome! Please run tests before submitting:

pytest -v
