# Lumar MCP Server

by andyc00ps

An MCP (Model Context Protocol) server that wraps the Lumar GraphQL API, built with FastMCP.

This lets any MCP-compatible AI assistant (Claude, etc.) query your Lumar SEO crawl data conversationally — list projects, inspect crawls, pull URL-level metrics, generate reports, and download raw data.

## Tools

| Tool | Description |
| --- | --- |
| `list_accounts` | List all accessible Lumar accounts |
| `list_projects` | List projects within an account |
| `get_project` | Get project details and recent crawls |
| `list_crawls` | List crawls for a project |
| `get_crawl` | Get details for a specific crawl |
| `get_report_stats` | Get report statistics and the first 100 URLs |
| `get_url_data` | Get URL-level SEO metrics with filtering and pagination |
| `create_report_download` | Trigger generation of a downloadable report file |
| `get_report_download` | Check download status and retrieve the download URL |
| `get_raw_data_exports` | Get Parquet download URLs for full crawl data |
| `run_query` | Execute an arbitrary GraphQL query |
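Under the hood, `run_query` sends a standard GraphQL-over-HTTP POST body. A minimal sketch of that request shape (the query's field names below are illustrative, not taken from the Lumar schema):

```python
# Build the JSON body a GraphQL POST carries; run_query's exact internals
# may differ, and the query fields below are hypothetical.
import json

query = """
query Me {
  me { id }
}
"""

payload = json.dumps({"query": query, "variables": {}})
print(payload)
```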

## Setup

### 1. Get Lumar API credentials

In your Lumar account, create a User Key to obtain a User Key ID and Secret. See the Lumar API docs for details.

### 2. Install

```shell
# With uv (recommended)
uv pip install -e .

# Or with pip
pip install -e .
```

### 3. Configure credentials

Set the environment variables:

```shell
export LUMAR_USER_KEY_ID="your-key-id"
export LUMAR_SECRET="your-secret"
```
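For reference, this is roughly how a server can validate those variables at startup and fail fast with a clear error. `load_credentials` is a hypothetical helper, not the project's actual API:

```python
# Sketch only: read and validate the two required environment variables.
import os

def load_credentials() -> tuple[str, str]:
    key_id = os.environ.get("LUMAR_USER_KEY_ID")
    secret = os.environ.get("LUMAR_SECRET")
    if not key_id or not secret:
        raise RuntimeError(
            "Set LUMAR_USER_KEY_ID and LUMAR_SECRET before starting the server"
        )
    return key_id, secret
```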

### 4. Run

```shell
# As a module
python -m lumar_mcp.server

# Or via the entry point
lumar-mcp
```

## Claude Desktop configuration

Add this to your Claude Desktop claude_desktop_config.json:

```json
{
  "mcpServers": {
    "lumar": {
      "command": "python",
      "args": ["-m", "lumar_mcp.server"],
      "env": {
        "LUMAR_USER_KEY_ID": "your-key-id",
        "LUMAR_SECRET": "your-secret"
      }
    }
  }
}
```

Or, if using uv:

```json
{
  "mcpServers": {
    "lumar": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/lumar-mcp-server", "python", "-m", "lumar_mcp.server"],
      "env": {
        "LUMAR_USER_KEY_ID": "your-key-id",
        "LUMAR_SECRET": "your-secret"
      }
    }
  }
}
```

## Example usage

Once connected, you can ask Claude things like:

- "List my Lumar accounts and projects"
- "Show me the last 5 crawls for project X"
- "Get the broken pages report for crawl Y"
- "Find all URLs with HTTP 500 errors in the latest crawl"
- "Download the raw crawl data as Parquet"
- "Generate a report download for indexable pages"

## Common report template codes

These are some frequently used Lumar report template codes you can pass to `get_report_stats` and `get_url_data`:

- `indexable_pages`: Pages that can be indexed
- `non_indexable_pages`: Pages blocked from indexing
- `broken_pages`: Pages returning 4xx/5xx errors
- `https_pages`: HTTPS page analysis
- `orphaned_google_search_console_pages`: Pages in Google Search Console but missing from the crawl
- `duplicate_pages`: Pages with duplicate content
- `thin_pages`: Pages with low word count
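As an illustration, a request for 5xx URLs from the `broken_pages` report might combine a template code with a filter and pagination. The parameter and filter names below are assumptions about the tool's signature, not confirmed against the real schema:

```python
# Hypothetical arguments for get_url_data; the real names come from the
# Lumar GraphQL schema and may differ.
args = {
    "report_template_code": "broken_pages",
    "filter": {"httpStatusCode": {"ge": 500}},  # keep only 5xx responses
    "first": 100,   # page size
    "after": None,  # pagination cursor returned by a previous call
}
print(args["report_template_code"])
```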

## Project structure

```
lumar-mcp-server/
├── pyproject.toml          # Package config and dependencies
├── README.md
└── lumar_mcp/
    ├── __init__.py
    ├── client.py           # Lumar GraphQL client with auth
    ├── queries.py          # Pre-built GraphQL queries
    └── server.py           # FastMCP server with all tools
```

## Extending

To add a new tool, edit server.py and add a function decorated with `@mcp.tool`, then add the corresponding GraphQL query to queries.py. For anything not covered by the built-in tools, the `run_query` tool can execute arbitrary GraphQL directly.
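A minimal sketch of that pattern, with stand-ins for the pieces that live in client.py and queries.py. The helper names and GraphQL fields are assumptions, and the `@mcp.tool` line is shown commented out so the sketch runs without FastMCP installed:

```python
import asyncio

# Stand-in for a query that would live in queries.py (fields hypothetical).
LIST_TAGS_QUERY = """
query ListTags($projectId: ObjectID!) {
  getProject(id: $projectId) { id }
}
"""

class StubClient:
    """Stand-in for the GraphQL client in client.py."""

    async def execute(self, query: str, variables: dict) -> dict:
        # A real client would POST query + variables to the Lumar endpoint.
        return {"data": {"variables": variables}}

client = StubClient()

# In server.py the function would carry the decorator:
# @mcp.tool
async def list_tags(project_id: str) -> dict:
    """Example new tool: list tags for a project (hypothetical)."""
    return await client.execute(LIST_TAGS_QUERY, {"projectId": project_id})

result = asyncio.run(list_tags("proj-123"))
print(result)
```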
