
of-mcp — Open Finance Brasil docs as an MCP server

A local MCP server that turns the public Confluence space at openfinancebrasil.atlassian.net/wiki/spaces/OF into four tools any MCP-compatible client (Claude Desktop, Claude Code, Cursor, etc.) can call:

| Tool | What it returns |
| --- | --- |
| `search_docs` | Top BM25 chunks (compact snippets + page id + URL) |
| `get_page` | Full markdown of one page; optional `section=` slice |
| `list_sections` | Page tree (root pages, or children of a page id) |
| `answer_question` | Verbose RAG context (full chunks + citations) for the host LLM to answer |

Why this design (token economy)

  • No embeddings. SQLite FTS5 (BM25) is local, instantaneous, and costs zero tokens. For keyword-heavy technical docs this is usually as good as semantic search. Optional [semantic] extra is left as a hook if you need re-ranking.

  • Heading-based chunks (~300 tokens). search_docs returns 6 small snippets by default — typically a few hundred tokens total — instead of whole pages.

  • No internal LLM call. answer_question returns the context; the calling assistant does the synthesis in its own context window. This avoids paying for generation twice.
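The zero-token retrieval above can be sketched with nothing but the standard library. Table and column names here (`chunks`, `title`, `body`) are illustrative, not of-mcp's real schema:

```python
import sqlite3

# In-memory demo of SQLite FTS5 keyword search ranked by BM25.
# Table/column names are hypothetical, not of-mcp's actual schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE chunks USING fts5(title, body)")
conn.executemany(
    "INSERT INTO chunks VALUES (?, ?)",
    [
        ("Erros", "Iniciação de pagamento pode retornar erro 422."),
        ("Consentimento", "Fluxo de consentimento recorrente."),
    ],
)

# bm25() yields a rank (lower = better); no embeddings, no API tokens spent.
rows = conn.execute(
    "SELECT title, snippet(chunks, 1, '[', ']', '…', 8) "
    "FROM chunks WHERE chunks MATCH ? ORDER BY bm25(chunks) LIMIT 6",
    ("pagamento 422",),
).fetchall()
print(rows[0][0])  # → Erros
```

For keyword-heavy payloads like API error codes and field names, this kind of exact-term match is often all you need.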

Setup

git clone <this-folder> of-mcp
cd of-mcp
python -m venv .venv
source .venv/bin/activate          # Windows: .venv\Scripts\activate
pip install -e .
cp .env.example .env                # optional — defaults are fine

No Atlassian account or token needed. The OF Confluence space is public, and the crawler hits the REST API anonymously. The ATLASSIAN_EMAIL / ATLASSIAN_API_TOKEN variables in .env.example exist only as a fallback in case Atlassian ever restricts the space — leave them empty.
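An anonymous page listing against the public space might look like the sketch below; the endpoint follows the Confluence Cloud v1 REST API, but the exact request shape crawler.py uses is an assumption:

```python
import json
import urllib.request

# Anonymous listing of pages in the public OF space via the Confluence
# Cloud v1 REST API. No token needed: the space is public. The exact
# parameters of-mcp's crawler sends are an assumption.
BASE = "https://openfinancebrasil.atlassian.net/wiki/rest/api/content"

def content_url(space_key: str, start: int = 0, limit: int = 25) -> str:
    """Build a paginated content-listing URL (expand=version enables incremental checks)."""
    return f"{BASE}?spaceKey={space_key}&start={start}&limit={limit}&expand=version"

def list_pages(space_key: str = "OF") -> list[tuple[str, int, str]]:
    """Fetch one page of (id, version, title) tuples anonymously."""
    with urllib.request.urlopen(content_url(space_key), timeout=30) as resp:
        data = json.load(resp)
    return [(p["id"], p["version"]["number"], p["title"]) for p in data["results"]]
```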

Build the index

of-mcp-crawl              # full crawl, incremental on subsequent runs
of-mcp-crawl --force      # re-index every page
of-mcp-reindex            # re-chunk from cached pages, no network

The index lives in ./data/of.db (SQLite + FTS5).
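A plausible shape for that index is a plain chunk table mirrored into an external-content FTS5 table kept in sync by triggers (which is what "SQLite schema + FTS5 triggers" suggests); the names below are hypothetical, not of.db's real schema:

```python
import sqlite3

# Hypothetical sketch of the index layout: a content table of chunks plus
# an external-content FTS5 mirror maintained by triggers.
SCHEMA = """
CREATE TABLE IF NOT EXISTS chunks(
    id INTEGER PRIMARY KEY,
    page_id TEXT, heading TEXT, body TEXT
);
CREATE VIRTUAL TABLE IF NOT EXISTS chunks_fts
    USING fts5(heading, body, content='chunks', content_rowid='id');
CREATE TRIGGER IF NOT EXISTS chunks_ai AFTER INSERT ON chunks BEGIN
    INSERT INTO chunks_fts(rowid, heading, body)
    VALUES (new.id, new.heading, new.body);
END;
CREATE TRIGGER IF NOT EXISTS chunks_ad AFTER DELETE ON chunks BEGIN
    INSERT INTO chunks_fts(chunks_fts, rowid, heading, body)
    VALUES ('delete', old.id, old.heading, old.body);
END;
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
conn.execute("INSERT INTO chunks(page_id, heading, body) VALUES (?, ?, ?)",
             ("123", "Erros", "erro 422 na iniciação de pagamento"))
hit = conn.execute(
    "SELECT heading FROM chunks_fts WHERE chunks_fts MATCH '422'"
).fetchone()
print(hit)  # → ('Erros',)
```

The external-content pattern stores each chunk's text once; the FTS table only holds the inverted index.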

Wire it up to Claude Desktop / Claude Code

Add this to your MCP client config (e.g. ~/.config/claude/claude_desktop_config.json):

{
  "mcpServers": {
    "of-mcp": {
      "command": "/absolute/path/to/of-mcp/.venv/bin/of-mcp",
      "env": {
        "OF_MCP_DB_PATH": "/absolute/path/to/of-mcp/data/of.db"
      }
    }
  }
}

For Claude Code:

claude mcp add of-mcp /absolute/path/to/of-mcp/.venv/bin/of-mcp

Restart the client. You should see the four tools available.

Usage examples

In your MCP-enabled chat:

use search_docs with query "iniciação de pagamento erros 422"

open the page returned and read the "Erros" section using get_page

use answer_question for "como funciona consentimento recorrente?"

Project layout

of-mcp/
├── pyproject.toml
├── .env.example
├── README.md
├── data/                    # SQLite index lives here (gitignored)
└── src/of_mcp/
    ├── server.py            # FastMCP server + tool definitions
    ├── crawler.py           # Confluence REST client + crawl orchestrator
    ├── chunker.py           # HTML → Markdown → heading chunks
    ├── search.py            # BM25 wrapper + output formatters
    ├── db.py                # SQLite schema + FTS5 triggers
    ├── config.py            # env loader
    └── scripts/
        ├── crawl.py         # `of-mcp-crawl` entrypoint
        └── reindex.py       # `of-mcp-reindex` entrypoint
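The heading-based chunking done by chunker.py can be sketched roughly as below; the real splitting rules and the ~300-token budget are assumptions (a word budget stands in for a tokenizer):

```python
def chunk_by_headings(markdown: str, max_words: int = 220) -> list[dict]:
    """Split markdown at headings; split oversized sections into word windows.

    Rough sketch only: of-mcp's actual chunker may differ.
    """
    chunks, heading, buf = [], "", []

    def flush():
        words = " ".join(buf).split()
        # Emit the buffered section in max_words-sized windows so no chunk is huge.
        for i in range(0, len(words), max_words):
            part = " ".join(words[i : i + max_words])
            if part:
                chunks.append({"heading": heading, "text": part})
        buf.clear()

    for line in markdown.splitlines():
        if line.startswith("#"):
            flush()
            heading = line.lstrip("#").strip()
        else:
            buf.append(line)
    flush()
    return chunks
```

Keeping each chunk under a heading means every search snippet arrives with its own local context, which is what makes small snippets usable.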

Re-indexing on a schedule

The crawler is incremental (skips pages whose Confluence version is unchanged), so a daily cron is cheap:

30 4 * * *  cd /path/to/of-mcp && .venv/bin/of-mcp-crawl >> crawl.log 2>&1
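The incremental skip amounts to comparing each page's remote Confluence version number against the cached row; the schema and helper below are hypothetical:

```python
import sqlite3

# Hypothetical sketch of the crawler's skip logic: re-index a page only
# when its Confluence version number is newer than the cached one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages(id TEXT PRIMARY KEY, version INTEGER)")

def needs_reindex(page_id: str, remote_version: int, force: bool = False) -> bool:
    if force:  # corresponds to `of-mcp-crawl --force`
        return True
    row = conn.execute("SELECT version FROM pages WHERE id = ?", (page_id,)).fetchone()
    return row is None or remote_version > row[0]

conn.execute("INSERT INTO pages VALUES ('123', 7)")
print(needs_reindex("123", 7))   # → False (unchanged, skipped)
print(needs_reindex("123", 8))   # → True  (new version)
print(needs_reindex("999", 1))   # → True  (never seen)
```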

License

MIT — do whatever you want with the code, but the OF documentation itself remains the property of the Open Finance Brasil governance structure.
