mcp-ragchat

Tell Claude Code "add AI chat to mysite.com" and it will crawl your content, build a local vector store, spin up a chat server, and hand you an embed snippet. No cloud infra. No database. Just one API key.

Demo

You:    "Set up AI chat for mybakery.com using this FAQ and menu content"

Claude: I'll set up RAG-powered chat for mybakery.com.

        [1/4] ragchat_setup
              Splitting content into sections...
              Generating embeddings for 12 documents...
              Domain "mybakery.com" configured with 12 documents.

        [2/4] ragchat_test
              Query: "What are your hours?"
              Reply: "We're open Monday-Saturday 7am-6pm, and Sunday 8am-2pm."
              RAG Sources: mybakery.com-3 (0.91), mybakery.com-7 (0.84)
              Latency: 340ms

        [3/4] ragchat_serve
              Chat server running at http://localhost:3456
              POST /chat ready.

        [4/4] ragchat_widget
              Generated embed code. Paste this into your HTML:

              <script src="http://localhost:3456/widget.js"></script>

You:    Done. Live chat on my site in under 60 seconds.

Quick Start

1. Clone and build

git clone https://github.com/gogabrielordonez/mcp-ragchat
cd mcp-ragchat
npm install && npm run build

2. Configure Claude Code (~/.claude/mcp.json)

{
  "mcpServers": {
    "ragchat": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-ragchat/dist/mcp-server.js"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}

3. Use it

Open Claude Code and say:

"Add AI chat to mysite.com. Here's the content: [paste your markdown]"

Claude handles the rest.
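A minimal example of the markdown you might paste — `ragchat_setup` splits on `##` headings, so each section below would become one searchable document (the bakery content here is purely illustrative):

```markdown
## Hours
We're open Monday-Saturday 7am-6pm, and Sunday 8am-2pm.

## Location
123 Main Street, next to the post office. Street parking out front.

## Ordering
Call ahead for custom cakes; we need 48 hours notice.
```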

Tools

Tool             What it does
ragchat_setup    Seed a knowledge base from markdown content. Each ## section becomes a searchable document with vector embeddings.
ragchat_test     Send a test message to verify RAG retrieval and LLM response quality.
ragchat_serve    Start a local HTTP chat server with CORS and input sanitization.
ragchat_widget   Generate a self-contained <script> tag -- a floating chat bubble, no dependencies.
ragchat_status   List all configured domains with document counts and config details.
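The demo shows a POST /chat endpoint but not its request schema. As an assumption (the field names `domain` and `message` are guesses, not the documented API), a request body might look like:

```json
{
  "domain": "mybakery.com",
  "message": "What are your hours?"
}
```

The demo transcript suggests the reply carries the answer text along with the RAG source ids and their similarity scores.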

How It Works

                        +------------------+
                        |  Your Markdown   |
                        +--------+---------+
                                 |
                          ragchat_setup
                                 |
                    +------------v-------------+
                    |   Local Vector Store      |
                    |   ~/.mcp-ragchat/domains/ |
                    |     vectors.json          |
                    |     config.json           |
                    +------------+-------------+
                                 |
          User Question          |
               |                 |
        +------v------+  +------v------+
        |  Embedding  |  |  Cosine     |
        |  Provider   +->+  Similarity |
        +-------------+  +------+------+
                                |
                         Top 3 chunks
                                |
                    +----------v-----------+
                    |  System Prompt       |
                    |  + RAG Context       |
                    |  + User Message      |
                    +----------+-----------+
                               |
                    +----------v-----------+
                    |     LLM Provider     |
                    +----------+-----------+
                               |
                            Reply

Everything runs locally. No cloud infrastructure. Bring your own API key.
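The retrieval step in the diagram is plain cosine similarity over the stored embedding vectors. A minimal sketch of that ranking step (function names here are illustrative, not the project's actual API):

```typescript
// Cosine similarity between two embedding vectors:
// dot(a, b) / (|a| * |b|), ranging from -1 to 1.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Score every stored document against the query embedding,
// then keep the k best matches (the diagram uses k = 3).
function topChunks(
  query: number[],
  docs: { id: string; vector: number[] }[],
  k = 3
): { id: string; score: number }[] {
  return docs
    .map((d) => ({ id: d.id, score: cosineSimilarity(query, d.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

With a small store this is fast enough that no external vector database is needed, which is why plain JSON files suffice.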

Supported Providers

LLM (chat completions)

Provider        Env Var             Default Model
OpenAI          OPENAI_API_KEY      gpt-4o-mini
Anthropic       ANTHROPIC_API_KEY   claude-sonnet-4-5-20250929
Google Gemini   GEMINI_API_KEY      gemini-2.0-flash

Embeddings

Provider        Env Var             Default Model
OpenAI          OPENAI_API_KEY      text-embedding-3-small
Google Gemini   GEMINI_API_KEY      text-embedding-004
AWS Bedrock     AWS_REGION + IAM    amazon.titan-embed-text-v2:0

Override defaults with LLM_MODEL and EMBEDDING_MODEL environment variables.
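Since the server reads its configuration from environment variables, overrides go in the same `env` block as the API keys. For example, to chat via Anthropic while keeping OpenAI embeddings, the `env` object in the mcp.json shown earlier might become (key values are placeholders):

```json
{
  "ANTHROPIC_API_KEY": "sk-ant-...",
  "OPENAI_API_KEY": "sk-...",
  "LLM_MODEL": "claude-sonnet-4-5-20250929",
  "EMBEDDING_MODEL": "text-embedding-3-small"
}
```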

Architecture

~/.mcp-ragchat/domains/
  mysite.com/
    config.json     -- system prompt, settings
    vectors.json    -- documents + embedding vectors

• Vector store -- Local JSON files with cosine similarity search. Zero external dependencies.

• Chat server -- Node.js HTTP server with CORS and input sanitization.

• Widget -- Self-contained <script> tag. No frameworks, no build step.
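The on-disk format isn't documented beyond the file names above. Purely as an assumption about how documents and their embeddings might be stored together, vectors.json could look something like:

```json
{
  "documents": [
    {
      "id": "mysite.com-1",
      "text": "## Hours\nWe're open Monday-Saturday 7am-6pm.",
      "embedding": [0.012, -0.034, 0.051]
    }
  ]
}
```

Real embedding vectors would of course have hundreds or thousands of dimensions, not three.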

Contributing

Issues and pull requests are welcome.


Enterprise

Need multi-tenancy, security guardrails, audit trails, and managed infrastructure? Check out Supersonic -- the enterprise AI platform built on the same RAG pipeline.


MIT License -- Gabriel Ordonez
