Context MCP Server

A CloudFlare Workers-based Model Context Protocol (MCP) server that provides semantic memory and journal capabilities with zero-setup user experience.

Features

  • Zero-Setup Experience: Users get unique URLs with no local installation required
  • Semantic Search: BGE-Base-EN-v1.5 embeddings with vector similarity search
  • User Isolation: Complete data privacy with user-specific access control
  • Real-Time Communication: Server-Sent Events (SSE) for live MCP protocol communication
  • Scalable Architecture: Built on CloudFlare's serverless infrastructure

Core Tools

  • addMemory: Store memories with semantic search capabilities
  • searchMemory: Find relevant memories using semantic similarity
  • addJournal: Create journal entries with optional titles and tags
  • searchJournals: Search journal entries semantically
  • getRecentActivity: Get recent memories and journal entries

Architecture

  • CloudFlare Workers: Serverless compute for the MCP server
  • D1 Database: SQLite-based storage for structured data
  • Vectorize: Vector database for semantic search
  • CloudFlare AI: BGE-Base-EN-v1.5 embeddings generation
  • KV Store: Session management and caching

Quick Start

Prerequisites

  • Node.js 18+ installed
  • CloudFlare account with Workers, D1, and Vectorize access
  • Wrangler CLI installed and authenticated
npm install -g wrangler
wrangler login

Setup

  1. Clone and Install
git clone <repository-url>
cd context-mcp
npm install
  2. Database Setup
npm run setup

This script will:

  • Create D1 database and update wrangler.toml
  • Set up database schema with proper indexes
  • Create Vectorize index for embeddings
  • Configure KV namespace for sessions
  3. Deploy
npm run deploy
  4. Test the Deployment
# Health check
curl https://your-worker.workers.dev/health

# Generate a user ID
curl https://your-worker.workers.dev/generate-user

Optional: Seed Test Data

npm run seed [USER_ID]

Usage

For MCP Clients

Connect to your deployed worker using the SSE endpoint:

https://your-worker.workers.dev/{USER_ID}/sse
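
For clients that speak MCP over SSE directly, the stream can be opened with a standard EventSource. The sketch below is illustrative only; the user ID value and message handling are assumptions, and the exact event format depends on the MCP SSE transport:

// Minimal sketch: open the per-user SSE stream with a browser EventSource.
// The USER_ID value and message handling here are illustrative assumptions.
const userId = "00000000-0000-0000-0000-000000000000"; // replace with your generated USER_ID
const source = new EventSource(`https://your-worker.workers.dev/${userId}/sse`);

source.onopen = () => console.log("SSE connection established");
source.onmessage = (event: MessageEvent) => {
  // Each event is expected to carry a JSON-RPC message from the server.
  console.log("server message:", JSON.parse(event.data));
};
source.onerror = (err) => console.error("SSE error", err);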

Example with Claude Desktop

Add to your claude_desktop_config.json:

{ "mcpServers": { "context": { "command": "npx", "args": ["@modelcontextprotocol/server-sse", "https://your-worker.workers.dev/{USER_ID}/sse"] } } }

Direct HTTP API

You can also use HTTP POST requests to the MCP endpoint:

curl -X POST https://your-worker.workers.dev/{USER_ID} \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "addMemory",
      "arguments": {
        "content": "Learning about MCP protocol implementation",
        "tags": ["learning", "mcp"]
      }
    }
  }'
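
The same tools/call request can be sent from TypeScript with fetch. This is a direct translation of the curl command above; the response is logged as-is because its schema is not documented here:

// Equivalent of the curl example above, using fetch.
const userId = "00000000-0000-0000-0000-000000000000"; // your generated USER_ID
const response = await fetch(`https://your-worker.workers.dev/${userId}`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "tools/call",
    params: {
      name: "addMemory",
      arguments: {
        content: "Learning about MCP protocol implementation",
        tags: ["learning", "mcp"],
      },
    },
  }),
});
console.log(await response.json());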

Tool Reference

addMemory

Store a new memory with semantic search capabilities.

{ "name": "addMemory", "arguments": { "content": "The memory content to store", "tags": ["optional", "tags"] } }

searchMemory

Search memories using semantic similarity.

{ "name": "searchMemory", "arguments": { "query": "Search query text", "limit": 5, "tags": ["optional", "filter"] } }

addJournal

Create a new journal entry.

{ "name": "addJournal", "arguments": { "title": "Optional title", "content": "Journal entry content", "tags": ["optional", "tags"] } }

searchJournals

Search journal entries semantically.

{ "name": "searchJournals", "arguments": { "query": "Search query text", "limit": 5, "tags": ["optional", "filter"] } }

getRecentActivity

Get recent memories and journal entries.

{ "name": "getRecentActivity", "arguments": { "days": 7, "limit": 10 } }

Development

Local Development

npm run dev

This starts a local development server with hot reloading.

Database Operations

# Execute SQL file
npm run db:execute -- --file=schema.sql

# Run SQL command
npm run db:query -- "SELECT COUNT(*) FROM memories;"

# View logs
npm run logs

Type Checking

npm run build

Project Structure

context-mcp/
├── src/
│   ├── worker.ts            # Main CloudFlare Worker
│   ├── mcp-handler.ts       # MCP protocol implementation
│   ├── sse-handler.ts       # Server-Sent Events handler
│   └── types.ts             # TypeScript type definitions
├── scripts/
│   ├── setup-database.js    # Database setup automation
│   └── seed-data.js         # Test data seeding
├── schema.sql               # Database schema
├── wrangler.toml            # CloudFlare configuration
└── package.json             # Dependencies and scripts

Configuration

Environment Variables

Set in wrangler.toml under [vars]:

[vars]
NODE_ENV = "production"
# Add custom variables here

Bindings

The worker uses these CloudFlare bindings (a typed Env sketch follows the list):

  • DB: D1 Database for structured data
  • VECTORIZE: Vector search index
  • AI: BGE embeddings generation
  • SESSIONS: KV namespace for sessions
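
Inside the worker these bindings typically surface as an Env interface. The sketch below uses the binding names listed above with types from @cloudflare/workers-types; it is an assumption, not a copy of src/types.ts:

// Sketch of the binding surface, assuming @cloudflare/workers-types is installed.
// The actual interface in src/types.ts may differ.
import type { Ai, D1Database, KVNamespace, VectorizeIndex } from "@cloudflare/workers-types";

interface Env {
  DB: D1Database;            // D1 database for memories and journal entries
  VECTORIZE: VectorizeIndex; // vector index used for semantic search
  AI: Ai;                    // Workers AI binding for BGE embeddings
  SESSIONS: KVNamespace;     // KV namespace for session state
  NODE_ENV?: string;         // from [vars] in wrangler.toml
}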

Security

  • User Isolation: All data is scoped to user IDs
  • UUID Validation: User IDs are validated against the expected UUID format (see the sketch after this list)
  • CORS Headers: Configured for cross-origin requests
  • Error Handling: No sensitive data exposed in errors
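
The user ID check referenced above might look roughly like this; the regex and function name are illustrative, not taken from the source:

// Illustrative UUID format check; not the repository's actual validator.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function isValidUserId(userId: string): boolean {
  return UUID_RE.test(userId);
}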

Performance

  • Vector Search: Sub-100ms semantic similarity queries
  • Database Queries: Optimized with proper indexing
  • Connection Management: Automatic cleanup of stale SSE connections
  • Heartbeat: 30-second intervals to maintain connections (see the sketch after this list)
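
The heartbeat noted above could be implemented along these lines in a Workers SSE handler. This is a simplified sketch under assumed names (createSSEStream, HEARTBEAT_MS), not the project's sse-handler.ts:

// Simplified sketch of a 30-second SSE heartbeat in a CloudFlare Worker.
const HEARTBEAT_MS = 30_000;

function createSSEStream(): Response {
  const { readable, writable } = new TransformStream();
  const writer = writable.getWriter();
  const encoder = new TextEncoder();

  // Periodically write an SSE comment line so proxies keep the connection open.
  const heartbeat = setInterval(() => {
    writer.write(encoder.encode(`: heartbeat ${Date.now()}\n\n`)).catch(() => {
      clearInterval(heartbeat); // the client went away; stop writing
    });
  }, HEARTBEAT_MS);

  return new Response(readable, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  });
}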

Monitoring

Health Check

curl https://your-worker.workers.dev/health

Connection Status

The SSE handler provides connection monitoring capabilities for debugging.

Logs

npm run logs

View real-time CloudFlare Worker logs.

Troubleshooting

Common Issues

  1. Database not found: Run npm run setup to create the database
  2. Embedding errors: Ensure the CloudFlare AI binding is configured
  3. SSE connection issues: Check the browser console for connection errors
  4. Vector search returns no results: Verify that data was added with embeddings

Debug Steps

  1. Check health endpoint: https://your-worker.workers.dev/health
  2. Verify user ID format (must be valid UUID)
  3. Check CloudFlare dashboard for binding configuration
  4. Review worker logs: npm run logs

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make changes and test thoroughly
  4. Submit a pull request

License

MIT License - see LICENSE file for details.

Roadmap

  • Enhanced metadata filtering for vector search
  • File attachment support for journal entries
  • Export/import functionality
  • Advanced analytics and insights
  • Multi-language embedding support
  • Real-time collaboration features

Built with ❤️ using CloudFlare Workers and the Model Context Protocol.
