Bi-Temporal Knowledge Graph MCP Server


A production-ready MCP (Model Context Protocol) server that gives your AI agents persistent memory with full temporal tracking. Save facts, extract entities using AI, and query historical data with time-travel capabilities.

Build intelligent AI agents with persistent memory that understands time and context

Architecture

This server uses a single-file "Database-Blind" architecture:

  • main.py - Everything in one file: FalkorDB driver, session management, entity extraction, memory tools, and your custom automation tools

Structure:

  1. Configuration & Database Driver

  2. Session Store & Entity Extractor

  3. Graphiti Memory Core

  4. Core MCP Memory Tools

  5. CUSTOM AUTOMATION TOOLS section (add your webhook tools here!)

  6. Server Startup

Note: This server focuses solely on memory operations. For advanced workflow orchestration, see the optional Automation Engine OS section.


⭐ Star This Repo

If you find this project useful, please give it a star! It helps others discover the project and motivates continued development.


✨ Features

🧠 Bi-Temporal Knowledge Graph

  • Smart Memory: Automatically tracks when facts were created AND when they became true in reality

  • Conflict Resolution: When you move locations or change jobs, old facts are automatically invalidated

  • Time Travel Queries: Ask "Where did John live in March 2024?" and get accurate historical answers

  • Session Tracking: Maintains context across conversations with automatic cleanup

πŸ€– AI-Powered Entity Extraction

  • Natural Language Understanding: Just tell it in plain English - "Alice moved to San Francisco and started working at Google"

  • Automatic Relationship Discovery: AI extracts entities and relationships without manual input

  • OpenAI Integration: Uses GPT-4 for intelligent entity extraction

  • Graceful Degradation: Works without AI - just add facts manually

πŸ› οΈ Simple Tool Extension

  • Single-File Architecture: Everything in one main.py file for easy customization

  • Direct @mcp.tool() Pattern: Add tools with a simple decorator - no config files needed

  • Single & Multi-Webhook: Execute one webhook or fire multiple in parallel

  • Clear Custom Section: Marked section in main.py shows exactly where to add your tools

πŸš€ Production Ready

  • Docker Support: Complete docker-compose setup included

  • Replit Optimized: Built specifically for Replit Autoscale environments

  • Resource Management: Automatic session cleanup and connection pooling

  • Health Checks: Built-in monitoring and status endpoints

  • 100% Privacy-Friendly: Your data stays in your database


🎬 How It Works

```
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ 1. Natural Language Input                              β”‚
β”‚    "Bob moved to NYC and joined Google as a PM"        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                               β”‚
                               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ 2. AI Entity Extraction (OpenAI)                       β”‚
β”‚    β€’ Bob -> lives in -> NYC                            β”‚
β”‚    β€’ Bob -> works at -> Google                         β”‚
β”‚    β€’ Bob -> has role -> PM                             β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                               β”‚
                               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ 3. Bi-Temporal Storage (FalkorDB)                      β”‚
β”‚    β€’ Fact: Bob works at Google                         β”‚
β”‚    β€’ created_at: 2024-12-19T10:00:00Z                  β”‚
β”‚    β€’ valid_at: 2024-12-19T10:00:00Z                    β”‚
β”‚    β€’ invalid_at: null (still true)                     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                               β”‚
                               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ 4. Query Anytime                                       β”‚
β”‚    β€’ "Where does Bob work now?" β†’ Google               β”‚
β”‚    β€’ "What was Bob's job history?" β†’ All past jobs     β”‚
β”‚    β€’ "Where did Bob live in 2023?" β†’ Historical data   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

πŸ“Έ Screenshots

  • Memory in Action: Knowledge Graph Example

  • AI Entity Extraction: Entity Extraction Demo

  • Dynamic Tool Generation: Tool Generator Interface

  • Temporal Queries: Time-Travel Query Results


πŸŽ₯ Video Tutorial

Watch the complete setup and usage guide:

Bi-Temporal MCP Server Tutorial

Topics covered:

  • Installation & setup (0:00)

  • Adding your first facts (2:30)

  • Using AI entity extraction (5:15)

  • Creating automation tools (8:45)

  • Temporal queries (12:20)

  • Deployment to production (15:00)


πŸš€ Quick Start

Option 1: Docker

```bash
# 1. Download and extract
wget https://github.com/YOUR_USERNAME/bitemporal-mcp-server/archive/main.zip
unzip main.zip
cd bitemporal-mcp-server-main

# 2. Configure
echo "OPENAI_API_KEY=sk-your-key" > .env

# 3. Start everything (FalkorDB + MCP Server)
docker-compose up -d

# 4. Verify it's running
curl http://localhost:8080/health
```

That's it! πŸŽ‰ Your server is now running at http://localhost:8080/sse

Option 2: Python (Local Development)

```bash
# 1. Install dependencies
pip install -r requirements.txt

# 2. Configure
cp .env.example .env
# Edit .env with your settings

# 3. Start FalkorDB (Docker)
docker run -d -p 6379:6379 falkordb/falkordb:latest

# 4. Run the server
python main.py
```

Option 3: One-Click Deploy

Deploy to Replit


πŸ› οΈ Adding Custom Automation Tools

Add your custom automation tools directly in main.py in the CUSTOM AUTOMATION TOOLS section.

Step 1: Find the Custom Tools Section

Open main.py and scroll to around line 800 - look for this clearly marked section:

```python
# =============================================================================
#
#   [ASCII art banner: "CUSTOM"]
#
#   AUTOMATION TOOLS
#
#   ADD YOUR CUSTOM AUTOMATION TOOLS BELOW
#
# =============================================================================
```

This is where you'll add your webhook tools using the @mcp.tool() decorator.

Step 2: Add Your Tool

Add a decorated async function with @mcp.tool():

```python
@mcp.tool()
async def send_slack_notification(message: str, channel: str = "#general") -> str:
    """Send a notification to Slack."""
    import httpx
    payload = {"text": message, "channel": channel}
    url = "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
    async with httpx.AsyncClient() as client:
        try:
            resp = await client.post(url, json=payload)
            return f"Success: Slack notification sent ({resp.status_code})"
        except Exception as e:
            return f"Error: {str(e)}"
```

The function's docstring becomes the tool description that the AI sees.
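
To see why that works, here is a toy version of the mechanism (not FastMCP's internals): a decorator can read the docstring and signature off the function object and register them as tool metadata. The `TOOLS` registry and `tool` decorator below are hypothetical, for illustration only.

```python
import inspect

TOOLS = {}  # hypothetical registry, for illustration only

def tool(fn):
    """Toy stand-in for @mcp.tool(): records the name, description, and parameters."""
    TOOLS[fn.__name__] = {
        "description": inspect.getdoc(fn),                # the docstring becomes the description
        "params": list(inspect.signature(fn).parameters), # parameter names from the signature
    }
    return fn

@tool
def send_slack_notification(message: str, channel: str = "#general") -> str:
    """Send a notification to Slack."""
    return "ok"
```

This is why a clear, action-oriented docstring matters: it is the only description of the tool the model ever sees.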

Step 3: Restart the Server

Restart the MCP server to load your new tools.

Example: LinkedIn Poster Tools

The Automation Engine App generates tools like this:

```python
@mcp.tool()
async def linkedin_post_image(caption: str, imageurl: str) -> str:
    """Posts an image with a caption to your LinkedIn page."""
    import httpx
    payload = {"caption": caption, "imageUrl": imageurl}
    url = "https://webhook.latenode.com/YOUR/WEBHOOK/URL"
    async with httpx.AsyncClient() as client:
        try:
            resp = await client.post(url, json=payload)
            return f"Success: LinkedIn image posted ({resp.status_code})"
        except Exception as e:
            return f"Error: {str(e)}"
```

Example: Multi-Webhook Broadcast

Fire multiple webhooks in parallel:

```python
@mcp.tool()
async def broadcast_alert(message: str) -> str:
    """Send alerts to multiple platforms in parallel."""
    import asyncio
    import httpx
    webhooks = [
        ("https://hooks.slack.com/...", {"text": message}),
        ("https://discord.com/api/webhooks/...", {"content": message}),
    ]

    async def send(url, data):
        async with httpx.AsyncClient() as client:
            return await client.post(url, json=data)

    results = await asyncio.gather(*[send(url, data) for url, data in webhooks])
    return f"Broadcast complete: {len(results)} webhooks fired"
```

πŸ“– API Reference - Memory Tools

All available MCP tools for managing your knowledge graph:

Core Memory Operations

add_fact

Add a new fact to the knowledge graph with bi-temporal tracking.

```python
await add_fact(
    source_entity="John",
    relation="works at",
    target_entity="Google",
    group_id="my_org",                # Optional
    session_id="session_123",         # Optional
    valid_at="2024-01-15T00:00:00Z",  # Optional - when the fact became true
)
```

Smart Conflict Resolution: When adding location or employment facts, previous facts of the same type are automatically invalidated.
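
A simplified sketch of the idea, using an in-memory list as a stand-in for the graph (the `record_fact` helper and `EXCLUSIVE_RELATIONS` set are illustrative, not the server's code):

```python
from datetime import datetime, timezone

facts = []  # in-memory stand-in for the knowledge graph
EXCLUSIVE_RELATIONS = {"lives in", "works at"}  # at most one current value per entity

def record_fact(source: str, relation: str, target: str) -> None:
    now = datetime.now(timezone.utc)
    if relation in EXCLUSIVE_RELATIONS:
        # Retire any still-current fact of the same type for this entity
        for f in facts:
            if (f["source"], f["relation"]) == (source, relation) and f["invalid_at"] is None:
                f["invalid_at"] = now
    facts.append({"source": source, "relation": relation, "target": target,
                  "created_at": now, "valid_at": now, "invalid_at": None})

record_fact("Bob", "works at", "Acme")
record_fact("Bob", "works at", "Google")   # retires the Acme fact
current = [f for f in facts if f["invalid_at"] is None]
```

The retired fact stays in the graph with its `invalid_at` set, so history queries can still find it.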

add_message

Add a natural language message and automatically extract entities using AI.

```python
await add_message(
    content="Alice moved to San Francisco and started working at OpenAI",
    session_id="session_123",
    group_id="my_org",       # Optional
    extract_entities=True,   # Uses OpenAI for extraction
)
```

Returns: Extracted entities and relationships as facts.

query_facts

Query facts from the knowledge graph.

```python
await query_facts(
    entity_name="John",      # Optional - filter by entity
    group_id="my_org",       # Optional
    include_invalid=False,   # Include invalidated facts
    max_facts=20,
)
```

query_at_time

Time-travel query - get facts valid at a specific point in time.

```python
await query_at_time(
    timestamp="2024-01-15T00:00:00Z",
    entity_name="John",      # Optional
    group_id="my_org",       # Optional
    max_facts=20,
)
```

Use Case: "Where did John work in January 2024?"
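
Conceptually, the filter behind this is small: a fact matches when the timestamp falls inside its validity window. A stand-alone sketch over plain dicts (illustrative only, with hypothetical sample data):

```python
from datetime import datetime, timezone

def facts_at(facts, t):
    """Return the facts that held in reality at time t."""
    return [f for f in facts
            if f["valid_at"] <= t and (f["invalid_at"] is None or t < f["invalid_at"])]

history = [
    {"fact": "John works at Acme",
     "valid_at": datetime(2023, 3, 1, tzinfo=timezone.utc),
     "invalid_at": datetime(2024, 6, 1, tzinfo=timezone.utc)},
    {"fact": "John works at Google",
     "valid_at": datetime(2024, 6, 1, tzinfo=timezone.utc),
     "invalid_at": None},
]

january = datetime(2024, 1, 15, tzinfo=timezone.utc)
facts_at(history, january)   # only the Acme fact held in January 2024
```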

get_episodes

Get recent conversation sessions/episodes.

```python
await get_episodes(
    group_ids=["my_org"],    # Optional
    max_episodes=10,
)
```

clear_graph

Clear all data for specified groups. Warning: Permanent deletion!

```python
await clear_graph(
    group_ids=["my_org"]     # Optional - defaults to DEFAULT_GROUP_ID
)
```

Server Management

get_status

Get comprehensive server status and statistics.

```python
await get_status()
# Returns: node counts, relationship types, session stats, connection status
```

force_cleanup

Manually trigger cleanup of expired sessions and idle connections.

```python
await force_cleanup()
# Returns: cleanup statistics
```
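
The cleanup logic amounts to a TTL sweep over a last-activity map. A simplified sketch (the `sessions` dict and `cleanup_expired_sessions` helper are illustrative; the real server also recycles idle database connections):

```python
import time

SESSION_TTL = 30 * 60  # 30 minutes, matching the server's session TTL

sessions = {}  # session_id -> last-activity timestamp (illustration only)

def touch(session_id: str) -> None:
    """Record activity for a session."""
    sessions[session_id] = time.time()

def cleanup_expired_sessions(now=None) -> int:
    """Drop sessions idle longer than SESSION_TTL; return how many were removed."""
    now = time.time() if now is None else now
    expired = [sid for sid, last in sessions.items() if now - last > SESSION_TTL]
    for sid in expired:
        del sessions[sid]
    return len(expired)
```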

πŸ’‘ Use Cases

Personal Knowledge Management

Track your life events, relationships, and locations with full history:

```python
await add_message(
    "I met Sarah at the tech conference. She works at OpenAI.",
    session_id="my_life",
)
# Later: "Where did I meet Sarah?" β†’ "At the tech conference"
```

Customer Relationship Management

Monitor customer interactions with automatic conflict resolution:

```python
await add_fact("CustomerA", "status", "premium")
# Automatically invalidates previous "status" facts
# Query history: "What was CustomerA's status in January?"
```

AI Agent Memory

Give your AI agents persistent, queryable memory:

```python
# Agent learns from conversation
await add_message(
    "User prefers morning meetings and uses Slack",
    session_id="agent_123",
)
# Agent recalls later: "What are the user's preferences?"
```

Workflow Automation

Combine knowledge with actions:

```python
# When a fact changes, trigger automation
if customer_upgraded_to_premium:
    await notify_sales_team(customer_name=name)
    await update_crm(customer_id=id, tier="premium")
```

❓ Frequently Asked Questions

Q: Does this require OpenAI?

A: No! OpenAI is optional for AI entity extraction. You can add facts manually without it.

Q: Can I use this with Claude Desktop?

A: Yes! Add the server URL to your claude_desktop_config.json:

```json
{
  "mcpServers": {
    "knowledge-graph": {
      "url": "http://localhost:8080/sse"
    }
  }
}
```

Q: How do I query historical data?

A: Use the query_at_time tool:

```python
await query_at_time(
    timestamp="2024-01-15T00:00:00Z",
    entity_name="John",
)
```

Q: Can I deploy this to production?

A: Absolutely! See DEPLOYMENT.md for guides on:

  • Replit Autoscale

  • Railway

  • Render

  • Fly.io

  • Docker

  • VPS

Q: How does fact invalidation work?

A: When you add a fact about location or employment, the system automatically finds earlier facts of the same type and sets their invalid_at to the current time. Query results then show only current facts unless you specifically request historical data.

Q: Can I create multi-webhook tools?

A: Yes! Add a tool to the Custom Tools section in main.py using asyncio.gather() to fire multiple webhooks simultaneously. See the Adding Custom Tools section for examples.

Q: Is my data secure?

A: Yes! Everything runs in your infrastructure. No data is sent anywhere except:

  • OpenAI (only if you use entity extraction)

  • Your configured webhooks (only when you call them)

Q: How much does it cost to run?

A: Free for self-hosting! Only costs:

  • FalkorDB hosting (free tier available)

  • OpenAI API usage (optional, ~$0.001 per extraction)


πŸ“‹ Changelog

[1.0.0] - 2024-12-19

Added

  • βœ… Full bi-temporal tracking (created_at, valid_at, invalid_at, expired_at)

  • βœ… Smart conflict resolution for location and employment changes

  • βœ… Session-aware episodic memory with 30-minute TTL

  • βœ… OpenAI-powered entity extraction from natural language

  • βœ… Dynamic tool generator for automation workflows

  • βœ… Single webhook tool template

  • βœ… Multi-webhook parallel execution template

  • βœ… Docker and Docker Compose support

  • βœ… Replit Autoscale optimization

  • βœ… Background cleanup manager

  • βœ… Comprehensive documentation and examples

Supported Features

| Feature | Status | Notes |
| ------- | ------ | ----- |
| Bi-Temporal Tracking | βœ… | Full implementation |
| AI Entity Extraction | βœ… | OpenAI GPT-4 |
| Smart Invalidation | βœ… | Location, employment, relationships |
| Session Management | βœ… | Auto-cleanup after 30 min |
| Custom Tools | βœ… | Single & multi-webhook via @mcp.tool() |
| Parallel Webhooks | βœ… | asyncio.gather |
| Docker Support | βœ… | Complete stack included |
| Health Checks | βœ… | Built-in monitoring |


πŸ†˜ Support

Need Help?

  1. Check Documentation: Start with QUICKSTART.md

  2. Join Community: High Ticket AI Builders - Free access!

  3. Watch Tutorial: Video Guide

  4. Report Bugs: GitHub Issues


πŸ”§ Optional: Automation Engine OS

Need a visual tool to orchestrate your workflows?

If you want to manage webhook configurations, generate tools automatically, and orchestrate complex workflows without writing code, check out Automation Engine OS - it's free when you join our community!

What Automation Engine OS provides:

  • Visual webhook configuration builder

  • Automatic MCP tool code generation

  • Workflow orchestration dashboard

  • Multi-webhook template management

  • One-click tool deployment to your MCP server

Get free access: Join High Ticket AI Builders

Note: Automation Engine OS is completely optional. This MCP server works standalone - you can manually add tools to the Custom Tools section in main.py as shown in the Adding Custom Automation Tools section.


🀝 Contributing

Contributions are welcome! Areas for improvement:

  • πŸ” Additional temporal query operators

  • 🧠 Enhanced entity extraction prompts

  • πŸ”§ More webhook authentication methods

  • πŸ“Š Performance optimizations

  • 🌐 Additional deployment platforms

  • πŸ“– More examples and tutorials

To contribute:

  1. Fork the repository

  2. Create your feature branch (git checkout -b feature/AmazingFeature)

  3. Commit your changes (git commit -m 'Add some AmazingFeature')

  4. Push to the branch (git push origin feature/AmazingFeature)

  5. Open a Pull Request


πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

TL;DR: You can use this commercially, modify it, distribute it. Just keep the license notice.


πŸ™ Acknowledgments

  • Built with FastMCP

  • Powered by FalkorDB

  • AI features via OpenAI

  • Inspired by the High Ticket AI Builders community



πŸ“ž Connect


Built with ❀️ for the High Ticket AI Builders ecosystem

If this project helps you, please consider giving it a ⭐!

