Bi-Temporal Knowledge Graph MCP Server

A production-ready MCP (Model Context Protocol) server that gives your AI agents persistent memory with full temporal tracking. Save facts, extract entities using AI, and query historical data with time-travel capabilities.

Build intelligent AI agents with persistent memory that understands time and context

Architecture

This server uses a single-file "Database-Blind" architecture:

  • main.py - Everything in one file: FalkorDB driver, session management, entity extraction, memory tools, and your custom automation tools

Structure:

  1. Configuration & Database Driver

  2. Session Store & Entity Extractor

  3. Graphiti Memory Core

  4. Core MCP Memory Tools

  5. CUSTOM AUTOMATION TOOLS section (add your webhook tools here!)

  6. Server Startup

Note: This server focuses solely on memory operations. For advanced workflow orchestration, see the optional Automation Engine OS section.


โญ Star This Repo

If you find this project useful, please give it a star! It helps others discover the project and motivates continued development.

How to star this repo



✨ Features

🧠 Bi-Temporal Knowledge Graph

  • Smart Memory: Automatically tracks when facts were created AND when they became true in reality

  • Conflict Resolution: When you move locations or change jobs, old facts are automatically invalidated

  • Time Travel Queries: Ask "Where did John live in March 2024?" and get accurate historical answers

  • Session Tracking: Maintains context across conversations with automatic cleanup
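To make the bi-temporal model concrete, here is a minimal sketch of the two timelines every fact carries. This is illustrative only — the field names mirror the concepts above, not the server's actual graph schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Fact:
    """Illustrative bi-temporal record: the two timelines are independent."""
    source: str
    relation: str
    target: str
    created_at: datetime                   # transaction time: when the system recorded it
    valid_at: datetime                     # valid time: when it became true in reality
    invalid_at: Optional[datetime] = None  # None while the fact still holds

# A fact recorded on Dec 19 but backdated to June, when it actually happened:
fact = Fact(
    source="Bob", relation="works at", target="Google",
    created_at=datetime(2024, 12, 19, tzinfo=timezone.utc),
    valid_at=datetime(2024, 6, 1, tzinfo=timezone.utc),
)
```

Separating the two timestamps is what makes time-travel queries possible: "what did we know on date X?" uses `created_at`, while "what was true on date X?" uses `valid_at`/`invalid_at`.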

🤖 AI-Powered Entity Extraction

  • Natural Language Understanding: Just tell it in plain English - "Alice moved to San Francisco and started working at Google"

  • Automatic Relationship Discovery: AI extracts entities and relationships without manual input

  • OpenAI Integration: Uses GPT-4 for intelligent entity extraction

  • Graceful Degradation: Works without AI - just add facts manually
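The exact extraction prompt and schema live in main.py and are not reproduced here; as a hedged illustration, a common pattern is to ask the model for JSON triples and parse them, which keeps ingestion independent of the LLM:

```python
import json

# Hypothetical model response for:
#   "Alice moved to San Francisco and started working at Google"
# (the actual schema used by main.py may differ)
raw = """
{"facts": [
  {"source": "Alice", "relation": "lives in", "target": "San Francisco"},
  {"source": "Alice", "relation": "works at", "target": "Google"}
]}
"""

facts = json.loads(raw)["facts"]
for f in facts:
    print(f'{f["source"]} -> {f["relation"]} -> {f["target"]}')
```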

๐Ÿ› ๏ธ Simple Tool Extension

  • Single-File Architecture: Everything in one main.py file for easy customization

  • Direct @mcp.tool() Pattern: Add tools with a simple decorator - no config files needed

  • Single & Multi-Webhook: Execute one webhook or fire multiple in parallel

  • Clear Custom Section: Marked section in main.py shows exactly where to add your tools

🚀 Production Ready

  • Docker Support: Complete docker-compose setup included

  • Replit Optimized: Built specifically for Replit Autoscale environments

  • Resource Management: Automatic session cleanup and connection pooling

  • Health Checks: Built-in monitoring and status endpoints

  • 100% Privacy-Friendly: Your data stays in your database


🎬 How It Works

```
┌──────────────────────────────────────────────────────┐
│ 1. Natural Language Input                            │
│    "Bob moved to NYC and joined Google as a PM"      │
└──────────────────────────┬───────────────────────────┘
                           ▼
┌──────────────────────────────────────────────────────┐
│ 2. AI Entity Extraction (OpenAI)                     │
│    • Bob -> lives in -> NYC                          │
│    • Bob -> works at -> Google                       │
│    • Bob -> has role -> PM                           │
└──────────────────────────┬───────────────────────────┘
                           ▼
┌──────────────────────────────────────────────────────┐
│ 3. Bi-Temporal Storage (FalkorDB)                    │
│    • Fact: Bob works at Google                       │
│    • created_at: 2024-12-19T10:00:00Z                │
│    • valid_at:   2024-12-19T10:00:00Z                │
│    • invalid_at: null (still true)                   │
└──────────────────────────┬───────────────────────────┘
                           ▼
┌──────────────────────────────────────────────────────┐
│ 4. Query Anytime                                     │
│    • "Where does Bob work now?" → Google             │
│    • "What was Bob's job history?" → All past jobs   │
│    • "Where did Bob live in 2023?" → Historical data │
└──────────────────────────────────────────────────────┘
```

📸 Screenshots

Memory in Action

Knowledge Graph Example

AI Entity Extraction

Entity Extraction Demo

Dynamic Tool Generation

Tool Generator Interface

Temporal Queries

Time-Travel Query Results


🎥 Video Tutorial

Watch the complete setup and usage guide:

Bi-Temporal MCP Server Tutorial

Topics covered:

  • Installation & setup (0:00)

  • Adding your first facts (2:30)

  • Using AI entity extraction (5:15)

  • Creating automation tools (8:45)

  • Temporal queries (12:20)

  • Deployment to production (15:00)


🚀 Quick Start

Option 1: Docker (Recommended)

```bash
# 1. Download and extract
wget https://github.com/YOUR_USERNAME/bitemporal-mcp-server/archive/main.zip
unzip main.zip
cd bitemporal-mcp-server-main

# 2. Configure
echo "OPENAI_API_KEY=sk-your-key" > .env

# 3. Start everything (FalkorDB + MCP Server)
docker-compose up -d

# 4. Verify it's running
curl http://localhost:8080/health
```

That's it! 🎉 Your server is now running at http://localhost:8080/sse

Option 2: Python (Local Development)

```bash
# 1. Install dependencies
pip install -r requirements.txt

# 2. Configure
cp .env.example .env  # Edit .env with your settings

# 3. Start FalkorDB (Docker)
docker run -d -p 6379:6379 falkordb/falkordb:latest

# 4. Run the server
python main.py
```

Option 3: One-Click Deploy

Deploy to Replit


๐Ÿ› ๏ธ Adding Custom Automation Tools

Add your custom automation tools directly in main.py in the CUSTOM AUTOMATION TOOLS section.

Step 1: Find the Custom Tools Section

Open main.py and scroll to around line 800 - look for this clearly marked section:

```python
# =============================================================================
#
#   C U S T O M   A U T O M A T I O N   T O O L S
#
#   ADD YOUR CUSTOM AUTOMATION TOOLS BELOW
#
# =============================================================================
```

This is where you'll add your webhook tools using the @mcp.tool() decorator.

Step 2: Add Your Tool

Add a decorated async function with @mcp.tool():

```python
@mcp.tool()
async def send_slack_notification(message: str, channel: str = "#general") -> str:
    """Send a notification to Slack."""
    import httpx
    payload = {"text": message, "channel": channel}
    url = "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
    async with httpx.AsyncClient() as client:
        try:
            resp = await client.post(url, json=payload)
            return f"Success: Slack notification sent ({resp.status_code})"
        except Exception as e:
            return f"Error: {str(e)}"
```

The function's docstring becomes the tool description that the AI sees.

Step 3: Restart the Server

Restart the MCP server to load your new tools.

Example: LinkedIn Poster Tools

The Automation Engine App generates tools like this:

```python
@mcp.tool()
async def linkedin_post_image(caption: str, imageurl: str) -> str:
    """Posts an image with a caption to your LinkedIn page."""
    import httpx
    payload = {"caption": caption, "imageUrl": imageurl}
    url = "https://webhook.latenode.com/YOUR/WEBHOOK/URL"
    async with httpx.AsyncClient() as client:
        try:
            resp = await client.post(url, json=payload)
            return f"Success: LinkedIn image posted ({resp.status_code})"
        except Exception as e:
            return f"Error: {str(e)}"
```

Example: Multi-Webhook Broadcast

Fire multiple webhooks in parallel:

```python
@mcp.tool()
async def broadcast_alert(message: str) -> str:
    """Send alerts to multiple platforms in parallel."""
    import httpx
    import asyncio
    webhooks = [
        ("https://hooks.slack.com/...", {"text": message}),
        ("https://discord.com/api/webhooks/...", {"content": message}),
    ]

    async def send(url, data):
        async with httpx.AsyncClient() as client:
            return await client.post(url, json=data)

    results = await asyncio.gather(*[send(url, data) for url, data in webhooks])
    return f"Broadcast complete: {len(results)} webhooks fired"
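One caveat with plain `asyncio.gather`: if a single webhook raises, the whole broadcast fails. A hedged variant (illustrative, not part of main.py) passes `return_exceptions=True` so every endpoint is still attempted, using dummy coroutines in place of real HTTP calls:

```python
import asyncio

async def send(url: str) -> str:
    # Stand-in for an HTTP POST; one endpoint fails to show the pattern.
    if "bad" in url:
        raise ConnectionError(f"unreachable: {url}")
    return f"200 {url}"

async def broadcast(urls):
    # return_exceptions=True: failures come back as values, not raised errors
    results = await asyncio.gather(*[send(u) for u in urls], return_exceptions=True)
    ok = [r for r in results if not isinstance(r, Exception)]
    return f"{len(ok)}/{len(results)} webhooks succeeded"

summary = asyncio.run(broadcast(["https://a.example", "https://bad.example"]))
print(summary)  # 1/2 webhooks succeeded
```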

📖 API Reference - Memory Tools

All available MCP tools for managing your knowledge graph:

Core Memory Operations

add_fact

Add a new fact to the knowledge graph with bi-temporal tracking.

```python
await add_fact(
    source_entity="John",
    relation="works at",
    target_entity="Google",
    group_id="my_org",               # Optional
    session_id="session_123",        # Optional
    valid_at="2024-01-15T00:00:00Z"  # Optional - when fact became true
)
```

Smart Conflict Resolution: When adding location or employment facts, previous facts of the same type are automatically invalidated.
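The invalidation rule can be pictured with a small in-memory sketch (field names hypothetical; the real logic in main.py runs as a graph query against FalkorDB):

```python
from datetime import datetime, timezone

def add_with_invalidation(facts: list[dict], new_fact: dict) -> list[dict]:
    """Mark earlier facts with the same source/relation as invalid, then append."""
    now = datetime.now(timezone.utc).isoformat()
    for f in facts:
        same_kind = (f["source"] == new_fact["source"]
                     and f["relation"] == new_fact["relation"])
        if same_kind and f.get("invalid_at") is None:
            f["invalid_at"] = now  # old fact stops being current
    return facts + [dict(new_fact, invalid_at=None)]

facts = []
facts = add_with_invalidation(facts, {"source": "John", "relation": "works at", "target": "IBM"})
facts = add_with_invalidation(facts, {"source": "John", "relation": "works at", "target": "Google"})

current = [f for f in facts if f["invalid_at"] is None]
print(current[0]["target"])  # Google
```

Note that the superseded fact is never deleted; it stays in the graph with an `invalid_at` timestamp, which is what makes historical queries possible.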

add_message

Add a natural language message and automatically extract entities using AI.

```python
await add_message(
    content="Alice moved to San Francisco and started working at OpenAI",
    session_id="session_123",
    group_id="my_org",     # Optional
    extract_entities=True  # Uses OpenAI for extraction
)
```

Returns: Extracted entities and relationships as facts.

query_facts

Query facts from the knowledge graph.

```python
await query_facts(
    entity_name="John",     # Optional - filter by entity
    group_id="my_org",      # Optional
    include_invalid=False,  # Include invalidated facts
    max_facts=20
)
```

query_at_time

Time-travel query - get facts valid at a specific point in time.

```python
await query_at_time(
    timestamp="2024-01-15T00:00:00Z",
    entity_name="John",  # Optional
    group_id="my_org",   # Optional
    max_facts=20
)
```

Use Case: "Where did John work in January 2024?"
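A time-travel query reduces to a validity-window check: a fact is visible at time `t` if it had become valid by `t` and had not yet been invalidated. A minimal sketch of that predicate (illustrative Python, not the server's actual graph query):

```python
def valid_at_time(fact: dict, t: str) -> bool:
    """Was this fact true in reality at timestamp t?

    ISO 8601 UTC strings of the same format compare correctly as plain strings.
    """
    started = fact["valid_at"] <= t
    not_yet_ended = fact["invalid_at"] is None or fact["invalid_at"] > t
    return started and not_yet_ended

# John's previous job, invalidated when he changed employers:
old_job = {"valid_at": "2023-02-01T00:00:00Z",
           "invalid_at": "2024-06-01T00:00:00Z"}

print(valid_at_time(old_job, "2024-01-15T00:00:00Z"))  # True  - still employed then
print(valid_at_time(old_job, "2024-07-01T00:00:00Z"))  # False - already invalidated
```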

get_episodes

Get recent conversation sessions/episodes.

```python
await get_episodes(
    group_ids=["my_org"],  # Optional
    max_episodes=10
)
```

clear_graph

Clear all data for specified groups. Warning: Permanent deletion!

```python
await clear_graph(
    group_ids=["my_org"]  # Optional - defaults to DEFAULT_GROUP_ID
)
```

Server Management

get_status

Get comprehensive server status and statistics.

```python
await get_status()
# Returns: node counts, relationship types, session stats, connection status
```

force_cleanup

Manually trigger cleanup of expired sessions and idle connections.

```python
await force_cleanup()
# Returns: cleanup statistics
```

💡 Use Cases

Personal Knowledge Management

Track your life events, relationships, and locations with full history:

```python
await add_message(
    "I met Sarah at the tech conference. She works at OpenAI.",
    session_id="my_life"
)
# Later: "Where did I meet Sarah?" → "At the tech conference"
```

Customer Relationship Management

Monitor customer interactions with automatic conflict resolution:

```python
await add_fact("CustomerA", "status", "premium")
# Automatically invalidates previous "status" facts
# Query history: "What was CustomerA's status in January?"
```

AI Agent Memory

Give your AI agents persistent, queryable memory:

```python
# Agent learns from conversation
await add_message(
    "User prefers morning meetings and uses Slack",
    session_id="agent_123"
)
# Agent recalls later: "What are the user's preferences?"
```

Workflow Automation

Combine knowledge with actions:

```python
# When a fact changes, trigger automation
if customer_upgraded_to_premium:
    await notify_sales_team(customer_name=name)
    await update_crm(customer_id=id, tier="premium")
```

โ“ Frequently Asked Questions

Q: Does this require OpenAI?

A: No! OpenAI is optional for AI entity extraction. You can add facts manually without it.

Q: Can I use this with Claude Desktop?

A: Yes! Add the server URL to your claude_desktop_config.json:

```json
{
  "mcpServers": {
    "knowledge-graph": {
      "url": "http://localhost:8080/sse"
    }
  }
}
```

Q: How do I query historical data?

A: Use the query_at_time tool:

```python
await query_at_time(
    timestamp="2024-01-15T00:00:00Z",
    entity_name="John"
)
```

Q: Can I deploy this to production?

A: Absolutely! See DEPLOYMENT.md for guides on:

  • Replit Autoscale

  • Railway

  • Render

  • Fly.io

  • Docker

  • VPS

Q: How does fact invalidation work?

A: When you add a fact about location or employment, the system automatically finds previous facts of the same type and marks them as invalid_at: current_time. Your query results only show current facts unless you specifically request historical data.

Q: Can I create multi-webhook tools?

A: Yes! Add a tool to the Custom Tools section in main.py using asyncio.gather() to fire multiple webhooks simultaneously. See the Adding Custom Tools section for examples.

Q: Is my data secure?

A: Yes! Everything runs in your infrastructure. No data is sent anywhere except:

  • OpenAI (only if you use entity extraction)

  • Your configured webhooks (only when you call them)

Q: How much does it cost to run?

A: Free for self-hosting! Only costs:

  • FalkorDB hosting (free tier available)

  • OpenAI API usage (optional, ~$0.001 per extraction)


📋 Changelog

[1.0.0] - 2024-12-19

Added

  • ✅ Full bi-temporal tracking (created_at, valid_at, invalid_at, expired_at)

  • ✅ Smart conflict resolution for location and employment changes

  • ✅ Session-aware episodic memory with 30-minute TTL

  • ✅ OpenAI-powered entity extraction from natural language

  • ✅ Dynamic tool generator for automation workflows

  • ✅ Single webhook tool template

  • ✅ Multi-webhook parallel execution template

  • ✅ Docker and Docker Compose support

  • ✅ Replit Autoscale optimization

  • ✅ Background cleanup manager

  • ✅ Comprehensive documentation and examples

Supported Features

| Feature | Status | Notes |
|---------|--------|-------|
| Bi-Temporal Tracking | ✅ | Full implementation |
| AI Entity Extraction | ✅ | OpenAI GPT-4 |
| Smart Invalidation | ✅ | Location, employment, relationships |
| Session Management | ✅ | Auto-cleanup after 30 min |
| Custom Tools | ✅ | Single & multi-webhook via `@mcp.tool()` |
| Parallel Webhooks | ✅ | `asyncio.gather` |
| Docker Support | ✅ | Complete stack included |
| Health Checks | ✅ | Built-in monitoring |


🆘 Support

Need Help?

  1. Check Documentation: Start with QUICKSTART.md

  2. Join Community: High Ticket AI Builders - Free access!

  3. Watch Tutorial: Video Guide

  4. Report Bugs: GitHub Issues


🔧 Optional: Automation Engine OS

Need a visual tool to orchestrate your workflows?

If you want to manage webhook configurations, generate tools automatically, and orchestrate complex workflows without writing code, check out Automation Engine OS - it's free when you join our community!

What Automation Engine OS provides:

  • Visual webhook configuration builder

  • Automatic MCP tool code generation

  • Workflow orchestration dashboard

  • Multi-webhook template management

  • One-click tool deployment to your MCP server

Get free access: Join High Ticket AI Builders

Note: Automation Engine OS is completely optional. This MCP server works standalone - you can manually add tools to the Custom Tools section in main.py as shown in the Adding Custom Automation Tools section.


๐Ÿค Contributing

Contributions are welcome! Areas for improvement:

  • 🔍 Additional temporal query operators

  • 🧠 Enhanced entity extraction prompts

  • 🔧 More webhook authentication methods

  • 📊 Performance optimizations

  • 🌍 Additional deployment platforms

  • 📖 More examples and tutorials

To contribute:

  1. Fork the repository

  2. Create your feature branch (git checkout -b feature/AmazingFeature)

  3. Commit your changes (git commit -m 'Add some AmazingFeature')

  4. Push to the branch (git push origin feature/AmazingFeature)

  5. Open a Pull Request


📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

TL;DR: You can use this commercially, modify it, distribute it. Just keep the license notice.


๐Ÿ™ Acknowledgments

  • Built with FastMCP

  • Powered by FalkorDB

  • AI features via OpenAI

  • Inspired by the High Ticket AI Builders community


โญ Star History

Star History Chart


📞 Connect


Built with โค๏ธ for the High Ticket AI Builders ecosystem

If this project helps you, please consider giving it a ⭐!

⬆ Back to Top

