
DevFlow MCP

by Takin-Profit

DevFlow MCP: Smart Memory for AI Agents

Ever wished your AI could remember things between conversations? DevFlow MCP gives any AI that supports the Model Context Protocol (like Claude Desktop) a persistent, searchable memory that actually makes sense.

Think of it as giving your AI a brain that doesn't reset every time you start a new chat.

What Makes This Different?

Most AI memory systems are either too complex to set up or too simple to be useful. DevFlow MCP hits the sweet spot:

Actually Works Out of the Box: No Docker containers, no external databases to configure. Just install and run.

Built for Real Development: Created by developers who got tired of explaining the same context over and over to AI assistants. This system understands how software projects actually work.

Honest About What It Does: Every feature documented here actually exists and works. No promises about features "coming soon" or half-implemented APIs.

Type-Safe Throughout: Zero any types in the entire codebase. If TypeScript is happy, the code works.

The Story Behind This Project

This started as a simple problem: AI assistants kept forgetting important project context between sessions. Existing solutions were either enterprise-grade overkill or toy projects that couldn't handle real workloads.

So we built something that actually solves the problem. DevFlow MCP has been battle-tested on real projects, handling everything from quick prototypes to complex enterprise applications.


Core Concepts

Entities

Entities are the primary nodes in the knowledge graph. Each entity has:

  • A unique name (identifier)

  • An entity type (e.g., "person", "organization", "event")

  • A list of observations

  • Vector embeddings (for semantic search)

  • Complete version history

Example:

{
  "name": "John_Smith",
  "entityType": "person",
  "observations": ["Speaks fluent Spanish"]
}

Relations

Relations define directed connections between entities with enhanced properties:

  • Strength indicators (0.0-1.0)

  • Confidence levels (0.0-1.0)

  • Rich metadata (source, timestamps, tags)

  • Temporal awareness with version history

  • Time-based confidence decay

Example:

{
  "from": "John_Smith",
  "to": "Anthropic",
  "relationType": "works_at",
  "strength": 0.9,
  "confidence": 0.95,
  "metadata": {
    "source": "linkedin_profile",
    "last_verified": "2025-03-21"
  }
}

Prompts (Workflow Guidance)

DevFlow MCP includes workflow-aware prompts that teach AI agents how to use the knowledge graph effectively in a cascading development workflow (planner → task creator → coder → reviewer).

What are prompts? Prompts are instructional messages that guide AI agents on which tools to call and when. They appear as slash commands in Claude Desktop (e.g., /init-project) and provide context-aware documentation.

Important: Prompts don't save data themselves—they return guidance text that tells the AI which tools to call. The AI then calls those tools (like create_entities, semantic_search) which actually interact with the database.
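To make the two-step flow concrete, here is an illustrative TypeScript sketch (not the actual DevFlow MCP source): a prompt returns guidance text only, and the AI then persists data by calling a tool such as create_entities with a payload matching the documented input schema.

```typescript
// Shapes mirror the documented create_entities input; the guidance text
// below is a hypothetical example of what a prompt might return.
interface Entity {
  name: string;
  entityType: string;
  observations: string[];
}

// Step 1: the prompt returns guidance text -- nothing is saved yet.
const guidance =
  "Create a 'component' entity, then link it to its feature with a 'part_of' relation.";

// Step 2: the AI acts on the guidance by calling the create_entities tool.
const createEntitiesArgs: { entities: Entity[] } = {
  entities: [
    {
      name: "AuthService",
      entityType: "component",
      observations: ["Implemented OAuth login flow with JWT tokens"],
    },
  ],
};

console.log(createEntitiesArgs.entities[0].name); // "AuthService"
```

The separation matters: prompts are pure instructions, so the database only changes when the AI follows through with explicit tool calls.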

Available Prompts

1. /init-project - Start New Projects

Guides planners on creating initial feature entities and structuring planning information.

Arguments:

  • projectName (required): Name of the project or feature

  • description (required): High-level description

  • goals (optional): Specific goals or requirements

What it teaches:

  • How to create "feature" entities for high-level projects

  • How to document decisions early

  • How to plan tasks and link them to features

  • Best practices for structuring project information

Example usage in Claude Desktop:

/init-project projectName="UserAuthentication" description="Implement secure user login system" goals="Support OAuth, 2FA, and password reset"

2. /get-context - Retrieve Relevant Information

Helps any agent search the knowledge graph for relevant history, dependencies, and context before starting work.

Arguments:

  • query (required): What are you working on? (used for semantic search)

  • entityTypes (optional): Filter by types (feature, task, decision, component, test)

  • includeHistory (optional): Include version history (default: false)

What it teaches:

  • How to use semantic search to find related work

  • How to check dependencies via relations

  • How to review design decisions

  • How to understand entity version history

Example usage:

/get-context query="authentication implementation" entityTypes=["component","decision"] includeHistory=true

3. /remember-work - Store Completed Work

Guides agents on saving their work with appropriate entity types and relations.

Arguments:

  • workType (required): Type of work (feature, task, decision, component, test)

  • name (required): Name/title of the work

  • description (required): What did you do? (stored as observations)

  • implementsTask (optional): Task this work implements (creates "implements" relation)

  • partOfFeature (optional): Feature this is part of (creates "part_of" relation)

  • dependsOn (optional): Components this depends on (creates "depends_on" relations)

  • keyDecisions (optional): Important decisions made

What it teaches:

  • How to create entities with correct types

  • How to set up relations between entities

  • How to document decisions separately

  • How to maintain the knowledge graph structure

Example usage:

/remember-work workType="component" name="AuthService" description="Implemented OAuth login flow with JWT tokens" implementsTask="UserAuth" partOfFeature="Authentication" dependsOn=["TokenManager","UserDB"]

4. /review-context - Get Full Review Context

Helps reviewers gather all relevant information about a piece of work before providing feedback.

Arguments:

  • entityName (required): Name of the entity to review

  • includeRelated (optional): Include related entities (default: true)

  • includeDecisions (optional): Include decision history (default: true)

What it teaches:

  • How to get the entity being reviewed

  • How to find related work (dependencies, implementations)

  • How to review design decisions

  • How to check test coverage

  • How to add review feedback as observations

Example usage:

/review-context entityName="AuthService" includeRelated=true includeDecisions=true

Cascading Workflow Example

Here's how prompts guide a complete development workflow:

1. Planner Agent:

/init-project projectName="UserDashboard" description="Create user analytics dashboard"
# AI learns to create a feature entity and plan tasks

2. Task Creator Agent:

/get-context query="dashboard features"
# AI learns to search for related work, then creates task entities

3. Developer Agent:

/get-context query="dashboard UI components"
# AI learns to find relevant components and decisions

/remember-work workType="component" name="DashboardWidget" description="Created widget framework"
# AI learns to store work with proper relations

4. Reviewer Agent:

/review-context entityName="DashboardWidget"
# AI learns to get full context, check tests, add feedback

Why Prompts Matter

  • Consistency: All agents use the same structured approach

  • Context preservation: Work is stored with proper metadata and relations

  • Discoverability: Future agents can find relevant history via semantic search

  • Workflow awareness: Each prompt knows its place in the development cycle

  • Self-documenting: Prompts teach agents best practices

How It Works Under the Hood

DevFlow MCP stores everything in a single SQLite database file. Yes, really - just one file on your computer.

Why SQLite Instead of Something "Fancier"?

We tried the complex stuff first. External databases, Docker containers, cloud services - they all work, but they're overkill for what most developers actually need.

SQLite gives you:

  • One file to rule them all: Your entire knowledge graph lives in a single .db file you can copy, backup, or version control

  • No setup headaches: No servers to configure, no containers to manage, no cloud accounts to create

  • Surprisingly fast: SQLite handles millions of records without breaking a sweat

  • Vector search built-in: The sqlite-vec extension handles semantic search natively

  • Works everywhere: From your laptop to production servers, SQLite just works

Getting Started (It's Ridiculously Simple)

# Install globally
npm install -g devflow-mcp

# Run it (creates database automatically)
dfm mcp

# Want to use a specific file? Set the location
DFM_SQLITE_LOCATION=./my-project-memory.db dfm mcp

No configuration files. No environment setup. No "getting started" tutorials that take 3 hours. It just works.

Requirements: Node.js 23+ (for the latest SQLite features)

Advanced Features

Semantic Search

Find semantically related entities based on meaning rather than just keywords:

  • Vector Embeddings: Entities are automatically encoded into high-dimensional vector space using OpenAI's embedding models

  • Cosine Similarity: Find related concepts even when they use different terminology

  • Configurable Thresholds: Set minimum similarity scores to control result relevance

  • Cross-Modal Search: Query with text to find relevant entities regardless of how they were described

  • Multi-Model Support: Compatible with multiple embedding models (OpenAI text-embedding-3-small/large)

  • Contextual Retrieval: Retrieve information based on semantic meaning rather than exact keyword matches

  • Optimized Defaults: Tuned parameters for balance between precision and recall (0.6 similarity threshold, hybrid search enabled)

  • Hybrid Search: Combines semantic and keyword search for more comprehensive results

  • Adaptive Search: System intelligently chooses between vector-only, keyword-only, or hybrid search based on query characteristics and available data

  • Performance Optimization: Prioritizes vector search for semantic understanding while maintaining fallback mechanisms for resilience

  • Query-Aware Processing: Adjusts search strategy based on query complexity and available entity embeddings
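The scoring idea behind semantic search can be sketched in a few lines. This is a simplified illustration, not the sqlite-vec implementation DevFlow MCP actually uses, but the cosine-similarity math and the default 0.6 threshold are the same concepts described above.

```typescript
// Cosine similarity between two embedding vectors: 1.0 means identical
// direction, 0.0 means orthogonal (unrelated).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// With the default 0.6 threshold, only sufficiently similar entities match.
// These example vectors are tiny stand-ins for real 1536-dimension embeddings.
const queryVec = [0.2, 0.8, 0.1];
const entityVec = [0.25, 0.75, 0.05];
const score = cosineSimilarity(queryVec, entityVec);
console.log(score > 0.6); // true for these near-identical vectors
```

Because similarity is computed on meaning-bearing vectors rather than tokens, "login flow" and "authentication" can score highly against each other even with zero keyword overlap.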

Temporal Awareness

Track complete history of entities and relations with point-in-time graph retrieval:

  • Full Version History: Every change to an entity or relation is preserved with timestamps

  • Point-in-Time Queries: Retrieve the exact state of the knowledge graph at any moment in the past

  • Change Tracking: Automatically records createdAt, updatedAt, validFrom, and validTo timestamps

  • Temporal Consistency: Maintain a historically accurate view of how knowledge evolved

  • Non-Destructive Updates: Updates create new versions rather than overwriting existing data

  • Time-Based Filtering: Filter graph elements based on temporal criteria

  • History Exploration: Investigate how specific information changed over time
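A point-in-time query can be pictured as selecting, for each entity, the version whose validity window contains the requested timestamp. The sketch below is a hedged illustration using the documented validFrom/validTo fields; the real storage layer differs, but the selection logic is the core idea.

```typescript
// Each non-destructive update closes the previous version's validTo and
// opens a new version; validTo === null marks the current version.
interface EntityVersion {
  name: string;
  observations: string[];
  validFrom: number; // ms since epoch
  validTo: number | null; // null = still current
}

function versionAt(
  history: EntityVersion[],
  t: number
): EntityVersion | undefined {
  return history.find(
    (v) => v.validFrom <= t && (v.validTo === null || t < v.validTo)
  );
}

const history: EntityVersion[] = [
  { name: "AuthService", observations: ["planned"], validFrom: 1000, validTo: 2000 },
  { name: "AuthService", observations: ["planned", "implemented"], validFrom: 2000, validTo: null },
];

console.log(versionAt(history, 1500)?.observations); // ["planned"]
```

Running the same selection across every entity and relation at timestamp t reconstructs the whole graph as it existed at that moment, which is what get_graph_at_time exposes.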

Confidence Decay

Relations automatically decay in confidence over time based on configurable half-life:

  • Time-Based Decay: Confidence in relations naturally decreases over time if not reinforced

  • Configurable Half-Life: Define how quickly information becomes less certain (default: 30 days)

  • Minimum Confidence Floors: Set thresholds to prevent over-decay of important information

  • Decay Metadata: Each relation includes detailed decay calculation information

  • Non-Destructive: Original confidence values are preserved alongside decayed values

  • Reinforcement Learning: Relations regain confidence when reinforced by new observations

  • Reference Time Flexibility: Calculate decay based on arbitrary reference times for historical analysis
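Half-life decay has a simple closed form: confidence falls by half every half-life period until it hits the floor. The exact formula DevFlow MCP uses is not spelled out here, so treat this as a conventional sketch of the mechanism, using the documented 30-day default and a hypothetical 0.1 floor.

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

// Conventional exponential half-life decay: c(t) = c0 * 0.5^(elapsed / halfLife),
// clamped to a minimum floor so important relations never decay to zero.
function decayedConfidence(
  initial: number,
  lastUpdatedMs: number,
  referenceMs: number,
  halfLifeMs: number = 30 * DAY_MS, // documented default half-life
  floor: number = 0.1 // hypothetical minimum-confidence floor
): number {
  const elapsed = Math.max(0, referenceMs - lastUpdatedMs);
  const decayed = initial * Math.pow(0.5, elapsed / halfLifeMs);
  return Math.max(floor, decayed);
}

// After exactly one half-life, confidence is halved: 0.9 -> 0.45.
const c = decayedConfidence(0.9, 0, 30 * DAY_MS);
console.log(c); // 0.45
```

Because the original confidence is stored alongside the decayed value, reinforcing a relation simply resets its reference point rather than recomputing history.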

Advanced Metadata

Rich metadata support for both entities and relations with custom fields:

  • Source Tracking: Record where information originated (user input, analysis, external sources)

  • Confidence Levels: Assign confidence scores (0.0-1.0) to relations based on certainty

  • Relation Strength: Indicate importance or strength of relationships (0.0-1.0)

  • Temporal Metadata: Track when information was added, modified, or verified

  • Custom Tags: Add arbitrary tags for classification and filtering

  • Structured Data: Store complex structured data within metadata fields

  • Query Support: Search and filter based on metadata properties

  • Extensible Schema: Add custom fields as needed without modifying the core data model

MCP API Tools

The following tools are available to LLM client hosts through the Model Context Protocol:

Entity Management

  • create_entities

    • Create multiple new entities in the knowledge graph

    • Input: entities (array of objects)

      • Each object contains:

        • name (string): Entity identifier

        • entityType (string): Type classification

        • observations (string[]): Associated observations

  • add_observations

    • Add new observations to existing entities

    • Input: observations (array of objects)

      • Each object contains:

        • entityName (string): Target entity

        • contents (string[]): New observations to add

    • Note: Unlike relations, observations do not support strength, confidence, or metadata fields. Observations are atomic facts about entities.

  • delete_entities

    • Remove entities and their relations

    • Input: entityNames (string[])

  • delete_observations

    • Remove specific observations from entities

    • Input: deletions (array of objects)

      • Each object contains:

        • entityName (string): Target entity

        • observations (string[]): Observations to remove

Relation Management

  • create_relations

    • Create multiple new relations between entities with enhanced properties

    • Input: relations (array of objects)

      • Each object contains:

        • from (string): Source entity name

        • to (string): Target entity name

        • relationType (string): Relationship type

        • strength (number, optional): Relation strength (0.0-1.0)

        • confidence (number, optional): Confidence level (0.0-1.0)

        • metadata (object, optional): Custom metadata fields

  • get_relation

    • Get a specific relation with its enhanced properties

    • Input:

      • from (string): Source entity name

      • to (string): Target entity name

      • relationType (string): Relationship type

  • update_relation

    • Update an existing relation with enhanced properties

    • Input: relation (object):

      • Contains:

        • from (string): Source entity name

        • to (string): Target entity name

        • relationType (string): Relationship type

        • strength (number, optional): Relation strength (0.0-1.0)

        • confidence (number, optional): Confidence level (0.0-1.0)

        • metadata (object, optional): Custom metadata fields

  • delete_relations

    • Remove specific relations from the graph

    • Input: relations (array of objects)

      • Each object contains:

        • from (string): Source entity name

        • to (string): Target entity name

        • relationType (string): Relationship type

Graph Operations

  • read_graph

    • Read the entire knowledge graph

    • No input required

  • search_nodes

    • Search for nodes based on query

    • Input: query (string)

  • open_nodes

    • Retrieve specific nodes by name

    • Input: names (string[])

Semantic Search

  • semantic_search

    • Search for entities semantically using vector embeddings and similarity

    • Input:

      • query (string): The text query to search for semantically

      • limit (number, optional): Maximum results to return (default: 10)

      • min_similarity (number, optional): Minimum similarity threshold (0.0-1.0, default: 0.6)

      • entity_types (string[], optional): Filter results by entity types

      • hybrid_search (boolean, optional): Combine keyword and semantic search (default: true)

      • semantic_weight (number, optional): Weight of semantic results in hybrid search (0.0-1.0, default: 0.6)

    • Features:

      • Intelligently selects optimal search method (vector, keyword, or hybrid) based on query context

      • Gracefully handles queries with no semantic matches through fallback mechanisms

      • Maintains high performance with automatic optimization decisions

  • get_entity_embedding

    • Get the vector embedding for a specific entity

    • Input:

      • entity_name (string): The name of the entity to get the embedding for
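The semantic_weight parameter above suggests a weighted blend of the two score sources. The actual blending logic inside DevFlow MCP may differ; this is a minimal sketch of how such a hybrid score is conventionally computed, using the documented 0.6 default.

```typescript
// Blend a semantic (vector) score with a keyword score; semanticWeight
// controls how much the semantic side dominates (documented default: 0.6).
function hybridScore(
  semanticScore: number,
  keywordScore: number,
  semanticWeight: number = 0.6
): number {
  return semanticWeight * semanticScore + (1 - semanticWeight) * keywordScore;
}

// A strong keyword hit can lift an entity with only a modest semantic match:
// 0.6 * 0.5 + 0.4 * 0.9 ≈ 0.66
console.log(hybridScore(0.5, 0.9));
```

Raising semantic_weight toward 1.0 makes results behave like pure vector search; lowering it toward 0.0 approaches plain keyword matching.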

Temporal Features

  • get_entity_history

    • Get complete version history of an entity

    • Input: entityName (string)

  • get_relation_history

    • Get complete version history of a relation

    • Input:

      • from (string): Source entity name

      • to (string): Target entity name

      • relationType (string): Relationship type

  • get_graph_at_time

    • Get the state of the graph at a specific timestamp

    • Input: timestamp (number): Unix timestamp (milliseconds since epoch)

  • get_decayed_graph

    • Get graph with time-decayed confidence values

    • Input: options (object, optional):

      • reference_time (number): Reference timestamp for decay calculation (milliseconds since epoch)

      • decay_factor (number): Optional decay factor override

Configuration

Environment Variables

Configure DevFlow MCP with these environment variables:

# SQLite Configuration
DFM_SQLITE_LOCATION=./knowledge.db

# Embedding Service Configuration
OPENAI_API_KEY=your-openai-api-key
OPENAI_EMBEDDING_MODEL=text-embedding-3-small

# Debug Settings
DEBUG=true

Embedding Models

Available OpenAI embedding models:

  • text-embedding-3-small: Efficient, cost-effective (1536 dimensions)

  • text-embedding-3-large: Higher accuracy, more expensive (3072 dimensions)

  • text-embedding-ada-002: Legacy model (1536 dimensions)

OpenAI API Configuration

To use semantic search, you'll need to configure OpenAI API credentials:

  1. Obtain an API key from OpenAI

  2. Configure your environment with:

# OpenAI API Key for embeddings
OPENAI_API_KEY=your-openai-api-key

# Default embedding model
OPENAI_EMBEDDING_MODEL=text-embedding-3-small

Note: For testing environments, the system will mock embedding generation if no API key is provided. However, using real embeddings is recommended for integration testing.

Integration with Claude Desktop

Configuration

For local development, add this to your claude_desktop_config.json:

{
  "mcpServers": {
    "devflow": {
      "command": "dfm",
      "args": ["mcp"],
      "env": {
        "DFM_SQLITE_LOCATION": "./knowledge.db",
        "OPENAI_API_KEY": "your-openai-api-key",
        "OPENAI_EMBEDDING_MODEL": "text-embedding-3-small",
        "DEBUG": "true"
      }
    }
  }
}

Important: Always explicitly specify the embedding model in your Claude Desktop configuration to ensure consistent behavior.

Recommended System Prompts

For optimal integration with Claude, add these statements to your system prompt:

You have access to the DevFlow MCP knowledge graph memory system, which provides you with persistent memory capabilities. Your memory tools are provided by DevFlow MCP, a sophisticated knowledge graph implementation. When asked about past conversations or user information, always check the DevFlow MCP knowledge graph first. You should use semantic_search to find relevant information in your memory when answering questions.

Testing Semantic Search

Once configured, Claude can access the semantic search capabilities through natural language:

  1. To create entities with semantic embeddings:

    User: "Remember that Python is a high-level programming language known for its readability and JavaScript is primarily used for web development."
  2. To search semantically:

    User: "What programming languages do you know about that are good for web development?"
  3. To retrieve specific information:

    User: "Tell me everything you know about Python."

The power of this approach is that users can interact naturally, while the LLM handles the complexity of selecting and using the appropriate memory tools.

Real-World Applications

DevFlow MCP's adaptive search capabilities provide practical benefits:

  1. Query Versatility: Users don't need to worry about how to phrase questions - the system adapts to different query types automatically

  2. Failure Resilience: Even when semantic matches aren't available, the system can fall back to alternative methods without user intervention

  3. Performance Efficiency: By intelligently selecting the optimal search method, the system balances performance and relevance for each query

  4. Improved Context Retrieval: LLM conversations benefit from better context retrieval as the system can find relevant information across complex knowledge graphs

For example, when a user asks "What do you know about machine learning?", the system can retrieve conceptually related entities even if they don't explicitly mention "machine learning" - perhaps entities about neural networks, data science, or specific algorithms. But if semantic search yields insufficient results, the system automatically adjusts its approach to ensure useful information is still returned.

Troubleshooting

Vector Search Diagnostics

DevFlow MCP includes built-in diagnostic capabilities to help troubleshoot vector search issues:

  • Embedding Verification: The system checks if entities have valid embeddings and automatically generates them if missing

  • Vector Index Status: Verifies that the vector index exists and is in the ONLINE state

  • Fallback Search: If vector search fails, the system falls back to text-based search

  • Detailed Logging: Comprehensive logging of vector search operations for troubleshooting

Debug Tools (when DEBUG=true)

Additional diagnostic tools become available when debug mode is enabled:

  • diagnose_vector_search: Information about the SQLite vector index, embedding counts, and search functionality

  • force_generate_embedding: Forces the generation of an embedding for a specific entity

  • debug_embedding_config: Information about the current embedding service configuration

Developer Reset

To completely reset your SQLite database during development:

# Remove the database file
rm -f ./knowledge.db

# Or if using a custom location
rm -f $DFM_SQLITE_LOCATION

# Restart your application - schema will be recreated automatically
dfm mcp

Building and Development

# Clone the repository
git clone https://github.com/takinprofit/dev-flow-mcp.git
cd dev-flow-mcp

# Install dependencies (uses pnpm, not npm)
pnpm install

# Build the project
pnpm run build

# Run tests
pnpm test

# Check test coverage
pnpm run test:coverage

# Type checking
npx tsc --noEmit

# Linting
npx ultracite check src/
npx ultracite fix src/

Installation

Local Development

For development or contributing to the project:

# Clone the repository
git clone https://github.com/takinprofit/dev-flow-mcp.git
cd dev-flow-mcp

# Install dependencies
pnpm install

# Build the CLI
pnpm run build

# The CLI will be available as 'dfm' command

License

MIT
