Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Task Context MCP Server search for best practices on analyzing Python developer CVs".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Task Context MCP Server
An MCP (Model Context Protocol) server for managing task contexts and artifacts, enabling AI agents to autonomously manage and improve their execution processes for repetitive task types.
Overview
Important Distinction: This system manages task contexts (reusable task types/categories), NOT individual task instances.
For example:
Task Context: "Analyze applicant CV for Python developer of specific stack"
NOT stored: Individual applicant details or specific CV analyses
Stored: Reusable artifacts (practices, rules, prompts, learnings) applicable to ANY CV analysis of this type
This MCP server provides a SQLite-based storage system that enables AI agents to:
Store and retrieve task contexts with associated artifacts (practices, rules, prompts, learnings)
Perform full-text search across historical learnings and best practices using SQLite FTS5
Manage artifact lifecycles with active/archived status tracking
Enable autonomous process improvement with minimal user intervention
Store multiple artifacts of each type per task context
Features
Core Functionality
Task Context Management: Create, update, archive, and retrieve task contexts (reusable task types)
Artifact Storage: Store multiple practices, rules, prompts, and learnings for each task context
Full-Text Search: Efficient search across all artifacts using SQLite FTS5
Lifecycle Management: Track active vs archived artifacts with reasons
Transaction Safety: ACID compliance for all database operations
MCP Tools Available
get_active_task_contexts - Get all currently active task contexts
create_task_context - Create a new task context with summary and description
get_artifacts_for_task_context - Retrieve all artifacts for a specific task context
create_artifact - Create a new artifact (multiple per type allowed)
update_artifact - Update an existing artifact's summary and/or content
archive_artifact - Archive artifacts with optional reason
search_artifacts - Full-text search across all artifacts
reflect_and_update_artifacts - Reflect on learnings and get prompted to update artifacts
Installation
Prerequisites
Python 3.12+
uv package manager
Setup
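A minimal setup sketch, assuming the standard uv workflow; the repository URL and directory name below are placeholders:

```bash
# Clone the repository (replace the placeholder with the actual URL)
git clone <repository-url>
cd task-context-mcp

# Install dependencies into a local virtual environment (requires Python 3.12+)
uv sync
```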
Usage
Running the MCP Server
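One way to start the server locally, assuming the entry point is the FastMCP server in src/task_context_mcp/main.py (the exact command may differ in your checkout):

```bash
# Run the MCP server over stdio from the project root
uv run python -m task_context_mcp.main
```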
MCP Client Configuration
For VS Code/Cursor
Add to your .cursor/mcp.json:
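An illustrative entry, assuming the server is started with uv from its project directory; the server key, path, and module name are placeholders to adjust for your setup:

```json
{
  "mcpServers": {
    "task-context-mcp": {
      "command": "uv",
      "args": [
        "run",
        "--directory",
        "/absolute/path/to/task-context-mcp",
        "python",
        "-m",
        "task_context_mcp.main"
      ]
    }
  }
}
```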
MCP Tools Available
The server provides the following tools via MCP:
1. get_active_task_contexts
Get all active task contexts in the system with their metadata.
Returns: List of active task contexts with id, summary, description, creation/update dates
2. create_task_context
Create a new task context (reusable task type) with summary and description.
Parameters:
summary (string): Brief task context description (e.g., "CV Analysis for Python Developer")
description (string): Detailed task context description
Returns: Created task context information
3. get_artifacts_for_task_context
Retrieve all active artifacts for a specific task context.
Parameters:
task_context_id (string): ID of the task context
artifact_types (optional list): Types to retrieve ('practice', 'rule', 'prompt', 'result')
include_archived (boolean): Whether to include archived artifacts
Returns: All matching artifacts with content
4. create_artifact
Create a new artifact for a task context. Multiple artifacts of the same type are allowed.
Parameters:
task_context_id (string): Associated task context ID
artifact_type (string): Type ('practice', 'rule', 'prompt', 'result')
summary (string): Brief description
content (string): Full artifact content
Returns: Created artifact information
Artifact Types:
practice: Best practices and guidelines for executing the task type
rule: Specific rules and constraints to follow
prompt: Template prompts useful for the task type
result: General patterns and learnings from past work (NOT individual execution results)
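For example, a create_artifact call for a CV-analysis context might receive arguments like the following (the ID and text are purely illustrative):

```json
{
  "task_context_id": "<uuid-of-the-cv-analysis-context>",
  "artifact_type": "rule",
  "summary": "Require evidence of production Python experience",
  "content": "Flag CVs that list Python only under coursework; prioritize candidates with shipped services or libraries."
}
```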
5. update_artifact
Update an existing artifact's summary and/or content.
Parameters:
artifact_id (string): ID of the artifact to update
summary (optional string): New summary
content (optional string): New content
Returns: Updated artifact information
6. archive_artifact
Archive an artifact, marking it as no longer active.
Parameters:
artifact_id (string): ID of artifact to archive
reason (optional string): Reason for archiving
Returns: Archived artifact information
7. search_artifacts
Perform full-text search across all artifacts.
Parameters:
query (string): Search query
limit (integer): Maximum results (default: 10)
Returns: Matching artifacts ranked by relevance
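An illustrative search_artifacts call (argument values are examples only):

```json
{
  "query": "database optimization",
  "limit": 5
}
```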
8. reflect_and_update_artifacts
Reflect on task execution learnings and get prompted to update artifacts autonomously.
Parameters:
task_context_id (string): ID of the task context used for this work
learnings (string): What was learned during task execution (mistakes, corrections, patterns, etc.)
Returns: Reflection summary with current artifacts and required actions
Purpose: Ensures agents autonomously manage artifacts by explicitly prompting them to create, update, or archive artifacts based on their learnings
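A hypothetical reflect_and_update_artifacts call after finishing a CV review (the ID and learnings text are illustrative):

```json
{
  "task_context_id": "<uuid-of-the-cv-analysis-context>",
  "learnings": "Missed checking for async experience; several strong candidates were under-rated. Add a practice to explicitly verify asyncio/FastAPI exposure."
}
```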
Architecture
Database Schema
task_contexts: Task context definitions with metadata and status tracking
artifacts: Artifact storage with lifecycle management (multiple per type per context)
artifacts_fts: FTS5 virtual table for full-text search indexing
Database Migrations: The project uses Alembic for automatic schema migrations. When you modify the database models, Alembic automatically detects changes and updates the database. See docs/MIGRATIONS.md for details.
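Should you ever need to apply migrations manually, the standard Alembic command should work from the project root (a sketch, assuming the bundled alembic/ configuration):

```bash
# Apply any pending schema migrations to the SQLite database
uv run alembic upgrade head
```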
Key Components
src/task_context_mcp/main.py: MCP server implementation with FastMCP
src/task_context_mcp/database/models.py: SQLAlchemy ORM models
src/task_context_mcp/database/database.py: Database operations and FTS5 management
src/task_context_mcp/database/migrations.py: Alembic migration utilities
src/task_context_mcp/config/: Configuration management with Pydantic settings
alembic/: Database migration scripts and configuration
Technology Stack
Database: SQLite 3.35+ with FTS5 extension
ORM: SQLAlchemy 2.0+ for type-safe database operations
Migrations: Alembic 1.17+ for automatic schema migrations
MCP Framework: FastMCP for Model Context Protocol implementation
Configuration: Pydantic Settings for environment-based config
Logging: Loguru for structured, multi-level logging
Development: UV for Python package and dependency management
Business Requirements Alignment
This implementation fulfills all requirements from docs/BRD.md:
✅ Task Context Catalog: UUID-based task context identification with metadata
✅ Artifact Storage: Lifecycle management with active/archived status, multiple per type
✅ Full-Text Search: FTS5-based search with BM25 ranking
✅ Context Loading: Automatic retrieval based on task context matching
✅ Autonomous Updates: Agent-driven improvements with feedback loops
✅ ACID Compliance: Transaction-based operations with SQLite
✅ Minimal Query Processing: Support for natural language task context matching
Use Case Scenarios
Scenario 1: Working on a New Task Type
User Request: "Help me analyze this CV for a Python developer position"
Agent Analysis: Agent analyzes the request and identifies it as a CV analysis task type
Task Context Discovery: Agent calls get_active_task_contexts to check for existing similar contexts
Task Context Creation: No matching context is found, so agent calls create_task_context with:
Summary: "CV Analysis for Python Developer"
Description: "Analyze applicant CVs for Python developer positions with specific tech stack requirements"
Context Loading: Agent calls get_artifacts_for_task_context to load any existing artifacts
Task Execution: Agent uses loaded artifacts (practices, rules, prompts) to analyze the CV
Artifact Creation: Based on learnings, agent calls create_artifact to store successful approaches (see the sketch below)
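The same flow expressed as the sequence of tool calls the agent would make (a pseudocode sketch; arguments are abbreviated and IDs are illustrative):

```
get_active_task_contexts()                      # no CV-analysis context found
create_task_context(
    summary="CV Analysis for Python Developer",
    description="Analyze applicant CVs for Python developer positions with specific tech stack requirements")
get_artifacts_for_task_context(task_context_id="<new-context-id>")   # empty on first run
# ... agent analyzes the CV using any loaded artifacts ...
create_artifact(task_context_id="<new-context-id>", artifact_type="practice",
                summary="Screen for stack match first", content="...")
```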
Scenario 2: Continuing Work on Existing Task Type
User Request: "Analyze another CV for a Python developer"
Task Context Matching: Agent calls get_active_task_contexts and finds a matching context by summary/description
Context Retrieval: Agent calls get_artifacts_for_task_context with the context ID to load all relevant artifacts
Task Execution: Agent uses the loaded context (practices, rules, prompts, learnings) to analyze the new CV
Process Improvement: Agent refines artifacts based on current execution and user feedback
Scenario 3: Finding Similar Past Work
User Request: "Help me optimize this database query"
Search for Inspiration: Agent calls search_artifacts with keywords like "database optimization" or "query performance"
Review Results: Agent examines returned artifacts for similar past approaches
Adapt Patterns: Agent adapts successful patterns from historical artifacts to current task
Store New Artifacts: Agent creates new artifacts documenting the current successful approach
Scenario 4: Autonomous Process Improvement
Task Completion: Agent completes a task and receives user feedback
Success Analysis: Agent analyzes whether the execution was successful
Artifact Updates:
Successful approaches: create_artifact to add new practices/rules/learnings
Refinements needed: update_artifact to improve existing artifacts
Outdated methods: archive_artifact with reason for archival
Future Benefit: Subsequent tasks of the same type automatically benefit from the improved artifacts
Configuration
The server uses the following configuration (via environment variables or .env file):
TASK_CONTEXT_MCP__DATA_DIR: Data directory path (default: ./data)
TASK_CONTEXT_MCP__DATABASE_URL: Database URL (default: sqlite:///./data/task_context.db)
TASK_CONTEXT_MCP__LOGGING_LEVEL: Logging level (default: INFO)
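A matching .env file simply restates these names; the values shown below are the documented defaults:

```bash
TASK_CONTEXT_MCP__DATA_DIR=./data
TASK_CONTEXT_MCP__DATABASE_URL=sqlite:///./data/task_context.db
TASK_CONTEXT_MCP__LOGGING_LEVEL=INFO
```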
Data Model
Task Contexts
id: Unique UUID identifier
summary: Brief task context description for matching
description: Detailed task context description
creation_date: When task context was created
updated_date: When task context was last modified
status: 'active' or 'archived'
Artifacts
id: Unique UUID identifier
task_context_id: Reference to associated task context
artifact_type: 'practice', 'rule', 'prompt', or 'result'
summary: Brief artifact description
content: Full artifact content
status: 'active' or 'archived'
archived_at: Timestamp when archived (if applicable)
archivation_reason: Reason for archiving
created_at: When artifact was created
Note: Multiple artifacts of the same type can exist per task context. For example, a CV analysis context might have 5 different rules, 3 practices, 2 prompts, and several learnings.
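A condensed sketch of how these fields might map to SQLAlchemy 2.0 models; the real definitions live in src/task_context_mcp/database/models.py and may differ in naming and constraints:

```python
# Illustrative ORM sketch only; not the project's actual models.
import uuid
from datetime import datetime
from sqlalchemy import ForeignKey, String, Text
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class TaskContext(Base):
    __tablename__ = "task_contexts"

    id: Mapped[str] = mapped_column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
    summary: Mapped[str] = mapped_column(Text)
    description: Mapped[str] = mapped_column(Text)
    creation_date: Mapped[datetime] = mapped_column(default=datetime.utcnow)
    updated_date: Mapped[datetime] = mapped_column(default=datetime.utcnow, onupdate=datetime.utcnow)
    status: Mapped[str] = mapped_column(String(16), default="active")  # 'active' or 'archived'


class Artifact(Base):
    __tablename__ = "artifacts"

    id: Mapped[str] = mapped_column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
    task_context_id: Mapped[str] = mapped_column(ForeignKey("task_contexts.id"))
    artifact_type: Mapped[str] = mapped_column(String(16))  # 'practice', 'rule', 'prompt', 'result'
    summary: Mapped[str] = mapped_column(Text)
    content: Mapped[str] = mapped_column(Text)
    status: Mapped[str] = mapped_column(String(16), default="active")
    archived_at: Mapped[datetime | None] = mapped_column(default=None)
    archivation_reason: Mapped[str | None] = mapped_column(Text, default=None)
    created_at: Mapped[datetime] = mapped_column(default=datetime.utcnow)
```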
Development
Running Tests
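Assuming pytest is declared as a dev dependency (check pyproject.toml), the usual invocation is:

```bash
# Run the test suite through uv's managed environment
uv run pytest
```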
Code Quality
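A sketch of typical checks, assuming ruff and mypy are configured for this project; substitute the tools actually listed in pyproject.toml:

```bash
# Lint and format (assumes ruff is a dev dependency)
uv run ruff check .
uv run ruff format .

# Static type checking (assumes mypy is a dev dependency)
uv run mypy src
```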
License
MIT License - see LICENSE file for details.