Provides containerized deployment of the Valkey MCP task management server with support for multiple transport protocols (STDIO, SSE, Streamable HTTP).
Container images are published to GitHub Container Registry and the project includes GitHub Actions workflows for linting, publishing, and releasing.
Uses Valkey-Glide v2 as the official Go client for connecting to and interacting with Valkey for data persistence.
Supports rich Markdown-formatted notes for both plans and tasks, including headings, lists, tables, code blocks with syntax highlighting, links, images, and formatting.
Valkey MCP Task Management Server
A task management system that implements the Model Context Protocol (MCP) for seamless integration with agentic AI tools. This system allows AI agents to create, manage, and track tasks within plans using Valkey as the persistence layer.
Features
Plan management (create, read, update, delete)
Task management (create, read, update, delete)
Task ordering and prioritization
Status tracking for tasks
Notes support with Markdown formatting for both plans and tasks
MCP server for AI agent integration
Supports STDIO, SSE and Streamable HTTP transport protocols
Docker container support for easy deployment
Architecture
The system is built using:
Go: For the backend implementation
Valkey: For data persistence
Valkey-Glide v2: Official Go client for Valkey
Model Context Protocol: For AI agent integration
Quick Start
Docker Deployment
The MCP server is designed to run one protocol at a time for simplicity. By default, all protocols are disabled; you must explicitly enable the one you want to use.
Prerequisites
Create a named volume for Valkey data persistence:
docker volume create valkey-data
Running with SSE (Recommended for most use cases)
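A minimal sketch of an SSE deployment, assuming the server listens on port 8080 and that the transport and Valkey address are selected with environment variables named MCP_TRANSPORT and VALKEY_ADDR (the image path, port, and variable names are assumptions, not confirmed here):

```sh
# Hypothetical example: shared network, a Valkey container backed by the named volume,
# and the MCP server with the SSE transport enabled.
docker network create mcp-net
docker run -d --name valkey --network mcp-net -v valkey-data:/data valkey/valkey:latest
docker run -d --name valkey-mcp-server --network mcp-net \
  -p 8080:8080 \
  -e MCP_TRANSPORT=sse \
  -e VALKEY_ADDR=valkey:6379 \
  ghcr.io/<owner>/<image>:latest
```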
Running with Streamable HTTP
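Assuming the same network and Valkey container as in the SSE sketch above, a Streamable HTTP variant only changes the transport selection (the MCP_TRANSPORT value is an assumption):

```sh
# Hypothetical example: same setup as above, but with the Streamable HTTP transport enabled.
docker run -d --name valkey-mcp-server --network mcp-net \
  -p 8080:8080 \
  -e MCP_TRANSPORT=streamable-http \
  -e VALKEY_ADDR=valkey:6379 \
  ghcr.io/<owner>/<image>:latest
```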
Running with STDIO (For direct process communication)
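With STDIO, the MCP client usually launches the container itself and talks to it over stdin/stdout, so no port is published; a sketch under the same assumptions as above:

```sh
# Hypothetical example: run interactively so the client can drive the server via stdin/stdout.
docker run -i --rm --network mcp-net \
  -e MCP_TRANSPORT=stdio \
  -e VALKEY_ADDR=valkey:6379 \
  ghcr.io/<owner>/<image>:latest
```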
Using the Container Images
The container images are published to GitHub Container Registry and can be pulled using:
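For example (the exact image path under ghcr.io is not given here, so treat it as a placeholder):

```sh
docker pull ghcr.io/<owner>/<image>:latest
```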
MCP API Reference
The MCP server supports two transport protocols: Server-Sent Events (SSE) and Streamable HTTP. Each protocol exposes similar endpoints but with different interaction patterns.
Server-Sent Events (SSE) Endpoints
GET /sse/list_functions: Lists all available functions
POST /sse/invoke/{function_name}: Invokes a function with the given parameters
Streamable HTTP Endpoints
POST /mcp: Handles all MCP requests using JSON format
For function listing:
{"method": "list_functions", "params": {}}
For function invocation:
{"method": "invoke", "params": {"function": "function_name", "params": {...}}}
Transport Selection
The server automatically selects the appropriate transport based on:
URL Path: Connect to the specific endpoint for your preferred transport
Content Type: When connecting to the root path (/), the server redirects based on content type:
application/json → Streamable HTTP
Other content types → SSE
Health Check
GET /health: Returns server health status
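For example, assuming the server is reachable at localhost:8080:

```sh
curl http://localhost:8080/health
```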
Available Functions
Plan Management
create_plan: Create a new plan
get_plan: Get a plan by ID
list_plans: List all plans
list_plans_by_application: List all plans for a specific application
update_plan: Update an existing plan
delete_plan: Delete a plan by ID
update_plan_notes: Update notes for a plan
get_plan_notes: Get notes for a plan
Task Management
create_task: Create a new task in a plan
get_task: Get a task by ID
list_tasks_by_plan: List all tasks in a plan
list_tasks_by_status: List all tasks with a specific status
update_task: Update an existing task
delete_task: Delete a task by ID
reorder_task: Change the order of a task within its plan
update_task_notes: Update notes for a task
get_task_notes: Get notes for a task
MCP Configuration
Local MCP Configuration
To configure an AI agent to use the local MCP server, add the following to your MCP configuration file (the exact file location depends on your AI agent):
Using SSE Transport (Default)
Note: The Docker container should already be running.
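A sketch of such an entry, assuming an mcpServers map with a url field (the exact schema varies by agent, and the server name, host, and port are assumptions):

```json
{
  "mcpServers": {
    "valkey-tasks": {
      "url": "http://localhost:8080/sse"
    }
  }
}
```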
Using Streamable HTTP Transport
Note: The Docker container should already be running.
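A sketch under the same assumptions, pointing the agent at the /mcp endpoint instead:

```json
{
  "mcpServers": {
    "valkey-tasks": {
      "url": "http://localhost:8080/mcp"
    }
  }
}
```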
Using STDIO Transport
STDIO transport allows the MCP server to communicate via standard input/output, which is useful for legacy AI tools that rely on stdin/stdout.
For agentic tools that need to start and manage the MCP server process, use a configuration like this:
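A sketch using the common command/args form, assuming the server is run via Docker as in the deployment examples above (image path and environment variables are assumptions):

```json
{
  "mcpServers": {
    "valkey-tasks": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "--network", "mcp-net",
        "-e", "MCP_TRANSPORT=stdio",
        "-e", "VALKEY_ADDR=valkey:6379",
        "ghcr.io/<owner>/<image>:latest"
      ]
    }
  }
}
```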
Docker MCP Configuration
When running in Docker, use the container name as the hostname:
Using SSE Transport (Default)
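For an agent running in another container on the same Docker network, the URL uses the MCP server's container name rather than localhost (the container name and port are assumptions):

```json
{
  "mcpServers": {
    "valkey-tasks": {
      "url": "http://valkey-mcp-server:8080/sse"
    }
  }
}
```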
Notes Functionality
The system supports rich Markdown-formatted notes for both plans and tasks. This feature is particularly useful for AI agents to maintain context between sessions and document important information.
Notes Features
Full Markdown support including:
Headings, lists, and tables
Code blocks with syntax highlighting
Links and images
Emphasis and formatting
Separate notes for plans and tasks
Dedicated MCP tools for managing notes
Notes are included in all relevant API responses
Best Practices for Notes
Maintain Context: Use notes to document important context that should persist between sessions
Document Decisions: Record key decisions and their rationale
Track Progress: Use notes to track progress and next steps
Organize Information: Use Markdown formatting to structure information clearly
Code Examples: Include code snippets with proper syntax highlighting
Notes Security
Notes content is sanitized to prevent XSS and other security issues while preserving Markdown formatting.
MCP Resources
In addition to MCP tools, the system provides MCP resources that allow AI agents to access structured data directly. These resources provide a complete view of plans and tasks in a single request, which is more efficient than making multiple tool calls.
Available Resources
Plan Resource
The Plan Resource provides a complete view of a plan, including its tasks and notes. It supports the following URI patterns:
Single Plan: ai-tasks://plans/{id}/full - Returns a specific plan with its tasks
All Plans: ai-tasks://plans/full - Returns all plans with their tasks
Application Plans: ai-tasks://applications/{app_id}/plans/full - Returns all plans for a specific application
Each resource returns a JSON object or array with the following structure:
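The authoritative schema is defined by the server; purely as an illustrative sketch built from the fields used elsewhere in this README, a plan resource might look roughly like this:

```json
{
  "id": "plan-123",
  "application_id": "my-app",
  "name": "New Feature Development",
  "description": "Implement new features for the application",
  "notes": "# Project Notes\n...",
  "tasks": [
    {
      "id": "task-456",
      "title": "Task 1",
      "description": "Description for task 1",
      "priority": "high",
      "status": "pending",
      "notes": "# Task Notes\n..."
    }
  ]
}
```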
Using MCP Resources
AI agents can access these resources using the MCP resource API. Here's an example of how to read a resource:
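A sketch of a resources/read request using the generic MCP JSON-RPC envelope (not verified against this server; the plan ID is a placeholder):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "resources/read",
  "params": {
    "uri": "ai-tasks://plans/plan-123/full"
  }
}
```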
This will return the complete plan resource including all tasks, which is more efficient than making separate calls to get the plan and then its tasks.
Using with AI Agents
AI agents can interact with this task management system through the MCP API using either SSE or Streamable HTTP transport. Here are examples for both transport protocols:
Using SSE Transport
The agent calls /sse/list_functions to discover available functions
The agent calls /sse/invoke/create_plan with parameters:
{ "application_id": "my-app", "name": "New Feature Development", "description": "Implement new features for the application", "notes": "# Project Notes\n\nThis project aims to implement the following features:\n\n- Feature A\n- Feature B\n- Feature C" }
The agent can add tasks to the plan using either:
Individual task creation with /sse/invoke/create_task
Bulk task creation with /sse/invoke/bulk_create_tasks for multiple tasks at once:
{ "plan_id": "plan-123", "tasks_json": "[ { \"title\": \"Task 1\", \"description\": \"Description for task 1\", \"priority\": \"high\", \"status\": \"pending\", \"notes\": \"# Task Notes\\n\\nThis task requires the following steps:\\n\\n1. Step one\\n2. Step two\\n3. Step three\" }, { \"title\": \"Task 2\", \"description\": \"Description for task 2\", \"priority\": \"medium\", \"status\": \"pending\" } ]" }
The agent calls /sse/invoke/update_task to update task status as work progresses
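Using Streamable HTTP Transport
The same flow over Streamable HTTP sends each request to the /mcp endpoint using the invoke envelope shown earlier; as a sketch, creating the plan might look like this (field values reuse the SSE example above):

```json
{
  "method": "invoke",
  "params": {
    "function": "create_plan",
    "params": {
      "application_id": "my-app",
      "name": "New Feature Development",
      "description": "Implement new features for the application"
    }
  }
}
```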
Sample Agent Prompt
Here's a sample prompt that would trigger an AI agent to use the MCP task management system:
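For illustration, a prompt along these lines would exercise the workflow described below (the wording and the specific task list are examples, not a canonical prompt):

```text
I'm building an inventory management application (application_id: "inventory-manager").
Create a plan for it with Markdown-formatted notes describing the goals, then add five
tasks (for example: design the database schema, build the CRUD API, implement the
inventory UI, add low-stock alerts, and write tests). Add detailed Markdown notes to
the database schema task, set a sensible priority for each task, mark the first two
tasks as "in_progress", and finish with a summary of the plan and its tasks.
```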
With this prompt, an AI agent with access to the Valkey MCP Task Management Server would:
Create a new plan with application_id "inventory-manager" and the specified Markdown-formatted notes
Add the five specified tasks to the plan
Add detailed Markdown-formatted notes to the database schema task
Set appropriate priorities for each task
Update the status of the first two tasks to "in_progress"
Return a summary of the created plan and tasks
Developer Documentation
For information on how to set up a development environment, contribute to the project, and understand the codebase structure, please refer to the Developer Guide.
For contribution guidelines, including commit message format and pull request process, see Contributing Guidelines.
License
This project is licensed under the BSD-3-Clause License.