Provides containerization support with volume mounting for database persistence, allowing portable deployment of the flash card management system
Uses OpenAI embeddings for semantic search across flash card projects and cards, enabling similarity-based retrieval of content
Leverages SQLite for persistent storage of flash card projects and cards, including their associated metadata and embedding information
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@FlashCardsMCP search for cards about machine learning algorithms"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
FlashCardsMCP
This is a dockerized Python Model Context Protocol (MCP) server for managing flash card projects. It uses OpenAI embeddings and SQLite for semantic search and storage.
Features
List all project names and ids
Semantic search for project by name (using OpenAI embeddings)
Get random flash card by project id
Add flash card to project (with question, answer, optional hint, optional description)
List all flash cards by project
Semantic search for flash cards by query (using OpenAI embeddings)
Global semantic search for cards across all projects
Retrieve a card by its id
All API/tool responses include a type field: project or card
No binary embedding data is ever returned in API responses
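The response rules above can be sketched in plain Python. This is illustrative only — the helper name `_serialize_card` and the record layout are assumptions, not the server's actual code:

```python
def _serialize_card(row: dict) -> dict:
    """Illustrative helper: copy a card record, drop the raw
    embedding bytes, and tag the result with a type field."""
    public = {k: v for k, v in row.items() if k != "embedding"}
    public["type"] = "card"
    return public

# A card as it might look in storage, embedding included:
card_row = {
    "id": 1,
    "question": "What is an embedding?",
    "answer": "A vector representation of text.",
    "embedding": b"\x00\x01",  # raw bytes, never exposed to clients
}
response = _serialize_card(card_row)
```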
Related MCP server: MCP-AnkiConnect
API/Tool Design
All tools raise ValueError for not found or empty results
Project and card creation tools return the full object, not just the id
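The two design rules above can be illustrated with a minimal sketch. The in-memory dict stands in for the real SQLite store, and the function names mirror the documented tools but are simplified assumptions:

```python
_projects: dict[int, dict] = {}  # illustrative in-memory store standing in for SQLite

def add_project(name: str) -> dict:
    """Create a project and return the full object, not just the id."""
    project = {"id": len(_projects) + 1, "name": name, "type": "project"}
    _projects[project["id"]] = project
    return project

def get_project(project_id: int) -> dict:
    """Per the design rule above, raise ValueError when nothing is found."""
    if project_id not in _projects:
        raise ValueError(f"Project {project_id} not found")
    return _projects[project_id]
```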
See .github/copilot-instructions.md for code generation rules
Getting Started
Install dependencies:
pip install -r requirements.txt
Run the server:
python main.py
Run with Docker:
docker build -t flash-card-mcp .
# Run with database persistence (recommended):
docker run -e OPENAI_API_KEY=$OPENAI_API_KEY -v $(pwd)/storage:/app/storage flash-card-mcp
Environment Variables
OPENAI_API_KEY: Required. Set this environment variable to your OpenAI API key to enable embedding generation. Example:
export OPENAI_API_KEY=sk-...your-key...
You must set this variable before running the server or the Docker container.
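A fail-fast check for the variable can look like this. The function name is illustrative; the server's actual startup code may handle this differently:

```python
import os

def require_openai_key() -> str:
    """Return the OpenAI API key, failing fast with a clear
    message if OPENAI_API_KEY is not set."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; embeddings cannot be generated."
        )
    return key
```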
Usage
This server exposes its API via the Model Context Protocol (MCP) using FastMCP. You can call the following tools:
get_all_projects() → List all projects
add_project(name) → Create a new project (returns full project dict)
search_project_by_name(name) → Semantic search for a project (returns full project dict)
get_random_card_by_project(project_id) → Get a random card from a project
add_card(project_id, question, answer, hint=None, description=None) → Add a card (returns full card dict)
get_all_cards_by_project(project_id) → List all cards in a project
search_cards_by_embedding(project_id, query) → Semantic search for cards in a project
global_search_cards_by_embedding(query) → Semantic search for cards across all projects
get_card_by_id(card_id) → Retrieve a card by its id
All returned objects include a type field and never include binary embedding data.
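Semantic search of this kind typically ranks cards by cosine similarity between the query embedding and each stored card embedding. The sketch below uses tiny hand-made 3-dimensional vectors purely for illustration — the real server gets its vectors from OpenAI's embedding API, and the `cosine` helper is an assumption, not the server's code:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings"; real ones come from the OpenAI embeddings API.
cards = [
    {"id": 1, "question": "What is gradient descent?", "embedding": [0.9, 0.1, 0.0]},
    {"id": 2, "question": "Who wrote Hamlet?", "embedding": [0.0, 0.2, 0.9]},
]
query_embedding = [0.8, 0.2, 0.1]  # pretend embedding of an ML-related query

# The card whose embedding points in the most similar direction wins.
best = max(cards, key=lambda c: cosine(query_embedding, c["embedding"]))
```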
Development
All project and card data is stored in SQLite (database.db)
Embeddings are generated using OpenAI's text-embedding-ada-002 model
The server is implemented in main.py and db.py
See .github/copilot-instructions.md for code and API rules
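The storage layer can be pictured roughly as below. The schema is an assumption for illustration — the actual tables in db.py may differ — and an in-memory database stands in for database.db:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # db.py persists to database.db on disk
conn.executescript("""
CREATE TABLE projects (
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    embedding BLOB          -- name embedding; never returned by the API
);
CREATE TABLE cards (
    id          INTEGER PRIMARY KEY,
    project_id  INTEGER NOT NULL REFERENCES projects(id),
    question    TEXT NOT NULL,
    answer      TEXT NOT NULL,
    hint        TEXT,
    description TEXT,
    embedding   BLOB        -- card embedding; never returned by the API
);
""")
conn.execute("INSERT INTO projects (name) VALUES (?)", ("demo",))
conn.execute(
    "INSERT INTO cards (project_id, question, answer) VALUES (?, ?, ?)",
    (1, "Q?", "A."),
)
row = conn.execute(
    "SELECT question, answer FROM cards WHERE project_id = 1"
).fetchone()
```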
Inspector
For more details, see the code and docstrings in main.py and db.py.