TiddlyWiki MCP Server
A Model Context Protocol (MCP) server that provides AI assistants with access to TiddlyWiki wikis via the HTTP API. Supports semantic search using Ollama embeddings.
Features
MCP Tools
search_tiddlers - Search tiddlers using TiddlyWiki filter syntax, semantic similarity, or hybrid (both combined)
create_tiddler - Create new tiddlers with custom fields
update_tiddler - Update existing tiddlers with diff preview
delete_tiddler - Delete tiddlers with content preview
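As an illustration of the tool call shape, a create_tiddler invocation might look like the following. This is a hedged sketch: the argument names beyond the tool name (title, text, fields) are assumptions, not the server's confirmed schema.

```typescript
// Hypothetical create_tiddler call payload; argument names are
// assumptions and may differ from the server's actual Zod schema.
const call = {
  name: "create_tiddler",
  arguments: {
    title: "2025-01-15 Journal",
    text: "Started experimenting with semantic search.",
    // Custom fields, per the tool description above
    fields: { tags: "Journal" },
  },
};
```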
MCP Resources
filter-reference://syntax - Complete TiddlyWiki filter syntax reference
Semantic Search
When Ollama is available, the server provides semantic search capabilities:
Natural language queries find conceptually related tiddlers
Uses the nomic-embed-text embedding model with SQLite-vec for efficient vector similarity search
Background sync keeps embeddings up to date
Hybrid mode combines filter results with semantic reranking
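The hybrid flow can be sketched in TypeScript. This is an illustrative sketch, not the server's actual code: the function names, the TiddlerHit shape, and the assumption that filter hits already carry embeddings are all hypothetical.

```typescript
// Cosine similarity between two embedding vectors
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0,
    na = 0,
    nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

interface TiddlerHit {
  title: string;
  embedding: number[]; // assumed to be precomputed by the sync worker
}

// Hybrid mode: take the tiddlers matched by the filter, then reorder
// them by semantic similarity to the query embedding.
function rerank(queryEmbedding: number[], filterHits: TiddlerHit[]): string[] {
  return filterHits
    .map((t) => ({ title: t.title, score: cosineSimilarity(queryEmbedding, t.embedding) }))
    .sort((x, y) => y.score - x.score)
    .map((x) => x.title);
}
```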
Requirements
Node.js 22+
TiddlyWiki with HTTP API enabled (e.g., TiddlyWiki on Node.js with the listen command)
Ollama (optional, for semantic search)
Build Prerequisites
This project uses native SQLite modules that require compilation. You'll need:
Linux: build-essential, Python 3
macOS: Xcode Command Line Tools (xcode-select --install)
Windows: Visual Studio Build Tools, Python 3
Installation
From npm (recommended)
TIDDLYWIKI_URL=http://localhost:8080 npx tiddlywiki-mcp-server

Or install globally:
npm install -g tiddlywiki-mcp-server
TIDDLYWIKI_URL=http://localhost:8080 tiddlywiki-mcp-server

From source
git clone https://github.com/ppetru/tiddlywiki-mcp.git
cd tiddlywiki-mcp
npm install
npm run build

Quick Start
1. Start TiddlyWiki with HTTP API
# Install TiddlyWiki if you haven't already
npm install -g tiddlywiki
# Create a new wiki and start it with HTTP API
tiddlywiki mywiki --init server
tiddlywiki mywiki --listen port=8080

2. (Optional) Set up Ollama for Semantic Search
# Install Ollama from https://ollama.ai
# Then pull the embedding model:
ollama pull nomic-embed-text

3. Start the MCP Server
TIDDLYWIKI_URL=http://localhost:8080 npx tiddlywiki-mcp-server

Configuration
All configuration is via environment variables. See .env.example for a complete reference.
Required
| Variable | Description |
|---|---|
| TIDDLYWIKI_URL | URL of your TiddlyWiki server (e.g., http://localhost:8080) |
Optional
| Variable | Default | Description |
|---|---|---|
| MCP_TRANSPORT | stdio | Transport mode: stdio or http |
| MCP_PORT | | HTTP server port (when using http transport) |
| | | Ollama API URL |
| | nomic-embed-text | Embedding model name |
| | | Enable/disable semantic search |
| | | SQLite database path for embeddings |
| | | HTTP header for authentication (can be any header your TiddlyWiki expects) |
| | | Username for TiddlyWiki API requests |
Usage
stdio Mode (Claude Desktop)
Add to your Claude Desktop configuration (claude_desktop_config.json):
{
"mcpServers": {
"tiddlywiki": {
"command": "npx",
"args": ["tiddlywiki-mcp-server"],
"env": {
"TIDDLYWIKI_URL": "http://localhost:8080"
}
}
}
}

HTTP Mode
Start the server:
TIDDLYWIKI_URL=http://localhost:8080 MCP_TRANSPORT=http MCP_PORT=3000 npx tiddlywiki-mcp-server

The server exposes:

GET /health - Health check endpoint
POST /mcp - MCP JSON-RPC endpoint (stateless mode)
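As a sketch of what a client sends to the MCP endpoint, here is the shape of a JSON-RPC 2.0 tools/call request. The port and filter value are illustrative; the exact headers and session handshake depend on your MCP client.

```typescript
// Illustrative JSON-RPC 2.0 envelope for a search_tiddlers call
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "search_tiddlers",
    arguments: { filter: "[tag[Journal]]", limit: 5 },
  },
};

// A client would POST this as JSON, e.g.:
// await fetch("http://localhost:3000/mcp", {
//   method: "POST",
//   headers: {
//     "Content-Type": "application/json",
//     Accept: "application/json, text/event-stream",
//   },
//   body: JSON.stringify(request),
// });
```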
Example Tool Usage
Filter search (TiddlyWiki filter syntax):
{
"name": "search_tiddlers",
"arguments": {
"filter": "[tag[Journal]prefix[2025-01]]",
"includeText": true
}
}

Semantic search (natural language):
{
"name": "search_tiddlers",
"arguments": {
"semantic": "times I felt anxious about work",
"limit": 10
}
}

Hybrid search (filter + semantic reranking):
{
"name": "search_tiddlers",
"arguments": {
"filter": "[tag[Journal]]",
"semantic": "productivity tips",
"limit": 20
}
}

Development
Setup
npm install

Running Tests

npm test

Tests run quickly (~1s) and include unit tests for all tool handlers.
Linting
npm run lint # Check for issues
npm run format # Fix formatting
npm run format:check # Check formatting only

Type Checking

npm run typecheck

Pre-commit Hooks
Pre-commit hooks are configured with lefthook and run automatically:
Format check (Prettier)
Lint (ESLint)
Tests (Vitest)
Type check (TypeScript)
Building
npm run build

Architecture
src/
├── index.ts # Entry point, transport setup, server lifecycle
├── tiddlywiki-http.ts # TiddlyWiki HTTP API client
├── service-discovery.ts # URL resolution (direct URLs, Consul SRV, hostname:port)
├── filter-reference.ts # Filter syntax documentation
├── logger.ts # Structured logging
├── tools/ # MCP tool handlers
│ ├── types.ts # Shared types and Zod schemas
│ ├── search-tiddlers.ts
│ ├── create-tiddler.ts
│ ├── update-tiddler.ts
│ └── delete-tiddler.ts
└── embeddings/ # Semantic search infrastructure
├── database.ts # SQLite-vec database
├── ollama-client.ts # Ollama API client
    └── sync-worker.ts # Background embedding sync

Key Design Decisions
Stateless HTTP mode: Each request gets its own Server/Transport instance to prevent request ID collisions with concurrent clients
Graceful degradation: Semantic search is optional; the server works without Ollama
Token-aware responses: Search results are validated against token limits with pagination suggestions
Background sync: Embeddings are updated periodically without blocking requests
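The token-aware behavior can be illustrated with a sketch. The real server's token estimation and limits may differ; the chars/4 heuristic, the limit constant, and the function names here are all assumptions.

```typescript
// Illustrative token budget for a single tool response
const TOKEN_LIMIT = 2000;

// Rough token estimate: ~4 characters per token (a common heuristic)
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

interface SearchResult {
  title: string;
  text: string;
}

// Keep results while they fit the budget; when results are cut off,
// return a pagination suggestion alongside the truncated list.
function fitToTokenBudget(results: SearchResult[], limit = TOKEN_LIMIT) {
  const kept: SearchResult[] = [];
  let used = 0;
  for (const r of results) {
    const cost = estimateTokens(r.title + r.text);
    if (used + cost > limit) break;
    kept.push(r);
    used += cost;
  }
  const truncated = kept.length < results.length;
  return {
    results: kept,
    truncated,
    suggestion: truncated
      ? `Showing ${kept.length} of ${results.length} results; narrow the filter or use a smaller limit.`
      : undefined,
  };
}
```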
License
MIT