Poetry MCP Server

A Model Context Protocol (MCP) server for managing poetry catalogs, nexuses, and submissions.

Status: Core functionality complete - Catalog management, enrichment tools, and LLM-powered analysis operational

Overview

Poetry MCP is a specialized MCP server that treats poems as artifacts (not knowledge graph nodes), providing:

  • State-based catalog tracking (fledgeling → completed)

  • Thematic connections via "nexuses" (themes, motifs, forms)

  • Quality scoring on multiple dimensions

  • Submission tracking to literary venues

  • Influence lineage tracking

Architecture: There is no database. BASE files define view definitions (queries); the actual data lives in markdown frontmatter. On startup, the MCP server scans poem files and loads their frontmatter into Pydantic models in memory.
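For illustration, here is a minimal sketch of that startup scan: read the YAML frontmatter from each poem file and validate it into a Pydantic model. The class and field names below are placeholders, not the project's actual models (see models/poem.py and docs/FRONTMATTER_SCHEMA.md for the real definitions).

# Hypothetical sketch of the startup scan described above; not the project's parser
from pathlib import Path

import yaml
from pydantic import BaseModel

class PoemStub(BaseModel):
    # Stand-in for models/poem.py; real field names live in FRONTMATTER_SCHEMA.md
    title: str
    state: str = "fledgeling"
    tags: list[str] = []

def scan_catalog(catalog_dir: Path) -> list[PoemStub]:
    poems = []
    for path in catalog_dir.rglob("*.md"):
        text = path.read_text(encoding="utf-8")
        if text.startswith("---"):
            # YAML frontmatter sits between the first two '---' delimiters
            _, frontmatter, _ = text.split("---", 2)
            poems.append(PoemStub(**yaml.safe_load(frontmatter)))
    return poems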

Three Types of Metadata

Poetry MCP uses three complementary ways to evaluate poems:

| Type | What It Measures | Example |
| --- | --- | --- |
| Nexus (binary) | What does this poem contain? | Contains water imagery (yes/no) |
| Quality (scalar) | What does this poem achieve? | Scores 8/10 on "Surprise" |
| Influence (lineage) | Where does this poem come from? | Descended from William Bronk |

8 Universal Quality Dimensions: Detail, Life, Music, Mystery, Sufficient Thought, Surprise, Syntax, Unity. The MCP server provides grade_poem_quality() which returns poem content and quality rubrics for agent-based scoring (0-10 scale with reasoning).
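To make the distinction concrete, here is a purely illustrative data sketch of how the three metadata types differ in shape; this is not the project's actual frontmatter schema.

# Illustrative only - not the actual schema
poem_metadata = {
    "nexuses": ["Water-Liquid Imagery", "Free Verse"],    # binary: present or absent
    "quality": {"Surprise": 8, "Music": 9, "Detail": 7},  # scalar: 0-10 per dimension
    "influences": ["William Bronk"],                       # lineage: where the poem comes from
}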

Architectural Philosophy

The poetry workflow requires catalog-based tracking (poems as artifacts with states and metadata) rather than a knowledge graph system (atomic ideas with semantic links).

Why poems aren't notes:

  • Move through production states (fledgeling → completed)

  • Connect to thematic/formal nexuses (not logical relationships)

  • Get scored on quality dimensions (scalar ratings)

  • Have submission histories (transactions with venues)

  • Descend from influences (lineage, not logic)

Vault Directory Structure

The Poetry vault organizes poems and metadata across specialized directories:

/Poetry/
├── catalog/              # State-based poem organization (381 poems)
│   ├── catalog.base      # View definition for all poems
│   ├── Completed/        # 49 poems
│   ├── Fledgelings/      # 172 poems
│   ├── Needs Research/   # 10 poems
│   ├── Risks/            # 22 poems
│   └── Still Cooking/    # 65 poems
├── nexus/                # Thematic/formal connection points
│   ├── nexus.base        # Registry of available nexuses
│   ├── themes/           # 17 thematic connections
│   ├── forms/            # 4 structural patterns
│   └── motifs/           # 4 compositional patterns
├── Qualities/            # 8 universal quality dimensions
│   └── qualities.base    # Quality definitions and rubrics
├── influences/           # Writer/movement/aesthetic lineage
│   └── influences.base
├── techniques/           # Generative methods and processes
│   └── techniques.base
├── venues/               # Publication venue metadata (22 venues)
│   ├── venues.base       # Venue registry (payment, response time, aesthetic)
│   └── [venue files]     # Individual venue profiles
├── Submissions/          # Historical submission records
│   ├── Submissions.base  # Submission tracking
│   └── [submission files] # Date_PoemTitle_VenueName.md
├── analysis/             # Research documents and comparisons
└── craft-notes/          # Personal aphorisms and principles

Personal Directories: Users may create additional directories for personal workflow (e.g., journal/, scripts/, transitional poem collections). These are not indexed by the MCP server.

Nexus Taxonomy

Nexuses represent binary connections - a poem either contains a nexus or doesn't. The taxonomy has three categories:

  • Forms (4): Structural patterns defining how a poem is arranged (American Sentence, Free Verse, Prose Poem, Catalog Poem)

  • Themes (17): Subject matter and imagery systems (Water-Liquid Imagery, Body-Mouth, etc.)

  • Motifs (4): Cross-nexus compositional patterns requiring multiple themes (American Grotesque, Failed Transcendence, etc.)

Note: The specific nexuses evolve over time as new patterns emerge in the poetry practice. See the nexus/ directory for current instances.

Architecture: Agent-Based Analysis

This MCP server follows the data provider pattern:

Server Responsibilities:

  • Catalog management (scan, index, search)

  • Data access (poems, nexuses, quality rubrics)

  • Data modification (update tags, move files)

Agent (Claude) Responsibilities:

  • Poetry analysis (theme detection)

  • Quality assessment (grading dimensions)

  • Batch processing (multiple poem analysis)

Why This Pattern?

  • ✅ No API keys needed in server

  • ✅ Server stays lightweight and data-focused

  • ✅ Agent uses natural language understanding

  • ✅ Transparent analysis (you see the reasoning)

  • ✅ Flexible - agent can adjust analysis approach

Workflow:

  1. Tool call → Server returns poem + analysis context

  2. Agent analyzes data using natural language reasoning

  3. Agent provides structured results (themes/scores/confidence)

  4. User applies results with data modification tools

Requirements

  • Python 3.10 or higher

  • FastMCP 0.2.0+

  • Pydantic 2.0+

Note: No API keys needed! The MCP server provides data, your MCP client (Claude Desktop) performs analysis.

Development Setup

Installation

# Clone repository
git clone <repository-url>
cd poetry-mcp

# Install with dev dependencies
pip install -e ".[dev]"

Running Tests

# Run all tests
pytest tests/

# Run with coverage
pytest tests/ --cov=poetry_mcp --cov-report=html

# Run specific test file
pytest tests/test_models.py -v

Code Quality

# Format code
black src/ tests/

# Lint code
ruff check src/ tests/

# Type checking
mypy src/

Project Structure

src/poetry_mcp/
├── __init__.py                # Package metadata
├── server.py                  # FastMCP server entry point and tool registration
├── config.py                  # Configuration management
├── errors.py                  # Custom exceptions
├── models/                    # Pydantic data models
│   ├── poem.py                # Poem model with frontmatter
│   ├── nexus.py               # Nexus and registry models
│   ├── results.py             # Search and sync results
│   └── enrichment.py          # LLM response models
├── parsers/                   # BASE file and frontmatter parsers
│   ├── base_parser.py         # Generic BASE file parser
│   ├── nexus_parser.py        # Nexus registry loader
│   └── frontmatter.py         # YAML frontmatter extraction
├── writers/                   # Frontmatter modification tools
│   └── frontmatter_writer.py  # Atomic frontmatter updates
├── catalog/                   # Catalog management and indexing
│   ├── catalog.py             # Main catalog class
│   └── index.py               # In-memory search indices
└── tools/                     # MCP tool implementations
    └── enrichment_tools.py    # All enrichment operations

tests/
├── conftest.py                # Pytest fixtures
└── fixtures/                  # Test data
    ├── base_files/            # Sample .base files
    └── markdown/              # Sample poem files

docs/
├── CANONICAL_TAGS.md          # Quick reference for all canonical tags (forms, themes, motifs)
└── FRONTMATTER_SCHEMA.md      # Poem frontmatter property definitions

Configuration

Configuration is loaded from ~/.config/poetry-mcp/config.yaml:

vault:
  path: /path/to/Poetry
  catalog_dir: catalog
  nexus_dir: nexus

search:
  default_limit: 20
  case_sensitive: false

logging:
  level: INFO
  file: ~/.config/poetry-mcp/poetry-mcp.log
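As a rough sketch of how such a file might be loaded and validated (the actual class names in config.py may differ):

# Hypothetical config loader; mirrors the YAML layout above
from pathlib import Path

import yaml
from pydantic import BaseModel

class VaultConfig(BaseModel):
    path: Path
    catalog_dir: str = "catalog"
    nexus_dir: str = "nexus"

class SearchConfig(BaseModel):
    default_limit: int = 20
    case_sensitive: bool = False

class Config(BaseModel):
    vault: VaultConfig
    search: SearchConfig = SearchConfig()

def load_config() -> Config:
    config_path = Path.home() / ".config" / "poetry-mcp" / "config.yaml"
    return Config(**yaml.safe_load(config_path.read_text()))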

MCP Client Setup

Poetry MCP implements the Model Context Protocol (MCP) standard and can be used with any MCP-compatible client.

Configuration Format

MCP clients typically use JSON configuration to connect to servers. Add this to your MCP client's config:

{ "mcpServers": { "poetry-mcp": { "command": "uv", "args": [ "--directory", "/path/to/poetry-mcp", "run", "poetry-mcp" ], "env": { "POETRY_VAULT_PATH": "/path/to/your/Poetry/vault" } } } }

Alternative: Using python directly

If you have the package installed globally:

{ "mcpServers": { "poetry-mcp": { "command": "python", "args": ["-m", "poetry_mcp.server"], "env": { "POETRY_VAULT_PATH": "/path/to/your/Poetry/vault" } } } }

Client-Specific Setup

Claude Desktop:

  • Config location (macOS): ~/Library/Application Support/Claude/claude_desktop_config.json

  • Config location (Windows): %APPDATA%\Claude\claude_desktop_config.json

  • After updating config, restart Claude Desktop completely

Other MCP Clients:

  • Consult your client's documentation for config file location

  • Use the JSON format above with your specific vault path

Verification

After configuring your MCP client:

  1. Restart the client application

  2. Start a new conversation/session

  3. Check that poetry-mcp tools are available

  4. Try: "What poetry tools are available?"

  5. Try: "Get catalog stats" - should show your poem count

Troubleshooting

Server won't start:

  • Verify POETRY_VAULT_PATH points to correct directory

  • Check the vault has a catalog/ subdirectory

  • Review client logs for error messages

No poems found:

  • Run the sync_catalog tool first to index poems

  • Verify vault path is correct

  • Check markdown files have proper frontmatter (see FRONTMATTER_SCHEMA.md)

Tools not appearing:

  • Completely restart your MCP client

  • Validate JSON config syntax

  • Verify uv or python is in system PATH

Quick Start

Basic Usage

# Start the server (auto-syncs catalog on startup)
poetry-mcp start

# Or run directly with Python
python -m poetry_mcp.server

Example Workflows

Agent-Based Theme Analysis:

# 1. Server provides poem and theme data
data = await find_nexuses_for_poem("my-poem-id", max_suggestions=3)

# 2. Agent (Claude) analyzes the poem against available themes
# Agent sees:
#   - data['poem']: {id, title, content, current_tags}
#   - data['available_themes']: [{name, canonical_tag, description}, ...]
#   - data['instructions']: Analysis guidance

# 3. Agent identifies matching themes with confidence.
# Example agent response:
#   "This poem strongly engages with:
#    - Water-Liquid (0.85): 'river flows through ancient stones'
#    - Body-Bones (0.67): skeletal imagery in stanza 2"

# 4. User applies suggested tags
await link_poem_to_nexus("my-poem-id", "Water-Liquid", "theme")

Batch Theme Discovery:

# 1. Get poems needing enrichment
data = await get_poems_for_enrichment(max_poems=10)

# 2. Agent analyzes data['poems'] against data['available_themes']
#    and suggests themes for each poem

# 3. User applies high-confidence tags
for poem in analyzed_poems:
    await link_poem_to_nexus(poem['id'], suggested_theme, "theme")

Agent-Based Quality Grading:

# 1. Server provides poem and quality rubric
data = await grade_poem_quality("my-poem-id")

# 2. Agent grades data['poem'] on data['dimensions']
#    Agent sees 8 quality dimensions with descriptions
#    Agent provides scores 0-10 with evidence

# Example agent response:
#   "Quality Assessment:
#    - Detail: 8/10 - Strong sensory imagery ('ancient stones worn smooth')
#    - Life: 6/10 - Adequate vitality but some static passages
#    - Music: 9/10 - Excellent rhythm and sonic patterns"

Maintenance:

# Sync wikilinks with tags
result = await sync_nexus_tags("my-poem-id", direction="both")
print(f"Tags added: {result['tags_added']}")
print(f"Links added: {result['links_added']}")

# Move poem to completed state
result = await move_poem_to_state("my-poem-id", "completed")
print(f"Moved to: {result['new_path']}")

Available Tools

Catalog Management

  • sync_catalog - Scan vault and build in-memory catalog index

  • get_poem - Retrieve poem by ID or title

  • search_poems - Search with filters (query, states, forms, tags)

  • find_poems_by_tag - Find poems by tag combinations

  • list_poems_by_state - List poems in specific states

  • get_catalog_stats - Get catalog statistics and health metrics

  • get_server_info - Server status and configuration
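As a quick illustration of the catalog tools, in the same style as the Quick Start examples below (tool calls shown as awaits), here is a hedged sketch; the exact result fields are assumptions, not documented return shapes.

# Illustrative catalog queries; result field names are assumptions
stats = await get_catalog_stats()
print(stats)  # e.g. poem counts per state, health metrics

results = await search_poems(query="river", states=["fledgeling"])
for poem in results['poems']:  # assumed result shape
    print(poem['title'])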

Enrichment Tools

  • get_all_nexuses - Browse available themes, motifs, and forms

  • link_poem_to_nexus - Add nexus tags to poem frontmatter

  • sync_nexus_tags - Sync [[Nexus]] wikilinks with frontmatter tags

  • move_poem_to_state - Move poems between state directories

Agent Analysis Tools

These tools return data for your (the agent's) own analysis:

  • find_nexuses_for_poem - Get poem + themes for agent to analyze and suggest matches

  • get_poems_for_enrichment - Get batch of poems for agent to analyze and suggest themes

  • grade_poem_quality - Get poem + quality rubric for agent to grade

Development Roadmap

  • Phase 0: Project Setup - Dependencies, structure, tooling

  • Phase 1: Core Data Models - Pydantic models for Poem, Nexus, Quality, etc.

  • Phase 2: Configuration System - YAML config loading and validation

  • Phase 3: BASE File Parser - Parse Obsidian YAML files

  • Phase 4: Catalog Management - Scan filesystem, index poems

  • Phase 5: MCP Tools Phase 1 - Core catalog/search tools

  • Phase 6: MCP Server Setup - FastMCP initialization and tool registration

  • Phase 7 (Sprint 1): Enrichment Foundation - Frontmatter writer, nexus registry

  • Phase 8 (Sprint 2): LLM Integration - Theme detection, batch enrichment

  • Phase 9 (Sprint 4): Maintenance Tools - Tag sync, state moves, quality grading

  • Phase 10 (Sprint 3): Advanced Discovery - Similarity search, cluster analysis

See IMPLEMENTATION_CHECKLIST.md for detailed progress tracking.

Data Synchronization

How BASE File Changes Work

Poetry MCP loads BASE files into memory as Pydantic models on startup. Understanding the sync behavior:

v1 (Current - Phases 0-6):

  1. Server starts → Parse BASE files → Create Pydantic models in RAM

  2. Models stay in memory during server lifetime

  3. Edit BASE file in Obsidian → Models remain unchanged

  4. Restart server → Re-parse BASE files → Fresh models loaded

To see your changes: Simply restart the MCP server (< 3 seconds). Claude Desktop will reconnect automatically.

Future Convenience Features (v2+)

Manual Reload Tool

Call from Claude when you've made BASE file changes:

# No server restart needed
reload_catalog()

Benefits:

  • Instant refresh without disconnecting Claude

  • Selective reloading (only changed files)

  • Maintains conversation context

Automatic File Watching

Real-time synchronization using the watchdog library:

# config.yaml
performance:
  watch_files: true
  watch_debounce_seconds: 2.0

Features:

  • Detects BASE file changes automatically

  • Debouncing (waits for all saves to complete)

  • Smart reload (only changed files)

  • Handles concurrent modifications safely

When you edit in Obsidian:

  1. Save changes → File watcher detects change

  2. Waits 2 seconds (Obsidian may save multiple files)

  3. Reloads changed BASE files

  4. Updates Pydantic models in memory

  5. Changes visible in next Claude query
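A minimal, hypothetical sketch of how that debounced behavior could be implemented with watchdog (v2 is not implemented; reload_callback stands in for whatever reload hook the server would expose):

# Hypothetical v2 sketch - debounced BASE-file watching with the watchdog library
import threading
from pathlib import Path

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

DEBOUNCE_SECONDS = 2.0  # mirrors watch_debounce_seconds in config.yaml

class BaseFileHandler(FileSystemEventHandler):
    def __init__(self, reload_callback):
        self._reload = reload_callback   # called with the set of changed files
        self._pending: set[Path] = set()
        self._timer = None
        self._lock = threading.Lock()

    def on_modified(self, event):
        if event.is_directory or not event.src_path.endswith(".base"):
            return
        with self._lock:
            self._pending.add(Path(event.src_path))
            if self._timer:
                self._timer.cancel()
            # Wait for Obsidian to finish saving before reloading
            self._timer = threading.Timer(DEBOUNCE_SECONDS, self._flush)
            self._timer.start()

    def _flush(self):
        with self._lock:
            changed, self._pending = self._pending, set()
        self._reload(changed)

def watch_vault(vault_path: str, reload_callback) -> Observer:
    observer = Observer()
    observer.schedule(BaseFileHandler(reload_callback), vault_path, recursive=True)
    observer.start()
    return observer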

Why Not in v1?

Complexity trade-offs:

  • File watching adds dependencies (watchdog library)

  • Requires debouncing logic (multiple rapid saves)

  • Needs concurrent modification handling

  • Adds error recovery complexity

Current approach prioritizes:

  • ✅ Simple implementation for Phases 0-6

  • ✅ Fast manual restart (2-3 seconds total)

  • ✅ Reliable data consistency

  • ✅ Easier debugging during development

v2 can add these features based on user feedback.

License

MIT
