export_to_markdown

Export knowledge base content to organized markdown files for documentation, sharing, or backup purposes.

Instructions

Export knowledge base to markdown files

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| export_path | No | Path to export to (optional) | ~/.mcp-standards/exports |

Implementation Reference

  • Main handler function for the 'export_to_markdown' tool. Initializes default paths if not provided, creates MarkdownExporter instance, calls export_all(), and returns the export path.
    from pathlib import Path
    from typing import Optional

    def export_to_markdown(db_path: Optional[Path] = None, export_path: Optional[Path] = None):
        """Export knowledge base to markdown files"""
        if db_path is None:
            db_path = Path.home() / ".mcp-standards" / "knowledge.db"
        if export_path is None:
            export_path = Path.home() / ".mcp-standards" / "exports"
            
        exporter = MarkdownExporter(db_path, export_path)
        exporter.export_all()
        
        return export_path
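The default-path resolution above can be exercised on its own. This sketch reproduces just that logic, runnable without the exporter; the `~/.mcp-standards` locations come from the handler code itself.

```python
from pathlib import Path

# Reproduction of the handler's default-path fallback; an explicit
# export_path overrides the default, while db_path still falls back.
def resolve_defaults(db_path=None, export_path=None):
    if db_path is None:
        db_path = Path.home() / ".mcp-standards" / "knowledge.db"
    if export_path is None:
        export_path = Path.home() / ".mcp-standards" / "exports"
    return db_path, export_path

db, out = resolve_defaults(export_path=Path("/tmp/kb-export"))
print(out)  # → /tmp/kb-export
```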
  • MarkdownExporter class implementing the core export logic: creates folder structure, exports daily logs, by source, decisions, and generates index file with statistics.
    import json
    import sqlite3
    from datetime import datetime
    from pathlib import Path

    class MarkdownExporter:
        """Export knowledge episodes to organized markdown files"""
        
        def __init__(self, db_path: Path, export_path: Path):
            self.db_path = db_path
            self.export_path = export_path
            self.export_path.mkdir(parents=True, exist_ok=True)
            
        def export_all(self):
            """Export all knowledge to markdown files"""
            # Create folder structure
            folders = {
                "daily": self.export_path / "daily",
                "projects": self.export_path / "projects",
                "decisions": self.export_path / "decisions",
                "research": self.export_path / "research",
                "tools": self.export_path / "tools",
            }
            
            for folder in folders.values():
                folder.mkdir(exist_ok=True)
                
            # Export different views
            self._export_daily_logs(folders["daily"])
            self._export_by_source(folders["tools"])
            self._export_decisions(folders["decisions"])
            self._create_index()
            
        def _export_daily_logs(self, folder: Path):
            """Export daily activity logs"""
            with sqlite3.connect(self.db_path) as conn:
                conn.row_factory = sqlite3.Row
                
                # Get all dates with activity
                cursor = conn.execute("""
                    SELECT DISTINCT date(timestamp) as day
                    FROM episodes
                    ORDER BY day DESC
                """)
                
                for row in cursor:
                    day = row['day']
                    day_file = folder / f"{day}.md"
                    
                    # Get all episodes for this day
                    episodes = conn.execute("""
                        SELECT * FROM episodes
                        WHERE date(timestamp) = ?
                        ORDER BY timestamp
                    """, (day,)).fetchall()
                    
                    # Write daily markdown
                    content = f"# Daily Log: {day}\n\n"
                    
                    for episode in episodes:
                        content += f"## {episode['name']}\n"
                        content += f"*{episode['timestamp']} - Source: {episode['source']}*\n\n"
                        content += f"{episode['content']}\n\n"
                        
                        if episode['tags']:
                            tags = json.loads(episode['tags'])
                            content += f"Tags: {', '.join(tags)}\n\n"
                        
                        content += "---\n\n"
                    
                    day_file.write_text(content)
        
        def _export_by_source(self, folder: Path):
            """Export episodes grouped by source/tool"""
            with sqlite3.connect(self.db_path) as conn:
                conn.row_factory = sqlite3.Row
                
                # Get all sources
                sources = conn.execute("""
                    SELECT DISTINCT source FROM episodes
                """).fetchall()
                
                for source_row in sources:
                    source = source_row['source']
                    source_file = folder / f"{source}.md"
                    
                    # Get episodes for this source
                    episodes = conn.execute("""
                        SELECT * FROM episodes
                        WHERE source = ?
                        ORDER BY timestamp DESC
                    """, (source,)).fetchall()
                    
                    # Write source markdown
                    content = f"# {source.title()} Knowledge\n\n"
                    
                    for episode in episodes:
                        content += f"## {episode['name']}\n"
                        content += f"*{episode['timestamp']}*\n\n"
                        content += f"{episode['content']}\n\n"
                        content += "---\n\n"
                    
                    source_file.write_text(content)
        
        def _export_decisions(self, folder: Path):
            """Export decision-related episodes"""
            with sqlite3.connect(self.db_path) as conn:
                conn.row_factory = sqlite3.Row
                
                # Search for decision-related content
                decisions = conn.execute("""
                    SELECT * FROM episodes
                    WHERE content LIKE '%decision%' 
                       OR content LIKE '%decided%'
                       OR content LIKE '%chose%'
                       OR name LIKE '%decision%'
                    ORDER BY timestamp DESC
                """).fetchall()
                
                if decisions:
                    content = "# Decisions Log\n\n"
                    
                    for decision in decisions:
                        content += f"## {decision['name']}\n"
                        content += f"*{decision['timestamp']}*\n\n"
                        content += f"{decision['content']}\n\n"
                        content += "---\n\n"
                    
                    (folder / "decisions.md").write_text(content)
        
        def _create_index(self):
            """Create main index file"""
            with sqlite3.connect(self.db_path) as conn:
                # Get statistics
                stats = {
                    "total_episodes": conn.execute("SELECT COUNT(*) FROM episodes").fetchone()[0],
                    "total_tools": conn.execute("SELECT COUNT(*) FROM tool_logs").fetchone()[0],
                    "sources": conn.execute("SELECT DISTINCT source FROM episodes").fetchall(),
                    "recent": conn.execute("""
                        SELECT * FROM episodes 
                        ORDER BY timestamp DESC 
                        LIMIT 10
                    """).fetchall()
                }
                
            # Create index markdown
            content = f"""# Claude Memory Knowledge Base

    *Last updated: {datetime.now().strftime('%Y-%m-%d %H:%M')}*

    ## Statistics

    - **Total Episodes**: {stats['total_episodes']}
    - **Tool Executions Logged**: {stats['total_tools']}
    - **Knowledge Sources**: {len(stats['sources'])}

    ## Navigation

    - [Daily Logs](./daily/) - Day-by-day activity
    - [Tools](./tools/) - Knowledge by tool/source
    - [Decisions](./decisions/decisions.md) - Key decisions made
    - [Research](./research/) - Research findings

    ## Recent Activity

    """

            # The rows in stats['recent'] were fetched without a row_factory,
            # so columns are addressed by position (assumed schema order:
            # 1=timestamp, 2=name, 4=source)
            for episode in stats['recent']:
                content += f"- **{episode[2]}** - {episode[4]} ({episode[1][:10]})\n"

            (self.export_path / "README.md").write_text(content)
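As a sanity check on the queries above, here is a self-contained sketch of the daily-log grouping run against a throwaway database. The `episodes` column set (timestamp, name, content, source, tags) is inferred from the exporter's SELECT statements, not taken from the real schema.

```python
import json
import sqlite3
import tempfile
from pathlib import Path

# Throwaway database with the columns the exporter's queries assume
# (inferred from the SELECTs above; the real schema may differ).
tmp = Path(tempfile.mkdtemp())
with sqlite3.connect(tmp / "knowledge.db") as conn:
    conn.execute("""CREATE TABLE episodes (
        id INTEGER PRIMARY KEY, timestamp TEXT, name TEXT,
        content TEXT, source TEXT, tags TEXT)""")
    conn.executemany(
        "INSERT INTO episodes (timestamp, name, content, source, tags)"
        " VALUES (?, ?, ?, ?, ?)",
        [("2024-01-01T09:00:00", "Setup", "Installed deps", "cli", json.dumps(["setup"])),
         ("2024-01-01T10:00:00", "Decision", "Chose sqlite", "chat", None)],
    )
    conn.row_factory = sqlite3.Row
    # The same grouping query _export_daily_logs runs
    days = [r["day"] for r in conn.execute(
        "SELECT DISTINCT date(timestamp) AS day FROM episodes ORDER BY day DESC")]

print(days)  # → ['2024-01-01']
```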
  • Tool registration in the basic ClaudeMemoryMCP server, defining the tool name, description, and input schema (optional export_path).
    Tool(
        name="export_to_markdown",
        description="Export knowledge base to markdown files",
        inputSchema={
            "type": "object",
            "properties": {
                "export_path": {"type": "string", "description": "Path to export to (optional)"},
            },
        },
    ),
  • Input schema definition for the export_to_markdown tool: accepts optional string export_path.
    inputSchema={
        "type": "object",
        "properties": {
            "export_path": {"type": "string", "description": "Path to export to (optional)"},
        },
    },
  • Tool registration in the enhanced server, identical schema.
    Tool(
        name="export_to_markdown",
        description="Export knowledge base to markdown files",
        inputSchema={
            "type": "object",
            "properties": {
                "export_path": {"type": "string", "description": "Export path (optional)"},
            },
        },
    ),
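For context, a server typically routes tool calls by name to the handler. The dispatch below is a hypothetical sketch (the real server's call_tool is not shown on this page), with a stub standing in for export_to_markdown; argument handling mirrors the schema's optional string export_path.

```python
from pathlib import Path

def export_to_markdown(db_path=None, export_path=None):
    # Stub standing in for the real handler shown earlier.
    return export_path or Path.home() / ".mcp-standards" / "exports"

def call_tool(name: str, arguments: dict) -> str:
    # Hypothetical dispatch; names and structure are assumptions.
    if name == "export_to_markdown":
        raw = arguments.get("export_path")
        export_path = Path(raw).expanduser() if raw else None
        return str(export_to_markdown(export_path=export_path))
    raise ValueError(f"Unknown tool: {name}")

result = call_tool("export_to_markdown", {"export_path": "/tmp/out"})
print(result)  # → /tmp/out
```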
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool performs an export operation, implying it reads from a knowledge base and writes markdown files, but lacks details on permissions needed, whether it overwrites existing files, error handling, or output location defaults. This is inadequate for a mutation tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
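By way of illustration, a registration that disclosed this behavior might look like the sketch below. The wording is hypothetical; the annotation keys follow the MCP ToolAnnotations hints (readOnlyHint, destructiveHint, idempotentHint), which is an assumption about the server's SDK support, so the sketch uses a plain dict rather than a specific SDK type. The behavioral claims in the description are taken from the handler code shown above.

```python
# Hypothetical, fuller registration for the same tool (plain dict form).
tool = {
    "name": "export_to_markdown",
    "description": (
        "Export the entire knowledge base to markdown files. Writes to "
        "~/.mcp-standards/exports by default, creates daily/, projects/, "
        "decisions/, research/, and tools/ subfolders, and overwrites "
        "previously exported files of the same name."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "export_path": {
                "type": "string",
                "description": "Destination directory; created if missing. "
                               "Defaults to ~/.mcp-standards/exports.",
            },
        },
    },
    # MCP ToolAnnotations-style hints (an assumption about SDK support)
    "annotations": {
        "readOnlyHint": False,
        "destructiveHint": True,   # overwrites existing export files
        "idempotentHint": True,    # re-running produces the same files
    },
}
print(tool["annotations"]["destructiveHint"])  # → True
```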

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It is front-loaded with the core action and resource, making it easy to parse quickly, which is ideal for conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool involves mutation (exporting files) with no annotations and no output schema, the description is incomplete. It fails to address critical aspects like what the export includes (e.g., all knowledge base content or filtered subsets), success/failure indicators, or behavioral traits, leaving significant gaps for agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: the single parameter 'export_path' is documented as an optional string for the export destination. The tool description adds no parameter semantics beyond the schema, such as format examples or default behavior, so it meets the baseline set by full schema coverage without adding value on top.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Export') and target resource ('knowledge base to markdown files'), making the purpose immediately understandable. However, it doesn't differentiate from potential sibling tools that might also export content in different formats or from different sources, which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. There are no mentions of prerequisites, timing considerations, or comparisons to sibling tools like 'suggest_claudemd_update' or 'update_claudemd' that might involve similar content manipulation. This leaves the agent without contextual usage information.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
