optimize_readme

Restructures and condenses README files by extracting detailed sections into separate documentation, focusing on clarity and conciseness for better project understanding.

Instructions

Optimize README content by restructuring, condensing, and extracting detailed sections to separate documentation

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| readme_path | Yes | Path to the README file to optimize | - |
| strategy | No | Optimization strategy: community_focused, enterprise_focused, developer_focused, or general | community_focused |
| max_length | No | Target maximum length in lines | 300 |
| include_tldr | No | Generate and include a TL;DR section | true |
| preserve_existing | No | Preserve existing content structure where possible | false |
| output_path | No | Path to write the optimized README (if not specified, returns content only) | - |
| create_docs_directory | No | Create docs/ directory for extracted content | true |
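
For illustration, a call to optimize_readme might pass arguments like the following. The values are hypothetical; only readme_path is required, and omitted fields fall back to the defaults listed above.

    // Hypothetical arguments for an optimize_readme call; only readme_path is required.
    const exampleArguments = {
      readme_path: "./README.md",           // README to optimize
      strategy: "community_focused",        // one of the strategies listed above
      max_length: 300,                      // target length in lines
      include_tldr: true,                   // add a generated TL;DR section
      output_path: "./README.optimized.md", // write the result here instead of only returning it
      create_docs_directory: true,          // extract detailed sections into docs/
    };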

Implementation Reference

  • The main handler function `optimizeReadme` that implements the core logic of the `optimize_readme` MCP tool. It handles input validation, README parsing, TL;DR generation, section extraction, and documentation structure creation, and returns structured results in MCPToolResponse format. Illustrative usage sketches for these references follow the list.
    // Imports assumed for this excerpt (not shown in the original reference); the handler
    // uses Node's fs promises API and the path module.
    import { promises as fs } from "fs";
    import path from "path";

    export async function optimizeReadme(
      input: Partial<OptimizeReadmeInput>,
    ): Promise<
      MCPToolResponse<{ optimization: OptimizationResult; nextSteps: string[] }>
    > {
      const startTime = Date.now();

      try {
        // Validate input
        const validatedInput = OptimizeReadmeInputSchema.parse(input);
        const {
          readme_path,
          strategy,
          max_length,
          include_tldr,
          output_path,
          create_docs_directory,
        } = validatedInput;

        // Read original README
        const originalContent = await fs.readFile(readme_path, "utf-8");
        const originalLength = originalContent.split("\n").length;

        // Parse README structure
        const sections = parseReadmeStructure(originalContent);

        // Generate TL;DR if requested
        const tldrGenerated = include_tldr
          ? generateTldr(originalContent, sections)
          : null;

        // Identify sections to extract
        const extractedSections = identifySectionsToExtract(
          sections,
          strategy,
          max_length,
        );

        // Create basic optimization result
        const optimizedContent =
          originalContent +
          "\n\n## TL;DR\n\n" +
          (tldrGenerated || "Quick overview of the project.");

        const restructuringChanges = [
          {
            type: "added" as const,
            section: "TL;DR",
            description: "Added concise project overview",
            impact: "Helps users quickly understand project value",
          },
        ];

        const optimizedLength = optimizedContent.split("\n").length;
        const reductionPercentage = Math.round(
          ((originalLength - optimizedLength) / originalLength) * 100,
        );

        // Create docs directory and extract detailed content if requested
        if (create_docs_directory && extractedSections.length > 0) {
          await createDocsStructure(path.dirname(readme_path), extractedSections);
        }

        // Write optimized README if output path specified
        if (output_path) {
          await fs.writeFile(output_path, optimizedContent, "utf-8");
        }

        const recommendations = generateOptimizationRecommendations(
          originalLength,
          optimizedLength,
          extractedSections,
          strategy,
        );

        const optimization: OptimizationResult = {
          originalLength,
          optimizedLength,
          reductionPercentage,
          optimizedContent,
          extractedSections,
          tldrGenerated,
          restructuringChanges,
          recommendations,
        };

        const nextSteps = generateOptimizationNextSteps(
          optimization,
          validatedInput,
        );

        return {
          success: true,
          data: {
            optimization,
            nextSteps,
          },
          metadata: {
            toolVersion: "1.0.0",
            executionTime: Date.now() - startTime,
            timestamp: new Date().toISOString(),
          },
        };
      } catch (error) {
        return {
          success: false,
          error: {
            code: "OPTIMIZATION_FAILED",
            message: "Failed to optimize README",
            details: error instanceof Error ? error.message : "Unknown error",
            resolution: "Check README file path and permissions",
          },
          metadata: {
            toolVersion: "1.0.0",
            executionTime: Date.now() - startTime,
            timestamp: new Date().toISOString(),
          },
        };
      }
    }
  • Zod input schema `OptimizeReadmeInputSchema` and TypeScript type `OptimizeReadmeInput` defining the parameters for the `optimize_readme` tool, including README path, optimization strategy, length limits, and output options.
    // Import assumed for this excerpt (not shown in the original reference).
    import { z } from "zod";

    const OptimizeReadmeInputSchema = z.object({
      readme_path: z.string().min(1, "README path is required"),
      strategy: z
        .enum([
          "community_focused",
          "enterprise_focused",
          "developer_focused",
          "general",
        ])
        .optional()
        .default("community_focused"),
      max_length: z.number().min(50).max(1000).optional().default(300),
      include_tldr: z.boolean().optional().default(true),
      preserve_existing: z.boolean().optional().default(false),
      output_path: z.string().optional(),
      create_docs_directory: z.boolean().optional().default(true),
    });
  • Helper function `parseReadmeStructure` that parses the README markdown into structured sections (title, content, level, lines, word count, essential flag) used for analysis and optimization decisions.
    function parseReadmeStructure(content: string): ReadmeSection[] {
      const lines = content.split("\n");
      const sections: ReadmeSection[] = [];
      let currentTitle = "";
      let currentLevel = 0;
      let currentStartLine = 0;

      lines.forEach((line, index) => {
        const headingMatch = line.match(/^(#{1,6})\s+(.+)$/);
        if (headingMatch) {
          // Save previous section
          if (currentTitle) {
            const endLine = index - 1;
            const sectionContent = lines
              .slice(currentStartLine, endLine + 1)
              .join("\n");
            const wordCount = sectionContent.split(/\s+/).length;
            const isEssential = isEssentialSection(currentTitle);
            sections.push({
              title: currentTitle,
              content: sectionContent,
              level: currentLevel,
              startLine: currentStartLine,
              endLine: endLine,
              wordCount: wordCount,
              isEssential: isEssential,
            });
          }
          // Start new section
          currentTitle = headingMatch[2].trim();
          currentLevel = headingMatch[1].length;
          currentStartLine = index;
        }
      });

      // Add final section
      if (currentTitle) {
        const endLine = lines.length - 1;
        const sectionContent = lines
          .slice(currentStartLine, endLine + 1)
          .join("\n");
        const wordCount = sectionContent.split(/\s+/).length;
        const isEssential = isEssentialSection(currentTitle);
        sections.push({
          title: currentTitle,
          content: sectionContent,
          level: currentLevel,
          startLine: currentStartLine,
          endLine: endLine,
          wordCount: wordCount,
          isEssential: isEssential,
        });
      }

      return sections;
    }
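
A minimal sketch of how the `optimizeReadme` handler above could be invoked directly, assuming it is exported from the tool's module. The import path, file names, and logging are illustrative only and are not taken from the source.

    // Usage sketch; the module path below is assumed, not part of the reference above.
    import { optimizeReadme } from "./optimize-readme.js";

    async function main(): Promise<void> {
      const response = await optimizeReadme({
        readme_path: "./README.md",
        strategy: "community_focused",
        max_length: 300,
      });

      if (response.success && response.data) {
        const { optimization, nextSteps } = response.data;
        console.log(
          `README: ${optimization.originalLength} -> ${optimization.optimizedLength} lines`,
        );
        console.log("Next steps:", nextSteps.join("; "));
      } else {
        console.error("Optimization failed:", response.error?.message);
      }
    }

    main().catch(console.error);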
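
The `OptimizeReadmeInput` type mentioned alongside the schema is not shown in the excerpt; for Zod schemas it is conventionally derived with `z.infer`. A short sketch of that assumption, reusing the schema and `z` import above, and of how parsing fills in the declared defaults:

    // Assumed derivation of the input type from the schema shown above.
    type OptimizeReadmeInput = z.infer<typeof OptimizeReadmeInputSchema>;

    // Parsing a minimal input applies the schema defaults:
    const parsed = OptimizeReadmeInputSchema.parse({ readme_path: "./README.md" });
    // parsed.strategy === "community_focused", parsed.max_length === 300,
    // parsed.include_tldr === true, parsed.create_docs_directory === true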
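
The `ReadmeSection` shape and the `isEssentialSection` helper used by `parseReadmeStructure` are not part of the excerpt above. The sketch below is a plausible reconstruction inferred from the fields the parser populates; the list of essential section names is an assumption, not the project's actual heuristic.

    // Shape inferred from the fields parseReadmeStructure assigns; assumed, not copied from the source.
    interface ReadmeSection {
      title: string;
      content: string;
      level: number;
      startLine: number;
      endLine: number;
      wordCount: number;
      isEssential: boolean;
    }

    // Hypothetical essential-section check; the real implementation may use a different list.
    function isEssentialSection(title: string): boolean {
      const essential = ["overview", "installation", "usage", "quick start", "license"];
      const normalized = title.toLowerCase();
      return essential.some((name) => normalized.includes(name));
    }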
