Glama
127,274 tools. Last updated 2026-05-05 13:07

"A server for reading files to provide context for writing large documents" matching MCP tools:

  • Attach any file to a Bear note using file path or base64 data. Provide the note ID from note search, or specify a title. The server reads and encodes local files automatically.
  • Check if a secret exists in a scope without reading its value. Use as a precondition before reading or writing to avoid prompting for already configured keys. Returns 'true' for valid secrets, 'false' for missing or expired ones.
  • View the complete diff for a single file without truncation to review all changes in large files like lock files or generated code before staging.
    MIT
  • Retrieve specific chapter content by title or index for efficient document access. Ideal for targeted reading, updating chapters, or navigating large files without loading the entire document. Includes summary and navigation details.
  • Extract text content from LibreOffice documents as Markdown for AI processing. Supports pagination and character limits to handle large files efficiently.
    MIT
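
Tools like the chapter-retrieval entry above are invoked over MCP's JSON-RPC `tools/call` method. The sketch below shows what such a request might look like; the tool name `get_chapter` and its arguments are illustrative assumptions, not any listed server's actual schema.

```python
import json

# Hypothetical "tools/call" request for a chapter-retrieval tool.
# Tool name and argument keys are assumptions for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_chapter",
        "arguments": {"path": "book.md", "chapter": "Introduction"},
    },
}

# Serialize for transport (stdio or HTTP, depending on the server).
payload = json.dumps(request)
print(payload)
```

Fetching one chapter at a time keeps the context window small, which is the point of these targeted-reading tools.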

Matching MCP Servers

Matching MCP Connectors

  • Connect YNAB to AI assistants like ChatGPT and Claude via a hosted remote MCP server with OAuth. Provides tools for reading budgets, accounts, categories, transactions, analyzing spending patterns, forecasting cash flow, tracking goal progress, and managing funds — all after signing in with your own YNAB account.

  • A verified hub for conferences and journals that uses AI to match researchers with relevant academic publication and presentation opportunities.

  • Replace or insert text at specific line ranges in files using line-number operations. Ideal for large files or precise edits without context-heavy processing.
    MIT
  • Retrieve file metadata like size and line count without accessing file contents. Helps determine optimal reading strategies for large files by analyzing file characteristics first.
    MIT
  • Retrieve cached context to avoid redundant operations like reading multiple files, using glob patterns to find relevant saved data.
  • Analyzes large codebases by distributing files across multiple LLM services for parallel processing, reducing analysis time for projects with 20+ files.
    MIT
  • Offload expensive tasks to a cheaper AI model or summarize large vault files automatically. Small files are returned directly; large files are summarized by a worker model.
    MIT
  • Analyze code or files with a large context window, optionally focusing on security, architecture, or performance for targeted insights.
    Mozilla Public License 2.0
  • Download files from POHODA document storage. Retrieve file content for documents under 100KB, or get size information for larger files, using relative paths.
    MIT
  • Sample documents from an Azure CosmosDB container to discover field names, data types, and frequency, helping understand the data model before writing queries.
    MIT
  • Retrieve all Memory Bank files (product-context, active-context, progress, decision-log, system-patterns) as a single JSON object for quick context loading in AI assistants.
    MIT
  • Analyze recent oncology lab results by downloading documents and providing patient context for chemotherapy interpretation. Returns lab data inline for AI analysis with configurable limits.
  • Retrieve file content from Bitbucket repositories to read source code, configuration files, or documentation. Supports pagination for large files by specifying line ranges.
    Apache 2.0
  • Parse Swagger/OpenAPI documents to extract basic information quickly, ideal for handling large API specifications with optional filtering and caching.
    ISC
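
The line-range editing tool described above (replace or insert text at specific line numbers) boils down to simple 1-indexed list slicing. This is a minimal sketch of that operation; the function name and signature are illustrative, not the actual tool's API.

```python
def replace_lines(lines, start, end, new_lines):
    """Replace lines start..end (1-indexed, inclusive) with new_lines.

    Illustrative helper only; a real tool would also handle I/O,
    validation, and out-of-range indices.
    """
    return lines[:start - 1] + new_lines + lines[end:]

original = ["alpha\n", "beta\n", "gamma\n", "delta\n"]
edited = replace_lines(original, 2, 3, ["REPLACED\n"])
# edited == ["alpha\n", "REPLACED\n", "delta\n"]
```

Because only the addressed lines change, this kind of edit avoids sending the whole file through a model, which is why it suits large files.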