Context Optimizer MCP Server

A Model Context Protocol (MCP) server that provides context optimization tools for AI coding assistants including GitHub Copilot, Cursor AI, Claude Desktop, and other MCP-compatible assistants. It enables AI assistants to extract targeted information rather than processing large files and command outputs in their entirety.

This server provides context optimization functionality similar to the VS Code Copilot Context Optimizer extension, but works across any MCP-compatible application.

Features

  • 🔍 File Analysis Tool (askAboutFile) - Extract specific information from files without loading entire contents
  • 🖥️ Terminal Execution Tool (runAndExtract) - Execute commands and extract relevant information using LLM analysis
  • ❓ Follow-up Questions Tool (askFollowUp) - Continue conversations about previous terminal executions
  • 🔬 Research Tools (researchTopic, deepResearch) - Conduct web research using Exa.ai's API
  • 🔒 Security Controls - Path validation, command filtering, and session management
  • 🔧 Multi-LLM Support - Works with Google Gemini, Claude (Anthropic), and OpenAI
  • ⚙️ Environment Variable Configuration - API key management through system environment variables
  • 🏗️ Simple Configuration - Environment variables only, no config files to manage
  • 🧪 Comprehensive Testing - Unit tests, integration tests, and security validation

Quick Start

1. Install globally:

npm install -g context-optimizer-mcp-server

2. Set environment variables (see docs/guides/usage.md for OS-specific instructions):

export CONTEXT_OPT_LLM_PROVIDER="gemini"
export CONTEXT_OPT_GEMINI_KEY="your-gemini-api-key"
export CONTEXT_OPT_EXA_KEY="your-exa-api-key"
export CONTEXT_OPT_ALLOWED_PATHS="/path/to/your/projects"

3. Add to your MCP client configuration:

For Claude Desktop (claude_desktop_config.json):

{ "mcpServers": { "context-optimizer": { "command": "context-optimizer-mcp" } } }

For VS Code (mcp.json):

{ "servers": { "context-optimizer": { "command": "context-optimizer-mcp" } } }

For complete setup instructions including OS-specific environment variable configuration and AI assistant setup, see docs/guides/usage.md.
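As an optional sanity check before configuring an assistant, you can connect to the installed server over stdio and list its tools. The sketch below uses the official MCP TypeScript SDK (@modelcontextprotocol/sdk); the client name and the idea of a standalone smoke test are illustrative, not part of the documented setup.

// Minimal sketch (TypeScript, ESM): launch the globally installed server
// as a child process and ask it which tools it advertises.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({ command: "context-optimizer-mcp" });
const client = new Client({ name: "smoke-test", version: "0.0.1" });

await client.connect(transport);

// Expect the five tools described in the next section.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

await client.close();

If the tool names print, the binary is installed and reachable; API-key problems will only surface when a tool actually runs.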

Available Tools

  • askAboutFile - Extract specific information from files without loading entire contents into chat context. Perfect for checking if files contain specific functions, extracting import/export statements, or understanding file purpose without reading the full content.
  • runAndExtract - Execute terminal commands and intelligently extract relevant information using LLM analysis. Supports non-interactive commands with security validation, timeouts, and session management for follow-up questions.
  • askFollowUp - Continue conversations about previous terminal executions without re-running commands. Access complete context from previous runAndExtract calls including full command output and execution details.
  • researchTopic - Conduct quick, focused web research on software development topics using Exa.ai's research capabilities. Get current best practices, implementation guidance, and up-to-date information on evolving technologies.
  • deepResearch - Comprehensive research and analysis using Exa.ai's exhaustive capabilities for critical decision-making and complex architectural planning. Ideal for strategic technology decisions, architecture planning, and long-term roadmap development.

For detailed tool documentation and examples, see docs/tools.md and docs/guides/usage.md.
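To make these descriptions concrete, here is an illustrative sketch of the runAndExtract / askFollowUp flow, reusing the client from the Quick Start sketch above. The argument names (command, extractionPrompt, question) are assumptions for illustration only; this README does not spell out the input schemas, so inspect the schemas returned by listTools() before relying on them.

// Illustrative only: argument names are assumed, not the documented schema.
const run = await client.callTool({
  name: "runAndExtract",
  arguments: {
    command: "npm ls --depth=0",                       // hypothetical field name
    extractionPrompt: "List the direct dependencies.", // hypothetical field name
  },
});
console.log(run.content);

// A follow-up reuses the stored session from the previous call,
// so the command is not executed again.
const followUp = await client.callTool({
  name: "askFollowUp",
  arguments: {
    question: "Were any peer dependency warnings reported?", // hypothetical field name
  },
});
console.log(followUp.content);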

Documentation

All documentation is organized under the docs/ directory:

Topic | Location | Description
Architecture | docs/architecture.md | System design and component overview
Tools Reference | docs/tools.md | Complete tool documentation and examples
Usage Guide | docs/guides/usage.md | Complete setup and configuration
VS Code Setup | docs/guides/vs-code-setup.md | VS Code specific configuration
Troubleshooting | docs/guides/troubleshooting.md | Common issues and solutions
API Keys | docs/reference/api-keys.md | API key management
Testing | docs/reference/testing.md | Testing framework and procedures
Changelog | docs/reference/changelog.md | Version history
Contributing | docs/reference/contributing.md | Development guidelines
Security | docs/reference/security.md | Security policy
Code of Conduct | docs/reference/code-of-conduct.md | Community guidelines

Quick Links
  • Get Started: See docs/guides/usage.md for complete setup instructions
  • Tools Reference: Check docs/tools.md for detailed tool documentation
  • Troubleshooting: Check docs/guides/troubleshooting.md for common issues
  • VS Code Setup: Follow docs/guides/vs-code-setup.md for VS Code configuration

Testing

# Run all tests (skips LLM integration tests without API keys)
npm test

# Run tests with API keys for full integration testing
# Set environment variables first:
export CONTEXT_OPT_LLM_PROVIDER="gemini"
export CONTEXT_OPT_GEMINI_KEY="your-gemini-key"
export CONTEXT_OPT_EXA_KEY="your-exa-key"
npm test  # Now runs all tests including LLM integration

# Run in watch mode
npm run test:watch

For detailed testing setup, see docs/reference/testing.md.

Contributing

Contributions are welcome! Please read docs/reference/contributing.md for guidelines on development workflow, coding standards, testing, and submitting pull requests.

Community

  • Code of Conduct: See docs/reference/code-of-conduct.md
  • Security Reports: Follow docs/reference/security.md for responsible disclosure
  • Issues: Use GitHub Issues for bugs & feature requests
  • Pull Requests: Ensure tests pass and docs are updated
  • Discussions: Use GitHub Discussions (if enabled) for open-ended questions and ideas

License

MIT License - see LICENSE file for details.

