Improve AI prompts by applying targeted feedback to enhance clarity, specificity, and model compatibility while preserving original structure and project context.
Provides actionable suggestions for improving code quality by analyzing and optimizing code for performance, readability, maintainability, accessibility, or type safety, according to a user-defined focus and priority.
A simple MCP server implementation in TypeScript that communicates over stdio, letting users trigger the MCP tool in Cursor by asking questions that end with 'yes or no'.
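An MCP server over stdio exchanges newline-delimited JSON-RPC 2.0 messages, answering `tools/list` and `tools/call` requests. The sketch below shows that request handling in isolation, assuming a single illustrative tool; the tool name `yes_or_no` and its coin-flip answer are assumptions for demonstration, not details of the actual project.

```typescript
// Minimal sketch of the JSON-RPC dispatch an MCP stdio server performs.
// A real server would read newline-delimited requests from stdin and
// write responses to stdout; here we model one request/response pair.

interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

interface JsonRpcResponse {
  jsonrpc: "2.0";
  id: number;
  result?: unknown;
  error?: { code: number; message: string };
}

function handleRequest(req: JsonRpcRequest): JsonRpcResponse {
  if (req.method === "tools/list") {
    // Advertise the (illustrative) tool so the client can discover it.
    return {
      jsonrpc: "2.0",
      id: req.id,
      result: {
        tools: [{ name: "yes_or_no", description: "Answers yes or no" }],
      },
    };
  }
  if (req.method === "tools/call") {
    // Hypothetical tool logic: answer at random.
    const answer = Math.random() < 0.5 ? "yes" : "no";
    return {
      jsonrpc: "2.0",
      id: req.id,
      result: { content: [{ type: "text", text: answer }] },
    };
  }
  // -32601 is the JSON-RPC 2.0 "method not found" error code.
  return {
    jsonrpc: "2.0",
    id: req.id,
    error: { code: -32601, message: `Unknown method: ${req.method}` },
  };
}
```

Keeping dispatch as a pure function like this makes the protocol layer easy to test without wiring up stdin/stdout.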
Intelligently analyzes codebases to enhance LLM prompts with relevant context, featuring adaptive context management and task detection to produce higher quality AI responses.
A lightweight MCP server that provides a unified interface to various LLM providers including OpenAI, Anthropic, Google Gemini, Groq, DeepSeek, and Ollama.
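A unified interface over multiple LLM providers typically means one call shape dispatched to per-provider adapters. The sketch below shows that pattern under stated assumptions: the interface and method names are illustrative, not the project's actual API, and the real adapters would call the respective OpenAI, Anthropic, Gemini, Groq, DeepSeek, or Ollama endpoints.

```typescript
// Sketch of a provider-agnostic completion interface with a registry
// that routes requests by provider name. Shapes are assumptions for
// illustration, not the project's real API.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface LlmProvider {
  name: string;
  // Each adapter translates this uniform call into its provider's API.
  complete(messages: ChatMessage[]): Promise<string>;
}

class ProviderRegistry {
  private providers = new Map<string, LlmProvider>();

  register(p: LlmProvider): void {
    this.providers.set(p.name, p);
  }

  async complete(name: string, messages: ChatMessage[]): Promise<string> {
    const p = this.providers.get(name);
    if (!p) throw new Error(`Unknown provider: ${name}`);
    return p.complete(messages);
  }
}

// Usage with a stub adapter that just echoes the last user message:
const registry = new ProviderRegistry();
registry.register({
  name: "echo",
  complete: async (messages) => messages[messages.length - 1].content,
});
```

The payoff of this design is that callers depend only on `LlmProvider`, so swapping Anthropic for Ollama is a one-line registry change rather than a rewrite.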