Retrieve user-specific guidelines for writing tests, including naming conventions, structure, assertions, mocking, and coverage. Use before writing or modifying test code to ensure adherence to defined patterns and preferences.
Create or modify files by writing content to specified paths using a Python-based file management tool. Supports custom encoding and optional overwrite functionality.
Generate structured prompt templates and context schemas from user requirements or existing prompts, enabling testable and adaptable AI interactions. Integrates with an MCP server for efficient prompt management and evaluation.
Apply specified variables to a template prompt to generate tailored results. Use this tool by providing the prompt ID and variable values for customizable outputs.
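The lookup-then-substitute flow above can be sketched as follows; the in-memory `PROMPTS` store, the prompt ID, and the variable names are all hypothetical stand-ins for whatever the tool actually manages:

```python
from string import Template

# Hypothetical in-memory prompt store keyed by prompt ID.
PROMPTS = {
    "greeting": Template("Hello $name, welcome to $product!"),
}

def apply_template(prompt_id: str, variables: dict) -> str:
    """Fill the template identified by prompt_id with the given variable values."""
    template = PROMPTS[prompt_id]
    # substitute() raises KeyError if a required variable is missing,
    # surfacing incomplete inputs instead of emitting a partial prompt.
    return template.substitute(variables)
```

For example, `apply_template("greeting", {"name": "Ada", "product": "Box"})` yields a fully resolved prompt string.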
Query Box AI to analyze a specific file and generate responses based on a provided prompt. Use this tool to extract insights or answers directly from file content stored in Box.
Transforms prompts into Chain of Draft (CoD) or Chain of Thought (CoT) format to enhance LLM reasoning quality while reducing token usage by up to 92.4%, supporting multiple LLM providers including Claude, GPT, Ollama, and local models.
A simple MCP server implemented in TypeScript that communicates over stdio; users in Cursor trigger the tool by asking questions that end with 'yes or no'.
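The server itself is TypeScript and speaks the full MCP protocol; the sketch below only illustrates the underlying stdio pattern (newline-delimited JSON requests in, responses out) with the 'yes or no' suffix check, using hypothetical message fields rather than real MCP framing:

```python
import json
import sys

def handle_request(line: str) -> str:
    """Answer one JSON request; only questions ending in 'yes or no' are accepted."""
    req = json.loads(line)
    question = req.get("question", "")
    if question.rstrip(" ?").lower().endswith("yes or no"):
        response = {"answer": "yes"}  # placeholder decision logic
    else:
        response = {"error": "question must end with 'yes or no'"}
    return json.dumps(response)

def serve(stdin=sys.stdin, stdout=sys.stdout) -> None:
    """Read newline-delimited JSON from stdin, write one response per line to stdout."""
    for line in stdin:
        stdout.write(handle_request(line) + "\n")
        stdout.flush()
```

The suffix check is what gates tool invocation: any question not phrased to end with 'yes or no' is rejected before reaching the answer logic.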
A lightweight MCP server that provides a unified interface to various LLM providers including OpenAI, Anthropic, Google Gemini, Groq, DeepSeek, and Ollama.