custom_prompt
Execute custom prompts for code analysis and generation tasks using a local LLM, with optional file context for single- or multi-file projects.
Instructions
Universal fallback executor for any custom prompt with optional file context. Uses dynamic token allocation based on your loaded model, so it can handle everything from quick tasks to comprehensive multi-file analysis. It is the Swiss Army knife for cases where no other specialized function matches your needs.
WORKFLOW: Flexible analysis and generation for any development task
TIP: Provide clear, specific instructions for the analysis or generation task
SAVES: Claude context for strategic decisions
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| analysisDepth | No | Level of analysis detail | detailed |
| analysisType | No | Type of analysis to perform | general |
| code | No | The code to analyze (for single-file analysis) | |
| context | No | Optional structured context object for the task | |
| filePath | No | Path to single file to analyze | |
| files | No | Array of specific file paths to include as context | |
| language | No | Programming language (if applicable) | text |
| maxDepth | No | Maximum directory depth for multi-file discovery (1-5) | |
| projectPath | No | Path to project root (for multi-file analysis) | |
| prompt | Yes | The custom prompt/task to send to the local LLM | |
| working_directory | No | Working directory context | current working directory |
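
For illustration, here is a minimal sketch of how the arguments for a multi-file analysis request might be assembled. The field names mirror the schema above; the prompt text, project path, and file list are hypothetical placeholders, and how the payload is actually sent depends on your MCP client.

```python
import json

# Example argument payload for the custom_prompt tool.
# Only `prompt` is required; all other fields are optional.
# Paths and prompt text below are placeholders, not real project files.
arguments = {
    "prompt": "Summarize the error-handling strategy used across these modules.",
    "projectPath": "/path/to/project",               # hypothetical project root
    "files": ["src/errors.py", "src/handlers.py"],   # hypothetical file list
    "analysisType": "general",
    "analysisDepth": "detailed",
    "language": "python",
}

print(json.dumps(arguments, indent=2))
```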