## Server Configuration

Describes the environment variables required to run the server.
| Name | Required | Description | Default |
|---|---|---|---|
| LOG_LEVEL | No | Logging level: debug, info, warn, error | info |
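As a minimal sketch of how this variable could be consumed, the snippet below reads `LOG_LEVEL` with the documented default and rejects unsupported values. The helper name `resolve_log_level` is illustrative, not part of the server's API.

```python
import os

# The four levels documented in the table above.
VALID_LEVELS = {"debug", "info", "warn", "error"}

def resolve_log_level(env: dict) -> str:
    """Return the configured log level, falling back to the documented default."""
    level = env.get("LOG_LEVEL", "info").lower()
    if level not in VALID_LEVELS:
        raise ValueError(f"Unsupported LOG_LEVEL: {level!r}")
    return level

# In the server process you would pass os.environ; a plain dict works for testing.
print(resolve_log_level(dict(os.environ)))
```

Normalizing with `.lower()` keeps the check case-insensitive, so `LOG_LEVEL=DEBUG` and `LOG_LEVEL=debug` behave the same.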
## Tools

Functions exposed to the LLM so it can take actions.
| Name | Description |
|---|---|
| generate_prompt | Transform a raw idea into a well-structured, actionable prompt optimized for AI assistants. Use this tool when you need to: • Create a new prompt from scratch • Structure a vague idea into a clear request • Generate role-specific prompts (coding, writing, research, etc.) Supports templates: coding (for programming tasks), writing (for content creation), research (for investigation), analysis (for data/business analysis), factcheck (for verification), general (versatile). IMPORTANT: When available, pass workspace context (file structure, package.json, tech stack) to generate prompts that align with the user's project. |
| refine_prompt | Iteratively improve an existing prompt based on specific feedback. Use this tool when you need to: • Improve a prompt that didn't get good results • Add missing context or constraints • Make a prompt more specific or clearer • Adapt a prompt for a different AI model The tool preserves the original structure while applying targeted improvements. IMPORTANT: When available, pass workspace context (file structure, package.json, tech stack) to ensure refined prompts comply with the user's project scope and original request. |
| analyze_prompt | Evaluate prompt quality and get actionable improvement suggestions. Use this tool when you need to: • Assess if a prompt is well-structured • Identify weaknesses before using a prompt • Get specific suggestions for improvement • Compare prompt quality before/after refinement Returns scores (0-100) for: clarity, specificity, structure, actionability. |
| get_server_status | Get PromptArchitect server status and performance metrics. Use this tool to check: • Whether AI (Gemini) is available • Cache hit rate and request statistics • Average response latency |
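MCP clients invoke tools like these with a JSON-RPC `tools/call` request. The sketch below builds such a request for `generate_prompt`; the argument names (`idea`, `template`, `workspace_context`) are assumptions based on the description above, not the server's actual schema, so consult the tool's input schema for the real parameter names.

```python
import json

# Hypothetical tools/call request for generate_prompt.
# Argument names are illustrative; the server's tool schema is authoritative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "generate_prompt",
        "arguments": {
            "idea": "summarize weekly sales data",
            "template": "analysis",  # one of: coding, writing, research, analysis, factcheck, general
            "workspace_context": "TypeScript project; see package.json",
        },
    },
}

# Serialize for the transport (stdio or HTTP, depending on the client).
print(json.dumps(request, indent=2))
```

Passing the workspace context, as the tool descriptions recommend, lets the generated prompt align with the user's project.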
## Prompts

Interactive templates invoked by user choice.
This server does not currently expose any prompts.
## Resources

Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| Template Categories | List of available template categories |
| Debug Code | Analyze and fix bugs in code |
| Code Review | Review code for quality, security, and best practices |
| Blog Post | Generate a blog post outline or draft |
| Professional Email | Draft a professional email |
| Research Summary | Summarize research findings |
| SWOT Analysis | Conduct a SWOT analysis |
| Comparison Analysis | Compare multiple options or solutions |
| Fact Check | Verify claims and statements |
| Coding Templates | All templates in the coding category |
| Writing Templates | All templates in the writing category |
| Research Templates | All templates in the research category |
| Analysis Templates | All templates in the analysis category |
| Factcheck Templates | All templates in the factcheck category |
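Resources are fetched with a JSON-RPC `resources/read` request. The sketch below shows the request shape; the URI (`prompt-architect://templates/categories`) is a made-up example, since the source does not list the servers URIs, so a client should first call `resources/list` to discover the real ones.

```python
import json

# Hypothetical resources/read request for the "Template Categories" resource.
# The URI scheme is illustrative; discover actual URIs via resources/list.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "resources/read",
    "params": {"uri": "prompt-architect://templates/categories"},
}

# Round-trip through the wire format, as a client transport would.
wire = json.dumps(request)
decoded = json.loads(wire)
print(decoded["params"]["uri"])
```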