
analyze_prompt

Evaluate prompt quality and get actionable improvement suggestions: the tool scores a prompt on clarity, specificity, structure, and actionability (0-100 each) and returns concrete recommendations.

Instructions

Evaluate prompt quality and get actionable improvement suggestions.

Use this tool when you need to:

• Assess if a prompt is well-structured
• Identify weaknesses before using a prompt
• Get specific suggestions for improvement
• Compare prompt quality before/after refinement

Returns scores (0-100) for: clarity, specificity, structure, actionability.

Input Schema

Name                Required  Description
prompt              Yes       The prompt to analyze.
evaluationCriteria  No        Specific criteria to evaluate. Default: all criteria.
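For orientation, a call to analyze_prompt might look like the sketch below. The field names follow the schema above; the prompt text and all score values are illustrative, not real output from the tool:

```typescript
// Input matching the schema above (evaluationCriteria is optional).
const input = {
  prompt: "Summarize the attached report in three bullet points.",
  evaluationCriteria: ["clarity", "specificity"],
};

// Illustrative shape of the analysis result. The numbers are made up,
// but each score is on the documented 0-100 scale.
const result = {
  scores: { overall: 78, clarity: 82, specificity: 70, structure: 80, actionability: 75 },
  suggestions: ["State the desired output format explicitly."],
  strengths: ["Single, clearly stated task"],
  weaknesses: ["No audience or tone specified"],
};

console.log(result.scores.overall);
```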

Implementation Reference

  • Core handler function for the analyze_prompt tool. Analyzes prompt quality using API when available, with comprehensive fallback rule-based scoring and suggestions.
    export async function analyzePrompt(input: AnalyzePromptInput): Promise<AnalysisResult> {
      const { prompt, evaluationCriteria = DEFAULT_CRITERIA } = input;

      logger.info('Analyzing prompt', { promptLength: prompt.length, criteria: evaluationCriteria });

      // Calculate structural metadata (always needed)
      const metadata = calculateMetadata(prompt);

      // Use PromptArchitect API
      if (isApiClientAvailable()) {
        try {
          const response = await apiAnalyzePrompt({
            prompt,
            evaluationCriteria,
          });
          logger.info('Analyzed via PromptArchitect API');
          return {
            scores: {
              overall: response.overallScore,
              clarity: response.scores.clarity,
              specificity: response.scores.specificity,
              structure: response.scores.structure,
              actionability: response.scores.actionability,
            },
            suggestions: response.suggestions || [],
            strengths: response.strengths || identifyStrengths(prompt, metadata),
            weaknesses: response.weaknesses || identifyWeaknesses(prompt, metadata),
            metadata,
          };
        } catch (error) {
          logger.warn('API request failed, using fallback', {
            error: error instanceof Error ? error.message : 'Unknown error',
          });
        }
      }

      // Fallback rule-based analysis
      logger.warn('Using fallback rule-based analysis');
      return performRuleBasedAnalysis(prompt, metadata);
    }
  • Zod schema defining input validation for the analyze_prompt tool.
    export const analyzePromptSchema = z.object({
      prompt: z.string().min(1).describe('The prompt to analyze'),
      evaluationCriteria: z.array(z.string()).optional().describe('Specific criteria to evaluate'),
    });
  • src/server.ts:233-244 (registration)
    MCP server request handler registration for calling the analyze_prompt tool.
    case 'analyze_prompt': {
      const input = analyzePromptSchema.parse(args);
      const result = await analyzePrompt(input);
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify(result, null, 2),
          },
        ],
      };
    }
  • src/server.ts:155-181 (registration)
    Tool metadata registration in MCP listTools handler, including description and input schema.
    {
      name: 'analyze_prompt',
      description: `Evaluate prompt quality and get actionable improvement suggestions.

    Use this tool when you need to:
    • Assess if a prompt is well-structured
    • Identify weaknesses before using a prompt
    • Get specific suggestions for improvement
    • Compare prompt quality before/after refinement

    Returns scores (0-100) for: clarity, specificity, structure, actionability.`,
      inputSchema: {
        type: 'object',
        properties: {
          prompt: {
            type: 'string',
            description: 'The prompt to analyze.',
          },
          evaluationCriteria: {
            type: 'array',
            items: { type: 'string' },
            description: 'Specific criteria to evaluate. Default: all criteria.',
          },
        },
        required: ['prompt'],
      },
    },
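The handler shown in the first reference follows a common pattern: try the hosted API first, then degrade to local rule-based scoring if the call fails. A minimal self-contained sketch of that pattern — callApi and ruleBasedScore here are hypothetical stand-ins, not the real PromptArchitect implementations:

```typescript
// Hypothetical API call; always fails here to simulate an outage.
async function callApi(prompt: string): Promise<number> {
  throw new Error("API unavailable");
}

// Hypothetical rule-based fallback: a crude heuristic that rewards
// multi-sentence prompts, capped at 100.
function ruleBasedScore(prompt: string): number {
  const sentences = prompt
    .split(/[.!?]/)
    .filter((s) => s.trim().length > 0).length;
  return Math.min(100, 40 + sentences * 10);
}

// API first, rule-based fallback on failure — the same shape as the handler above.
async function analyze(prompt: string): Promise<number> {
  try {
    return await callApi(prompt);
  } catch {
    return ruleBasedScore(prompt);
  }
}

analyze("Write a haiku. Keep it under 20 words.").then((score) => console.log(score)); // logs 60
```

The graceful degradation keeps the tool usable offline, at the cost of coarser scores than the API provides.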


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/MerabyLabs/promptarchitect-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.