
prompts-mcp-server

by thana0623

check_requirements

Check requirements for clarity by running five standard checks. Identifies ambiguities and generates clarifying questions to avoid assumptions.

Instructions

[Requirement Clarification] Runs the five requirement-clarity checks. When the requirement is unclear, generates follow-up questions; executing on guesses is prohibited.

Input Schema

Name             Required  Description                              Default
taskDescription  Yes       The user's task requirement description  —

Implementation Reference

  • Core handler: The `checkRequirements` function that executes the 5-item clarity check logic. It evaluates a task description against 5 standards (goal, input/output, constraints, acceptance criteria, scope) using heuristic keyword matching and returns a ClarityCheckResult with status, unclear items, and follow-up questions.
    export function checkRequirements(taskDescription: string): ClarityCheckResult {
      const items: ClarityCheckItem[] = [
        {
          id: 'goal',
          label: '目标明确',
          question: '能用一句话说清"这个需求要解决什么问题"?',
          status: '❌',
          detail: '',
        },
        {
          id: 'io',
          label: '输入输出明确',
          question: '"从哪来"和"到哪去"都要清楚?',
          status: '❌',
          detail: '',
        },
        {
          id: 'constraints',
          label: '约束明确',
          question: '有没有"不能改的地方"?',
          status: '❌',
          detail: '',
        },
        {
          id: 'acceptance',
          label: '验收标准明确',
          question: '"什么时候算完成"要有具体标准?',
          status: '❌',
          detail: '',
        },
        {
          id: 'scope',
          label: '影响范围明确',
          question: '要改哪些文件/模块、要更新哪些 docs?',
          status: '❌',
          detail: '',
        },
      ];
    
      // Simple heuristic based on description length
      const desc = taskDescription.trim();
      if (desc.length > 10) {
        items[0].status = '⚠️';
        items[0].detail = '有描述但不够清晰';
      }
      if (desc.length > 30) {
        items[0].status = '✅';
        items[0].detail = '有较清晰的目标描述';
      }
    
      // Keyword-based checks
      const lower = desc.toLowerCase();
      if (/输入|从.*来|接口|参数|request|input|from/i.test(lower)) {
        items[1].status = '⚠️';
        items[1].detail = '提到了输入来源';
      }
      if (/输出|返回|结果|response|output|to/i.test(lower)) {
        items[1].status = '⚠️';
        items[1].detail = '提到了输出目标';
      }
      if (/输入.*输出|从.*到|接口.*返回|request.*response/i.test(lower)) {
        items[1].status = '✅';
        items[1].detail = '输入输出较明确';
      }
    
      if (/不能|不要|禁止|约束|限制|边界|except|but|only|must not/i.test(lower)) {
        items[2].status = '✅';
        items[2].detail = '提到了约束条件';
      }
    
      if (/完成|标准|验收|通过|测试|test|pass|done|finish/i.test(lower)) {
        items[3].status = '⚠️';
        items[3].detail = '提到了完成标准';
      }
      if (/验收标准|测试用例|test case|acceptance/i.test(lower)) {
        items[3].status = '✅';
        items[3].detail = '验收标准较明确';
      }
    
      if (/文件|模块|页面|影响|修改|change|file|module|page/i.test(lower)) {
        items[4].status = '⚠️';
        items[4].detail = '提到了影响范围';
      }
      if (/影响.*文件|修改.*模块|涉及.*页面|change.*file|affect.*module/i.test(lower)) {
        items[4].status = '✅';
        items[4].detail = '影响范围较明确';
      }
    
      const unclearItems = items.filter(i => i.status !== '✅').map(i => i.label);
      const allClear = unclearItems.length === 0;
    
      // Generate follow-up questions
      const followUpQuestions: string[] = [];
      if (items[0].status !== '✅') {
        followUpQuestions.push('1. 你要解决的具体问题是什么?');
      }
      if (items[1].status !== '✅') {
        followUpQuestions.push('2. 期望输出是什么,成功标准是什么?');
      }
      if (items[2].status !== '✅') {
        followUpQuestions.push('3. 是否有不能改动的约束、技术选型或业务边界?');
      }
      if (items[3].status !== '✅') {
        followUpQuestions.push('4. "什么时候算完成"要有具体标准?');
      }
      if (items[4].status !== '✅') {
        followUpQuestions.push('5. 这次变更影响哪些文件、模块或页面?');
      }
    
      return { taskDescription, items, allClear, unclearItems, followUpQuestions };
    }
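    The two-tier escalation above (⚠️ on a partial keyword hit, ✅ on a combined pattern) can be seen in isolation in this standalone sketch of the input/output check; it is illustrative only, not the module's exported code:

    ```typescript
    // Standalone sketch (illustrative, not the module's exported code) of the
    // two-tier keyword heuristic used for the input/output check: a single
    // keyword hit yields a warning, a combined pattern yields a pass.
    type Status = '❌' | '⚠️' | '✅';

    function rateIoClarity(desc: string): Status {
      const lower = desc.toLowerCase();
      let status: Status = '❌';
      if (/输入|接口|参数|request|input|from/i.test(lower)) status = '⚠️';
      if (/输出|返回|结果|response|output|to/i.test(lower)) status = '⚠️';
      if (/输入.*输出|从.*到|request.*response/i.test(lower)) status = '✅';
      return status;
    }

    console.log(rateIoClarity('fix the bug'));                     // ❌ no keywords
    console.log(rateIoClarity('reads input from a CSV file'));     // ⚠️ input side only
    console.log(rateIoClarity('maps each request to a response')); // ✅ both sides matched
    ```

    Note one caveat of the original heuristic: loose alternatives like `to` can match inside unrelated words (e.g. "refactor" contains "to"), so the warning tier is prone to false positives.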
  • Type definitions: `ClarityCheckItem` interface (id, label, question, status with '❌'|'⚠️'|'✅', detail) and `ClarityCheckResult` interface (taskDescription, items, allClear, unclearItems, followUpQuestions).
    export interface ClarityCheckItem {
      id: string;
      label: string;
      question: string;
      status: '❌' | '⚠️' | '✅';
      detail: string;
    }
    
    export interface ClarityCheckResult {
      taskDescription: string;
      items: ClarityCheckItem[];
      allClear: boolean;
      unclearItems: string[];
      followUpQuestions: string[];
    }
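    The `formatCheckResult` helper that the handler calls is not shown on this page. A plausible sketch, assuming it renders a `ClarityCheckResult` as a plain-text checklist followed by the follow-up questions (the real implementation may differ):

    ```typescript
    interface ClarityCheckItem {
      id: string;
      label: string;
      question: string;
      status: '❌' | '⚠️' | '✅';
      detail: string;
    }

    interface ClarityCheckResult {
      taskDescription: string;
      items: ClarityCheckItem[];
      allClear: boolean;
      unclearItems: string[];
      followUpQuestions: string[];
    }

    // Hypothetical formatter: the real formatCheckResult is not shown here.
    function formatCheckResult(result: ClarityCheckResult): string {
      const lines: string[] = [`Requirement clarity check: "${result.taskDescription}"`, ''];
      for (const item of result.items) {
        lines.push(`${item.status} ${item.label}${item.detail ? ` - ${item.detail}` : ''}`);
      }
      if (result.allClear) {
        lines.push('', 'All five checks passed.');
      } else {
        lines.push('', 'Follow-up questions:', ...result.followUpQuestions);
      }
      return lines.join('\n');
    }
    ```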
  • src/index.ts:109-122 (registration)
    Tool registration in MCP server: declares the 'check_requirements' tool with its description ('5 项需求明确标准检查') and inputSchema (requires taskDescription string).
    {
      name: 'check_requirements',
      description: '【需求澄清】执行 5 项需求明确标准检查。不明确时生成追问问题,禁止猜测执行。',
      inputSchema: {
        type: 'object',
        properties: {
          taskDescription: {
            type: 'string',
            description: '用户提出的任务需求描述',
          },
        },
        required: ['taskDescription'],
      },
    },
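    Given this registration, an MCP client invokes the tool with a standard `tools/call` JSON-RPC 2.0 request. The sketch below shows the wire shape; the `taskDescription` value is illustrative:

    ```typescript
    // Sketch of the JSON-RPC 2.0 message an MCP client sends to invoke the tool.
    // The taskDescription value is illustrative.
    const callRequest = {
      jsonrpc: '2.0' as const,
      id: 1,
      method: 'tools/call',
      params: {
        name: 'check_requirements',
        arguments: {
          taskDescription:
            'Add CSV export to the report page; must not change the existing API',
        },
      },
    };

    console.log(JSON.stringify(callRequest, null, 2));
    ```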
  • src/index.ts:242-243 (registration)
    Tool dispatch: The CallToolRequestSchema handler routes 'check_requirements' to `this.handleCheckRequirements(args)`.
    case 'check_requirements':
      return this.handleCheckRequirements(args);
  • MCP handler wrapper: `handleCheckRequirements` method that extracts taskDescription from args, calls `checkRequirements()` from requirements-check.ts, formats result with `formatCheckResult()`, and returns as MCP text content.
    private async handleCheckRequirements(args: any) {
      const taskDescription = typeof args?.taskDescription === 'string' ? args.taskDescription : '';
      const result = checkRequirements(taskDescription);
      const formatted = formatCheckResult(result);
    
      return {
        content: [{ type: 'text', text: formatted }],
      };
    }
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no structured annotations, the description carries the full burden of disclosure. It discloses the no-guessing rule and question generation, but gives no detail on the 5 standards or the output format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at two sentences; front-loaded with the purpose and the key behavioral rule. No fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks any description of the return value (likely a list of questions or a report). For a tool with no output schema, this is a notable gap. Sibling tools suggest a workflow, but the output context is missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter has 100% schema coverage, and the description does not add significant meaning beyond using the parameter in context, so a baseline score is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool checks requirements against 5 clarity standards and generates follow-up questions. It is specific and distinct from sibling tools like make_plan.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly instructs the tool to generate questions when the requirement is unclear and prohibits guessing, which is clear usage guidance. It could also mention when to prefer this tool over alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
