Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@Generate Manual Test Cases generate test cases from docs/login.pdf using P1 rules in CSV format".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
MCP Server: Generate Manual Test Cases
This MCP server provides the generate_testcases tool to generate manual test cases from documentation and rules you supply.
Installation
npm install
npm run build

Running the server

Production: npm start (runs node dist/index.js)
Development: npm run dev (runs with ts-node)
The server communicates over stdio (stdin/stdout) and is intended to be started by Cursor or another MCP client.
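Concretely, an MCP client drives the server by writing newline-delimited JSON-RPC messages to its stdin. The sketch below shows the general shape of a `tools/call` request a client might send; the `id` and argument values are illustrative, not the exact message Cursor produces.

```typescript
// Sketch of a JSON-RPC 2.0 request an MCP client could write to the
// server's stdin. Method and params follow the MCP tools/call shape;
// the id and argument values are illustrative.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: Record<string, unknown>;
}

function buildToolCall(tool: string, args: Record<string, unknown>): string {
  const req: JsonRpcRequest = {
    jsonrpc: "2.0",
    id: 1,
    method: "tools/call",
    params: { name: tool, arguments: args },
  };
  // Stdio transport sends one JSON message per line.
  return JSON.stringify(req) + "\n";
}

const line = buildToolCall("generate_testcases", {
  document_path: "samples/doc-example.md",
  rules: "Table format, English",
});
console.log(line.trim());
```

The response comes back the same way on stdout, which is why the server must never write logs to stdout.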
Configuring in Cursor
Add to your MCP config (e.g. Cursor: Settings → MCP or ~/.cursor/mcp.json):
{
"mcpServers": {
"manual-testcases": {
"command": "node",
"args": ["/PATH/TO/PROJECT/dist/index.js"]
}
}
}

Example with a real path:
{
"mcpServers": {
"manual-testcases": {
"command": "node",
"args": ["/Users/huenguyen/Desktop/hue-data/hue-data/workspace/mcp-manual-testcases/dist/index.js"]
}
}
}

After adding this, Cursor will expose the generate_testcases tool from this server.
Tool: generate_testcases
Generates test cases from documentation and rules.
Parameters
| Parameter | Required | Description |
| --- | --- | --- |
| document_content | No | Document content (text). Omit if using document_path. |
| document_path | No | Path to the document file (txt, md, or PDF). Prefer when available. |
| rules | Yes | Rules for generating test cases (format, priority, scope, language, etc.). |
| output_format | No | Output format: "markdown" or "csv". |
| use_llm | No | Whether to generate via the client's LLM (sampling). Default: true. |
| max_tokens | No | Max tokens for the LLM response (default 4096). |
Note: At least one of document_content or document_path is required.
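That precondition can be checked up front before any file I/O. A minimal validation sketch (the function name, argument type, and error messages are my own, not from the server's source):

```typescript
// Hypothetical argument type mirroring the tool's documented parameters.
interface GenerateTestcasesArgs {
  document_content?: string;
  document_path?: string;
  rules: string;
  output_format?: "markdown" | "csv";
  use_llm?: boolean;
  max_tokens?: number;
}

// Returns an error message, or null when the arguments are acceptable.
function validateArgs(args: GenerateTestcasesArgs): string | null {
  if (!args.document_content && !args.document_path) {
    return "At least one of document_content or document_path is required.";
  }
  if (!args.rules) {
    return "rules is required.";
  }
  return null;
}

console.log(validateArgs({ rules: "Table format" })); // prints the missing-document error
console.log(validateArgs({ rules: "Table format", document_path: "docs/a.md" })); // prints null
```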
How it works
1. Read document: from document_content, or by reading the file at document_path. PDF files are supported (text is extracted automatically).
2. Combine with rules: build a prompt from the document + rules.
3. Generate test cases:
   - If use_llm === true and the client (e.g. Cursor) supports sampling (LLM): the server sends a request to the client so the LLM generates the test cases and returns the result.
   - Otherwise (the client does not support sampling, or use_llm === false): the server returns the formatted prompt for you to copy and use with an external LLM.
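The prompt assembly in step 2 amounts to simple string concatenation of document, rules, and (when present) the prototype-image list. A sketch under that assumption; the template wording below is illustrative, not the server's actual phrasing:

```typescript
// Build the prompt that is either sent to the client's LLM or returned
// verbatim when sampling is unavailable. The template text is illustrative.
function buildPrompt(document: string, rules: string, images: string[] = []): string {
  const parts = [
    "Generate manual test cases from the document below.",
    "Rules:\n" + rules,
    "Document:\n" + document,
  ];
  if (images.length > 0) {
    // Listing images lets generated cases reference prototypes,
    // e.g. "参照原型 Images/image1.png" (refer to prototype Images/image1.png).
    parts.push("Prototype images:\n" + images.join("\n"));
  }
  return parts.join("\n\n");
}

const examplePrompt = buildPrompt(
  "Login page spec...",
  "Table format, English",
  ["Images/image1.png"],
);
console.log(examplePrompt.includes("Prototype images")); // true
```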
Example rules
"Output test cases as a table: ID, Description, Preconditions, Steps, Expected result, Priority."
"Priority P1 for login/payment flows; P2 for secondary screens."
"Output language: English."
"Each scenario has at most 10 steps; split into multiple scenarios if more complex."
Example usage in Cursor
You can ask the AI in Cursor, for example:
"Use the generate_testcases tool: document_path is docs/feature-login.md, rules are 'Table format, English, P1 for happy path'."

Or paste the document content and use document_content with rules.
Generate test cases locally (script)
The script reads a document (txt, md, or PDF) and rules, then either calls OpenAI to generate test cases or prints the formatted prompt.
Environment variables:
| Variable | Description |
| --- | --- |
| DOC_PATH | Path to the requirement document (default: …). |
| RULES_PATH | Path to the rules file (default: …). |
| OUTPUT_FORMAT | Output format: markdown, csv, or both. |
| OUTPUT_FILE | Output file path. For CSV, default is …. |
| OPENAI_API_KEY | If set, the script calls OpenAI to generate test cases. |
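Resolving these variables can be sketched as below. The variable names come from the table; the fallback values in the sketch are placeholders, not the script's real defaults:

```typescript
// Resolve script configuration from environment variables.
// The fallback values are placeholders, not the script's actual defaults.
interface ScriptConfig {
  docPath: string;
  rulesPath: string;
  outputFormat: "markdown" | "csv" | "both";
  outputFile?: string;
  openaiApiKey?: string;
}

function resolveConfig(env: Record<string, string | undefined>): ScriptConfig {
  const fmt = env.OUTPUT_FORMAT;
  return {
    docPath: env.DOC_PATH ?? "doc.txt",       // placeholder default
    rulesPath: env.RULES_PATH ?? "rules.txt", // placeholder default
    outputFormat: fmt === "csv" || fmt === "both" ? fmt : "markdown",
    outputFile: env.OUTPUT_FILE,
    openaiApiKey: env.OPENAI_API_KEY,         // when set, enables OpenAI generation
  };
}

const cfg = resolveConfig({ DOC_PATH: "samples/doc-example.md", OUTPUT_FORMAT: "csv" });
console.log(cfg.docPath, cfg.outputFormat); // samples/doc-example.md csv
```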
Examples:
# Print formatted prompt (no API key needed)
npm run generate-testcases
# Read requirement from PDF and output CSV (requires OPENAI_API_KEY)
export OPENAI_API_KEY=sk-your-key
export DOC_PATH=path/to/requirements.pdf
export RULES_PATH=samples/rules-example.txt
export OUTPUT_FORMAT=csv
export OUTPUT_FILE=samples/generated-testcases.csv
npm run generate-testcases

CSV columns match the rules format: 模块 (Module), 标题 (Title), 前置条件 (Preconditions), 步骤描述 (Steps), 预期结果 (Expected result), test1测试人员 (test1 tester), test1测试结果 (test1 result), buglink, PRE测试人员 (PRE tester), PRE测试结果 (PRE result), buglink.
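Cells in these columns (especially 步骤描述 with its multi-line steps) need CSV quoting when they contain commas, quotes, or newlines. A minimal RFC 4180-style escaping sketch, not necessarily the script's exact implementation:

```typescript
// RFC 4180-style CSV escaping: quote a cell when it contains a comma,
// double quote, or newline; double any embedded quotes.
function csvCell(value: string): string {
  if (/[",\n]/.test(value)) {
    return '"' + value.replace(/"/g, '""') + '"';
  }
  return value;
}

function csvRow(cells: string[]): string {
  return cells.map(csvCell).join(",");
}

// Header row matching the rules format used by the script.
const header = [
  "模块", "标题", "前置条件", "步骤描述", "预期结果",
  "test1测试人员", "test1测试结果", "buglink",
  "PRE测试人员", "PRE测试结果", "buglink",
];
console.log(csvRow(header));
// A data row with an embedded quote and multi-line steps (sample values).
console.log(csvRow([
  "代理后台-AGBE/登录", 'Login with "remember me"', "",
  "1. Open page\n2. Log in", "Success", "", "", "", "", "", "",
]));
```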
Quick try with samples
The samples/ folder contains:
doc-example.md — sample requirement doc (中控后台PC, i.e. the central-control admin backend for PC, sections 6.1–6.4). It references prototype images in Images/ (image1.png–image4.png).

images/ (or Images/) — prototype screenshots; when present next to the doc, the script and MCP prompt include the image list so generated test cases can reference them (e.g. "参照原型 Images/image1.png", i.e. "refer to prototype Images/image1.png").

rules-example.txt — sample rules (table columns, 模块 (module) format 代理后台-AGBE/…, language, strict/quality rules).
Generate test cases from doc + images + rules (MCP or script):
MCP: call the generate_testcases tool with document_path: samples/doc-example.md, rules: the content of samples/rules-example.txt, and optionally output_format: "csv" or "markdown".

Script: DOC_PATH=samples/doc-example.md RULES_PATH=samples/rules-example.txt OUTPUT_FORMAT=both npm run generate-testcases (set OPENAI_API_KEY for LLM generation).
Project structure
mcp-manual-testcases/
├── src/
│ └── index.ts # MCP server + generate_testcases tool
├── samples/
│ ├── doc-example.md # Sample requirement (references Images/)
│ ├── images/ # Prototype images (image1.png …)
│ ├── rules-example.txt # Sample rules
│ ├── generated-testcases.md
│ └── generated-testcases.csv
├── dist/ # Build output (after npm run build)
├── package.json
├── tsconfig.json
└── README.md

License
ISC