MCP Server: Generate Manual Test Cases

This MCP server provides the generate_testcases tool to generate manual test cases in Markdown or CSV format from documentation files and custom rules you supply. It supports text and PDF inputs and can leverage LLM sampling to automate the creation of detailed test scenarios.

Installation

npm install
npm run build

Running the server

  • Production: npm start (runs node dist/index.js)
  • Development: npm run dev (runs with ts-node)

The server communicates over stdio (stdin/stdout) and is intended to be started by Cursor or another MCP client.

Configuring in Cursor

Add to your MCP config (e.g. Cursor: Settings → MCP or ~/.cursor/mcp.json):

{
  "mcpServers": {
    "manual-testcases": {
      "command": "node",
      "args": ["/PATH/TO/PROJECT/dist/index.js"]
    }
  }
}

Example with a real path:

{
  "mcpServers": {
    "manual-testcases": {
      "command": "node",
      "args": ["/Users/huenguyen/Desktop/hue-data/hue-data/workspace/mcp-manual-testcases/dist/index.js"]
    }
  }
}

After adding this, Cursor will expose the generate_testcases tool from this server.

Tool: generate_testcases

Generates test cases from documentation and rules.

Parameters

| Parameter | Required | Description |
| --- | --- | --- |
| document_content | No | Document content (text). Omit if using document_path. |
| document_path | No | Path to the document file (txt, md, or PDF). Preferred when available. |
| output_format | No | "markdown" (default) or "csv". CSV uses the columns: 模块,标题,前置条件,步骤描述,预期结果,test1测试人员,test1测试结果,buglink,PRE测试人员,PRE测试结果,buglink. |
| rules | Yes | Rules for generating test cases (format, priority, scope, language, etc.). |
| use_llm | No | Default true. If true and the client supports sampling, the LLM is used to generate test cases; if false, only the formatted prompt is returned. |
| max_tokens | No | Max tokens for the LLM response (default 4096). |

Note: At least one of document_content or document_path is required.
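Put together, a call to this tool might pass arguments like the following. This is an illustrative sketch: the argument names are those documented above, but the path and rules text are placeholders.

```json
{
  "name": "generate_testcases",
  "arguments": {
    "document_path": "docs/feature-login.md",
    "rules": "Table format, English, P1 for happy path",
    "output_format": "csv",
    "use_llm": true,
    "max_tokens": 4096
  }
}
```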

How it works

  1. Read document: From document_content or by reading the file at document_path. PDF files are supported (text is extracted automatically).
  2. Combine with rules: Build a prompt from document + rules.
  3. Generate test cases:
    • If use_llm === true and the client (e.g. Cursor) supports sampling (LLM): the server sends a request to the client so the LLM generates test cases and returns the result.
    • Otherwise (client does not support sampling or use_llm === false): the server returns the formatted prompt for you to copy and use with an external LLM.
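The branch in step 3 can be sketched as follows. This is a minimal sketch with hypothetical helper names (buildPrompt, generateTestcases, SampleFn), not the server's actual code in src/index.ts; it only illustrates the decision between sampling and returning the prompt.

```typescript
// A client-provided sampling callback; present only when the MCP client
// advertises the sampling capability.
type SampleFn = (prompt: string, maxTokens: number) => Promise<string>;

// Combine the document text with the user's rules into one prompt.
function buildPrompt(document: string, rules: string): string {
  return [
    "Generate manual test cases from the documentation below.",
    "Rules:",
    rules,
    "Documentation:",
    document,
  ].join("\n\n");
}

// If use_llm is true and the client supports sampling, ask the client's LLM
// to generate the test cases; otherwise return the formatted prompt so the
// caller can paste it into an external LLM.
async function generateTestcases(
  document: string,
  rules: string,
  useLlm: boolean,
  maxTokens: number,
  sample?: SampleFn,
): Promise<{ viaLlm: boolean; text: string }> {
  const prompt = buildPrompt(document, rules);
  if (useLlm && sample) {
    return { viaLlm: true, text: await sample(prompt, maxTokens) };
  }
  return { viaLlm: false, text: prompt };
}
```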

Example rules

  • "Output test cases as a table: ID, Description, Preconditions, Steps, Expected result, Priority."
  • "Priority P1 for login/payment flows; P2 for secondary screens."
  • "Output language: English."
  • "Each scenario has at most 10 steps; split into multiple scenarios if more complex."

Example usage in Cursor

You can ask the AI in Cursor, for example:

  • "Use the generate_testcases tool: document_path is docs/feature-login.md, rules are 'Table format, English, P1 for happy path'."
  • Or paste document content and use document_content with rules.

Generate test cases locally (script)

The npm run generate-testcases script reads the document (txt, md, or PDF) and the rules file, then either calls OpenAI to generate test cases or prints the formatted prompt.

Environment variables:

| Variable | Description |
| --- | --- |
| DOC_PATH | Path to the requirement document (default: samples/doc-example.md). PDF is supported. |
| RULES_PATH | Path to the rules file (default: samples/rules-example.txt). |
| OUTPUT_FORMAT | prompt (default), csv, or both. |
| OUTPUT_FILE | Output file path. For CSV, the default is samples/generated-testcases.csv. |
| OPENAI_API_KEY | If set, the script calls OpenAI to generate test cases; otherwise the prompt is printed. |

Examples:

# Print formatted prompt (no API key needed)
npm run generate-testcases

# Read requirement from PDF and output CSV (requires OPENAI_API_KEY)
export OPENAI_API_KEY=sk-your-key
export DOC_PATH=path/to/requirements.pdf
export RULES_PATH=samples/rules-example.txt
export OUTPUT_FORMAT=csv
export OUTPUT_FILE=samples/generated-testcases.csv
npm run generate-testcases

CSV columns match the rules format: 模块, 标题, 前置条件, 步骤描述, 预期结果, test1测试人员, test1测试结果, buglink, PRE测试人员, PRE测试结果, buglink.
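For illustration, the row-writing side of that CSV output could look like this minimal sketch. CSV_HEADER, escapeCsv, and toCsvRow are hypothetical names, not the script's actual code; the escaping shown is standard CSV quoting.

```typescript
// Column order matches the rules format described above,
// including the two separate buglink columns.
const CSV_HEADER: string[] = [
  "模块", "标题", "前置条件", "步骤描述", "预期结果",
  "test1测试人员", "test1测试结果", "buglink",
  "PRE测试人员", "PRE测试结果", "buglink",
];

// Quote a field if it contains a comma, a double quote, or a newline,
// doubling any embedded quotes (standard CSV escaping).
function escapeCsv(field: string): string {
  return /[",\n]/.test(field) ? `"${field.replace(/"/g, '""')}"` : field;
}

// Join one row's fields into a CSV line.
function toCsvRow(fields: string[]): string {
  return fields.map(escapeCsv).join(",");
}
```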

Quick try with samples

The samples/ folder contains:

  • doc-example.md — sample requirement doc (中控后台PC, sections 6.1–6.4). It references prototype images in Images/ (image1.png–image4.png).
  • images/ (or Images/) — prototype screenshots; when present next to the doc, the script and the MCP prompt include the image list so generated test cases can reference them (e.g. "参照原型 Images/image1.png", i.e. "refer to prototype Images/image1.png").
  • rules-example.txt — sample rules (table columns, module (模块) naming format 代理后台-AGBE/…, output language, strictness/quality rules).

Generate test cases from doc + images + rules (MCP or script):

  • MCP: Call tool generate_testcases with document_path: samples/doc-example.md, rules: content from samples/rules-example.txt, and optionally output_format: "csv" or "markdown".
  • Script: DOC_PATH=samples/doc-example.md RULES_PATH=samples/rules-example.txt OUTPUT_FORMAT=both npm run generate-testcases (set OPENAI_API_KEY for LLM generation).

Project structure

mcp-manual-testcases/
├── src/
│   └── index.ts       # MCP server + generate_testcases tool
├── samples/
│   ├── doc-example.md      # Sample requirement (references Images/)
│   ├── images/             # Prototype images (image1.png …)
│   ├── rules-example.txt   # Sample rules
│   ├── generated-testcases.md
│   └── generated-testcases.csv
├── dist/              # Build output (after npm run build)
├── package.json
├── tsconfig.json
└── README.md

License

ISC

