
QA Checklist with MCP: Generate Test Steps from Tasks

Generate comprehensive test cases from task descriptions using MCP. This QA workflow shows you how to read a task, create test steps covering functionality and edge cases, and add them as a comment—with variants for regression tests, smoke tests, and integration tests.

What This Workflow Enables

The QA checklist workflow with MCP helps you build thorough test coverage:

Key Outcomes

  • Test case generation: Create comprehensive test steps from task descriptions
  • Coverage analysis: Identify what needs to be tested (happy path, edge cases, error handling)
  • Test type variants: Generate regression, smoke, and integration test cases
  • Documentation: Add test cases as comments for QA tracking
  • Quality assurance: Ensure tasks have clear, testable outcomes

Prerequisites

Before using this workflow, ensure you have:

  • A Corcava MCP connection configured in your AI assistant
  • An API key with read access to the workspace (write access is needed to post comments)

Step-by-Step Workflow

Step 1: Read the Task

Start by getting the full task context:

Task Reading Prompt

"Read task #[ID] and analyze what needs to be tested:
- Get the task title, description, and acceptance criteria
- Review any existing comments for context
- Identify the functionality that needs testing
Show me the task details before generating test cases."

What the AI does:

  1. Calls get_task to retrieve task details
  2. Calls list_task_comments to get existing context
  3. Analyzes task description and acceptance criteria
  4. Presents task context for test case generation
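The read step can be sketched in code. This is a minimal sketch assuming a hypothetical MCP client object with a generic `call_tool` method; the tool names `get_task` and `list_task_comments` come from this workflow, but the argument and response field names are assumptions.

```python
# Sketch of Step 1, assuming a hypothetical MCP client interface.
# Tool names (get_task, list_task_comments) are from this workflow;
# call_tool() and the response field names are assumptions.

def build_task_context(client, task_id: int) -> str:
    """Gather task details and comments into one context block for test generation."""
    task = client.call_tool("get_task", {"task_id": task_id})
    comments = client.call_tool("list_task_comments", {"task_id": task_id})

    lines = [
        f"Task #{task_id}: {task['title']}",
        f"Description: {task['description']}",
        "Acceptance Criteria:",
    ]
    lines += [f"  - {c}" for c in task.get("acceptance_criteria", [])]
    if comments:
        lines.append("Existing comments:")
        lines += [f"  - {c['text']}" for c in comments]
    return "\n".join(lines)
```

The point of assembling everything into one block is that the AI generates test cases from a single, complete view of the task rather than from the title alone.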

Step 2: Generate Test Cases

Create comprehensive test steps:

Test Case Generation Prompt

"For task #[ID], generate test cases covering:
- Happy path scenarios (main functionality)
- Edge cases and boundary conditions
- Error handling and invalid inputs
- Integration points (if applicable)
Format as numbered test steps with:
1. Test case name
2. Steps to execute
3. Expected result
Show me all test cases before adding them as a comment."

What the AI does:

  1. Analyzes task requirements
  2. Identifies test scenarios
  3. Creates detailed test steps
  4. Formats test cases clearly
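The numbered format the prompt asks for (name, steps, expected result) can be modeled as a small data structure. The dataclass shape below is an illustration, not a Corcava format.

```python
# Sketch of the test-case format described above: each case has a name,
# a list of steps, and an expected result, rendered as numbered entries.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    steps: list[str] = field(default_factory=list)
    expected: str = ""

def format_test_cases(cases: list[TestCase]) -> str:
    """Render test cases as the numbered list used in the QA comment."""
    out = []
    for i, case in enumerate(cases, start=1):
        steps = ", ".join(case.steps)
        out.append(f"{i}. {case.name}: Steps: {steps}. Expected: {case.expected}")
    return "\n".join(out)
```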

Step 3: Review Test Cases

Review the generated test cases:

Review Prompt

"Review the generated test cases:
- Check that all acceptance criteria are covered
- Verify test steps are clear and actionable
- Ensure edge cases are included
- Confirm test cases are comprehensive
Show me the complete test case list formatted for a comment."

What the AI does:

  1. Formats test cases for comment
  2. Ensures all criteria are covered
  3. Presents formatted test case list
  4. Waits for approval before posting
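The coverage check in the review step can be approximated mechanically. The sketch below uses simple keyword overlap, which is an assumption for illustration; the AI's actual review is semantic rather than string matching.

```python
# Sketch of the review step's coverage check: flag acceptance criteria
# that no test case appears to touch. Keyword overlap is a stand-in for
# the AI's semantic review.

def uncovered_criteria(criteria: list[str], test_case_text: str) -> list[str]:
    """Return acceptance criteria with no keyword overlap in the test cases."""
    text = test_case_text.lower()
    missing = []
    for criterion in criteria:
        # Ignore short filler words; look for any meaningful word in the text.
        words = [w for w in criterion.lower().split() if len(w) > 3]
        if not any(w in text for w in words):
            missing.append(criterion)
    return missing
```

Any criterion this flags is a cue to ask the AI for additional test cases before posting.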

Step 4: Add Test Cases as Comment (After Approval)

Post the test cases after confirmation:

Post Test Cases Prompt (Requires Approval)

"Add the test cases as a comment to task #[ID]:
- Comment title: 'QA Test Cases'
- Include all generated test cases
- Format clearly with numbered steps
Show me the comment text first, then wait for my confirmation. I must type 'CONFIRM' or 'YES, ADD COMMENT' before you post the comment."

What the AI does:

  1. Drafts comment with test cases
  2. Shows preview for approval
  3. Waits for confirmation token
  4. Calls add_task_comment after approval
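The approval gate works as a hard precondition on the write: nothing is posted unless the reply is an exact confirmation token. In this sketch, `post_comment` stands in for the `add_task_comment` MCP tool; its signature is an assumption.

```python
# Sketch of the Step 4 approval gate: post only on an exact confirmation
# token. post_comment is a placeholder for the add_task_comment tool.

CONFIRMATION_TOKENS = {"CONFIRM", "YES, ADD COMMENT"}

def post_if_confirmed(user_reply: str, task_id: int, comment: str, post_comment) -> bool:
    """Post the drafted comment only when the reply is a confirmation token."""
    if user_reply.strip().upper() not in CONFIRMATION_TOKENS:
        return False  # anything else aborts the write operation
    post_comment(task_id, comment)
    return True
```

Requiring an exact token (rather than any affirmative-sounding reply) is what makes the write operation safe by default.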

Complete Workflow Prompts

Here are complete, copy-paste ready prompts:

Full QA Test Case Generation

"Generate QA test cases for task #[ID]:
1. Read the task and analyze what needs to be tested
2. Generate comprehensive test cases covering:
   - Happy path scenarios
   - Edge cases and boundary conditions
   - Error handling
   - Integration points (if applicable)
3. Format test cases as numbered steps with test name, steps, and expected result
4. Show me all test cases for review
5. Wait for my 'CONFIRM' or 'YES, ADD COMMENT' before posting as a comment
Important: Always preview test cases before posting, and only post after explicit confirmation."

Use this when: You want comprehensive test case generation

Quick Test Case Generation

"Generate test cases for task #[ID]. Cover happy path, edge cases, and errors. Show me the test cases, then wait for my 'CONFIRM' before posting as a comment."

Use this when: You want a quick, lightweight set of test cases

Workflow Variants by Test Type

Variant 1: Regression Test Cases

Generate regression test cases that verify existing functionality:

"Generate regression test cases for task #[ID]:
- Focus on verifying existing functionality still works
- Test backward compatibility
- Check for breaking changes
- Include tests for related features that might be affected
Format as regression test cases with clear pass/fail criteria."

What happens: AI generates regression-focused test cases

Variant 2: Smoke Test Cases

Generate quick smoke test cases for basic verification:

"Generate smoke test cases for task #[ID]:
- Focus on critical path functionality only
- Keep tests minimal and fast
- Verify core features work
- Skip edge cases and detailed scenarios
Format as quick smoke tests (5-10 test cases max)."

What happens: AI generates minimal smoke test cases

Variant 3: Integration Test Cases

Generate integration test cases for system interactions:

"Generate integration test cases for task #[ID]:
- Focus on interactions between components
- Test API endpoints and data flow
- Verify system integration points
- Include end-to-end scenarios
Format as integration test cases with system-level verification."

What happens: AI generates integration-focused test cases

Variant 4: End-to-End Test Cases

Generate end-to-end test cases for complete user workflows:

"Generate end-to-end test cases for task #[ID]:
- Cover complete user workflows from start to finish
- Test full user journeys
- Include multiple user roles if applicable
- Verify complete feature functionality
Format as end-to-end test cases with full workflow steps."

What happens: AI generates comprehensive end-to-end test cases

Example: Complete QA Test Case Session

Here's an example of how a QA test case generation session flows:

User Prompt

"Generate QA test cases for task #123."

AI Response (Task Details)

Task #123: Implement user authentication

Description: Add user login/logout functionality with email and password authentication.

Acceptance Criteria:

  • User can register with email and password
  • User can login with credentials
  • User can logout
  • Session management works correctly

AI Generated Test Cases

QA Test Cases (Ready to Post):

Happy Path Tests:

  1. User Registration: Steps: Navigate to register page, enter valid email and password, submit. Expected: User account created, redirect to login.
  2. User Login: Steps: Enter registered email and password, click login. Expected: User logged in, session created, redirect to dashboard.
  3. User Logout: Steps: Click logout button. Expected: Session terminated, redirect to login page.

Edge Cases:

  1. Duplicate Email Registration: Steps: Register with existing email. Expected: Error message, account not created.
  2. Invalid Login Credentials: Steps: Enter wrong password. Expected: Error message, no session created.
  3. Session Expiry: Steps: Wait for session timeout, try to access protected page. Expected: Redirect to login, session expired message.

Error Handling:

  1. Invalid Email Format: Steps: Enter invalid email format. Expected: Validation error, registration/login blocked.
  2. Weak Password: Steps: Enter password that doesn't meet requirements. Expected: Validation error, password requirements shown.

Type 'CONFIRM' or 'YES, ADD COMMENT' to post these test cases.

User Approval

"CONFIRM"

AI Action (After Approval)

✅ Posted QA test cases comment to task #123

Test cases have been added to the task for QA tracking.

Safety Patterns

✅ Preview and Approval Workflow

This workflow uses preview and approval for all write operations:

  • Read-first: Always read task details before generating test cases
  • Preview test cases: Show all test cases before posting
  • Confirmation tokens: Require "CONFIRM" or "YES, ADD COMMENT" before posting
  • Review step: Allow review and editing before posting
  • Clear formatting: Test cases formatted clearly for readability

Troubleshooting

Task Not Found

Symptom: Cannot retrieve task details

Possible causes:

  • Task ID incorrect
  • Task in different workspace
  • API key lacks read access

Fix: Verify task ID and API key permissions. Check workspace access.

Test Cases Too Generic

Symptom: Generated test cases are too vague

Possible causes:

  • Task description lacks detail
  • Acceptance criteria missing

Fix: Ask for more specific test cases: "Generate detailed test cases with specific steps and expected results"

Comments Not Being Posted

Symptom: AI doesn't post comments even after approval

Possible causes:

  • Confirmation token not recognized
  • API key lacks write permissions
  • Task ID incorrect

Fix: Use exact confirmation: "CONFIRM" or "YES, ADD COMMENT". Verify API key has write access.

Related Tools

This workflow uses these Corcava MCP tools: get_task, list_task_comments, and add_task_comment.
