Use case · QA / Test engineer

Test case generation

Produce an exhaustive test plan (happy path + edge cases) from a user story in 15-30 minutes.

Test case generation is one of the highest-return places to inject AI into the QA workflow. From a user story, AI can produce 20-50 test cases in minutes, covering expected behaviors, edge cases, and errors. The QA engineer keeps the core value: prioritizing, executing, and identifying the real bugs AI didn't think to test. This guide presents the workflow.

  1. Submit user story and context

    Provide the story, acceptance criteria, and technical context (API, UI, mobile). The richer the context, the more relevant the generated cases.

  2. Request 4 categories of cases

    Happy path (3-5 cases), edge cases (5-10), errors and invalid inputs (5-10), regression tests (3-5). This gives systematic coverage.

  3. Rank by priority

    AI produces volume; the QA engineer prioritizes. Criteria: business impact, usage frequency, and criticality. The top 20% of cases typically covers 80% of real bugs.
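The scoring above can be sketched as a short script. This is a hypothetical illustration: the 1-5 rating scale, the multiplicative score, and the field names are assumptions, not a standard QA formula.

```python
# Hypothetical sketch: score generated test cases on the three criteria
# above (business impact, usage frequency, criticality), each rated 1-5
# by the QA engineer, then keep the top slice for execution.

def priority_score(case: dict) -> int:
    """Simple multiplicative score; the weighting scheme is an assumption."""
    return case["impact"] * case["frequency"] * case["criticality"]

def top_cases(cases: list[dict], ratio: float = 0.2) -> list[dict]:
    """Return the top `ratio` fraction of cases by score (at least one)."""
    ranked = sorted(cases, key=priority_score, reverse=True)
    keep = max(1, round(len(ranked) * ratio))
    return ranked[:keep]

cases = [
    {"id": "TC-01", "impact": 5, "frequency": 5, "criticality": 4},
    {"id": "TC-02", "impact": 2, "frequency": 1, "criticality": 2},
    {"id": "TC-03", "impact": 4, "frequency": 3, "criticality": 5},
    {"id": "TC-04", "impact": 1, "frequency": 2, "criticality": 1},
    {"id": "TC-05", "impact": 3, "frequency": 4, "criticality": 3},
]
# With ratio=0.2, the top 20% of 5 cases is a single case.
print([c["id"] for c in top_cases(cases)])
```

In practice the ratings themselves stay a human judgment call; the script only makes the ranking reproducible.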

  4. Convert to tool format

    Depending on your stack: Gherkin for Cucumber, TestRail/Xray import formats, or simply a markdown list. AI can convert between formats.
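As one example of such a conversion, a generated case can be rendered as a Gherkin scenario with a few lines of Python. The dict field names here are assumptions for illustration, not a format any tool mandates.

```python
# Hypothetical sketch: render an AI-generated test case (held as a dict)
# into a Gherkin scenario string suitable for a Cucumber .feature file.

def to_gherkin(case: dict) -> str:
    lines = [f"Scenario: {case['title']}"]
    for prereq in case.get("prerequisites", []):
        lines.append(f"  Given {prereq}")   # prerequisites become Given steps
    for step in case.get("steps", []):
        lines.append(f"  When {step}")      # actions become When steps
    lines.append(f"  Then {case['expected']}")  # expected result becomes Then
    return "\n".join(lines)

case = {
    "title": "Login with valid credentials",
    "prerequisites": ["an active user account exists"],
    "steps": ["the user submits a valid email and password"],
    "expected": "the dashboard is displayed",
}
print(to_gherkin(case))
```

Multiple When steps are flattened naively here; real Gherkin often alternates When/And, which is a one-line tweak.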

  5. Maintain over evolution

    Whenever a feature evolves, have the test cases updated too. This keeps the test suite a living asset rather than technical debt.

2 tested and optimized prompts. Adapt the bracketed variables [VARIABLE] to your context.

Test plan from user story

You are a senior QA engineer. Generate a test plan for this user story:

**User story**: [STORY]
**Acceptance criteria**: [LIST]
**Technical stack**: [WEB / MOBILE / API]
**Project context**: [USEFUL INFO]

Produce:
1. **Happy-path** (3-5 cases): nominal behavior
2. **Edge cases** (5-10): boundary values, empty states, first uses, deactivated accounts, partial permissions
3. **Error cases** (5-10): invalid inputs, timeouts, network errors, concurrency conflicts, missing data
4. **Regression tests** (3-5): possible impact on existing features

For each case: (a) ID, (b) title, (c) prerequisites, (d) steps, (e) expected result, (f) priority.
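Once the AI returns cases in this shape, they can be parsed into a small record type for downstream tooling. A minimal sketch, where the field names and the P1-P3 priority scale are assumptions for illustration:

```python
from dataclasses import dataclass, field

# Minimal record type for the per-case structure requested in the prompt:
# ID, title, prerequisites, steps, expected result, priority.

VALID_PRIORITIES = {"P1", "P2", "P3"}  # assumed scale, adapt to your tool

@dataclass
class TestCase:
    id: str
    title: str
    expected_result: str
    priority: str
    prerequisites: list[str] = field(default_factory=list)
    steps: list[str] = field(default_factory=list)

    def __post_init__(self):
        # Catch malformed AI output early rather than in the QA tool import.
        if self.priority not in VALID_PRIORITIES:
            raise ValueError(f"unknown priority: {self.priority}")

tc = TestCase("EC-03", "Checkout with empty cart", "error message shown", "P2",
              steps=["open cart", "press checkout with no items"])
print(tc.id, tc.priority)
```

Validating at parse time is deliberate: AI output occasionally drifts from the requested format, and a loud failure beats a silent bad import.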

Targeted exploratory test

For this feature:

[FEATURE]

Produce a 60-90 min targeted exploration plan:
1. **Charters** (3-5): short exploration missions
2. **Tactics** per charter: approaches to use (boundary testing, error guessing, persona-based)
3. **Hidden risks**: zones where bugs are likely
4. **Questions to ask** during exploration

Objective: structured but open exploration that finds the bugs automated tests miss.

A curated selection of the three best AI tools for test case generation.

Claude AI
4.9/5 · 55 reviews · Free

Why for this use case: Most rigorous for exhaustive case generation with well-anticipated edge cases.

Claude Code
4.9/5 · 92 reviews · 20 USD/month

Why for this use case: Generates within the project context: code access, project conventions, existing fixtures.

ChatGPT
4.9/5 · 528 reviews · 20 USD/month

Why for this use case: Code Interpreter is useful for generating varied test datasets.

Time saved

70% on planning (15-30 min vs 1-2h)

Quality gain

Exhaustive edge case coverage, format ready for QA tools

Stack cost

$20-30/month

Estimates based on 2026 benchmarks and user feedback. Actual ROI depends on your context.

Are generated test cases sufficient?

For systematic coverage: yes. For creativity (improbable cases that reveal subtle bugs): less so. Best practice: AI for the mechanical 80%, human exploration for the remaining 20%.

Can AI prioritize test cases?

For indicative prioritization based on technical criticality: yes. For business prioritization (financial impact, affected client segment): less so. The QA engineer arbitrates based on context.

Should all generated cases be automated?

No. A classic rule of thumb: 70% automated (regression, smoke), 20% manual (exploration, UX), 10% out of scope. AI can advise on the distribution, but the split remains the team's choice.

Does AI really improve quality?

Indirectly: more exhaustive coverage means fewer omissions. It also frees time for exploration and critical testing. Net result: fewer production bugs and more release confidence.

Transparency: some links are affiliate links. No impact on our evaluations or prices.