The QA Engineer's Guide to Scrum: Roles, Events, and Best Practices
A practical guide to Scrum from a QA perspective — how testing fits into sprints, what QA does in each Scrum event, how to influence quality from inside the team, and common anti-patterns to avoid.
Scrum is the most widely adopted Agile framework in the software industry. For QA engineers transitioning from Waterfall or joining their first Scrum team, understanding how testing fits into the Scrum rhythm is essential. This guide covers Scrum from a QA perspective — not as a passive participant, but as an active quality advocate.
Scrum Basics for QA Engineers
Scrum organises work into sprints — time-boxed iterations, typically 1-4 weeks, at the end of which working software is delivered. The three Scrum roles are:
Product Owner — owns the product backlog, prioritises what gets built, defines acceptance criteria.
Scrum Master — facilitates the process, removes impediments, coaches the team on Scrum practices. Not the team manager.
Development Team — the cross-functional team that builds the product. QA engineers are part of the development team in Scrum — not a separate department.
This is the most important conceptual shift for QA engineers from Waterfall backgrounds: in Scrum, quality is not a separate phase owned by QA — it's a team property that everyone contributes to, with QA engineers playing a specialist leadership role.
QA's Role in the Five Scrum Events
1. Sprint Planning
Sprint planning determines what the team commits to building in the upcoming sprint. QA engineers should be active here, not passive.
What to do in sprint planning:
- Estimate testing effort for each story. If a feature requires significant test scenario design, automation work, or exploratory testing, that effort needs to be in the estimate or the sprint will be over-committed.
- Flag testability concerns early. "This story says 'user can upload a document' — what file types are supported? What's the size limit? What happens if the upload fails mid-way? These need to be in the acceptance criteria before we commit."
- Identify test dependencies. "This story depends on the auth service being available in the test environment — is that confirmed?"
- Challenge vague acceptance criteria. Vague stories produce vague tests and inadequate coverage. The sprint planning conversation is the cheapest time to resolve ambiguity.
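Clarified acceptance criteria translate directly into concrete test cases. A minimal sketch in Python for the upload story above — `validate_upload`, the 10 MB limit, and the allowed-types list are all hypothetical examples, not details from the story:

```python
# Sketch: acceptance criteria for "user can upload a document" turned
# into explicit checks. The function, size limit, and type allow-list
# are illustrative assumptions.

ALLOWED_TYPES = {"pdf", "docx", "txt"}   # assumed allow-list
MAX_SIZE_BYTES = 10 * 1024 * 1024        # assumed 10 MB limit

def validate_upload(filename: str, size_bytes: int) -> tuple[bool, str]:
    """Return (accepted, reason) for an upload attempt."""
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext not in ALLOWED_TYPES:
        return False, f"unsupported file type: {ext or 'none'}"
    if size_bytes > MAX_SIZE_BYTES:
        return False, "file exceeds size limit"
    return True, "ok"

# Each clarified criterion becomes a reviewable, automatable test case.
assert validate_upload("report.pdf", 1024) == (True, "ok")
assert validate_upload("report.exe", 1024)[0] is False
assert validate_upload("report.pdf", MAX_SIZE_BYTES + 1)[0] is False
assert validate_upload("README", 10)[0] is False  # no extension
```

The point is not the code itself but the conversation: every question raised in planning ("what types? what limit?") becomes one line in a test like this, which is far cheaper than discovering the ambiguity mid-sprint.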
2. Daily Standup
The standup covers: what I did yesterday, what I'm doing today, any blockers. QA-specific patterns:
- Communicate testing progress: "I finished testing AUTH-123 and found two issues — one's minor and one's a blocker for AUTH-124. I'm logging both now."
- Surface blockers early: "I'm blocked on testing the payment flow because the Stripe test keys aren't configured in staging yet."
- Coordinate with developers: "I'll be ready to test DEV-456 this afternoon — is the feature branch deployed to staging?"
3. Sprint Review
The sprint review demonstrates working software to stakeholders. QA's role:
- Present testing results. A brief summary of what was tested, test coverage achieved, and any known defects not yet resolved.
- Facilitate the demo in some teams — QA engineers often have the broadest system knowledge and can demonstrate features confidently.
- Note feedback from stakeholders that has quality implications ("That error message is confusing" is a quality concern that should turn into a bug or story).
4. Sprint Retrospective
The retrospective asks: what went well, what didn't, what will we change? QA perspective:
- Raise quality data: "We had three defects escape to production this sprint. Two were in areas we don't have automated coverage for — I'd like us to prioritise that next sprint."
- Address process issues: "We keep getting stories that arrive for testing on the last day with insufficient time to test properly. Can we agree on a ready-for-testing gate earlier in the sprint?"
- Celebrate quality wins: "The new login flow automation we added caught the regression from last week's change before it reached production — that's exactly what we built it for."
5. Product Backlog Refinement (Grooming)
Refinement happens mid-sprint to prepare stories for upcoming sprints. (The Scrum Guide treats refinement as an ongoing activity rather than a formal event, but most teams run it as a recurring meeting.) This is where QA can have the most influence.
QA activities in refinement:
- Write or review acceptance criteria
- Break stories into testable chunks
- Identify edge cases and add them as sub-tasks
- Estimate test effort
- Flag stories that need a technical spike before testing is possible
The Definition of Done: QA's Most Important Tool
The Definition of Done (DoD) defines what it means for a story to be "finished." Without QA input, teams default to "it works on my machine." Push for a DoD that includes quality:
Definition of Done:
✅ Acceptance criteria met and verified by QA
✅ Unit tests written and passing
✅ API tests updated for changed endpoints
✅ No critical or high bugs outstanding
✅ Code reviewed (including test review)
✅ Deployed to staging and smoke tested
✅ Documentation updated if API changed
A DoD without testing criteria is an incomplete DoD. Advocate for this at every retrospective until it's embedded in the team's working agreement.
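A DoD is most effective when it is checked mechanically rather than by memory. A minimal sketch, assuming a story's status is tracked as a simple dict — the checklist keys and story structure here are illustrative, not from any particular tracker:

```python
# Sketch: evaluating a story against the team's Definition of Done.
# The checklist items mirror the example DoD above; field names are
# hypothetical.

DEFINITION_OF_DONE = [
    "acceptance_criteria_verified",
    "unit_tests_passing",
    "api_tests_updated",
    "no_critical_bugs",
    "code_reviewed",
    "smoke_tested_on_staging",
    "docs_updated",
]

def is_done(story: dict) -> list[str]:
    """Return the list of unmet DoD items (an empty list means Done)."""
    return [item for item in DEFINITION_OF_DONE if not story.get(item, False)]

story = {
    "acceptance_criteria_verified": True,
    "unit_tests_passing": True,
    "api_tests_updated": True,
    "no_critical_bugs": False,   # a high-severity bug is still open
    "code_reviewed": True,
    "smoke_tested_on_staging": True,
    "docs_updated": True,
}

assert is_done(story) == ["no_critical_bugs"]
```

Teams that wire a checklist like this into their board or CI pipeline make "done" unambiguous: a story either satisfies every item or it stays in progress.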
Sprint Testing: Practical Patterns
The "desk check"
Before formally testing a story, QA engineers do a quick desk check with the developer when the feature is "dev complete":
- Developer demonstrates the happy path on their machine
- QA reviews: does this match the acceptance criteria? Are there obvious gaps?
- Any issues resolved before the story moves to the test environment
Desk checks catch low-hanging fruit (missing validation, incomplete flows) without consuming full testing cycles.
Pairing on tests
For complex or high-risk features, QA and developer pair to write tests together. The developer understands implementation details that help write better tests; the QA engineer understands edge cases and risk. The result is better coverage and shared ownership.
Testing in the sprint vs. testing after
A common dysfunction is "sprint commitments" that don't include testing time. Features are "coded" in one sprint and "tested" in the next — this is mini-Waterfall, not Agile.
Effective Scrum teams size sprints so testing is included in the sprint that builds the feature. This requires:
- Honest story estimation that includes testing effort
- Starting testing early in the sprint (as soon as a feature is dev-complete, not at the end)
- Developers helping with unit and API tests so QA's effort focuses on validation and E2E
Common Scrum Anti-Patterns for QA
The testing bottleneck. QA can't keep pace with development, stories pile up, and sprint end means "coded" not "done." Fix: involve QA earlier (desk checks, pairing), ensure DoD includes testing, and balance team capacity.
QA as gatekeeper. When QA is seen as the team responsible for quality rather than a quality advocate, developers stop caring about their own code quality. Fix: make quality metrics visible to the whole team; celebrate shared ownership.
Ignoring automation debt. Sprints deliver features but never invest in automation. Manual regression grows sprint over sprint. Fix: include automation stories in every sprint backlog; treat automation debt like technical debt.
Testing only at the end. All testing crammed into the last two days of a two-week sprint. Fix: test incrementally as stories complete, not in a batch at sprint end.
Vague acceptance criteria. "User can search for products." Search by what? With what results? With no results? Fix: use Given/When/Then format for acceptance criteria:
Given I am on the products page
When I search for "wireless headphones"
Then I see products matching "wireless headphones"
And results are sorted by relevance
And the search term is highlighted in results
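Criteria written in this format map almost one-to-one onto automated checks. A minimal sketch covering the first two "Then" clauses (matching and relevance ordering), assuming a hypothetical in-memory `search_products` function — the catalogue and relevance scores are invented for illustration:

```python
# Sketch: the Given/When/Then search criteria as executable checks.
# search_products and the catalogue are hypothetical examples.

CATALOGUE = [
    {"name": "Wireless Headphones Pro", "relevance": 0.9},
    {"name": "Wired Headphones", "relevance": 0.2},
    {"name": "Wireless Headphones Lite", "relevance": 0.7},
]

def search_products(query: str) -> list[dict]:
    """Return products matching every query term, highest relevance first."""
    terms = query.lower().split()
    hits = [p for p in CATALOGUE if all(t in p["name"].lower() for t in terms)]
    return sorted(hits, key=lambda p: p["relevance"], reverse=True)

# When I search for "wireless headphones" ...
results = search_products("wireless headphones")

# Then I see products matching "wireless headphones" ...
assert [p["name"] for p in results] == [
    "Wireless Headphones Pro",
    "Wireless Headphones Lite",
]
# And results are sorted by relevance.
assert results[0]["relevance"] >= results[1]["relevance"]
```

Each Given/When/Then line becomes one setup step or one assertion, which is why teams using this format tend to get tighter alignment between acceptance criteria and their automated suites.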
QA Career in a Scrum World
The best QA engineers in Agile teams are not test executors — they're quality advocates who influence the entire development process. They write automation, yes, but they also:
- Shape requirements to be testable and complete
- Coach developers on writing better tests
- Drive adoption of quality engineering practices across the team
- Use defect data to identify systemic process improvements
- Own the quality strategy for their team
This is a higher-value, more influential role than "the person who tests things at the end." Claiming it requires active participation in every Scrum event, not just the testing phases.
For more on QE strategy and how to build this kind of practice, see our Quality Engineering Strategy Roadmap. For the automation practices that underpin effective Agile testing, see our Playwright and API Testing guides.