Risk-Based Testing Approach Using Azure DevOps
Learn how to implement risk-based testing in Azure DevOps: prioritise tests by risk, configure pipelines to run high-risk tests first, use Azure Boards for risk assessment, and make confident release decisions based on risk coverage.
Risk-based testing is about focus: testing the parts of your system that matter most first. In Azure DevOps, this means configuring pipelines, tagging test cases by risk level, and using coverage data to make confident release decisions.
Risk assessment in Azure Boards
Start by assessing risk at the user story level. Add a custom field Risk Level to the User Story work item type:
- Go to Organization Settings → Process → [Your process] → Work item types → User Story
- Add field: Risk Level (Picklist: Critical, High, Medium, Low)
Now every user story has a risk designation. Test cases linked to Critical and High stories run first.
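Risk-tagged stories can also be pulled programmatically via the WIQL REST endpoint. A sketch in TypeScript, assuming the custom field's reference name is `Custom.RiskLevel` (check the actual reference name in your process settings); `buildRiskWiql` and `queryByRisk` are illustrative names, not an Azure SDK:

```typescript
// Build a WIQL query for user stories at a given risk level.
// "Custom.RiskLevel" is an assumed reference name for the custom field.
function buildRiskWiql(riskLevel: string): string {
  return (
    `SELECT [System.Id], [System.Title] FROM WorkItems ` +
    `WHERE [System.WorkItemType] = 'User Story' ` +
    `AND [Custom.RiskLevel] = '${riskLevel}'`
  );
}

// Hypothetical usage against the Azure DevOps WIQL endpoint,
// authenticating with a personal access token (PAT).
async function queryByRisk(org: string, project: string, pat: string, level: string) {
  const res = await fetch(
    `https://dev.azure.com/${org}/${project}/_apis/wit/wiql?api-version=7.0`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Basic ${Buffer.from(":" + pat).toString("base64")}`,
      },
      body: JSON.stringify({ query: buildRiskWiql(level) }),
    }
  );
  return res.json(); // shape: { workItems: [{ id, url }, ...] }
}
```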
Risk classification criteria
| Risk Level | Criteria |
|---|---|
| Critical | Payment processing, authentication, data integrity, legal compliance |
| High | Core user journeys, data export/import, third-party integrations |
| Medium | Secondary features, admin functions, reporting |
| Low | UI polish, non-critical content, rarely-used features |
Tagging test cases by risk
Tag test cases in Azure Test Plans:
- risk:critical — payment flows, auth, data integrity
- risk:high — core user journeys, key integrations
- risk:medium — secondary features
- risk:low — cosmetic, rarely used
Create a query-based suite for each risk level:
- Risk Critical Suite: Type = Test Case AND Tags CONTAINS "risk:critical"
- Risk High Suite: Type = Test Case AND Tags CONTAINS "risk:high"
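For automated suites, the pipeline filters Playwright tests with `--grep`, which matches a regular expression against the full test title; embedding @critical, @high, and so on in titles mirrors the risk: tag convention above. A minimal sketch of that selection behaviour (the helper and sample titles are illustrative, not Playwright internals):

```typescript
// Mimics Playwright's --grep filtering: a test is selected when the
// regex matches its title, so "@critical" in a title acts as a risk tag.
function selectByRisk(titles: string[], riskTag: string): string[] {
  const pattern = new RegExp(`@${riskTag}\\b`);
  return titles.filter((title) => pattern.test(title));
}

// Example: only the @critical test survives the critical-stage filter.
const titles = [
  "checkout completes a card payment @critical",
  "profile avatar upload @high",
  "footer links render @low",
];
const selected = selectByRisk(titles, "critical");
```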
Risk-ordered pipeline execution
Run critical and high-risk tests first so the pipeline fails fast:

```yaml
stages:
  # Critical risk tests: run on every PR
  - stage: CriticalTests
    displayName: Critical Risk Tests
    jobs:
      - job: Critical
        steps:
          - script: npx playwright test --grep "@critical"
            displayName: Critical path tests

  # High risk: run on merge to main
  - stage: HighRiskTests
    displayName: High Risk Tests
    dependsOn: CriticalTests
    condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
    jobs:
      - job: HighRisk
        steps:
          - script: npx playwright test --grep "@high"
            displayName: High risk tests

  # Full regression: nightly scheduled run
  - stage: FullRegression
    displayName: Full Regression
    dependsOn: HighRiskTests
    condition: and(succeeded(), eq(variables['Build.Reason'], 'Schedule'))
    jobs:
      - job: All
        steps:
          - script: npx playwright test
            displayName: All tests
```

Risk coverage report for release decisions
Before releasing, produce a risk coverage summary:
```
Sprint 9 Release — Risk Coverage Report

Critical Risk Areas (Payment, Auth):
  Test cases: 24
  Executed:   24 (100%)
  Passed:     24 (100%)
  Open bugs:  0
  Decision:   ✓ CLEAR

High Risk Areas (Checkout, Profile):
  Test cases: 42
  Executed:   42 (100%)
  Passed:     41 (97.6%)
  Open bugs:  1 (P2 — discount rounding edge case)
  Decision:   ✓ RELEASE WITH KNOWN DEFECT (documented)

Medium Risk Areas (Admin, Reports):
  Test cases: 28
  Executed:   28 (100%)
  Passed:     28 (100%)
  Decision:   ✓ CLEAR

Low Risk Areas (UI polish):
  Test cases: 14
  Executed:   10 (71%)  ← not fully executed
  Passed:     10 (100% of executed)
  Decision:   ✓ ACCEPTABLE (low risk, not fully tested)

Overall:
  Critical/High risk: 100% coverage ✓
  Release recommendation: APPROVED
```
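The decision rule implied by a report like this can be made explicit in code. A sketch of one such gate (the rule below is an interpretation, not a prescribed Azure DevOps feature): critical areas must be fully executed with zero failures, high areas must be fully executed (documented defects tolerated), and medium/low areas may ship with execution gaps.

```typescript
type RiskLevel = "critical" | "high" | "medium" | "low";

interface AreaCoverage {
  level: RiskLevel;
  total: number;    // planned test cases
  executed: number; // test cases actually run
  passed: number;   // test cases that passed
}

// Returns APPROVED only when every critical area is fully executed and
// fully passing, and every high area is fully executed.
function releaseDecision(areas: AreaCoverage[]): "APPROVED" | "BLOCKED" {
  for (const a of areas) {
    const fullyExecuted = a.executed === a.total;
    if (a.level === "critical" && (!fullyExecuted || a.passed !== a.total)) {
      return "BLOCKED";
    }
    if (a.level === "high" && !fullyExecuted) {
      return "BLOCKED";
    }
  }
  return "APPROVED";
}
```

Running this against the Sprint 9 numbers above yields APPROVED; a single critical failure flips the decision to BLOCKED.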
Common errors and fixes
Error: Risk tags aren't consistent across the team
Fix: Enforce tag standardisation in the Definition of Done. Before a test case moves to "Ready" state, it must have a risk: tag. Add this to your team's work item workflow validation.
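The Definition of Done check can be automated anywhere you already pull test case tags (a pipeline script, a pre-Ready hook). A minimal sketch; the helper name and allowed-tag list are assumptions mirroring the convention above:

```typescript
// The four recognised risk tags from the tagging convention above.
const RISK_TAGS = ["risk:critical", "risk:high", "risk:medium", "risk:low"];

// A test case is valid for "Ready" only if it carries exactly one
// recognised risk: tag (case-insensitive, as Azure DevOps tags are).
function hasValidRiskTag(tags: string[]): boolean {
  const matches = tags.filter((t) => RISK_TAGS.includes(t.toLowerCase().trim()));
  return matches.length === 1;
}
```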
Error: Critical tests run but high-risk tests are skipped even on main branch merges
Fix: Check the condition on the HighRiskTests stage. ne(variables['Build.Reason'], 'PullRequest') should allow it to run on push to main. Verify with Build.Reason variable values: IndividualCI (push), PullRequest, Schedule.
Error: Release decision is made without checking all critical test results
Fix: Add a required approval gate that references the risk coverage report. The approver must confirm they've reviewed the critical coverage before approving the production deployment.