
Azure DevOps Test Case Best Practices for QA Teams

Proven best practices for writing, organising, and maintaining test cases in Azure DevOps. Covers naming conventions, step quality, tagging strategy, reuse patterns, and how to keep large test suites maintainable.

InnovateBits · 6 min read

Good test cases are the foundation of reliable QA. In Azure DevOps, test cases are shared work items — they're reused across sprints, linked to requirements, referenced in bug reports, and used to generate reports that executives read. Quality matters at every level.

These are the best practices that make the difference between a test suite that degrades over time and one that grows stronger.


Naming conventions

Consistent names make test cases findable and understandable without reading the steps.

Pattern: [Feature/Page] — [User action or scenario] — [Key condition]

✓ Checkout — Apply discount code — Valid code, first use
✓ Checkout — Apply discount code — Expired code
✓ Checkout — Apply discount code — Already-used code (single use)
✓ Login — Submit form — Empty password field
✓ Profile — Upload avatar — File exceeds 2MB limit

✗ Discount code test
✗ Test login 1
✗ Check if upload works

The three-part pattern ensures that a QA engineer unfamiliar with the test immediately knows:

  • What part of the application is under test
  • What action triggers the behaviour
  • What condition distinguishes this case from similar ones
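The pattern can also be enforced mechanically, for example in a pre-save review script. A minimal sketch, assuming titles use the em-dash separator shown above (the function and regex names are illustrative, not part of Azure DevOps):

```python
import re

# Three non-empty segments joined by " — " (em-dash with spaces).
TITLE_PATTERN = re.compile(r"^[^—]+ — [^—]+ — [^—]+$")

def is_valid_title(title: str) -> bool:
    """Return True if a test case title follows the Feature — Action — Condition pattern."""
    return bool(TITLE_PATTERN.match(title.strip()))
```

A reviewer (or pipeline step) can run this over new test case titles and flag anything that does not split cleanly into the three parts.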

Step quality standards

Every step must be independently executable:

Bad step:

Action: Test the login form
Expected: It should work

Good steps:

Step 1:
Action: Navigate to https://staging.app.com/login
Expected: Page title is "Sign In — AppName", email field and password field visible

Step 2:
Action: Enter "invalid@example.com" in email field, leave password empty, click Sign In
Expected: Password field shows red border, message "Password is required" below field
         Email field shows no error (valid email format)

Step 3:
Action: Enter "invalid@example.com" and "wrongpassword", click Sign In
Expected: Message "Invalid email or password" shown above form (not revealing which is wrong)

The rule: a tester who has never seen the application should be able to execute the steps and know exactly whether they passed or failed.
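Vague expected results can be caught before a test case is saved. A sketch of a simple step linter; the phrase list and dict shape are assumptions for illustration, not an Azure DevOps schema:

```python
# Phrases that signal an unverifiable expected result.
VAGUE_PHRASES = ("should work", "works correctly", "it should", "as expected")

def flag_vague_steps(steps):
    """Return (step number, expected text) pairs whose expected result
    is empty or too vague to verify.

    `steps` is a list of {"action": ..., "expected": ...} dicts.
    """
    flagged = []
    for i, step in enumerate(steps, start=1):
        expected = step.get("expected", "").strip().lower()
        if not expected or any(p in expected for p in VAGUE_PHRASES):
            flagged.append((i, step.get("expected", "")))
    return flagged
```

Anything flagged goes back to the author for a concrete, observable expected result.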


Preconditions and postconditions

Always specify what must be true before the test starts:

Preconditions:
- User is logged in as a registered customer (email: testuser@example.com)
- Shopping cart is empty
- Product "Laptop Pro X" (SKU: LPX-001) is in stock with price £899.99

Postconditions:
- Delete the test order created during execution
- Reset the test user's wishlist to empty

Preconditions make results reproducible. Without them, the same test case run by two testers against different data states will produce different outcomes.


Tagging strategy

Establish a consistent tag taxonomy and enforce it. Suggested tags:

By test type:
  regression      — must run before every release
  smoke           — critical path, run before any testing begins
  exploratory     — unscripted, manual investigation
  performance     — load or speed measurement
  security        — security-specific checks
  accessibility   — WCAG/ADA compliance

By area:
  checkout  auth  search  profile  admin  api  mobile

By sprint (when written):
  sprint-7  sprint-8  sprint-9

By status:
  needs-update  — steps are outdated but not yet fixed
  automated     — covered by automated tests (retained as documentation)
  deprecated    — replaced by another test case

Run bulk tag updates when promoting test cases from sprint to regression status after a feature ships.
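Bulk updates can be scripted against the Azure DevOps Work Items REST API, which stores tags as a single semicolon-separated string in the System.Tags field. A sketch that builds the PATCH request (organisation, project, and IDs are placeholders); the real call must send this body with content type application/json-patch+json and an authenticated session:

```python
def build_tag_update(org, project, work_item_id, new_tags, api_version="7.1"):
    """Build the URL and JSON Patch body that overwrite a work item's tags."""
    url = (f"https://dev.azure.com/{org}/{project}/_apis/wit/workitems/"
           f"{work_item_id}?api-version={api_version}")
    body = [{"op": "add", "path": "/fields/System.Tags",
             "value": "; ".join(new_tags)}]
    return url, body

# Promote a sprint test case to regression after the feature ships:
url, body = build_tag_update("myorg", "myproject", 201,
                             ["regression", "checkout"])
```

Looping this over the IDs returned by a query turns a tedious manual re-tagging pass into one script run.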


Managing shared steps

Create shared steps for any action that appears in 3+ test cases:

Shared steps worth creating:
  - Standard login (admin / member / viewer)
  - Add product to cart
  - Complete checkout (test payment method)
  - Create a test order via API
  - Navigate to user settings

Update shared steps immediately when the underlying flow changes — all test cases using the shared step update automatically.


Reusing test cases across suites

Test cases in Azure DevOps are work items — they're reusable across multiple suites and test plans. Don't duplicate:

✓ Add TC-201 to both "Sprint 7 Regression" and "Wishlist Feature" suite
  (same work item, referenced in two places)

✗ Create TC-201a (copy of TC-201) for regression use
  (now you have two work items to maintain separately)

To add an existing test case to a suite: + Add test cases → Enter test case IDs or search.
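Adding by ID can also be automated. The sketch below builds the URL for the classic Test API's add-test-cases-to-suite call (a POST with the IDs comma-separated in the path); treat the exact endpoint shape as an assumption to verify against your organisation's API version, since the newer Test Plans API uses a different route:

```python
def add_to_suite_url(org, project, plan_id, suite_id, test_case_ids,
                     api_version="5.0"):
    """URL for adding existing test case work items to a suite.

    The same work items end up referenced by the suite, not copied.
    """
    ids = ",".join(str(i) for i in test_case_ids)
    return (f"https://dev.azure.com/{org}/{project}/_apis/test/Plans/"
            f"{plan_id}/suites/{suite_id}/testcases/{ids}"
            f"?api-version={api_version}")
```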


Keeping test cases current

Stale test cases are worse than no test cases — they waste tester time and produce false results.

Maintenance workflow:

  1. After any UI or API change, search for affected test cases (Tags CONTAINS "checkout")
  2. Review each test case for accuracy
  3. Update steps, add tag updated-sprint-N, remove needs-update tag
  4. Note the change in the comment: "Updated step 2 — new checkout flow after v3.1 redesign"

Schedule a quarterly test case audit:

  • 30 minutes per major feature area
  • Review the 10 oldest test cases (sort by Created Date ascending)
  • Close any test cases for removed features
  • Update steps for changed flows
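The searches in step 1 and the quarterly audit translate directly into WIQL queries, runnable from Boards → Queries or via the REST API. A sketch, with the tag value as an example:

```python
# WIQL for step 1: find test cases touched by a checkout change.
AFFECTED = """
SELECT [System.Id], [System.Title]
FROM WorkItems
WHERE [System.WorkItemType] = 'Test Case'
  AND [System.Tags] CONTAINS 'checkout'
"""

# WIQL for the audit: oldest test cases first (Created Date ascending).
OLDEST = """
SELECT [System.Id], [System.Title], [System.CreatedDate]
FROM WorkItems
WHERE [System.WorkItemType] = 'Test Case'
ORDER BY [System.CreatedDate] ASC
"""
```

Saving these as shared queries means the audit starts from the same list every quarter.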

Quality checklist for new test cases

Before saving a new test case, verify:

☑ Title follows [Feature — Action — Condition] pattern
☑ Preconditions documented
☑ Every step has both Action AND Expected Result
☑ Expected results are specific (not "should work" or "correct")
☑ Negative test cases included (not just happy path)
☑ Boundary conditions covered (max, min, empty, null)
☑ Tagged with correct type, area, and sprint
☑ Linked to user story with "Tests" link type
☑ Priority set (don't leave at default "4")
☑ Postconditions documented (cleanup steps)

Common errors and fixes

Error: Test cases are duplicated across sprints instead of reused
Fix: Educate the team on reuse. Use the "Add existing test cases" function instead of creating new ones. Periodic audits with a query (Title CONTAINS [duplicate keywords]) find accidental duplicates.

Error: Test case steps use vague language ("verify X works")
Fix: Establish a peer review process for new test cases. Before any test case moves to "Ready" state, another team member must verify the steps are executable as written.

Error: Tags are inconsistent (regression, Regression, REGRESSION all used)
Fix: Create a tags reference document and pin it to the team wiki. Azure DevOps tags are case-insensitive for filtering but display as entered — standardise on lowercase.
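When cleaning up an existing suite, the lowercase convention can be applied in bulk. A minimal sketch (the function name is illustrative):

```python
def normalise_tags(tags):
    """Lowercase, strip, and dedupe a tag list, preserving first-seen order."""
    seen = []
    for tag in tags:
        t = tag.strip().lower()
        if t and t not in seen:
            seen.append(t)
    return seen
```

Combined with a scripted tag update, this collapses the regression/Regression/REGRESSION variants into one canonical tag.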

Error: Test cases don't reflect recent UI changes
Fix: Add test case review to the definition of done for each user story. Before a story is closed, the QA engineer confirms all linked test cases still accurately reflect the implemented feature.
