
CI/CD for QA Engineers Using Azure DevOps: Beginner Guide

A beginner-friendly guide to CI/CD for QA engineers using Azure DevOps. Learn what CI/CD is, how pipelines work, how to integrate automated tests, and how to read and act on pipeline results — with real YAML examples.

InnovateBits · 5 min read

CI/CD is no longer just a developer concern. QA engineers who understand how pipelines work — and how to integrate tests into them — deliver significantly more value than those who only test manually. This guide explains CI/CD from the ground up for QA engineers who want to get practical quickly.


What CI/CD means for QA

Continuous Integration (CI): Every time a developer pushes code, an automated process runs — compiling the code, running tests, and reporting results. If tests fail, the pipeline fails and the developer is notified immediately.

Continuous Delivery (CD): After CI succeeds, the code is automatically deployed to staging (or production, in continuous deployment). QA engineers test in staging, knowing it always contains the latest passing code.

For QA, CI/CD means:

  • Tests run automatically on every code change — no waiting for a dev to "push to staging"
  • Every deployment to staging is known to have passed automated tests
  • Failures are caught immediately, not days later
  • You can run regression tests on-demand without setting anything up manually

How Azure Pipelines works

An Azure Pipeline is defined in a YAML file stored in the repository (azure-pipelines.yml). When a trigger event occurs (push, PR, schedule), Azure DevOps reads the YAML and executes the defined jobs on an agent (a virtual machine).

# azure-pipelines.yml — minimal QA-focused pipeline
 
trigger:
  branches:
    include:
      - main
 
pool:
  vmImage: ubuntu-latest    # Microsoft-hosted Linux VM
 
stages:
  - stage: Test
    jobs:
      - job: RunTests
        steps:
          - script: echo "Pipeline started"
          - script: npm ci
          - script: npm test

Your first test-integrated pipeline

trigger:
  branches:
    include:
      - main
      - feature/*
 
pool:
  vmImage: ubuntu-latest
 
variables:
  NODE_VERSION: '20.x'
 
stages:
  - stage: Build
    displayName: Build Application
    jobs:
      - job: Build
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: $(NODE_VERSION)
            displayName: Setup Node.js
 
          - script: npm ci
            displayName: Install dependencies
 
          - script: npm run build
            displayName: Build application
 
  - stage: UnitTest
    displayName: Unit Tests
    dependsOn: Build
    jobs:
      - job: Unit
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: $(NODE_VERSION)
            displayName: Setup Node.js

          - script: npm ci
            displayName: Install dependencies

          - script: npm run test:unit -- --reporter=junit --output=test-results/unit.xml
            displayName: Run unit tests
 
          - task: PublishTestResults@2
            displayName: Publish unit test results
            inputs:
              testResultsFormat: JUnit
              testResultsFiles: test-results/unit.xml
              testRunTitle: Unit Tests
            condition: always()
 
  - stage: E2ETest
    displayName: End-to-End Tests
    dependsOn: Build
    jobs:
      - job: E2E
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: $(NODE_VERSION)
            displayName: Setup Node.js

          - script: npm ci
            displayName: Install dependencies
          - script: npx playwright install --with-deps chromium
            displayName: Install Playwright browsers
 
          - script: npx playwright test --reporter=junit,html
            displayName: Run E2E tests
            env:
              BASE_URL: $(STAGING_URL)
              # Without this, the junit reporter prints to stdout instead of a file
              PLAYWRIGHT_JUNIT_OUTPUT_NAME: test-results/e2e.xml
 
          - task: PublishTestResults@2
            displayName: Publish E2E results
            inputs:
              testResultsFormat: JUnit
              testResultsFiles: test-results/e2e.xml
              testRunTitle: E2E Tests
            condition: always()
 
          - task: PublishPipelineArtifact@1
            displayName: Upload test report
            inputs:
              targetPath: playwright-report
              artifact: playwright-report
            condition: always()
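The continuous-delivery half can be added as one more stage at the end of this pipeline. The following is a hedged sketch — it assumes a staging environment has been registered under Pipelines → Environments, and deploy.sh is a hypothetical deploy script in the repository:

  - stage: DeployStaging
    displayName: Deploy to Staging
    dependsOn:
      - UnitTest
      - E2ETest
    condition: succeeded()      # deploy only when all test stages pass
    jobs:
      - deployment: Deploy
        environment: staging    # hypothetical environment name
        strategy:
          runOnce:
            deploy:
              steps:
                - checkout: self
                - script: ./deploy.sh staging   # hypothetical deploy script
                  displayName: Deploy to staging

A deployment job (rather than a plain job) records deployment history against the environment, so QA can see exactly which build is currently on staging.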

Reading pipeline results

After a pipeline run, navigate to Pipelines → [Pipeline] → [Run].

The summary shows:

  • Stage results (pass/fail for each stage)
  • Tests tab: total tests, passed, failed, skipped
  • Artifacts: downloadable reports

Click the Tests tab to see:

  • All test cases sorted by status
  • Failed tests with error messages
  • Duration per test
  • Test history (was this test failing before?)

For each failed test, click it to see:

  • The exact assertion that failed
  • Stack trace
  • Any attached screenshots (if your test framework captures them)

Investigating pipeline failures

When a pipeline fails, QA engineers need to distinguish between:

  1. New product bug — a test that was passing now fails due to a code change
  2. Environment issue — the staging environment has a problem unrelated to the code
  3. Test flakiness — the test sometimes passes and sometimes fails for non-deterministic reasons
  4. Test code bug — the test itself is wrong or outdated

Diagnosis steps:

1. Check: was this test passing before this commit?
   → Yes: Likely a regression introduced by this commit
   → No: Was it failing for multiple commits? Likely flaky or environment

2. Click the failed test → read the error message and stack trace
   → Assertion error: test expects X but got Y → product bug or test bug
   → Timeout error: test waited for element that never appeared → flaky or slow environment
   → Connection error: can't reach the application → environment issue

3. Re-run the failed job (click Re-run failed jobs)
   → Passes on re-run: flaky test
   → Still fails: product bug or test bug
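For step 3, a step that is known to be flaky can also be retried automatically while the underlying test is being fixed. A sketch using the built-in per-step retry setting:

- script: npx playwright test
  displayName: Run E2E tests
  retryCountOnTaskFailure: 2    # re-run up to 2 extra times before failing

Use this sparingly — retries mask flakiness rather than fix it, so track which steps needed retries and repair those tests.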

Scheduled nightly regression

Run the full regression suite overnight:

schedules:
  - cron: "0 1 * * 1-5"   # 1 AM Monday-Friday UTC
    displayName: Nightly regression
    branches:
      include:
        - main
    always: true            # Run even if no code change
 
trigger: none               # Don't run on push (handled by PR pipeline)

Set up email notifications: in Project Settings → Notifications, add subscriptions for "Build fails" and "Build succeeds after failure".


Common errors and fixes

Error: Pipeline fails with "npm: command not found"
Fix: Add the NodeTool@0 task before any npm commands to install Node.js on the agent.

Error: PublishTestResults task finds no files
Fix: Check the glob pattern in testResultsFiles. Run find . -name "*.xml" in a preceding script step to debug the actual file path.
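That debug step could look like this — a throwaway sketch to remove once the actual path is confirmed:

- script: |
    echo "Working directory: $PWD"
    find . -name "*.xml" -not -path "./node_modules/*"
  displayName: Debug - locate XML result files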

Error: E2E tests fail with "Browser not found"
Fix: Add npx playwright install --with-deps chromium before the test script. Microsoft-hosted agents don't have Playwright browsers pre-installed.

Error: STAGING_URL is undefined in the test script
Fix: Declare the variable in Pipeline → Variables or Library → Variable Groups. Reference it in YAML as $(STAGING_URL) and pass it to scripts as env: BASE_URL: $(STAGING_URL).
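In YAML, a variable group is linked like this — assuming a group named staging-config (a hypothetical name) has been created under Pipelines → Library:

variables:
  - group: staging-config     # hypothetical group holding STAGING_URL
  - name: NODE_VERSION        # list syntax is required when mixing groups
    value: '20.x'

Note that once any variable group is referenced, all variables must use the list syntax shown above rather than the simple key: value mapping.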

Error: Pipeline runs but tests never execute
Fix: Check whether npm test or npx playwright test exits successfully even with 0 tests. Verify the test file discovery pattern in your test config.
