Welcome — this lesson covers change management for reliability in Site Reliability Engineering (SRE). Change is both essential and risky: it is how teams deliver value, and it is also the leading cause of outages. This guide explains how to accept change as inevitable while reducing the chance that any given change causes a production outage, so teams can move faster with lower risk.

Start with a common story: an engineer pushes a hotfix directly to production without review because "it was just a quick config tweak." Small, unreviewed changes frequently cause large, preventable incidents. That is why structured change management is essential.

Change is the primary source of production incidents. At Google, roughly 70% of incidents were traced to system modifications — deployments, configuration edits, dependency updates, or environment changes. In short: most production issues are caused, directly or indirectly, by change.
A slide titled "Changes and Service Reliability" stating that change is the primary source of production incidents, with a pie chart from Google's analysis showing 70% caused by system modifications.
If you care about uptime, you must care about how changes are made. Most engineers will at some point be involved in a production outage. The difference between a career-limiting event and a learning experience is often how well your change management process limits and contains damage. Common change failure modes (what often goes wrong):
| Change type | Typical failure modes |
| --- | --- |
| Deployments / rollouts | Expose code bugs, configuration drift, or rollout tooling errors |
| Configuration edits | Typos, wrong environment targets, or incorrect values |
| Dependency upgrades | API changes, behavioral regressions, or compatibility breaks |
| Environment changes | OS/runtime updates, container runtime differences, infra changes |
| Database migrations | Long-running locks, schema incompatibilities, or data corruption |
A presentation slide titled "Changes and Service Reliability" showing a table of common change failure modes and their failure causes. It lists Deployment Failures, Config Changes, Dependency Updates, Env Changes, and DB Migrations with brief descriptions of what can go wrong.
The core SRE challenge is balancing two competing goals: velocity (delivering business value quickly) and stability (keeping services reliable during change). Your processes should let teams move fast while keeping risk acceptable.
A slide titled "Changes and Service Reliability" showing a balance scale that contrasts Velocity (deploying changes quickly for business value) on the left with Stability (maintaining reliable service during changes) on the right.
Think of change like training at the gym: someone who trains consistently builds resilience. Frequent, small, validated changes increase resilience, speed feedback loops, and make recovery easier. Infrequent, large, untested changes are high risk. When testing and validation improve, teams can increase agility while preserving reliability. This produces a virtuous cycle: Small, frequent changes → faster feedback → improved testing and validation → higher confidence → greater stability.
A slide titled "Change and Stability — Partners, Not Opposites" showing a colorful circular diagram called "The Change Confidence Loop." The loop links small, frequent changes to better feedback, improved testing/validation, increased confidence, and greater stability.
Deployment strategies that reduce blast radius and increase safety
| Strategy | Description | When to use |
| --- | --- | --- |
| Blue-green | Deploy the new version alongside the current one, validate it, then switch traffic. Enables near-instant rollback and environment isolation. | Fast API updates, minimal downtime requirements |
| Canary | Roll out to a small subset of servers or users (e.g., 5% of traffic) and observe behavior before increasing traffic. | Testing new logic against production traffic with limited exposure |
| Feature flags | Ship code behind toggles and control feature exposure independently of deploys; enables gradual rollouts and rapid rollbacks. | Complex feature rollouts, gradual experimentation, A/B testing |
A presentation slide titled "Safe Deployment Strategies" showing a table that compares three deployment approaches—Blue-Green, Canary, and Feature Flags—with short descriptions, KodeKloud use cases, and benefits. The table highlights examples like instant rollback for Blue-Green, limited blast radius for Canary, and separating deployment from feature release for Feature Flags.
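To make the feature-flag strategy concrete, here is a minimal sketch of percentage-based flag evaluation. The function name, flag names, and hashing scheme are illustrative assumptions, not from any particular flag library; the key idea is deterministic bucketing, so a given user gets the same decision on every request as you widen the rollout.

```python
import hashlib

def flag_enabled(flag_name: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into [0, 100); the same user always
    gets the same decision for a given flag, so rollouts widen smoothly."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Gradual rollout: start at 5% exposure, widen as confidence grows.
# enabled = flag_enabled("new-checkout", "user-42", 5)
```

Because the bucket depends only on the flag and user, raising `rollout_percent` from 5 to 50 keeps the original 5% of users enabled and adds new ones, rather than reshuffling who sees the feature.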
Monitoring and observability during change

Change without monitoring is flying blind. Track these key metrics during and after deployments to determine whether a change is safe or should be halted or rolled back:
| Metric | Why it matters |
| --- | --- |
| Error rates | Detect spikes in failures immediately after a change |
| Latency | Identify degraded response times affecting UX |
| Traffic | Verify that users can access services and that routing behaves correctly |
| Saturation (CPU, memory, disk) | Catch resource pressure that can lead to cascading failures |
| Deployment progress | Detect stuck rollouts or nodes that fail to update |
Use dashboards and alerting based on these metrics to trigger automated or human responses.
A presentation slide titled "Monitoring Changes" with a target and several colorful arrows on the left and a checklist on the right listing monitoring metrics: Error Rates, Latency, Traffic, and Saturation with brief questions about each. The slide illustrates tracking system health and performance after changes.
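A simple way to wire these metrics into an automated response is to compare a post-deploy snapshot against thresholds and report which signals are breached. The structure, field names, and threshold values below are illustrative assumptions; in practice you would source the numbers from your monitoring system and align thresholds with your SLOs.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    error_rate: float      # fraction of failed requests, e.g. 0.02 = 2%
    p99_latency_ms: float  # 99th-percentile response time
    cpu_saturation: float  # fraction of CPU in use

# Hypothetical thresholds — tune these against your own SLOs.
THRESHOLDS = {"error_rate": 0.05, "p99_latency_ms": 500.0, "cpu_saturation": 0.85}

def breached_signals(s: Snapshot) -> list[str]:
    """Return the list of breached signals; an empty list means the
    change currently looks safe to proceed."""
    breaches = []
    if s.error_rate > THRESHOLDS["error_rate"]:
        breaches.append("error_rate")
    if s.p99_latency_ms > THRESHOLDS["p99_latency_ms"]:
        breaches.append("p99_latency_ms")
    if s.cpu_saturation > THRESHOLDS["cpu_saturation"]:
        breaches.append("cpu_saturation")
    return breaches
```

A dashboard alert or rollout controller can then treat a non-empty result as the cue to pause or revert the rollout.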
When in doubt — roll back

Prefer automatic rollbacks where possible. Modern deployment systems can detect problematic behavior and revert changes before human intervention is required. Common automatic rollback triggers:
  • Error rate thresholds exceeded
  • Latency thresholds exceeded
  • Health checks failing for the new version
  • Deployment timeouts or stalled rollouts
These triggers should align with your SLOs and error budgets so rollbacks happen before user experience degrades unacceptably.
A presentation slide titled "Automatic Rollback Mechanisms" showing four colored tiles for rollback triggers: Error Rate Thresholds, Latency Thresholds, Failed Health Checks, and Deployment Timeouts. Each tile includes a short description of when to roll back (excess errors, degraded response times, failed health checks, or deployments taking too long).
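The four triggers above can be sketched as a single decision function that a rollout controller might call on each evaluation tick. All names, parameters, and default thresholds here are illustrative assumptions; any one breached trigger is enough to revert.

```python
def should_roll_back(error_rate: float, p99_ms: float, healthy: bool,
                     started_at: float, now: float,
                     max_error: float = 0.05, max_p99: float = 500.0,
                     timeout_s: float = 600.0):
    """Evaluate the common automatic rollback triggers.
    Returns the reason to roll back, or None if the rollout may continue."""
    if error_rate > max_error:
        return "error rate threshold exceeded"
    if p99_ms > max_p99:
        return "latency threshold exceeded"
    if not healthy:
        return "health checks failing"
    if now - started_at > timeout_s:
        return "deployment timeout"
    return None
```

Returning the reason (rather than a bare boolean) makes the rollback decision easy to log and to attach to the incident timeline.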
Practical example — Argo Rollouts + Prometheus automated canary analysis

Below is a practical Argo Rollouts example that references an AnalysisTemplate named "error-rate-check". The template uses a Prometheus query to compute an error rate and triggers a failure (and thus a rollback) if the measured error rate exceeds the provided threshold argument.

Rollout (references the analysis template and sets the errorRate arg):
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: example-rollout
spec:
  replicas: 3
  strategy:
    canary:
      steps:
        - setWeight: 10
        - pause: { duration: 30s }
        - analysis:
            templates:
              - templateName: error-rate-check
            # args is a sibling of templates in the analysis step,
            # not nested under the template reference.
            args:
              - name: errorRate
                value: "5" # 5% error threshold
        - setWeight: 50
        - pause: { duration: 30s }
        - setWeight: 100
```
AnalysisTemplate (invoked by the Rollout; uses a Prometheus query and a failureCondition referencing the passed argument):
```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: error-rate-check
spec:
  # Declare the argument so {{args.errorRate}} can be referenced below.
  args:
    - name: errorRate
  metrics:
    - name: http-error-rate
      provider:
        prometheus:
          # Address of your Prometheus instance (placeholder — adjust for your cluster).
          address: http://prometheus.monitoring.svc.cluster.local:9090
          # Prometheus query computing percentage of error requests over total requests in the last 1m
          query: |
            sum(rate(http_request_errors_total[1m]))
            /
            sum(rate(http_requests_total[1m])) * 100
      # Prometheus returns a vector, so compare the first element against
      # the passed-in argument; the rollout fails if the condition is true.
      failureCondition: result[0] > {{args.errorRate}}
```
Align thresholds (for example, 5%) with your SLOs and error budget. Use conservative values for early-stage rollouts and tighten them as you gain confidence.
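As a quick sanity check on threshold selection, you can derive the error budget implied by an SLO and compare it with what a canary threshold tolerates. This is a minimal arithmetic sketch; the SLO and traffic figures are made-up examples.

```python
def monthly_error_budget(slo: float, requests_per_month: int) -> int:
    """Errors allowed per month under an availability SLO.
    A 99.9% SLO leaves a 0.1% error budget."""
    return round((1 - slo) * requests_per_month)

# Example: a 99.9% SLO over 10M requests/month allows ~10,000 failed requests.
budget = monthly_error_budget(0.999, 10_000_000)
```

Note that a 5% canary failure threshold is far looser than a 0.1% error budget; that can be acceptable for a brief, low-weight canary step, but the gap is worth making explicit when you set the number.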
Post-deployment verification

Continue validating after rollout with a layered approach:
  • Smoke tests: basic functionality checks to confirm the service is up.
  • Integration tests: validate interactions between services.
  • Performance tests: simulate real-world load to check behavior under stress.
  • Canary user testing: let a subset of real users try the change.
  • Gradual traffic shifting: ramp traffic slowly while watching metrics.
A slide titled "Post-Deployment Verification" showing a funnel diagram of the deployment verification process with stages: Smoke Tests, Integration Tests, Performance Tests, Canary User Testing, and Gradual Traffic Shifting. Each stage includes a short note (e.g., basic functionality checks, validating component interactions, verifying performance, real users testing, and gradually shifting traffic).
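The smoke-test layer is often just a named set of cheap checks run in sequence. Here is a minimal runner sketch; the runner design and check names are illustrative assumptions, and real checks would hit health endpoints, exercise a login flow, and so on.

```python
from typing import Callable

def run_smoke_tests(checks: dict[str, Callable[[], bool]]) -> dict[str, bool]:
    """Run each named check, treating any raised exception as a failure
    rather than aborting the whole suite."""
    results = {}
    for name, check in checks.items():
        try:
            results[name] = bool(check())
        except Exception:
            results[name] = False
    return results

# Hypothetical usage — real checks would call the deployed service:
# results = run_smoke_tests({
#     "service-up": lambda: http_get("/healthz").ok,
#     "can-query-db": lambda: db_ping(),
# })
```

If any smoke test fails, stop the verification funnel there; the later, more expensive layers (integration, performance, canary users) only run against a build that passes the basics.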
Change management policy — what to define

A written policy should specify different procedures for different change types (regular deployments, DB changes, emergency changes). For example:
| Change category | Scheduling | Typical window | Approval |
| --- | --- | --- | --- |
| Regular deployments | Planned and scheduled | Tue/Thu 10:00–14:00 UTC | Team or release manager |
| Database changes/migrations | Low-traffic window | Sun 02:00–06:00 UTC | DB lead + SRE |
| Emergency changes | As needed, follow emergency process | N/A | Pre-defined approver (SRE Lead) |
A slide titled "Practical Change Management Rules" showing a three-column table comparing Regular Deployments, Database Changes, and Emergency Changes across rows for days, time, reason, approval required, and approver. It lists windows like Tuesday/Thursday 10:00–14:00 UTC for regular deployments and Sunday 02:00–06:00 UTC for database changes, while emergency changes are N/A and require SRE Lead approval.
Your policy should also specify:
  • When to require a postmortem (for example, after emergency changes or any incident causing customer impact).
  • Freeze periods when changes are restricted (e.g., during product launches or high-traffic events).
  • Risk categories (low/medium/high) with approval gates and explicit approvers.
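A policy like the table above is easy to encode so tooling can enforce it rather than relying on memory. The policy table and function below are an illustrative sketch mirroring the example windows and approvers in this lesson, not a standard format.

```python
# Hypothetical policy table mirroring the example change categories above.
POLICY = {
    "regular":   {"approvers": ["team-or-release-manager"], "window": "Tue/Thu 10:00-14:00 UTC"},
    "database":  {"approvers": ["db-lead", "sre"],          "window": "Sun 02:00-06:00 UTC"},
    "emergency": {"approvers": ["sre-lead"],                "window": None},  # any time
}

def required_approvers(category: str) -> list[str]:
    """Look up who must sign off on a change of the given category."""
    if category not in POLICY:
        raise ValueError(f"unknown change category: {category}")
    return POLICY[category]["approvers"]
```

A CI pipeline or chat-ops bot can call this before allowing a deploy, turning the written policy into an automated gate.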
Wrap-up

Effective change management helps teams innovate without compromising reliability. Every deployment, configuration tweak, or feature launch also affects capacity — the next step in reliable operations is capacity planning.