Step-by-step guide to deploy and configure an Azure Load Balancer, covering frontend/backend setup, load balancing rules, health probes, NAT, outbound connectivity, and zone-redundant deployment
This guide walks through deploying an Azure Load Balancer from the portal, explaining each component and showing the step-by-step configuration you need to build a zone-redundant, load-balanced web tier. Key concepts covered:
Frontend IP configurations and public vs. internal load balancers
Backend pools (VM NICs or static IPs)
Load balancing rules and session persistence
Health probes (TCP/HTTP/HTTPS)
Inbound NAT rules for per-VM access (SSH/RDP)
Outbound connectivity and NAT gateway recommendations
First step: choose your Azure subscription and resource group, provide a unique name for the Load Balancer, select the deployment region, and pick the appropriate SKU (Basic, Standard, or Gateway) based on required features and scale.
Select the Load Balancer type (Public/internet-facing or Internal/private) and the tier (Regional or Global). Once those are chosen, the next focus is backend pools.
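The same initial choices can be sketched with the Azure CLI. This is an illustrative sketch, not the portal steps themselves; the resource names (rg-az700, AZ-700-Web-LB, AZ-700-LB-PIP, AZ-700-LB-FE, AZ-700-LB-BEP) and region are placeholders:

```shell
# Create a zone-redundant Standard public IP for the frontend
# (zones 1 2 3 makes the IP zone-redundant in supported regions)
az network public-ip create \
  --resource-group rg-az700 \
  --name AZ-700-LB-PIP \
  --location eastus2 \
  --sku Standard \
  --zone 1 2 3

# Create a public Standard Load Balancer using that frontend IP
az network lb create \
  --resource-group rg-az700 \
  --name AZ-700-Web-LB \
  --location eastus2 \
  --sku Standard \
  --public-ip-address AZ-700-LB-PIP \
  --frontend-ip-name AZ-700-LB-FE \
  --backend-pool-name AZ-700-LB-BEP
```

Omitting `--public-ip-address` and supplying a subnet instead would create an internal (private) load balancer.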
Session persistence determines whether requests from the same client are consistently routed to the same backend instance. By default, Azure Load Balancer uses a five-tuple hash:
Source IP
Source port
Destination IP
Destination port
Protocol
Persistence options:
None — no stickiness (five-tuple hashing; best distribution)
Client IP — two-tuple hashing (source IP, destination IP)
Client IP and protocol — three-tuple hashing (source IP, destination IP, protocol)
Use session persistence for applications that require session awareness — for example, shopping carts, login sessions, or forms.
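In the Azure CLI, persistence is set per load balancing rule via `--load-distribution`. A hedged sketch, assuming the load balancer, frontend, backend pool, and probe from earlier already exist (all names are placeholders):

```shell
# --load-distribution values map to the persistence options above:
#   Default          -> None (five-tuple hash)
#   SourceIP         -> Client IP (two-tuple)
#   SourceIPProtocol -> Client IP and protocol (three-tuple)
az network lb rule create \
  --resource-group rg-az700 \
  --lb-name AZ-700-Web-LB \
  --name AZ700-LB-web-rule \
  --protocol Tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name AZ-700-LB-FE \
  --backend-pool-name AZ-700-LB-BEP \
  --probe-name AZ-700-web-HP \
  --load-distribution SourceIP
```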
Health probes continuously check backend instances and ensure traffic is routed only to healthy VMs. Configure a probe and associate it with your load balancing rule. Probe types are TCP, HTTP, and HTTPS; key parameters are the port, the probe interval in seconds, and, for HTTP/HTTPS probes, the request path.
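A CLI sketch of creating the HTTP probe used later in this walkthrough (names are placeholders):

```shell
# HTTP probe on path "/" every 5 seconds; a backend instance is marked
# unhealthy after consecutive failures and removed from rotation
az network lb probe create \
  --resource-group rg-az700 \
  --lb-name AZ-700-Web-LB \
  --name AZ-700-web-HP \
  --protocol Http \
  --port 80 \
  --path / \
  --interval 5
```

For a plain TCP probe, drop `--path` and set `--protocol Tcp`; the probe then only checks that the port accepts connections.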
With a Standard Load Balancer, backend VMs do not automatically get outbound Internet access. To enable SNAT-based outbound connectivity, create an explicit outbound rule that uses the load balancer's public IP or public IP prefix. One outbound rule can apply to multiple backend pools. If you require large-scale outbound connections (to avoid SNAT port exhaustion), use a NAT gateway for the subnet; it is optimized for high-scale outbound translations.
Use a NAT gateway when your backend VMs require many concurrent outbound connections or when SNAT port exhaustion becomes a concern. For simple outbound needs, a Standard Load Balancer outbound rule is sufficient.
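A sketch of attaching a NAT gateway to the backend subnet (VNet and subnet names are assumptions for illustration):

```shell
# Public IP dedicated to the NAT gateway
az network public-ip create \
  --resource-group rg-az700 \
  --name az700-natgw-pip \
  --sku Standard

# Create the NAT gateway
az network nat gateway create \
  --resource-group rg-az700 \
  --name az700-natgw \
  --public-ip-addresses az700-natgw-pip \
  --idle-timeout 4

# Associate it with the subnet hosting the backend VMs;
# all outbound flows from the subnet then SNAT through the gateway
az network vnet subnet update \
  --resource-group rg-az700 \
  --vnet-name az700-vnet \
  --name web-subnet \
  --nat-gateway az700-natgw
```

Once a NAT gateway is associated with a subnet, it takes precedence over load balancer outbound rules for that subnet's traffic.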
Portal walkthrough — build a public Standard Load Balancer (zone-redundant)
In this example we place a public Standard Load Balancer in front of three Linux VMs distributed across three availability zones to achieve zone redundancy. I have three VMs deployed in East US 2, one per availability zone. The goal is to place the Load Balancer in front of them so that if one zone fails, the remaining zones still serve traffic.
Steps (summary):
Create a Standard Load Balancer (use Standard SKU unless you have a specific Basic requirement).
Add a frontend IP configuration (create a new public IP — you can enable availability zones here for zone redundancy).
Create a backend pool and add the three VM NICs.
Create a health probe (e.g., HTTP /).
Create a load balancing rule mapping frontend port 80 → backend port 80 with the probe and desired session persistence.
Optionally add inbound NAT rules for SSH to individual VMs and configure outbound rules or a NAT gateway for Internet access.
Create a Standard Load Balancer and select subscription, resource group, name (e.g., “AZ-700 Web LB”), region (East US 2), SKU = Standard.
Note: the Basic SKU has limited capabilities and may not be available in all subscriptions or regions; choose Standard for production and zone-redundant scenarios. Add a frontend IP configuration for a public load balancer: give it a name (e.g., AZ-700-LB-FE), choose IPv4, and create a new Public IP (e.g., AZ-700-LB-PIP). Optionally select availability zones for the public IP to enable zone redundancy.
Add the backend pool by selecting the VNet where the VMs live and then adding each VM NIC (or IP configuration).
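Adding a VM NIC to the backend pool can also be done per NIC from the CLI. The NIC and IP configuration names below are assumptions (the portal shows the actual NIC name on each VM's networking page):

```shell
# Attach one VM's NIC IP configuration to the backend pool;
# repeat for each of the three VMs
az network nic ip-config address-pool add \
  --resource-group rg-az700 \
  --nic-name vm-az700-lb-web-az1-nic \
  --ip-config-name ipconfig1 \
  --lb-name AZ-700-Web-LB \
  --address-pool AZ-700-LB-BEP
```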
You can skip inbound rules / NAT rules during initial creation and add them later. Review and deploy the Load Balancer.
Next, create the load balancing rule and configure:
Health probe: AZ-700-web-HP (HTTP probe on /, 5-second interval)
Session persistence: None (or Client IP if session affinity is required)
Save to create the rule, which maps frontend port 80 to backend port 80.
After saving you’ll see the rule listed and associated with the backend pool and health probe.
The VM overview pages will now show the load balancer's frontend public IP as their public endpoint. Open that public IP in a browser to see responses from your backend web servers. Refreshing may return responses from different backend VMs due to five-tuple hashing.
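You can also exercise the distribution from a terminal. A quick sketch (replace the IP with your own frontend IP; which VM answers each request depends on the five-tuple hash, so new connections may land on different backends):

```shell
# Issue several separate HTTP requests; each curl opens a new
# connection with a new source port, so the five-tuple hash may
# pick a different backend VM each time
for i in 1 2 3 4 5 6; do
  curl -s http://20.12.96.233/
done
```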
Inbound NAT rules let you SSH/RDP to individual backend VMs through the Load Balancer's public IP on different frontend ports. Example NAT rule:
Name: AZ700-LB-NAT-VM1
Target VM: vm-az700-lb-web-az1
Frontend IP: AZ-700-LB-FE
Frontend port: 9090
Backend port: 22 (SSH)
Connecting to the load balancer public IP on port 9090 will be translated to port 22 on vm-az700-lb-web-az1.
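The same NAT rule can be sketched in the CLI; the NIC name in the second command is an assumption:

```shell
# Frontend port 9090 on the LB public IP forwards to port 22 on the target VM
az network lb inbound-nat-rule create \
  --resource-group rg-az700 \
  --lb-name AZ-700-Web-LB \
  --name AZ700-LB-NAT-VM1 \
  --protocol Tcp \
  --frontend-ip-name AZ-700-LB-FE \
  --frontend-port 9090 \
  --backend-port 22

# Bind the rule to the target VM's NIC IP configuration
az network nic ip-config inbound-nat-rule add \
  --resource-group rg-az700 \
  --nic-name vm-az700-lb-web-az1-nic \
  --ip-config-name ipconfig1 \
  --lb-name AZ-700-Web-LB \
  --inbound-nat-rule AZ700-LB-NAT-VM1
```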
The portal may recommend migrating older NAT rule versions (V1) to NAT rule V2. Prefer V2 to avoid future migrations and ensure compatibility.
SSH through the load balancer:
ssh kodekloud@20.12.96.233 -p 9090
The authenticity of host '[20.12.96.233]:9090 ([20.12.96.233]:9090)' can't be established.
ED25519 key fingerprint is SHA256:ROcidCMQg4adTewDLIo6h4IZe/8iqG1ZWaMRvIDCUJM.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
# type "yes" then enter the password to complete the SSH login
If VMs are behind a Standard Load Balancer and no outbound rule or NAT gateway is configured, outbound Internet access will not work by default. DNS resolution may succeed, but TCP/HTTP connections fail. Example of a failing apt update when outbound access is blocked:
kodekloud@vm-az700-lb-web-az1:~$ sudo apt update
0% [Connecting to azure.archive.ubuntu.com (52.252.75.106)]
W: Failed to fetch http://azure.archive.ubuntu.com/ubuntu/dists/jammy/InRelease  Could not connect to azure.archive.ubuntu.com:80 (52.252.75.106), connection timed out
W: Failed to fetch http://azure.archive.ubuntu.com/ubuntu/dists/jammy-updates/InRelease  Unable to connect to azure.archive.ubuntu.com:http:
W: Failed to fetch http://azure.archive.ubuntu.com/ubuntu/dists/jammy-backports/InRelease  Unable to connect to azure.archive.ubuntu.com:http:
W: Failed to fetch http://azure.archive.ubuntu.com/ubuntu/dists/jammy-security/InRelease  Unable to connect to azure.archive.ubuntu.com:http:
W: Some index files failed to download. They have been ignored, or old ones used instead.
After creating an outbound rule that allocates SNAT ports (or after attaching a NAT gateway), the update succeeds:
kodekloud@vm-az700-lb-web-az1:~$ sudo apt update
Hit:1 http://azure.archive.ubuntu.com/ubuntu jammy InRelease
Get:2 http://azure.archive.ubuntu.com/ubuntu jammy-updates/InRelease [128 kB]
Get:3 http://azure.archive.ubuntu.com/ubuntu jammy-backports/InRelease [127 kB]
Get:4 http://azure.archive.ubuntu.com/ubuntu jammy-security/InRelease [129 kB]
Get:5 http://azure.archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [2842 kB]
Fetched 3227 kB in 1s (3386 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
20 packages can be upgraded. Run 'apt list --upgradable' to see them.
This demonstrates how outbound rules enable backend VMs to use the load balancer’s public IP for Internet access via SNAT. For production or high-scale outbound needs, prefer NAT gateway.
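The outbound rule used above can be sketched in the CLI as follows (names and the port allocation are illustrative assumptions; size the allocated ports for your backend instance count):

```shell
# Allocate SNAT ports from the frontend public IP to the backend pool;
# --allocated-outbound-ports is per backend instance
az network lb outbound-rule create \
  --resource-group rg-az700 \
  --lb-name AZ-700-Web-LB \
  --name AZ700-LB-outbound \
  --protocol All \
  --frontend-ip-configs AZ-700-LB-FE \
  --address-pool AZ-700-LB-BEP \
  --allocated-outbound-ports 8000 \
  --idle-timeout 4
```

Each public IP supplies a fixed pool of SNAT ports shared across the backend pool, which is why high-connection workloads exhaust it and are better served by a NAT gateway.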