This article shows how to control traffic flow inside an Azure Virtual Network (VNet) using user-defined routes (UDRs) and a network virtual appliance (NVA). Instead of relying solely on Azure system routes, UDRs let you explicitly decide the next hop for selected destination prefixes — for example, forcing traffic to pass through an NVA or firewall for inspection. On the left side of the diagram below you can see a Virtual Network split into subnets: Frontend, DMZ (containing a virtual appliance), and Backend. The routing table shown is what allows you to route Frontend -> DMZ (virtual appliance) -> Backend rather than letting Azure send traffic directly.
A slide titled "Virtual Network Traffic Routing" showing a schematic of a virtual network with Frontend, DMZ (containing a Virtual Appliance), and Backend subnets connected via a Routing Table. To the right is a screenshot of an Azure "Create Route table" configuration pane.
Traffic from the Frontend subnet can be routed through the DMZ virtual appliance for inspection and then forwarded to the Backend. The key component making this possible is the Route Table: when you create a Route Table in Azure you add routes that specify a destination prefix and a next hop (for example, Next Hop Type = Virtual appliance and Next Hop Address = the NVA's IP). When creating a route table you choose the subscription, resource group, name, region, and whether to propagate gateway routes (propagation is usually left enabled unless you have a specific reason to disable it).
Next we'll walk through a hub-and-spoke lab that demonstrates forcing spoke-to-spoke traffic through a hub NVA. In the lab the NVA is a Linux VM with IP forwarding enabled, located in the hub VNet. Two spoke VNets each contain a VM that must send traffic to the other spoke via the hub NVA so the appliance can inspect and forward packets. Architecture diagram:
A simplified Azure hub-and-spoke network diagram showing two spoke VNets (each with a VM) peered to a hub VNet that contains an NVA. The diagram also shows traffic from the spoke VMs routed to the NVA via user-defined routes (UDR).
Lab specifics
  • Spoke 1 (East US) VM: 10.60.1.4
  • Spoke 2 (West US) VM: 10.50.1.4
  • Hub (Central US) NVA VM: 10.70.1.4 (IP forwarding enabled on NIC and in OS)
  • VNet peerings exist between each spoke and the hub, with Allow forwarded traffic enabled so packets relayed by the NVA are not dropped by the peering.
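The peerings above can be sketched in PowerShell. This is a minimal sketch, assuming $vnet1, $vnet2, and $vnet3 hold the spoke and hub VNet objects returned by New-AzVirtualNetwork; the peering names are illustrative. The important detail is -AllowForwardedTraffic, which keeps the peering from dropping packets relayed by the NVA.

```powershell
# Hedged sketch: peer each spoke with the hub in both directions.
# -AllowForwardedTraffic permits traffic that did not originate in the
# directly peered VNet (i.e., packets forwarded by the hub NVA).
Add-AzVirtualNetworkPeering -Name "peer-spoke1-to-hub" -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet3.Id -AllowForwardedTraffic
Add-AzVirtualNetworkPeering -Name "peer-hub-to-spoke1" -VirtualNetwork $vnet3 -RemoteVirtualNetworkId $vnet1.Id -AllowForwardedTraffic
Add-AzVirtualNetworkPeering -Name "peer-spoke2-to-hub" -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet3.Id -AllowForwardedTraffic
Add-AzVirtualNetworkPeering -Name "peer-hub-to-spoke2" -VirtualNetwork $vnet3 -RemoteVirtualNetworkId $vnet2.Id -AllowForwardedTraffic
```

Note that VNet peering is not transitive, which is exactly why a UDR is needed to carry spoke-to-spoke traffic through the hub.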
The whole environment can be deployed with PowerShell. Below are the key parts of the deployment script (variables, NIC creation with IP forwarding for the hub, and cloud-init to enable IP forwarding inside the Linux guest). Adjust names, prefixes, and locations for your environment.
# ------------------------- Variables -------------------------
$resourceGroup       = "rg-az700-udr-nva"
$locationEUS         = "eastus"
$locationWUS         = "westus"
$locationCentral     = "centralus"
$username            = "kodekloud"
$passwordPlain       = "@dminP@ssw0rd"    # Lab only
$securePassword      = (ConvertTo-SecureString $passwordPlain -AsPlainText -Force)
$credential          = New-Object System.Management.Automation.PSCredential($username, $securePassword)

# Image (Ubuntu 22.04 LTS)
$imagePublisher      = "Canonical"
$imageOffer          = "0001-com-ubuntu-server-jammy"
$imageSku            = "22_04-lts-gen2"
$imageVersion        = "latest"

# VNet / VM naming (hub & spokes) - hub hosts the NVA in centralus
$vnet1Name           = "vnet-az700-udr-spoke-eus"
$vnet2Name           = "vnet-az700-udr-spoke-wus"
$vnet3Name           = "vnet-az700-udr-hub"
$subnet1Name         = "subnet-az700-udr-spoke-eus"
$subnet2Name         = "subnet-az700-udr-spoke-wus"
$subnet3Name         = "subnet-az700-udr-hub"

$vm1Name             = "vm-az700-udr-spoke-eus-01"
$vm2Name             = "vm-az700-udr-spoke-wus-01"
$vm3Name             = "vm-az700-udr-hub-01"   # NVA
Create NICs and enable IP forwarding on the hub/NVA NIC
# Example: create NICs (simplified; $vnet1-$vnet3, $pip1-$pip3, and $nsg1-$nsg3 are created earlier in the full script)
$subnet1 = Get-AzVirtualNetworkSubnetConfig -Name $subnet1Name -VirtualNetwork $vnet1
$subnet2 = Get-AzVirtualNetworkSubnetConfig -Name $subnet2Name -VirtualNetwork $vnet2
$subnet3 = Get-AzVirtualNetworkSubnetConfig -Name $subnet3Name -VirtualNetwork $vnet3

$nic1 = New-AzNetworkInterface -Name "nic1" -ResourceGroupName $resourceGroup -Location $locationEUS -Subnet $subnet1 -PublicIpAddress $pip1 -NetworkSecurityGroup $nsg1
$nic2 = New-AzNetworkInterface -Name "nic2" -ResourceGroupName $resourceGroup -Location $locationWUS -Subnet $subnet2 -PublicIpAddress $pip2 -NetworkSecurityGroup $nsg2

# For the hub/NVA NIC: enable IP forwarding on the NIC
$nic3 = New-AzNetworkInterface -Name "nic3" -ResourceGroupName $resourceGroup -Location $locationCentral -Subnet $subnet3 -PublicIpAddress $pip3 -NetworkSecurityGroup $nsg3 -EnableIpForwarding
Enable IP forwarding inside the Linux guest (cloud-init)
# NVA VM cloud-init to enable IP forwarding in guest
$cloudInit = @"
#cloud-config
runcmd:
 - sysctl -w net.ipv4.ip_forward=1
 - sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
 - sysctl -p
"@

$cloudInitBytes = [System.Text.Encoding]::UTF8.GetBytes($cloudInit)
$cloudInitEncoded = [Convert]::ToBase64String($cloudInitBytes)

# Example VM creation snippet for the NVA (attach cloud-init as base64 custom data)
$vmCfg3 = New-AzVMConfig -VMName $vm3Name -VMSize "Standard_B1s" |
    Set-AzVMOperatingSystem -Linux -ComputerName $vm3Name -Credential $credential -CustomData $cloudInitEncoded |
    Set-AzVMSourceImage -PublisherName $imagePublisher -Offer $imageOffer -Skus $imageSku -Version $imageVersion |
    Add-AzVMNetworkInterface -Id $nic3.Id
New-AzVM -ResourceGroupName $resourceGroup -Location $locationCentral -VM $vmCfg3
After deployment you should have three VMs: one in each spoke and the hub NVA. Confirm the hub NIC has IP forwarding enabled and the guest OS has net.ipv4.ip_forward set. Azure portal VM overview (example for the East US spoke VM):
A screenshot of the Microsoft Azure portal displaying the overview page for a virtual machine named "vm-az700-udr-spoke-eus-01," showing details like operating system (Ubuntu 22.04), VM size, agent status, and networking information. The left pane lists other virtual machines while the right pane highlights IP addresses and resource settings.
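Both forwarding settings can be double-checked from the deployment shell, for example as follows (the NIC name "nic3" follows the script above; the sysctl check runs inside the NVA guest):

```powershell
# Should print True: IP forwarding is enabled on the NVA NIC
(Get-AzNetworkInterface -Name "nic3" -ResourceGroupName $resourceGroup).EnableIPForwarding

# Inside the NVA guest (e.g. over SSH), the kernel setting should report 1:
#   sysctl net.ipv4.ip_forward
#   -> net.ipv4.ip_forward = 1
```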
Validate routing with tracepath/traceroute and tcpdump
Before applying any UDRs, traffic between spokes follows Azure system routes and the existing peerings. To force inspection through the hub NVA we will create a route table with a user-defined route and associate it with the spoke subnet. Example: ping and tracepath from the East US spoke to the West US spoke (compare the results before and after the UDR). From vm-az700-udr-spoke-eus-01:
# On vm-az700-udr-spoke-eus-01
ping 10.50.1.4
PING 10.50.1.4 (10.50.1.4) 56(84) bytes of data.
64 bytes from 10.50.1.4: icmp_seq=1 ttl=64 time=72.1 ms
64 bytes from 10.50.1.4: icmp_seq=2 ttl=64 time=73.9 ms
...
--- 10.50.1.4 ping statistics ---
17 packets transmitted, 17 received, 0% packet loss
rtt min/avg/max/mdev = 71.320/72.075/73.875/0.658 ms

# Check path - tracepath shows first hop is NVA 10.70.1.4, then destination
tracepath 10.50.1.4
1?: [LOCALHOST]                                      pmtu 1500
1: 10.70.1.4                                        35.507ms
2: 10.50.1.4                                        74.068ms reached
Create the Route Table in the portal
Creating a route table in the Azure portal is straightforward: pick the subscription and resource group, choose the region, and set gateway route propagation according to your design. The screenshot below shows the Create Route table page used in the lab:
A screenshot of the Microsoft Azure portal showing the "Create Route table" page with project and instance details — subscription set to "Kodekloud Labs", resource group "rg-az700-udr-nva", region "East US", and options for propagating gateway routes. The form's navigation buttons ("Previous", "Next", "Review + create") are visible at the bottom.
Add a user route that matches the destination you want to intercept, and set Next Hop Type = Virtual appliance with Next Hop Address = hub/NVA IP (for example 10.70.1.4). You can target a single IP (/32) or an entire subnet (/24, /16, etc.). The example below shows adding a route to forward a specific VM IP through 10.70.1.4:
A screenshot of the Microsoft Azure portal showing the "rt-spoke-eus" route table with the "Add route" pane open. The form is creating a user-defined route named "rt-nva-froward" for destination 10.60.1.4/32 with next hop type "Virtual appliance" and next hop address 10.70.1.4.
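The same step can be scripted. A hedged PowerShell sketch, reusing the deployment variables (the route name "route-to-nva" is illustrative; the /32 prefix here targets the West US spoke VM):

```powershell
# Create the route table in the spoke's region
$rt = New-AzRouteTable -Name "rt-spoke-eus" -ResourceGroupName $resourceGroup -Location $locationEUS

# Add a user-defined route: send traffic for the target VM through the hub NVA
Add-AzRouteConfig -RouteTable $rt -Name "route-to-nva" `
    -AddressPrefix "10.50.1.4/32" -NextHopType VirtualAppliance -NextHopIpAddress "10.70.1.4" |
    Set-AzRouteTable
```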
Associate the Route Table to the spoke subnet
After creating the route, associate the route table with the subnet in the spoke where you want the custom route applied. Association is what activates the UDR for that subnet.
A screenshot of the Microsoft Azure portal showing the "rt-spoke-eus | Subnets" route table page with no subnets listed. An "Associate subnet" pane is open on the right with a virtual network and a subnet selected.
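The association can also be done in PowerShell. A sketch, assuming the rt-spoke-eus route table shown in the screenshots and the deployment variables (the subnet's existing address prefix is passed back unchanged, since Set-AzVirtualNetworkSubnetConfig requires it):

```powershell
# Attach the route table to the East US spoke subnet and persist the change
$vnet1 = Get-AzVirtualNetwork -Name $vnet1Name -ResourceGroupName $resourceGroup
$rt    = Get-AzRouteTable -Name "rt-spoke-eus" -ResourceGroupName $resourceGroup
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet1 -Name $subnet1Name `
    -AddressPrefix $vnet1.Subnets[0].AddressPrefix -RouteTable $rt |
    Set-AzVirtualNetwork
```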
Route precedence: Azure routing prefers the most specific prefix (the longest prefix match). A /32 is more specific than a /16, so 10.50.1.4/32 will match before 10.50.0.0/16.
Verify the effective routes
You can inspect a NIC's effective routes in the portal to confirm the user route is present and that it overrides the broader system route. The user route to the virtual appliance (10.70.1.4) should appear in the effective routes view:
A screenshot of the Microsoft Azure portal showing the "Effective routes" table for a network interface (nic-az700-udr-spoke-eus-01), listing destination prefixes and next hops. A user route to a virtual appliance (10.70.1.4) is highlighted in the table.
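The same check is available from PowerShell with Get-AzEffectiveRouteTable (the NIC name matches the screenshot; the VM must be running for effective routes to be returned). The UDR shows up with Source = User:

```powershell
# List effective routes on the spoke VM's NIC; the UDR appears with source "User"
Get-AzEffectiveRouteTable -NetworkInterfaceName "nic-az700-udr-spoke-eus-01" -ResourceGroupName $resourceGroup |
    Format-Table AddressPrefix, NextHopType, NextHopIpAddress, Source
```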
Re-run the traceroute/tracepath from the spoke VM — you should now see the first hop is the NVA (10.70.1.4) and then the destination:
tracepath 10.50.1.4
1?: [LOCALHOST]                                      pmtu 1500
1: 10.70.1.4                                        35.507ms
2: 10.50.1.4                                        74.068ms reached

Resume: pmtu 1500 hops 2 back 1
Repeat for other spokes
If you have multiple spokes, create a route table per spoke (or reuse one where appropriate), add user routes pointing to the hub NVA, and associate each table with the corresponding spoke subnet. The portal associate-subnet pane for the West US route table used in the lab looks like this:
A screenshot of the Microsoft Azure portal showing the "rt-wus-spoke | Subnets" route table page with an "Associate subnet" pane open on the right, where a virtual network and subnet are selected. The OK button at the bottom of the pane is highlighted.
Tracepath from the West US spoke (after associating the UDR) shows traffic going via the NVA before reaching the East US destination:
# On vm-az700-udr-spoke-wus-01 tracing to 10.60.1.4 after UDR association
tracepath 10.60.1.4
1?: [LOCALHOST]                              pmtu 1500
1: 10.70.1.4                                  38.943ms
2: 10.60.1.4                                  69.654ms reached

Resume: pmtu 1500 hops 2 back 2
Confirm packet flow with tcpdump on the NVA
Run tcpdump on the hub/NVA interface to verify that ICMP (or other) traffic arrives at the NVA and is forwarded onward:
# On hub (NVA) vm
sudo tcpdump -i eth0 -n icmp
# You should see ICMP echo requests from the source spoke and replies forwarded to the destination
Representative tcpdump output (abbreviated):
17:24:17.878331 IP 10.60.1.4 > 10.50.1.4: ICMP echo request, id 2, seq 1, length 64
17:24:17.908559 IP 10.50.1.4 > 10.60.1.4: ICMP echo reply, id 2, seq 1, length 64
...
17:24:22.916571 IP 10.50.1.4 > 10.60.1.4: ICMP echo reply, id 2, seq 6, length 64
This verifies that the hub VM receives traffic and forwards it to the destination. In production you would replace the Linux forwarding VM with a managed firewall or NVA that can apply inspection and policy enforcement.
Checklist: steps to force spoke-to-spoke traffic through a hub NVA
  • Enable IP forwarding on the hub NIC: allows the NIC to receive and forward packets not addressed to it (-EnableIpForwarding when creating the NIC).
  • Enable net.ipv4.ip_forward in the guest: allows the Linux VM to forward packets at the OS level (edit /etc/sysctl.conf or use cloud-init).
  • Create a Route Table: holds the UDRs that define custom paths (select subscription, resource group, and region).
  • Add a user route: directs a destination prefix to the NVA (Next hop type = Virtual appliance; Next hop address = 10.70.1.4).
  • Associate the Route Table with the subnet: applies the UDR to VMs in that subnet (association activates the route).
  • Validate: confirm with tracepath/traceroute and tcpdump; the Effective routes view in the portal shows UDR precedence.
Summary
  • UDRs let you control internal traffic paths in Azure VNets, forcing traffic through NVAs or firewalls for inspection and enforcement.
  • Ensure both the NIC (EnableIpForwarding) and the guest OS (net.ipv4.ip_forward=1) are configured when using a Linux VM as an NVA.
  • Create a Route Table, add routes (Next hop type = Virtual appliance, Next hop address = NVA IP), and associate the table with the relevant subnet(s).
  • Azure routing uses longest-prefix match: the most specific route (/32) wins over broader system routes (/16).
  • Validate path changes with tracepath/traceroute and tcpdump on the NVA.
Now that you understand UDRs, you can apply the same pattern to force traffic through Azure Firewall or other managed NVAs for centralized security inspection.