This section explains how to diagnose and resolve network performance problems in Azure. It focuses on tools and procedures for measuring throughput, packet loss, and latency, and on how to interpret results to find capacity constraints or misconfigurations. Key Azure tools for network diagnostics include:
  • Azure Connectivity Toolkit (ACCTK) — PowerShell module for end-to-end testing.
  • Azure Network Watcher — packet capture, topology, and connection troubleshooting.
  • Azure Monitor / Network Performance Monitor — telemetry, baselining, and alerting.

Azure Connectivity Toolkit (ACCTK)

ACCTK is a portable PowerShell toolkit that runs network tests between endpoints to measure latency, packet loss, and bandwidth under different loads. Because it's scriptable, ACCTK is useful for building repeatable diagnostics and collecting baseline performance data across environments. Example: run a short connectivity and latency check against a remote host for 10 seconds:
Get-LinkDiagnostics -RemoteHost 127.0.0.1 -TestSeconds 10
Replace 127.0.0.1 with the actual remote IP or hostname you want to test. Run these tests during a maintenance window when possible, because bandwidth and multi-thread tests can generate significant traffic.
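To build the baseline data mentioned above, one option is to append each run's results to a CSV for later trend analysis. The sketch below is illustrative only (written in Python rather than PowerShell); the field names are assumptions, and you would supply values parsed from the cmdlet output:

```python
import csv
import datetime
import pathlib

def append_result(csv_path, remote_host, p50_ms, loss_pct, bandwidth_gbps=None):
    """Append one timestamped test result so repeated runs build a baseline.
    The column names here are illustrative, not part of any Azure tool."""
    path = pathlib.Path(csv_path)
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            # Write a header row the first time the file is created.
            writer.writerow(["utc_time", "remote_host", "p50_ms",
                             "loss_pct", "bandwidth_gbps"])
        writer.writerow([
            datetime.datetime.now(datetime.timezone.utc).isoformat(),
            remote_host, p50_ms, loss_pct, bandwidth_gbps,
        ])
```

Rolling results into a single file (or a central store) makes it straightforward to compare today's run against historical runs from the same endpoints.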
ACCTK common capabilities:
  • End-to-end packet loss and latency tests — verify quality across the entire network path (Get-LinkDiagnostics).
  • Simulated single-thread and multi-thread bandwidth tests — measure throughput and reveal host or path bottlenecks (Get-LinkPerformance).
  • PowerShell module and scripting — automate repeated diagnostics and collect baselines (import the module and script the cmdlets).
[Slide: "AzCTK Features" — end-to-end packet loss and latency tests; simulated single/multi-thread bandwidth tests; an installable PowerShell module for ease of use.]
Because ACCTK is a PowerShell module, you can execute scheduled tests, roll up results to a central store, and compare against baselines for trend analysis. The sample below shows a multi-stage link performance run (varying session counts and window sizes). It reports measured bandwidth, packet loss, and latency (P50) for each stage:
E:\> Get-LinkPerformance -RemoteHost 127.0.0.1 -TestSeconds 10
6/30/2017 4:50:18 PM - Stage 1 of 6: No Load Ping Test...
6/30/2017 4:50:30 PM - Stage 2 of 6: Single Thread Test...
6/30/2017 4:50:56 PM - Stage 3 of 6: 6 Thread Test...
6/30/2017 4:51:22 PM - Stage 4 of 6: 16 Thread Test...
6/30/2017 4:51:49 PM - Stage 5 of 6: 16 Thread Test with 1Mb window...
6/30/2017 4:52:15 PM - Stage 6 of 6: 32 Thread Test...

Testing Complete!

Name                          Bandwidth        Loss     P50
----                          ---------        ----     ---
No Load                       N/A              0% loss  1.87 ms
1 Session                     6.79 Gbits/sec   0% loss  0.92 ms
6 Sessions                    8.39 Gbits/sec   0% loss  1.94 ms
16 Sessions                   7.50 Gbits/sec   0% loss  4.34 ms
16 Sessions with 1Mb window   7.33 Gbits/sec   0% loss  19.405 ms
32 Sessions                   7.17 Gbits/sec   0% loss  8.335 ms

E:\>
Use outputs like these to:
  • Confirm expected throughput (compare to NIC/vNIC and virtual machine SKU limits).
  • Detect packet loss and latency spikes caused by saturation, buffer sizes, or path issues.
  • Compare single-thread vs multi-thread behavior to reveal concurrency-related bottlenecks.
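To make such comparisons repeatable, a small parser can turn the report into structured data. The sketch below is illustrative only (written in Python rather than PowerShell, and assuming output in exactly the tabular form shown above); the thresholds are arbitrary starting points, not recommendations:

```python
import re

def parse_stages(report: str):
    """Parse rows like '16 Sessions   7.50 Gbits/sec   0% loss   4.34 ms'
    from a Get-LinkPerformance-style report into a list of dicts."""
    stages = []
    row = re.compile(
        r"^(?P<name>.+?)\s{2,}"             # stage name (left column)
        r"(?P<bw>N/A|[\d.]+ Gbits/sec)\s+"  # bandwidth, or N/A for the ping stage
        r"(?P<loss>[\d.]+)% loss\s+"        # packet loss percentage
        r"(?P<p50>[\d.]+) ms$"              # P50 latency
    )
    for line in report.splitlines():
        m = row.match(line.strip())
        if m:
            stages.append({
                "name": m["name"].strip(),
                "bandwidth": m["bw"],
                "loss_pct": float(m["loss"]),
                "p50_ms": float(m["p50"]),
            })
    return stages

def flag_problems(stages, max_loss_pct=0.0, p50_factor=5.0):
    """Flag any stage with loss above max_loss_pct, or a P50 latency more
    than p50_factor times the unloaded ('No Load') baseline."""
    baseline = next((s["p50_ms"] for s in stages if s["name"] == "No Load"), None)
    flags = []
    for s in stages:
        if s["loss_pct"] > max_loss_pct:
            flags.append((s["name"], "packet loss"))
        elif baseline and s["p50_ms"] > p50_factor * baseline:
            flags.append((s["name"], "latency spike"))
    return flags
```

Run against the sample output above, this would flag the 1 MB-window stage, whose 19.405 ms P50 is roughly ten times the unloaded baseline.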
High bandwidth and multi-thread tests generate significant traffic and can affect production workloads and billing. Always notify stakeholders and run tests in maintenance windows or on isolated test segments when possible.

Troubleshooting checklist

Follow this ordered checklist to diagnose network performance issues efficiently:
  1. Reproduce problem with controlled test (e.g., Get-LinkDiagnostics / Get-LinkPerformance).
  2. Baseline current performance with multiple runs at different times and loads.
  3. Check Azure configuration:
    • Network Security Groups (NSGs) for dropped packets.
    • Route tables and UDRs for asymmetric routing.
    • Load balancer or Azure Firewall policies.
    • VM/vNIC sizing and offload settings.
  4. Inspect on-premises or upstream network (firewalls, routers, ISP/ExpressRoute).
  5. Capture packet traces (Azure Network Watcher) if routing or packet drops are suspected.
  6. Correlate findings with Azure Monitor metrics and logs (NIC throughput, queue depths, CPU).
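The baselining in step 2 can be automated once several runs are on record. A minimal sketch (in Python for illustration, with an arbitrary two-standard-deviation threshold) for flagging a latency regression against prior runs:

```python
import statistics

def is_regression(history_ms, current_ms, k=2.0):
    """Return True if the current P50 latency exceeds the historical mean
    by more than k standard deviations. Needs at least two prior runs;
    k=2.0 is an arbitrary starting threshold, not a recommendation."""
    if len(history_ms) < 2:
        return False
    mean = statistics.mean(history_ms)
    stdev = statistics.stdev(history_ms)
    return current_ms > mean + k * stdev
```

Collecting runs at different times of day (step 2) matters here: a history taken only during quiet hours will make normal busy-hour latency look like a regression.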
Common causes and what to check:
  • Bandwidth saturation — check VM NIC and VM SKU limits, NSG throttles, and peer capacity; run multi-thread tests, then scale the VM or upgrade the NIC/SKU.
  • Packet loss — check Network Watcher packet captures, NSG logs, and gateway health; inspect dropped packets and the ExpressRoute/ISP link.
  • High latency — check traceroute, path MTU, routing, and peering; validate routing and inspect gateway and peering health.
  • Asymmetric routing — run traceroute from both ends and review UDRs; correct UDRs to ensure a symmetric path for flows.
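For the bandwidth-saturation case, it helps to compare measured throughput against the documented limit for the VM size. The helper below is a rough triage sketch (in Python for illustration); the 10 Gbits/sec figure in the usage note is a hypothetical placeholder — look up the actual expected bandwidth for your VM SKU in the Azure VM sizes documentation:

```python
def classify_throughput(measured_gbps: float, expected_gbps: float) -> str:
    """Rough triage of a multi-thread bandwidth result against the
    documented NIC/SKU limit. You supply expected_gbps from the Azure
    VM sizes documentation; the 0.9/0.5 cut-offs are arbitrary."""
    ratio = measured_gbps / expected_gbps
    if ratio >= 0.9:
        return "at NIC/SKU limit - consider scaling the VM or upgrading the SKU"
    if ratio >= 0.5:
        return "below limit - check path capacity, MTU, and peer host"
    return "well below limit - investigate loss, configuration, or upstream links"
```

For example, a measured 8.39 Gbits/sec against a hypothetical 10 Gbits/sec SKU limit lands in the "below limit" band, suggesting the bottleneck is in the path or peer rather than the VM itself.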
Related resources:
  • Azure Network Watcher — packet capture, topology, connection troubleshooting, and diagnostics.
  • Network Performance Monitor — end-to-end monitoring and alerting for network performance.
  • Azure ExpressRoute overview — details on ExpressRoute connectivity and troubleshooting.
Use ACCTK together with Azure-native tooling (Network Watcher and Monitor) to get both active test results and passive telemetry — this combination helps you pinpoint whether issues are configuration-related, capacity-related, or caused by upstream providers.