Guides diagnosing and resolving Azure network performance issues using ACCTK, Network Watcher, and Monitor to measure throughput, latency, packet loss and identify configuration or capacity bottlenecks.
This section explains how to diagnose and resolve network performance problems in Azure. It focuses on tools and procedures for measuring throughput, packet loss, and latency, and on how to interpret results to find capacity constraints or misconfigurations.

Key Azure tools for network diagnostics include:
Azure Connectivity Toolkit (ACCTK) — PowerShell module for end-to-end testing.
Azure Network Watcher — packet capture, topology, and connection troubleshooting.
Azure Monitor — metrics, logs, and alerts for passive network telemetry.
ACCTK is a portable PowerShell toolkit that runs network tests between endpoints to measure latency, packet loss, and bandwidth under different loads. Because it’s scriptable, ACCTK is useful for building repeatable diagnostics and collecting baseline performance data across environments.

Example: run a short connectivity and latency check against a remote host for 10 seconds:
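The invocation, taken from the sample run later in this section, looks like this; 127.0.0.1 and the 10-second duration are placeholders:

```powershell
# Requires the ACCTK (Azure Connectivity Toolkit) module to be imported first.
Get-LinkPerformance -RemoteHost 127.0.0.1 -TestSeconds 10
```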
Replace 127.0.0.1 with the actual remote IP or hostname you want to test. Run these tests during a maintenance window when possible, because bandwidth and multi-thread tests can generate significant traffic.
ACCTK common capabilities:
| Feature | Purpose | Example cmdlet |
| --- | --- | --- |
| End-to-end packet loss and latency tests | Verify quality across the entire network path | Get-LinkDiagnostics |
| Simulated single-thread and multi-thread bandwidth tests | Measure throughput and reveal host or path bottlenecks | Get-LinkPerformance |
| PowerShell module and scripting | Automate repeated diagnostics and collect baselines | Import module + scripts |
Because ACCTK is a PowerShell module, you can execute scheduled tests, roll up results to a central store, and compare against baselines for trend analysis.
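A baseline comparison could be scripted roughly as below. The record layout (`bandwidth_gbps`, `loss_pct`, `p50_ms`) is an illustrative mapping of exported ACCTK results, not a field set the toolkit itself emits:

```python
# Sketch: flag regressions between a stored baseline and a new test run.
# Field names are illustrative; map them from your own exported results.

def flag_regressions(baseline, current, bw_tol=0.10, lat_tol=0.25):
    """Return human-readable regressions between two runs, per test stage."""
    issues = []
    for stage, base in baseline.items():
        cur = current.get(stage)
        if cur is None:
            continue
        # Bandwidth check skipped for stages with no load (bandwidth is None).
        if base["bandwidth_gbps"] and cur["bandwidth_gbps"] < base["bandwidth_gbps"] * (1 - bw_tol):
            issues.append(f"{stage}: bandwidth dropped to {cur['bandwidth_gbps']} Gbps")
        if cur["loss_pct"] > base["loss_pct"]:
            issues.append(f"{stage}: packet loss rose to {cur['loss_pct']}%")
        if cur["p50_ms"] > base["p50_ms"] * (1 + lat_tol):
            issues.append(f"{stage}: P50 latency rose to {cur['p50_ms']} ms")
    return issues

# Baseline figures taken from the sample run in this section.
baseline = {"16 Sessions": {"bandwidth_gbps": 7.50, "loss_pct": 0, "p50_ms": 4.34}}
current  = {"16 Sessions": {"bandwidth_gbps": 5.10, "loss_pct": 0, "p50_ms": 4.40}}
print(flag_regressions(baseline, current))
```

Running this nightly against a central results store turns one-off diagnostics into trend data you can alert on.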
The sample below shows a multi-stage link performance run (varying session counts and window sizes). It reports measured bandwidth, packet loss, and latency (P50) for each stage:
E:\> Get-LinkPerformance -RemoteHost 127.0.0.1 -TestSeconds 10
6/30/2017 4:50:18 PM - Stage 1 of 6: No Load Ping Test...
6/30/2017 4:50:30 PM - Stage 2 of 6: Single Thread Test...
6/30/2017 4:50:56 PM - Stage 3 of 6: 6 Thread Test...
6/30/2017 4:51:22 PM - Stage 4 of 6: 16 Thread Test...
6/30/2017 4:51:49 PM - Stage 5 of 6: 16 Thread Test with 1Mb window...
6/30/2017 4:52:15 PM - Stage 6 of 6: 32 Thread Test...
Testing Complete!

Name                         Bandwidth       Loss     P50
----                         ---------       ----     ---
No Load                      N/A             0% loss  1.87 ms
1 Session                    6.79 Gbits/sec  0% loss  0.92 ms
6 Sessions                   8.39 Gbits/sec  0% loss  1.94 ms
16 Sessions                  7.50 Gbits/sec  0% loss  4.34 ms
16 Sessions with 1Mb window  7.33 Gbits/sec  0% loss  19.405 ms
32 Sessions                  7.17 Gbits/sec  0% loss  8.335 ms

E:\>
Use outputs like these to:
Confirm expected throughput (compare to NIC/vNIC and virtual machine SKU limits).
Detect packet loss and latency spikes caused by saturation, buffer sizes, or path issues.
Compare single-thread vs multi-thread behavior to reveal concurrency-related bottlenecks.
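The first and third checks can be sketched as small helpers. The numbers below come from the sample run in this section; the helper names and the 10 Gbps SKU cap are illustrative assumptions, not part of ACCTK:

```python
def throughput_utilization(measured_gbps, sku_limit_gbps):
    """Fraction of the VM SKU's documented network cap that was achieved."""
    return measured_gbps / sku_limit_gbps

def concurrency_gain(single_gbps, multi_gbps):
    """Ratio of multi-session to single-session throughput. A ratio near 1.0
    suggests a shared path or capacity limit; a much higher ratio suggests
    the single flow was host-limited (e.g. TCP window size or CPU)."""
    return multi_gbps / single_gbps

# Best multi-thread result (6 sessions, 8.39 Gbits/sec) vs an assumed 10 Gbps cap:
print(f"utilization: {throughput_utilization(8.39, 10.0):.0%}")
# 1 session (6.79 Gbits/sec) vs 6 sessions (8.39 Gbits/sec):
print(f"concurrency gain: {concurrency_gain(6.79, 8.39):.2f}x")
```

A utilization well below the SKU cap with low loss usually points at a host or window limit rather than the network path.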
High bandwidth and multi-thread tests generate significant traffic and can affect production workloads and billing. Always notify stakeholders and run tests in maintenance windows or on isolated test segments when possible.
For details on ExpressRoute connectivity and troubleshooting, see the ExpressRoute documentation.
Use ACCTK together with Azure-native tooling (Network Watcher and Monitor) to get both active test results and passive telemetry — this combination helps you pinpoint whether issues are configuration-related, capacity-related, or caused by upstream providers.