Benchmarks

A tunnel can run over UDP or TCP (see Transport Modes). These benchmarks measure how that choice affects throughput and latency for traffic flowing through a tunnel, especially as network conditions degrade.

tl;dr

UDP outperforms TCP on lossy, unpredictable networks — degraded wifi, mobile tethering, unstable VPNs, anything over the internet.

TCP has a raw throughput advantage on stable, predictable, low-latency networks where packet loss is not a concern.

The five most telling scenarios:

| Scenario | UDP transport | TCP transport |
|---|---|---|
| No loss, no latency (1 connection) | 830 Mbps | 900 Mbps |
| No loss, high latency — 150 ms one-way, ~Singapore to London (32 connections) | 134 Mbps | 59 Mbps |
| 1% loss, cross-country — 40 ms one-way (32 connections) | 49 Mbps | 45 Mbps |
| 1% loss, cross-country — 40 ms one-way (128 connections) | 95 Mbps | 0 Mbps |
| 5% loss, cross-country — 40 ms one-way (128 connections) | 90 Mbps | 0 Mbps |

No Packet Loss or Latency

Single Connection

| Metric | UDP transport | TCP transport |
|---|---|---|
| TCP traffic upstream (to exit) | 830 Mbps | 900 Mbps |
| TCP traffic downstream (to entry) | 300 Mbps | 330 Mbps |
| UDP traffic | 5.3 Gbps | 4.5 Gbps |
| Latency (RTT) | 0.7 ms | 1.1 ms |

The TCP transport is slightly faster for throughput; the UDP transport has meaningfully lower latency. Downstream is lower than upstream because traffic arriving at the entry node requires extra processing to reconstruct into TCP connections.

Parallel Connections

| Connections | UDP transport | TCP transport |
|---|---|---|
| 32 | 794 Mbps | 932 Mbps |
| 64 | 744 Mbps | 896 Mbps |
| 128 | 578 Mbps | 815 Mbps |

With Packet Loss and Latency

Varying Loss

32 connections — a few parallel scans, SSH sessions, or file transfers.

| Loss | Delay (one-way) | UDP transport | TCP transport |
|---|---|---|---|
| 0.1% | 40 ms | 110 Mbps | 66 Mbps |
| 0.5% | 40 ms | 54 Mbps | 48 Mbps |
| 1% | 40 ms | 49 Mbps | 45 Mbps |
| 2% | 40 ms | 52 Mbps | 46 Mbps |
| 3% | 40 ms | 51 Mbps | 43 Mbps |
| 5% | 40 ms | 49 Mbps | 46 Mbps |

Both transports survive at 32 connections, but the UDP transport is consistently faster.

64 connections — multiple scans in parallel, several RDP or SSH sessions.

| Loss | Delay (one-way) | UDP transport | TCP transport |
|---|---|---|---|
| 0.1% | 40 ms | 103 Mbps | 71 Mbps |
| 0.5% | 40 ms | 68 Mbps | 66 Mbps |
| 1% | 40 ms | 69 Mbps | 68 Mbps |
| 2% | 40 ms | 67 Mbps | 62 Mbps |
| 3% | 40 ms | 66 Mbps | 66 Mbps |
| 5% | 40 ms | 63 Mbps | 0 Mbps |

The TCP transport drops to zero throughput at 5% loss. The UDP transport maintains throughput.

128 connections — dictionary attacks, full nmap scans, mass HTTP requests.

| Loss | Delay (one-way) | UDP transport | TCP transport |
|---|---|---|---|
| 0.1% | 40 ms | 150 Mbps | 112 Mbps |
| 0.5% | 40 ms | 92 Mbps | 0 Mbps |
| 1% | 40 ms | 95 Mbps | 0 Mbps |
| 2% | 40 ms | 91 Mbps | 0 Mbps |
| 3% | 40 ms | 93 Mbps | 0 Mbps |
| 5% | 40 ms | 90 Mbps | 0 Mbps |

At 128 connections, the TCP transport collapses at very low packet loss. The UDP transport maintains throughput across all tested loss rates.

Varying Latency

32 connections

| Loss | Delay (one-way) | UDP transport | TCP transport |
|---|---|---|---|
| 1% | 1 ms | 67 Mbps | 107 Mbps |
| 1% | 10 ms | 62 Mbps | 56 Mbps |
| 1% | 20 ms | 54 Mbps | 45 Mbps |
| 1% | 40 ms | 54 Mbps | 48 Mbps |
| 1% | 80 ms | 52 Mbps | 43 Mbps |
| 1% | 150 ms | 52 Mbps | 38 Mbps |

The TCP transport is faster at very low latency. As latency increases, the UDP transport takes over. Both survive across all latencies at 32 connections.

64 connections

| Loss | Delay (one-way) | UDP transport | TCP transport |
|---|---|---|---|
| 1% | 1 ms | 95 Mbps | 69 Mbps |
| 1% | 10 ms | 72 Mbps | 68 Mbps |
| 1% | 20 ms | 68 Mbps | 66 Mbps |
| 1% | 40 ms | 67 Mbps | 67 Mbps |
| 1% | 80 ms | 66 Mbps | 64 Mbps |
| 1% | 150 ms | 60 Mbps | 0 Mbps |

At 150 ms one-way latency, the TCP transport produces zero throughput.

128 connections

| Loss | Delay (one-way) | UDP transport | TCP transport |
|---|---|---|---|
| 1% | 1 ms | 165 Mbps | 105 Mbps |
| 1% | 10 ms | 106 Mbps | 107 Mbps |
| 1% | 20 ms | 99 Mbps | 0 Mbps |
| 1% | 40 ms | 95 Mbps | 0 Mbps |
| 1% | 80 ms | 97 Mbps | 0 Mbps |
| 1% | 150 ms | 90 Mbps | 0 Mbps |

The TCP transport drops to zero throughput once one-way latency exceeds 10 ms. The UDP transport remains stable across all latencies.

Varying Latency, No Loss

32 connections

| Loss | Delay (one-way) | UDP transport | TCP transport |
|---|---|---|---|
| 0% | 1 ms | 394 Mbps | 653 Mbps |
| 0% | 10 ms | 142 Mbps | 429 Mbps |
| 0% | 40 ms | 146 Mbps | 110 Mbps |
| 0% | 80 ms | 135 Mbps | 95 Mbps |
| 0% | 150 ms | 134 Mbps | 59 Mbps |

64 connections

| Loss | Delay (one-way) | UDP transport | TCP transport |
|---|---|---|---|
| 0% | 1 ms | 345 Mbps | 1039 Mbps |
| 0% | 10 ms | 149 Mbps | 449 Mbps |
| 0% | 40 ms | 163 Mbps | 101 Mbps |
| 0% | 80 ms | 150 Mbps | 70 Mbps |
| 0% | 150 ms | 145 Mbps | 68 Mbps |

128 connections

| Loss | Delay (one-way) | UDP transport | TCP transport |
|---|---|---|---|
| 0% | 1 ms | 379 Mbps | 687 Mbps |
| 0% | 10 ms | 175 Mbps | 447 Mbps |
| 0% | 40 ms | 176 Mbps | 155 Mbps |
| 0% | 80 ms | 171 Mbps | 112 Mbps |
| 0% | 150 ms | 195 Mbps | 108 Mbps |

Without loss, the TCP transport wins at low latency. As latency increases, the UDP transport takes over and the gap widens.

TCP Advantages

Strengths

Benefits from the OS kernel's highly optimised TCP stack. On stable, low-latency networks with negligible packet loss, it reaches gigabit-class throughput. At very low latency (e.g. same data centre), it outperforms UDP by a meaningful margin. It also traverses HTTP proxies and CDN infrastructure via WebSocket, which UDP cannot.

Trade-offs

Multiplexes all tunnel connections over a single TCP connection — one port, one connection, one queue. When a packet at the front of that queue is lost, every connection behind it has to wait for the retransmission. The more connections sharing the queue, the more likely any single packet loss stalls everything. This is called head-of-line blocking, and it is a fundamental property of TCP — not a flaw in the WebSocket transport itself.
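The scaling effect can be made concrete with a small back-of-the-envelope calculation (an illustrative sketch, not part of the benchmark harness; the window size of 10 in-flight packets per connection is an assumption):

```python
# Why a shared TCP queue stalls more often as connection count grows.
# With loss rate p and k packets in flight, the probability that at
# least one packet is lost -- stalling the entire multiplexed stream --
# is 1 - (1 - p)**k. Independent streams only stall the one connection
# whose packet was lost.

def stall_probability(loss_rate: float, packets_in_flight: int) -> float:
    """Probability that at least one in-flight packet is lost."""
    return 1 - (1 - loss_rate) ** packets_in_flight

# 0.5% loss, assuming ~10 packets in flight per connection:
for connections in (1, 32, 128):
    p_stall = stall_probability(0.005, 10 * connections)
    print(f"{connections:4d} connections: "
          f"{p_stall:.0%} chance a single loss stalls the shared queue")
```

At 0.5% loss a single connection stalls only occasionally, but with 128 connections sharing one queue a stall becomes near-certain — which matches the 128-connection table above, where the TCP transport collapses at 0.5% loss.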

UDP Advantages

Strengths

Under the hood, QUIC gives each tunnel connection its own independent stream. A lost packet only stalls the affected connection. Everything else continues uninterrupted. More connections means more independent recovery paths, which is why UDP throughput scales with connection count under loss while TCP throughput drops to zero.

Trade-offs

Slightly lower raw throughput on clean, low-latency networks compared to TCP.

Methodology

Traffic is generated with iperf3 through a full end-to-end tunnel with encryption active. Entry and exit run as QEMU microVMs on the same host with KVM. Throughput values are medians across multiple runs.

Packet loss and latency scenarios use tc netem on the entry VM's ethernet interface. Loss is applied on egress. All delay values are one-way — double them to get round-trip time.
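For reference, a netem setup of this shape can be sketched as follows (the interface name eth0, the loss/delay values, and the iperf3 target 10.0.0.1 are assumptions for illustration; the scripts in the repository are authoritative):

```shell
# Sketch: emulate one benchmark scenario by hand (assumed interface
# eth0 on the entry VM; applied on egress).

# Add 40 ms one-way delay and 1% packet loss:
sudo tc qdisc add dev eth0 root netem delay 40ms loss 1%

# Verify the active qdisc:
tc qdisc show dev eth0

# Drive traffic through the tunnel, e.g. 32 parallel TCP connections
# to an assumed tunnel endpoint:
iperf3 -c 10.0.0.1 -P 32

# Remove the emulation when done:
sudo tc qdisc del dev eth0 root
```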

To reproduce, run `just bench benchmark` from the repo root. The benchmark harness, VM images, and network emulation scripts are all in the repository.