Network Latency
I. INTRODUCTION
Fig. 1. Receive path of a packet through the hardware, driver, OS, and application layers: the NIC's DMA engine copies the packet into the Rx NIC buffer in memory, the driver hands it via NAPI or a zero-copy mapping and a soft IRQ to the UDP stack and UDP buffer, from which the game server application running on CPU cores 1 to n reads it.
increases: both the virtual NIC and its driver add additional
buffers to the packet's path. The overall system load also
increases: the virtual NIC that processes packets for the VM
is emulated in software, and the system additionally needs to run a
virtual switch to connect the virtual NIC to the physical NIC.
D. Sending Packets
The server also needs to respond to incoming packets. The
transmit path is similar to the receive path: the packet is first
copied from user space to kernel space into a transmit buffer
associated with the socket. The driver manages a ring buffer of Tx DMA
descriptors that are used to transfer the packet to the NIC.
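For illustration, a minimal sketch of this path from the application's point of view, not taken from the evaluated game server (destination address, port, and payload are placeholders): a single sendto() call copies the datagram into the socket's kernel transmit buffer, whose size can be inspected via SO_SNDBUF.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);            /* UDP socket */

        /* Placeholder destination for the response packet. */
        struct sockaddr_in dst = { .sin_family = AF_INET,
                                   .sin_port = htons(5555) };
        inet_pton(AF_INET, "10.0.0.2", &dst.sin_addr);

        /* sendto() copies the payload from user space into the kernel
         * buffer associated with the socket; the driver later transfers
         * it to the NIC via the Tx DMA descriptors. */
        const char payload[] = "pong";
        sendto(s, payload, sizeof payload, 0,
               (struct sockaddr *)&dst, sizeof dst);

        /* Size of the per-socket transmit buffer. */
        int sndbuf = 0;
        socklen_t len = sizeof sndbuf;
        getsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
        printf("SO_SNDBUF: %d bytes\n", sndbuf);

        close(s);
        return 0;
    }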
III. RELATED WORK
We ran all tests in our 10 GbE testbed with direct connections between the involved servers.
Fig. 2. Latency over the offered load [kpps] for 4, 8, 16, and 32 clients.

Fig. 3. Histogram of latencies for 4 clients, default buffer size, 30 kpps offered load.

Fig. 4. Latency and throughput over the buffer size [KiB] (log2 scale) for 4, 8, 16, and 32 clients.
EVALUATION
Fig. 5. Average, 95th percentile, and 99.9th percentile latency over the offered load [kpps] for the Linux and VM deployments.

Fig. 6. Histogram of VM latencies for 4 clients, default buffer size, 30 kpps offered load.

Fig. 7. Latency over the offered load [kpps] for 4 and 32 clients with Linux and DPDK.

Fig. 8. Histogram of latencies (probability [%] over latency [µs]).
B. Virtual Machines
Server virtualization and cloud computing are nowadays
commonplace techniques to simplify server management and
increase availability [19]. Virtualization, however, comes with increased
latency for networking applications: a packet now needs to
be processed by two operating systems, the hypervisor and
the virtualized guest OS.
Figure 5 shows how the latency changes when the game
server is moved into a VM. The scenario with 4 clients and
64 KiB buffer size is chosen as a representative example; other
configurations follow a similar pattern. The peak in latency at
around 20 kpps is visible in all configurations with VMs. It
is caused by an increase in the system load as the dynamic
adaptation of the interrupt rate starts too late in this high-load
scenario.
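The interrupt rate adaptation referred to here is the driver's interrupt moderation, which Linux exposes through the ethtool coalescing interface. As a minimal sketch (not part of the measurement setup; the interface name eth0 is a placeholder), the current settings can be read through the SIOCETHTOOL ioctl:

    #include <linux/ethtool.h>
    #include <linux/sockios.h>
    #include <net/if.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Any socket can carry the ethtool ioctl. */
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        struct ethtool_coalesce ec = { .cmd = ETHTOOL_GCOALESCE };
        struct ifreq ifr;
        memset(&ifr, 0, sizeof ifr);
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* placeholder NIC */
        ifr.ifr_data = (char *)&ec;

        if (ioctl(fd, SIOCETHTOOL, &ifr) == 0) {
            /* use_adaptive_rx_coalesce enables the dynamic interrupt rate,
             * rx_coalesce_usecs is the static moderation interval. */
            printf("adaptive-rx: %u, rx-usecs: %u\n",
                   ec.use_adaptive_rx_coalesce, ec.rx_coalesce_usecs);
        }

        close(fd);
        return 0;
    }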
The graph also shows that the 99.9th percentile of the
latency increases by a disproportionately large factor. This
is also visible in the histogram of the observed latencies
shown in Figure 6. The probability distribution is now a long-tailed
distribution, which negatively affects the 99.9th percentile
latencies. This effect in VMs has also been observed by Xu
et al. [20].
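As an aside, a percentile such as the 99.9th can be read directly from the sorted latency samples; a minimal sketch of this calculation (the sample values below are placeholders, not measured data):

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_double(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    /* Returns the p-th percentile (0 <= p <= 100) of n samples by sorting
     * them and picking the value closest to position p% through the
     * sorted array. */
    static double percentile(double *samples, size_t n, double p)
    {
        qsort(samples, n, sizeof *samples, cmp_double);
        size_t idx = (size_t)(p / 100.0 * (double)(n - 1) + 0.5);
        return samples[idx];
    }

    int main(void)
    {
        /* Placeholder latency samples in microseconds. */
        double lat[] = { 41.2, 44.7, 39.9, 43.1, 250.4, 42.5, 40.8, 45.0 };
        size_t n = sizeof lat / sizeof lat[0];
        printf("99.9th percentile: %.1f us\n", percentile(lat, n, 99.9));
        return 0;
    }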
TABLE I.

Deployment   Buffer [KiB]   Load* [%]   Average [µs]
Linux        16             20          44.7
                            90          338.3
                            110         414.7
Linux        64             20          40.7
                            90          142.6
                            110         1240.3
Linux VM     16             20          209.3
                            90          460.3
                            110         1006.8
Linux VM     64             20          207.6
                            90          940.8
                            110         4312.6
DPDK         N/A            20†         10.6
                            90†         12.3
                            99†         15.2

*) Normalized to the load at which the first drops were observed, i.e., a load > 100% indicates an overload scenario.
†) Normalized to the load at which the output hits the 10 GBit/s line rate.
can be sent with a single call into the DPDK library, which
directly passes them to the NIC without involving the
OS. DPDK also avoids expensive context switches (cf. Section II-B2), which contributes to the improvement. The per-packet processing costs are therefore significantly lower in
this scenario, achieving a higher throughput and a lower
latency at the same time.
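A minimal sketch of such a batched send, assuming the usual DPDK initialization (rte_eal_init, port and Tx queue setup) has already been done and that pkts holds prepared response mbufs; this illustrates the API, not the code of the evaluated server:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Hands a whole batch of prepared packets to the NIC's Tx queue with
     * a single library call; the kernel, its UDP stack, and the socket
     * buffers are not involved. */
    static void send_batch(uint16_t port_id, uint16_t queue_id,
                           struct rte_mbuf *pkts[], uint16_t nb_pkts)
    {
        uint16_t sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);

        /* Packets the Tx queue could not accept must be freed or retried. */
        for (uint16_t i = sent; i < nb_pkts; i++)
            rte_pktmbuf_free(pkts[i]);
    }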
D. Comparison
VI. CONCLUSION
ACKNOWLEDGMENTS
This research has been supported by the DFG as part
of the MEMPHIS project (CA 595/5-2), the KIC EIT ICT
Labs on SDN, and the BMBF under EUREKA-Project SASER
(01BP12300A).
REFERENCES