Rui Miao states that layer-4 load balancing is critical to timely service availability and that roughly 40% of datacenter traffic needs load balancing. Current software load balancers incur high cost, need more servers to scale out, and suffer high latency under heavy traffic even with optimization techniques like kernel bypass. They also provide poor performance isolation under attack. These software load balancers can provide Per-Connection Consistency (PCC) but cannot scale with traffic growth. Partial-offloading-based load balancing, on the other hand, can scale with traffic growth but cannot guarantee PCC. The authors propose SilkRoad, which offers the best of both worlds: it is built on switching ASICs that provide multi-Tbps throughput while also guaranteeing PCC.
SilkRoad stores the mapping from the Virtual IP address (VIP) of a service to the Direct IP address (DIP) of a server in the switching ASIC's SRAM, one entry per connection in a connection table. Since DIP updates are frequent in practice, on the order of 100 updates per minute, the challenge is storing millions of connections in a connection table with limited SRAM. SilkRoad employs a novel hashing design to compress the connection table and optimize SRAM usage. The other challenge is performing all operations and ensuring PCC within just a few nanoseconds; SilkRoad uses hardware primitives to handle connections, their state, and their dynamics.
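To make the compression idea concrete, here is a minimal C sketch of a digest-based connection table (the field names and sizes are illustrative assumptions, not the paper's exact layout): each entry stores only a small hash digest of the connection's 5-tuple plus a version index into a per-VIP DIP-pool table, rather than the full 5-tuple and DIP.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical sizes, for illustration only. */
#define TABLE_SIZE  (1u << 20)   /* 1M buckets of connection state   */

/* Compressed connection entry: instead of the full 5-tuple (~37 bytes)
 * plus a full DIP, store a 16-bit digest of the 5-tuple and a small
 * version number that indexes into a per-VIP DIP-pool version table. */
struct conn_entry {
    uint16_t digest;   /* hash digest of the connection 5-tuple */
    uint8_t  version;  /* DIP-pool version this flow was mapped with */
    uint8_t  valid;
};

static struct conn_entry conn_table[TABLE_SIZE];

/* h1 indexes the bucket, h2 is the digest; in hardware both come from
 * independent hash units over the packet's 5-tuple. */
static inline bool lookup(uint32_t h1, uint16_t h2, uint8_t *version_out)
{
    struct conn_entry *e = &conn_table[h1 % TABLE_SIZE];
    if (e->valid && e->digest == h2) {
        *version_out = e->version;  /* pick DIP from this pool version */
        return true;                /* existing flow: PCC preserved */
    }
    return false;                   /* new flow: use current pool version */
}
```

A 16-bit digest plus a few version bits per entry is far smaller than the full 5-tuple, which is what lets millions of connections fit in the limited on-chip SRAM; digest collisions are possible and have to be handled separately.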
SilkRoad is implemented on a programmable switching ASIC in about 400 lines of P4 code. The control-plane functions are written in about 1,000 lines of C on top of the switch driver software. The authors also built a demo on a Tofino programmable switch. By replacing hundreds of software load balancers with a single SilkRoad switch, they achieve the full 6.5 Tbps line rate. Ingress-to-egress processing latency is sub-microsecond while PCC is guaranteed. They also demonstrate resilience against attacks and use hardware rate limiters for performance isolation.
In short, SilkRoad is able to:
- provide a direct hardware path for application traffic to the application servers
- scale with traffic growth using switching ASICs
- optimize SRAM usage for storing connections
- ensure PCC under frequent DIP pool updates (see the sketch after this list)
- achieve 100x to 1000x savings in power and capital cost
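To make the PCC point concrete, here is a rough C sketch (my own simplification, not the paper's exact data structures) of version-pinned DIP pools: existing flows keep resolving against the pool version they were installed with, while new flows use the latest version.

```c
#include <stdint.h>

#define MAX_VERSIONS 4   /* hypothetical: recent versions kept per VIP */
#define MAX_DIPS     64

/* Per-VIP DIP pools, kept for several recent versions so that flows
 * mapped under an old version keep their original DIP (PCC) even
 * after the pool is updated for new flows. */
struct vip_pools {
    uint8_t  latest;                       /* version for new flows        */
    uint32_t refcnt[MAX_VERSIONS];         /* flows pinned to each version */
    uint32_t dip[MAX_VERSIONS][MAX_DIPS];  /* DIPs per version             */
    uint32_t ndips[MAX_VERSIONS];
};

/* Select a DIP: existing flows pass their stored version, new flows the
 * latest one; the hash selects a member within that pool version. */
static uint32_t select_dip(struct vip_pools *v, uint8_t version,
                           uint32_t flow_hash)
{
    return v->dip[version][flow_hash % v->ndips[version]];
}

/* A DIP pool update installs a new version; a full implementation would
 * first wait for refcnt[next] to drop to zero before reusing the slot. */
static void update_pool(struct vip_pools *v, const uint32_t *dips, uint32_t n)
{
    uint8_t next = (uint8_t)((v->latest + 1) % MAX_VERSIONS);
    for (uint32_t i = 0; i < n && i < MAX_DIPS; i++)
        v->dip[next][i] = dips[i];
    v->ndips[next] = n;
    v->latest = next;   /* new flows now map with the updated pool */
}
```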
Some of the questions the authors encountered:
Q: How do you remove connections from a connection table?
A: A hardware timer periodically scans all entries in the connection table and reports inactive entries to software, which then deletes them. This approach needs only a few bits per entry of memory footprint.
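A rough C model of such an aging scan (the single activity bit per entry and the scan-driven reporting are illustrative assumptions, not the paper's exact scheme):

```c
#include <stdint.h>

#define TABLE_SIZE (1u << 20)

/* One activity bit per entry: set by the data plane on every packet,
 * cleared by the periodic scan. An entry that sees no traffic between
 * two scans is reported to software for deletion. */
static uint8_t active_bit[TABLE_SIZE];
static uint8_t installed[TABLE_SIZE];

/* Called on every packet hit (in hardware, a single bit write). */
static inline void touch(uint32_t idx) { active_bit[idx] = 1; }

/* Periodic scan driven by a hardware timer: report entries that saw no
 * traffic since the last sweep, then reset the bits for the next round. */
static void aging_scan(void (*report_idle_to_sw)(uint32_t idx))
{
    for (uint32_t i = 0; i < TABLE_SIZE; i++) {
        if (installed[i] && !active_bit[i])
            report_idle_to_sw(i);   /* software removes the entry */
        active_bit[i] = 0;
    }
}
```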
Q: If a new rack of servers is introduced, how do existing connections change?
A: Different load balancers can route existing connections to the new rack, where they will be treated as new connections; PCC can be violated here, just as it can with software load balancers.
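A tiny C example (not SilkRoad code) of why such remapping breaks PCC under plain hash-mod server selection:

```c
#include <stdio.h>
#include <stdint.h>

/* With stateless hashing, a flow's DIP is hash % pool_size, so growing
 * the pool remaps existing flows and breaks PCC unless per-connection
 * state pins the old choice. */
int main(void)
{
    uint32_t flow_hash = 0xC0FFEEu;        /* stand-in for a 5-tuple hash */
    uint32_t old_pool = 8, new_pool = 12;  /* rack added: 8 -> 12 DIPs    */

    printf("DIP index before update: %u\n", (unsigned)(flow_hash % old_pool));
    printf("DIP index after  update: %u\n", (unsigned)(flow_hash % new_pool));
    /* The two indices differ, so mid-connection packets would reach a
     * different server: a PCC violation. */
    return 0;
}
```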