Masoud Moshref (USC)
Minlan Yu (USC)
Ramesh Govindan (USC)
Amin Vahdat (Google, Inc.)
The growth of data centers in scale, speed, and link utilization calls for monitoring systems that detect events programmatically, precisely, and quickly. Such active monitoring aids network management tasks by maintaining availability and performance and by providing security. The paper presents Trumpet, an event monitoring system in which users provide a set of event definitions, which the controller installs as triggers on end-hosts. The end-hosts evaluate the triggers against incoming packets and send the evaluation results back to the controller, which aggregates them. End-hosts are chosen for trigger evaluation because they are easily programmable, have sufficient processing power for fine-timescale monitoring, and already inspect every packet. Events are defined in terms of packet drops, delays, and duplicates; examples include transient congestion, burst loss, load imbalance, traffic surges, and link failures.
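To make the event abstraction concrete, here is a minimal C sketch of what a submitted event definition might carry; the struct, its field names, and the burst-loss example are hypothetical illustrations, not Trumpet's actual interface:

    #include <stdint.h>

    /* Hypothetical shape of an event definition a user submits to the
     * controller; fields are illustrative, not Trumpet's real syntax. */
    struct event_def {
        uint32_t filter_prefix;        /* filter: match packets by IP prefix   */
        uint32_t filter_mask;
        uint64_t predicate_threshold;  /* predicate: e.g., #losses > threshold */
        uint32_t time_granularity_ms;  /* temporal aggregation window          */
        int      per_flow;             /* spatial granularity: 5-tuple vs host */
    };

    /* Example: flag burst loss on 10.0.1.0/24 at 10 ms granularity. */
    static const struct event_def burst_loss = {
        .filter_prefix       = 0x0A000100u,   /* 10.0.1.0 */
        .filter_mask         = 0xFFFFFF00u,   /* /24      */
        .predicate_threshold = 10,
        .time_granularity_ms = 10,
        .per_flow            = 1,
    };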
Trumpet provides an event definition language that allows users to define network events by specifying a filter, which identifies the set of target packets over which a predicate is evaluated; users can also specify the spatio-temporal granularity of aggregation. Trumpet consists of two components: the Trumpet Event Manager (TEM) at the controller and a Trumpet Packet Manager (TPM) at each end-host. Users submit events to the TEM, which determines the set of end-hosts that should monitor them and installs the triggers at the respective TPMs. The TPM is co-located with the software switch on a single core, which conserves CPU resources and avoids inter-core synchronization. The TPM operates in two phases: 1. match-and-scatter (incoming packets are matched to a 5-tuple flow and per-flow statistics are stored) and 2. gather-test-and-report (at the specified time granularity, the statistics are gathered and the predicates evaluated). The match phase is optimized for cache and TLB efficiency. The second phase runs off the packet path; scheduled improperly, it could itself delay packet processing, so to bound this delay Trumpet runs the off-path phase when the packet queue is short and stays on-path otherwise.
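Below is a rough C sketch of the two phases under assumed simplifications: a direct-mapped flow table, a single loss counter as the only per-flow statistic, and a fixed queue watermark. None of these details are from the paper; they only illustrate the split between the per-packet path and the deferred sweep:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define FLOW_TABLE_SIZE 65536          /* illustrative size, power of two */

    /* Per-flow state written on the packet path (phase 1). */
    struct flow_stats {
        uint8_t  key[13];                  /* 5-tuple: IPs, ports, protocol */
        uint64_t pkts, bytes, losses;
    };

    static struct flow_stats flow_table[FLOW_TABLE_SIZE];

    /* Simple FNV-1a hash over the 5-tuple key. */
    static uint32_t hash_key(const uint8_t key[13])
    {
        uint32_t h = 2166136261u;
        for (int i = 0; i < 13; i++)
            h = (h ^ key[i]) * 16777619u;
        return h;
    }

    /* Phase 1: match-and-scatter, run inline for every packet. */
    static void match_and_scatter(const uint8_t key[13], uint32_t pkt_len)
    {
        struct flow_stats *fs =
            &flow_table[hash_key(key) & (FLOW_TABLE_SIZE - 1)];
        if (memcmp(fs->key, key, sizeof fs->key) != 0) { /* slot reused */
            memcpy(fs->key, key, sizeof fs->key);
            fs->pkts = fs->bytes = fs->losses = 0;
        }
        fs->pkts++;
        fs->bytes += pkt_len;
    }

    /* Phase 2: gather-test-and-report, scheduled off the packet path.
     * It runs only when the NIC queue is short, so sweeping the table
     * does not delay packet processing. */
    static void gather_test_and_report(uint32_t nic_queue_len,
                                       uint32_t low_watermark,
                                       uint64_t loss_threshold)
    {
        if (nic_queue_len > low_watermark)
            return;                        /* queue building up: stay on-path */
        for (uint32_t i = 0; i < FLOW_TABLE_SIZE; i++) {
            if (flow_table[i].losses > loss_threshold)
                printf("report flow slot %u to TEM\n", i); /* stand-in report */
        }
    }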
Trumpet is implemented in C, and the evaluation is done on a 10-core Intel machine with 10G NICs. Trumpet can process 14.8 Mpps of 64-byte packets at 10G (the line rate for minimum-size frames, since each occupies 84 bytes on the wire including preamble and inter-frame gap) and 650-byte packets at 4x10G, while evaluating 4K triggers at 10 ms granularity. At moderate packet rates, it can detect events at 1 ms granularity. At full packet rate, it can process 16K triggers without delaying packets by more than 10 microseconds.
Q&A:
Q: You mentioned that Trumpet runs on the same CPU as the virtual switch; is it programmed into the virtual switch itself, or is it separate?
A: If the software switch runs on multiple cores, we have not explored the performance impact of that, in particular with respect to inter-core synchronization; I suspect something would be required there. But we have thought about how you would scale Trumpet to larger bandwidth settings: one of the basic things would be to devote multiple CPUs and design your data structures so that you could do share-nothing processing.
Q: What if you have a 10 ms congestion event at one of the switches inside the data center? Then you have to go through all the flows to find the endpoint of the congestion, so the problem becomes much harder, right? I am assuming you don't always know the exact route and don't have any path information.
A: That's a problem we would have to solve. Certainly what you could do is proactively install these events on all the servers, and the kinds of numbers I presented would let you scale to that. There is an underlying assumption, though, that the controller has some routing information that lets you find at least the potential locations where you should install these events. It is not something we have looked at, but it is a capability that would be necessary.