Yotam Harchol presents OpenBox, a software-defined framework for the network-wide deployment and management of network functions (NFs).
Harchol points out that network functions have traditionally been implemented by middleboxes sold as monolithic boxes. He argues that while the move to software-based, virtualized NFs brings benefits such as on-demand scaling, each individual NF must still be managed separately: each NF may have its own management interface and may be administered by a different administrator. This makes it hard to manage different NFs efficiently and to evolve new ones; their work is therefore aimed at simplifying this management task and making it easier to innovate and develop new NFs.
To achieve this goal, Harchol argues that we need to decouple the control plane of an NF from its data plane so that control can be centralized. This allows the control functions to be carried out by a single entity in one place. However, it requires the data plane to be programmable.
To realize this design, the team proposes OpenBox, which is built on top of a new communication protocol. The key enabler behind OpenBox is that while different NFs may have different control logic, they have similar data planes. In the OpenBox universe, the different NFs are called OpenBox applications. OpenBox works by combining the functionality of different applications: it first decomposes each application into a processing graph and then merges the processing graphs of all the applications into a single processing graph.
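To make the decompose-and-merge idea concrete, here is a minimal sketch in Python of how two applications' processing graphs could be merged by sharing a common prefix of blocks. The Block/chain/merge names and the specific block types here are hypothetical illustrations under that assumption; the paper's actual merge algorithm is more general (it handles arbitrary graphs and rule sets, not just linear chains).

```python
class Block:
    """One packet-processing step in an application's processing graph."""
    def __init__(self, kind):
        self.kind = kind
        self.next = []  # outgoing edges in the graph

def chain(*kinds):
    """Build a linear processing graph from a sequence of block kinds."""
    blocks = [Block(k) for k in kinds]
    for a, b in zip(blocks, blocks[1:]):
        a.next.append(b)
    return blocks

# Each OpenBox application declares its data-plane logic as a graph.
firewall = chain("FromDevice", "HeaderClassifier", "Drop")
loadbal  = chain("FromDevice", "HeaderClassifier", "RewriteHeader", "ToDevice")

def merge(g1, g2):
    """Merge two linear graphs by sharing their common prefix, so shared
    work (e.g. classification) runs once per packet; the point where the
    applications diverge becomes a branch. Assumes the two graphs share
    at least their ingress block."""
    shared, i = [], 0
    while i < min(len(g1), len(g2)) and g1[i].kind == g2[i].kind:
        shared.append(Block(g1[i].kind))
        i += 1
    for a, b in zip(shared, shared[1:]):
        a.next.append(b)
    if i < len(g1):
        shared[-1].next.append(g1[i])
    if i < len(g2):
        shared[-1].next.append(g2[i])
    return shared[0]

def show(block, depth=0):
    print("  " * depth + block.kind)
    for nxt in block.next:
        show(nxt, depth + 1)

show(merge(firewall, loadbal))
# FromDevice
#   HeaderClassifier
#     Drop
#     RewriteHeader
#       ToDevice
```

The payoff of the merge is visible in the printed graph: the device input and the classifier are traversed once instead of once per application.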
The data plane consists of OpenBox instances (OBIs), which perform various packet-processing tasks received from the centralized OpenBox controller. The control plane comprises the OpenBox controller, which sets the processing tasks and controls the provisioning and scaling of OBIs. Communication between the OpenBox controller and the OBIs is done using the new protocol, which provides a set of messages for communication and a set of processing blocks that can be joined together to build a network function.
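As a rough illustration of this style of protocol, the sketch below shows what a controller-to-OBI message installing a processing graph might look like. The message type, field names, and block types are assumptions made for illustration, not the protocol's actual wire format; the OpenBox specification defines the real message set and block library.

```python
import json

# Hypothetical controller-to-OBI message installing a processing graph.
# Field names and block types are illustrative, not the real protocol.
set_processing_graph = {
    "type": "SetProcessingGraph",  # install/replace the OBI's graph
    "xid": 17,                     # id for matching the OBI's reply
    "blocks": [
        {"id": "in",   "type": "FromDevice",
         "config": {"device": "eth0"}},
        {"id": "cls",  "type": "HeaderClassifier",
         "config": {"rules": ["tcp and dst port 80", "otherwise"]}},
        {"id": "out",  "type": "ToDevice",
         "config": {"device": "eth1"}},
        {"id": "drop", "type": "Discard", "config": {}},
    ],
    "connectors": [
        {"src": "in",  "src_port": 0, "dst": "cls"},
        {"src": "cls", "src_port": 0, "dst": "out"},   # rule 0 matched
        {"src": "cls", "src_port": 1, "dst": "drop"},  # everything else
    ],
}

print(json.dumps(set_processing_graph, indent=2))
```

Expressing a graph as declarative blocks-plus-connectors is what lets the controller merge, rewrite, and redeploy applications without the OBIs knowing which application a given block came from.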
To evaluate OpenBox, the team implemented three applications, a firewall, a web cache, and a load balancer, and compared them with a static pipeline-based NF implementation. They show that the throughput of a pipeline-based deployment is limited by the throughput of its slowest NF; with OpenBox this restriction is removed, since all OpenBox applications execute on the same OBI. Running on a single OBI also means lower latency than the traditional pipeline-based approach. Harchol also shows that OpenBox is scalable and reliable: if a service instance goes down, the controller can react and take the necessary steps to replace it.
The following questions were put forth after the talk:
Q: How does OpenBox perform fault tolerance, especially if a replica breaks down in the middle of processing a packet?
A: The current version of OpenBox does not handle this scenario by itself. However, we have thought about a mechanism for transactions, but we haven't implemented it yet.
Q: How would you take previous work on fault-tolerant middleboxes and add it to OpenBox?
A: We haven't gotten to fault tolerance yet, but we have a few ideas that we can employ for it.
Q: How is your work different from previous work?
A: Previous work is mainly focused on placement and resource sharing on the same physical machine, but it doesn't decouple the control plane from the data plane, and it requires you to use a specific language. With OpenBox you can implement different control planes with different data planes.