Paper Title: Revisiting Resource Pooling: The Case for In-Network Resource Sharing
Authors: Ioannis Psaras, Lorenzo Saino, George Pavlou (University College London)
Presenter: Ioannis Psaras
Paper Link: http://conferences.sigcomm.org/hotnets/2014/papers/hotnets-XIII-final109.pdf
The resource pooling principle is leveraged to manage shared resources in networks, with the main goals of maintaining stability and guaranteeing fairness. TCP effectively deals with uncertainty by suppressing demand and moving traffic only as fast as the path's slowest link allows. The approach taken in this paper is instead to push as much traffic as possible into the network; once a bottleneck is hit, traffic is stored temporarily in router caches and detoured accordingly. Note that the in-network storage (caches) is not used to hold the most popular content; instead, it temporarily stores incoming content. The assumptions are: 1) contents are named, and 2) clients request content at the network layer. In this approach, clients, rather than senders, regulate the traffic pushed into the network. Fairness and stability are achieved in three phases: 1) a push-data phase, 2) a cache & detour phase, and 3) a back-pressure phase. The evaluation shows high availability of detour paths in real topologies.
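The three-phase mechanism can be sketched as a per-router forwarding decision. This is a minimal illustrative sketch, not the paper's implementation; the `Router` class, its fields, and the return labels are all hypothetical names chosen for clarity.

```python
from collections import deque

class Router:
    """Hypothetical sketch of the three-phase logic: push, cache & detour, back-pressure."""

    def __init__(self, cache_capacity, link_capacity):
        self.cache = deque()                  # temporary in-network storage (not a popularity cache)
        self.cache_capacity = cache_capacity
        self.link_capacity = link_capacity    # free slots on the default output link
        self.detours = []                     # alternative next-hop paths, if any exist

    def forward(self, packet):
        # Phase 1: push data -- use the default path while it has capacity.
        if self.link_capacity > 0:
            self.link_capacity -= 1
            return "pushed"
        # Phase 2: cache & detour -- store the packet temporarily and send it
        # over an alternative path when one is available.
        if len(self.cache) < self.cache_capacity:
            self.cache.append(packet)
            return "detoured" if self.detours else "cached"
        # Phase 3: back-pressure -- storage is exhausted, so signal upstream
        # (ultimately the client) to slow down.
        return "back-pressure"
```

Under this sketch, back-pressure only kicks in once both the default link and the temporary storage are saturated, matching the summary's ordering of the three phases.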
Questions:
Q: In the table that shows the available detour paths in real topologies, does 2-hop detour availability mean that there is no 1-hop detour but there is a 2-hop one?
A: Yes
Q: (Brighten Godfrey, UIUC) Do you think putting storage at switches in a datacenter and using it in the way you suggested would yield better flow completion times?
A: Yes, it makes sense. The approach could fit datacenters.