Authors: Arjun Singh (Google), Joon Ong (Google), Amit Agarwal (Google), Glen Anderson (Google), Ashby Armistead (Google), Roy Bannon (Google), Seb Boving (Google), Gaurav Desai (Google), Bob Felderman (Google), Paulie Germano (Google), Anand Kanagala (Google), Jeff Provost (Google), Jason Simmons (Google), Eiichi Tanda (Google), Jim Wanderer (Google), Urs Hoelzle (Google), Stephen Stuart (Google), Amin Vahdat (Google)
Presenter: Arjun Singh
Link to the Public Review by Ming Zhang
Summary:
Large-scale datacenters operated by companies like Google, Facebook, and Amazon support hundreds of thousands of servers in a single facility. It is often useful to ask: how did they address the key challenges of scalability, manageability, cost, and evolvability as their datacenter networks grew over time? This paper studies the evolution of Google's datacenter network over the last decade.
Arjun highlighted three themes common to all five generations of Google's datacenter networks: 1) the use of Clos topologies for scalable performance and failure resilience, 2) the use of centralized protocols for managing operational complexity, and 3) the use of inexpensive off-the-shelf commodity switching devices.
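To make the first theme concrete, here is a minimal back-of-the-envelope sketch of how a two-tier leaf/spine (folded-Clos) fabric built from identical commodity switches scales. This is not Google's tooling; the switch radix, link speed, and leaf count below are made-up assumptions chosen only to illustrate the scaling relationship.

```python
# Illustrative sketch: host count and bisection bandwidth of a simple
# leaf/spine (folded-Clos) fabric built from identical commodity switches.
# All parameters are hypothetical, not Jupiter's actual configuration.

def clos_capacity(radix: int, link_gbps: float, num_leaves: int) -> dict:
    """Size a 1:1 (non-oversubscribed) leaf/spine Clos fabric.

    Each leaf switch splits its ports evenly: half face hosts (downlinks),
    half face spine switches (uplinks).
    """
    downlinks = radix // 2          # leaf ports facing hosts
    uplinks = radix - downlinks     # leaf ports facing spines
    num_spines = uplinks            # one spine per leaf uplink

    hosts = num_leaves * downlinks
    # Bisection bandwidth: aggregate leaf-to-spine capacity.
    bisection_gbps = num_leaves * uplinks * link_gbps
    return {
        "hosts": hosts,
        "spine_switches": num_spines,
        "bisection_tbps": bisection_gbps / 1000,
    }


if __name__ == "__main__":
    # Hypothetical example: 64-port switches, 40 Gb/s links, 64 leaf switches.
    print(clos_capacity(radix=64, link_gbps=40, num_leaves=64))
```

Scaling further means either using higher-radix switches or adding another Clos stage, which is roughly the trajectory the talk described across the five generations.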
Arjun showed how Google progressed from Firehose 1.0, which provided a few Tbps of aggregate capacity in 2004, to Jupiter, which supports up to 1.3 Pbps. He walked through how these three themes enabled Google to achieve high performance, high availability, and ease of network management at relatively low cost. Finally, he talked about managing small on-chip buffers by leveraging ECN and DCTCP, and about providing high reliability through redundancy and diversity.
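The point about shallow on-chip buffers rests on DCTCP's behavior of backing off in proportion to the fraction of ECN-marked packets, rather than halving the window on any congestion signal. Below is a rough sketch of the sender-side update rule from the DCTCP paper; it is not Google-specific code, and the gain constant and example numbers are illustrative assumptions.

```python
# Sketch of DCTCP's sender-side reaction to ECN marks (per the DCTCP paper):
# alpha estimates the fraction of marked packets, and the congestion window
# is reduced in proportion to alpha instead of halved as in standard TCP.

G = 1.0 / 16.0  # EWMA gain for the marking-fraction estimate (typical value)

def update_alpha(alpha: float, marked: int, total: int) -> float:
    """Update the moving estimate of the ECN-marked fraction once per RTT."""
    frac = marked / total if total else 0.0
    return (1 - G) * alpha + G * frac

def react_to_marks(cwnd: float, alpha: float) -> float:
    """Scale the congestion window back in proportion to observed marking."""
    return cwnd * (1 - alpha / 2)

# Example: 10% of packets marked in the last RTT, cwnd of 100 segments.
alpha = update_alpha(alpha=0.0, marked=10, total=100)
print(react_to_marks(cwnd=100, alpha=alpha))  # gentle reduction to ~99.7
```

Because lightly marked flows cut their windows only slightly, queues stay short without collapsing throughput, which is what makes small-buffer commodity switch chips workable.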
Q/A (Paraphrased):
Q. You used DCTCP. What did you do for virtualized environments where you didn't have control over the end-host stack?
A. I'm not too sure, and I can't provide more details.
Q. It seems we are relearning what we learned 30 years ago. Is it that earlier we focused on Clos-type interconnections within a single switch, and now we are building distributed datacenters structured as Clos networks with centralized software managing them?
A. Yes.
Q. What gains did DCTCP bring to your datacenter?
A. I don't know the precise numbers, but the gains were quite significant.
Q. We in academia do not have access to a lot of things that companies like Google have. Would you like to comment on that?
A. Well, that is a good question. We also struggled with large-scale evaluation. As a result, we had to rely on virtualized testbed environments for testing new protocols and systems.
Q. Do you still have congestion hotspots? Is there more thirst for intra-datacenter bandwidth?
A. Even though we removed most of the congestion hotspots over time, the need for bandwidth is constantly growing.