Presenter: Morley Mao
Co-authors: Junxian Huang, Feng Qian, Yihua Guo, Yuanyuan Zhou, Qiang Xu, Subhabrata Sen, and Oliver Spatscheck
LTE is a fairly new technology, so little is known about the bandwidth and round-trip times that its users experience in commercial networks. Better knowledge of these properties would enable transport layers and applications that are more LTE-friendly.
In this work, they analyzed an anonymized packet header trace from a US metropolitan area comprising 3 TB of LTE traffic. They observed undesired TCP slow starts in 12% of large flows, and they created an algorithm to estimate available bandwidth and bandwidth utilization from the trace.
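The paper's estimation algorithm works on the TCP headers in the trace; as a rough illustration of the general idea (not the authors' actual method), here is a minimal sketch that buckets a flow's bytes into fixed time windows and treats the peak per-window throughput as a crude proxy for available bandwidth. The `packets` list of (timestamp, bytes) pairs is hypothetical.

```python
from collections import defaultdict

def utilization_from_trace(packets, window=1.0):
    """Per-window throughput as a fraction of peak observed throughput.

    A simplified stand-in for trace-based utilization estimation;
    the real algorithm in the paper is more involved.
    """
    if not packets:
        return []
    start = packets[0][0]
    bytes_per_window = defaultdict(int)
    for ts, nbytes in packets:
        bytes_per_window[int((ts - start) / window)] += nbytes
    throughputs = [b / window for b in bytes_per_window.values()]
    available = max(throughputs)  # crude proxy for available bandwidth
    return [t / available for t in throughputs]

# Hypothetical flow that bursts, then trickles: utilization drops sharply.
trace = [(0.1, 500_000), (0.2, 500_000), (1.5, 100_000), (2.5, 100_000)]
print(utilization_from_trace(trace))  # [1.0, 0.1, 0.1]
```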
They found the median bandwidth utilization to be 20%, and that for 71% of the large flows the bandwidth utilization was below 50%. They also found high LTE bandwidth variability, and observed that TCP performance was degraded by a limited receive window. They suggest that these problems could be addressed by updating RTT estimates in the transport layer and by reading data from TCP buffers more quickly in the application layer.
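To see why a limited receive window degrades TCP performance, note that a sender can never have more data in flight than the advertised window, so throughput is capped at roughly rwnd / RTT regardless of how much bandwidth the LTE link offers. A back-of-the-envelope calculation with assumed (not measured) numbers:

```python
# Throughput ceiling imposed by the TCP receive window: at most `rwnd`
# bytes can be unacknowledged at once, so one window drains per RTT.
def rwnd_capped_throughput_mbps(rwnd_bytes, rtt_sec):
    return rwnd_bytes * 8 / rtt_sec / 1e6

# With an assumed 64 KB window and a 70 ms RTT, TCP tops out
# around 7.5 Mbps, well below what an LTE link can deliver.
print(rwnd_capped_throughput_mbps(64 * 1024, 0.070))  # ~7.49 Mbps
```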
Q: Which flavors of TCP were you looking at?
A: Cubic.
Q: Some applications intentionally use small TCP windows; did you investigate that?
A: The application may be doing some kind of rate limiting, so that is a possible explanation for the small window. However, most applications in the trace only opened one TCP connection.
Q: To what degree do your observations depend on the particular LTE network?
A: Our observations are limited by the trace that we have. We did local experiments on two commercial networks and observed similar behavior. Our study is limited to LTE networks in the US.
Q: You seem to put some blame on the carrier for having large buffers. Can this problem be fixed in practice? What should they do?
A: Yes, that is a problem (bufferbloat). The loss rate in these networks is already low, so I am not sure that eliminating large buffers is the only solution.