
Methodologies and Tools for the Evaluation of Transport Protocols in the Context of High-Speed Networks

Romaric Guillier 1
1 RESO - Protocols and software for very high-performance networks
Inria Grenoble - Rhône-Alpes, LIP - Laboratoire de l'Informatique du Parallélisme
Abstract: Over the last twenty years, there has been a large increase both in the number of interconnected hosts and in link capacities. At the beginning of the ARPANET in 1969, more than forty years ago, there were four nodes, all located in the USA and connected by 50 kilobit per second links. In 1986, when the NSFNET, the network of the US universities, was created, the number of hosts had grown to a few thousand, but the backbone links still ran at 56 kilobits per second. From that moment on, the growth in the number of interconnected hosts and in link capacity has been exponential, thanks to the success of popular applications like the WWW and to technological advances like SDH, SONET or DWDM in the fiber-optics domain, or ATM and the DSL technologies derived from it. The current-day Internet interconnects billions of users (half a billion hosts in late 2006) all around the world, with a cumulative core link speed of several Terabits per second and access links of up to 10 Gbps.

Another fundamental technological change is that access links now reach speeds of the same order of magnitude as backbone links. For instance, it is common to find data centers whose nodes are interconnected with 1 Gbps or 10 Gbps links and have a connection to the outside at a similar speed. This change is important because only a few such connections are needed to saturate the backbone. The situation is likely to spread as DSL technologies are gradually replaced by Fiber-To-The-Home (FTTH): 5 Mbps connections aggregated on OC-48 (2.5 Gbps) links will be replaced by 100 Mbps to 1 Gbps access links aggregated over 10 Gbps links or less, depending on the optical technology used in the access network. In such environments, high congestion levels might not be rare and might disrupt high-end applications.

Many promising applications have emerged that would benefit from this increase in capacity, such as Video on Demand (VoD), which lets end-users access high-definition movie entertainment whenever they want, or scientific experiments that gather huge volumes of data (e.g. the LHC) that must be transported to computational facilities scattered all around the globe. The interaction of users with remote objects through communication networks, also known as telepresence, is starting to play a paramount role in domains ranging from medical and military applications to space exploration.

These applications, however, are still built on top of very successful software and protocols, like TCP and IP, that were designed a long time ago for low-speed networks. Their design was dictated by one goal: providing a robust and fair interconnection between existing networks. Placed in a high-speed context, these extremely robust and scalable protocols still deliver, but they are starting to show their limits in terms of performance, security, functionality and flexibility. The most typical example is the quasi "one-size-fits-all" TCP protocol at the transport layer. TCP was designed to provide a connection-oriented "virtual circuit" service for applications like remote login or file transfer. These two applications have nearly conflicting requirements: the first needs low delay and little bandwidth, while the second mostly requires high bandwidth. Both require reliable transmission.
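To make the aggregation shift concrete, here is a back-of-the-envelope sketch (not part of the thesis; the `links_to_saturate` helper is purely illustrative) that counts how many fully loaded access links would fill a typical aggregation link, using the speeds quoted in the abstract:

```python
# Illustrative sketch, not from the thesis: how many access links does it take
# to saturate an aggregation link, before and after the DSL-to-FTTH transition?
# Link speeds are the ones quoted in the abstract.

from math import ceil

def links_to_saturate(access_bps: float, backbone_bps: float) -> int:
    """Minimum number of fully loaded access links that fill the backbone link."""
    return ceil(backbone_bps / access_bps)

# DSL era: 5 Mbps subscribers aggregated on an OC-48 (2.5 Gbps) link.
print(links_to_saturate(5e6, 2.5e9))      # 500 connections needed
# FTTH era: 100 Mbps or 1 Gbps subscribers aggregated on a 10 Gbps link.
print(links_to_saturate(100e6, 10e9))     # 100 connections
print(links_to_saturate(1e9, 10e9))       # only 10 connections
```

With gigabit access, a handful of simultaneous transfers is enough to congest the shared link, which is the situation the abstract warns about.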
TCP was designed as a "one-size-fits-all" protocol, but as link capacities increased in wired networks, as intrinsic characteristics changed in wireless networks, and as application requirements diverged, TCP became unable to scale up in performance. Multiple solutions have been proposed over time, but today's protocol designers face a major disadvantage compared to those of forty years ago: making four similar computers interact is not the same thing as designing communication software or a protocol fit to be used by billions of vastly heterogeneous computers. Introducing change in a system of such a scale is problematic: a complete overnight migration of every element in the network is highly improbable, especially if it requires upgrading all physical equipment. Change can therefore only happen gradually, through software updates in the end-hosts at the very edge of the network, which is easier than replacing core hardware elements. The fear of seeing congestion collapse events in the Internet, as in the late 80s, is coming back. Caused by too much traffic on crowded links and leading to poor performance for everybody over long periods of time, congestion collapse is what motivated the introduction of a congestion control mechanism in the transport protocol. If new solutions behave selfishly when sharing the common resources (the links), this fear is well grounded. It is therefore necessary to provide ways to assess whether a given solution has the appropriate properties before allowing it to roam freely over a large-scale shared system.
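One concrete way to see why standard TCP stops scaling is the well-known square-root response function of Mathis et al., throughput ≈ (MSS/RTT) · C/√p. The sketch below (illustrative assumptions: 1460-byte MSS, 100 ms RTT, 10 Gbps target; these figures are not taken from the abstract) inverts it to find the loss rate a standard TCP flow could tolerate while still filling a high-speed path:

```python
# Illustrative sketch, not from the thesis: invert the Mathis et al. model,
# throughput = (MSS/RTT) * C / sqrt(p), to estimate the maximum packet loss
# rate compatible with a given target throughput.

from math import sqrt

C = sqrt(3.0 / 2.0)          # constant of the Mathis model (~1.22)
MSS_BITS = 1460 * 8          # segment size in bits (assumed 1460-byte MSS)
RTT = 0.100                  # round-trip time in seconds (assumed 100 ms)
TARGET_BPS = 10e9            # target throughput: 10 Gbps

p_max = (C * MSS_BITS / (RTT * TARGET_BPS)) ** 2
print(f"Loss rate must stay below ~{p_max:.1e} to sustain 10 Gbps")
# ~2e-10, i.e. roughly one loss per five billion packets: far lower than what
# real paths provide, which is why standard TCP cannot fill such links.
```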

https://tel.archives-ouvertes.fr/tel-00529664
Contributor: Romaric Guillier
Submitted on: Tuesday, October 26, 2010 - 11:49:40 AM
Last modification on: Monday, October 19, 2020 - 11:11:58 AM
Long-term archiving on: Thursday, January 27, 2011 - 2:51:15 AM

Identifiers

  • HAL Id: tel-00529664, version 1

Citation

Romaric Guillier. Methodologies and Tools for the Evaluation of Transport Protocols in the Context of High-Speed Networks. Networking and Internet Architecture [cs.NI]. École normale supérieure de Lyon - ENS LYON, 2009. English. ⟨NNT : ENSL537⟩. ⟨tel-00529664⟩
