RTT (Round Trip Time): aka – Why bandwidth doesn’t matter

Myths about Bandwidth and Internet Speed

A great post over on Ajaxian got me thinking today. Why is it that whenever you hear people talking about speed on the internet, they use a single metric? Whether they’re discussing the connection in the datacenter, their residential DSL, or the wireless connection on their mobile device, everyone references the bandwidth of their connection when talking about speed. “Oh, I just got a 20 Mb/s connection at home, it’s blazing fast!” That’s all well and good, and 20 Mb/s is indeed a lot of throughput for a residential connection. Unfortunately for Joe Average, the vast majority of users wouldn’t know what the heck to DO with 20 Mb/s of download speed, and even worse than that… they would likely see absolutely zero increase in performance while doing the one thing most people use their connection for the most: browsing the web.

No one ever seems to mention the true culprit behind slow (or fast) web browsing performance: latency, measured as RTT (Round Trip Time). RTT is the time it takes for your system to send a request to a server and receive a response back, i.e., one complete request loop. This is where the battle for speed when browsing the web is won and lost. A round trip is measured in milliseconds (ms), and it represents how long the trip from you to the server and back takes regardless of file size (this becomes important later in the discussion). Every connection your browser opens for an additional request costs at least this much time; the time it takes to actually download each file is added on top of the RTT.
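
If you want to put a number on your own RTT, here’s a rough Python sketch that approximates it by timing a TCP handshake (which costs one full round trip). The hostname is just a placeholder; swap in any server you like:

    import socket
    import time

    def measure_rtt(host, port=80, samples=5):
        """Estimate RTT by timing TCP handshakes; a connect costs one round trip."""
        times = []
        for _ in range(samples):
            start = time.perf_counter()
            # Opening the connection sends SYN and waits for SYN/ACK: one round trip
            with socket.create_connection((host, port), timeout=5):
                times.append(time.perf_counter() - start)
        return min(times) * 1000  # best of N samples, in milliseconds

    print(f"RTT: {measure_rtt('example.com'):.1f} ms")  # placeholder host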

“Impossible!” you say. “Clearly going from a 10 Mb/s connection to the new, fast, fancy (expensive?) 20 Mb/s connection my provider is so proud to be offering will double my speed on the web! 20 is twice as much as 10, you dullard!” Oh, how wrong you are, dear uneducated internet user. Allow me to illuminate the situation with a brief walk through what actually happens when you browse the web. We’ll skip some of the fine-grained details and all the DNS bits, but here’s the general idea:

Whenever you request a web page, your browser sends a request to a server. That server, assuming you’re an allowed user, sends a response. Assuming you don’t get redirected and are actually served a page, the server sends back a generally simple HTML page: a single, small file containing the HTML code that tells your browser how to render the site. Your computer receives the file, and your browser goes to work doing exactly that, rendering the HTML.

Up to this point people tend to understand the process, at least in broad strokes. What happens next is what catches people out, I think. Once your browser starts rendering the HTML, it is not done loading the page or making requests to the server, not by a long shot. You still haven’t downloaded any of the images or scripts; the references to all of that are contained in the HTML. So as your browser renders the HTML for the given site, it begins sending requests out to the server asking for those bits of content. It makes a new request for each and every image on the page, as well as any other file it needs (script files, CSS files, included HTML files, etc.).
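
To get a feel for how many extra requests a single page triggers, here’s a rough Python sketch that pulls down a page’s HTML and counts the objects it references. It’s deliberately simplified (it ignores things like images referenced from CSS, and the URL is a placeholder), but it shows the shape of the problem:

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class ObjectCounter(HTMLParser):
        """Tally the objects a browser would request after the HTML arrives."""
        SOURCES = {"img": "src", "script": "src", "link": "href", "iframe": "src"}

        def __init__(self):
            super().__init__()
            self.objects = []

        def handle_starttag(self, tag, attrs):
            wanted = self.SOURCES.get(tag)
            for name, value in attrs:
                if name == wanted and value:
                    self.objects.append(value)

    with urlopen("http://www.example.com/") as response:  # placeholder URL
        parser = ObjectCounter()
        parser.feed(response.read().decode("utf-8", errors="replace"))

    print(f"{len(parser.objects)} additional requests beyond the HTML itself")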

Here are the two main points that need to be understood when discussing bandwidth vs. RTT with regard to page load times:

1.) The average web page has over 50 objects that must be downloaded to completely render a single page (reference: http://www.websiteoptimization.com/speed/tweak/average-web-page/).

2.) Browsers cannot (generally speaking) request all 50 objects at once. They will request between 2 and 6 (again, generally speaking) objects at a time, depending on browser configuration.

This means that to receive the objects necessary for an average web page, you will have to wait for around 25 round trips to occur, maybe even more. Assuming a reasonably low 150ms average RTT, that’s a full 3.75 seconds of page loading time before counting the time to download a single file. That’s just the time it takes for the network communication to happen to and from the server. Here’s where the bandwidth vs. RTT discussion takes a turn decidedly in favor of RTT.
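
The arithmetic is easy enough to check yourself. Here’s the same back-of-the-envelope math in Python, using the figures above (50 objects, 2 at a time on the conservative end, 150ms per round trip):

    import math

    objects = 50      # average objects per page, per the reference above
    parallel = 2      # conservative end of the 2-6 simultaneous requests
    rtt = 0.150       # seconds per round trip

    round_trips = math.ceil(objects / parallel)   # 25
    latency_cost = round_trips * rtt              # 3.75 seconds
    print(f"{round_trips} round trips = {latency_cost:.2f} s before any payload")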

See, the file size of most of the files you fetch when browsing the web is so small that bandwidth really isn’t an issue. You’re talking about downloading 30-60 tiny files (roughly 60KB each, on average). Even on a 2 Mb/s connection, which would be considered extremely slow by today’s standards, each of these files would download in a fraction of a second. Since you can’t download more than a few at a time, you couldn’t even make full use of a 2 Mb/s connection in most situations. So how do you expect going from 10 Mb/s to 20 Mb/s to actually increase the speed of browsing the web, when you couldn’t even saturate a 2 Mb/s connection? The answer is: you shouldn’t.
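
To make that concrete, here’s a quick comparison of the raw transfer time for one roughly-60KB file at a few different bandwidths, set against that fixed 150ms round trip:

    file_bits = 60 * 1024 * 8   # one ~60KB file, per the rough average above
    rtt_ms = 150                # fixed cost per request, regardless of bandwidth

    for mbps in (2, 10, 20):
        transfer_ms = file_bits / (mbps * 1_000_000) * 1000
        print(f"{mbps:>2} Mb/s: {transfer_ms:6.1f} ms transfer + {rtt_ms} ms RTT")

Doubling your bandwidth from 10 to 20 Mb/s shaves roughly 25ms off each file, while the 150ms round trip doesn’t budge.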

Sure, if you were downloading huge files, then bandwidth would be king, but for many small files in series, it does almost nothing. You still have to open 50 new connections, each of which carries a built-in 150ms of latency that can’t be avoided before the download even begins. However, if you could lower your latency, the RTT from you to the server, from 150ms down to 50ms, suddenly you’re shaving a full 2.5 seconds off the inherent delay of every page load. Talk about snappier page loads…that’s a huge improvement.
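
Again, simple math:

    round_trips = 25
    saved = round_trips * (0.150 - 0.050)   # dropping RTT from 150ms to 50ms
    print(f"{saved:.2f} s saved per page load")  # 2.50 s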

Now, of course, I realize there are lots of things in place to make latency and RTT less of an issue: advanced caching, progressive rendering of content where applicable so browsers don’t have to wait for ALL the content to finish downloading before the page starts rendering, etc. Those are all great, and they help alleviate the pain of higher-latency connections, but the reality is that for today’s internet user, bandwidth is very rarely the bottleneck when simply browsing the web. Adding more bandwidth will not, in almost all cases, increase the speed with which you can load websites.

Bandwidth is king, of course, for multi-tasking on the web. If you’re the type to stream a video while downloading audio while uploading pictures while browsing the web while playing internet-based games while running a fully functioning (and legal) torrent server out of your house…well then…you might want to stock up on bandwidth. But don’t let yourself be fooled into thinking that paying for more bandwidth will, in and of itself, speed up plain old web browsing.

#Colin

More Stories By Colin Walker

Coming from a *Nix software engineering background, Colin is no stranger to long hours of coding, testing and deployment. His personal experiences, such as on-stage performance and the like, have helped foster the evangelist in him. These days he splits his time between coding, technical writing and evangelism. He can be found on the road to just about anywhere, preaching the good word about ADCs, Application Aware networking, Network Side Scripting and geekery in general to anyone who will listen.

Colin currently helps manage and maintain DevCentral (http://devcentral.f5.com). He is also a contributor in many ways, from articles to videos to numerous forum posts to iRules coding and whatever else he can get his hands on that might benefit the community and help it continue to grow.
