How to Speed Up sites like vancouver2010.com by more than 50% in 5 minutes

a standard approach in order to get to a high-level analysis result in 5 minutes

Many Web Sites that use JavaScript frameworks to make the site more interactive and more appealing to the end user suffer from poor performance. Over the past couple of months I’ve been contacted by users of our free dynaTrace AJAX Edition asking me to help them analyze their problems. In doing so, I’ve developed a standard approach in order to get to a high-level analysis result in 5 minutes.

As the Winter Olympics are a hot topic right now I checked out vancouver2010.com to see if they have any potential to improve their web site performance. It seems I found a perfect candidate for this 5 minutes guide :-)

Minute 1: Record your dynaTrace AJAX Session

Before I start recording a session I always turn on argument capturing via the preferences dialog:

Turn on Argument Capturing in the Preferences Dialog

The reason I do that is because I want to see the CSS Selectors passed to the $ or $$ lookup functions from various JavaScript frameworks like jQuery or Prototype. The main problem I’ve identified in my work are CSS Selectors per className that cause huge overhead on pages with many DOM elements. I wrote two blogs about the performance impact of CSS Selectors in jQuery and Prototype.

Now it's time to start tracing. I executed the following scenario:
1. Went to http://vancouver2010.com
2. Clicked on Alpine Skiing
3. Clicked on Schedules & Results
4. Clicked on the results of the February 17th race (that's where we Austrians actually made it onto the podium)

Minute 2: Identify poorly performing pages

After closing the browser, I return to dynaTrace AJAX Edition and look at the Summary View to analyze the individual page load times and to identify whether there is a lot of JavaScript, Rendering or Network time involved. Let’s see what we got here:

Identifying HotSpots on every page

Here is what we can see:
1. Across the board we have high JavaScript execution times. The last page (Schedules & Results) tops the list with almost 7 seconds of pure JavaScript
2. The first page has a large amount of Rendering Time – that is, time spent in the browser's rendering engine
3. Pages 2 and 4 have page load times (time until the onLoad event was triggered) of more than 5 seconds!
4. Page 3 has a very high Network Time although its page load time is not very bad. This means we have content that was loaded after the onLoad event

Minute 3: Analyze Timeline of slowest Page

I pick page 4 as it shows both a very high Page Load time and a very high JavaScript time. I drill down to the Timeline view and analyze the page characteristics:

Where is the time spent on this page?

Here is what I can read from this timeline graph (moving the mouse over these blocks gives me a tooltip with timing and context information):
1. The readystatechange handler takes 5.6 seconds in JavaScript. This handler is used by jQuery and calls all registered load handlers
2. The FB.share script takes 792ms when it gets loaded
3. An XHR request at the very beginning takes 820ms
4. We have about 80 images all coming from the same domain – this could be improved by using multiple domains
5. We have calls to external apps like Facebook, Google Ads, and Google Analytics

Minute 4: Identify poorly performing CSS Selectors

The biggest block is the JavaScript executed in the readystatechange handler. I double-click on it and end up in the PurePath view, showing me the JavaScript trace of this event handler. I navigate to the actual handler implementation which gets called by jQuery. I expand the handler to see the methods it calls and which ones consume the most time. It is not surprising to see a lot of jQuery Selector methods in there using a CSS className to identify the element:

PurePath View showing HotSpots in the onLoad event handlers

I highlighted those calls that have a major impact on the performance of this event handler. You can see that most of the time is actually spent in the $ method that is used to look up elements. Another thing I can see is that they change the class name of the body to “en”, which takes 550ms to execute.
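A cheap mitigation when the same selector is looked up over and over is to cache the result of the first lookup. The sketch below is illustrative plain JavaScript – lookup() stands in for an expensive $() call and the selector string is made up; it is not the site's actual code:

```javascript
// Sketch: cache the result of expensive lookups instead of re-running them.
// lookup() stands in for a $(...) call; the selector names are illustrative.
function makeCachedLookup(lookup) {
  const cache = new Map();
  return function (selector) {
    if (!cache.has(selector)) {
      cache.set(selector, lookup(selector)); // expensive DOM walk, done once
    }
    return cache.get(selector);              // every later call is a cache hit
  };
}

let domWalks = 0;
const $cached = makeCachedLookup(sel => {
  domWalks++;                                // count how often we really "walk the DOM"
  return 'elements for ' + sel;
});

$cached('.schedule-row');
$cached('.schedule-row');
$cached('.schedule-row');
console.log(domWalks); // 1 – the expensive lookup ran once instead of three times
```

Note that cached results go stale if the DOM changes; in practice jQuery code often just stores the result in a local variable (var rows = $('...')) and reuses it within the handler.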

As I am sure there are tons of calls to jQuery Selector Lookups in that JavaScript handler as well as in all other JavaScript handlers on the vancouver2010.com website I open up the HotSpot view. The HotSpot view shows me the JavaScript, DOM Access and Rendering Hotspots across all pages. I am interested in the $ methods only. In the HotSpot view I therefore filter for “$(” and also filter to only show the DOM API (we account the $ method to the DOM API and not to jQuery). Here is what I get after sorting the table by the Total Sum column:

HotSpot View showing all jQuery CSS Selectors and their performance overhead

The problem here is easy to explain. The site makes heavy use of CSS Selectors to look up elements by class name. This type of lookup is not natively supported by Internet Explorer, so jQuery has to iterate through the whole DOM to find those elements. A better solution is to use unique IDs – or at least add the tag name to the selector string. This also helps jQuery, as it first finds all elements by tag name (which is natively implemented and therefore rather fast) and then only has to iterate through those elements. So instead of an average lookup time of between 50ms and 368ms this can be brought down to 5-10ms -> nice performance boost - eh? :-)
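To make the difference concrete, here is a minimal sketch in plain JavaScript. The element list, tag mix, and class names are made up for illustration – they stand in for a page with thousands of DOM nodes:

```javascript
// Hypothetical flat element list standing in for a large DOM.
const dom = [];
for (let i = 0; i < 10000; i++) {
  dom.push({
    tag: i % 50 === 0 ? 'tr' : 'div',
    className: i % 100 === 0 ? 'medalRow' : 'other'
  });
}

// $('.medalRow'): older IE has no native lookup by class name, so the
// selector engine must visit every single element on the page.
function byClass(elements, cls) {
  return elements.filter(el => el.className === cls);
}

// $('tr.medalRow'): getElementsByTagName is native and fast, so only the
// much smaller set of matching tags needs the class check.
function byTagAndClass(elements, tag, cls) {
  const tagged = elements.filter(el => el.tag === tag); // native, fast step
  return tagged.filter(el => el.className === cls);     // small scan
}

console.log(byClass(dom, 'medalRow').length);             // 100 (10,000 elements visited)
console.log(byTagAndClass(dom, 'tr', 'medalRow').length); // 100 (only 200 elements visited)
```

Both lookups return the same elements; the tag-qualified version just does far less work – which is exactly the 50-368ms vs 5-10ms difference described above.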

Minute 5: Identify network bottlenecks

In the timeline I saw many image requests coming from the same domain. As most browsers have a physical network connection limit per domain (e.g., IE7 uses 2), the browser can only download so many images in parallel. All other images have to wait for a physical connection to become available. Drilling into the Network View for page 4, I can see all these 70+ images and how they “have to wait” to be downloaded. Once these images are cached this problem is no longer such a big deal – but for first-time visitors it definitely slows down the page:

Network View showing waiting times for Images

The solution for this problem is the concept of domain sharding. Using two domains to host the images allows the browser to use twice as many physical connections and download more images in parallel. This will speed up the download of those images by 50%.
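A sketch of how sharding can look when the server generates image URLs – the hostnames below are hypothetical placeholders. Hashing the path keeps every image on a stable host across pages, which preserves browser caching:

```javascript
// Sketch of domain sharding with two hypothetical image hosts.
const shards = ['img1.example.com', 'img2.example.com'];

function shardUrl(path) {
  // Simple deterministic hash of the path: the same image always maps to
  // the same shard, so cached copies stay valid across page views.
  let h = 0;
  for (const ch of path) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return 'http://' + shards[h % shards.length] + path;
}

console.log(shardUrl('/img/flags/aut.gif')); // same host every time for this path
```

With the images split roughly in half across the two hosts, the browser can keep twice as many downloads in flight under the per-domain connection limit.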

Conclusion

It is easy to analyze the performance hotspots of any web site out there. This is my approach to identifying the most common problems that I’ve seen in my work. Besides the problems with CSS Selectors and Network Requests, we see problems with poorly performing JavaScript routines (very often from 3rd-party libraries), too many JavaScript files on the page, too many XHRs (XmlHttpRequests) to the server, and slow server responses to those XHR requests. Especially for that last piece we then use our End-To-End Monitoring Solution by integrating the data captured with dynaTrace AJAX Edition with the server-side PurePath data captured with dynaTrace CAPM. Also – check out my blog about why end-to-end performance analysis is important and how to do it.

Feedback on this is always welcome. I am sure you have your own little tricks and processes to identify performance problems on your web sites. Feel free to share them with us at blog.dynatrace.com.

Related reading:

  1. Performance Analysis of dynamic JavaScript Menus
  2. 101 on jQuery Selector Performance
  3. 101 on Prototype CSS Selectors
  4. Ensuring Web Site Performance – Why, What and How to Measure Automated and Accurately
  5. 101 on HTTPS Web Site Performance Impact

More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi
