Characterizing and Measuring Response Time for Web Applications

  • Created: Jun 29, 2011 1:27 PM

What is response time?

Response time, or responsiveness, is simply how quickly an interactive system responds to user input. On the web, definitions vary. Some researchers measure response time from the moment the user submits the request for a page to the moment the page begins to render; others extend that endpoint until the page is fully rendered in the browser. For the purposes of this blog post, we’ll use the latter definition.
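
To make the distinction concrete, here is a minimal sketch (Python standard library only, with a hypothetical URL) that approximates the two endpoints from outside the browser. Time to first byte roughly marks when the page can begin to render, while time to last byte is only a lower bound on “fully rendered,” since the actual rendering work happens in the browser and is invisible to a script like this.

    import time
    import urllib.request

    url = "http://example.com/"  # hypothetical URL; substitute your own page

    start = time.monotonic()
    response = urllib.request.urlopen(url)   # returns once status line and headers arrive
    first_byte = time.monotonic() - start    # rough time to first byte

    body = response.read()                   # download the full HTML document
    last_byte = time.monotonic() - start     # rough time to last byte (HTML only)

    print(f"Time to first byte: {first_byte:.3f} s")
    print(f"Time to last byte:  {last_byte:.3f} s ({len(body)} bytes)")

Note that this only times the HTML document itself; a full page load also includes the images, scripts, and stylesheets discussed below.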

Characterizing the factors that contribute to a system’s responsiveness is not simple, particularly given the complex underpinnings and distributed nature of today’s web applications. This post discusses a simple formula you can use to arrive at a ballpark estimate of your application’s response time and to understand the major factors that influence it.

What is an acceptable response time?

A study from 1998 [1] on the psychological impact of long wait times on the web suggests that response time significantly affects how interesting users find a site’s content. If you are running an e-commerce site, keeping a user’s interest is obviously paramount. But what does a “long wait” really mean?

Research into usability and human-computer interaction has been going on for at least 45 years. Twenty years ago, usability expert Jakob Nielsen cited a famous research study in his book, Usability Engineering, suggesting three psychological limits on response time for interactive applications. The following is taken from a follow-up article written by Nielsen in 2010:

  • 0.1 seconds gives the feeling of instantaneous response — that is, the outcome feels like it was caused by the user, not the computer. This level of responsiveness is essential to support the feeling of direct manipulation.
  • 1 second keeps the user’s flow of thought seamless. Users can sense a delay, and thus know the computer is generating the outcome, but they still feel in control of the overall experience and that they’re moving freely rather than waiting on the computer. This degree of responsiveness is needed for good navigation.
  • 10 seconds keeps the user’s attention. From 1–10 seconds, users definitely feel at the mercy of the computer and wish it was faster, but they can handle it. After 10 seconds, they start thinking about other things, making it harder to get their brains back on track once the computer finally does respond. [2]

Not much has changed over the years: users still complain about slow sites. Even though bandwidth has grown cheaper and large images are no longer the major source of page latency, the complexity of dynamic web applications has shifted delays from the network to the server. Meanwhile, companies like Google and Yahoo have led the charge against poor design practices that contribute to slow sites.

What factors contribute to response time?

In a paper entitled “Scaling Strategies and Tactics for Dynamic Web Applications” [3], Campbell and Alstad describe an equation that can be used to characterize and measure web application responsiveness:

Response Time ≈ (Payload / Bandwidth) + RTT + (AppTurns × RTT) / Concurrency + Cs + Cc

Breaking down the elements, we have:

  • Payload = the total size, in bytes, sent to the browser, including the page and all of its resource files.
  • Bandwidth = the minimum bandwidth, in bits per second, across all network links between client and server.
  • AppTurns = the number of components (images, scripts, CSS, Flash, etc.) needed for the page to render.
  • Round-Trip Time (RTT) = the time, in milliseconds, it takes to communicate from client to server and back again.
  • Concurrency = the number of simultaneous requests a browser will make for resources.
  • Server Compute Time (Cs) = the time it takes for the server to parse the request, run application code, fetch data, and compose a response.
  • Client Compute Time (Cc) = the time it takes for the browser to render HTML, execute scripts, apply stylesheets, and so on.
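
As a quick sanity check, here is a back-of-the-envelope calculation in Python using the formula above. All of the input values are illustrative assumptions, not measurements, and the function name is ours:

    def estimate_response_time(payload_bytes, bandwidth_bps, app_turns,
                               rtt_ms, concurrency, server_ms, client_ms):
        """Approximate response time, in seconds, from the formula above."""
        transfer = (payload_bytes * 8) / bandwidth_bps   # Payload / Bandwidth (bytes -> bits)
        rtt_s = rtt_ms / 1000.0
        turns = (app_turns * rtt_s) / concurrency        # extra round trips, made in parallel
        return transfer + rtt_s + turns + server_ms / 1000.0 + client_ms / 1000.0

    # Hypothetical page: 500 KB payload, 5 Mbps effective bandwidth, 30 resources,
    # 80 ms RTT, 6 concurrent connections, 200 ms server time, 300 ms client time.
    estimate = estimate_response_time(payload_bytes=500_000, bandwidth_bps=5_000_000,
                                      app_turns=30, rtt_ms=80, concurrency=6,
                                      server_ms=200, client_ms=300)
    print(f"Estimated response time: {estimate:.2f} s")  # about 1.78 s

Even with rough inputs, the breakdown is useful: in this example the payload transfer (0.8 s) and the 30 extra round trips (0.4 s) dominate, which suggests where to optimize first.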

Armed with this equation, developers and system administrators can use various tools to gather data for each component metric. They can then focus their optimization efforts instead of blindly fiddling with knobs and hoping to see an improvement in response time. Most of the elements here are straightforward: payload can be reduced using compression and caching; AppTurns can be lowered by combining scripts and stylesheets, using image maps, and creating CSS sprites for often-used buttons and background images. Each of these techniques is covered in more detail in best-practice recommendations found online. The difficult metrics are client and server compute time, but there are tools available that can help you measure these as well.
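
For example, two of the inputs (Payload and AppTurns) can be roughly estimated for a single page with nothing but the Python standard library, as in the sketch below. Browser developer tools or HAR captures will give far more accurate numbers, since they see exactly what the browser actually requests; the URL here is hypothetical.

    from html.parser import HTMLParser
    from urllib.parse import urljoin
    import urllib.request

    class ResourceCollector(HTMLParser):
        """Collects the src/href attributes of resource-bearing tags."""
        def __init__(self):
            super().__init__()
            self.resources = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag in ("img", "script") and attrs.get("src"):
                self.resources.append(attrs["src"])
            elif tag == "link" and attrs.get("href"):
                self.resources.append(attrs["href"])

    page_url = "http://example.com/"  # hypothetical URL
    html = urllib.request.urlopen(page_url).read()

    collector = ResourceCollector()
    collector.feed(html.decode("utf-8", errors="replace"))

    payload = len(html)
    for ref in collector.resources:
        try:
            payload += len(urllib.request.urlopen(urljoin(page_url, ref)).read())
        except (OSError, ValueError):
            pass  # skip anything that fails to download

    print(f"AppTurns (resource count): {len(collector.resources)}")
    print(f"Payload (total bytes):     {payload}")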

Hopefully this post helps you focus your performance optimization efforts and gives you a bit more insight into what it means when users and superiors complain, “The site is sooo slow!!”

References:

[1] J. Ramsay, A. Barbesi, and J. Preece. “A psychological investigation of long retrieval times on the World Wide Web.” Interacting with Computers, vol. 10, pp. 77–86, 1998.
[2] J. Nielsen. Alertbox, 2010. http://www.useit.com/alertbox/response-times.html
[3] Campbell and Alstad. “Scaling Strategies and Tactics for Dynamic Web Applications.” CMG 2008. http://www.cmg.org/conference/cmg2008/awards/8150.pdf

Posted in: Nexcess