Performance Testing Microsoft® .NET Web Applications
Author Microsoft ACE Team
Pages 320
Disk 1 Companion CD(s)
Level All Levels
Published 10/02/2002
ISBN 9780735615380
Price $39.99
Chapter 2: Preparing and Planning for the Performance Test

Web applications often fail to meet their customers' needs and expectations. When a Web application generates errors, has poor response times, or is unavailable, customers quickly become frustrated. If your performance test procedure or methodology is not well thought out and properly planned, the odds of a successful Web application launch drop significantly. This chapter identifies the key processes and planning required before you execute a single performance test. Following these steps will improve your odds of executing an effective Web application performance test. The steps include identifying performance goals, creating a user activity profile, and defining the key metrics to monitor and analyze when creating a performance test plan.

Identifying Performance Goals

High-level performance goals are critical to ensuring your Web application meets or exceeds current and projected future requirements. The best approach is to base them on historical data or extensive marketing research. Classic examples of poor planning are e-commerce Web applications that can't handle the peak holiday shopping rush. Every year the media publicizes Web applications that cannot process all their orders, suffer from slow response times, return Web server error messages, or go down entirely. This costs not only lost sales but bad press as well.

High-level performance requirements can be broken down into the following three basic categories:

  • Response time acceptability
  • Throughput and concurrent user goals
  • Future performance growth requirements

Response Time Acceptability Goals and Targets

By researching how and where your users will connect to your Web application, you can build a table similar to Table 2-1 to show the connection speeds and latency of your potential customers. This can help you determine an acceptable amount of time it can take to load each page of your Web application.

Table 2-1  Predicted Connection Speeds

User         Worst Connection     Average Connection   Best Connection
Line Speed   28.8-kbps modem      256-kbps DSL         1.5-mbps T1
Latency      1,000 milliseconds   100 milliseconds     50 milliseconds

Once you have identified how your user base will access your Web application, you can determine your response time acceptability targets. These targets define how long it can acceptably take for user scenarios or content to load over various connections. For example, all other things being equal, a 70-kilobyte page will obviously load faster over a 256-kbps DSL connection than over a 28.8-kbps modem connection. The response time acceptability target for the 28.8-kbps modem might be 15 seconds, while the target for the 256-kbps DSL connection might be significantly lower, at 5 seconds. Response time acceptability targets are useful when you perform an application network analysis, discussed in detail in Chapter 5. The purpose of the application network analysis is to predict response times at various connection speeds and latencies, determine the amount of data transferred between each tier, and determine how many network round trips occur with each step of a user scenario. If you do not have historical data or projections for potential customer connection speeds and latencies, we recommend using worst-case estimates. The data in Table 2-1 represents worst, average, and best connections for typical end-user Internet connection speeds.
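
A back-of-the-envelope response time prediction can be sketched from the figures in Table 2-1. The following Python snippet is illustrative only; the round-trip count is a hypothetical input you would obtain from your own network analysis. It estimates load time as transfer time plus one latency penalty per network round trip:

```python
def estimated_load_time(page_kb, line_speed_kbps, latency_ms, round_trips):
    """Rough load-time estimate: transfer time plus one latency hit per
    network round trip (ignores TCP slow start, caching, compression)."""
    transfer_s = (page_kb * 8) / line_speed_kbps        # kilobits / kbps
    latency_s = (latency_ms / 1000.0) * round_trips
    return transfer_s + latency_s

# A 70-KB page, assuming 10 round trips, over the connections in Table 2-1:
for label, speed_kbps, latency_ms in [("28.8-kbps modem", 28.8, 1000),
                                      ("256-kbps DSL", 256, 100),
                                      ("1.5-mbps T1", 1500, 50)]:
    print(f"{label}: {estimated_load_time(70, speed_kbps, latency_ms, 10):.1f} s")
```

Estimates like these make it easy to sanity-check whether a 15-second modem target or a 5-second DSL target is even achievable for a given page weight.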

Throughput Goals and Concurrent User Targets

Answering the following questions will help to determine throughput goals and concurrent user targets:

  • How many concurrent users do you currently sustain or expect in a given time period?
  • What actions does a typical user perform on your Web application, and which pages receive the most page views in a given time period?
  • How many user scenarios will your Web application process in a given time period?

The best place to gather this information is from historical data, which can be found in Web server log files, System Monitor data, and database activity. If you are launching a new Web application, you may need to perform marketing research analysis to estimate throughput and concurrent user targets. Historical production data or marketing research helps ensure you execute the performance tests at the right concurrent user levels. If, after you complete your performance tests, your Web application meets its throughput and concurrent usage requirements, you can continue adding load until the application either reaches a bottleneck or achieves maximum throughput. Table 2-2 shows predicted throughput goals and concurrent user profile expectations for the IBuySpy sample Web application. Using the information in this table, a performance test script can be created to mimic anticipated load on the Web application. The Ratio column represents the percentage of all user operations that this particular operation accounts for. The anticipated load per hour is taken from historical data, illustrated in the next section of this chapter, and represents how many times per hour this particular user operation typically occurs.

Table 2-2  Throughput and Concurrent User Targets

User Operations         Ratio   Anticipated Load per Hour
Basic Search            14%     1,400
Browse for Product      62%     6,200
Add to Cart             10%     1,000
Login and Checkout      7%      700
Register and Checkout   7%      700
Total                   100%    10,000
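
When translating Table 2-2 into a test script, the operation mix can be driven by a weighted random choice so the simulated load matches the anticipated ratios. A minimal Python sketch (the operation names come from the table; the loop around them stands in for whatever stress tool you actually use):

```python
import random

# Ratios from Table 2-2.
operation_mix = {
    "Basic Search": 0.14,
    "Browse for Product": 0.62,
    "Add to Cart": 0.10,
    "Login and Checkout": 0.07,
    "Register and Checkout": 0.07,
}

def next_operation():
    """Pick the next simulated user operation according to the mix."""
    return random.choices(list(operation_mix),
                          weights=operation_mix.values())[0]

# Over 10,000 iterations the counts should roughly match the
# anticipated-load-per-hour column of Table 2-2.
counts = {op: 0 for op in operation_mix}
for _ in range(10_000):
    counts[next_operation()] += 1
```

Comparing the resulting counts against the anticipated-load column is a quick check that the script weightings were entered correctly.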

Performance Growth Analysis

A performance growth analysis is required if your Web application's user base is expected to grow over a given time period; you need to account for that growth when performance testing. Performance testing and tuning your Web application after the development cycle is complete costs more in time and money than fixing performance problems during the software development life cycle (SDLC). In the real-world example in Chapter 1, the expenses incurred by finding and fixing Web application performance issues after the SDLC included marketing revenue lost to bad press, users lost because they would not wait for slow page views, and the test and development labor spent troubleshooting and fixing the issues. Taking a little extra time during the performance test cycle to populate your database with additional data, to see how the application will perform when the database is larger, will save you money in the long run. Also, execute the stress test with more load and higher levels of concurrent users to predict future bottlenecks. By fixing these issues ahead of your growth curve, you will reduce your performance testing and tuning needs in the immediate future and ultimately provide your users with a better experience.

The easiest way to determine the growth capacity of your Web application is to calculate the increase in volume you are currently experiencing over a specific period of time. For example, assume your user base is growing at a rate of 10 percent per month. Table 2-3 illustrates an anticipated growth plan that can be used when performing your stress tests. This assumes your Web application is currently seeing 10,000 users per day and will grow at a rate of 10 percent per month. When determining your growth rate, don't forget to account for special promotions that may increase traffic to your Web application.

Table 2-3  Future Growth Profile

Time Period         Users Per Day
Current             10,000
Three months out    13,310
Six months out      17,716
Nine months out     23,579
Twelve months out   31,384
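
The compounding behind a growth profile like Table 2-3 is straightforward to compute. A small sketch, assuming straight 10 percent monthly compounding from a baseline of 10,000 users per day:

```python
def projected_users(current_per_day, monthly_growth, months):
    """Compound the current daily user count by a fixed monthly growth rate."""
    return round(current_per_day * (1 + monthly_growth) ** months)

for months in (0, 3, 6, 9, 12):
    print(f"{months:2d} months out: "
          f"{projected_users(10_000, 0.10, months):,} users per day")
```

Layer special promotions on top of the compounded baseline rather than folding them into the monthly rate, so a one-time traffic spike does not inflate every later projection.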

User Activity Profile

We use IIS logs to create user activity profiles. The IIS logs are text files that contain information about each request; they can be viewed directly with a simple text editor or imported into a log analysis program. We recommend using a set of IIS logs covering at least a week's worth of user activity from your Web application to obtain realistic averages; more log files produce more reliable usage profiles and weightings. To illustrate the process of creating a user activity profile, we imported a month of IIS log data from a recent performance analysis of a typical e-commerce Web application into a commercial log file analyzer. These IIS log files are comprised of shopper page views related to the Homepage, Search, Browse for Product, Add to Basket, and Checkout operations performed on the Web application. The log file analyzer enabled us to generate Table 2-4. Many commercial log file analyzers are available at every budget; they can accurately import, parse, and report on Web application traffic patterns.

Table 2-4  User Activity Profile

User Operation/Page Name(s)   Number of Page Views   Ratio
Homepage                      720,000                40%
   default.aspx               720,000                40%
Search                        90,000                 5%
   search.aspx                90,000                 5%
Browse                        450,000                25%
   productfeatures.aspx       216,000                12%
   productoverview.aspx       234,000                13%
Add to Basket                 360,000                20%
   basket.aspx                360,000                20%
Checkout                      180,000                10%
   checkout.aspx              90,000                 5%
   checkoutsubmit.aspx        54,000                 3%
   confirmation.aspx          36,000                 2%
Totals                        1,800,000              100%

There is a distinction between a hit and a page view. A hit is a request for any individual object or file on a Web application, while a page view is a request to retrieve an HTML, ASP, or ASP.NET page from a Web application and the transmittal of the requested page, which can contain references to many additional page elements. The page is basically what you see after the transfer, and it can consist of many other files. Page views do not include hits to images, component pages of a frame, or other non-HTML files.
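
The hit-versus-page-view distinction is easy to apply when parsing raw IIS logs yourself. The sketch below assumes the default W3C extended log field order (date, time, client IP, method, URI stem, status); a robust parser would read the #Fields directive instead of hard-coding the column index:

```python
PAGE_EXTENSIONS = (".aspx", ".asp", ".htm", ".html")

def profile_activity(log_lines):
    """Count total hits and per-URI page views from W3C-format IIS log
    lines, skipping images and other non-page files."""
    hits, page_views = 0, {}
    for line in log_lines:
        if line.startswith("#"):            # W3C directive lines
            continue
        fields = line.split()
        uri = fields[4].lower()             # cs-uri-stem (assumed position)
        hits += 1
        if uri.endswith(PAGE_EXTENSIONS):
            page_views[uri] = page_views.get(uri, 0) + 1
    return hits, page_views

sample = [
    "#Fields: date time c-ip cs-method cs-uri-stem sc-status",
    "2002-09-01 00:00:01 10.0.0.1 GET /default.aspx 200",
    "2002-09-01 00:00:01 10.0.0.1 GET /images/logo.gif 200",
    "2002-09-01 00:00:05 10.0.0.2 GET /search.aspx 200",
]
hits, views = profile_activity(sample)      # 3 hits, 2 page views
```

Feeding real log files through the same loop yields the page-view counts behind a profile like Table 2-4.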

Backend Activity Profile

A backend activity profile is used to identify user activity and performance bottlenecks at the database tier of an existing Web application. This information can be useful to ensure your performance test is accurate.

Identifying a Web Application's User Activity

Existing databases contain concrete information about what your users are doing with your Web application. Examples of this type of information for a typical e-commerce application include how many baskets are created, how many orders are processed, how many logins occur, how many searches take place, and so on. This information can be gathered using simple queries against your existing database. The data can assist you in creating user scenarios, user scenario ratios, or other marketing information that helps you make decisions on the business side. For example, you can compare the number of baskets created to the number of checkouts to find the abandoned basket rate. This information is important to designing your stress test so it executes operations in the correct ratios. If you find that 50 percent of the baskets created turn into actual orders, you can mimic this ratio when executing your performance test.
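
The basket-to-order comparison can be expressed as a simple query. The sketch below uses an in-memory SQLite database with a hypothetical schema; your actual basket and order tables will have different names and columns, but the shape of the query is the same:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE baskets (basket_id INTEGER PRIMARY KEY);
    CREATE TABLE orders  (order_id INTEGER PRIMARY KEY, basket_id INTEGER);
""")
# Ten baskets created, five of which turned into orders.
con.executemany("INSERT INTO baskets VALUES (?)", [(i,) for i in range(1, 11)])
con.executemany("INSERT INTO orders VALUES (?, ?)", [(i, i) for i in range(1, 6)])

# Baskets that never became orders are abandoned.
(abandoned,) = con.execute("""
    SELECT COUNT(*) FROM baskets b
    WHERE NOT EXISTS (SELECT 1 FROM orders o
                      WHERE o.basket_id = b.basket_id)
""").fetchone()
(total,) = con.execute("SELECT COUNT(*) FROM baskets").fetchone()
abandonment_rate = abandoned / total        # 5 of 10 baskets abandoned
```

Against your real tables, the same NOT EXISTS query gives the abandoned basket rate directly.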

Identifying a Web Application's Backend Performance Bottlenecks

If you are performance testing an existing Web application, you can identify current performance bottlenecks by interrogating the database server for queries that take a long time to process, cause deadlocks, or result in high server resource utilization. This data collection process occurs during the planning phase of the performance testing methodology, and involves capturing SQL trace data using SQL Profiler, along with Performance Monitor logs composed of the Windows and SQL Server objects typical of a Web application. The timeframe for the captured SQL trace should span the period when application performance goes from acceptable to poor. The captured information will give you a clearer picture of where the bottleneck is occurring. Chapter 8 walks you through the process of determining the source of a SQL performance issue; possible causes include blocking, locks, deadlocks, problematic queries, and stored procedures with long execution times.

Key Performance Metrics Criteria

As a performance analyst, tester, or developer, you must produce a blueprint for performance testing the Web application to ensure the high-level performance goals are met. If you don't create a performance test plan, you may find out about requirements too late in the SDLC to properly test for them. Using the performance requirements criteria in the sections above, you now need to identify the key metrics that will be monitored and analyzed during the actual performance test.

Key metrics for the performance test include the following:

  • Server errors acceptability This may seem like a moot point: no server errors are acceptable, because they result in a bad user experience. However, during stress testing you will probably encounter errors, so you should be prepared to understand why they occur and decide whether they will happen in your live production environment with real users hitting your Web application. For example, errors often appear when a stress test first begins, and again when it shuts down, caused by too much load arriving too quickly or by uncompleted page requests. These errors are artifacts of the stress test itself, so you can ignore them; they are unlikely to recur in the production environment.
  • Server utilization acceptability This is an important aspect of performance testing. By identifying it up front, you will be able to determine the maximum allowable load your servers should endure; when performance testing, this will be a key element in determining the maximum load to apply to your Web application. This metric can differ for each Web application and should be documented so that the support, development, test, and management teams agree on it. For example, you might ramp up the Web tier until you reach 75 percent CPU utilization. If at that level each server handles approximately 2,000 users, you meet the concurrent user targets identified by your performance requirements. With these metrics documented, the support team can monitor the production Web servers for spikes that meet or exceed the performance requirement and begin to scale the Web application up or out to support the increased traffic.
  • Memory leaks or other stability issues These issues often arise when running extended performance tests. For example, if you execute a stress test for a short period of time you may not find the memory leak or other stability issues that only occur after an extended period of heavy activity. Many times multiple tests can be executed to accomplish different goals. You may want to run a quick one-hour test to determine your Web application's maximum throughput, and then run a weekend-long extended stress test to determine if your Web application can sustain this maximum load.
  • Processing delays These will occur in almost every Web application that implements complex business logic. The key is to keep processing delays within an acceptable amount of time. It's a good idea to know what's acceptable before performance testing so you don't waste time escalating an issue to your development team that does not require fixing because it already meets performance goals. Table 2-5 shows an example of a performance metric acceptability table; its thresholds flag any stored procedure taking more than 500 milliseconds and any Web page (measured by the time-taken field in your Web tier logs) taking more than one second to process. Your requirements may be different; the key point is to come up with a set of requirements that makes sense for your Web application.

Table 2-5  Performance Metrics and Acceptable Levels

Metric                       Location                       Acceptable Level
CPU utilization              Performance Monitor            < 75%
Memory—available MB          Performance Monitor            > 128 MB
Memory—pages/second          Performance Monitor            < 2
ASP execution time           Performance Monitor            < 1 second
DB processing delays         SQL Profiler                   < 500 milliseconds
Web tier processing delays   Time-taken field of Web logs   < 1 second
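
A table like 2-5 is most useful when it is checked mechanically against monitoring samples. A small sketch using the thresholds from the table (the metric key names are hypothetical; map them to whatever your monitoring export actually produces):

```python
import operator

# Acceptable levels from Table 2-5: (comparison, threshold).
thresholds = {
    "cpu_utilization_pct":    (operator.lt, 75),
    "available_memory_mb":    (operator.gt, 128),
    "memory_pages_per_sec":   (operator.lt, 2),
    "asp_execution_time_s":   (operator.lt, 1.0),
    "db_processing_delay_ms": (operator.lt, 500),
    "web_tier_time_taken_s":  (operator.lt, 1.0),
}

def failing_metrics(sample):
    """Return the metrics in a sample that break their acceptable level."""
    return [name for name, (ok, limit) in thresholds.items()
            if name in sample and not ok(sample[name], limit)]

# Hypothetical sample captured mid-test: CPU is over the 75 percent ceiling.
sample = {"cpu_utilization_pct": 82, "available_memory_mb": 512,
          "db_processing_delay_ms": 430}
failures = failing_metrics(sample)
```

Running a check like this after every test pass tells you immediately which metrics need escalation and which already meet their agreed levels.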

Mirroring the Production Environment

The performance test environment should be as close to the production environment as possible. This includes the server capacity and configuration, network environment, the Web tier load balancing scheme, and your backend database. By mirroring your production environment you ensure that your throughput numbers will be more accurate.

Putting It Together in a Performance Test Plan

The performance test plan is a formal strategy that allows everyone involved in a Web application, from the development and test teams to management, to understand exactly how, why, and which parts of the application are being performance tested. A performance test plan contains the following sections:

  • Application overview This gives a brief description of the business purpose of the Web application. This may include some marketing data stating estimates or historical revenue produced by the Web application.
  • Architecture overview This depicts the hardware and software used for the performance test environment, and will include any deviations from the production environment. For example, document it if you have a Web cluster of four Web servers in the production environment, but only two Web servers in your performance test environment.
  • High-level goals This section illustrates what you are trying to accomplish by performance testing your Web application. Examples include identifying what throughput and concurrent usage levels you will be striving for as well as the maximum acceptable response times.
  • Performance test process This will include a description of your user scenarios, tools you use to stress test, and any intricacies you will put in your stress scripts. This section will also explain what ratios and sleep times or user think times you will include in your test script.
  • Performance test scripts The scripts are unlikely to be completed until after your performance analysis cycle has finished. But it is important to include these in the test plan to make them available in the next release or phase of the Web application test cycle. Because stress test scripts take time and effort to create, having test scripts available as a reference for future testing can save time.

Conclusion

Performance testing is a critical phase of any Web application's development cycle and needs to be on the critical path for release to production. By planning properly before you start, you ensure an effective performance test and improve your odds of having a high-performing Web application when your customers begin to use it.



Last Updated: September 17, 2002