The 4 Hats of Application Performance Testing


Performance testing mitigates the risk of launching new applications by validating user experience and system scalability under a real-world workload before going live. Most companies today are doing it in some fashion, either internally or with the help of a partner. Many, however, still experience performance defects in production, and information technology (IT) executives wonder about gaps in their pre-production testing.

Doing performance testing properly requires a set of distinct skills that relatively few individual performance engineers possess and that few testing teams fully encompass, because it calls for four very different but complementary mindsets. I call these the four metaphorical hats of application performance testing.

These four hats contribute distinct value to the testing process:

  • The business analyst hat teases out key business transactions and a realistic target workload from the application’s functionality and usage patterns.

  • The developer hat coordinates a team effort to develop robust, data-driven and maintainable scripts.

  • The systems engineer hat identifies and configures the key infrastructure resources to monitor.

  • The data analyst hat correlates response times and resource utilizations and distills actionable results.

Unless your performance testing team has a strong leader experienced in all these roles, understanding the skills that underlie these four hats will help you put together a team and a process that embraces them. Doing so will raise your organization’s performance testing intelligence quotient (IQ) and the robustness of your next launch.

The Business Analyst Hat

This first hat is critical. You need to become (or enlist) a business analyst who talks with business stakeholders in their language, understands the business drivers and constraints, knows how users will use the application, converts usage statistics into a peak hour’s workload, and translates all this into quantifiable and measurable objectives. Without this foundation to build on, a lot of hard work may be expended and result in little value.

The testing approach is elaborated with a clear understanding of the key performance indicators (KPIs) and the graphs that illustrate them.

A good performance test seeks to validate:

  • Scalability

  • Capacity

  • Throughput

  • Workload achievement

Modeling a realistic target workload is often the most difficult task in the planning process. The business analyst should have the skills to mine business transaction metrics, production system statistics, and geographical user distributions. Then, that person should design a usage model that stands up to the business and technical scrutiny.
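
To make this concrete, the sketch below (in Python, with purely hypothetical transaction volumes and session durations) shows one way to turn peak-hour business transaction counts into arrival rates and concurrent virtual-user estimates using Little's Law; real figures would come from production statistics and the business stakeholders.

```python
# Workload-model sketch with hypothetical numbers; real figures should come
# from production statistics and business stakeholders.

# Peak-hour volumes per business transaction (assumed values)
peak_hour_volumes = {
    "login": 12000,
    "search_product": 30000,
    "checkout": 4000,
}

# Average time (seconds) a user spends in each transaction, including
# think time between steps (assumed values)
avg_session_seconds = {
    "login": 45,
    "search_product": 90,
    "checkout": 180,
}

for txn, per_hour in peak_hour_volumes.items():
    arrival_rate = per_hour / 3600.0  # transactions per second
    # Little's Law: concurrent users = arrival rate x time in system
    concurrent_users = arrival_rate * avg_session_seconds[txn]
    print(f"{txn}: {arrival_rate:.2f}/sec arrival, "
          f"~{concurrent_users:.0f} concurrent virtual users")
```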

An important by-product of this planning activity is educating stakeholders on the process and value of performance testing and setting expectations for what it will deliver. The business analyst role should also think like a project manager, outlining and assigning key activities and managing the timeline.

Done properly, this sets a solid stage for the subsequent phases.

The Developer Hat

Once you’re ready to turn the plan into a test, it’s time to put on your developer hat and start thinking like a development lead. The goal in this phase is to build automated scripts that simulate real users navigating the application and convert the workload model into a test scenario in the scenario design interface of the load testing tool.

Development is the longest and most resource-intensive activity and often requires coordinating the efforts of several technical people to compress development time. Doing this effectively involves establishing script development standards (script, data and results directories, and script promotion process), naming conventions (for scripts, transaction timers, and data files) and script templates (to standardize common elements and structure). For scripts to be maintainable, they should be structured, modular, well documented and thoroughly unit-tested with parameterized data and realistic “think-times” inserted between user actions.
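
As an illustration only, not tied to any particular load testing tool, the sketch below shows the kind of structure these standards encourage: a modular virtual-user journey with parameterized test data and explicit think times. The base URL, endpoints, data file and transaction names are hypothetical, and it assumes the widely used Python requests package.

```python
import csv
import random
import time

import requests  # assumes the 'requests' package is installed

BASE_URL = "https://app.example.com"   # hypothetical system under test

def think(min_s=2, max_s=8):
    """Simulate a user pausing between actions."""
    time.sleep(random.uniform(min_s, max_s))

def load_accounts(path="accounts.csv"):
    """Parameterized test data: one row per virtual user (hypothetical file)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def login(session, account):
    # A real tool would wrap this call in a transaction timer, e.g. "T01_Login"
    resp = session.post(f"{BASE_URL}/login",
                        data={"user": account["user"], "pwd": account["pwd"]})
    resp.raise_for_status()

def search(session, term):
    # Transaction timer "T02_Search"
    resp = session.get(f"{BASE_URL}/search", params={"q": term})
    resp.raise_for_status()

def virtual_user(account):
    """One modular, unit-testable user journey."""
    with requests.Session() as session:
        login(session, account)
        think()
        search(session, account["search_term"])
        think()

if __name__ == "__main__":
    for account in load_accounts():
        virtual_user(account)
```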

Development is not complete until an error-free “shakedown test” has been executed. This test confirms that scripts, test data, resource monitors and load injectors—all the moving parts of a performance test—are functioning as expected.

This is the most technical and unforgiving of the project phases, and the output is a set of robust scripts and test data that execute repeatedly and reliably. Achieving this requires strong development manager skills coupled with mastery of your load testing tool’s scripting capabilities.

The Systems Engineer Hat

In parallel with the development activity, you should put on your hard hat and conduct the equally important activities of a systems engineer (or find someone who can do this for you).

This work begins with modeling the environment. This includes talking with the system architect, studying network and physical diagrams, understanding the application services, and then creating a logical diagram of the “system-under-test”: network, compute and storage components; operating systems, application and database servers, application-specific services and queues; and key interfaces to other systems.

Once the “system-under-test” is clearly defined, the next step is to identify the key resources that support the processing paths, select the points to monitor, and configure tools to measure them. This takes the skills of a systems engineer, a database administrator (DBA), a network engineer, and an application expert—or someone who can interact with all these people—to accomplish it properly.
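
As a lightweight illustration of the monitoring side, the sketch below samples host-level CPU, memory and disk counters to a CSV file. It assumes the cross-platform psutil package and a hypothetical one-second sampling interval; in practice you would more likely rely on your load testing tool's monitors, platform agents or an APM product.

```python
import csv
import time
from datetime import datetime

import psutil  # assumes the 'psutil' package is installed

SAMPLE_SECONDS = 1              # hypothetical sampling interval
OUTPUT_FILE = "host_metrics.csv"

# Runs until interrupted (Ctrl+C); start it just before the test begins.
with open(OUTPUT_FILE, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_pct", "mem_pct",
                     "disk_read_mb", "disk_write_mb"])
    while True:
        io = psutil.disk_io_counters()
        writer.writerow([
            datetime.now().isoformat(timespec="seconds"),
            psutil.cpu_percent(interval=SAMPLE_SECONDS),  # blocks for the window
            psutil.virtual_memory().percent,
            io.read_bytes / 1_000_000,
            io.write_bytes / 1_000_000,
        ])
        f.flush()
```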

The outputs of this activity feed into the shakedown test to confirm that all the moving parts of the test are ready for show time.

The Data Analyst Hat

The raw results of running a performance test are thousands, if not millions, of data values. To make sense of all these numbers requires donning your data analyst hat (or turning to the right team member) to summarize, visualize, analyze and distill these results into objective observations and actionable recommendations.

This is perhaps the biggest gap in most performance testing teams. It requires years of experience and a keen eye for identifying patterns in the data and the ability to correlate different results to validate a hypothesis. It’s kind of like reading an X-ray: Any technician can take the pictures, but only an experienced radiologist can examine them to detect the hairline fracture and ultimately prescribe the cure.

A successful data analyst learns the elements of what I call the CAVIAR approach:

  • Collect: response times, errors, resources and anecdotes

  • Aggregate: average, maximum, 95th percentile and end-to-end results, at varying granularities (see the sketch after this list)

  • Visualize: response times, resources and bandwidth “over load”

  • Interpret: make observations, create and test hypotheses, support with data, and draw conclusions

  • Assess: compare to acceptable results and make recommendations

  • Report: executive summary and supporting details; assemble stakeholders and do read-outs
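
To make the Aggregate and Assess steps concrete, here is a minimal sketch in plain Python that summarizes one transaction's raw timings into average, maximum and 95th percentile values and compares them to an objective; the samples and the objective are made up for illustration.

```python
import statistics

# Hypothetical raw response times (seconds) for one transaction timer
samples = [0.42, 0.51, 0.47, 0.95, 0.55, 1.80, 0.60, 0.49, 0.52, 2.30,
           0.58, 0.61, 0.44, 0.73, 0.50, 0.66, 0.48, 1.10, 0.57, 0.46]

average = statistics.mean(samples)
maximum = max(samples)
# statistics.quantiles with n=100 returns 99 cut points; index 94 is the 95th percentile
p95 = statistics.quantiles(samples, n=100)[94]

print(f"count={len(samples)} avg={average:.2f}s max={maximum:.2f}s p95={p95:.2f}s")

# Assess: compare against a hypothetical objective from the test plan
OBJECTIVE_P95 = 2.0
print("PASS" if p95 <= OBJECTIVE_P95 else "FAIL")
```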

The person who handles this activity should have experience with data analysis and visualization tools. It’s also crucial to understand the audience you are sharing results with and adjust your communication style and content accordingly. You should not assume your audience will understand the messages in your graphs. You should clearly annotate them.

Remember to look at all of the results in the context of what is right for your business. For example, a high-profile Internet retailer should ensure an application delivers sub-second, or even sub-half-second, response times, or customers will be unhappy. If it’s an internal enterprise system, perhaps four- to five-second response times will be acceptable. Think about not only what the application is, but also who the users are and from where they are accessing it.

The outcome of this phase should be conclusions and recommendations derived from objective, data-supported observations. They should be confirmed by manual tests, and the extended technical team should agree with the results and ultimately should be able to identify and fix the problems.

Putting the 4 Hats into Action

Some organizations are fortunate enough to have either a broadly experienced performance engineer or a solid team with members who can fulfill the duties of each of the four hats.

However, no organization is perfect.

When there is turnover, processes are often not documented well enough to train new hires, and institutional knowledge is lost. Small teams can have trouble maintaining quality under the pressures of their workload.

Doing performance testing right, with repeatable, accurate and actionable results, is a specialty that requires extensive and continual experience. You could do it yourself, just like you could change your own water heater, but if you hire an experienced plumber, you know your new appliance will function leak-free.

Consider expanding your team to include a partner if you feel your team is lacking in any of the four areas of expertise. Without a business analyst on board, the team can invest a lot of hard work that yields results with little value. Poorly designed scripts make maintenance difficult, which hinders the ability to produce repeatable results. If you test without monitoring, you have no visibility into what’s affecting response degradation. And, in order to deliver business value, it is critical to provide well-interpreted, actionable results.

Application performance testing is still a young discipline, lacking certification training or even a common language. It is a practice that is steadily evolving. These four hats will continue to play a fundamental role in successful application performance testing.

Counterintuitively, it is a good day when something breaks during application performance testing. When you run into a problem during a test, it means you have uncovered a flaw before it has a chance to derail your system in the real world. That is where you get a solid return on your investment: preventing a failure from escaping to production and helping your company avoid lost revenue and damage to your brand reputation.

Take the time to do performance testing properly the first time. Your senior leaders will thank you.
