
Top-Down Strategies for Improving Enterprise, Web and Mobile Application Performance

With most firms today running application-driven websites, developing or maintaining in-house software, or customizing purchased applications, business leaders are increasingly confronted with questions of software performance. Although the details of web and mobile application development, testing and performance sit far from conventional executive concerns, the performance and quality of enterprise, web and mobile applications can have a real impact on both productivity and the bottom line. And while the potential solutions to performance issues may change as applications move to cloud-based services, the basic problems persist.

Consequently, business leaders may be drawn into discussions about performance issues, often evidenced by diminished user satisfaction and high abandonment rates. In my work as a consulting manager for organizations struggling with software performance (and in particular, experiencing problems with performance testing), I have witnessed the frustration of business executives trying to interpret and make decisions about performance improvement efforts. Yet, the actions of these individuals can determine whether or not development and testing teams deliver an outcome that users find acceptable.

Although the causes of poor application performance are legion, in the area of performance testing specifically I have identified three fundamental roadblocks that are often exacerbated, or even created, by a lack of understanding and awareness among executive management. Decision makers who are alert to these issues, and who provide top-down support for resolving them, help clear hurdles that would otherwise hamper application performance.

The Two Faces of Software Testing

When I speak of “performance testing,” I am not referring to all software testing. Three types of testing—functional, performance and usability—govern most aspects of software quality. Even though “performance” is a term loosely used to quantify the success of all software behaviors, for the purposes of development and testing, function and performance mean very different things.

  • Functional testing ensures specific events and actions happen as desired. For example, exploring whether a mobile app will open when a user taps its icon is the job of functional testing. If the application is correctly coded (and the device’s touch screen is working properly), the command to open the application should execute when the user taps the icon.
  • Performance testing, on the other hand, examines how completely and efficiently certain operations and events take place. For instance, in a situation where an app user creates search parameters and presses a Search button, performance testing explores how long it takes the app to request and retrieve the results. (The brief sketch following this list contrasts the two kinds of check.)
  • Usability testing is far more esoteric than the other two, requiring varying opinions and alternatives to determine what works best for the user, not just what is fastest. Accuracy, speed and user interface all combine to create the “user experience.”
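
To make the distinction concrete, here is a minimal sketch in Python, with a hypothetical search() function standing in for the real application code: the first test asks only whether the operation returns results at all, while the second also asks how long it took (the two-second budget is an assumption, borrowed from the user expectations cited later in this article).

    import time

    def search(query):
        # Hypothetical stand-in for the real call to the application or service under test.
        return ["result-1", "result-2"]

    def test_search_functional():
        # Functional check: does the operation do the right thing?
        results = search("running shoes")
        assert results, "search returned no results"

    def test_search_performance():
        # Performance check: does the operation do it quickly enough?
        start = time.perf_counter()
        search("running shoes")
        elapsed = time.perf_counter() - start
        assert elapsed < 2.0, f"search took {elapsed:.2f}s, over the 2-second budget"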

In this context, performance is a function not only of application coding but also of the time it takes the desired action to execute, which might involve communicating with various servers, interacting with third-party services and other operations. Performance is also impacted by the number of user requests hitting the application server or the network, the speed and quality of network connections, and myriad additional factors.
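
The effect of concurrent load is easy to demonstrate. The illustrative Python sketch below (standard library only; the URL is a placeholder, not a real endpoint) fires the same request from a growing number of simulated users and reports median and 95th-percentile response times, which typically climb as concurrency rises.

    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://app.example.test/api/search?q=widgets"  # placeholder endpoint

    def timed_request(_):
        # Time one round trip; failed requests are timed too, so slowdowns and errors both show up.
        start = time.perf_counter()
        try:
            urllib.request.urlopen(URL, timeout=10).read()
        except Exception:
            pass
        return time.perf_counter() - start

    def run_at_concurrency(users, requests_per_user=20):
        with ThreadPoolExecutor(max_workers=users) as pool:
            samples = list(pool.map(timed_request, range(users * requests_per_user)))
        p50 = statistics.median(samples)
        p95 = statistics.quantiles(samples, n=20)[18]  # approximate 95th percentile
        print(f"{users:>4} users: p50={p50:.3f}s  p95={p95:.3f}s")

    if __name__ == "__main__":
        for users in (1, 10, 50, 100):
            run_at_concurrency(users)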

When Performance Testing Fails

As mentioned earlier, there are a number of technical reasons why performance testing might not produce the desired results. However, testers stand little chance of meeting their goals unless corporate leadership understands the importance of avoiding three key missteps.

Wanting to Test Everything

One of the first steps in performance testing is to define requirements. In the academic realm, a test isn’t meaningful unless it asks questions that will identify whether or not students have achieved desired learning goals. The same is true for software performance testing; business analysts must work with stakeholders to define a serviceable set of requirements that will quantify performance and identify deficiencies.

Yet, when I help organizations develop requirements, I often hear, “Test everything.” This approach works with functional testing, which involves a finite set of operations and actions. In performance testing, however, a single operation might require communication with dozens of services—some of which might change based on user input. It might involve multiple distinct actions (e.g., checking a database, looking up a shipping rate, etc.). Finally, for each operation or action tested, testers must replicate the transaction flow for a meaningful number of users under a variety of performance conditions.

All of those elements increase scope and complexity. Consequently, performance tests must be restricted to operations and actions that are either crucial to system functionality or have a history, or realistic potential, of causing system instability.
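
A scenario definition for a common open-source load-testing tool such as Locust illustrates what a restricted scope looks like in practice (a generic sketch, not the author's toolchain; the /api/search and /api/checkout endpoints are hypothetical). Only the two business-critical transactions are scripted, with weights reflecting how often real users perform them.

    from locust import HttpUser, task, between

    class CriticalPathUser(HttpUser):
        # Simulated users pause one to three seconds between actions, like real people.
        wait_time = between(1, 3)

        @task(3)  # weight 3: searching happens three times as often as checking out
        def search(self):
            self.client.get("/api/search", params={"q": "widgets"})

        @task(1)
        def checkout(self):
            self.client.post("/api/checkout", json={"cart_id": "demo-cart"})

Running the file with, for example, locust -f critical_paths.py --host https://staging.example.test --users 200 --spawn-rate 20 ramps the scenario up to 200 concurrent virtual users against a staging host, without touching anything outside the two chosen operations.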

Putting Convenience First

Imagine what would happen if a lab technician conducted a coronary stress test by asking the patient to walk slowly the entire time. The results would be meaningless, because they wouldn’t provide insight into the patient’s heart function under stress. Yet, every day I see organizational management stipulating “testing for convenience”—restricting application server tests to times or targets that are convenient but not representative of real-world conditions.

In some instances, decision makers want tests to run only during periods when they won’t impact the performance of servers used for other activities. In other cases, teams are asked to test before application servers are ready, which forces them to rely on conjectural simulations. Other issues that hamper tests (and testers) include having access only to load-balancing servers (which cannot mimic real-world performance) and not being informed of periodic, load-intensive operations that will affect application performance, among other obstacles.

If possible, performance tests should be run on the physical server that will process application requests, over multiple, extended periods that cover the full range of server activities. Companies that don’t want to put application servers under such duress should explore virtualization, where testers pull historical server activity logs and incorporate that data into virtualized tests that simulate real-world conditions. However, virtualization creates its own performance risks, because applications under test share systems with other applications that are not being tested or are not running in the test environment.
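
One simple way to fold historical activity into a test is sketched below in Python, under stated assumptions: a summarized hourly_requests.csv exported from the server activity logs (columns hour and requests) and a placeholder target URL. Each historical hour is compressed into a short window and a scaled-down share of its traffic is replayed, so the test keeps the real peaks and troughs rather than a flat, convenient rate.

    import csv
    import time
    import urllib.request

    URL = "https://app.example.test/api/search?q=widgets"   # placeholder endpoint
    LOG_SUMMARY = "hourly_requests.csv"                      # assumed columns: hour,requests

    def load_profile(path):
        # Read historical request counts per hour from a summarized activity log.
        with open(path, newline="") as f:
            return [(row["hour"], int(row["requests"])) for row in csv.DictReader(f)]

    def replay(profile, seconds_per_hour=60, scale=0.01):
        # Compress each historical hour into a short window and send a scaled fraction
        # of its traffic, preserving the shape of the real load curve.
        for hour, requests in profile:
            to_send = max(1, int(requests * scale))
            interval = seconds_per_hour / to_send
            print(f"hour {hour}: {to_send} requests, one every {interval:.2f}s")
            for _ in range(to_send):
                start = time.perf_counter()
                try:
                    urllib.request.urlopen(URL, timeout=10).read()
                except Exception as exc:
                    print(f"  request failed: {exc}")
                time.sleep(max(0.0, interval - (time.perf_counter() - start)))

    if __name__ == "__main__":
        replay(load_profile(LOG_SUMMARY))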

Ignoring the Need for Technological Parity

A third miscue that can dramatically reduce the effectiveness of performance testing is failing to help testers keep pace with technology. In my experience, developers do a good job of advocating with management to fund new tools that support next-generation protocols and languages and/or streamline the development process.

In theory, these are great improvements, but testing tools always lag behind development tools. If testers cannot work with the protocols or languages developers are using, or if they’re receiving output that’s incompatible with their testing tools, they will waste both their own and the developers’ time trying to create workarounds. Whether or not they succeed, testing won’t be as effective as possible.

To resolve this issue, business leaders must encourage end-to-end quality by ensuring everyone has the tools that best enable them to do their jobs. This can mean giving testers more training in new protocols and languages, or engaging outside experts to write scripts or custom programming to bridge development and testing tools. To do anything else puts undeserved targets on testers’ backs and results in lower application performance and quality.
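
The "bridge" scripts mentioned above need not be elaborate. As a minimal, purely illustrative sketch (the file names and output format are assumptions), the Python snippet below converts a HAR capture exported from a browser's developer tools into a flat list of requests that a simpler testing harness could replay:

    import json

    def har_to_requests(har_path):
        # Extract (method, url) pairs from a HAR capture's log entries.
        with open(har_path) as f:
            har = json.load(f)
        return [(entry["request"]["method"], entry["request"]["url"])
                for entry in har["log"]["entries"]]

    if __name__ == "__main__":
        for method, url in har_to_requests("checkout_flow.har"):
            print(f"{method} {url}")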

Performance Testing Is King

In a January 2015 survey for Hewlett Packard Enterprise, Dimensional Research found that 96 percent of app users think performance is important, and 49 percent expect apps to respond in two seconds or less. The message is clear: perform or suffer the consequences.

[Figure: Mobile application usage and abandonment survey, HPE Software Solutions]

Of course, no amount of testing can eliminate all performance issues. Even so, robust, well-designed and executed performance testing can substantially reduce the risk of failure. Testers must be given the appropriate support to provide the most reliable results possible—and to work around the disparities between development and testing. When executive leaders champion these directives with mid-level management, it not only helps to eliminate the roadblocks mentioned here, it also communicates to development and testing teams that the pursuit of performance is an executive priority.

******
By Steve Antonoff, PTS Consulting Manager at Orasi Software.


Steve Antonoff
Steve Antonoff is Consulting Manager, Performance Testing Services at Orasi Software, where he manages a growing team of performance testing consultants. Antonoff has been working in software engineering since 1998. He has been involved with performance and load testing since 1996 using a variety of tools, although he specializes in HPE LoadRunner (Mercury Certified Product Consultant, Mercury Certified Instructor) and HPE Performance Center (HPE Accredited Software Engineer).