Applications running slow? The root cause might come as a surprise

Remember, back in the day, when you’d go to a website and it was down? Yes, down. We’ve come a long way in a short time.

In today’s computing environments, however, slow is the new down. In a civilian agency, a slow application means lost productivity, and a slow military application in theater can mean the difference between life and death. Because of a constantly increasing reliance on mission-critical applications, the government must now meet – and in most cases surpass – the high performance standards being set by commercial industry. And the stakes continue to get higher.

So what can federal IT pros do to find the root cause of application performance issues to ensure applications operate at peak availability and response times are meeting requirements? The answer may come as a surprise.

Most IT teams focus on the hardware – after blaming and ruling out the network, of course. If an application is slow, the first thought is to add hardware – more memory, faster processors, SSD storage – to combat the problem. Agencies have spent millions throwing hardware at performance issues without a good understanding of the true bottlenecks slowing down an application.

But a recent survey on application performance management by research firm Gleanster LLC reveals that the database is the No. 1 source of performance issues. In fact, 88 percent of respondents cite the database as the most common challenge or issue with application performance.

Understanding that the database is often the cause of application performance issues is just the beginning; knowing where to look and what to look for is the next step. There are two main challenges in identifying database performance issues:

First, there are a limited number of tools that assess database performance. Most assess the health of a database (is it working, or is it broken?) but don't identify and help remediate specific database performance issues.

Second, the database monitoring tools that do provide more information don't go much deeper. Most send queries to and collect information from the database, with little to no insight into what happens inside the database that can impact performance.

To successfully assess database performance and uncover the root cause of application performance issues, IT pros must look at database performance from an end-to-end perspective.

In a best-practices scenario, the application performance team should be performing wait-time analysis as part of regular application and database maintenance. Wait-time analysis is a method that determines how long the database engine takes to receive, process, fulfill, and return a request for information to the user or application. A thorough wait-time analysis looks at every level of the database – from individual SQL statements to overall database capacity – and breaks down each step to the millisecond.
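As a rough illustration, the per-step timing at the heart of wait-time analysis can be sketched in a few lines of Python. The stage names and sleep-based timings below are hypothetical stand-ins; a real analysis would read the wait events exposed by the database engine itself rather than wall-clocking a simulation.

```python
import time

def timed_steps(steps):
    """Run each (name, fn) stage of a request and record elapsed time in ms."""
    waits = {}
    for name, fn in steps:
        start = time.perf_counter()
        fn()  # the work done at this stage
        waits[name] = (time.perf_counter() - start) * 1000.0
    return waits

# Illustrative stages of receiving, processing, fulfilling, and returning a
# request; the sleeps stand in for real work inside the database engine.
request_stages = [
    ("receive", lambda: time.sleep(0.001)),
    ("process", lambda: time.sleep(0.005)),
    ("fulfill", lambda: time.sleep(0.010)),
    ("return",  lambda: time.sleep(0.002)),
]

waits_ms = timed_steps(request_stages)
for name, ms in waits_ms.items():
    print(f"{name:8s} {ms:7.2f} ms")
```

The point of the sketch is the shape of the data: one millisecond figure per stage, which is exactly what the correlation step that follows needs.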

The next step is to examine the results, then correlate and compare the information. Maybe the database spends the most time writing to disk; maybe it spends more time reading from memory. Understanding the breakdown of each step – and comparing the steps to one another and to the overall process – helps determine where there may be a slowdown and, more importantly, where to look to identify and fix the problem.
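Correlating and comparing can be as simple as ranking each step by its share of total wait time; whatever tops the list is the first place to look. The step names and millisecond figures below are invented for illustration:

```python
# Hypothetical wait-time breakdown for a request, in milliseconds. The step
# names and numbers are illustrative, not from any particular database engine.
waits_ms = {
    "parse/plan": 2.5,
    "memory reads": 14.0,
    "disk writes": 38.0,
    "lock waits": 6.5,
    "network return": 4.0,
}

def breakdown(waits):
    """Return (step, ms, share-of-total) tuples, biggest time sink first."""
    total = sum(waits.values())
    rows = [(step, ms, ms / total) for step, ms in waits.items()]
    return sorted(rows, key=lambda row: row[1], reverse=True)

for step, ms, share in breakdown(waits_ms):
    print(f"{step:15s} {ms:6.1f} ms  {share:6.1%}")
```

With these invented numbers, disk writes dominate the total, which would point the investigation at storage I/O rather than, say, CPU or memory.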

Ideally, all federal IT shops should implement regular wait-time analysis as a baseline of optimized performance. Having this baseline can help, for example, with more effective change management. If a change has been implemented, and there is a sudden slowdown in an application or in the database itself, a fresh analysis can help quickly pinpoint the location of the performance change, leading to a much quicker fix.
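A stored baseline makes that kind of change-management check mechanical: compare a fresh analysis against the baseline and flag any step whose wait time has grown disproportionately. The numbers and the 1.5x threshold below are illustrative assumptions:

```python
# Baseline vs. post-change wait times in milliseconds; values are invented.
baseline_ms = {"parse/plan": 2.5, "memory reads": 14.0, "disk writes": 12.0}
current_ms = {"parse/plan": 2.6, "memory reads": 13.5, "disk writes": 41.0}

def regressions(baseline, current, threshold=1.5):
    """Return (step, baseline_ms, current_ms) for steps that grew past
    baseline * threshold after a change."""
    flagged = []
    for step, base in baseline.items():
        now = current.get(step, 0.0)
        if base > 0 and now / base > threshold:
            flagged.append((step, base, now))
    return flagged

for step, base, now in regressions(baseline_ms, current_ms):
    print(f"{step}: {base:.1f} ms -> {now:.1f} ms")
```

Here the disk-write step more than triples while the others hold steady, immediately narrowing the post-change investigation to one area.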

Our nearly insatiable need for faster performance may seem like a double-edged sword. On one hand, optimized application performance means greater efficiency; on the other hand, getting to that optimized state can seem like an expensive, unattainable goal.

Knowing how to optimize performance – and understanding that it may have nothing to do with hardware – is a great first step toward staying ahead of the growing need for instantaneous access to information.

About the Author

Chris LaPoint is vice president of product management at IT management software provider SolarWinds, based in Austin, Texas.
