Finding Production System Performance Problems

This article demonstrates how Wily Technology's Introscope was used to diagnose and resolve a typical Java application performance problem. It will be useful for architects, operations managers, testers, and developers responsible for WebLogic application performance, and it gives readers practical approaches to analyzing, improving, and managing production performance without writing monitoring code by hand.

A business-to-business catalog and ordering system had been running in production for several months without much promotion and had served increasing numbers of customers reliably and quickly.

Recently, though, the marketing organization had begun promoting new features of the system, such as the ability to evaluate alternative items while browsing the catalog, and use of the system intensified. Unfortunately, users complained that the application was slow while searching and browsing for items as well as while building an order.

Operations and business managers came to the development organization with a critical mission to resolve these performance problems quickly. While developers had many hypotheses about the source of the problem, they were frustrated that they had no effective way to find it. The company chose to use Wily's Introscope to tackle their problem.

Using Introscope in the load-testing environment, the development group identified a number of bottlenecks and ruled out other possible causes. With this information, developers could quickly implement focused changes in the application code that eliminated the performance problems. The operations and development groups also realized that using Introscope to monitor applications in production would make both groups more productive, allowing them to avoid performance problems or fix them before they adversely affected their customers.

The System and Its Symptoms
Understanding the system

This business-to-business system is a moderately complex application developed by a third party as a work-for-hire and turned over to the company to manage and maintain. The System Architecture diagram (Figure 1) gives a graphic representation of the parts of the system and how they interact. All of the components in the diagram are Java components.

The Controller receives requests for application services from the system and routes them to the appropriate components. It also manages user permissions and profiles, relying on an external Authentication Service.

The Catalog Browser implements business logic relating to searching for and browsing items. It relies on the Customers subsystem to retrieve information related to customers, such as preferred brands and pricing models; the Item Catalog subsystem for detailed item information, including pictures and descriptions, pricing, and combinations; and the Inventory system to show item availability while reviewing product information.

The Order Builder might also be called the shopping cart component. It combines user selections of products, quantities, and destinations with the Customers' pricing models, Item Catalog's prices and combinations, and the Inventory's on-hand information to create an order.

When an order is built, the Order Placer does the work of committing the order. It checks a Credit Verification Service if the customer's profile indicates it, updates the Customers and Inventory subsystems with the new order information, and has the Orders subsystem begin the process of fulfilling the order and billing the customer.

The Customers, Item Catalog, Inventory, and Orders components map the system's Java representations onto the back-end systems' representations and act as clients of the back-end systems.

The Customer System, Item Database, Inventory System, and Order System exist in the Java system as connectors to those back-ends provided by their suppliers. The actual back-end systems run on different platforms and are shared by different systems, such as point-of-sale systems.
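
To make the architecture concrete, the sketch below expresses these relationships as plain Java interfaces. All class, interface, and method names are assumptions made for illustration; they are not the application's actual code.

// ComponentSketch.java -- hypothetical interfaces illustrating the component
// relationships described above; all names and signatures are assumptions.
public class ComponentSketch {

    // Placeholder value types standing in for the real domain objects.
    public static class ItemDetails {}
    public static class PricingModel {}
    public static class Order {}
    public static class OrderConfirmation {}

    public interface Customers {
        // Customer-related information such as preferred brands and pricing models.
        PricingModel getPricingModel(String customerId);
    }

    public interface ItemCatalog {
        // Detailed item information: pictures, descriptions, pricing, and combinations.
        ItemDetails getItemDetails(String itemId);
    }

    public interface Inventory {
        // Item availability shown while browsing and while building orders.
        int getOnHandQuantity(String itemId);
    }

    public interface CatalogBrowser {
        // Searching and browsing logic; relies on Customers, ItemCatalog, and Inventory.
        ItemDetails browseItem(String customerId, String itemId);
    }

    public interface OrderBuilder {
        // The "shopping cart": combines selections with pricing models, prices, and on-hand data.
        Order addItem(Order order, String itemId, int quantity, String destination);
    }

    public interface OrderPlacer {
        // Commits an order: optional credit check, then updates Customers, Inventory, and Orders.
        OrderConfirmation placeOrder(Order order);
    }
}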

Seeing the symptoms but not the cause(s)
The development organization had a load-testing environment, which allowed them to reproduce the problems that appeared only under heavy load. However, because the system's many interacting components could not be measured directly, they could not isolate the cause of the problem; they could only verify its existence under certain conditions.

The back-end systems used by the production application are also used by the load-testing environment, as well as by many other systems, such as point-of-sale and telesales systems. Because they are shared, moderately used, and performing reliably, monitoring them directly does not provide useful information to troubleshoot the Java application's problems.

Individual developers frequently use a profiler to get very detailed information about the code they have written. However, because running the application under a profiler slows it tremendously and produces enormous amounts of trace data, the profiler was not useful for understanding this performance problem under load.

The development organization considered writing its own logging code into the application, but decided against it for several reasons. Foremost was the cost of taking developers' time away from creating new code that provides business value and spending it instead on writing troubleshooting code and building the infrastructure to store and present the resulting data. Other reasons included the risks of introducing and managing code changes in so many parts of the application and the damage to developers' morale from being assigned "grunt" work.

At the same time, there were many hypotheses about the source or sources of the performance problems. Some said it was the back-end systems' slow response, others believed that faster server hardware was needed, and others attributed the problem to the Web server.

In short, the development organization needed a way to get component-level performance information from the application while running under load without substantially changing its performance characteristics and without having to write it themselves. They also needed all this right away, since customer complaints were increasing daily. They discussed these needs with their BEA representatives who suggested that they contact Wily Technology (a BEA partner) about their product Introscope.

Introscope
Component-level monitoring

Introscope provides component-level performance information about live production Java applications as well as applications running under load in a testing environment. It monitors any Java application running in any contemporary JVM (JDK 1.1.3 or later) on any hardware and operating system platform. Moreover, installation and administration of Introscope with WebLogic Server 5.1 and later is particularly easy with a feature called AutoProbe Integration.

Out of the box, Introscope monitors many common Java and J2EE components such as servlets, JSPs, EJBs, and JDBC and Socket activity. In addition, users can configure Introscope to monitor any class or method that they have built themselves or integrated from a third party using the Custom Tracing features. Users can also change the components they are monitoring even after the application is deployed, as their needs for performance information change. More importantly, monitoring choices are made without the need to access or change source code.

Introscope measures the average response time and the responses per second for most of the components it monitors. For other components, measurements include the bytes per second coming into or leaving the Java system and CPU utilization. In addition to each component's individual performance information, it keeps track of component interaction and attributes the performance of each to the component that caused or called it (a feature called "Blame Technology"). These performance measurements are useful for understanding how an application's components are performing while under load. The "Blamed" measurements make bottlenecks in component interactions easy to identify.
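
To make the idea of a "Blamed" measurement concrete, the sketch below records response times keyed by both caller and callee, so the same component can show a different average depending on which component invoked it. It is only an illustration of the concept, not Introscope's implementation, and none of the names in it come from the product:

import java.util.HashMap;
import java.util.Map;

// Illustration of a "blamed" average response time: time spent in a callee is
// recorded under the caller that invoked it, so the same callee can have a
// different average per calling context.
public class BlamedTimerSketch {

    private final Map<String, long[]> stats = new HashMap<String, long[]>(); // key -> {totalMillis, count}

    public void record(String caller, String callee, long elapsedMillis) {
        String key = caller + "|" + callee;              // e.g. "Catalog Browser|Inventory"
        long[] s = stats.get(key);
        if (s == null) {
            s = new long[2];
            stats.put(key, s);
        }
        s[0] += elapsedMillis;
        s[1]++;
    }

    public double averageResponseTime(String caller, String callee) {
        long[] s = stats.get(caller + "|" + callee);
        return (s == null || s[1] == 0) ? 0.0 : (double) s[0] / s[1];
    }

    public static void main(String[] args) {
        BlamedTimerSketch t = new BlamedTimerSketch();
        // The same Inventory component, timed separately per calling context.
        t.record("Catalog Browser", "Inventory", 120);
        t.record("Order Builder", "Inventory", 480);
        t.record("Order Builder", "Inventory", 520);
        System.out.println(t.averageResponseTime("Catalog Browser", "Inventory")); // 120.0
        System.out.println(t.averageResponseTime("Order Builder", "Inventory"));   // 500.0
    }
}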

Introscope uses a number of techniques to ensure that the overhead of collecting performance information remains low. Introscope is selective about the components it monitors, and places lightweight monitors on relatively heavyweight component activity. The Introscope Agent collects summary information about component performance and reports that information asynchronously to a separate Enterprise Manager component, which handles more CPU-intensive tasks such as storing the data and making data available to the Workstation, Introscope's GUI.

Historical data stored for analysis and reports
Introscope stores performance data in a JDBC-accessible database and/or comma-separated value (CSV) text files. The user controls exactly which data is stored and the frequency at which it is recorded. Once stored, the historical data can be viewed in the Workstation, or by using any technique that can query or report on the JDBC database or CSV files. Introscope includes sample component performance, service-level, and capacity-planning Crystal Reports.
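
As an example of the kind of ad hoc reporting this makes possible, the sketch below queries stored metric data over JDBC. The driver URL, credentials, table name, and column names are placeholders assumed for illustration; the actual schema and driver depend on the Introscope installation:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

// Sketch of reporting on stored historical data over JDBC. The URL, credentials,
// table, and column names are hypothetical placeholders for illustration only.
public class HistoricalDataReport {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection("jdbc:somedriver:introscope", "report_user", "secret");
        PreparedStatement ps = con.prepareStatement(
            "SELECT metric_name, AVG(metric_value) AS avg_value "
            + "FROM metric_data WHERE recorded_at >= ? GROUP BY metric_name");
        ps.setTimestamp(1, Timestamp.valueOf("2002-01-01 00:00:00"));
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString("metric_name") + ": " + rs.getDouble("avg_value") + " ms");
        }
        rs.close();
        ps.close();
        con.close();
    }
}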

Alerts for operations
Since Introscope is designed to manage Java systems in production, it can perform actions when performance measurements cross user-defined thresholds. Actions commonly triggered by Alerts include sending an e-mail, showing a dialog box in the Workstation, sending a message to a pager, writing to a log file, reconfiguring or restarting an application, and sending a message to another enterprise management system. In addition, an Introscope Alert can trigger any executable or shell script.

Customizable views
The Workstation is an application that allows users to view and manage their systems' component performance. Particularly useful is the ability for users to create customized Dashboards to present performance data graphically for different users' needs. One Dashboard might show an overview of a system with colored lights indicating system status, while another might show detailed performance information for a component and the services it uses.

Monitoring clusters
Application instances can be monitored individually or as a cluster. Many Agents (whether on one machine or many and whether in a cluster or working as different tiers) can report performance information to the same Enterprise Manager. The data from each Agent is handled separately but can easily be monitored by an Alert or displayed together in a Dashboard. Additionally, aggregates for a cluster can easily be set up, for example, to provide the combined average response time for a Servlet in all instances in a cluster.
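
The arithmetic behind such an aggregate is worth spelling out: the combined average weights each instance's average response time by the number of responses that instance handled, rather than simply averaging the averages. Introscope computes these aggregates itself; the sketch below is only a worked example of the calculation:

// Worked example of combining per-instance averages into a cluster-wide average
// response time by weighting each instance's average by its response count.
public class ClusterAverageSketch {

    public static double combinedAverage(double[] avgMillis, long[] responseCounts) {
        double totalTime = 0;
        long totalResponses = 0;
        for (int i = 0; i < avgMillis.length; i++) {
            totalTime += avgMillis[i] * responseCounts[i];
            totalResponses += responseCounts[i];
        }
        return totalResponses == 0 ? 0.0 : totalTime / totalResponses;
    }

    public static void main(String[] args) {
        // Three servlet instances: 200 ms over 1,000 responses, 300 ms over 500, 250 ms over 1,500.
        double combined = combinedAverage(new double[] {200, 300, 250}, new long[] {1000, 500, 1500});
        System.out.println(combined); // about 241.7 ms, not the naive 250 ms mean of the three averages
    }
}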

The Approach
The development and operations groups arranged for a Wily performance consultant to come in for five days and work with Introscope on their system in their environment. The goals of the work were to improve the system's performance and understand how Introscope can manage this and other systems in production.

Installing Introscope with AutoProbe
On the first day, Introscope was set up quickly on the WebLogic machines in the load-testing environment by using the AutoProbe Integration feature with WebLogic Server. The Enterprise Manager was installed on a separate, shared, low-end box in the testing environment. An existing database server had database structures and a user added for Introscope to use. The Workstation was installed on several machines, including one in the testing environment, one in the operations management environment, and two in the development organization. That afternoon, the team was already viewing live component information with Introscope.

Customizing the monitoring environment
As with most systems, this one is made up of both J2EE components (which Introscope monitors out-of-the-box) and a number of custom components (which must be configured for Introscope to monitor). Introscope uses text files to configure which custom components to monitor, referencing the package, class, and method names of the primary ways the components are accessed. Based on discussions with the company's system architect, the package, class, and method names for Business Logic, External Service Provider, and Business Data Access Components were collected and used to create directive files for Introscope. This code snippet is a line from a custom directives file:

TraceOneMethodOfClass: com.company.onlinesales.logic.OrderBuilder addItem BlamedMethodTimer "Business Logic|Order Builder:Average Response Time (ms)"
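
The other custom components were traced the same way. The lines below follow the same pattern for some of the remaining Business Logic and Business Data Access components; the package, class, and method names here are illustrative assumptions, since only the OrderBuilder line above is taken from the actual directives file:

TraceOneMethodOfClass: com.company.onlinesales.logic.CatalogBrowser browseItem BlamedMethodTimer "Business Logic|Catalog Browser:Average Response Time (ms)"
TraceOneMethodOfClass: com.company.onlinesales.data.ItemCatalog getItemDetails BlamedMethodTimer "Business Data Access|Item Catalog:Average Response Time (ms)"
TraceOneMethodOfClass: com.company.onlinesales.data.Inventory getOnHandQuantity BlamedMethodTimer "Business Data Access|Inventory:Average Response Time (ms)"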

The application server was restarted with this updated configuration and the performance information about these components became visible in Introscope.

Monitoring components and interactions under load
Moderate load was run against the system to begin to understand what performance information could be displayed and how it is represented.

From the load generator's point of view, the application behaved the same as before Introscope was introduced. There was no discernible difference in the performance of the application running under load with Introscope. This finding was crucial. First, the development organization needed to be confident that analysis with Introscope would not change the nature of the performance problem. Second, the operations group and system business sponsor would balk at the possibility of introducing large overhead, which would require additional server investment.

In Introscope, a component hierarchy of the application is visible in the Explorer window. It shows both the performance of individual components specified earlier in the configuration, as well as the performance of related components that are involved during the course of a component's work.

Figure 2 shows the performance information for the Catalog Browser and the Inventory components by themselves, as well as the Item Catalog performance when it is working on behalf of the Catalog Browser. As previously mentioned, Introscope's ability to associate performance information about one component with other components is called Blame Technology. Because the Inventory component works for several other components, it is extremely useful to be able to differentiate the performance of each component by its context. Without this feature, it would be difficult to find problems and bottlenecks that are caused by particular components' interactions rather than their aggregate performance. Another useful aspect of the Blame Technology is that Introscope does not have to be configured in advance as to which interactions to monitor.

Browsing Introscope's Explorer tree confirms that under heavy load the Catalog Browser and Order Builder components respond slowly. New information is now apparent: the Item Catalog and Inventory components are busier and slower, while other components do not appear to slow down much. Figure 3 shows the Inventory Average Response Time with moderate and heavy load.

Looking at the performance information for the monitored components, it is evident that to understand the performance of the Item Catalog and Inventory components, it is also important to monitor the Business Data System (DB) components and see how the Item Catalog and Inventory components use them.

Being able to view different performance information side-by-side would make this correlation and analysis easier than browsing in the Explorer. Introscope's Dashboards show selected performance information on the same screen. That customization is discussed below.

Homing in on the Problems
On day two of the project, Introscope's directive files were updated to include the Business Data System (DB) components, and the application was restarted and run under load again. In the Explorer, the newly configured components appeared both as top-level components and as called resources under the Business Data Access Components.

To create Dashboards that conveniently show this information side-by-side, component metrics were dragged from the Explorer tree onto new Dashboards, automatically creating graph views, which were labeled and organized in Panels. The Business Logic components overview is shown in Figure 4. It shows the average response times and responses per second of the four Business Logic components. In this Dashboard, which shows the transition from moderate to heavy load, the much slower response times of all the Business Logic components except the Order Placer are clearly evident.

On the Catalog Browser Dashboard, shown in Figure 5, it appears that under higher load the Catalog Browser responds more slowly because it relies more frequently on the Item Catalog and the Inventory components, which are also much slower.

The analogous pattern is evident on the Order Builder Dashboard: the Item Catalog and Inventory components are both busier and much slower under heavier load.

From the Item Catalog Dashboard, it appears that under higher load, the Item Database component is also used much more frequently, but responds quickly. This implies that the bottleneck is not in the back-end system but in the Item Catalog component or the way it is used. A corresponding pattern is evident on the Inventory Dashboard, which shows the Item Database is used more often while still responding quickly.

The Results
Identifying and fixing problems

On day three of the engagement, conclusions about the sources of the performance problems became clear.

External circumstances
As initially suspected, two primary external circumstances contributed to the slowdown: more users and more searches for related items by the Catalog Browser. The relationship between the number of users and the activity of the two bottleneck components was expected to be linear, and previous log analysis suggested this was true. However, with promotion of the "find related" feature, usage of the Inventory and Item Catalog components grew at a much greater rate than the number of users, which stressed the system and slowed the application overall.

Inventory and Item Catalog component bottlenecks

The analysis showed that the Inventory and Item Catalog components were accessed every time the Catalog Browser returned product description information for an item and every time the Order Builder added an item to an order. The combination of additional users and their use of the "find related" feature meant that there were many more Inventory and Item Catalog look-ups. The number of Inventory System and Item Database lookups was also greater, but the Inventory System and Item Database themselves did not slow noticeably. This suggested that the Inventory and Item Catalog components should be used less often, or they could cache their results in order to respond more quickly.
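
One way to realize the caching option is a small time-bounded cache in front of the look-ups, so repeated requests for the same item within a short window are served from memory instead of reaching the component again. The sketch below shows the idea only; the class and method names are assumptions, not the change the developers actually made:

import java.util.HashMap;
import java.util.Map;

// Sketch of a time-bounded cache in front of an Inventory look-up, so repeated
// requests for the same item within maxAgeMillis avoid another back-end call.
// Names are illustrative assumptions, not the application's actual code.
public class CachingInventory {

    private static class Entry {
        final int quantity;
        final long fetchedAt;
        Entry(int quantity, long fetchedAt) { this.quantity = quantity; this.fetchedAt = fetchedAt; }
    }

    private final Map<String, Entry> cache = new HashMap<String, Entry>();
    private final long maxAgeMillis;

    public CachingInventory(long maxAgeMillis) {
        this.maxAgeMillis = maxAgeMillis;
    }

    // Stand-in for the real Inventory component's call to the back-end Inventory System.
    protected int lookUpOnHandQuantity(String itemId) {
        return 0; // placeholder
    }

    public synchronized int getOnHandQuantity(String itemId) {
        long now = System.currentTimeMillis();
        Entry e = cache.get(itemId);
        if (e != null && now - e.fetchedAt < maxAgeMillis) {
            return e.quantity;                       // fresh enough: skip the look-up
        }
        int quantity = lookUpOnHandQuantity(itemId); // otherwise fetch and remember it
        cache.put(itemId, new Entry(quantity, now));
        return quantity;
    }
}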

Eliminating alternative explanations
Many other possible sources of problems (other components, back-end systems, networking, memory) were quickly eliminated as suspects, reducing both the time taken to come to conclusions and the amount of work needed to reach them.

New release moved into production
Based on the conclusions made possible by Introscope, useful, localized changes could quickly be made in several components with a high degree of confidence that those changes would have a substantial beneficial effect on the application's performance under load.

Operations considers Introscope
On day four (while development worked on system code changes), Wily's performance consultant worked with operations line and management personnel to understand how Introscope could be deployed and used in the company's production environment to monitor their various Java systems. Sample Alerts and Dashboards were set up and test events were sent into the company's existing management framework.

Figure 6 shows one of the Alerts that operations set up. Figure 7 shows a resulting Alert message and the detailed information that Introscope provides when thresholds are crossed.

The operations group has an operations center in which monitoring consoles run all the time. Figure 8 shows an example of a Dashboard that might be displayed in the operations center, providing system status information at a glance.

Operations also spent some time understanding the database structure in which Introscope stores historical data. Some sample reports were run which showed how Introscope could be used both as a source of benchmarking reports during performance testing and for service-level and trend analysis reporting over time.

With Introscope's functionality, the operations and development groups expect to better understand how their systems are performing and to respond to problems quickly when they occur, involving the development group only in exceptional circumstances. When those circumstances do occur, both groups are confident that they will be able to share a view into the running application and avoid guesswork and finger-pointing about the causes of problems.

Improved performance
By the end of the week, development had made the indicated changes and begun testing performance under load. Preliminary results indicated that the large slowdowns had disappeared. Preparations began to deploy the updated application to production with Introscope.

Conclusion
The causes of the performance problems were quickly identified. Much time-consuming investigation and potentially expensive purchases of server hardware were avoided. The development group could promptly make well-targeted changes to improve the application's performance for its customers. To learn more about Introscope, call 1-800-GETWILY or visit www.wilytech.com.

 

About the Author

Carl Seglem is a member of the Wily Technology Services team and has worked with Introscope at dozens of Fortune 1000 customers. Before joining Wily, he worked on information systems development and management at Scudder Kemper Investments and KPMG. He can be reached at [email protected]
