SPECjAppServer2002 Performance Tuning

This article discusses the best known methods for tuning the performance of the BEA WebLogic application server running the SPECjAppServer2002 benchmark on Intel architecture platforms. We describe a top-down, data-driven, and closed-loop approach to performance tuning, and touch on key advantages of BEA WebLogic that improve the performance of J2EE workloads.

Introduction
Java has become increasingly important in server-based applications. Consequently, standardized, robust, and scalable application support frameworks have become critical. Java 2 Enterprise Edition (J2EE) addresses this need, providing a comprehensive specification for application servers, including componentized object models and life cycles, database access, security, transactional integrity, and safe multithreading. One such application server is the BEA WebLogic application server. SPECjAppServer2002 is the most recent client/server benchmark for measuring the performance of Java Enterprise application servers using a subset of J2EE APIs in a Web application with a focus on Enterprise JavaBeans (EJB) performance.

In this article we'll examine the performance of SPECjAppServer2002 running on WebLogic and Intel architecture server platforms. We describe an iterative, data-driven, top-down methodology (see Figure 1), and the tools needed to systematically improve system performance.

At the system level, we identify performance and scalability barriers such as input/output (I/O), operating system, and database bottlenecks, along with techniques to overcome them. At the application level, we'll discuss application design considerations and application server tuning. At the machine level, we'll discuss Java Virtual Machine (JVM) tuning.

Performance Tuning Methodology
Application server configurations frequently involve multiple interconnected computers. Given the complexity involved, ensuring an adequate level of performance in this environment requires a systematic approach. There are many factors that may impact the overall performance and scalability of the system. Examples of these factors include application design decisions, efficiency of user-written application code, system topology, database configuration and tuning, disk and network input/output (I/O) activity, operating system (OS) configuration, and application server resource throttling controls.

We first apply existing generic best-known methods (BKM) to the system under test and obtain initial performance data. The initial performance data establishes a baseline for us to move forward by applying changes to tune the system and measure performance enhancements arising from these tuning efforts. The steps in the iterative process, shown in Figure 2, are:

1.  Collect data: Gather performance data as the system is exercised using stress tests and performance monitoring tools to capture relevant data.
2.  Identify bottlenecks: Analyze the collected data to identify performance bottlenecks.
3.  Identify alternatives: Identify, explore, and select alternatives to address the bottlenecks.
4.  Apply solution: Apply the proposed solution.
5.  Test: Evaluate the performance effect of the corresponding action.

Once a given bottleneck is addressed, additional bottlenecks may appear, so the process starts again by collecting performance data and initiating the cycle, until the desired level of performance is attained.
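The five-step cycle can be sketched as a simple loop. Every method below is a hypothetical stand-in; a real harness would drive the stress tests and the monitoring tools described in the next section.

```java
// Illustrative sketch of the closed-loop tuning cycle; all method
// names are hypothetical stand-ins for real stress tools and monitors.
public class TuningCycle {

    // Stand-in for a measurement run; returns throughput in ops/sec.
    static double collectData() { return 250.0; }

    // Stand-in for bottleneck analysis plus applying one fix.
    static void identifyAndApplyFix() { /* e.g., resize a thread pool */ }

    static double tuneUntil(double targetThroughput, int maxIterations) {
        double throughput = collectData();              // 1. collect data
        for (int i = 0; i < maxIterations && throughput < targetThroughput; i++) {
            identifyAndApplyFix();                      // 2-4. analyze and apply
            throughput = collectData();                 // 5. test again
        }
        return throughput;
    }

    public static void main(String[] args) {
        System.out.println("final throughput = " + tuneUntil(200.0, 10));
    }
}
```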

Data collection utilizes the following types of tools:

  • System monitoring: Collect system-level resource utilization statistics such as CPU (e.g., % processor time), disk I/O (e.g., % disk time, read/write queue lengths, I/O rates, latencies), and network I/O (e.g., I/O rates, latencies). Examples of tools used to measure these quantities are "perfmon" on the Microsoft Windows OS, and "sar/iostat" on the Linux OS.
  • Application server monitoring: Gather and display key application server performance statistics such as queue depths, utilization of thread pools, and database connection pools. For example, BEA's WebLogic Console can be used to monitor such data.
  • Database monitoring tools: Collect database performance metrics, including cache hit ratio, disk operation characteristics (e.g., sort rates, table scan rates), SQL response times, and database table activity. These may be measured using Oracle 9i Performance Manager, for example.
  • Application profilers: Identify application-level hot spots and drill down to the code level. Intel's VTune Performance Analyzer may be used to accomplish this.
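These external tools can be complemented from inside the JVM itself. The sketch below uses the standard java.lang.management API to take a basic system snapshot; note that getSystemLoadAverage requires Java 6 or later, a generation newer than the JDK 1.4 era discussed in this article.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class SystemSnapshot {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        // Processor count and 1-minute load average
        // (load average is -1.0 where unsupported, e.g., on Windows).
        System.out.println("processors   = " + os.getAvailableProcessors());
        System.out.println("load average = " + os.getSystemLoadAverage());
    }
}
```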

    Performance Tuning
    Before we start tuning the system, we can save a lot of effort by following currently established BKMs. In this section, we'll look at how we applied BKMs to establish the baseline data. Then we'll describe the iterative approach to tuning the system for best performance.

    A good source of tuning BKMs is the full disclosure reports of the results published on the SPEC Web site.

    Establish Baseline by Applying Current Best Known Methods

    Hardware
    It's important to ensure that the BIOS settings and the population of the memory subsystem follow prescribed norms. Reading and following the system documentation can pay dividends. For example, a platform with 4 gigabytes of memory may perform better with four 1-gigabyte memory cards than with one 4-gigabyte memory card. We fully populated the memory banks for the systems under test to eliminate the known memory latencies caused by unfilled memory card slots.

    There are several hardware aspects that affect performance, including processor frequency, cache sizes, front-side bus (FSB) capacity, and memory speed. In particular, higher frequency and larger cache lead to improved SPECjAppServer2002 performance. In one study, the performance was improved by 40% when the frequency was increased by 50% and the cache size was doubled.

    Network equipment has become relatively inexpensive. We use 1Gbps NICs to reduce the risk that the network becomes a bottleneck.

    While focusing on the best performance of the application server, we want to reduce the risk that the database system becomes a bottleneck. A high-performance disk array is used for the database back end, with eight disks for tables and four disks for logs. We also use raw partitions to avoid OS file system overhead when accessing the disks.

    Operating Systems
    On some Linux systems, the default limit on the number of open files is too small for enterprise Java applications such as this one. We increased the system-wide limit by adding the following to /etc/sysctl.conf (applied with sysctl -p):

    fs.file-max = 65535

    Similarly, the per-process limit is raised by adding

    ulimit -n 65535

    to the application server startup script or to the user's shell initialization file (e.g., .bashrc).

    The latest kernel or OS build with known good performance should be used on all systems. Similarly, the most current drivers, for example, the NIC drivers, should also be installed on the application server, database server, and client system.

    Once the baseline performance is established, we proceed to the iterative approach to tune the performance at the system, application server, and JVM levels.

    For the SPECjAppServer2002 workload, a higher throughput can be obtained by increasing the load (also known as the injection rate) on the system. However, there are response-time requirements that make merely increasing the injection rate overly simplistic. While increasing the injection rate, we need to tune the rest of the system so that adequate response times can be achieved for key transactions.

    One method we frequently apply is to estimate the supportable load by scaling a compliant injection rate up to a target CPU utilization of 90%. For example, if a compliant run at an injection rate of 100 consumes 45% of the CPU, we increase the injection rate to 200 (100 × 0.90 / 0.45) and tune the system until response times are again compliant.
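The arithmetic behind that estimate can be written down directly. This helper is our own illustration, not part of the benchmark kit, and it assumes CPU utilization scales roughly linearly with load.

```java
public class InjectionRateEstimator {
    // Scale a compliant injection rate to a target CPU utilization,
    // assuming roughly linear scaling of CPU with load.
    static long estimate(int compliantRate, double cpuUtilization, double targetUtilization) {
        return Math.round(compliantRate * targetUtilization / cpuUtilization);
    }

    public static void main(String[] args) {
        // The example above: injection rate 100 at 45% CPU, targeting 90%.
        System.out.println(estimate(100, 0.45, 0.90)); // prints 200
    }
}
```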

    System Level Performance
    Response time is an important aspect of the SPECjAppServer2002 workload, so lowering resource consumption by individual system components is helpful. During system-level tuning, the main goal is to saturate the application server CPU (i.e., close to 100% utilization). Reaching maximum throughput without full saturation of the CPU is an indicator of a performance bottleneck such as I/O contention, over-synchronization, or incorrect thread pool configuration. Conversely, a high response time at an injection rate well below CPU saturation indicates latency issues such as excessive disk I/O or improper database configuration.

    Application server CPU saturation indicates that there are no system-level bottlenecks outside of the application server. The throughput measured at this level would point out the maximum system capacity within the current application implementation and system configuration parameters. Further tuning may involve adjusting garbage collection parameters or adding application server nodes to a cluster.

    Most components will exhibit a nonlinear response time/throughput behavior. In other words, increasing the throughput will tend to increase the response time, with a disproportionate increase in response time at high throughputs. It is important to size these components such that the required throughput utilization for the component will be relatively low to allow for the response time to be relatively small as well. This is especially important for network capacity, disk capacity, and the capacity of the data bus connecting processors to memory and I/O.

    System monitoring tools (described earlier) can be used to track system performance metrics, which can help find bottlenecks. In a multitiered system setup where multiple computers are used, it is important to run these tools on all of the computers.

    Application Server-Level Performance
    The workload itself is an important factor in performance, and it may demand a specific optimal application server configuration. Many parameters can be tuned to optimize both response times and throughput. Reducing overall response time can often increase the capacity for further growth in throughput. It is also important to break response times down into subcomponents, and to tune the system further so that the response times of key subcomponents are optimized too.

    Many of these tunable parameters are easily accessible from common application servers such as the BEA WebLogic Server. The list of parameters presented here is not exhaustive. It is merely a starting point to tune the performance for your enterprise Java applications. The list includes tuning key application server parameters, and tuning key container parameters. You should bear in mind that these parameters are tuned to reduce response time for key transactions, such as new order and manufacturing, for the workload.

    Tuning key container parameters
    Many application server parameters can be tuned to help an application perform more effectively. The following parameters should be considered for most applications.

  • Setting a good value for the initial bean pool size improves the initial response time for EJBs: They are preallocated upon application server startup.
  • Setting an optimal value for bean cache size will prevent passivations: It increases performance by reducing file I/O activity.
  • Allocating large enough cache size for the appropriate stateful session bean can potentially improve the throughput: For example, you may want to increase the max-beans-in-cache specified for the CartSes EJB and measure the change in performance.
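As a concrete illustration of the last point, the stateful session bean cache is sized in the weblogic-ejb-jar.xml deployment descriptor. The element names below follow the WebLogic 8.1 descriptor DTD as we understand it, and the cache size shown is a placeholder to be tuned experimentally:

```xml
<!-- Illustrative weblogic-ejb-jar.xml fragment; the cache size value
     is a placeholder to be determined by measurement. -->
<weblogic-enterprise-bean>
  <ejb-name>CartSes</ejb-name>
  <stateful-session-descriptor>
    <stateful-session-cache>
      <max-beans-in-cache>1000</max-beans-in-cache>
    </stateful-session-cache>
  </stateful-session-descriptor>
</weblogic-enterprise-bean>
```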

    Tuning key application server parameters
    Many application server parameters can be tuned to enable better sharing and interaction with virtual machines and operating systems. The following parameters should be considered for most applications.

  • A platform-optimized socket multiplexer should be used to improve server performance for I/O scalability: In particular, when a performance pack is available from a vendor it should be used. However, with the emergence of JDK 1.4, this effect has become less significant than before.
  • The thread pool size should be gradually increased until performance peaks: Beware of making this size too big, as a higher number may degrade performance due to unnecessary usage of system resources and excessive context switches.
  • BEA WebLogic Server supports the notion of multiple queues for transactions: You may find a specific distribution of executing threads to optimize for a specific workload. This is particularly important when certain transactions have tight response time limits and more threads for those transactions can be allocated accordingly. The support of multiple queues has a clear advantage over a single queue mechanism for shifting long response time transactions to less critical areas. We found that changing a thread pool size by as small a value as 1 can sometimes yield a big response time improvement.
  • The database connection pool should be set equal to or larger than the number of available execute threads: An execute thread then never needs to wait for a connection. With optimistic concurrency, the required pool size is actually about 1.5 times the number of available execute threads.
  • Experimenting with the JDBC prepared statement cache size may yield a configuration that minimizes the need for parsing statements on the database. The value should be gradually increased until performance peaks. We started with a value of 100 for SPECjAppServer2002 and did not observe performance gains either increasing or decreasing the value.
  • Relationship caching and optimistic concurrency are two additional BEA WebLogic Server features worth evaluating for this kind of workload.
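The pool-sizing rule of thumb above is simple arithmetic; the helper below is our own illustration (not a WebLogic API) of how the connection pool size follows from the execute thread count.

```java
public class PoolSizing {
    // Size the JDBC connection pool from the execute thread count:
    // at least 1:1, and about 1.5x when optimistic concurrency is used.
    static int connectionPoolSize(int executeThreads, boolean optimisticConcurrency) {
        double factor = optimisticConcurrency ? 1.5 : 1.0;
        return (int) Math.ceil(executeThreads * factor);
    }

    public static void main(String[] args) {
        System.out.println(connectionPoolSize(15, false)); // prints 15
        System.out.println(connectionPoolSize(15, true));  // prints 23
    }
}
```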

    JVM-level Performance
    Selecting the correct JVM is critical. It is essential to use a JVM that has been optimized for the underlying hardware. The best optimizations for various processor platforms are known, and a Java application needs to rely on the JVM to harness these optimizations.

    A JVM can provide configuration parameters to the users to let them identify which techniques the JVM should use for optimal performance of their application. We selected BEA WebLogic JRockit as our JVM as it is highly optimized for both Intel Xeon and Itanium platforms.

    The key JVM parameters are in the area of heap management, ranging from the selection of the garbage collection (GC) algorithm and the specification of heap sizes, down to the specifics of thread local allocation sizes and when the space for an object is cleared. It is usually preferable to set the minimum and maximum heap sizes to be the same to avoid runtime overhead associated with expanding and contracting the heap.

    The selected heap size can have a profound effect on performance. It is often desirable to set the heap space as large as possible provided you have enough memory on the system. We use a heap space of 1.5GB for our setup for the Xeon processor family, while we use a heap space of 12GB for our setup for the Itanium processor family as the 64-bit architecture systems allow us to use much more memory to boost the performance.

    The BEA JRockit JVM permits alternate garbage collection strategies to be specified. Parallel GC is a good starting option for the SPECjAppServer2002 workload.
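Putting the heap and collector choices together, a launch line for our Xeon configuration might look like the following. The -Xgc:parallel spelling is specific to BEA JRockit and should be verified against your JVM version's documentation; weblogic.Server is the standard WebLogic entry point.

```shell
# Equal min/max heap avoids resize overhead; -Xgc:parallel selects the
# parallel collector (JRockit-specific flag; verify for your version).
java -Xms1536m -Xmx1536m -Xgc:parallel weblogic.Server
```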

    While rules-of-thumb can be created and experience can be a guide, there is no real substitute for running a variety of experiments to identify the JVM parameters that work best for a given application. This is especially important for JRockit, which exposes a rich set of parameters for you to squeeze the last drop of performance from your application.

    Summary
    This article described a top-down, data-driven, and closed-loop approach to boosting SPECjAppServer2002 performance. We examined opportunities to improve performance across the whole stack: at the system level (hardware, OS, network, and database), at the application level (application design and application server tuning), and at the machine level (the JVM and the underlying hardware). Our research suggests that all layers of the system stack, not just one or two, should be examined when identifying and removing performance bottlenecks.

    Acknowledgments
    Jason A Davidson, Ashish Jha, Michael LQ Jones, Tony TL Wang, D J Penney, Kumar Shiv, and Ricardo Morin provided key information for this article.

    References

  • Arnold, Ken; Gosling, James; Holmes, David. (2000). The Java Programming Language, Third Edition. Sun Microsystems, Inc.
  • Java Community Process, "Java 2 Platform, Enterprise Edition 1.3 Specification": http://jcp.org/aboutJava/communityprocess/final/jsr058/
  • Standard Performance Evaluation Corporation (SPEC): www.spec.org/jAppServer2002/index.html
  • Chow, K.; Morin, R.; and Shiv, K. (February 2003). "Enterprise Java Performance: Best Practices." Intel Technology Journal. http://developer.intel.com/technology/itj/2003/volume07issue01/.
  • Intel Corporation, "VTune Performance Analyzer": www.intel.com/software/products/vtune/vtune61
  • Patterson, David A.; Hennessy, John L. (1997) Computer Organization and Design: The Hardware/Software interface. Morgan Kaufmann Publishers.
  • BEA WebLogic Server 8.1: www.bea.com/framework.jsp?CNT=index.htm&FP=/content/products/server
  • Intel Corporation, "Intel Itanium 2 Processor Reference Manual for Software Development and Optimization": http://developer.intel.com/design/itanium2/manuals
  • Intel Corporation. "Intel Pentium 4 Processor Optimization Reference Manual": http://developer.intel.com/design/pentium4/manuals
    About the Authors

    Gim Deisher is a Senior Software Performance Engineer working with the
    Software and Solutions Group at Intel Corporation. She received an M.S.
    degree in Electrical Engineering from Arizona State University in 1992.

    Kingsum Chow

    Kingsum Chow is a Senior Performance Engineer working with the Managed
    Runtime Environments group within the Software and Solutions Group
    (SSG). Kingsum has been involved in performance modeling and
    optimization of middleware application server stacks, with emphasis on
    J2EE and Java Virtual Machines. He has published 20 technical papers and
    presentations. He received his Ph.D. degree in Computer Science and
    Engineering from the University of Washington in 1996.
