SPECjAppServer2002 Performance Tuning

This article discusses the best known methods for tuning the performance of the BEA WebLogic application server running the SPECjAppServer2002 benchmark on Intel architecture platforms. We describe a top-down, data-driven, and closed-loop approach to performance tuning, and touch on key advantages of BEA WebLogic that improve the performance of J2EE workloads.

Introduction
Java has become increasingly important in server-based applications. Consequently, standardized, robust, and scalable application support frameworks have become critical. Java 2 Enterprise Edition (J2EE) addresses this need, providing a comprehensive specification for application servers, including componentized object models and life cycles, database access, security, transactional integrity, and safe multithreading. One such application server is the BEA WebLogic application server. SPECjAppServer2002 is the most recent client/server benchmark for measuring the performance of Java enterprise application servers; it exercises a subset of the J2EE APIs in a Web application, with a focus on Enterprise JavaBeans (EJB) performance.

In this article we'll examine the performance of SPECjAppServer2002 running on WebLogic and Intel architecture server platforms. We describe an iterative, data-driven, top-down methodology (see Figure 1), and the tools needed to systematically improve system performance.

At the system level, we identify performance and scalability barriers, such as input/output (I/O), operating system, and database bottlenecks, along with techniques to overcome them. At the application level, we'll discuss application design considerations and application server tuning. At the machine level, we'll discuss Java Virtual Machine (JVM) tuning.

Performance Tuning Methodology
Application server configurations frequently involve multiple interconnected computers. Given the complexity involved, ensuring an adequate level of performance in this environment requires a systematic approach. There are many factors that may impact the overall performance and scalability of the system. Examples of these factors include application design decisions, efficiency of user-written application code, system topology, database configuration and tuning, disk and network input/output (I/O) activity, operating system (OS) configuration, and application server resource throttling controls.

We first apply existing generic best-known methods (BKMs) to the system under test to obtain initial performance data. This establishes a baseline against which we can measure the performance effect of subsequent tuning changes. The steps in the iterative process, shown in Figure 2, are:

1.  Collect data: Exercise the system with stress tests and use performance monitoring tools to capture relevant data.
2.  Identify bottlenecks: Analyze the collected data to identify performance bottlenecks.
3.  Identify alternatives: Identify, explore, and select alternatives to address the bottlenecks.
4.  Apply solution: Apply the proposed solution.
5.  Test: Evaluate the performance effect of the corresponding action.

Once a given bottleneck is addressed, additional bottlenecks may appear, so the process starts again by collecting performance data and initiating the cycle, until the desired level of performance is attained.

Data collection utilizes the following types of tools:

  • System monitoring: Collect system-level resource utilization statistics such as CPU (e.g., % processor time), disk I/O (e.g., % disk time, read/write queue lengths, I/O rates, latencies), and network I/O (e.g., I/O rates, latencies). Examples of tools used to measure these quantities are "perfmon" on the Microsoft Windows OS, and "sar/iostat" on the Linux OS.
  • Application server monitoring: Gather and display key application server performance statistics such as queue depths, utilization of thread pools, and database connection pools. For example, BEA's WebLogic Console can be used to monitor such data.
  • Database monitoring tools: Collect database performance metrics, including cache hit ratio, disk operation characteristics (e.g., sort rates, table scan rates), SQL response times, and database table activity. These may be measured using Oracle 9i Performance Manager, for example.
  • Application profilers: Identify application-level hot spots and drill down to the code level. Intel's VTune Performance Analyzer may be used to accomplish this.

    Performance Tuning
    Before we start tuning the system ourselves, a lot of effort can be saved by following currently established BKMs. In this section, we'll look at how we applied BKMs to establish the baseline data. Then we'll describe the iterative approach used to tune the system for best performance.

    The full-disclosure reports published on the SPEC Web site are a good source of tuning BKMs.

    Establish Baseline by Applying Current Best Known Methods

    Hardware
    It's important to ensure that the BIOS settings and the population of the memory subsystem follow prescribed norms. Reading and following the system documentation can pay dividends. For example, a platform with 4 gigabytes of memory may perform better with four 1-gigabyte memory cards than with one 4-gigabyte memory card. We fully populated the memory banks for the systems under test to eliminate the known memory latencies caused by unfilled memory card slots.

    There are several hardware aspects that affect performance, including processor frequency, cache sizes, front-side bus (FSB) capacity, and memory speed. In particular, higher frequency and larger cache lead to improved SPECjAppServer2002 performance. In one study, the performance was improved by 40% when the frequency was increased by 50% and the cache size was doubled.

    Network equipment has become relatively inexpensive. We use 1 Gbps NICs to reduce the risk of the network becoming a bottleneck.

    While focusing on the best performance of the application server, we want to reduce the risk of the database system becoming a bottleneck. A high-performance disk array is used for the database back end, with eight disks for tables and four disks for logs. We also use raw partitions to avoid OS file-system overhead when accessing the disks.

    Operating Systems
    On some Linux systems, the default limit on open files may be too small for enterprise Java applications. We raised the system-wide limit by adding the following to /etc/sysctl.conf (applied with sysctl -p):

    fs.file-max = 65535

    Similarly, a

    ulimit -n 65535

    is added to the application server startup script or to the user's shell initialization file (e.g., .bashrc) to raise the per-process limit.
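As a quick sanity check, the effective limits can be read back before and after the change. The commands below are read-only and assume standard Linux paths:

```shell
# Read back the current limits (read-only; standard Linux paths).
cat /proc/sys/fs/file-max   # system-wide limit set by fs.file-max
ulimit -n                   # per-process limit set by ulimit -n
```

After running sysctl -p, the first value should reflect the new fs.file-max setting.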

    The latest kernel or OS build with known good performance should be used on all systems. Similarly, the most current drivers, for example, the NIC drivers, should also be installed on the application server, database server, and client system.

    Once the baseline performance is established, we proceed to the iterative approach to tune the performance at the system, application server, and JVM levels.

    For the SPECjAppServer2002 workload, a higher throughput can be obtained by increasing the load (also known as the injection rate) on the system. However, there are response-time requirements that make merely increasing the injection rate overly simplistic. While increasing the injection rate, we need to tune the rest of the system so that adequate response times can be achieved for key transactions.

    One method we frequently apply is to estimate the load the system can support by scaling a compliant injection rate up to a target CPU utilization of 90%. For example, if a compliant run at an injection rate of 100 consumes 45% CPU, we increase the injection rate to 200 and tune the system until response times are again compliant.
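This scaling estimate reduces to a small calculation. The helper below is a hypothetical sketch, not part of the benchmark kit, and assumes throughput scales roughly linearly with CPU utilization:

```java
public class InjectionRateEstimate {
    // Scale a compliant injection rate up to a target CPU utilization,
    // assuming throughput grows roughly linearly with CPU consumption.
    static double estimate(double compliantRate, double cpuUtil, double targetUtil) {
        return compliantRate * (targetUtil / cpuUtil);
    }

    public static void main(String[] args) {
        // A compliant run at injection rate 100 consuming 45% CPU,
        // scaled to a 90% CPU target:
        System.out.println(Math.round(estimate(100, 0.45, 0.90))); // prints 200
    }
}
```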

    System Level Performance
    Response time is an important aspect of the SPECjAppServer2002 workload, so lowering resource consumption in each system component helps. During system-level tuning, the main goal is to saturate the application server CPU (i.e., bring it close to 100% utilization). Reaching maximum throughput without fully saturating the CPU indicates a performance bottleneck such as I/O contention, over-synchronization, or an incorrect thread pool configuration. Conversely, a high response time at an injection rate well below CPU saturation indicates latency issues such as excessive disk I/O or an improperly configured database.

    Application server CPU saturation indicates that there are no system-level bottlenecks outside of the application server. The throughput measured at this level would point out the maximum system capacity within the current application implementation and system configuration parameters. Further tuning may involve adjusting garbage collection parameters or adding application server nodes to a cluster.

    Most components exhibit a nonlinear response time/throughput behavior: increasing the throughput tends to increase the response time, with a disproportionate increase at high throughputs. It is important to size these components so that their utilization at the required throughput is relatively low, which keeps the response time relatively small as well. This is especially important for network capacity, disk capacity, and the capacity of the data bus connecting processors to memory and I/O.
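This nonlinearity can be illustrated with the classic single-server (M/M/1) queueing approximation, R = S / (1 - U) for service time S and utilization U. This is an idealized model we introduce here for illustration, not a measurement from the benchmark:

```java
public class ResponseTimeCurve {
    // M/M/1 approximation: response time R = S / (1 - U) blows up as
    // utilization U approaches 1, which is why components should be
    // sized to run at relatively low utilization.
    static double responseTime(double serviceTimeMs, double utilization) {
        return serviceTimeMs / (1.0 - utilization);
    }

    public static void main(String[] args) {
        double s = 10.0; // 10 ms of service time
        System.out.println(Math.round(responseTime(s, 0.50))); // 20 ms at 50% busy
        System.out.println(Math.round(responseTime(s, 0.90))); // 100 ms at 90% busy
    }
}
```

Doubling utilization from 50% to 90% here multiplies the response time fivefold, which matches the "disproportionate increase" described above.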

    System monitoring tools (described earlier) can be used to track system performance metrics, which can help find bottlenecks. In a multitiered system setup where multiple computers are used, it is important to run these tools on all of the computers.

    Application Server-Level Performance
    The workload itself is an important factor in performance and may demand a specific optimal application server configuration. Many parameters can be tuned to optimize both response times and throughput. Reducing overall response time can often increase the capacity for further throughput gains. It is also important to break response times down into subcomponents, and to tune the system further so that the response times of key subcomponents are optimized too.

    Many of these tunable parameters are easily accessible in common application servers such as BEA WebLogic Server. The list of parameters presented here is not exhaustive; it is merely a starting point for tuning the performance of your enterprise Java applications. It covers key container parameters and key application server parameters. Bear in mind that these parameters are tuned to reduce response time for the workload's key transactions, such as new order and manufacturing.

    Tuning key container parameters
    Many application server parameters can be tuned to help an application perform more effectively. The following parameters should be considered for most applications.

  • Setting a good value for the initial bean pool size improves the initial response time for EJBs: They are preallocated upon application server startup.
  • Setting an optimal value for bean cache size will prevent passivations: It increases performance by reducing file I/O activity.
  • Allocating large enough cache size for the appropriate stateful session bean can potentially improve the throughput: For example, you may want to increase the max-beans-in-cache specified for the CartSes EJB and measure the change in performance.
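For example, the stateful session bean cache can be sized in the weblogic-ejb-jar.xml deployment descriptor. The fragment below is illustrative (element names as in WebLogic Server 8.1 descriptors; the value 2000 is an arbitrary starting point, not a recommendation):

```xml
<!-- Illustrative fragment of weblogic-ejb-jar.xml; the cache size
     is an arbitrary starting point to be tuned experimentally. -->
<weblogic-enterprise-bean>
  <ejb-name>CartSes</ejb-name>
  <stateful-session-descriptor>
    <stateful-session-cache>
      <max-beans-in-cache>2000</max-beans-in-cache>
    </stateful-session-cache>
  </stateful-session-descriptor>
</weblogic-enterprise-bean>
```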

    Tuning key application server parameters
    Many application server parameters can be tuned to enable better sharing and interaction with virtual machines and operating systems. The following parameters should be considered for most applications.

  • A platform-optimized socket multiplexer should be used to improve server performance for I/O scalability: In particular, when a performance pack is available from a vendor it should be used. However, with the emergence of JDK 1.4, this effect has become less significant than before.
  • The thread pool size should be gradually increased until performance peaks: Beware of making this size too big, as a higher number may degrade performance due to unnecessary usage of system resources and excessive context switches.
  • BEA WebLogic Server supports the notion of multiple queues for transactions: You may find a specific distribution of execute threads that is optimal for a specific workload. This is particularly important when certain transactions have tight response time limits, since more threads can be allocated to those transactions accordingly. Multiple queues have a clear advantage over a single-queue mechanism in shifting long-response-time transactions to less critical areas. We found that changing a thread pool size by as little as one thread can sometimes yield a big response time improvement.
  • The database connection pool should be set equal to or larger than the number of available execute threads: This way an execute thread does not need to wait for a connection. With optimistic concurrency, the required connection pool size is actually about 1.5 times the number of available execute threads.
  • Experimenting with the JDBC prepared statement cache size may yield a configuration that minimizes the need for parsing statements on the database. The value should be gradually increased until performance peaks. We started with a value of 100 for SPECjAppServer2002 and did not observe performance gains either increasing or decreasing the value.
  • Relationship caching and optimistic concurrency are two additional features provided by BEA WebLogic Server that are worth evaluating for your application.
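The connection-pool sizing rule above reduces to simple arithmetic; the helper below is a hypothetical sketch of that calculation:

```java
public class PoolSizing {
    // Rule of thumb from the text: at least one database connection per
    // execute thread, and about 1.5x that under optimistic concurrency.
    static int connectionPoolSize(int executeThreads, boolean optimisticConcurrency) {
        double factor = optimisticConcurrency ? 1.5 : 1.0;
        return (int) Math.ceil(executeThreads * factor);
    }

    public static void main(String[] args) {
        System.out.println(connectionPoolSize(25, false)); // prints 25
        System.out.println(connectionPoolSize(25, true));  // prints 38
    }
}
```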

    JVM-level Performance
    Selecting the correct JVM is critical. It is essential to use a JVM that has been optimized for the underlying hardware. The best optimizations for various processor platforms are known, and a Java application needs to rely on the JVM to harness these optimizations.

    A JVM can provide configuration parameters to the users to let them identify which techniques the JVM should use for optimal performance of their application. We selected BEA WebLogic JRockit as our JVM as it is highly optimized for both Intel Xeon and Itanium platforms.

    The key JVM parameters are in the area of heap management, ranging from the selection of the garbage collection (GC) algorithm and the specification of heap sizes, down to the specifics of thread local allocation sizes and when the space for an object is cleared. It is usually preferable to set the minimum and maximum heap sizes to be the same to avoid runtime overhead associated with expanding and contracting the heap.

    The selected heap size can have a profound effect on performance. It is often desirable to set the heap space as large as possible provided you have enough memory on the system. We use a heap space of 1.5GB for our setup for the Xeon processor family, while we use a heap space of 12GB for our setup for the Itanium processor family as the 64-bit architecture systems allow us to use much more memory to boost the performance.

    The BEA JRockit JVM permits alternate garbage collection strategies to be specified. Parallel GC is a good starting option for the SPECjAppServer2002 workload.
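A JRockit command line reflecting these settings might look like the following. The flag names are as documented for JRockit releases of this era, and the heap value and server class are illustrative, so verify both against your JVM's documentation:

```shell
# Illustrative JRockit flags: equal -Xms/-Xmx avoids heap resizing at
# runtime; -Xgc:parallel selects the parallel collector suggested above.
java -Xms1536m -Xmx1536m -Xgc:parallel weblogic.Server
```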

    While rules-of-thumb can be created and experience can be a guide, there is no real substitute for running a variety of experiments to identify the JVM parameters that work best for a given application. This is especially important for JRockit, which exposes a rich set of parameters for you to squeeze the last drop of performance from your application.

    Summary
    This article described a top-down, data-driven, and closed-loop approach to boosting SPECjAppServer2002 performance. Opportunities to improve performance were examined across the whole stack: at the system level, at the application level, and at the machine level, covering the virtual machine as well as the physical hardware. Our research suggests that all layers of the system stack - not just one or two - should be examined when identifying and removing performance bottlenecks.

    Acknowledgments
    Jason A Davidson, Ashish Jha, Michael LQ Jones, Tony TL Wang, D J Penney, Kumar Shiv, and Ricardo Morin provided key information for this article.

    References

  • Arnold, Ken; Gosling, James; Holmes, David. (2000). The Java Programming Language, Third Edition. Sun Microsystems, Inc.
  • Java Community Process, "Java 2 Platform, Enterprise Edition 1.3 Specification": http://jcp.org/aboutJava/communityprocess/final/jsr058/
  • Standard Performance Evaluation Corporation (SPEC): www.spec.org/jAppServer2002/index.html
  • Chow, K.; Morin, R.; Shiv, K. (February 2003). "Enterprise Java Performance: Best Practices." Intel Technology Journal. http://developer.intel.com/technology/itj/2003/volume07issue01/
  • Intel Corporation, "VTune Performance Analyzer": www.intel.com/software/products/vtune/vtune61
  • Patterson, David A.; Hennessy, John L. (1997). Computer Organization and Design: The Hardware/Software Interface. Morgan Kaufmann Publishers.
  • BEA WebLogic Server 8.1: www.bea.com/framework.jsp?CNT=index.htm&FP=/content/products/server
  • Intel Corporation, "Intel Itanium 2 Processor Reference Manual for Software Development and Optimization": http://developer.intel.com/design/itanium2/manuals
  • Intel Corporation, "Intel Pentium 4 Processor Optimization Reference Manual": http://developer.intel.com/design/pentium4/manuals
    Gim Deisher

    Gim Deisher is a Senior Software Performance Engineer in the Software and Solutions Group at Intel Corporation. She received an M.S. degree in Electrical Engineering from Arizona State University in 1992.

    Kingsum Chow

    Kingsum Chow is a Senior Performance Engineer in the Managed Runtime Environments group within the Software and Solutions Group (SSG). He has been involved in performance modeling and optimization of middleware application server stacks, with an emphasis on J2EE and Java Virtual Machines. He has published 20 technical papers and presentations. He received his Ph.D. degree in Computer Science and Engineering from the University of Washington in 1996.
