SPECjAppServer2002 Performance Tuning

This article discusses the best known methods for tuning the performance of the BEA WebLogic application server running the SPECjAppServer2002 benchmark on Intel architecture platforms. We describe a top-down, data-driven, and closed-loop approach to performance tuning, and touch on key advantages of BEA WebLogic that improve the performance of J2EE workloads.

Introduction
Java has become increasingly important in server-based applications. Consequently, standardized, robust, and scalable application support frameworks have become critical. Java 2 Enterprise Edition (J2EE) addresses this need, providing a comprehensive specification for application servers, including componentized object models and life cycles, database access, security, transactional integrity, and safe multithreading. One such application server is the BEA WebLogic application server. SPECjAppServer2002 is the most recent client/server benchmark for measuring the performance of Java Enterprise application servers using a subset of J2EE APIs in a Web application with a focus on Enterprise JavaBeans (EJB) performance.

In this article we'll examine the performance of SPECjAppServer2002 running on WebLogic and Intel architecture server platforms. We describe an iterative, data-driven, top-down methodology (see Figure 1), and the tools needed to systematically improve system performance.

At the system level, we identify performance and scalability barriers - such as input/output (I/O), operating system, and database bottlenecks - and techniques to overcome them. At the application level, we'll discuss application design considerations and application server tuning. At the machine level, we'll discuss Java Virtual Machine (JVM) tuning.

Performance Tuning Methodology
Application server configurations frequently involve multiple interconnected computers. Given the complexity involved, ensuring an adequate level of performance in this environment requires a systematic approach. There are many factors that may impact the overall performance and scalability of the system. Examples of these factors include application design decisions, efficiency of user-written application code, system topology, database configuration and tuning, disk and network input/output (I/O) activity, operating system (OS) configuration, and application server resource throttling controls.

We first apply existing generic best-known methods (BKM) to the system under test and obtain initial performance data. The initial performance data establishes a baseline for us to move forward by applying changes to tune the system and measure performance enhancements arising from these tuning efforts. The steps in the iterative process, shown in Figure 2, are:

1.  Collect data: Gather performance data as the system is exercised using stress tests and performance monitoring tools to capture relevant data.
2.  Identify bottlenecks: Analyze the collected data to identify performance bottlenecks.
3.  Identify alternatives: Identify, explore, and select alternatives to address the bottlenecks.
4.  Apply solution: Apply the proposed solution.
5.  Test: Evaluate the performance effect of the corresponding action.

Once a given bottleneck is addressed, additional bottlenecks may appear, so the process starts again by collecting performance data and initiating the cycle, until the desired level of performance is attained.

Data collection utilizes the following types of tools:

  • System monitoring: Collect system-level resource utilization statistics such as CPU (e.g., % processor time), disk I/O (e.g., % disk time, read/write queue lengths, I/O rates, latencies), and network I/O (e.g., I/O rates, latencies). Examples of tools used to measure these quantities are "perfmon" on the Microsoft Windows OS, and "sar/iostat" on the Linux OS.
  • Application server monitoring: Gather and display key application server performance statistics such as queue depths, utilization of thread pools, and database connection pools. For example, BEA's WebLogic Console can be used to monitor such data.
  • Database monitoring tools: Collect database performance metrics, including cache hit ratio, disk operation characteristics (e.g., sort rates, table scan rates), SQL response times, and database table activity. These may be measured using Oracle 9i Performance Manager, for example.
  • Application profilers: Identify application-level hot spots and drill down to the code level. Intel's VTune Performance Analyzer may be used to accomplish this.

    Performance Tuning
    Before we start tuning the system, we can save a lot of effort by following currently established BKMs. In this section, we'll look at how we applied BKMs to establish the baseline data. Then we'll describe the iterative approach to tune the system for best performance.

    A good source of tuning BKMs is the full disclosures of the published results on the SPEC Web site.

    Establish Baseline by Applying Current Best Known Methods

    Hardware
    It's important to ensure that the BIOS settings and the population of the memory subsystem follow prescribed norms. Reading and following the system documentation can pay dividends. For example, a platform with 4 gigabytes of memory may perform better with four 1-gigabyte memory cards rather than with one 4-gigabyte memory card. We fully populated the memory banks for the systems under test to eliminate known memory latencies caused by unfilled memory card slots.

    There are several hardware aspects that affect performance, including processor frequency, cache sizes, front-side bus (FSB) capacity, and memory speed. In particular, higher frequency and larger cache lead to improved SPECjAppServer2002 performance. In one study, the performance was improved by 40% when the frequency was increased by 50% and the cache size was doubled.

    Network equipment has become relatively inexpensive. We use 1Gbps NICs to reduce the risk that the network becomes a bottleneck.

    While focusing on the best performance of the application server, we want to reduce the risk that the database system becomes a bottleneck. A high-performance disk array system is used for the database back end. We use eight disks for tables and four disks for logs. We also use raw partitions to avoid OS file system overhead when accessing the disks.

    Operating Systems
    On some Linux systems, the default limit on open files might be too small for enterprise Java applications. We increased the limit by adding the following to /etc/sysctl.conf:

    fs.file-max = 65535

    Similarly, a

    ulimit -n 65535

    is added to the application server startup script, or the user's initialization environment (.bashrc).

    The latest kernel or OS build with known good performance should be used on all systems. Similarly, the most current drivers, for example, the NIC drivers, should also be installed on the application server, database server, and client system.

    Once the baseline performance is established, we proceed to the iterative approach to tune the performance at the system, application server, and JVM levels.

    For the SPECjAppServer2002 workload, a higher throughput can be obtained by increasing the load (also known as the injection rate) on the system. However, there are response-time requirements that make merely increasing the injection rate overly simplistic. While increasing the injection rate, we need to tune the rest of the system so that adequate response times can be achieved for key transactions.

    One method we frequently apply is to estimate the load the system can support by scaling a compliant injection rate up to a target CPU utilization of 90%. For example, if we have a compliant run at an injection rate of 100 that consumes 45% CPU, we will increase the injection rate to 200 and tune the system so that the response times remain compliant.
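The linear scaling estimate above can be sketched as a small helper. This is an illustrative sketch of the arithmetic, not a tool from the benchmark kit; the class and method names are our own:

```java
// Estimate the injection rate expected to drive the application server
// to a target CPU utilization, by linear scaling from a measured run.
public class InjectionRateEstimate {

    // measuredCpu and targetCpu are fractions in (0, 1].
    static int estimate(int measuredRate, double measuredCpu, double targetCpu) {
        return (int) Math.round(measuredRate * (targetCpu / measuredCpu));
    }

    public static void main(String[] args) {
        // A compliant run at injection rate 100 consumed 45% CPU;
        // scaling to a 90% CPU target suggests trying an injection rate of 200.
        System.out.println(estimate(100, 0.45, 0.90)); // 200
    }
}
```

The estimate is only a starting point: as the article notes, the rest of the system must then be retuned so response times stay compliant at the higher rate.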

    System Level Performance
    The response time is an important aspect of the SPECjAppServer2002 workload. Thus, lowering the use of resources by the system components will be helpful. During system-level tuning, the main goal is to saturate the application server CPU (i.e., close to 100% utilization). Reaching maximum throughput without full saturation of the CPU is an indicator of a performance bottleneck such as I/O contention, over-synchronization, or incorrect thread pool configuration. Conversely, a high response time metric with an injection rate well below CPU saturation indicates latency issues such as excessive disk I/O or improper database configuration.

    Application server CPU saturation indicates that there are no system-level bottlenecks outside of the application server. The throughput measured at this level would point out the maximum system capacity within the current application implementation and system configuration parameters. Further tuning may involve adjusting garbage collection parameters or adding application server nodes to a cluster.

    Most components will exhibit a nonlinear response time/throughput behavior. In other words, increasing the throughput will tend to increase the response time, with a disproportionate increase in response time at high throughputs. It is important to size these components so that their utilization at the required throughput remains relatively low, which in turn keeps the response time relatively small. This is especially important for network capacity, disk capacity, and the capacity of the data bus connecting processors to memory and I/O.
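A simple queueing model illustrates why this nonlinearity arises. The M/M/1 formula below is our own illustrative assumption (the article does not prescribe a model); real components deviate from it, but the shape of the curve is the point:

```java
// Illustrative M/M/1 queueing model (an assumption, not from the article):
// response time R = S / (1 - U), where S is the service time and U the
// utilization. R grows disproportionately as U approaches saturation.
public class ResponseTimeModel {

    static double responseTime(double serviceTimeMs, double utilization) {
        return serviceTimeMs / (1.0 - utilization);
    }

    public static void main(String[] args) {
        // With a 10 ms service time, going from 50% to 90% utilization
        // grows the response time from 20 ms to roughly 100 ms.
        System.out.printf("%.1f ms%n", responseTime(10.0, 0.5));
        System.out.printf("%.1f ms%n", responseTime(10.0, 0.9));
    }
}
```

Doubling utilization here quintuples the response time, which is why the article recommends keeping network, disk, and bus utilization relatively low.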

    System monitoring tools (described earlier) can be used to track system performance metrics, which can help find bottlenecks. In a multitiered system setup where multiple computers are used, it is important to run these tools on all of the computers.

    Application Server-Level Performance
    The workload itself is an important factor in performance, and it may demand a specific optimal application server configuration. Many parameters can be tuned to optimize for both response times and throughput. Reducing overall response time can often increase the capacity for further throughput gains. It is also important to break response times down into subcomponents, and to further tune the system so that the response times of key subcomponents are optimized too.

    Many of these tunable parameters are easily accessible from common application servers such as the BEA WebLogic Server. The list of parameters presented here is not exhaustive. It is merely a starting point to tune the performance for your enterprise Java applications. The list includes tuning key application server parameters, and tuning key container parameters. You should bear in mind that these parameters are tuned to reduce response time for key transactions, such as new order and manufacturing, for the workload.

    Tuning key container parameters
    Many application server parameters can be tuned to help an application perform more effectively. The following parameters should be considered for most applications.

  • Setting a good value for the initial bean pool size improves the initial response time for EJBs: They are preallocated upon application server startup.
  • Setting an optimal value for bean cache size will prevent passivations: It increases performance by reducing file I/O activity.
  • Allocating large enough cache size for the appropriate stateful session bean can potentially improve the throughput: For example, you may want to increase the max-beans-in-cache specified for the CartSes EJB and measure the change in performance.
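As an illustration, in WebLogic Server 8.1 the pool and cache sizes discussed above are set per bean in the weblogic-ejb-jar.xml deployment descriptor. The bean names and values below are hypothetical starting points, not tuned recommendations; verify element names against your server version's documentation:

```xml
<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <ejb-name>OrderSes</ejb-name> <!-- hypothetical bean name -->
    <stateless-session-descriptor>
      <pool>
        <!-- Preallocate beans at startup to improve initial response time -->
        <initial-beans-in-free-pool>100</initial-beans-in-free-pool>
        <max-beans-in-free-pool>200</max-beans-in-free-pool>
      </pool>
    </stateless-session-descriptor>
  </weblogic-enterprise-bean>
  <weblogic-enterprise-bean>
    <ejb-name>CartSes</ejb-name>
    <stateful-session-descriptor>
      <stateful-session-cache>
        <!-- Sized to hold peak concurrent sessions and avoid passivation -->
        <max-beans-in-cache>2000</max-beans-in-cache>
      </stateful-session-cache>
    </stateful-session-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```

After changing such values, measure again: the right sizes depend on the injection rate and the number of concurrent sessions in your workload.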

    Tuning key application server parameters
    Many application server parameters can be tuned to enable better sharing and interaction with virtual machines and operating systems. The following parameters should be considered for most applications.

  • A platform-optimized socket multiplexer should be used to improve server performance for I/O scalability: In particular, when a performance pack is available from a vendor it should be used. However, with the emergence of JDK 1.4, this effect has become less significant than before.
  • The thread pool size should be gradually increased until performance peaks: Beware of making this size too big, as a higher number may degrade performance due to unnecessary usage of system resources and excessive context switches.
  • BEA WebLogic Server supports the notion of multiple queues for transactions: You may find a specific distribution of executing threads to optimize for a specific workload. This is particularly important when certain transactions have tight response time limits and more threads for those transactions can be allocated accordingly. The support of multiple queues has a clear advantage over a single queue mechanism for shifting long response time transactions to less critical areas. We found that changing a thread pool size by as small a value as 1 can sometimes yield a big response time improvement.
  • The database connection pool should be set equal to or larger than the number of available execute threads: This ensures an execute thread does not need to wait for a connection. For optimistic concurrency, the connection pool size required is actually about 1.5 times the number of available execute threads.
  • Experimenting with the JDBC prepared statement cache size may yield a configuration that minimizes the need for parsing statements on the database. The value should be gradually increased until performance peaks. We started with a value of 100 for SPECjAppServer2002 and did not observe performance gains either increasing or decreasing the value.
  • Relationship caching and optimistic concurrency are two additional features provided by BEA WebLogic Server that are worth evaluating for your workload.
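For reference, the thread pool, connection pool, and prepared statement cache discussed above live in the WebLogic 8.1 config.xml (or can be set through the WebLogic Console). The pool name, host, and sizes below are hypothetical starting points for the tuning loop, not recommendations, and attribute spellings should be checked against your server version:

```xml
<!-- Execute queue: start near the default and increase ThreadCount
     gradually until throughput peaks; too many threads adds context
     switching overhead -->
<ExecuteQueue Name="weblogic.kernel.Default" ThreadCount="15"/>

<!-- Connection pool sized at or above the execute thread count
     (about 1.5x for optimistic concurrency); a prepared statement
     cache of 100 was our starting point for SPECjAppServer2002 -->
<JDBCConnectionPool Name="oraclePool"
    DriverName="oracle.jdbc.driver.OracleDriver"
    URL="jdbc:oracle:thin:@dbhost:1521:orcl"
    InitialCapacity="25"
    MaxCapacity="25"
    PreparedStatementCacheSize="100"/>
```

Setting InitialCapacity equal to MaxCapacity avoids connection creation during measurement intervals, mirroring the bean preallocation rationale above.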

    JVM-level Performance
    Selecting the correct JVM is critical. It is essential to use a JVM that has been optimized for the underlying hardware. The best optimizations for various processor platforms are known, and a Java application needs to rely on the JVM to harness these optimizations.

    A JVM can provide configuration parameters to the users to let them identify which techniques the JVM should use for optimal performance of their application. We selected BEA WebLogic JRockit as our JVM as it is highly optimized for both Intel Xeon and Itanium platforms.

    The key JVM parameters are in the area of heap management, ranging from the selection of the garbage collection (GC) algorithm and the specification of heap sizes, down to the specifics of thread local allocation sizes and when the space for an object is cleared. It is usually preferable to set the minimum and maximum heap sizes to be the same to avoid runtime overhead associated with expanding and contracting the heap.

    The selected heap size can have a profound effect on performance. It is often desirable to set the heap space as large as possible provided you have enough memory on the system. We use a heap space of 1.5GB for our setup for the Xeon processor family, while we use a heap space of 12GB for our setup for the Itanium processor family as the 64-bit architecture systems allow us to use much more memory to boost the performance.

    The BEA JRockit JVM permits alternate garbage collection strategies to be specified. Parallel GC is a good starting option for the SPECjAppServer2002 workload.
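As a concrete illustration, the heap and GC guidance above translates into JVM launch flags. The line below reflects JRockit flag conventions as we understand them, with the 1.5GB heap from our Xeon setup; verify exact option spellings against your JRockit version's documentation:

```
# Equal minimum/maximum heap avoids resize overhead at run time;
# parallel GC is a good starting option for SPECjAppServer2002
java -Xms1536m -Xmx1536m -Xgc:parallel weblogic.Server
```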

    While rules-of-thumb can be created and experience can be a guide, there is no real substitute for running a variety of experiments to identify the JVM parameters that work best for a given application. This is especially important for JRockit, which exposes a rich set of parameters for you to squeeze the last drop of performance from your application.

    Summary
    This article described a top-down, data-driven, and closed-loop approach to boost SPECjAppServer2002 performance. Opportunities to improve performance were examined across the whole stack: the system level (hardware, network, OS, and database), the application level (application design and application server tuning), and the machine level (the JVM and the physical hardware). Our research suggested that all layers - not just one or two - of the system stack should be examined for performance bottleneck identification and removal.

    Acknowledgments
    Jason A Davidson, Ashish Jha, Michael LQ Jones, Tony TL Wang, D J Penney, Kumar Shiv, and Ricardo Morin provided key information for this article.

    References

  • Arnold, Ken; Gosling, James; Holmes, David. (2000). The Java Programming Language, Third Edition. Sun Microsystems, Inc.
  • Java Community Process, "Java 2 Platform, Enterprise Edition 1.3 Specification": http://jcp.org/aboutJava/communityprocess/final/jsr058/
  • Standard Performance Evaluation Corporation (SPEC): www.spec.org/jAppServer2002/index.html
  • Chow, K.; Morin, R.; Shiv, K. (February 2003). "Enterprise Java Performance: Best Practices." Intel Technology Journal. http://developer.intel.com/technology/itj/2003/volume07issue01/
  • Intel Corporation, "VTune Performance Analyzer": www.intel.com/software/products/vtune/vtune61
  • Patterson, David A.; Hennessy, John L. (1997). Computer Organization and Design: The Hardware/Software Interface. Morgan Kaufmann Publishers.
  • BEA WebLogic Server 8.1: www.bea.com/framework.jsp?CNT=index.htm&FP=/content/products/server
  • Intel Corporation, "Intel Itanium 2 Processor Reference Manual for Software Development and Optimization": http://developer.intel.com/design/itanium2/manuals
  • Intel Corporation, "Intel Pentium 4 Processor Optimization Reference Manual": http://developer.intel.com/design/pentium4/manuals
    About the Authors

    Gim Deisher is a Senior Software Performance Engineer working with the
    Software and Solutions Group at Intel Corporation. She received an M.S.
    degree in Electrical Engineering from Arizona State University in 1992.

    Kingsum Chow is a Senior Performance Engineer working with the Managed
    Runtime Environments group within the Software and Solutions Group
    (SSG). Kingsum has been involved in performance modeling and
    optimization of middleware application server stacks, with emphasis on
    J2EE and Java Virtual Machines. He has published 20 technical papers and
    presentations. He received his Ph.D. degree in Computer Science and
    Engineering from the University of Washington in 1996.

