
WebLogic Server Performance Tuning


Any product that does well in the market has to have good performance. Although many characteristics are necessary for a product to become as widely used as WebLogic Server is today, performance is definitely indispensable.

Good coding practices go a long way toward getting an application to run fast, but they alone are not sufficient. An application server has to be portable across a wide range of hardware and operating systems and has to be versatile in order to manage an even wider range of application types. This is why application servers provide a comprehensive set of tuning knobs that can be adjusted to suit the environment the server runs in as well as the application.

This article discusses some of the WebLogic-specific tuning parameters and is not an exhaustive list of the attributes that can be tuned. In addition, it is recommended that you try out any suggestions made here in an experimental setup before applying them to a production environment.

Monitoring Performance and Finding Bottlenecks
The first step in performance tuning is isolating the hot spots. Performance bottlenecks can exist in any part of the entire system - the network, the database, the clients, or the app server. It's important to first determine which system component is contributing to the performance problem. Tuning the wrong component may even make the situation worse.

WebLogic Server gives a system administrator the ability to monitor system performance with the administration console as well as a command-line tool. The server exposes a set of MBeans (managed beans) that gather information such as thread usage, resource availability, and cache hits and misses. This information can be retrieved from the server either through the console or with the command-line tool. The screenshot in Figure 1 shows statistics such as cache hits and misses in the EJB container and is one of the several views the console provides for monitoring performance.

Profilers are another useful class of tools for detecting performance bottlenecks in the application code itself. Several good profilers are available, such as Wily Introscope, JProbe, and OptimizeIt.

The EJB Container
The most expensive operations that the EJB container executes are arguably database calls to load and store entity beans. The container therefore provides various parameters that help minimize database access. However, except in certain special cases, at least one load operation and one store operation per bean in a transaction cannot be eliminated. These special cases are:

1.   The bean is read-only. In this case, the bean is loaded only once, when it is first accessed, and is never stored. However, the bean is loaded again once the read-timeout-seconds interval expires.

2.   The bean has a concurrency strategy of Exclusive or Optimistic and db-is-shared is false. In WebLogic Server 7.0 this parameter has been renamed cache-between-transactions, and its values have the opposite meaning: db-is-shared equals false is the same as cache-between-transactions equals true.

3.   The bean has not been modified in the transaction. In this case, the container optimizes away the store operation.
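The first two cases correspond to settings in weblogic-ejb-jar.xml. The fragments below are a sketch of how they might be configured; the bean names and timeout value are illustrative, and the element names should be verified against the WebLogic Server 7.0 descriptor DTD:

```xml
<!-- Case 1: a read-only bean, reloaded only when its timeout expires -->
<weblogic-enterprise-bean>
  <ejb-name>RateBean</ejb-name>
  <entity-descriptor>
    <entity-cache>
      <concurrency-strategy>ReadOnly</concurrency-strategy>
      <read-timeout-seconds>600</read-timeout-seconds>
    </entity-cache>
  </entity-descriptor>
</weblogic-enterprise-bean>

<!-- Case 2: Exclusive concurrency with caching between transactions
     (the 7.0 equivalent of db-is-shared=false) -->
<weblogic-enterprise-bean>
  <ejb-name>AccountBean</ejb-name>
  <entity-descriptor>
    <entity-cache>
      <concurrency-strategy>Exclusive</concurrency-strategy>
      <cache-between-transactions>true</cache-between-transactions>
    </entity-cache>
  </entity-descriptor>
</weblogic-enterprise-bean>
```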

If none of the above cases apply, each entity bean in the code path will be loaded and stored at least once in a transaction. Some of the features that further help reduce database calls or make such calls less expensive are caching, field groups, concurrency strategy, and eager relationship caching, some of which are new in WebLogic Server 7.0.

  • Caching: The cache size for entity beans is defined in weblogic-ejb-jar.xml by the parameter max-beans-in-cache. The container loads a bean from the database the first time it is invoked in a transaction and also puts it in the cache. If the cache is too small, some of the beans are passivated to the database, so regardless of whether the conditions for optimizations 1 and 2 above apply, these beans have to be reloaded from the database the next time they are called. Using a bean from the cache also means that setEntityContext() does not have to be called again at that point; if the primary key is a compound or complex field, the cost of setting it is saved as well.

  • Field groups: Field groups specify which fields to load from the database for a finder method. If the entity bean has a large BLOB field (say, an image) that is required only rarely, a field group excluding that field can be associated with a finder method so that the field is not loaded. This feature is available only for EJB 2.0 beans.

  • Concurrency strategy: As of WebLogic Server 7.0, the container provides four concurrency-control mechanisms. They are Exclusive, Database, Optimistic, and ReadOnly. The concurrency strategy is also tied in with the isolation level the transaction is running with. Concurrency control is not really a performance-boosting feature. Its main purpose is to guarantee consistency of the data that the entity bean represents within the requirements imposed by the deployer of the bean. However, there are some mechanisms that allow the container to process requests faster than others, sometimes at the cost of data consistency.

    The most restrictive concurrency strategy is Exclusive. Access to a bean with a particular primary key is serialized, so only one transaction can access the bean at a time. This provides very good concurrency control within the container but hurts performance. It is useful mainly when caching between transactions is allowed (which must not be done in a cluster), because load operations can then be optimized away, which may make up for the loss in parallelism.

    Database concurrency defers concurrency control to the database. Entity beans are not locked in the container, allowing multiple transactions to operate on the same bean concurrently, which results in faster performance. However, this may require a higher isolation level to guarantee data consistency.

    Optimistic concurrency also defers concurrency control to the database, but the difference is that the check for data consistency is done at store time with a predicated update rather than by locking the row at load time. In cases where there isn't too much contention for the same bean in an application, this mechanism is faster than database concurrency, with both providing the same level of data-consistency protection. However, it does require the caller to retry the invocation in the event of a conflict. This feature can be used only for EJB 2.0 beans.

    The ReadOnly strategy is used only for beans that are just that - read-only. The bean is loaded only the first time it is accessed in an application or when the read-timeout-seconds specified expires. The bean is never stored. There is also the related read-mostly pattern in which the bean is notified when the underlying data changes. This causes the bean to be reloaded.

  • Eager relationship caching: If Bean A and Bean B are related by a CMR (Container Managed Relationship) and both are used in the same transaction, both can be loaded using a single database call. This is what is done in eager relationship caching. This again is a new feature in WebLogic Server 7.0 and is available only for EJB 2.0.
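The descriptor fragments below sketch how these features might be configured. All bean, field, and group names are illustrative, and the element names and ordering are assumptions that should be checked against the WebLogic Server 7.0 descriptor DTDs:

```xml
<!-- weblogic-ejb-jar.xml: cache size and concurrency strategy -->
<weblogic-enterprise-bean>
  <ejb-name>ProductBean</ejb-name>
  <entity-descriptor>
    <entity-cache>
      <max-beans-in-cache>2000</max-beans-in-cache>
      <!-- Optimistic: consistency is checked at store time with a
           predicated update; callers must retry on a conflict -->
      <concurrency-strategy>Optimistic</concurrency-strategy>
    </entity-cache>
  </entity-descriptor>
</weblogic-enterprise-bean>

<!-- weblogic-cmp-rdbms-jar.xml: a field group that skips a BLOB,
     and eager caching of a CMR relationship -->
<weblogic-rdbms-bean>
  <ejb-name>ProductBean</ejb-name>
  <field-group>
    <group-name>summary</group-name>
    <cmp-field>sku</cmp-field>
    <cmp-field>name</cmp-field>
    <cmp-field>price</cmp-field>  <!-- the BLOB field "image" is excluded -->
  </field-group>
  <relationship-caching>
    <caching-name>productWithSupplier</caching-name>
    <caching-element>
      <!-- the related bean is loaded in the same database call -->
      <cmr-field>supplier</cmr-field>
    </caching-element>
  </relationship-caching>
</weblogic-rdbms-bean>
```

A finder can then be associated with the field group or caching name so that only the listed fields, and the related bean, are fetched.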

Besides the above features, which help boost performance by optimizing the way the container accesses the database, there are also some parameters for session as well as entity beans that can be used to get better performance out of the EJB container.

Pooling and caching are the main features the EJB container provides to boost performance for session and entity beans, though not all of them apply to all bean types. The downside is higher memory requirements, but this is rarely a major issue. Pooling is available for stateless session beans (SLSB), message-driven beans (MDB), and entity beans. When a pool size is specified for SLSBs and MDBs, that many instances of the bean are created, their setSessionContext()/setMessageDrivenContext() methods are called, and they are placed in the pool. The pool size for these bean types need not exceed the number of execute threads configured (in practice, fewer are required). If the bean does anything expensive in setSessionContext(), where JNDI lookups are typically done, method invocations are faster when a pooled instance is used. For entity beans, the pool is populated with anonymous instances of the bean (i.e., instances with no primary key) after setEntityContext() has been called. These instances are used by finders: a finder picks up an instance from the pool, assigns it a primary key, and loads the bean from the database.
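A sketch of pool settings for a stateless session bean in weblogic-ejb-jar.xml (the bean name and sizes are illustrative; with the default execute queue of 15 threads, a larger pool buys nothing):

```xml
<weblogic-enterprise-bean>
  <ejb-name>QuoteServiceBean</ejb-name>
  <stateless-session-descriptor>
    <pool>
      <!-- setSessionContext() is called once per pooled instance, so
           expensive JNDI lookups there are amortized across calls -->
      <initial-beans-in-free-pool>15</initial-beans-in-free-pool>
      <max-beans-in-free-pool>15</max-beans-in-free-pool>
    </pool>
  </stateless-session-descriptor>
</weblogic-enterprise-bean>
```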

Caching applies to stateful session beans (SFSB) and entity beans. Entity beans have already been discussed above. For SFSBs, caching helps prevent serialization to disk, which is an expensive operation and should definitely be avoided. The cache size for SFSBs should be a little larger than the size actually needed, which is the number of clients concurrently connected to the server. This is because the container does not start looking for beans to passivate until the cache reaches about 85% of its capacity. If the cache is larger than actually needed, the container does not have to spend time searching it for beans to passivate.
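For instance, with roughly 1,000 concurrently connected clients, the SFSB cache might be configured with some headroom (the numbers and element placement here are illustrative):

```xml
<stateful-session-descriptor>
  <stateful-session-cache>
    <!-- ~20% above the expected concurrent-client count, so the
         container rarely has to search for beans to passivate -->
    <max-beans-in-cache>1200</max-beans-in-cache>
    <idle-timeout-seconds>600</idle-timeout-seconds>
  </stateful-session-cache>
</stateful-session-descriptor>
```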

The EJB container also provides two ways in which bean-to-bean and Web-tier-to-bean calls can be made: call-by-value and call-by-reference. By default, call-by-reference is used for beans that are part of the same application, which is faster than call-by-value. Unless there is a compelling reason to do so, call-by-reference should not be disabled. An even better way to force call-by-reference is to use local interfaces. Another feature introduced in WebLogic Server 7.0 is activation for stateful services. Though activation hurts performance somewhat, it provides huge scalability improvements because of lowered memory requirements. If scalability is not a concern, activation can be turned off by passing the parameter -noObjectActivation to ejbc.
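Turning activation off is done when the bean is compiled; a sketch of the ejbc invocation (the jar names are illustrative):

```
java weblogic.ejbc -noObjectActivation ShoppingCart.jar ShoppingCart-wl.jar
```

The trade-off is higher memory use per stateful instance in exchange for avoiding activation and passivation costs.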

Tuning the JDBC layer is as important as tuning the EJB container for database access. Take, for instance, the connection pool size: it should be large enough that no threads are left waiting for a connection to become available. If all database access is done on threads from the default execute queue, the number of connections should be the execute queue's thread count less the number of socket reader threads (threads in the default execute queue that read incoming requests). To avoid creating and destroying connections at runtime, set the initial and maximum capacity of the connection pool to the same value. Also ensure that TestConnectionsOnReserve is set to false (the default), if possible. If it's set to true, the connection is tested before each allocation to a caller, which requires an additional database round-trip.
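As a sketch, a config.xml pool entry sized this way might look as follows. The pool name, driver, user, and the arithmetic (15 execute threads minus 3 socket readers) are illustrative; the attribute names are assumed to follow the JDBCConnectionPool MBean:

```xml
<JDBCConnectionPool Name="oraclePool"
    Targets="myserver"
    DriverName="weblogic.jdbc.oci.Driver"
    URL="jdbc:weblogic:oracle"
    Properties="user=scott"
    InitialCapacity="12"
    MaxCapacity="12"
    TestConnectionsOnReserve="false"/>
```

Setting InitialCapacity equal to MaxCapacity creates every connection at startup, so none are created or destroyed under load.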

The other significant parameter is PreparedStatementCacheSize. Each connection has a static cache of prepared statements, the size of which is specified in the JDBC connection pool configuration. It is important to keep in mind that the cache is static: if its size is n, only the first n statements executed using that connection are cached. One way to ensure that the most expensive SQL statements are cached is to have a startup class seed the cache with those statements. Despite the big performance boost the cache provides, it should not be used blindly. If the database schema changes, there is no way to invalidate a cached statement or replace it with a new one without restarting the server. Also, cached statements may hold open cursors in the database.
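The cache size is another attribute on the same pool element (the value here is illustrative and should be no larger than the number of distinct hot statements, given the open-cursor caveat above):

```xml
<JDBCConnectionPool Name="oraclePool"
    InitialCapacity="12"
    MaxCapacity="12"
    PreparedStatementCacheSize="20"/>
```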

In WebLogic Server 7.0, performance improvements in the jDriver have made it faster than the Oracle thin driver, especially for applications that do a significant number of SELECTs. This is evident from the two ECperf results submitted by HP using WebLogic Server 7.0 Beta (http://ecperf.theserverside.com/ecperf/index.jsp?page=results/top_ten_price_performance).

The JMS subsystem provides a host of tuning parameters. JMS messages are processed by a separate execute queue called JMSDispatcher, so the JMS subsystem is not starved by heavy-traffic applications running on the default or any other execute queue. Conversely, JMS does not starve other applications either. With JMS, most tuning parameters involve a trade-off in quality of service. For example, when using file-persistent destinations, disabling synchronous writes (by setting the property -Dweblogic.JMSFileStore.SynchronousWritesEnabled=false) may result in a dramatic performance improvement, but at the risk of losing messages or receiving duplicate messages. Similarly, using multicast to send messages boosts performance, but messages may be dropped.
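As a sketch, the property is passed on the server's command line at startup (other flags omitted):

```
java -Dweblogic.JMSFileStore.SynchronousWritesEnabled=false weblogic.Server
```

This should only be done where an occasional lost or duplicated message is acceptable.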

The message acknowledgement interval should not be set too small: the more frequently acknowledgements are sent, the slower message processing is likely to be. At the same time, setting it too large may mean messages are lost or duplicated in the event of a system failure.

Typically, multiple JMS destinations should be configured on a single JMS server rather than spread over multiple JMS servers, unless the single server has become a scalability bottleneck.

Turning message paging off may improve performance, but it hurts scalability. With paging on, additional I/O is required to serialize messages to disk and read them back when needed, but memory requirements decrease.

Asynchronous consumers typically scale and perform better than synchronous consumers.

Web Container
The Web tier in an application is mostly used to generate presentation logic. A widely used architecture employs servlets and JSPs to generate dynamic content from data retrieved from the application layer, which typically consists of EJBs. Servlets and JSPs in such architectures hold references to EJBs and, in cases where they talk directly to the database, to data sources. It's a good idea to cache these references, because JNDI lookups are expensive when the JSPs and servlets are not deployed on the same application server as the EJBs.

JSP cache tags can be used to cache data within a JSP page. These tags support output as well as input caching. Output caching stores the content generated by the code within the tag; input caching stores the values assigned to variables by that code. If the Web tier is not expected to change often, it helps to turn auto-reloading off by setting ServletReloadCheckSecs to -1. The server then does not poll for changes in the Web tier, and the effect is noticeable when there are a large number of JSPs and servlets.
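A minimal output-caching sketch follows. The tag library URI, the timeout format, and the helper expression are illustrative and should be checked against the WebLogic JSP tag documentation:

```jsp
<%@ taglib uri="weblogic-tags.tld" prefix="wl" %>

<%-- The body is generated once, then served from cache for 10 minutes --%>
<wl:cache name="headlines" timeout="10m" scope="application">
  <%= fetchHeadlinesFromEJB() %>
</wl:cache>
```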

It is also advisable not to store too much information in the HTTP session. If such information is needed, consider using a stateful session bean instead.

JVM Tuning
Most JVMs these days are self-tuning in that they can detect hot spots in the code and optimize them. The tuning parameters that developers and deployers have to worry about most are the heap settings. There is no general rule for setting these. In JVMs that organize the heap into generational spaces, the new space or nursery should typically be set to between a third and half of the total heap size. The total heap specified should not be too large for JVMs that do not support concurrent garbage collection (GC); in such VMs, full GC pauses can last a minute or more if the heap is too large. Finally, bear in mind that these settings depend to a large extent on the memory usage patterns of the application deployed on the server. Additional information on tuning the JVM is available at http://edocs.bea.com/wls/docs70/perform/JVMTuning.html#1104200.
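As a sketch for a HotSpot-based JVM (the sizes are illustrative and must be derived from the application's memory profile; other VMs such as JRockit use different flags), a 512MB heap with a nursery of roughly three-eighths of the total might be set as:

```
java -server -Xms512m -Xmx512m \
     -XX:NewSize=192m -XX:MaxNewSize=192m \
     -verbose:gc \
     weblogic.Server
```

Setting -Xms equal to -Xmx avoids heap resizing at runtime, and -verbose:gc makes pause times visible so the sizes can be adjusted empirically.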

Server-Wide Tuning
Besides the tuning parameters provided by each subsystem, there are also some server-wide parameters that help boost performance. The most important of these are the thread count and execute queue configuration. Increasing the thread count does not help in every case: consider increasing it only if the desired throughput is not being achieved, the wait queue (which holds requests on which processing hasn't started) is large, and the CPU is not maxed out. Even then, the change may not improve performance. CPU usage can be low because there is contention for some resource in the server, as happens, for example, when there aren't enough JDBC connections. Factors like these should be kept in mind when changing the thread count.

In WebLogic Server 7.0, it's possible to configure multiple execute queues and have requests for a particular EJB or JSP/servlet processed using threads from deployer-defined execute queues. To do this, pass the flag -dispatchPolicy <queue-name> when running weblogic.ejbc on the bean. For JSPs/servlets, set the init-param wl-dispatch-policy in the weblogic-specific deployment descriptor for the Web app, with the name of the execute queue as the value. In applications in which operations on certain beans/JSPs have larger response times than others, it can help to configure separate execute queues for them. The queue sizes for best performance will, however, have to be determined empirically.
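A sketch of both halves of the configuration (queue, servlet, and class names are illustrative):

```xml
<!-- config.xml: a deployer-defined execute queue on the server -->
<ExecuteQueue Name="ReportQueue" ThreadCount="5"/>

<!-- deployment descriptor: route this servlet's requests to that queue -->
<servlet>
  <servlet-name>ReportServlet</servlet-name>
  <servlet-class>com.example.ReportServlet</servlet-class>
  <init-param>
    <param-name>wl-dispatch-policy</param-name>
    <param-value>ReportQueue</param-value>
  </init-param>
</servlet>
```

For an EJB, the equivalent is to pass -dispatchPolicy ReportQueue to weblogic.ejbc when compiling the bean.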

The other big question is deciding under what circumstances the WebLogic performance packs (http://e-docs.bea.com/wls/docs70/perform/WLSTuning.html#1112119) should be used. If the number of sockets (there is one socket on the server for each client JVM for RMI connections) isn't large, and if incoming requests from clients are such that there is almost always data to be read from the socket, there might not be a significant advantage to using the performance packs. It's possible that not using them results in similar or better performance, depending on the JVM's implementation of network I/O processing.

The socket reader threads are taken from the default execute queue. On Windows there are two socket reader threads per CPU, and on Solaris there are three overall for native I/O. When Java I/O is used, the number of reader threads is specified by the PercentSocketReaderThreads setting in config.xml. It defaults to 33% and has a hard limit of 50%, which makes sense because there is no point in having more reader threads than threads left to process the requests. With Java I/O, the number of reader threads should be as close to the number of client connections as possible, because Java I/O blocks while waiting for requests. This is also why it does not scale well as the number of client connections increases.
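As a sketch, the setting appears as an attribute on the server element in config.xml. The attribute spelling used here, ThreadPoolPercentSocketReaders, is an assumption and should be verified against the ServerMBean documentation for the release in use:

```xml
<!-- config.xml: 15 execute threads, a third of them reading sockets -->
<Server Name="myserver"
    ThreadPoolSize="15"
    ThreadPoolPercentSocketReaders="33"/>
```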

The above are only some of the many ways the server can be tuned. Bear in mind, however, that a poorly designed, poorly written application will usually perform poorly regardless of tuning. Performance must be a key consideration throughout the stages of the application development life cycle, from design to deployment. Too often performance takes a back seat to functionality, and problems found later are difficult to fix. Additional information on WebLogic Server performance tuning is available at http://e-docs.bea.com/wls/docs70/perform/index.html.
