
WebLogic Server Performance Tuning

Any product that does well in the market has to have good performance. Although many characteristics are necessary for a product to become as widely used as WebLogic Server is today, performance is definitely indispensable.

Good coding practices go a long way toward getting an application to run fast, but they alone are not sufficient. An application server has to be portable across a wide range of hardware and operating systems and has to be versatile in order to manage an even wider range of application types. This is why application servers provide a comprehensive set of tuning knobs that can be adjusted to suit the environment the server runs in as well as the application.

This article discusses some of the WebLogic-specific tuning parameters and is not an exhaustive list of the attributes that can be tuned. In addition, it is recommended that you try out any suggestions made here in an experimental setup before applying them to a production environment.

Monitoring Performance and Finding Bottlenecks
The first step in performance tuning is isolating the hot spots. Performance bottlenecks can exist in any part of the entire system - the network, the database, the clients, or the app server. It's important to first determine which system component is contributing to the performance problem. Tuning the wrong component may even make the situation worse.

WebLogic Server lets a system administrator monitor performance through the administration console as well as a command-line tool. The server exposes a set of MBeans (managed beans) that gather information such as thread usage, resource availability, and cache hits and misses. This information can be retrieved from the server either through the console or with the command-line tool. The screenshot in Figure 1 shows statistics such as cache hits and misses in the EJB container and is one of several options the console provides for monitoring performance.

Profilers are another useful class of tools for detecting performance bottlenecks in the application code itself. Several good commercial profilers are available, such as Wily Introscope, JProbe, and OptimizeIt.

The EJB Container
The most expensive operations that the EJB container executes are arguably database calls to load and store entity beans. The container therefore provides various parameters that help minimize database access. However, except in a few special cases, at least one load operation and one store operation per bean per transaction cannot be eliminated. These special cases are:

1.   The bean is read-only. In this case, the bean is loaded only once, when it is first accessed, and is never stored. However, the bean is reloaded when its read-timeout-seconds interval expires.

2.   The bean has a concurrency strategy of Exclusive or Optimistic and db-is-shared is false. This parameter has been renamed cache-between-transactions in WebLogic Server 7.0, and its values have the opposite meaning: db-is-shared set to false is equivalent to cache-between-transactions set to true.

3.   The bean has not been modified in the transaction. In this case, the container optimizes away the store operation.

If none of the above cases apply, each entity bean in the code path will be loaded and stored at least once in a transaction. Some of the features that further help reduce database calls or make such calls less expensive are caching, field groups, concurrency strategy, and eager relationship caching, some of which are new in WebLogic Server 7.0.

  • Caching: The cache size for entity beans is defined in weblogic-ejb-jar.xml by the parameter max-beans-in-cache. The container loads a bean from the database the first time it is invoked in a transaction and also puts it in the cache. If the cache is too small, some of the beans are passivated to the database; regardless of whether the conditions for optimizations 1 and 2 above apply, those beans then have to be reloaded from the database the next time they are called. Using a bean from the cache also means that setEntityContext() is not called again at that time, and if the primary key is a compound or otherwise complex type, the cost of setting it is saved as well.

  • Field groups: Field groups specify which fields to load from the database for a finder method. If the entity bean has a large BLOB field (say, an image) that is needed only rarely, a field group excluding that field can be associated with a finder method, and the field will not be loaded. This feature is available only for EJB 2.0 beans.

  • Concurrency strategy: As of WebLogic Server 7.0, the container provides four concurrency-control mechanisms. They are Exclusive, Database, Optimistic, and ReadOnly. The concurrency strategy is also tied in with the isolation level the transaction is running with. Concurrency control is not really a performance-boosting feature. Its main purpose is to guarantee consistency of the data that the entity bean represents within the requirements imposed by the deployer of the bean. However, there are some mechanisms that allow the container to process requests faster than others, sometimes at the cost of data consistency.

    The most restrictive concurrency strategy is Exclusive. Access to a bean with a particular primary key is serialized, so only one transaction can access the bean at a time. This provides strong concurrency control within the container but hurts performance. It is useful mainly when caching between transactions is allowed (which must not be done in a cluster), because load operations can then be optimized away, which may make up for the loss in parallelism.

    Database concurrency defers concurrency control to the database. Entity beans are not locked in the container, allowing multiple transactions to operate on the same bean concurrently, which results in faster performance. However, this may require a higher isolation level to guarantee data consistency.

    Optimistic concurrency also defers concurrency control to the database, but the difference is that the check for data consistency is done at store time with a predicated update rather than by locking the row at load time. In cases where there isn't too much contention for the same bean in an application, this mechanism is faster than database concurrency, with both providing the same level of data-consistency protection. However, it does require the caller to retry the invocation in the event of a conflict. This feature can be used only for EJB 2.0 beans.

    The ReadOnly strategy is used only for beans that are just that - read-only. The bean is loaded only the first time it is accessed in an application or when the read-timeout-seconds specified expires. The bean is never stored. There is also the related read-mostly pattern in which the bean is notified when the underlying data changes. This causes the bean to be reloaded.

  • Eager relationship caching: If Bean A and Bean B are related by a CMR (Container Managed Relationship) and both are used in the same transaction, both can be loaded using a single database call. This is what is done in eager relationship caching. This again is a new feature in WebLogic Server 7.0 and is available only for EJB 2.0.
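
Several of the knobs above are set per bean in weblogic-ejb-jar.xml. The fragment below is only a sketch: the bean name is hypothetical, and element names and placement should be verified against the WebLogic 7.0 DTD.

```xml
<!-- Sketch only; check element names and order against the WLS 7.0 DTD. -->
<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <ejb-name>AccountBean</ejb-name>  <!-- hypothetical bean name -->
    <entity-descriptor>
      <entity-cache>
        <max-beans-in-cache>1000</max-beans-in-cache>
        <read-timeout-seconds>600</read-timeout-seconds>
        <concurrency-strategy>Optimistic</concurrency-strategy>
        <!-- legal here because the strategy is Optimistic (see case 2 above);
             must be false in a cluster -->
        <cache-between-transactions>true</cache-between-transactions>
      </entity-cache>
    </entity-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```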

Besides the above features, which help boost performance by optimizing how the container accesses the database, there are also some parameters for session as well as entity beans that can be used to get better performance out of the EJB container.

Pooling and caching are the main features the EJB container provides to boost performance for session and entity beans, though not all of them apply to all bean types. The downside is higher memory requirements, but this usually isn't a major issue. Pooling is available for stateless session beans (SLSBs), message-driven beans (MDBs), and entity beans. When a pool size is specified for SLSBs and MDBs, that many instances of the bean are created, their setSessionContext()/setMessageDrivenContext() methods are called, and they are put in the pool. The pool size for these bean types need not be more than the number of execute threads configured (in fact, fewer are needed). If the bean does anything expensive in setSessionContext(), where JNDI lookups are typically done, method invocations will be faster when a pooled instance is used. For entity beans, the pool is populated with anonymous instances of the bean (i.e., with no primary key) after setEntityContext() has been called. These instances are used by finders: a finder picks up an instance from the pool, assigns it a primary key, and loads the bean from the database.
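
Pool sizes are also configured in weblogic-ejb-jar.xml. A sketch for a hypothetical stateless session bean, with initial and maximum sizes matched so the pool is fully populated up front:

```xml
<!-- Sketch only; bean name and sizes are illustrative. -->
<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <ejb-name>QuoteService</ejb-name>  <!-- hypothetical SLSB -->
    <stateless-session-descriptor>
      <pool>
        <!-- need not exceed the number of execute threads -->
        <initial-beans-in-free-pool>15</initial-beans-in-free-pool>
        <max-beans-in-free-pool>15</max-beans-in-free-pool>
      </pool>
    </stateless-session-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```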

Caching is applicable to stateful session beans (SFSBs) and entity beans. Entity beans have already been discussed above. For SFSBs, caching helps prevent serialization to disk, which is an expensive operation and should definitely be avoided. The cache size for SFSBs should be a little larger than the number of clients concurrently connected to the server, because the container doesn't start looking for beans to passivate until the cache reaches about 85% of its capacity. If the cache is somewhat larger than actually needed, the container doesn't have to spend time scanning the cache for beans to passivate.

The EJB container also provides two ways in which bean-to-bean and Web-tier-to-bean calls can be made: call-by-value and call-by-reference. By default, call-by-reference is used for beans that are part of the same application, which is faster than call-by-value. Unless there is a compelling reason to do so, call-by-reference should not be disabled. A better way to force call-by-reference is to use local interfaces. Another feature introduced in WebLogic Server 7.0 is the use of activation for stateful services. Though activation hurts performance somewhat, it provides huge scalability improvements because of lowered memory requirements. If scalability is not a concern, activation may be turned off by passing the parameter -noObjectActivation to ejbc.

Tuning JDBC

Tuning the JDBC layer is as important as tuning the EJB container for database access. Take, for instance, setting the right connection pool size: the pool should be large enough that no threads are left waiting for a connection to become available. If all database access is done from threads in the default execute queue, the number of connections should be the thread count of the execute queue less the number of socket reader threads (threads in the default execute queue that read incoming requests). To avoid creating and destroying connections at runtime, set the initial and maximum capacity of the connection pool to the same value. Also, ensure that TestConnectionsOnReserve is set to false (the default), if possible. If it's set to true, the connection is tested before every allocation to a caller, which requires an additional database round-trip.
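
A connection pool configured along these lines might look as follows in config.xml. The pool name, driver, URL, and sizes are illustrative; attribute names should be checked against the WebLogic 7.0 configuration reference.

```xml
<!-- config.xml fragment (sketch); matched capacities avoid runtime
     connection creation, and reserve-time testing is left disabled. -->
<JDBCConnectionPool Name="oraclePool"
    Targets="myserver"
    DriverName="oracle.jdbc.driver.OracleDriver"
    URL="jdbc:oracle:thin:@dbhost:1521:ORCL"
    Properties="user=scott"
    InitialCapacity="25"
    MaxCapacity="25"
    TestConnectionsOnReserve="false"/>
```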

The other significant parameter is PreparedStatementCacheSize. Each connection has a static cache of prepared statements whose size is specified in the JDBC connection pool configuration. It is important to keep in mind that the cache is static: if its size is n, only the first n statements executed using that connection will be cached. One way to ensure that the most expensive SQL statements are cached is to have a startup class seed the cache with those statements. Despite the big performance boost the cache provides, it should not be used blindly: if the database schema changes, there is no way to invalidate a cached statement or replace it with a new one without restarting the server, and cached statements may hold open cursors in the database.
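
The "first n statements" admission behavior is easy to misread, so here is a toy model of the semantics described above. This is a sketch only, not WebLogic's implementation; class and method names are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the per-connection prepared-statement cache described above.
// The cache is "static": once full, later statements are never admitted,
// which is why seeding it with the most expensive SQL matters.
class StaticStatementCache {
    private final int capacity;
    private final Map<String, Boolean> cache = new HashMap<>();

    StaticStatementCache(int capacity) { this.capacity = capacity; }

    // Returns true if the statement is in the cache after this call.
    boolean prepare(String sql) {
        if (cache.containsKey(sql)) return true;  // hit: reuse cached statement
        if (cache.size() < capacity) {            // room left: admit (cold miss)
            cache.put(sql, Boolean.TRUE);
            return true;
        }
        return false;                             // full: statement never cached
    }

    public static void main(String[] args) {
        StaticStatementCache c = new StaticStatementCache(2);
        System.out.println(c.prepare("SELECT a FROM t1")); // admitted
        System.out.println(c.prepare("SELECT b FROM t2")); // admitted
        System.out.println(c.prepare("SELECT c FROM t3")); // rejected: full
        System.out.println(c.prepare("SELECT a FROM t1")); // hit
    }
}
```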

In WebLogic Server 7.0, performance improvements in the jDriver have made it faster than the Oracle thin driver, especially for applications that do a significant number of SELECTs. This is evident from the two ECperf results submitted by HP using WebLogic Server 7.0 Beta (http://ecperf.theserverside.com/ecperf/index.jsp?page=results/top_ten_price_performance).

JMS Tuning

The JMS subsystem provides a host of tuning parameters. JMS messages are processed by a separate execute queue called JMSDispatcher. Because of this, the JMS subsystem is not starved by other heavy-traffic applications running off the default or any other execute queue, and conversely, JMS does not starve other applications either. With JMS, most tuning parameters involve a trade-off in quality of service. For example, when using file-persistent destinations, disabling synchronous writes (by setting the property -Dweblogic.JMSFileStore.SynchronousWritesEnabled=false) may result in a dramatic performance improvement, but at the risk of losing messages or receiving duplicates. Similarly, using multicast to send messages boosts performance, but messages may be dropped.
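
The synchronous-write property is passed on the server start command. A sketch, with everything except the -D flag (which comes from the text above) illustrative:

```shell
# Illustrative start command; classpath and other flags omitted.
java -Dweblogic.JMSFileStore.SynchronousWritesEnabled=false \
     weblogic.Server
```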

The message acknowledgement interval should not be set too small: the more frequently acknowledgements are sent, the slower message processing is likely to be. At the same time, setting it too large may mean messages are lost or duplicated in the event of a system failure.

Typically, multiple JMS destinations should be configured on a single JMS server, as opposed to spreading them out over multiple JMS servers, unless scalability has peaked.

Turning message paging off may improve performance, but it hurts scalability. With paging on, additional I/O is required to serialize messages to disk and read them back when needed, but memory requirements decrease.

Asynchronous consumers typically scale and perform better than synchronous consumers.

Web Container

The Web tier in an application mostly generates presentation logic. A widely used architecture employs servlets and JSPs to generate dynamic content from data retrieved from the application layer, which typically consists of EJBs. In such architectures, servlets and JSPs hold references to EJBs and, where they talk directly to the database, to data sources. It's a good idea to cache these references, because if the JSPs and servlets are not deployed on the same application server as the EJBs, JNDI lookups are expensive.

JSP cache tags can be used to cache data within a JSP page. These tags support output as well as input caching: output caching refers to caching the content generated by the code within the tag, while input caching refers to caching the values assigned to variables by that code. If the Web tier is not expected to change often, it also helps to turn auto-reloading off by setting ServletReloadCheckSecs to -1. The server then does not poll for changes in the Web tier, and the effect is noticeable when there are a large number of JSPs and servlets.
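
A sketch of output caching with the WebLogic JSP cache tag. The taglib URI, prefix, attribute values, and the helper class are illustrative and should be checked against the WLS 7.0 documentation.

```jsp
<%-- Sketch only; verify taglib location and attributes for your release. --%>
<%@ taglib uri="weblogic-tags.tld" prefix="wl" %>

<wl:cache name="newsBox" timeout="60s" scope="application">
  <%-- Output generated inside the tag is cached until the timeout expires;
       later requests reuse it without re-executing this code. --%>
  <%= com.example.News.headlines() %>  <%-- hypothetical helper --%>
</wl:cache>
```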

It is also advisable not to store too much information in the HTTP session. If such information is needed, consider using a stateful session bean instead.

JVM Tuning

Most JVMs these days are self-tuning, in that they detect hot spots in the code and optimize them. The tuning parameters developers and deployers mostly have to worry about are the heap settings, and there is no general rule for choosing them. In JVMs that organize the heap into generational spaces, the new space (or nursery) should typically be set to between a third and half of the total heap size. The total heap should not be too large for JVMs that do not support concurrent garbage collection (GC); in such VMs, full GC pauses can last a minute or more if the heap is too large. Finally, note that these settings depend to a large extent on the memory usage patterns of the application deployed on the server. Additional information on tuning the JVM is available at http://edocs.bea.com/wls/docs70/perform/JVMTuning.html#1104200.
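
For a generational HotSpot JVM, settings following the one-third guideline above might look like this. The sizes are illustrative, and the classpath and other server flags are omitted:

```shell
# Illustrative HotSpot heap flags: fixed 512 MB heap, nursery about one-third.
# Matching -Xms/-Xmx (and NewSize/MaxNewSize) avoids heap resizing at runtime.
java -server -Xms512m -Xmx512m \
     -XX:NewSize=170m -XX:MaxNewSize=170m \
     weblogic.Server
```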

Server-Wide Tuning

Besides the tuning parameters provided by each subsystem, there are some server-wide parameters that help boost performance. The most important of these are the thread count and the execute queue configuration. Increasing the thread count does not help in every case: consider it only if the desired throughput is not being achieved, the wait queue (which holds requests on which processing hasn't started) is large, and the CPU is not maxed out. Even then, it may not improve performance: CPU usage can be low because of contention for some other resource in the server, as would happen if, for example, there aren't enough JDBC connections. Factors like this should be kept in mind when changing the thread count.

In WebLogic Server 7.0, it's possible to configure multiple execute queues and have requests for a particular EJB or JSP/servlet processed by threads from deployer-defined execute queues. To do this for a bean, pass the flag -dispatchPolicy <queue-name> when running weblogic.ejbc. For JSPs/servlets, set the init-param wl-dispatch-policy in the WebLogic-specific deployment descriptor for the Web application, with the name of the execute queue as the value. In applications where operations on certain beans or JSPs have larger response times than others, it can help to give them separate execute queues. The queue sizes that give the best performance have to be determined empirically.
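
As a sketch, routing a servlet's requests to a deployer-defined queue uses an init-param like the following. The servlet name, class, and queue name are hypothetical; consult the WLS 7.0 documentation for exactly which descriptor the param belongs in.

```xml
<!-- Servlet declaration sketch; the queue must already be configured. -->
<servlet>
  <servlet-name>ReportServlet</servlet-name>
  <servlet-class>com.example.ReportServlet</servlet-class>
  <init-param>
    <param-name>wl-dispatch-policy</param-name>
    <param-value>ReportQueue</param-value>  <!-- deployer-defined queue -->
  </init-param>
</servlet>
```

For an EJB, the corresponding step would be a compile along the lines of java weblogic.ejbc -dispatchPolicy ReportQueue, as described above.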

The other big question is deciding under what circumstances the WebLogic performance packs (http://e-docs.bea.com/wls/docs70/perform/WLSTuning.html#1112119) should be used. If the number of sockets isn't large (there is one socket on the server for each client JVM's RMI connections) and incoming client requests are such that there is almost always data to be read from the socket, there might not be a significant advantage to using the performance packs. Depending on the JVM's implementation of network I/O, not using them might yield similar or even better performance.

The socket reader threads are taken from the default execute queue. With native I/O, there are two socket reader threads per CPU on Windows and three overall on Solaris. With Java I/O, the number of reader threads is specified by the PercentSocketReaderThreads setting in config.xml; it defaults to 33% and has a hard limit of 50%, which makes sense because there is no point having more reader threads than threads left to process the requests. With Java I/O, the number of reader threads should be as close to the number of client connections as possible, because Java I/O blocks while waiting for requests. This is also why it doesn't scale well as the number of client connections increases.

The above are only some of the ways the server can be tuned. Bear in mind, however, that a poorly designed, poorly written application will usually perform poorly regardless of tuning. Performance must be a key consideration throughout the application development life cycle, from design to deployment. Too often, performance takes a back seat to functionality, and problems found later are difficult to fix. Additional information on WebLogic Server performance tuning is available at http://e-docs.bea.com/wls/docs70/perform/index.html.
