Java Memory Problems

Memory problems in Java applications are manifold

Memory leaks and other memory-related problems are among the most prominent performance and scalability problems in Java. Reason enough to discuss this topic in more detail.

The Java memory model – or more specifically the garbage collector – has solved many memory problems. At the same time it has created new ones. Especially in Java EE environments with a large number of parallel users, memory is increasingly becoming a critical resource. In times of cheap memory, 64-bit JVMs and modern garbage collection algorithms this might sound strange at first sight.

So let us take a closer look at Java memory problems. They can be categorized into four groups:

  • Memory leaks in Java are created by keeping references to objects that are no longer used. This easily happens when multiple references to an object exist and developers forget to clear them once the object is no longer needed (a minimal sketch follows after this list).
  • Unnecessarily high memory usage is caused by implementations that simply consume too much memory. This is very often a problem in web applications where a large amount of state information is kept for “user comfort”. When the number of active users increases, memory limits are reached very fast. Unbounded or inefficiently configured caches are another source of constantly high memory usage.
  • Inefficient object creation easily results in a performance problem when user load increases, as the garbage collector must constantly clean up the heap. This leads to unnecessarily high CPU consumption by the garbage collector. Since the CPU is blocked by garbage collection, application response times often increase even under moderate load. This behaviour is also referred to as GC thrashing.
  • Inefficient garbage collector behaviour is caused by a missing or wrong configuration of the garbage collector. The garbage collector takes care that objects are cleaned up, but how and when this happens must be configured by the programmer or system architect. Very often people simply “forget” to properly configure and tune the garbage collector. I was involved in a number of performance workshops where a “simple” parameter change resulted in a performance improvement of up to 25 percent.
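To illustrate the first category, here is a minimal, hypothetical sketch – the class and method names are made up for this example – of how a forgotten reference keeps objects alive:

import java.util.HashMap;
import java.util.Map;

public class SessionRegistry {

    // A static map holds a strong reference to every registered object "forever".
    private static final Map<String, Object> ACTIVE = new HashMap<String, Object>();

    public static void register(String id, Object data) {
        ACTIVE.put(id, data);
    }

    // If callers forget to invoke this once the data is no longer needed,
    // the entry - and everything reachable from it - can never be collected.
    public static void unregister(String id) {
        ACTIVE.remove(id);
    }
}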

In most cases memory problems affect not only performance but also scalability. The higher the amount of memory consumed per request, user or session, the fewer parallel transactions can be executed. In some cases memory problems also affect availability: when the JVM runs out of memory, or comes close to its memory limits, it will quit with an OutOfMemoryError. This is when management enters your office and you know you are in serious trouble.

Memory problems are often difficult to resolve for two reasons. First, the analysis can get complex and difficult, especially if you are missing the right methodology. Second, their root cause often lies in the architecture of the application, so simple code changes will not resolve them.

To make life easier I present a couple of memory antipatterns which are often found in real-world applications. These patterns should help you avoid memory problems already during development.

HTTP Session as Cache

This antipattern refers to the misuse of the HttpSession object as a data cache. The session object serves as a means to store information that should “survive” a single HTTP request. This is also referred to as conversational state, meaning data is stored over a couple of requests until it is finally processed. This approach can be found in any non-trivial web application, as web applications have no other means than storing this information on the server. Well, some information can be put into a cookie, but this has a number of other implications.

It is important to keep as little data as possible in the session, for as short a time as possible. It can easily happen that the session contains megabytes of object data, which immediately results in high heap usage and memory shortages. At the same time the number of parallel users is very limited; the JVM will respond to an increasing number of users with an OutOfMemoryError. Large user sessions have other performance penalties as well: in case of session replication in clusters, the increased serialization and communication effort results in additional performance and scalability problems.
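As a minimal sketch – the attribute name and the checkout scenario are made up for illustration – conversational state should be small and should be removed explicitly once the conversation ends:

import javax.servlet.http.HttpSession;

public class CheckoutConversation {

    private static final String ATTR = "checkout.state";

    public static void start(HttpSession session, Long basketId) {
        // Store only a small identifier, not the whole object graph
        // that was fetched from the database.
        session.setAttribute(ATTR, basketId);
    }

    public static void finish(HttpSession session) {
        // Avoid "add and forget": clean up as soon as the
        // conversational state is no longer needed.
        session.removeAttribute(ATTR);
    }
}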

In some projects the answer to this kind of problem is increasing the amount of memory and switching to 64-bit JVMs. People cannot resist the temptation of simply increasing the heap size up to several gigabytes. However, this often only hides the symptoms instead of curing the real problem. This “solution” is only temporary and also introduces a new problem: bigger and bigger heaps make it more difficult to find the “real” memory problems. Memory dumps of very large heaps (greater than 6 gigabytes) cannot be processed by most available analysis tools. We at dynaTrace invested a lot of R&D effort to be able to efficiently analyze large memory dumps. As this problem is gaining more and more importance, a new JSR specification is also addressing it.

Session caching problems often arise because the application architecture has not been clearly defined. During development data is simply put into the session because it is convenient. This very often happens in an “add and forget” manner, as nobody ensures that this data is removed when it is no longer needed. Normally, unneeded session data should be cleaned up by the session timeout; in enterprise applications that are constantly under heavy use, however, this will not work. Additionally, very high session timeouts are often used – up to 24 hours – to provide additional “comfort” to users so that they do not have to log in again.

A practical example is putting selection choices for lists, which have to be fetched from the database, into the session. The intention is to avoid unnecessary database queries (smells like premature optimization, doesn’t it?). This results in several kilobytes being put into the session object for every single user. While it is reasonable to cache this information, the user session is definitely the wrong place for it.
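A sketch of an alternative, assuming a hypothetical CountryDao that loads the selection choices: reference data that is identical for all users can be cached once per application (for example in the ServletContext) instead of once per session:

import java.util.List;
import javax.servlet.ServletContext;

public class CountryChoices {

    private static final String ATTR = "cache.countries";

    // Hypothetical DAO interface, used only for this example.
    public interface CountryDao {
        List<String> loadCountryNames();
    }

    @SuppressWarnings("unchecked")
    public static synchronized List<String> get(ServletContext ctx, CountryDao dao) {
        List<String> countries = (List<String>) ctx.getAttribute(ATTR);
        if (countries == null) {
            countries = dao.loadCountryNames();   // one query, shared by all users
            ctx.setAttribute(ATTR, countries);
        }
        return countries;
    }
}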

Another example is abusing the Hibernate session to manage conversational state. The Hibernate session object is simply put into the HTTP session to have fast access to data. This however results in much more data being stored than necessary, and the memory consumption per user rises significantly.

In modern AJAX applications conversational state can also be managed on the client side. Ideally this leads to a stateless – or nearly stateless – server application, which also scales significantly better.

ThreadLocal Memory Leak

ThreadLocal variables are used in Java to bind a variable to a specific thread, meaning every thread gets its own single instance. This approach is used to handle state information within a thread, for example user credentials. The lifecycle of a ThreadLocal variable is however tied to the lifecycle of the thread: ThreadLocal variables are only cleaned up by the garbage collector when the thread itself is terminated – unless they are explicitly removed by the programmer.

Forgotten ThreadLocal variables can easily result in memory problems, especially in application servers. Application servers use thread pools to avoid the constant creation and destruction of threads. An HttpServletRequest, for example, gets a free thread assigned at runtime, and that thread is passed back to the thread pool after execution. If the application logic uses ThreadLocal variables and forgets to explicitly remove them, the memory will not be freed up.

Depending on the pool size – in production systems this can be several hundred threads – and the size of the objects referenced by the ThreadLocal variables, this can lead to serious problems. A pool of 200 threads with 5 MB referenced per ThreadLocal will in the worst case lead to 1 GB of unnecessarily occupied memory. This will immediately result in high GC activity, leading to bad response times and potentially to an OutOfMemoryError.

A practical example was a bug in jbossws version 1.2.0, which was fixed in version 1.2.1 – “DOMUtils doesn't clear thread locals”. The problem was a ThreadLocal variable which referenced a parsed document with a size of 14 MB.
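A common way to avoid this kind of leak is to clear the ThreadLocal before the thread goes back into the pool. The following is only a sketch – the filter and the user parameter are hypothetical – of a servlet filter that removes the value in a finally block:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class UserContextFilter implements Filter {

    private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<String>();

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        CURRENT_USER.set(req.getParameter("user"));
        try {
            chain.doFilter(req, res);
        } finally {
            // Without this remove() the value stays referenced by the pooled
            // worker thread and is never garbage collected.
            CURRENT_USER.remove();
        }
    }

    public void init(FilterConfig config) throws ServletException { }

    public void destroy() { }
}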

Large Temporary Objects

Large temporary objects can, in the worst case, also lead to OutOfMemoryErrors or at least to high GC activity. This happens, for example, when very big documents (XML, PDF, images, …) have to be read and processed. In one specific case an application was unresponsive for a couple of minutes, or performance was so limited that it was practically unusable. The root cause was the garbage collector going crazy. A detailed analysis led down to the following code for reading a PDF document:

// bis is the BufferedInputStream the PDF document is read from
byte[] tmpData = new byte[1024];
int offs = 0;
do {
    int readLen = bis.read(tmpData, offs, tmpData.length - offs);
    if (readLen == -1)
        break;
    offs += readLen;
    if (offs == tmpData.length) {
        // buffer full: grow it by just another kilobyte and copy everything read so far
        byte[] newres = new byte[tmpData.length + 1024];
        System.arraycopy(tmpData, 0, newres, 0, tmpData.length);
        tmpData = newres;
    }
} while (true);

The documents which were read using this method had a size of several megabytes. They were read into the byte array and then sent to the user’s browser. Several parallel requests rapidly led to a full heap. The problem was made even worse by the highly inefficient algorithm for reading the document: an initial byte array of one kilobyte is created; whenever this array is full, a new array that is one kilobyte larger is created and the old array is copied into it. This means a new array is created and copied for every kilobyte of the document, resulting in a huge number of temporary objects and a memory consumption of twice the size of the actual data – as the data is permanently being copied.
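A less allocation-heavy variant is sketched below (assuming the same kind of input stream): java.io.ByteArrayOutputStream grows its internal buffer geometrically instead of by one kilobyte, so far fewer temporary arrays are created. Even better would be streaming the document directly to the response output stream so the whole content never has to be held in memory at once.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class DocumentReader {

    public static byte[] readFully(InputStream in) throws IOException {
        // The initial capacity is an example value; the buffer doubles as needed.
        ByteArrayOutputStream out = new ByteArrayOutputStream(64 * 1024);
        byte[] chunk = new byte[8192];
        int readLen;
        while ((readLen = in.read(chunk)) != -1) {
            out.write(chunk, 0, readLen);
        }
        return out.toByteArray();
    }
}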

When working with large amounts of data, optimization of the processing logic is crucial for performance. In this case a simple load test would have revealed the problem.

Bad Garbage Collector Configuration

In the scenarios presented so far the problem was caused by the application code. In a lot of cases, however, the root cause is a wrong – or missing – configuration of the garbage collector. I frequently see people trusting the default settings of their application servers, believing that the application server vendors know best what is ideal for their application. The configuration of the heap, however, strongly depends on the application and the actual usage scenario, and the parameters have to be adapted accordingly to get a well-performing application. An application processing a high number of short-lived requests has to be configured completely differently from a batch application executing long-running tasks. The actual configuration additionally depends on the JVM used: what works fine for the Sun JVM might be a nightmare for the IBM JVM (or at least not ideal).
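As an illustration only – the values below are made-up examples for a Sun/HotSpot JVM of that era and have to be tuned for the concrete application and load profile – a flag set for a request-driven Java EE application could look roughly like this:

-Xms2g -Xmx2g                                    # fixed heap size, avoids resizing at runtime
-Xmn512m                                         # young generation sized for short-lived request objects
-XX:MaxPermSize=256m                             # room for loaded classes in the permanent generation
-XX:+UseConcMarkSweepGC                          # low-pause collector for interactive workloads
-verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log   # GC logging to correlate pauses with response times

A batch-oriented application would typically favor a throughput collector such as -XX:+UseParallelGC instead of a low-pause one.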

Misconfigured garbage collectors are often not immediately identified as the root cause of a performance problem (unless you monitor GC activity anyway). Often the visible manifestation of the problem is bad response times, and the relation between garbage collector activity and response times is not obvious. If garbage collection times cannot be correlated to response times, people find themselves hunting a very complex performance problem: response time and execution time problems manifest across the application, at different places, without an obvious pattern behind the phenomenon.

The figure below shows transaction metrics correlated with garbage collection times in dynaTrace. I have seen cases where optimizations of the garbage collector settings solved in minutes performance problems that people had been hunting for weeks.

[Figure: Transaction Times correlated to Runtime Suspensions]

ClassLoader Leaks

When talking about memory leaks most people primarily think about objects on the heap. Besides objects, classes and constants are also managed on the heap; depending on the JVM they are put into specific areas of it. The Sun JVM, for example, uses the so-called permanent generation, or PermGen. Classes are very often put on the heap several times, simply because they have been loaded by different classloaders. In modern enterprise applications the memory occupied by loaded classes can amount to several hundred megabytes.

The key is to avoid unnecessarily increasing the size of classes. A good example is the definition of a large number of String constants, for example in GUI applications, where all texts are often stored in constants. While using constants for Strings is in principle a good design approach, the memory consumption should not be neglected. In a real-world case all constants of an internationalized application were defined in one class per language. A not obviously visible coding error resulted in all of these classes being loaded. The result was a JVM crash with an OutOfMemoryError in the PermGen of the application.

Application servers suffer from additional problems with classloader leaks. These leaks are caused when a classloader cannot be garbage collected because an object of one of its classes is still alive. As a result, the memory occupied by these classes is not freed up. While this problem is handled well by today's Java EE application servers, it seems to appear more often in OSGi-based application environments.

Conclusion

Memory problems in Java applications are manifold and easily lead to performance and scalability problems. Especially in Java EE applications with a high number of parallel users, memory management must be a central part of the application architecture.

While the garbage collector takes care that unreferenced objects are cleaned up, the developer is still responsible for proper memory management. In addition to application design, memory management is a central part of application configuration.

Credits

This article is based on the Performance Antipatterns Series I am working on together with Mirko Novakovic of codecentric.


