
'StackOverFlow' Issues in BEA WebLogic Server

Determine the cause before you make the phone call

A "StackOverFlow" message usually indicates an error in the user's application code, an error in the Java Virtual Machine, or an error in BEA WebLogic Server itself.

This message is usually seen right before a Java Virtual Machine core dump, or the WebLogic Server process simply "goes away." The cause is either an unintentional recursive call in user/application code or a scenario in which arrays of arrays of Objects overflow the stack (there are bug reports on http://java.sun.com about these types of issues). This is unfortunate, as it may require programmers to think about the implementation details of the Java Virtual Machine on which they are running. To investigate a "StackOverFlow" error thoroughly and determine the exact cause, first work through the items discussed here.

Look at any recent application code changes and see if anything could possibly be called recursively. If no stack trace is produced for the "StackOverflow", try adding debug statements around the suspect code. If some application code is suspect, you can modify it to add the following:

try {
    // ... code suspected of unbounded recursion ...
} catch ( StackOverflowError e ) {
    System.err.println("Exception: " + e );
    // Here is the important thing to do
    // when catching StackOverflowError's:
    // do some cleanup, destroy the thread or unravel if possible.
}
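A minimal, self-contained sketch of this pattern follows; the class and method names are illustrative, not from the article, and the recursive method is deliberately broken to trigger the error:

```java
// Sketch: catching a StackOverflowError caused by accidental recursion
// so the thread can clean up instead of taking the JVM down.
public class OverflowGuard {

    // Accidentally recursive: no base case, so the stack must overflow.
    static int badRecursion(int n) {
        return badRecursion(n + 1);
    }

    // Returns true if the overflow was caught and the thread kept running.
    static boolean runGuarded() {
        try {
            badRecursion(0);
        } catch (StackOverflowError e) {
            System.err.println("Exception: " + e);
            // Do cleanup / unravel here rather than letting the process die.
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println("Caught overflow: " + runGuarded());
    }
}
```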

The next thing you can look at involves JSPs (if you're using them). A few issues involving recursive problems have been resolved by using the following information for JSPs:

  • Check within your jsp_error page to see whether it contains the tag <%@ page errorPage="jsp_error"%>, as this would cause infinite recursion. Remove the tag and simply print the stack trace of any exceptions that occur; this way you can find the problem in the error page itself. Once this is resolved, you can look at the original error that gets sent to this page.
  • Feedback provided to BEA Customer Support indicated that another recursion problem was due to bad jsp/servlet login/auth/error reporting code. Fixing this code fixed the recursion and therefore the crash.
  • If you are using BEA WebLogic JSP Form Validation Tags, make sure that for the <wl:form> you don't set the action attribute to the same page containing the <wl:form> tag because this will create an infinite loop resulting in a "StackOverFlow" exception. For an example, see http://e-docs.bea.com/wls/docs61/jsp/validation_tags.html#67370.
A strange "StackOverFlow" was caused by the following code snippet within some application code. Check whether you are doing this in your code, since a known Sun issue (#4906193) addresses it. Instead of doing this in the application code:

Properties p = new Properties(System.getProperties());

do the following:

Properties p = new Properties();
p = System.getProperties();

in order to avoid the recursive call stack trace that was observed.
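The contrast can be sketched as follows; the class and method names are illustrative, and the note on why the first form recurses is an interpretation of Sun issue #4906193, not text from the article:

```java
import java.util.Properties;

// Sketch contrasting the two forms above. The first form makes the live
// system Properties object the *defaults* of a new table; if such a
// wrapper is later installed back via System.setProperties() and wrapped
// again, each lookup walks one level deeper per wrap, which is the kind
// of recursive lookup behind Sun issue #4906193.
public class PropsExample {

    // Risky form: system properties become the defaults of a new table.
    static Properties wrapped() {
        return new Properties(System.getProperties());
    }

    // Recommended form from the article: take a direct reference instead.
    static Properties direct() {
        Properties p = new Properties();
        p = System.getProperties();
        return p;
    }

    public static void main(String[] args) {
        System.out.println(direct().getProperty("java.version"));
    }
}
```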

A suggested "possible" workaround for some "StackOverFlow" messages is to increase the size of the thread stacks with the -Xss argument to the JVM. However, if a recursive call truly has caused this, then this option will not really help at all and will only delay the inevitable. Some background on this argument to the JVM: each Java thread has two stacks, one for Java code and one for C code. This option sets the maximum stack size that can be used by C code in a thread to the value specified. For a complete definition of the "-Xss" flag see "Non-Standard Options" at http://java.sun.com/j2se/1.3/docs/tooldocs/solaris/java.html.
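The effect of a larger stack can be observed in-process: the Thread constructor that takes a stackSize hint plays the same role per thread that -Xss plays as a default. A sketch (the sizes are arbitrary, and the JVM treats the hint as advisory):

```java
// Sketch: recursion depth grows with thread stack size. A bigger stack
// (as with a larger -Xss) only delays a genuinely unbounded recursion.
public class StackDepth {
    static int depth;

    static void recurse() {
        depth++;
        recurse();
    }

    // Run the unbounded recursion in a thread with the given stack-size
    // hint (in bytes) and report how deep it got before overflowing.
    static int depthWithStack(long stackBytes) throws InterruptedException {
        depth = 0;
        Thread t = new Thread(null, () -> {
            try {
                recurse();
            } catch (StackOverflowError expected) {
                // Overflow is the point of the experiment; swallow it.
            }
        }, "probe", stackBytes);
        t.start();
        t.join();
        return depth;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("depth @256k=" + depthWithStack(256 * 1024)
                + ", @4m=" + depthWithStack(4 * 1024 * 1024));
    }
}
```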

If the first couple of suggestions don't help pinpoint the problem, then try periodically collecting thread dumps of the JVM as it is running, around the time you think the problem may occur (if it occurs at a specific time, or a specific sequence of events causes it), usually about 5-10 seconds apart. Using this information, you may be able to find the recursive code and correct it, or the dumps may give BEA Customer Support a better idea of what could be causing the problem. To collect thread dumps, do the following against the Java process ID (PID):

  • On all "Unix-like" platforms, run "kill -3 <jvm-pid>" before you expect the crash; this dumps the Java threads.
  • On "Windows" platforms, press "<CTRL> <BREAK>" in the console window where the JVM is running, or use a tool that sends the equivalent of "kill -3" to the JVM PID.
The same applies to the BEA JRockit JVM on all platforms. Make sure that you signal the root java process. To get a tree structure of the processes on Linux, use the "--forest" option. For example, to find the processes started by user "wlsuser", execute: "ps -lU wlsuser --forest". For a specific example of getting thread dumps with the BEA JRockit JVM on Linux, see http://e-docs.bea.com/wls/docs70/cluster/trouble.html#602852.
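The periodic-dump procedure can be scripted; this is a hypothetical helper (the function name and defaults are mine, not from the article), and the thread dumps land in whatever log captures the JVM's stdout:

```shell
# Hypothetical helper: send SIGQUIT ("kill -3") to a JVM PID several
# times, a few seconds apart, so successive thread dumps appear in the
# JVM's stdout log.
# Usage: collect_dumps <jvm-pid> [count] [interval-seconds]
collect_dumps() {
    pid=$1
    count=${2:-3}
    interval=${3:-5}
    i=0
    while [ "$i" -lt "$count" ]; do
        kill -3 "$pid" || return 1   # SIGQUIT asks the JVM for a thread dump
        i=$((i + 1))
        [ "$i" -lt "$count" ] && sleep "$interval"
    done
    return 0
}
```

Find the root java PID first (for example with "ps -lU wlsuser --forest") and pass that, since signaling a child process may not produce a dump.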

If you cannot "time" the thread dumps to capture one right before the "StackOverFlow" happens, you can set a JVM flag that pauses the server right before it cores, so you can capture the state of its threads at that moment. On the Sun JVM the option is "-XX:+ShowMessageBoxOnError" (not officially documented on Sun's Web site). When the JVM crashes, it prompts: "Do you want to debug the problem?" At that point you can take a thread dump of the JVM. An equivalent option will be available in the 8.1 SP2 version of the BEA JRockit JVM when it is released; there the option will be "-Djrockit.waitonerror".
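As a launch-line sketch (the classpath and main class here are placeholders for however your server is normally started, not a prescribed command):

```shell
# Hypothetical launch line: ask the Sun JVM to pause with a prompt on a
# fatal error so you can attach a debugger or take a thread dump.
# "-XX:+ShowMessageBoxOnError" is the undocumented flag from the article;
# the classpath and main class below are placeholders.
java -XX:+ShowMessageBoxOnError -cp weblogic.jar weblogic.Server
```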

If a binary core file is produced from a "StackOverFlow", then you can run a debugger on the resulting core file to get a stack trace. This may help in pointing out the offending code to you. If you are unsure, then contact BEA Customer Support with this information so they can investigate the stack trace more thoroughly. If you are on a Windows platform, then a "Dr. Watson" file may be produced so please send this file to BEA Customer Support when opening a case. Otherwise, check the following "Unix" operating system values to make sure that they have already been properly set in order to generate a core file:

  1. Check the "ulimit -c" (configured size of the core file) at a system and user level to make sure that it is set and that the value is not set too low to produce a meaningful core file.
  2. Check the available disk space for the user. For example: Is there a disk quota?
  3. Check the following parameter, which on Solaris is usually in /etc/system file and can be used to disable core files:

    set sys:coredumpsize=0

  4. On Linux, core dumps are turned off by default. On RedHat Advanced Server 2.1 the setting is under "/etc/security", in a self-explanatory file called limits.conf; look for the word "core" in that file. If the value is set to "0", core dump files are disabled.
  5. On HP-UX, check the kernel parameter "maxdsiz" (max_per_proc_data_size, the user process data segment size) and increase it from the old value of, say, 64MB to something higher, such as 134MB.
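The first two checks above can be run in a few commands (a sketch for a POSIX shell; exact output format varies by OS):

```shell
# Sketch: quick pre-checks before expecting a usable core file.
ulimit -c    # core-file size limit; "0" means no core will be written
df -k .      # free space in the JVM's working directory, where cores land
```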
Please get a stack trace (or back trace) from your debugger. For example, here are the commands needed when using "dbx" or "gdb".

Using "dbx":
  • $ java -version: Need to use the right version of the JDK
  • $ ls /opt/bin/dbx: Need to know the dbx location, or use "which dbx"
  • $ export DEBUG_PROG=/opt/bin/dbx: Wherever "dbx" is located
  • $ <path to java command>/java corefile
Now you are in the debugger. Execute the following commands:
  • (dbx) where: Shows a summary of the stack
  • (dbx) threads: Shows the state of the existing threads
  • (dbx) quit

Using "gdb":
  • $ java -version: Need to use the right version of the JDK
  • $ ls /usr/local/bin/gdb: Need to know the gdb location, or use "which gdb"
  • $ export DEBUG_PROG=/usr/local/bin/gdb: Wherever "gdb" is located
  • $ <path to java command>/java corefile
Now you are in the debugger. Execute the following commands:
  • (gdb) where: Shows a summary of the stack
  • (gdb) thr: Switch among threads or show the current thread
  • (gdb) info thr: Inquire about existing threads
  • (gdb) thread apply 1 bt: Apply a command to a list of threads; here, a back trace of thread #1
  • (gdb) quit
Further Information
For additional information you can also go to http://support.bea.com and find some published solutions on "StackOverflows". In the "Question" field type "S-19795" or "S-19361" to display the information from those solutions.

If none of these hints directs you toward a solution or an identifier in your application, contact BEA Customer Support for further diagnosis. You can open a case with a valid support contract by logging in at http://support.bea.com/login.jsp.

More Stories By Steve Pozarycki

Steven Pozarycki is a backline developer relations engineer with BEA Systems, Customer Support Group. He specializes in troubleshooting and solving complex customer issues with their mission-critical applications on BEA products. Steve holds a bachelor's degree in computer science.

Reproduced with permission from BEA Systems.

