
Deploying WebLogic on Linux

You are not alone

The rising business trend toward using open source software platforms has brought an increase in the number of critical applications deployed on Linux and BEA WebLogic. For many organizations, in fact, WebLogic deployments are their first major Linux installation.

This article provides an overview of deployment considerations when using a Linux/WebLogic combination.

Linux deployments span everything from traditional Intel-based servers and grid environments to mainframe systems (IBM's z/VM with Linux guests, for example). This article covers only the Intel architecture; however, almost all of the points made here apply to non-Intel deployments as well.

Why Linux?
Why the increasing number of deployments? Linux provides an alternative to proprietary operating systems. It can offer lower cost of ownership for some customers and has a large following of skilled workers. The Linux operating system is highly configurable and the source is usually available, so you can change the behavior or recompile options that are specific for your site. Lastly, a number of vendors support Linux, allowing the customer to pick the application software and hardware that is right for them.

Picking Your Distribution
WebLogic currently supports the major Linux distributions (Red Hat and SuSE). Refer to the BEA site (http://edocs.bea.com/wls/certifications/certs_810/overview.html#1043408) for the updated list of supported configurations. Both Red Hat and SuSE contain additional features (like cluster services) that may be useful for your installation. At the time of this writing, Red Hat had just released Enterprise Linux v3, so check on the certification pages for this version of Linux as several important enhancements have been added to the kernel, like Native POSIX Threading Library (NPTL).

Picking Your JVM
BEA's JRockit JVM can be used on an Intel Linux deployment and can provide many benefits as it supports both 32- and 64-bit environments. JRockit is designed for server-side execution and has advanced features like adaptive optimizations that can improve performance of the application. If you are running on a different platform (zLinux, for example) refer to the BEA supported platform page for the supported JVM.

Installing the JVM (JRockit)
JRockit's installation is simple: download the installer for your platform, execute the downloaded file (./jrockit-8.1sp1-j2se1.4.1-linux32.bin), and follow the on-screen prompts.

If you're running on an Intel processor with Hyper-Threading enabled, you will have an extra step once the installation is completed. The cpuid for each processor (real and virtual) must be readable by any process; this can be achieved automatically or by changing the permissions on the /dev/cpu/X/cpuid file (where X is the CPU number). Refer to the JRockit Release Notes (http://edocs.bea.com/wljrockit/docs81/relnotes/relnotes.html) for the details on enabling this support.

Installing BEA WebLogic
Just as with JRockit, the installation of BEA WebLogic is very simple. Download the distribution for your environment and execute the download (./platform811_linux32.bin). The installer provides a GUI (the default) or console (non-GUI) installation option. If you are installing on a platform without a GUI, or onto a remote system, add the "-mode=console" option when you start the installer to bypass the GUI. Either option will walk you through the interactive installation process, which allows you to select installation options and the home directory.

A number of factors must be considered when deploying BEA WebLogic on Linux. For example, the configuration of the J2EE application server and the surrounding ecosystem must be properly planned so that the best performance can be achieved. Start planning how you will maintain the environment before it is deployed; this preplanning will pay off once the application is in production.

Collecting performance metrics on the application and its supporting infrastructure is very important (even before production). Recording these metrics prior to production enables capacity estimates to be built and also allows a reference baseline to be created so that changes to the application or environment can be validated against the baseline prior to a production deployment.

Once in production, collecting and persisting these metrics allows a performance model to be established.

Most vendors have a service to keep you informed via e-mail about patches and updates. Be sure to sign up for these services, and make sure the e-mails go to several people within the responsible IT group. After all, if the notifications go to only one person, imagine what would happen if that person were on vacation when an emergency patch was posted.

Although some automatic update services are available, I would hesitate to use them and would opt for the notification of updates first. Then you can decide what is applicable for your installation and if any cross-vendor dependencies exist.

Although products from different vendors typically play well together, the combination of your applications and the vendor's solution may require testing within your environment before a production deployment. Use the measurements taken to compare the performance delta before and after deploying into production.

One tool to consider for your Linux deployments is Tripwire (www.tripwire.com). Both the open source and commercial variants can be very helpful in identifying the "what changed during the weekend" syndrome. Using Tripwire to create a baseline of the servers can be helpful when used in addition to your change management process to validate software and file consistency or rolling back changes.
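The baseline idea can be approximated with standard tools. The sketch below is not Tripwire itself, just a minimal stand-in to illustrate the concept; the /opt/bea path is an assumed install location.

```shell
# Minimal file-integrity baseline (a stand-in for Tripwire, not a
# replacement). APP_HOME is an assumed install location -- adjust it.
APP_HOME=/opt/bea
find "$APP_HOME" -type f -exec md5sum {} + 2>/dev/null | sort > baseline.md5
# ...after the weekend...
find "$APP_HOME" -type f -exec md5sum {} + 2>/dev/null | sort > current.md5
# Empty diff output means no file changed on disk.
diff baseline.md5 current.md5
```

Tripwire adds tamper-resistant storage of the baseline and a policy language, which a plain checksum list does not provide.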

Environment Visibility
A BEA WebLogic application often has a number of external touch points that are non-Java. Examples of these are Web servers and databases. The overall performance of the WebLogic application is influenced by how well these other components execute and the overall performance of Linux.

Examples of gathering EPA (Environment Performance Agent; see the sidebar) data include the following:

  • Linux VM data
    - Is too little memory available, causing Linux to swap?
    - How many tasks are pending and what is the load average?
  • Web server data
    - How many HTTP errors occurred between measurements?
    - Are the child threads hung?
  • Database
    - How much space is remaining?
    - What is the cache hit ratio?
  • Network
    - What IP is generating the most requests?
    - Any event alerts on the network?
What Should You Monitor?
This is a loaded question and the answer really depends on the application and your own goals for monitoring and measuring success.

As a general rule of thumb, in addition to the J2EE components within the application, anything that feeds the application, or which the application server relies on to process a request, should be monitored. Review the Environment Visibility section above and consider the touch points your own application has. How do you measure availability and acceptable performance and what are you going to actually do with the data you collect (which is very valuable)?

Collecting metrics like CPU, component response time, memory usage, thread, JDBC pool usage, and concurrent requests are a starting point in creating an understanding of the application performance. Certainly many other components are available and can be incorporated into the measurement.

One consideration you need to make before deploying the application is what happens when it does not perform within the guidelines you set for it (assuming you created a baseline before production).

Linux Configuration
The first step is to understand the physical machine. Using a few displays can help:

  • Display the CPU information (cat /proc/cpuinfo). The CPU type will be displayed for each CPU in the machine.
  • Display the memory information (cat /proc/meminfo). The memory size, swap, and cache details are displayed.
  • Display the disk capacity and free space (df -h).
  • Display the network configuration (ifconfig).
The information collected above will help you determine where the application files should reside, the network bindings, and the amount of memory you can use for the application (Java heap).
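Those four displays can be captured in one pass into an inventory file for later reference; a minimal sketch (the ifconfig call falls back to ip on systems that lack it, and the output filename is arbitrary):

```shell
# One-shot hardware/OS inventory for this machine.
OUT=machine-inventory.txt
{
  echo "== CPUs ==";    cat /proc/cpuinfo
  echo "== Memory ==";  cat /proc/meminfo
  echo "== Disks ==";   df -h
  echo "== Network =="; ifconfig -a 2>/dev/null || ip addr
} > "$OUT"
wc -l "$OUT"
```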

Review the services that are running on the machine. For example, should the machine running BEA WebLogic have an FTP or mail server running? Remove (or comment out) services that are not required by editing /etc/xinetd.conf or /etc/inetd.conf (depending on your Linux distribution). Once the services you don't need have been removed, create a baseline of disk and memory usage. Use load generation tools to observe how Linux performs, how many I/O operations occur per second, and how much swap space is used (iostat and vmstat).
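A rough way to review what xinetd still offers, and to sample memory and swap during a load run using only /proc (the /etc/xinetd.d layout and the log filename are assumptions about your distribution and site):

```shell
# List xinetd service files that are NOT disabled ("disable = yes").
grep -L 'disable[[:space:]]*=[[:space:]]*yes' /etc/xinetd.d/* 2>/dev/null
# Sample free memory and swap three times, one second apart.
for i in 1 2 3; do
  date '+%H:%M:%S'
  grep -E '^(MemFree|SwapFree)' /proc/meminfo
  sleep 1
done > swap-baseline.log
cat swap-baseline.log
```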

The baseline data can then be used for monitoring.

Runtime Secrets
Now that WebLogic is deployed on Linux, let's look at some of the process information from a Linux perspective.

Find the Linux process id for WebLogic (ps -ef | grep java). Notice that Linux has a process for each thread, so the display is a little different from other operating systems. For our example, we will assume the process id (pid) is 27260.
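Scripting that lookup is convenient for the /proc inspections that follow; a sketch that assumes a single java process on the machine (WL_PID is a hypothetical variable name):

```shell
# The [j] trick keeps grep from matching its own command line.
WL_PID=$(ps -ef | grep '[j]ava' | awk '{print $2}' | head -1)
echo "WebLogic pid: ${WL_PID:-not found}"
```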

If we needed to know what terminal started the server, and whether the terminal is remote, how would we do that? Change to the /proc/27260/fd directory, which contains the list of file descriptors used by this process. Now list fd 0 (standard input) using the list command (ls -l) and the actual device will be displayed. In this case it was /dev/pts/6. We can then use the Linux who command to see who is logged on to that device and its IP address.

> cd /proc/27260/fd
> ls -l 0
lrwx------ 1 root root 64 Nov 20 14:21 0 -> /dev/pts/6
> who | grep pts/6
weblogic pts/6 Nov 20 10:55 (

We can also display the startup command and the environment variables that are being used by this process. This can be useful when trying to track down whether a certain option has been passed to the process via a script. Using the Linux cat command, display the cmdline and environ files (cat cmdline environ).
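One wrinkle: both files use NUL bytes rather than newlines as separators, so cat runs the values together. Translating NULs to newlines makes them readable; shown here against /proc/self (the current shell) rather than the WebLogic pid:

```shell
# cmdline and environ are NUL-separated; print one entry per line.
tr '\0' '\n' < /proc/self/cmdline
tr '\0' '\n' < /proc/self/environ | head -5
```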

Another useful trick is determining what files are being used by the process. Display the maps file (cat maps), which lists the files the process has memory-mapped. An example use case is determining whether a certain JAR file was loaded, and from which directory.

> grep trader.jar maps


Tuning Considerations
Once the application has been in production for a short period of time (3-6 months), the operating system, application, and any of the touch points should be tuned - or at least the configuration parameters should be reviewed to ensure they are still appropriate. This is one of the benefits of persisting measurements made with the environment and its touch points. To measure but not persist the key metrics would be wasteful.

Sometimes tuning is necessary as the workload has changed. Maybe it's more complex as updated code or design has been migrated into production, or perhaps the application now supports a larger user base. Whatever the reason, tuning requires careful validation and, as is often the case, only performance monitors can show the overall impact to the whole application.

Start at the operating-system level and work up through the different stacks. Review the current performance measurements and use tools like Transaction Tracer to quickly show what set of components are responsible for the majority of elapsed time within a given request.

Review the load average, runnable tasks, and disk and swap activity reported per interval by the operating system. Consider reallocating files if disk activity is only on one device.

Perhaps the number of concurrent processes has increased (additional instances on the same machine). If the load average or runnable tasks are high, review what other processes are competing for resources. Maybe deploying application instances on separate machines would allow for the workload to be distributed across many machines, thus lowering the resource usage of an individual machine.

When tuning the JVM, look at the memory usage and the garbage collector that is being used. The JVM tuning document, http://edocs.bea.com/wljrockit/docs81/tunelnux, is a good resource that outlines the garbage collection and thread options that are available. The WebLogic application must also be reviewed. The article "Tuning WebLogic Server," http://edocs.bea.com/wls/docs81/perform/WLSTuning.html, is an excellent starting point. Then use the data collected to validate the performance.

Review the execute queue and thread pools within WebLogic. Are requests waiting to execute? Are enough connections available in the JDBC pool to process the expected workload, without overallocation?

Talk to your vendors about your deployment plans. Often they see multiple approaches to solving issues and can sometimes share insights based on experiences. The following Web sites will also help in building a Linux WebLogic deployment:

  • www.wilytech.com
  • http://e-docs.bea.com

    I hope this article has provided the background for a production BEA WebLogic and Linux deployment within your environment. The application server will only perform as fast as WebLogic can receive requests and retrieve data from the back end, so tuning is critical.

    The touch points we outlined and tuning considerations are a starting point. Your application and environment will have other touch points. But know that you are not alone on your Linux and BEA WebLogic deployment!

    Troubleshooting Your Linux Deployment
    Using performance-monitoring tools such as Wily's Introscope, the performance of the application and the other environment components that make up the whole application can be captured and recorded to a persistent store.

    Using Introscope features such as the Environment Performance Agent (EPA), which is designed specifically for collecting metrics from non-Java touch points, can give you a "whole application" view of the operating environment. For example, you can use Introscope EPA to collect vital operating system-level data and Web server data, combine that with the J2EE application data Introscope collects, and then display all of these metrics on a single dashboard, providing a view of the overall performance of the application.

    Tools like Introscope Transaction Tracer enable you to capture a request outside of the baseline for analysis or to create alerts to notify support staff of potential areas to investigate. These are some of the ways to address runtime issues.

    Introscope LeakHunter can also be used to track potential memory leaks within the application. If leaks are found, the class name, method, and size will be available so that a programmer can correct the problem.

    You can use Introscope to create dashboards for the various support teams within your organization before deployment so that if issues arise in production, your team members have data from the application server and supporting systems ready, enabling them to better assist in problem resolution.

    Using Introscope EPA, real-time performance data from Linux can be collected and used for monitoring and alerting. When combined with the in-depth metrics Introscope collects from BEA WebLogic, a complete picture of the application and all of its supporting systems is available (see Figure 1).

