
Deploying WebLogic on Linux

You are not alone

The rising business trend toward using open source software platforms has brought an increase in the number of critical applications deployed on Linux and BEA WebLogic. For many organizations, in fact, WebLogic deployments are their first major Linux installation.

This article provides an overview of deployment considerations for a Linux/WebLogic combination.

Linux deployments span traditional Intel-based servers, grid environments, and mainframe systems (IBM's z/VM with Linux guests, for example). This article covers only the Intel architecture; however, almost all of the points apply to non-Intel deployments as well.

Why Linux?
Why the increasing number of deployments? Linux provides an alternative to proprietary operating systems. It can offer lower cost of ownership for some customers and has a large following of skilled workers. The Linux operating system is highly configurable and the source is usually available, so you can change the behavior or recompile options that are specific for your site. Lastly, a number of vendors support Linux, allowing the customer to pick the application software and hardware that is right for them.

Picking Your Distribution
WebLogic currently supports the major Linux distributions (Red Hat and SuSE). Refer to the BEA site (http://edocs.bea.com/wls/certifications/certs_810/overview.html#1043408) for the updated list of supported configurations. Both Red Hat and SuSE contain additional features (like cluster services) that may be useful for your installation. At the time of this writing, Red Hat had just released Enterprise Linux v3, so check on the certification pages for this version of Linux as several important enhancements have been added to the kernel, like Native POSIX Threading Library (NPTL).

Picking Your JVM
BEA's JRockit JVM can be used on an Intel Linux deployment and can provide many benefits as it supports both 32- and 64-bit environments. JRockit is designed for server-side execution and has advanced features like adaptive optimizations that can improve performance of the application. If you are running on a different platform (zLinux, for example) refer to the BEA supported platform page for the supported JVM.

Installing the JVM (JRockit)
JRockit's installation is simple: download the installer for your platform, execute the downloaded file (./jrockit-8.1sp1-j2se1.4.1-linux32.bin), and follow the on-screen prompts.

If you're running on an Intel processor with Hyper-Threading enabled, there is an extra step once the installation is completed. The cpuid for each processor (real and virtual) must be readable by any process; this can be achieved automatically or by changing the /dev/cpu/X/cpuid (X is the CPU number) file permissions. Refer to the JRockit Release Notes (http://edocs.bea.com/wljrockit/docs81/relnotes/relnotes.html) for all the details on enabling this support.
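A minimal sketch of that permission change follows (it assumes the /dev/cpu/X/cpuid layout described above; the DEV_ROOT variable is an illustrative convenience so you can rehearse against a test copy before touching the real /dev):

```shell
# Make each CPU's cpuid device world-readable so JRockit can detect
# Hyper-Threaded (logical) processors. DEV_ROOT defaults to /dev but
# can point at a test copy first.
DEV_ROOT=${DEV_ROOT:-/dev}
for cpuid in "$DEV_ROOT"/cpu/[0-9]*/cpuid; do
    [ -e "$cpuid" ] && chmod a+r "$cpuid"
done
true   # the glob may match nothing on non-HT machines; that is fine
```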

Installing BEA WebLogic
Just as with JRockit, the installation of BEA WebLogic is very simple. Download the distribution for your environment and execute the download (./platform811_linux32.bin). The installer provides a GUI (the default) or console (non-GUI) installation option. If you are installing on a platform without a GUI or onto a remote system, pass the "-mode=console" option when you start the installer to bypass the GUI. Either option walks you through the interactive installation process, where you select installation options and the home directory.

A number of factors must be considered when deploying BEA WebLogic on Linux. For example, configuration of the J2EE application server and the surrounding ecosystem must be planned properly to achieve the best performance. Before the environment is deployed, start planning how you will maintain it; this preplanning will pay off once the application is in production.

Collecting performance metrics on the application and its supporting infrastructure is very important (even before production). Recording these metrics prior to production enables capacity estimates to be built and also allows a reference baseline to be created so that changes to the application or environment can be validated against the baseline prior to a production deployment.

Once in production, collecting and persisting these metrics allows a performance model to be established.

Most vendors have a service to keep you informed via e-mail about patches and updates. Be sure to sign up for these services and make sure the e-mails go to several people within the responsible IT group. After all, if the notifications go to only one user, imagine what would happen if that user were on vacation when an emergency patch was posted.

Although some automatic update services are available, I would hesitate to use them and would opt for the notification of updates first. Then you can decide what is applicable for your installation and if any cross-vendor dependencies exist.

Although products from different vendors typically play well together, the combination of your applications and the vendor's solution may require testing within your environment before a production deployment. Use the measurements taken to compare the performance delta before and after deploying into production.

One tool to consider for your Linux deployments is Tripwire (www.tripwire.com). Both the open source and commercial variants can be very helpful in identifying the "what changed during the weekend" syndrome. Using Tripwire to create a baseline of the servers can be helpful when used in addition to your change management process to validate software and file consistency or to roll back changes.

Environment Visibility
A BEA WebLogic application often has a number of external touch points that are non-Java. Examples of these are Web servers and databases. The overall performance of the WebLogic application is influenced by how well these other components execute and the overall performance of Linux.

Examples of gathering EPA (Environment Performance Agent; see sidebar) data include the following:

  • Linux VM data
    - Is too little memory available, causing Linux to swap?
    - How many tasks are pending and what is the load average?
  • Web server data
    - How many http errors occurred between measurements?
    - Are the child threads hung?
  • Database
    - How much space is remaining?
    - What is the cache hit ratio?
  • Network
    - What IP is generating the most requests?
    - Any event alerts on the network?
What Should You Monitor?
This is a loaded question and the answer really depends on the application and your own goals for monitoring and measuring success.

As a general rule of thumb, in addition to the J2EE components within the application, anything that feeds the application, or that the application server relies on to process a request, should be monitored. Review the Environment Visibility section above and consider the touch points your own application has. How do you measure availability and acceptable performance, and what are you actually going to do with the (very valuable) data you collect?

Collecting metrics like CPU, component response time, memory usage, thread, JDBC pool usage, and concurrent requests are a starting point in creating an understanding of the application performance. Certainly many other components are available and can be incorporated into the measurement.

One consideration you need to make before deploying the application is what happens when it does not perform within the guidelines you set for it (assuming you created a baseline before production).

Linux Configuration
The first step is to understand the physical machine. Using a few displays can help:

  • Display the CPU information (cat /proc/cpuinfo). The CPU type will be displayed for each CPU in the machine.
  • Display the memory information (cat /proc/meminfo). The memory size, swap, and cache details are displayed.
  • Display the disk capacity and free space (df -h).
  • Display the network configuration (ifconfig).
The information collected above will help you determine where the application files should reside, the network bindings, and the amount of memory you can use for the application (Java heap).
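The displays above can be bundled into a single baseline file to refer back to later (a sketch; the file name is arbitrary, and `ip addr` is used as a fallback where `ifconfig` is not on the path):

```shell
# Record a one-shot hardware/OS inventory to consult when sizing the
# Java heap and deciding where application files should reside.
out="machine-baseline.txt"
{
  echo "== CPU count ==";  grep -c '^processor' /proc/cpuinfo 2>/dev/null
  echo "== Memory ==";     grep -E 'MemTotal|SwapTotal' /proc/meminfo 2>/dev/null
  echo "== Disks ==";      df -h
  echo "== Network ==";    ifconfig -a 2>/dev/null || ip addr 2>/dev/null
} > "$out"
echo "baseline written to $out"
```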

Review the services that are running on the machine. For example, should the machine running BEA WebLogic also run an FTP or mail server? Remove (or comment out) services that are not required by editing /etc/xinetd.conf or /etc/inetd.conf (depending on your Linux distribution). Once the services you don't need have been removed, create a baseline of disk and memory usage. Then use load-generation tools and observe how Linux performs: how many I/O operations occur per second and how much swap space is used (iostat and vmstat).
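Commenting a service out can be as simple as prefixing its line with `#`. The sketch below demonstrates the edit on a sample file (the daemon paths are illustrative); on a real server you would apply the same edit to /etc/inetd.conf, or the per-service files under /etc/xinetd.d, and then restart inetd/xinetd:

```shell
# Disable ftp and telnet so inetd no longer starts them.
cat > inetd.conf.sample <<'EOF'
ftp     stream  tcp  nowait  root  /usr/sbin/in.ftpd    in.ftpd
telnet  stream  tcp  nowait  root  /usr/sbin/in.telnetd in.telnetd
EOF
sed -i -e 's/^ftp/#ftp/' -e 's/^telnet/#telnet/' inetd.conf.sample
```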

The baseline data can then be used for monitoring.

Runtime Secrets
Now that WebLogic is deployed on Linux, let's look at some of the process information from a Linux perspective.

Find the Linux process id for WebLogic (ps -ef|grep java). Notice that Linux has a process for each thread so the display is a little different from other operating systems. For our example, we will assume the process id (pid) is 27260.

How would we find out which terminal started the server, and whether that terminal is remote? Access the /proc/27260/fd directory, which contains the list of file descriptors used by the process. List fd 0 (standard input) with the list command (ls -l) and the actual device is displayed; in this case it is /dev/pts/6. We can then use the Linux who command to see who is logged on at that device and their IP address.

> cd /proc/27260/fd
> ls -l 0
lrwx------ 1 root root 64 Nov 20 14:21 0 -> /dev/pts/6
> who | grep pts/6
weblogic pts/6 Nov 20 10:55 (

We can also display the startup command and the environment variables that are being used by this process. This can be useful when trying to track down whether a certain option has been passed to the process via a script. Using the Linux cat command, display the cmdline and environ files (cat cmdline environ).
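Note that cmdline and environ are NUL-separated, so cat runs their entries together; piping them through tr makes the output readable. A sketch (using the current shell's pid, since 27260 is only the article's example):

```shell
pid=$$                                   # substitute the WebLogic pid, e.g. 27260
if [ -d "/proc/$pid" ]; then
    tr '\0' '\n' < "/proc/$pid/cmdline"  # one argument per line
    tr '\0' '\n' < "/proc/$pid/environ"  # one environment variable per line
fi
```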

Another useful trick is determining which files the process is using. Display the maps file (cat maps), which lists the files the process has mapped into memory. An example use case: determining whether a certain JAR file was loaded and the directory it was loaded from.

> grep trader.jar maps


Tuning Considerations
Once the application has been in production for a short period of time (3-6 months), the operating system, application, and any of the touch points should be tuned - or at least the configuration parameters should be reviewed to ensure they are still appropriate. This is one of the benefits of persisting measurements made with the environment and its touch points. To measure but not persist the key metrics would be wasteful.

Sometimes tuning is necessary as the workload has changed. Maybe it's more complex as updated code or design has been migrated into production, or perhaps the application now supports a larger user base. Whatever the reason, tuning requires careful validation and, as is often the case, only performance monitors can show the overall impact to the whole application.

Start at the operating-system level and work up through the different stacks. Review the current performance measurements and use tools like Transaction Tracer to quickly show what set of components are responsible for the majority of elapsed time within a given request.

Review the load average, runnable tasks, and disk and swap activity reported per interval by the operating system. Consider reallocating files if disk activity is only on one device.

Perhaps the number of concurrent processes has increased (additional instances on the same machine). If the load average or runnable tasks are high, review what other processes are competing for resources. Maybe deploying application instances on separate machines would allow for the workload to be distributed across many machines, thus lowering the resource usage of an individual machine.

When tuning the JVM, look at the memory usage and the garbage collector that is being used. The JVM tuning document, http://edocs.bea.com/wljrockit/docs81/tunelnux, is a good resource that outlines the garbage-collection and thread options available. The WebLogic application must also be reviewed; the article "Tuning WebLogic Server," http://edocs.bea.com/wls/docs81/perform/WLSTuning.html, is an excellent starting point. Then use the data collected to validate the performance.
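As a starting point, heap and collector options are typically passed through JAVA_OPTIONS in the WebLogic start scripts. The values below, and the -Xgc collector choice, are illustrative placeholders to be validated against your baseline and the JRockit tuning guide, not recommendations:

```shell
# Fixed heap (-Xms equal to -Xmx) avoids resize pauses; gencon selects
# JRockit's generational concurrent collector. Tune both from your baseline.
JAVA_OPTIONS="-Xms512m -Xmx512m -Xgc:gencon"
export JAVA_OPTIONS
echo "$JAVA_OPTIONS"
```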

Review the execute queue and thread pools within WebLogic. Are requests waiting to execute? Are enough connections available in the JDBC pool to process the expected workload, without over-allocation?
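Both settings live in config.xml. The element names below are from the WebLogic 8.1 schema, but every value is illustrative and the pool attributes shown are only a subset:

```xml
<!-- Execute queue: threads available to service requests -->
<ExecuteQueue Name="weblogic.kernel.Default" ThreadCount="25"/>

<!-- JDBC pool: size to the expected concurrency, without over-allocating -->
<JDBCConnectionPool Name="examplePool"
    InitialCapacity="10"
    MaxCapacity="25"
    DriverName="oracle.jdbc.driver.OracleDriver"
    Targets="myserver"/>
```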

Talk to your vendors about your deployment plans. Often they see multiple approaches to solving issues and can sometimes share insights based on experiences. The following Web sites will also help in building a Linux WebLogic deployment:

  • www.wilytech.com
  • http://e-docs.bea.com

    I hope this article has provided the background for a production BEA WebLogic and Linux deployment in your environment. The application will only perform as fast as WebLogic can receive requests and retrieve data from the back end, so tuning is critical.

    The touch points we outlined and tuning considerations are a starting point. Your application and environment will have other touch points. But know that you are not alone on your Linux and BEA WebLogic deployment!

    Troubleshooting Your Linux Deployment
    Using performance-monitoring tools such as Wily's Introscope, the performance of the application and the other environment components that make up the whole application can be captured and recorded to a persistent store.

    Using Introscope and Introscope features such as the Environment Performance Agent (EPA), which is designed specifically for the collection of metrics from non-Java touch points, can offer you a "whole application" view of the operating environment. For example, you can use Introscope EPA to collect vital operating system–level data and Web server data, combine that with J2EE application data collected using the Introscope and then display all of these metrics on a dashboard for viewing. The metrics are then converted into performance metrics that can be used by Introscope to provide a view of the overall performance of the application.

    Tools like Introscope Transaction Tracer enable you to capture a request outside of the baseline for analysis or to create alerts to notify support staff of potential areas to investigate. These are some of the ways to address runtime issues.

    Introscope LeakHunter can also be used to track potential memory leaks within the application. If leaks are found, the class name, method, and size will be available so that a programmer can correct the problem.

    You can use Introscope to create dashboards for the various support teams within your organization before deployment so that if issues arise in production, your team members have data from the application server and supporting systems ready, enabling them to better assist in problem resolution.

    Using Introscope EPA, real-time performance data from Linux can be collected and used for monitoring and alerting. When combined with the in-depth metrics Introscope collects from BEA WebLogic, a complete picture of the application and all of its supporting systems is available (see Figure 1).

