Deploying WebLogic on Linux

You are not alone

The rising business trend toward using open source software platforms has brought an increase in the number of critical applications deployed on Linux and BEA WebLogic. For many organizations, in fact, WebLogic deployments are their first major Linux installation.

This article provides an overview of deployment considerations when using a Linux/WebLogic combination.

Linux deployments span everything from traditional Intel-based servers and grid environments to mainframe systems (IBM's z/VM with Linux guests, for example). In this article we will cover only the Intel architecture; however, almost all of the points covered are applicable to non-Intel deployments.

Why Linux?
Why the increasing number of deployments? Linux provides an alternative to proprietary operating systems. It can offer a lower cost of ownership for some customers and has a large pool of skilled workers. The Linux operating system is highly configurable, and the source is usually available, so you can change its behavior or recompile with options specific to your site. Lastly, a number of vendors support Linux, allowing customers to pick the application software and hardware that is right for them.

Picking Your Distribution
WebLogic currently supports the major Linux distributions (Red Hat and SuSE). Refer to the BEA site (http://edocs.bea.com/wls/certifications/certs_810/overview.html#1043408) for the current list of supported configurations. Both Red Hat and SuSE contain additional features (like cluster services) that may be useful for your installation. At the time of this writing, Red Hat had just released Enterprise Linux v3, so check the certification pages for this version of Linux; several important enhancements have been added to its kernel, like the Native POSIX Thread Library (NPTL).

Picking Your JVM
BEA's JRockit JVM can be used on an Intel Linux deployment and provides many benefits, as it supports both 32- and 64-bit environments. JRockit is designed for server-side execution and has advanced features, like adaptive optimizations, that can improve the performance of the application. If you are running on a different platform (zLinux, for example), refer to the BEA supported-platforms page for the supported JVM.

Installing the JVM (JRockit)
JRockit's installation is simple: download the installer for your platform, execute the downloaded file (./jrockit-8.1sp1-j2se1.4.1-linux32.bin), and follow the on-screen prompts.
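A minimal sketch of the steps (the installer file name comes from the example above; the install path used for verification is an assumption that depends on where you choose to install):

> chmod +x jrockit-8.1sp1-j2se1.4.1-linux32.bin
> ./jrockit-8.1sp1-j2se1.4.1-linux32.bin
> # after installation, confirm the JVM responds (install path is hypothetical)
> /opt/jrockit-8.1sp1/bin/java -version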

If you're running on an Intel processor with Hyper-Threading enabled, you will have an extra step once the installation is completed. The cpuid for each processor (real and virtual) must be readable by any process; this can be achieved automatically or by changing the /dev/cpu/X/cpuid (X is the CPU number) file permissions. Refer to the JRockit Release Notes (http://edocs.bea.com/wljrockit/docs81/relnotes/relnotes.html) for the details on enabling this support.
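A hedged sketch of the manual permission change (run as root; the path follows the /dev/cpu/X/cpuid pattern above, and the cpuid kernel module must be loaded for the device files to exist):

> # load the cpuid driver if /dev/cpu/*/cpuid is missing
> modprobe cpuid
> # make the cpuid devices readable by any process
> chmod a+r /dev/cpu/*/cpuid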

Installing BEA WebLogic
Just as with JRockit, the installation of BEA WebLogic is very simple. Download the distribution for your environment and execute the download (./platform811_linux32.bin). The installer provides a GUI (the default) or console (non-GUI) installation option. If you are installing on a platform without a GUI or on a remote system, you can pass the "-mode=console" option when you start the installer. Either option will walk you through the interactive installation process, which allows you to select installation options and the home directory.
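For example (a sketch using the file name above):

> # GUI installation (the default)
> ./platform811_linux32.bin
> # console installation for headless or remote systems
> ./platform811_linux32.bin -mode=console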

Maintenance
A number of factors must be considered when deploying BEA WebLogic on Linux. For example, the configuration of the J2EE application server and its surrounding ecosystem must be planned properly so that the best performance can be achieved. Start the process of maintaining the environment before it is deployed; this preplanning will pay off once the application is in production.

Collecting performance metrics on the application and its supporting infrastructure is very important (even before production). Recording these metrics prior to production enables capacity estimates to be built and also allows a reference baseline to be created so that changes to the application or environment can be validated against the baseline prior to a production deployment.

Once in production, collecting and persisting these metrics allows a performance model to be established.

Most vendors have a service to keep you informed via e-mail about patches and updates. Be sure to sign up for these services, and make sure the e-mails go to a number of people within the responsible IT group. After all, if the notifications go to only one user, you can imagine what would happen if that user happened to be on vacation when an emergency patch was posted.

Although some automatic update services are available, I would hesitate to use them and would opt for the notification of updates first. Then you can decide what is applicable for your installation and if any cross-vendor dependencies exist.

Although products from different vendors typically play well together, the combination of your applications and the vendor's solution may require testing within your environment before a production deployment. Use the measurements taken to compare the performance delta before and after deploying into production.

One tool to consider for your Linux deployments is Tripwire (www.tripwire.com). Both the open source and commercial variants can be very helpful in identifying the "what changed during the weekend" syndrome. Using Tripwire to create a baseline of the servers, in addition to your change management process, can help validate software and file consistency or roll back changes.

Environment Visibility
A BEA WebLogic application often has a number of external touch points that are non-Java. Examples of these are Web servers and databases. The overall performance of the WebLogic application is influenced by how well these other components execute and the overall performance of Linux.

Examples of gathering EPA (Environment Performance Agent; see the sidebar "Troubleshooting Your Linux Deployment") data include the following (a shell sketch for the Linux VM items appears after the list):

  • Linux VM data
    - Is too little memory available, causing Linux to swap?
    - How many tasks are pending and what is the load average?
  • Web server data
    - How many http errors occurred between measurements?
    - Are the child threads hung?
  • Database
    - How much space is remaining?
    - What is the cache hit ratio?
  • Network
    - What IP is generating the most requests?
    - Any event alerts on the network?
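A minimal sketch of gathering the Linux VM items above from standard tools (sample intervals are arbitrary; verify field meanings against your distribution):

> # swap in/out (si/so columns) and free memory, sampled twice at 5-second intervals
> vmstat 5 2
> # load average and the count of runnable tasks
> cat /proc/loadavg
> uptime
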
What Should You Monitor?
This is a loaded question and the answer really depends on the application and your own goals for monitoring and measuring success.

As a general rule of thumb, in addition to the J2EE components within the application, monitor anything that feeds the application or that the application server relies on to process a request. Review the Environment Visibility section above and consider the touch points your own application has. How do you measure availability and acceptable performance, and what are you going to do with the (very valuable) data you collect?

Metrics like CPU utilization, component response time, memory usage, thread counts, JDBC pool usage, and concurrent requests are a starting point for understanding application performance. Certainly many other metrics are available and can be incorporated into the measurement.

One consideration you need to make before deploying the application is what happens when it does not perform within the guidelines you set for it (assuming you created a baseline before production).

Linux Configuration
The first step is to understand the physical machine. Using a few displays can help:

  • Display the CPU information (cat /proc/cpuinfo). The CPU type will be displayed for each CPU in the machine.
  • Display the memory information (cat /proc/meminfo). The memory size, swap, and cache details are displayed.
  • Display the disk capacity and free space (df -h).
  • Display the network configuration (ifconfig).
The information collected above will help you determine where the application files should reside, the network bindings, and the amount of memory you can use for the application (Java heap).
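A small sketch that captures this inventory to a file for later reference (the file name hw-baseline.txt is arbitrary):

> hostname > hw-baseline.txt
> cat /proc/cpuinfo /proc/meminfo >> hw-baseline.txt
> df -h >> hw-baseline.txt
> ifconfig -a >> hw-baseline.txt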

Review the services that are running on the machine. For example, should the machine running BEA WebLogic have an FTP or mail server running? Remove (or comment out) services that are not required by editing /etc/xinetd.conf or /etc/inetd.conf (depending on your Linux distribution). Once the services you don't need have been removed, create a baseline of disk and memory usage. Use load-generation tools to observe how Linux performs, how many I/O operations occur per second, and how much swap space is used (iostat and vmstat).

The baseline data can then be used for monitoring.
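A sketch of creating that baseline under load (chkconfig is specific to Red Hat-style distributions; sample intervals and file names are arbitrary):

> # list services and their runlevel states before disabling anything
> chkconfig --list
> # record I/O and virtual memory activity every 5 seconds while the load test runs
> iostat 5 > iostat-baseline.txt &
> vmstat 5 > vmstat-baseline.txt &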

Runtime Secrets
Now that WebLogic is deployed on Linux, let's look at some of the process information from a Linux perspective.

Find the Linux process id for WebLogic (ps -ef | grep java). Notice that on Linux each thread appears as a process, so the display is a little different from other operating systems. For our example, we will assume the process id (pid) is 27260.

If we needed to know what terminal started the server, and whether that terminal is remote, how would we do that? Access the /proc/27260/fd directory, which contains the list of file descriptors used by the process. Now list fd 0 (standard input) using the list command (ls -l) and the actual device will be displayed; in this case it is /dev/pts/6. We can then use the Linux who command to see who is logged on at that device and from which IP address.

> cd /proc/27260/fd
> ls -l 0
lrwx------ 1 root root 64 Nov 20 14:21 0 -> /dev/pts/6

> who
weblogic pts/6 Nov 20 10:55 (192.168.1.105)

We can also display the startup command and the environment variables being used by the process. This can be useful when trying to track down whether a certain option has been passed to the process via a script. Using the Linux cat command in /proc/27260, display the cmdline and environ files (cat cmdline environ).
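Both files are NUL-delimited, so piping them through tr makes the output readable (a small sketch using the pid from above):

> cd /proc/27260
> tr '\0' '\n' < cmdline
> tr '\0' '\n' < environ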

Another useful trick is determining what files are in use by the process. Display the maps file (cat maps), which lists the files the process has memory-mapped, including loaded JAR files. An example use case could be determining whether a certain JAR file was loaded and the directory it was loaded from.

> grep trader.jar maps
/opt/bea/weblogic81/samples/domains/examples/examplesServer/stage/_appsdir_webservices_trader_ear/trader.jar

Tuning Considerations
Once the application has been in production for a short period of time (3-6 months), the operating system, the application, and any of the touch points should be tuned, or at least the configuration parameters should be reviewed to ensure they are still appropriate. This is one of the benefits of persisting the measurements made of the environment and its touch points; to measure but not persist the key metrics would be wasteful.

Sometimes tuning is necessary because the workload has changed. Maybe it has become more complex as updated code or a new design has been migrated into production, or perhaps the application now supports a larger user base. Whatever the reason, tuning requires careful validation and, as is often the case, only performance monitors can show the overall impact on the whole application.

Start at the operating-system level and work up through the different stacks. Review the current performance measurements and use tools like Introscope Transaction Tracer to quickly show which set of components is responsible for the majority of elapsed time within a given request.

Review the load average, runnable tasks, and disk and swap activity reported per interval by the operating system. Consider reallocating files if disk activity is only on one device.

Perhaps the number of concurrent processes has increased (additional instances on the same machine). If the load average or runnable tasks are high, review what other processes are competing for resources. Maybe deploying application instances on separate machines would allow for the workload to be distributed across many machines, thus lowering the resource usage of an individual machine.

When tuning the JVM, look at the memory usage and the garbage collector being used. The JVM tuning document, http://edocs.bea.com/wljrockit/docs81/tunelnux, is a good resource that outlines the garbage collection and thread options available. The WebLogic application must also be reviewed. The article "Tuning WebLogic Server," http://edocs.bea.com/wls/docs81/perform/WLSTuning.html, is an excellent starting point. Then use the data collected to validate the performance.
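A hedged example of JRockit heap and collector options (the -Xgc:gencon flag follows the JRockit 8.1 documentation's naming; myapp.jar is a hypothetical application, and you should verify the flags against the tuning document above):

> # fixed 512MB heap with the generational concurrent garbage collector
> java -Xms512m -Xmx512m -Xgc:gencon -jar myapp.jar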

Review the execute queues and thread pools within WebLogic. Are requests waiting to execute? Are enough connections available in the JDBC pool to process the expected workload, without overallocation?
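One hedged way to inspect these at run time is the weblogic.Admin command-line tool (the URL and credentials shown are placeholders; check the WebLogic 8.1 documentation for the runtime MBean types available to GET):

> # dump execute queue runtime statistics
> java weblogic.Admin -url t3://localhost:7001 -username system -password weblogic GET -pretty -type ExecuteQueueRuntime
> # dump JDBC connection pool runtime statistics
> java weblogic.Admin -url t3://localhost:7001 -username system -password weblogic GET -pretty -type JDBCConnectionPoolRuntime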

Resources
Talk to your vendors about your deployment plans. Often they see multiple approaches to solving issues and can sometimes share insights based on experiences. The following Web sites will also help in building a Linux WebLogic deployment:

  • www.wilytech.com
  • http://e-docs.bea.com

Summary
I hope this article has provided the background for a production BEA WebLogic and Linux deployment within your environment. The application server will only perform as fast as WebLogic can receive requests and retrieve data from the back end, so tuning is critical.

The touch points we outlined and tuning considerations are a starting point. Your application and environment will have other touch points. But know that you are not alone on your Linux and BEA WebLogic deployment!

Troubleshooting Your Linux Deployment
Using performance-monitoring tools such as Wily's Introscope, the performance of the application and of the other environment components that make up the whole application can be captured and recorded to a persistent store.

Introscope features such as the Environment Performance Agent (EPA), designed specifically for collecting metrics from non-Java touch points, can offer a "whole application" view of the operating environment. For example, you can use Introscope EPA to collect vital operating system-level data and Web server data, combine that with the J2EE application data Introscope collects, and display all of these metrics on a dashboard, providing a view of the overall performance of the application.

Tools like Introscope Transaction Tracer enable you to capture a request that falls outside the baseline for analysis, or to create alerts that notify support staff of potential areas to investigate. These are some of the ways to address runtime issues.

Introscope LeakHunter can also be used to track potential memory leaks within the application. If leaks are found, the class name, method, and size are available so that a programmer can correct the problem.

You can use Introscope to create dashboards for the various support teams within your organization before deployment so that if issues arise in production, your team members have data from the application server and supporting systems ready, enabling them to better assist in problem resolution.

Using Introscope EPA, real-time performance data from Linux can be collected and used for monitoring and alerting. When combined with the in-depth metrics Introscope collects from BEA WebLogic, a complete picture of the application and all of its supporting systems is available (see Figure 1).