BEA WebLogic Application Consolidation Strategies

Given the current global economic downturn, it is certainly no surprise that large organizations are putting cost-cutting measures at the top of their priority lists.

This is particularly true in the information technology (IT) arena, as the overspending of the last few years, and the associated lack of ROI, have resulted in intense scrutiny of IT spending. One of the initial responses to this new focus on cost-cutting from many IT organizations has been to consolidate much of their server infrastructure. By controlling and reversing the "server sprawl" trends of the last few years, organizations have been able to achieve substantial savings in server administration and management. Now organizations are seeking to achieve even greater savings by consolidating not only their server infrastructures, but the associated applications running on those servers - a decidedly more complex, yet potentially more rewarding, undertaking. In addition to the potential cost savings, organizations that are able to consolidate their applications will be positioning themselves for the even greater benefits of a truly adaptive enterprise. This article explores the various strategies for consolidating J2EE applications running on BEA WebLogic, and the associated challenges and benefits of doing so.

The Promise
Consolidating J2EE applications, such as portals, running on WebLogic into a single hardware infrastructure holds the promise of extensive cost savings and improved application reliability. As Java and J2EE have evolved over the past several years, IT organizations have tended to deploy both ISV and internally developed applications into what amounts to a "one application per server" model. This made a lot of sense at the time, given the rapid pace of change in the Java platform. Now that the platform has matured, IT organizations are left to deal with the resulting server sprawl and the low resource utilization associated with this model.

The good news is that tremendous cost savings can result from consolidating these applications onto more efficient platforms. Consolidating underutilized applications onto fewer but more modern processors can free existing servers for new development or round out application testing and staging environments. As an added benefit, WebLogic server licenses can potentially be consolidated and recovered for use in future initiatives. These results can be further extended by utilizing a more recent version of WebLogic - with performance improvements of approximately 30% when moving from version 7.0 to 8.1, and over 100% when moving from prior versions to 8.1. Finally, the reliability of these applications can also be dramatically improved by moving single-server applications into fault-tolerant WebLogic clusters - thus reducing support and downtime costs.
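
As a rough sketch of what that clustering step involves, the config.xml fragment below defines a two-node WebLogic cluster. The element and attribute names follow the standard WebLogic 7.0/8.1 config.xml schema, but the domain, server, and host names are hypothetical, and the administration server and application entries are omitted for brevity:

    <Domain Name="consolidated">
      <!-- Two managed servers joined into a single fault-tolerant cluster -->
      <Cluster Name="AppCluster" MulticastAddress="237.0.0.1" MulticastPort="7777"/>
      <Server Name="node1" Cluster="AppCluster" ListenAddress="host1" ListenPort="7001"/>
      <Server Name="node2" Cluster="AppCluster" ListenAddress="host2" ListenPort="7001"/>
    </Domain>

An application deployed to AppCluster rather than to a single server can then survive the loss of either node.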

The Challenge
At first glance, application consolidation appears to be only incrementally more difficult than server consolidation. However, while server consolidation generally consists of moving servers into a centralized administration and management infrastructure and eliminating redundancies, application consolidation involves running multiple, independent applications on the same hardware. Application consolidation immediately raises several issues:

  • How can I guarantee service levels between applications?
  • How can I prevent one errant application from affecting other applications?
  • How can I reconcile differing needs for operating system patch levels between applications? ...and so on.

    It quickly becomes clear that any application-consolidation initiative has a high potential for failure unless these and many other issues are proactively addressed and accounted for.

    The primary issue underlying application consolidation is striking the right balance between application isolation and resource utilization. In an ideal world, every application would be completely isolated from every other application running on the shared infrastructure, while simultaneously achieving optimal resource utilization. Of course, there is no ideal world and, to make matters worse, application isolation and resource utilization tend to be inversely related: generally speaking, the more application isolation you achieve, the less resource utilization you achieve, and vice versa. Figure 1 illustrates the various possible application isolation strategies and their relative levels of resource utilization (note the linearity of almost all the strategies).

    Flexible Strategies
    Depending upon the underlying operating system, there are several different approaches to application consolidation. For brevity, I'll focus on two operating systems - HP-UX and Linux - that are representative of the types of platforms that IT organizations are likely to run across in real-world situations.

    Single WebLogic Instance
    The first and most obvious strategy of application consolidation using WebLogic is to simply run many applications in the same instance of the application server (i.e., in a single Java Virtual Machine [JVM]; see Figure 2). Modern versions of WebLogic have the ability to run multiple, independent applications on a single server instance in a secure fashion. This appears to be a relatively straightforward solution that would ensure optimal resource utilization within a shared infrastructure (in both single-server and clustered environments).
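
    As a minimal sketch of this model, assuming WebLogic 7.0 or later and hypothetical application names and paths, the weblogic.Deployer utility can target several independent applications at the same server instance (the flag spellings below follow the standard documentation, but verify them against your release):

        java weblogic.Deployer -adminurl t3://localhost:7001 \
             -user system -password weblogic \
             -deploy -name hrportal -source /apps/hrportal.ear -targets myserver

        java weblogic.Deployer -adminurl t3://localhost:7001 \
             -user system -password weblogic \
             -deploy -name billing -source /apps/billing.ear -targets myserver

    Both applications now share the single JVM named myserver - along with its heap, its threads, and its failure modes.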

    However, the drawbacks of this design quickly become obvious when you consider all but the most trivial scenarios. A single JVM environment allows for only a superficial level of isolation between applications, and therefore presents the following issues:

  • Errant applications can negatively impact other running applications.
  • There is no way to guarantee performance service levels for different applications.
  • All applications must work with the same WebLogic version, JVM version, and associated patch levels.
  • All applications must work with the same operating-system version and patch level.
  • There is the potential for upgrade deadlocks between applications (i.e., an OS or JVM patch that one application requires ends up breaking another).
  • There is the potential for significant support issues when running third-party ISV applications in a shared infrastructure.
  • JVMs tend not to scale well beyond four-processor configurations, which limits the viability of the single-JVM model on larger machines.

    As a result of these issues, a single JVM shared environment is a viable model only in situations where extremely rigorous coding and testing standards are the norm and where only a very limited number of ISV applications (if any) are needed.

    Multiple WebLogic Instances
    The next logical strategy for running multiple WebLogic applications on a shared infrastructure is to run multiple instances of the application server (multiple JVMs) on the same physical machine (see Figure 3). This scenario affords much more application isolation than the single JVM model, while giving up only a small degree of resource utilization. Running multiple WebLogic instances makes it more difficult for an errant application to impact other applications and allows different applications to use different WebLogic versions and different JVM versions and patch levels. This model also solves many potential support issues when running third-party ISV applications since each instance has its own virtual machine, heap space, and database connection pools.
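
    A minimal sketch of this model, assuming two hypothetical domain directories, server names, and ports; weblogic.Name and weblogic.ListenPort are standard weblogic.Server startup properties, while the heap settings are purely illustrative:

        # Instance 1: internally developed app, modest heap
        (cd /domains/appA && \
         java -Xms256m -Xmx256m -Dweblogic.Name=appAserver \
              -Dweblogic.ListenPort=7001 weblogic.Server &)

        # Instance 2: ISV app, larger heap, isolated on its own port
        (cd /domains/appB && \
         java -Xms512m -Xmx512m -Dweblogic.Name=appBserver \
              -Dweblogic.ListenPort=7101 weblogic.Server &)

    Because each instance has its own heap, an OutOfMemoryError or runaway garbage collection in one JVM leaves the other untouched.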

    While running multiple WebLogic instances clearly has some advantages over a single-instance model, it also has some significant drawbacks, including:

  • Errant applications still have a chance (albeit a small chance) of negatively impacting other running applications.
  • There is no way to completely ensure performance service levels for different applications.
  • All applications must work with the same operating system version and patch level.
  • There is additional memory overhead from running multiple JVMs.

    Despite these challenges, running multiple instances of WebLogic is a viable solution for application consolidation in many circumstances and represents a good balance between application isolation and resource utilization.

    Virtual Machines and Virtual Partitions
    While a multiple JVM configuration provides some degree of application isolation, there are many circumstances where greater isolation is necessary between applications. The next logical level of isolation is at the operating system level and is achieved through the use of virtual machines (in the case of Linux) or virtual partitions (or "vPars" in the HP-UX world). In the virtual machine scenario, a single physical machine runs a primary operating system (in our case, Linux) that serves as a "host" OS for various "guest" operating systems, each running in its own virtual memory space. The host OS essentially virtualizes physical system resources (CPU, disk, I/O, etc.) and coordinates access to these resources by the guest operating systems (see Figure 4).

    Virtual partitions employ similar concepts, but they do not require a dedicated "host" operating system (there is a vPar monitor that performs this function) nor do they truly virtualize system resources (physical resources are assigned to each virtual partition).

    Implementing virtual machines on Linux requires a third-party application, such as the VMware ESX platform, while virtual partitions are a built-in feature of HP-UX 11i. From a performance standpoint, VMware virtual machines and HP-UX virtual partitions differ somewhat in their implementation details. Being part of the base OS and not having to virtualize all resources, HP-UX virtual partitions tend to have a lower overhead penalty than VMware virtual machines, but are limited to running multiple versions of HP-UX only. Conversely, VMware virtual machines pay a higher overhead penalty but are able to host a wider range of "guest" operating systems, including Windows NT/2000/2003, Red Hat Linux 7.x/8.x, Red Hat Advanced Server 2.1, SuSE Linux 7.3, and FreeBSD 4.5 (all on the IA-32 architecture).

    Since each virtual machine/partition runs a completely independent operating system instance, they achieve a high degree of application isolation, including the ability to dedicate a certain percentage of system resources to particular virtual machines to achieve service-level commitments. (Note: In the case of HP-UX virtual partitions, processor resources are allocatable only on a per-CPU level of granularity.) In effect, each application is completely unaware that it is running in a virtual, rather than a physical, machine. Each virtual machine/partition can also be quickly reconfigured based upon demand, giving system administrators a very high degree of flexibility. However, the price for such independence is the inherent overhead involved in managing multiple OS instances and their associated memory requirements.
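
    For a flavor of the vPars approach, the commands below sketch carving two partitions with dedicated CPUs and memory and then shifting a CPU between them on demand. The vparcreate/vparmodify syntax follows HP's vPars documentation as I recall it, the partition names and sizes are invented, and a real configuration would also need I/O and boot-device assignments:

        # Two CPUs and 2GB of memory for the production WebLogic partition
        vparcreate -p wlsprod -a cpu::2 -a mem::2048

        # One CPU and 1GB for the staging partition
        vparcreate -p wlsstage -a cpu::1 -a mem::1024

        # Reconfigure on demand: move an (unbound) CPU from staging to production
        vparmodify -p wlsstage -d cpu::1
        vparmodify -p wlsprod -a cpu::1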

    "Hard" Partitions
    Aside from actually using multiple physical servers, the far extreme of the application isolation scale is the use of "hard" partitions on high-end Unix systems such as the HP Superdome platform. A hard partition conceptually resembles a virtual partition, except that it is actually implemented in the hardware architecture of the machine. Each hard partition is physically allocated a certain number of CPUs, memory, storage, and so on from a pool of resources available within the machine. Once allocated, these partitions act, for all intents and purposes (including fault tolerance), as separate physical machines, thus providing the highest possible degree of application isolation but the lowest overall resource utilization.

    From a consolidation standpoint, hard partitions can achieve a marginally higher level of resource utilization than independent machines due to their reconfiguration flexibility and manageability. The resources of hard partitions can quickly be reallocated to other partitions to meet the ever-changing needs of the organization. Management and operational costs can also be reduced through the use of hard partitions by eliminating the redundant system administration resources needed to manage a decentralized environment.

    Resource Partitions and HP Workload Manager
    A resource partition is similar to a virtual partition or a virtual machine, but it utilizes only a single instance of the HP-UX operating system. In this scenario, each resource partition gets its own allocation of CPU resources, which is managed by its own process scheduler, and its own allocation of memory resources, which is managed by its own memory management subsystem. Thus, a resource partition can simplistically be viewed as a "mini" virtual machine - without as much overhead, but restricting the system to a single instance (and hence, version) of the operating system.

    While resource partitions represent an interesting compromise between multiple JVMs and virtual machines, they are able to achieve even higher levels of resource utilization by using an HP product called Workload Manager (WLM). WLM essentially sits on top of a set of resource partitions and periodically monitors their performance characteristics versus a set of pre-established service-level objectives (SLOs). If the SLO of any resource partition is not being met, then WLM can dynamically reallocate processor and memory resources from other resource partitions to achieve the correct service level. That is, if one partition needs additional CPU or memory resources, WLM can dynamically reallocate resources from another partition and add them to the needed partition - all automatically and without human intervention. Ultimately, this increases overall resource utilization while simultaneously ensuring that system resource allocations never fall below a minimum threshold specified by an application owner (see Figure 5).
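
    To make the SLO mechanism concrete, here is a hedged sketch of a WLM configuration based on the general shape of the wlmconf syntax; the group names, priorities, and CPU bounds are invented, and the data-collector (tune) stanza that would actually feed the response-time metric is omitted:

        prm {
            groups = OTHERS : 1, portal : 2, billing : 3;
        }

        # Keep the portal partition between 20% and 80% of the CPU,
        # borrowing shares from lower-priority partitions whenever its
        # response-time goal slips.
        slo portal_slo {
            pri = 1;
            mincpu = 20;
            maxcpu = 80;
            entity = PRM group portal;
            goal = metric portal_response_time < 2.0;
        }

        slo billing_slo {
            pri = 2;
            mincpu = 10;
            maxcpu = 50;
            entity = PRM group billing;
        }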

    Workload Manager also has WebLogic-specific monitoring functionality that allows it to dynamically reallocate resources to partitions based upon BEA WebLogic Server's free thread count and/or work queue length. This further ensures that system resources are utilized in an optimal fashion while maintaining a high degree of application isolation.

    On large, multiprocessor HP-UX systems, HP Workload Manager probably represents the best combination of application isolation and optimal resource utilization for BEA WebLogic in most circumstances.

    Combining Application Isolation Strategies
    The good news is that many of these application isolation strategies can be combined to achieve the right level of granularity to effectively balance isolation needs with resource utilization targets. In practice, many enterprises will need to combine several solutions to precisely fit their needs. To illustrate the possible combinations and the decision-making process behind choosing the right combination strategy, I'll look at two different scenarios involving multiple application isolation requirements.

    Scenario 1
    In this scenario, an IT organization is tasked with consolidating three internally developed applications. Two of the applications were recently developed by the same team and are deployed on BEA WebLogic 7.0 running on Red Hat Linux 7.1. The third application was developed by an outside consulting firm (and is currently under a support contract with that firm) and is deployed on WebLogic 6.1 running on Red Hat 7.1. In this case, the best singular application isolation strategy would be to deploy each application in its own instance of WebLogic to ensure that there are no support conflicts with the outside vendor's application. However, the IT organization could cut out a significant amount of overhead by running the externally developed application in its own JVM and having the other two applications share a JVM.
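
    A sketch of the resulting layout, reusing the weblogic.Server startup properties shown earlier; the domain directories, server names, and ports are invented for illustration:

        # JVM 1: WebLogic 6.1 - the vendor-supported application,
        # isolated to avoid any support conflicts
        (cd /domains/vendor61 && \
         java -Dweblogic.Name=vendorserver \
              -Dweblogic.ListenPort=7001 weblogic.Server &)

        # JVM 2: WebLogic 7.0 - shared by the two internally
        # developed applications
        (cd /domains/internal70 && \
         java -Dweblogic.Name=internalserver \
              -Dweblogic.ListenPort=7101 weblogic.Server &)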

    Scenario 2
    In the second scenario, an IT organization is tasked with consolidating four different WebLogic applications - a third-party ISV application that runs on top of WebLogic and three internally developed integration applications. The third-party ISV application is only supported under WebLogic 6.1 running on Windows 2000, while the three internally developed applications are currently running on WebLogic 7.0 on Red Hat Linux 7.2. At first glance, the only singular application isolation strategy that would work for all four applications would be a virtual machine implementation on Linux - resulting in each application running in its own virtual machine, with the high overhead requirements associated with doing so. However, by combining a multiple JVM strategy with the virtual machine implementation, this overhead could be reduced significantly by running the third-party ISV application in its own virtual machine and then running three instances of WebLogic in a second virtual machine to accommodate the three internally developed applications.

    Obviously, the number of possible combinations is almost endless and real-world consolidation scenarios are likely to be significantly more complex than the two that we have outlined here. However, in order to avoid long-term support issues, it probably makes sense to standardize on a particular subset of combinations that fit the majority of the applications deployed in a particular enterprise.

    Summary
    It is clear that many IT organizations can potentially achieve extensive cost savings and improved application reliability by consolidating their J2EE applications running on WebLogic onto a common hardware infrastructure. However, there are many complex issues to consider, and achieving the right balance between application isolation and resource utilization is a critical success factor.

    For most large enterprises, there is probably no single application isolation strategy that will work in every situation. Simply put, different applications have varying isolation needs and any overall strategy must be flexible enough to accommodate the majority of those needs. The good news is that a platform as flexible as BEA WebLogic, combined with adaptive infrastructure solutions and services from HP, can make the promise of J2EE application consolidation a reality.

    About the Author

    Alex Heublein is the Managing Principal of the Next-Generation Enterprise Technologies team within HP’s Consulting and Integration division. Prior to joining HP, Alex was the CTO of an IT consulting firm focused on B2B integration technologies and was the Managing Principal of the Application Server & Middleware group for IBM Global Services in North America. He holds a Bachelor of Science degree in Computer Science from the Georgia Institute of Technology.


