
WebLogic on the Mainframe: Install, Configure, Deploy


So, you're going to deploy WebLogic Server on the mainframe. Pretty scary, huh? There are all those "glass house" terms: sysgens, operating systems with a "z," parallel sysplex, Workload Manager, and on and on. Without a little education, the mainframe world can be as foreign to the Java developer and architect as the distributed J2EE world is to a COBOL programmer on the host.

But these two environments aren't as different as you might believe. This article covers how to install, configure, and deploy WebLogic and WebLogic-based J2EE applications on the mainframe, specifically z/OS and z/VM deployments.

The first article in this series (WLDJ, Vol. 1, issue 7) described why customers have decided to combine the benefits of the industry's most stable, reliable, and scalable application server with the operational advantages of the mainframe.

  • Rewriting existing mainframe applications in Java allows higher programmer productivity and flexibility and eliminates dependence on a single vendor.

  • Consolidating UNIX and Windows servers with z/Linux lowers total cost of ownership.

  • Deploying new applications on existing mainframes (z/Linux and z/OS) offers better resource utilization.

  • Leveraging business contingency benefits through the mainframe's qualities of service and operational properties ensures that J2EE applications are always available.

  • Extending existing systems and applications when rewriting isn't practical or feasible lowers cost.

    This article details the installation, including the steps required; what is needed on the mainframe; and how it's different from WebLogic installations on other platforms.

    Mainframe Configuration and Installation
    There are three key ways to install and configure WebLogic for deployment on the mainframe:

    1. Install and run in z/OS
    2. Install and run on Linux under z/VM
    3. Create a Logical Partition (LPAR) and install WebLogic directly on Linux on the mainframe
    One advantage of WebLogic on the mainframe is that, regardless of your deployment model, the same J2EE application developed on Windows or another UNIX platform will run without any changes on the mainframe. You're free to use powerful developer productivity aids without consideration of the required deployment platform. In addition, WebLogic has the unique ability to cluster a J2EE application across various hardware platforms, grouping the mainframe and UNIX or Windows NT/2000 servers in a single cluster.

    WebLogic Under z/OS
    WebLogic can be installed on z/OS, IBM's flagship operating system used on the zSeries line of hardware. When deployed on z/OS, WebLogic executes as a UNIX System Services (USS) task. USS can be thought of as a mode of execution on z/OS - all the POSIX APIs are implemented directly into z/OS, and provide a standard API set for a multithreaded environment. This execution model enables programs written using UNIX libraries to execute unchanged (or with minor changes) on the mainframe.

    Naturally, WebLogic on this platform uses a Java Virtual Machine (JVM); on the mainframe we use IBM's JVM. The current version of IBM's JVM is based on JDK 1.3.1 and can be downloaded free of charge from www.ibm.com/java.

    Once the JVM is installed, run the "java -version" command to verify the installation.

    WebLogic can now be installed - download a copy of WebLogic Server from the BEA Web site (www.bea.com/download) onto a local file system. Once the download is complete you're ready to start the installation process. This is where we need to note a few procedural differences from installing WebLogic on a Windows or UNIX platform.

  • The installation process is run from a Telnet session to the mainframe, not via an OMVS shell.

  • Only the shell scripts are in EBCDIC; all other files are ASCII. It's always a best practice to review the readme.txt file for the very latest installation steps; the documentation will walk you through the remaining steps.

    At this point you have a WebLogic Server installed on the mainframe under USS. To someone familiar with WebLogic, this looks like a WebLogic installation on any other platform: the sample applications, graphical aids and deployment tools, steps to deploy applications, and administration console are the same.

    The snippet of the WebLogic console shown in Figure 1 should look familiar; it's actually WebLogic on the mainframe. The only giveaway is the platform information shown in the console's version details.

    WebLogic can now be started, shut down, and administered as on any other WebLogic platform.

    How can WebLogic make use of the mainframe environment's attributes? You could follow the normal method of starting WebLogic, i.e., running the startWebLogic script, but that requires the administrator to be logged on to the mainframe operating system via a Telnet session. A different and decidedly more mainframe approach is to create some JCL (Job Control Language) programs, which are like scripts for the mainframe operating system, to start and stop WebLogic. Using a simple JCL procedure, we can control WebLogic.

    Listing 1 is an example of some JCL that could be used, although the mainframe system programmer would customize this for a specific site's use.
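    To give the flavor of such a procedure, a minimal sketch follows. All job parameters, data set names, and paths here are hypothetical and would be replaced by the system programmer; BPXBATCH is the standard z/OS utility for running a USS shell command from batch JCL.

```jcl
//WLSTART  JOB (ACCT),'START WEBLOGIC',CLASS=A,MSGCLASS=X
//* Run the WebLogic start script under UNIX System Services.
//* All paths below are illustrative - substitute your site's.
//RUN      EXEC PGM=BPXBATCH,REGION=0M,TIME=NOLIMIT,
//         PARM='SH /u/weblogic/bea/wlserver/startWebLogic.sh'
//STDOUT   DD PATH='/u/weblogic/logs/wls.stdout',
//         PATHOPTS=(OWRONLY,OCREAT,OTRUNC),PATHMODE=SIRWXU
//STDERR   DD PATH='/u/weblogic/logs/wls.stderr',
//         PATHOPTS=(OWRONLY,OCREAT,OTRUNC),PATHMODE=SIRWXU
```

    A matching job could invoke the stop script, so operations automation can submit both as ordinary batch jobs.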

    When using JCL to control the execution of WebLogic, we don't have to use a Telnet session to start WebLogic - this allows the operations staff to automate startup and shutdown, making WebLogic's operation on the mainframe more natural for the mainframe staff because they can use the tools they're accustomed to.

    WebLogic and the WorkLoad Manager
    Workload management is a very powerful function of z/OS. The WorkLoad Manager enables machine resources to be utilized efficiently to achieve the best performance for a given workload. For example, it might define that a workload must have its transactions completed within a given number of seconds - a Response Goal.

    Like any workload on the mainframe, WebLogic can participate in workload management. For example, the administrator would define a Response Goal for WebLogic where a percentage of transactions must be completed within a time period (specified in seconds). z/OS then prioritizes all the work on the machine to achieve this goal. As you might imagine, this is a very powerful, results-oriented scheduling mechanism.

    In order to implement a Response Goal for WebLogic, the mainframe system programmer defines a group with the required goal. The goal is then associated with the WebLogic task. The goal and corresponding task are defined using the standard WorkLoad Manager screens in ISPF. Figure 2 shows a screen captured from a WorkLoad Manager configuration session.
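    As a rough illustration of the kind of definition entered through those ISPF panels - the class name, importance, and numbers below are entirely hypothetical - a service class for WebLogic work might read:

```
Service Class WLSPROD - WebLogic production work
  Period  Duration  Importance  Goal
  1       -         2           90% of transactions complete within 0.5 seconds
```

    WLM then steers CPU and I/O priority across everything running on the machine to meet that 90%/0.5-second target.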

    WebLogic Under z/VM and Linux
    Also consider the z/VM deployment model, which has many benefits as well. The z/VM operating system enables multiple virtual machines to be created that represent a physical mainframe, thus allowing a high level of resource-sharing and reuse to be achieved. For example, using multiple z/VM virtual machines as z/VM Guests may allow for multiple UNIX or NT deployments to be consolidated on to a single mainframe server (see Figure 3).

    When using z/VM, install Linux into the z/VM Guest machine. WebLogic currently supports SuSE, but other Linux distributions will be certified over time. Once Linux is installed on the virtual machine, WebLogic can be installed. Start the installation by accessing Linux running in the z/VM Guest from a Telnet session and run the install command shown below; you'll then be prompted for the remaining installation steps.

    "java -classpath weblogic600sp2_generic.zip install -i console"

    The only differences in the steps for installing on z/VM are:

  • WebLogic on z/VM uses the IBM JVM (as noted above).

  • A JAAS file is created in the home directory of the Linux user that will start WebLogic (e.g., /home/weblogic).

    Because z/VM has a high degree of resource sharing, the configuration of the z/VM Guest machine that executes both Linux and WebLogic is important. The memory allocated to WebLogic is defined by the Guest size parameter, which sets the amount of memory the virtual machine will have. The actual amount of memory required depends greatly on your application; however, a typical value is 512MB. It's important to note that this is virtual memory.

    Another important setting is the execution class. This value will differ from site to site, but WebLogic should be defined as a high-priority Guest machine to z/VM, since the transaction response times end users experience depend on the execution class.

    z/VM provides a number of key performance optimizations. For example, all network functions can be implemented as if they existed in a single virtual machine. This concept is called virtualization. z/VM provides an option, Guest LAN Support, where a LAN segment is defined in memory and all the Guest machines connect to it via a high-speed, in-memory network. WebLogic can use this option, and the z/VM virtualization will hide the actual implementation.
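    Pulling the Guest size and Guest LAN settings together, a z/VM user directory entry for such a guest could resemble the sketch below. Every name, device address, and size shown is hypothetical; the real entry is written by the z/VM system programmer for the site.

```
USER WLSLNX1 PASSWORD 512M 1024M G
*  512M of virtual storage, expandable to 1G
   MACHINE XA
   IPL 0200
*  Virtual NIC attached to an in-memory Guest LAN
   NICDEF 0600 TYPE QDIO LAN SYSTEM WLSLAN
*  Minidisk holding the Linux system
   MDISK 0200 3390 0001 3338 LNX001 MR
```

    The NICDEF statement is what connects the guest to the in-memory LAN segment; WebLogic simply sees an ordinary Linux network interface.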

    Other network options are available as well, such as the virtual channel-to-channel adapter (VCTCA) to establish a connection between two Guest machines via a virtual point-to-point network.

    WebLogic Running Linux Within an LPAR
    The last deployment model is the installation of Linux directly on the mainframe hardware. This is known as a Logical Partition (LPAR) installation. This type of installation reserves a portion of the mainframe hardware for a logically separate operating system environment. Although hardware reuse is limited and resources may not be allocated in the most efficient manner, this is an easy way to try Linux on the mainframe, since neither z/VM nor z/OS is required (see Figure 4).

    Once Linux is installed in the LPAR, the installation of WebLogic follows the same steps as a z/VM Guest machine installation.

    Because the WebLogic platform has so many third-party vendors building on or interfacing with WebLogic Server, a whole new breed of applications is becoming available. A number of traditional mainframe vendors are committed to supporting WebLogic deployments on the mainframe. These new offerings will allow WebLogic to be managed and administered using the tools and utilities with which mainframe staff are already familiar.

    Which Operating System?
    The question of which deployment model to use for the mainframe can be answered by identifying what hardware and operating system are available. Although an LPAR provides an easy way to test WebLogic, the ease of deploying WebLogic in z/OS or as a z/VM Guest enables customers to select the environment they're comfortable with.

    Because WebLogic and the application are configured and deployed in the same way on all platforms, even if the production deployment is on different hardware or a combination of operating systems/hardware, picking the operating system is strictly a matter of selecting the one that provides the services (like WLM) you need.

    What's Next?
    With the execution and operational questions answered, we need to build an application that will access legacy applications and data. The mainframe has many applications, databases, and files that can be accessed in a variety of ways, including Web services, the J2EE Connector Architecture, native API calls exposed to Java, or JDBC APIs. A future article will explore each of these options in more detail, and will look at how using tools such as BEA WebLogic Workshop can make it easy to design and assemble applications with access to mainframe-based applications and data. We'll also detail a production application using WebLogic on the mainframe and some of the development, deployment, tuning, and management tips and tricks learned from production deployments.

    A number of options are available to customers who need to deploy J2EE applications on the mainframe, including z/OS, Linux under z/VM, and Linux running natively in an LPAR. The steps to install and configure WebLogic on the mainframe are similar to the steps required on other platforms, but there are some key differences. The actual deployment model the customer selects will depend on a number of factors (skills, availability, costs, etc.). Each option has benefits and drawbacks; however, WebLogic deployment on the mainframe provides a high level of performance and integration.

    The next article in this series will address development and testing strategies, and will describe lessons learned by current BEA customers running production WebLogic-based applications on the mainframe. We'll cover details of the integration strategies for access to mainframe systems and data, including how to use Web services to gain access to and from mainframe systems.

    About the Author

    Tad Stephens is a system engineer for BEA Systems based in Atlanta, Georgia. Tad came to BEA from WebLogic and has over 10 years of distributed computing experience covering a broad range of technologies, including J2EE, Tuxedo, CORBA, DCE, and the Encina transaction system.

