WebLogic on the Mainframe

In helping our customers deploy J2EE applications on the mainframe we've learned a number of tips and tricks. We've included configuration settings, tuning suggestions, and descriptions of existing production applications in this article. Although each environment is different, these tips and tricks should jump-start anyone considering a mainframe WebLogic deployment.

In the first article (WLDJ, Vol. 1, issue 7) in this series, we discussed many of the business benefits to be realized by deploying J2EE applications on the mainframe. These benefits included leveraging Java for better programmer productivity, aggregating multiple servers onto a single mainframe partition to lower operational costs and more efficiently utilize existing hardware, leveraging mainframe quality-of-service capabilities for 24x7x365 application availability, and extending existing applications and data located on the host machines. The second article (WLDJ, Vol. 1, issue 8) detailed how to install and configure WebLogic Server for z/Linux and z/OS environments, including the steps required, the resources needed on the mainframe, and the differences from installing WebLogic on other platforms.

One of the benefits that can be realized when deploying WebLogic Server on the mainframe is the extension of access to existing systems and data. In today's business environment enterprises are looking more than ever for ways to leverage existing investment in mainframe systems and databases rather than taking on the costs associated with rewriting applications and rehosting them in a distributed environment. Web services is a key technology that can enable this access. Rather than covering Web services and data integration in this article, we've decided to add a fourth article to our trilogy, à la Douglas Adams and The Hitchhiker's Guide to the Galaxy, to thoroughly detail how to Web service-enable existing mainframe applications and data using WebLogic Server.

Now let's get to it.

Performance Tips
When it comes to tuning applications, no recommendation will fit all customers. In general, a baseline for an application should be created, including a well-defined test procedure that exactly or closely models the behavior of the business application. All tuning and application changes can then be compared to the baseline by rerunning a well-defined test procedure. Once tested and validated, changes that result in performance improvements can then be promoted to the production system with risks minimized. This performance and tuning methodology requires establishing a test environment in which the configuration can be controlled, along with defining and implementing a repeatable test process.

However, a few generalizations can be made about performance and tuning for WebLogic Server running on the mainframe. These tips will help create a good starting point for creating a baseline.

Hardware Requirements
There are a number of factors that affect the performance of a WebLogic-based application on the mainframe. Some of these affect the operating system, some the application and security subsystems, and some are related to WebLogic Server. However, none are more important than whether the underlying processor is designed to support Java. Specifically, IBM recommends the G5 class processor with IEEE floating-point support for Java applications to achieve optimal performance. Although Java applications can execute and be deployed on a non-IEEE floating-point processor, the performance is significantly lower. One alternative is to use non-IEEE floating-point processors for development or prototype work where overall throughput is not a critical factor and G5 class processors for production deployment.

General Tuning
We've broken the tuning topic into two sections, z/Linux and z/OS, based on the operating system used. Although we make some generalizations, these suggestions are an excellent starting point for planning a mainframe WebLogic deployment. In addition, there are some UNIX System Services (USS) parameters that should be reviewed. The actual changes made to your environment will depend on a number of factors, such as the workload you will be processing and other applications deployed. In particular, the workload - concurrent users, number of transactions, and time period - greatly affects the decisions you have to make when configuring a system.

Tuning Tips for z/Linux

  • Set the virtual machine guest size to 512MB.
    This is a good average size for initial configuration,
    although you might be able to create a smaller machine
    if your workload and concurrent user load are
    relatively small.
  • Disable any Linux services that aren't needed.
  • As the z/VM system hosting WebLogic Server will
    typically also support quite a few interactive users, we
    recommend assigning an execution class to the WebLogic
    guest, ensuring that the server has enough CPU and memory
    resources.
  • Ensure that the WebLogic NativeIO option is enabled.
    This can be set from the WebLogic Console using the Tuning tab.
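
For reference, native I/O is typically recorded as an attribute of the server entry in the domain's config.xml. A minimal sketch, assuming a hypothetical server named "myserver" (your server name, port, and other attributes will differ):

  <Server Name="myserver" ListenPort="7001" NativeIOEnabled="true"/>

Changing the value through the console writes the same attribute; a server restart may be required before the change takes effect.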

Tuning Tips for z/OS

  • Consider placing commonly used modules (javac for example)
    into the LPA (link pack area).
  • Follow the TCP/IP tuning recommendations for your operating
    system release.
  • Use the WorkLoad Manager to ensure that the right mix of
    system resources is used.

USS Parameters
WebLogic executes as a USS task. In fact, it is not uncommon to find that WebLogic is the first major application to execute in this environment. Because of this, the USS configuration should be reviewed prior to deployment and adjusted as needed. The BPXPRMxx member in the SYS1.PARMLIB library contains the parameters that control the execution of USS tasks.

As a baseline, the following parameters should be reviewed (a sample parmlib sketch follows the list):

  • MAXASSIZE: This is the maximum address
    space size. If resources will allow, set this to the 2GB maximum
    (2147483647).
  • MAXTHREADS: This is the maximum number of
    threads per process. A good starting point is 10000.
  • MAXTHREADTASKS: This is the maximum number
    of operating system tasks a given address space can have active
    concurrently. Setting this parameter to 5000 is a good starting
    point for this value.
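
A minimal sketch of how these baseline values might appear in the parmlib member is shown below; the member suffix and the surrounding parameters are installation specific:

  /* Sample BPXPRMxx values for a WebLogic Server host            */
  MAXASSIZE(2147483647)    /* maximum address space size (2GB)    */
  MAXTHREADS(10000)        /* maximum threads per process         */
  MAXTHREADTASKS(5000)     /* maximum MVS tasks per address space */

Remember that new values take effect only after the member is activated, for example with a SET OMVS or SETOMVS operator command, or at the next IPL.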

Security Considerations
In addition to the settings noted above, the user identity used to start WebLogic Server can also affect the application's performance. A number of parameters that affect USS resource allocation are set in the user's RACF (Resource Access Control Facility) profile. These values may override those defined globally for USS, impacting WebLogic Server performance. There are a number of ways to prevent this, such as removing the USS-related parameters from the RACF profile, or setting the global USS parameters lower and configuring higher values in the user's RACF profile. However, the final implementation is the administrator's choice.

The first parameter to check for the user identity starting WebLogic Server is the personal address space size value, ASSIZEMAX, specified in the RACF OMVS segment. This parameter sets the address space size for a specific user. If this value is less than MAXASSIZE, the smaller ASSIZEMAX value associated with the user's profile will override MAXASSIZE and be used instead. As a workaround, many administrators will set the global MAXASSIZE parameter to a smaller value and override it with a larger ASSIZEMAX setting in the RACF profile.

Likewise, the MAXTHREADS value can also be overridden by the THREADSMAX value specified in the user's RACF OMVS segment.
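
If you choose to raise the limits for the server's user ID rather than globally, a minimal RACF sketch, assuming a hypothetical user ID of WLSADMIN, would be:

  ALTUSER WLSADMIN OMVS(ASSIZEMAX(2147483647) THREADSMAX(10000))

The values shown simply mirror the global baselines discussed earlier; split them between the global and per-user settings however your site standards dictate.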

In general, it isn't a good idea to run the WebLogic Server startup script from an OMVS shell, since the TSO region size will be used; TSO regions are usually only 4MB in size, and WebLogic Server will very quickly run out of memory. A better approach is to start the WebLogic Server instance using a JCL (Job Control Language) procedure or via a Telnet session. An example of a JCL procedure to start WebLogic Server was included in our second article. One suggestion is to use a Telnet session when configuring WebLogic Server after installation and during development, then create a JCL procedure for server startup when ready for production deployment.
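
A minimal sketch of the key JCL step, using BPXBATCH and a hypothetical installation path (the complete procedure appeared in our second article), might look like this:

  //WLSSTART EXEC PGM=BPXBATCH,REGION=0M,
  //    PARM='SH /u/weblogic/mydomain/startWebLogic.sh'
  //STDOUT   DD PATH='/u/weblogic/mydomain/wls.stdout',
  //    PATHOPTS=(OWRONLY,OCREAT,OTRUNC),PATHMODE=SIRWXU
  //STDERR   DD PATH='/u/weblogic/mydomain/wls.stderr',
  //    PATHOPTS=(OWRONLY,OCREAT,OTRUNC),PATHMODE=SIRWXU

REGION=0M requests the largest region the installation allows, which avoids the small TSO-style default mentioned above.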

VM Guest Options
The most important settings for a virtual machine are:

  • The virtual machine size: this has already been defined with a base of 512MB (a CP command sketch follows this list).
  • The execution class
  • The share of processor resources WebLogic will receive: a relative share setting for CPU resources ensures that WebLogic doesn't starve other virtual machines.
  • The use of the z/VM Guest LAN to support WebLogic clusters: this option provides an in-memory LAN segment that WebLogic instances can use to communicate throughout the cluster.
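
Two of these settings translate directly into CP commands. A minimal sketch, assuming a hypothetical guest named LINWLS1 (your guest names, share values, and storage size will differ):

  CP DEFINE STORAGE 512M
  CP SET SHARE LINWLS1 RELATIVE 300

DEFINE STORAGE is issued from the guest to set its virtual machine size; SET SHARE is issued by a privileged user to weight the guest's relative CPU share. The Guest LAN itself is created with the usual DEFINE LAN and COUPLE steps, the details of which depend on your z/VM level and network layout.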

Legacy Applications
One of the key advantages of deploying WebLogic Server on the mainframe is the proximity to the underlying business data and information. There are a number of connectors for mainframe applications that enable calls to legacy systems to be handled in a very efficient manner. Many of these options for legacy integration were outlined in the second article in this series, including the ShadowDirect adapters available from Neon Systems. In the next article we'll outline the various options for mainframe application integration, including Web services.

Regardless of the adapter or connectivity option used, the configuration options for that adapter should be reviewed. This is particularly important when WebLogic Server and the legacy application are on the same platform, since configuration options may provide an extra performance boost.

Java Virtual Machine
A number of parameters affect the performance of the Java Virtual Machine (JVM) on the mainframe. The first item to review is the minimum and maximum heap size. This setting controls how often the garbage collector runs. Contrary to popular belief, setting the JVM's heap size too high can in many cases be as bad as setting it too low.

Unless you have detailed knowledge of the application running in WebLogic and how it uses memory, the only way to determine the optimum minimum and maximum heap size values is by trial and error. Setting the heap size too small results in constant garbage collection; setting it too large leads to long collection pauses and can push the address space into paging. A good recommendation is to start with minimum and maximum heap size settings of 256MB each. The minimum heap size is set with the "-Xms" option when starting WebLogic Server; the maximum heap size is set with the "-Xmx" option. Values can then be adjusted based on how the application performs.
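
For example, a 256MB starting point is commonly expressed in the server start script along these lines (MEM_ARGS is the variable used by the standard startWebLogic.sh scripts; adjust to however your script passes JVM options):

  MEM_ARGS="-Xms256m -Xmx256m"

Increase or decrease both values together while watching garbage collection frequency and pause times under your baseline load test.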

The JVM heap is allocated immediately during server startup, above the 16MB line. However, JES job exits and the REGION size on the WebLogic JCL can limit the amount of memory actually obtained, so always verify that the server gets the size specified.

Application Code
There is no right or wrong way to code programs, but there are some generally accepted best practices when coding any Java or J2EE application. These practices apply to applications deployed on the mainframe just like other platforms. It's a good idea to review the code and make sure these practices are enforced and followed. A number of sources, such as BEA WebLogic Developer's Journal, cover many of these best practices in great detail. In particular, be aware of things like multithreaded servlets, large objects, very granular Enterprise JavaBeans, etc. Although you may not have the luxury of changing the code, particularly when using packaged applications, you can often configure WebLogic Server to tolerate them. For example, it's generally a good idea to isolate certain components into another instance of WebLogic Server and allow the runtime execution to resolve the actual deployment of the components.
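
As one small illustration of the multithreaded servlet point, per-request state belongs in local variables rather than instance fields, because a single servlet instance serves many request threads. The sketch below is illustrative only and is not drawn from any of the customer applications discussed later:

  public class AccountServlet extends javax.servlet.http.HttpServlet {
      // private String accountId;  // risky: one instance is shared by all request threads
      protected void doGet(javax.servlet.http.HttpServletRequest req,
                           javax.servlet.http.HttpServletResponse res)
              throws javax.servlet.ServletException, java.io.IOException {
          String accountId = req.getParameter("accountId");  // safe: local to this request
          res.getWriter().println("Account: " + accountId);
      }
  }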

Performance Tools
Once a test environment has been established, the initial load test will form the baseline for future tuning efforts. It is key to have some quantitative way to measure performance. On the mainframe many performance measurement tools exist, such as Wily's Introscope. With tools such as these the actual internals of WebLogic Server and the business application can be monitored, both in QA and production mode. For example, Introscope collects statistics and metrics in a SQL database. This information can be used for performance analysis and capacity planning, and as a way to compare changes made to the application and the underlying server configuration, such as determining whether increasing the heap size in the JVM actually improves performance.

The vmstat command is another useful tool. During a load test in the QA environment, its output can be redirected to a file for later analysis (an example follows the list below). The vmstat command will display a number of runtime resources, including:

  • Paging rates
  • Task status
  • Memory usage
  • CPU times

With this information problems can be identified very quickly. Each of these will assist in pinpointing areas where a potential performance bottleneck might exist.
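
A typical way to capture this data is to run vmstat at a fixed interval for the duration of the load test and redirect its output to a file, for example (the interval, count, and file name are arbitrary):

  vmstat 10 360 > vmstat-baseline.log &

This records a sample every 10 seconds for an hour, which can then be lined up against the load generator's timeline and the metrics collected by tools such as Introscope.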

Customer Examples
BEA has a number of customers running WebLogic Server on the mainframe, covering a broad range of business situations and environments. In this article we have selected three situations - application redeployment, new application deployment, and deployment in a heterogeneous cluster. Each of these examples discusses an actual customer deployment. Together, the examples span both z/OS and z/Linux customers, and represent a general survey of how WebLogic Server on the mainframe is being used to solve critical business problems.

Application Redeployment
In the first case, the customer had an existing application deployed using WebLogic Server on a UNIX server. Unfortunately, this particular application could only support a small number of concurrent users. To meet the requirements dictated by the business unit the customer had to add UNIX servers, increasing hardware and administrative costs, as well as increasing the complexity of the production deployment. The goals in moving to the mainframe were: (1) to exchange the hardware platform without making any code or architecture changes to the application, (2) to move the application closer to the data accessed, and (3) to compare the performance in a mainframe environment with the performance on UNIX hardware.

Existing Deployment
The existing application was based on J2EE standards and deployed on WebLogic Server v6.0 using JDK 1.3.0. Several connectors were used to access data from legacy applications. All of these were written in Java, which enabled the same connectors to be used when the application was redeployed on the mainframe.

Results
The application was deployed to WebLogic Server on the mainframe without making any changes to the design or the application. The existing data connectors were utilized on the mainframe. When the performance tests were run on the new deployment platform, a single WebLogic Server instance achieved five times the concurrent user load. In addition, this load was achieved while cutting the response time approximately in half.

Summary
In this case the customer was able to utilize the portability of J2EE applications with WebLogic Server, increase application performance with lower response time, lower administrative costs, reduce complexity, and redeploy the application to a new hardware platform without requiring modification to application components or the design. This gave the customer the freedom to choose the hardware platform providing the desired quality of service for their application. In addition, the ease with which the application was deployed to the mainframe suggests that consolidation from a number of UNIX servers to the mainframe is achievable.

New Application Deployment
In many cases, customers will decide to develop and deploy new applications on the mainframe to leverage the quality-of-service features available, such as the WorkLoad Manager. A particular benefit found during numerous evaluations with customer systems is that, as the workload increases, the overall responsiveness of the application does not vary widely. Deploying such services on the mainframe allocates resources efficiently, allowing customers to plan and predict performance accurately.

In this particular case, several new applications were designed and developed specifically for deployment on the mainframe. The business unit had specified that the applications must be highly available and able to support a large number of concurrent users. The customer determined that deploying the same applications in a cluster of distributed servers would require many more servers, increasing complexity, and in many cases a single server would be needed for each application. By deploying on the mainframe the customer was again able to lower operational and administrative costs, reduce application complexity, and consolidate a number of unique server instances on a single mainframe. Utilizing the WorkLoad Manager gave the customer the necessary degree of application availability while effectively utilizing the underlying resources.

Contingency Deployment
One advantage of WebLogic Server, regardless of the underlying hardware, is the unique clustering technology. Clustering provides application redundancy and failover in a distributed environment. WebLogic Server clustering allows heterogeneous hardware servers running the same application to be combined within a single WebLogic cluster. In a recent case, a customer decided to use WebLogic Server instances running on the mainframe as backup nodes for the WebLogic Servers running on UNIX in a separate data center.

Background
The customer had already deployed a number of UNIX servers running WebLogic Server. In the event the UNIX servers experienced some critical failure the mainframe running WebLogic Server would assume a portion or all of the workload. Although the UNIX and mainframe hardware were located in separate data centers, the business need for highly available applications required that these heterogeneous platforms be clustered.

Design
The WebLogic cluster included the WebLogic Server instances running on the UNIX servers as well as those running on the mainframe. An HTTP proxy server running in the network DMZ routed initial session requests to the UNIX servers. Using replication groups, session replication was directed to the WebLogic Server instances deployed on the mainframe. Sessions were persisted in memory on the primary and secondary servers.
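
A minimal config.xml sketch of this arrangement, with hypothetical server, cluster, and group names (the customer's actual configuration is not reproduced here), might look like:

  <Server Name="unixServer1" Cluster="mixedCluster" ListenPort="7001"
          ReplicationGroup="unixGroup" PreferredSecondaryGroup="mainframeGroup"/>
  <Server Name="mfServer1" Cluster="mixedCluster" ListenPort="7001"
          ReplicationGroup="mainframeGroup"/>

With settings along these lines, a session created on a UNIX instance prefers a mainframe instance as its secondary, so the loss of the UNIX data center does not lose in-flight sessions.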

Results
In this case, by including both UNIX and mainframe WebLogic Server instances located in separate data centers the customer delivered redundancy between data centers without affecting the end user. The customer achieved high availability and reliability, regardless of the underlying hardware platform, and leveraged advanced clustering features such as in-memory session persistence and replication groups in a heterogeneous environment. Regardless of planned outages or critical events, the application continued processing without interruption.

Conclusion
We've examined a number of performance and tuning tips for planning a WebLogic Server deployment on the mainframe. We've provided configuration and initial settings and outlined a plan for achieving optimal application performance. We've also detailed three scenarios where WebLogic Server is currently in use on the mainframe. In the final article in this series on WebLogic Server and the mainframe we will examine many of the options for data and application integration, including how to Web service-enable applications by using WebLogic Server on the mainframe. Be there or be square!

About the Author

Tad Stephens is a systems engineer for BEA Systems, based in Atlanta, Georgia. Tad came to BEA from WebLogic and has over 10 years of distributed computing experience covering a broad range of technologies, including J2EE, Tuxedo, CORBA, DCE, and the Encina transaction system.
