
WebLogic on the Mainframe

In helping our customers deploy J2EE applications on the mainframe we've learned a number of tips and tricks. We've included configuration settings, tuning suggestions, and descriptions of existing production applications in this article. Although each environment is different, these tips and tricks should jump-start anyone considering a mainframe WebLogic deployment.

In the first article (WLDJ, Vol. 1, issue 7) in this series, we discussed many of the business benefits to be realized by deploying J2EE applications on the mainframe. These benefits included leveraging Java for better programmer productivity, aggregating multiple servers onto a single mainframe partition to lower operational costs and make more efficient use of existing hardware, leveraging mainframe quality-of-service capabilities for 24x7x365 application availability, and extending existing applications and data located on the host machines. The second article (WLDJ, Vol. 1, issue 8) detailed how to install and configure WebLogic Server for z/Linux and z/OS environments, including the steps required, the resources needed on the mainframe, and the differences from installing WebLogic on other platforms.

One of the benefits that can be realized when deploying WebLogic Server on the mainframe is the extension of access to existing systems and data. In today's business environment enterprises are looking more than ever for ways to leverage existing investment in mainframe systems and databases rather than taking on the costs associated with rewriting applications and rehosting them in a distributed environment. Web services is a key technology that can enable this access. Rather than covering Web services and data integration in this article, we've decided to add a fourth article to our trilogy, à la Douglas Adams and The Hitchhiker's Guide to the Galaxy, to thoroughly detail how to Web service-enable existing mainframe applications and data using WebLogic Server.

Now let's get to it.

Performance Tips
When it comes to tuning applications, no recommendation will fit all customers. In general, a baseline for an application should be created, including a well-defined test procedure that exactly or closely models the behavior of the business application. All tuning and application changes can then be compared to the baseline by rerunning a well-defined test procedure. Once tested and validated, changes that result in performance improvements can then be promoted to the production system with risks minimized. This performance and tuning methodology requires establishing a test environment in which the configuration can be controlled, along with defining and implementing a repeatable test process.

However, a few generalizations can be made about performance and tuning for WebLogic Server running on the mainframe. These tips will help create a good starting point for creating a baseline.

Hardware Requirements
There are a number of factors that affect the performance of a WebLogic-based application on the mainframe. Some of these affect the operating system, some the application and security subsystems, and some are related to WebLogic Server. However, none are more important than whether the underlying processor is designed to support Java. Specifically, IBM recommends the G5 class processor with IEEE floating-point support for Java applications to achieve optimal performance. Although Java applications can execute and be deployed on a non-IEEE floating-point processor, the performance is significantly lower. One alternative is to use non-IEEE floating-point processors for development or prototype work where overall throughput is not a critical factor and G5 class processors for production deployment.

General Tuning
We've broken the tuning topic into two sections, z/Linux and z/OS, based on the operating system used. Although we make some generalizations, these suggestions are an excellent starting point for planning a mainframe WebLogic deployment. In addition, there are some UNIX System Services (USS) parameters that should be reviewed. The actual changes made to your environment will depend on a number of factors, such as the workload you will be processing and other applications deployed. In particular, the workload - concurrent users, number of transactions, and time period - greatly affects the decisions you have to make when configuring a system.

Tuning Tips for z/Linux

  • Set the virtual machine guest size to 512MB.
    This is a good average size for initial configuration,
    although you might be able to create a smaller machine
    if your workload and concurrent user load are
    relatively small.
  • Disable any Linux services that aren't needed.
  • As the virtual machine hosting WebLogic Server
    will have quite a few interactive users, we recommend
    that an execution class be assigned to WebLogic,
    ensuring that the server will have enough CPU and memory.
  • Ensure that the WebLogic NativeIO option is enabled.
    This can be set from the WebLogic Console using the Tuning tab.

Tuning Tips for z/OS

  • Consider placing commonly used modules (javac for example)
    into the LPA (link pack area).
  • Follow the TCP/IP tuning recommendations for your operating
    system release.
  • Use the WorkLoad Manager to ensure that the right mix of
    system resources is used.

USS Parameters
WebLogic executes as a USS task. In fact, it is not uncommon to find that WebLogic is the first major application to execute in this environment. Because of this, the USS configuration should be reviewed prior to deployment and adjusted as needed. The BPXPRMxx member in the SYS1.PARMLIB library contains the parameters that control the execution of USS tasks.

As a baseline, the following parameters should be reviewed:

  • MAXASSIZE: This is the maximum address
    space size. If resources allow, set this to the 2GB maximum.
  • MAXTHREADS: This is the maximum number of
    threads per process. A good starting point is 10000.
  • MAXTHREADTASKS: This is the maximum number
    of operating system tasks a given address space can have active
    concurrently. A good starting point is 5000.
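
As a rough sketch, the three parameters above might appear together in the USS parmlib member as follows; the values are the illustrative starting points discussed here, not mandates:

```
/* Illustrative USS parmlib settings - starting points only */
MAXASSIZE(2147483647)      /* 2GB maximum address space size  */
MAXTHREADS(10000)          /* maximum threads per process     */
MAXTHREADTASKS(5000)       /* maximum MVS tasks per address space */
```

Remember that a change to these values requires the parmlib member to be reprocessed before it takes effect.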

Security Considerations
In addition to the settings noted above, the user identity used to start WebLogic Server can also affect the application's performance. A number of parameters that affect USS resource allocation are set in the user's RACF (Resource Access Control Facility) profile. These values may override those defined globally for USS, impacting WebLogic Server performance. There are a number of ways to prevent this from happening, such as removing the RACF USS parameters or setting the global USS parameters lower and configuring higher values in the user's RACF USS profile. However, the final implementation is the administrator's choice.

The first parameter to check for the user identity starting WebLogic Server is the personal address space size value ASSIZEMAX, specified in the RACF USS segment. This parameter sets a specific user's address space size. If this value is less than the MAXASSIZE, then the smaller ASSIZEMAX value associated with the user's profile will override the MAXASSIZE and be used instead. As a workaround, many administrators will set the global MAXASSIZE parameter to a smaller value and override it with a larger ASSIZEMAX setting in the RACF profile.

Likewise, the MAXTHREADS value can also be overridden by the THREADMAX value specified on the user's RACF USS segment.
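
To illustrate, an administrator might raise the address space limit in the RACF USS segment for the WebLogic startup identity with a command along these lines (WLSADMIN is a hypothetical user ID; check the exact keywords for your RACF release):

```
/* Illustrative RACF command - WLSADMIN is a hypothetical user ID */
ALTUSER WLSADMIN OMVS(ASSIZEMAX(2147483647))
```

The user must log off and back on before the new segment values are picked up.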

In general, it isn't a good idea to run the WebLogic Server startup script from an OMVS shell since the TSO region size will be used. Usually TSO regions are only 4MB in size and WebLogic Server will very quickly run out of memory. A better approach is to start the WebLogic Server instance using a JCL (Job Control Language) procedure or via a Telnet session. An example of a JCL procedure to start WebLogic Server was included in our second article. One suggestion is to use a Telnet session when configuring WebLogic Server after installation and during development, then create a JCL procedure for server startup when ready for production deployment.
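
As a reminder of the general shape (the full example appeared in our second article), a minimal startup procedure might use BPXBATCH like this; the data set and path names here are assumptions, not part of the WebLogic distribution:

```
//WLSSTART PROC
//* REGION=0M requests the maximum region size, avoiding the
//* small default that doomed the OMVS-shell approach above
//WLS      EXEC PGM=BPXBATCH,REGION=0M,
//         PARM='SH /u/weblogic/startWebLogic.sh'
```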

VM Guest Options
The most important settings for a virtual machine are:

  • The virtual machine size: This has already been defined with a base of 512MB.
  • The execution class
  • The share of processor resources WebLogic will receive: a relative share setting for CPU resources so WebLogic doesn't starve other virtual machines.
  • The use of the z/VM Guest LAN to support WebLogic clusters: this option provides an in-memory LAN segment that WebLogic instances can use to communicate throughout the cluster.
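
As an illustration only, the processor-share and Guest LAN items map onto CP commands along these lines; the guest and LAN names are hypothetical, and the share value must be chosen relative to the other guests on your system:

```
CP SET SHARE WLSGUEST RELATIVE 300
CP DEFINE LAN WLSLAN OWNERID SYSTEM
```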

Legacy Applications
One of the key advantages of deploying WebLogic Server on the mainframe is the proximity to the underlying business data and information. There are a number of connectors for mainframe applications that enable calls to legacy systems to be handled in a very efficient manner. Many of these options for legacy integration were outlined in the second article in this series, including the ShadowDirect adapters available from Neon Systems. In the next article we'll outline the various options for mainframe application integration, including Web services.

Regardless of the adapter or connectivity option used, the configuration options for that adapter should be reviewed. This is particularly important when WebLogic Server and the legacy application are on the same platform, since configuration options may provide an extra performance boost.

Java Virtual Machine
A number of parameters affect the performance of the Java Virtual Machine (JVM) on the mainframe. The first item to review is the minimum and maximum heap size. This setting controls how often the garbage collector runs. Contrary to popular belief, setting the JVM's heap size too high can in many cases be as bad as setting it too low.

Unless you have detailed knowledge of the application running in WebLogic and how it uses memory, the only way to determine the optimum minimum and maximum heap size values is by trial and error. Setting the heap size too small will result in frequent garbage collection; setting it too large will result in paging and long garbage collection pauses. A good recommendation is to start with minimum and maximum heap size settings of 256MB each. The minimum heap size is set with the "-Xms" option when starting WebLogic Server; the maximum heap size is set with the "-Xmx" option. Values can then be adjusted based on how the application performs.
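
In a startup script, the two options might be set together as in this minimal sketch; the variable name is an assumption for illustration, not a name mandated by the WebLogic distribution:

```shell
# Equal min/max heap per the 256MB starting point above; making them
# equal avoids heap resizing while the baseline is being established.
JAVA_OPTIONS="-Xms256m -Xmx256m"
echo "Starting WebLogic Server with heap options: ${JAVA_OPTIONS}"
# The actual launch line would then resemble:
#   java ${JAVA_OPTIONS} -classpath ${CLASSPATH} weblogic.Server
```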

The JVM heap memory is allocated above the 16MB line immediately during server startup. However, overrides in the JES job exit and the region size on the WebLogic JCL can limit the amount of memory actually allocated, so always verify that the server receives the size specified.

Application Code
There is no right or wrong way to code programs, but there are some generally accepted best practices when coding any Java or J2EE application. These practices apply to applications deployed on the mainframe just like other platforms. It's a good idea to review the code and make sure these practices are enforced and followed. A number of sources, such as BEA WebLogic Developer's Journal, cover many of these best practices in great detail. In particular, be aware of things like multithreaded servlets, large objects, very granular Enterprise JavaBeans, etc. Although you may not have the luxury of changing the code, particularly when using packaged applications, you can often configure WebLogic Server to tolerate them. For example, it's generally a good idea to isolate certain components into another instance of WebLogic Server and allow the runtime execution to resolve the actual deployment of the components.

Performance Tools
Once a test environment has been established, the initial load test will form the baseline for future tuning efforts. It is key to have some quantitative way to measure performance. On the mainframe many performance measurement tools exist, such as Wily's Introscope. With tools such as these the actual internals of WebLogic Server and the business application can be monitored, both in QA and production mode. For example, Introscope collects statistics and metrics in a SQL database. This information can be used for performance analysis and capacity planning, and as a way to compare changes made to the application and the underlying server configuration, such as determining whether increasing the heap size in the JVM actually improves performance.

The vmstat command is another useful tool. Results from this command can be written to a file via the pipe utility during a load test in the QA environment. The vmstat command will display a number of runtime resources, including:

  • Paging rates
  • Task status
  • Memory usage
  • CPU times

With this information, problems can be identified very quickly; each of these metrics helps pinpoint areas where a potential performance bottleneck might exist.
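
A capture of this kind might look like the following sketch; the interval, sample count, and file name are assumptions, so pick values that span your whole test run:

```shell
# Capture vmstat samples to a log file while the load test runs,
# so the results can be compared against the baseline afterward.
LOG="vmstat-baseline.log"
vmstat 1 2 > "${LOG}" 2>/dev/null &
VMSTAT_PID=$!
# ... drive the load test here ...
wait "${VMSTAT_PID}" 2>/dev/null || true
```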

Customer Examples
BEA has a number of customers running WebLogic Server on the mainframe, covering a broad range of business situations and environments. In this article we have selected three situations - application redeployment, new application deployment, and deployment in a heterogeneous cluster. Each of these examples discusses an actual customer deployment. Together, the examples span both z/OS and z/Linux customers, and represent a general survey of how WebLogic Server on the mainframe is being used to solve critical business problems.

Application Redeployment
In the first case, the customer had an existing application deployed using WebLogic Server on a UNIX server. Unfortunately, this particular application could only support a small number of concurrent users. To meet the requirements dictated by the business unit the customer had to add UNIX servers, increasing hardware and administrative costs, as well as increasing the complexity of the production deployment. The goals in moving to the mainframe were: (1) to exchange the hardware platform without making any code or architecture changes to the application, (2) to move the application closer to the data accessed, and (3) to compare the performance in a mainframe environment with the performance on UNIX hardware.

Existing Deployment
The existing application was based on J2EE standards and deployed on WebLogic Server v6.0 using JDK 1.3.0. Several connectors were used to access data from legacy applications. All of these were written in Java, which enabled the same connectors to be used when the application was redeployed on the mainframe.

The application was deployed to WebLogic Server on the mainframe without making any changes to the design or the application. The existing data connectors were utilized on the mainframe. When the performance tests were run on the new deployment platform, a single WebLogic Server instance achieved five times the concurrent user load. In addition, this load was achieved while cutting the response time approximately in half.

Summary
In this case the customer was able to utilize the portability of J2EE applications with WebLogic Server, increase application performance with lower response time, lower administrative costs, reduce complexity, and redeploy the application to a new hardware platform without requiring modification to application components or the design. This gave the customer the freedom to choose the hardware platform providing the desired quality of service for their application. In addition, the ease with which the application was deployed to the mainframe suggests that consolidation from a number of UNIX servers to the mainframe is achievable.

New Application Deployment
In many cases, customers will decide to develop and deploy new applications on the mainframe to leverage the quality-of-service features available, such as the WorkLoad Manager. A particular benefit found during numerous evaluations with customer systems is that, as the workload increases, the overall responsiveness of the application does not vary widely. Deploying such services on the mainframe allocates resources efficiently, allowing customers to plan and predict performance accurately.

In this particular case, several new applications were designed and developed specifically for deployment on the mainframe. The business unit had specified that the applications must be highly available and able to support a large number of concurrent users. The customer determined that deploying the same applications in a cluster of distributed servers would require many more servers, increasing complexity, and in many cases a single server would be needed for each application. By deploying on the mainframe the customer was again able to lower operational and administrative costs, reduce application complexity, and consolidate a number of unique server instances on a single mainframe. Utilizing the WorkLoad Manager gave the customer the necessary degree of application availability while effectively utilizing the underlying resources.

Contingency Deployment
One advantage of WebLogic Server, regardless of the underlying hardware, is the unique clustering technology. Clustering provides application redundancy and failover in a distributed environment. WebLogic Server clustering allows heterogeneous hardware servers running the same application to be combined within a single WebLogic cluster. In a recent case, a customer decided to use WebLogic Server instances running on the mainframe as backup nodes for the WebLogic Servers running on UNIX in a separate data center.

The customer had already deployed a number of UNIX servers running WebLogic Server. In the event the UNIX servers experienced some critical failure the mainframe running WebLogic Server would assume a portion or all of the workload. Although the UNIX and mainframe hardware were located in separate data centers, the business need for highly available applications required that these heterogeneous platforms be clustered.

The WebLogic cluster included the WebLogic Server instances running on both the UNIX servers and the mainframe. An HTTP proxy server running in the network DMZ routed initial session requests to the UNIX servers. Using replication groups, session replication was directed to the WebLogic Server instances deployed on the mainframe. Sessions were persisted in memory on the primary and secondary servers.

In this case, by including both UNIX and mainframe WebLogic Server instances located in separate data centers the customer delivered redundancy between data centers without affecting the end user. The customer achieved high availability and reliability, regardless of the underlying hardware platform, and leveraged advanced clustering features such as in-memory session persistence and replication groups in a heterogeneous environment. Regardless of planned outages or critical events, the application continued processing without interruption.

We've examined a number of performance and tuning tips for planning a WebLogic Server deployment on the mainframe. We've provided configuration and initial settings and outlined a plan for achieving optimal application performance. We've also detailed three scenarios where WebLogic Server is currently in use on the mainframe. In the final article in this series on WebLogic Server and the mainframe we will examine many of the options for data and application integration, including how to Web service-enable applications by using WebLogic Server on the mainframe. Be there or be square!

More Stories By Tad Stephens

Tad Stephens is a system engineer based in Atlanta, Georgia for BEA Systems. Tad came to BEA from WebLogic and has over 10 years of distributed computing experience covering a broad range of technologies, including J2EE, Tuxedo, CORBA, DCE, and the Encina transaction system.

