Attention All BEA Developers: Stop Fearing Mainframe Integration

There are many reasons why organizations fear mainframe integration. Proprietary interfaces, radically different processing environments, lack of support for standard development APIs, and the fact that the people who created the applications have since "moved on" are the factors most commonly cited when an organization postpones a mainframe integration project.

These are all reasonable concerns, but they must be resolved if organizations are to fully realize the value of their investment in mainframe data.

There are an equal number of compelling reasons why organizations should deploy mainframe resources as part of WebLogic application development and integration projects. In this article, I'll review the current state of the industry and discuss the solutions needed to create a comfort zone that allows application developers to access mainframe resources.

Current State of Affairs
The leading reasons for using mainframe resources in conjunction with WebLogic application development and integration projects include leveraging existing logic and data, increasing the value of current investments, and focusing development effort on solving new business problems. For users of the BEA WebLogic Platform who are building high-value applications that depend on IBM mainframe data and applications, getting this integration right means avoiding the headaches associated with a deluge of extra costs and specialized training.

There's a line of demarcation that divides WebLogic development organizations and mainframe systems management organizations. You can't see it, but you know it's there. From a productivity perspective, it's the boundary between comfort zones. WebLogic developers are resistant to learning about the mainframe, and the mainframe "glass house" personnel are focused on maintaining the security and stability of their tightly controlled production environment, preferring to avoid interaction with the chaotic world of open systems applications.

From a technological perspective, "the line" is an effectiveness threshold. Each computing environment has strengths and weaknesses. When a solution pushes the capabilities of a platform too far, problems are inevitable.

Fear and frustration often surround the idea of mainframe integration because most solutions force someone to cross "the line." The results are scalability and failover problems, limited feature set support, additional administrative headcount requirements, ongoing training, and maintenance rollout problems. For the organization, the results are development delays, a backlog of support calls, higher Total Cost of Ownership, and, more important, dissatisfied internal and external clients.

First Step: Identify the Problem
The first step in resolving any communication problem is always the hardest. We must take the time to identify the problem before we can begin to fix it.

In finding a solution to mainframe integration, the logical question is, "How can I move forward without crossing 'the line'?" As with any communication problem, one idea is to use an arbitrator to ensure that individual strengths are communicated and a common solution is achieved. This principle can be applied to mainframe integration as well.

Technologists are not always the best long-term strategists. The point is that we need to employ cross-communication solutions that not only solve present problems but also provide a foundation for addressing future obstacles.

Outline an Ideal Solution and Stay Within Your Comfort Zone
The next step in jumping the communication hurdle between WebLogic developers and mainframe administrators is to outline an ideal solution. This is an important step and requires the most effort. We can all identify the problem, and once we have the tools in place, we can start fixing it. The important step in between is ensuring that each group is comfortable with the proposed solution, and once that solution is in place, is committed to "cross-the-line" communication.

One idea is to place a communications component on each side of "the line." In this manner, the communications component, or arbitration component, presents each organization with an interface that looks and acts in a familiar manner. For WebLogic developers, this means working with standard development tools where mainframe resources behave like distributed relational databases. For mainframe administrators, the arbitration component provides the enterprise-class monitoring, management, and control facilities required to maintain system availability and minimize CPU utilization.
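The developer-side half of such an arbitration component can be pictured as a thin facade: application code issues ordinary SQL and never sees host-specific protocols. The following is a minimal, language-agnostic sketch (in Python rather than Java for brevity); the `MainframeFacade` class, the `accounts` table, and the in-memory database standing in for host data are all hypothetical, invented for illustration.

```python
# Illustrative sketch only: a facade that presents a (simulated) mainframe
# resource through a standard relational API, hiding host-specific details.
import sqlite3


class MainframeFacade:
    """Hypothetical distributed-side arbitration component."""

    def __init__(self):
        # In a real deployment this would open a managed session to the host
        # component; here an in-memory table plays the part of DB2/VSAM data.
        self._db = sqlite3.connect(":memory:")
        self._db.execute("CREATE TABLE accounts (id INTEGER, balance REAL)")
        self._db.executemany("INSERT INTO accounts VALUES (?, ?)",
                             [(1, 100.0), (2, 250.5)])

    def query(self, sql, params=()):
        # Developers issue ordinary parameterized SQL; translation to host
        # protocols would happen behind this interface.
        return self._db.execute(sql, params).fetchall()


facade = MainframeFacade()
rows = facade.query("SELECT balance FROM accounts WHERE id = ?", (2,))
print(rows)  # [(250.5,)]
```

The point of the sketch is the shape of the interface, not the storage: the developer's comfort zone is preserved because the call site is indistinguishable from any distributed relational database access.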

Here are several points to keep in mind when building an ideal solution. Remember to think long term. Put together a roadmap that evolves with changing business and technology needs. Find a solution that supports application integration and data integration. If your organization requires access to programs running under IMS/TM or CICS and access to the underlying data stored in DB2, IMS/DB, VSAM, and other databases, search for a solution that provides the WebLogic platform with standard API access to the most important mainframe transaction managers and databases. By doing this, you'll fix today's problems and have a plan to fix issues lurking around the corner, saving time and money along the way.

Communication problems arise when we are forced into uncomfortable situations. The same is true with mainframe integration. To solve this problem we must enable a "cross-the-line" solution to communicate on our behalf. This solution must be transparent to the WebLogic platform, exploiting native OS features for performance optimization and addressing the translation and connectivity issues that can be handled from the distributed platform. To developers, mainframe databases must appear as distributed relational databases, and mainframe programs must look and behave like stored procedures.
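The "programs look like stored procedures" idea can be sketched as a call-by-name wrapper: the developer invokes a named host program with arguments and gets a structured result back, with all marshaling hidden. This is a hypothetical illustration; the `call_mainframe_program` function, the `GETBAL` program name, and the stub registry are invented for the example.

```python
# Illustrative sketch only: invoking a hypothetical mainframe transaction
# (e.g., a CICS program) as if it were a stored procedure.
def call_mainframe_program(name, *args):
    # A real adapter would marshal the arguments into the host program's
    # expected format and dispatch over the wire; local stubs stand in here.
    stubs = {
        "GETBAL": lambda acct: {"account": acct, "balance": 450.75},
    }
    if name not in stubs:
        raise KeyError(f"no such host program: {name}")
    return stubs[name](*args)


result = call_mainframe_program("GETBAL", "000123")
print(result["balance"])  # 450.75
```

From the developer's perspective the call is indistinguishable from any other procedure invocation, which is exactly the transparency the paragraph above argues for.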

During the production phase, the solutions must support connection pooling and two-phase commit (2PC) transactions through extensions compatible with the Java Transaction Service (JTS) and the Java Transaction API (JTA). The solutions should also offer a choice of J2CA, JDBC, and ODBC adapters, along with an agent that gathers diagnostic and performance data for debugging and troubleshooting.
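Connection pooling matters here because a fresh host session is expensive; a pool amortizes that cost across requests. The sketch below shows the pattern in miniature (Python for brevity; in practice WebLogic's JDBC data sources provide this). The `ConnectionPool` class and the in-memory connections standing in for host sessions are illustrative assumptions, not a real product API.

```python
# Minimal connection-pool sketch: connections are created once up front and
# reused, so each request avoids the cost of establishing a new host session.
import queue
import sqlite3


class ConnectionPool:
    def __init__(self, size):
        self._pool = queue.Queue()
        for _ in range(size):
            # Stand-in for an expensive mainframe connection.
            self._pool.put(sqlite3.connect(":memory:",
                                           check_same_thread=False))

    def acquire(self, timeout=5.0):
        # Blocks until a connection is free, bounding concurrent host load.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)


pool = ConnectionPool(size=2)
conn = pool.acquire()
value = conn.execute("SELECT 1 + 1").fetchone()[0]
pool.release(conn)
print(value)  # 2
```

Bounding the pool size also protects the mainframe side: the host never sees more simultaneous sessions than the pool allows, which is part of the resource-prudent behavior administrators demand.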

Part of the "fear factor" that mainframe integration has evoked over the years has been self-inflicted by component architectures that have been less than adequate and that caused problems and confusion from development through deployment. This is directly due to taking individual users out of their comfort zone. In many instances these substandard architectures have a direct impact on overall performance.

Screen scrapers usually fall into this category, but the same problems can be found in mainframe integration paradigms that require multi-tiered server architectures. While screen scraping can offer some instant gratification, performance will always be a problem, and administrators can expect to keep adding network servers, and the complexity that comes with them, as throughput demands increase. With screen scraping, XA two-phase commit and direct data access go by the wayside, limiting functionality. This paradigm also requires someone on the Web server side to understand the application they are trying to access on the host. Other issues soon surface, such as "recording" mainframe terminal sessions and the maintenance burden when mainframe applications change. These tasks take individuals out of their comfort zone.

Likewise, there are problems with multi-tiered mainframe integration architectures. These architectures, usually conceived off the mainframe and eventually migrated to it, place one or more gateways or servers on the open Web server side; some even require one or more mainframe footprints. While these may not be dubbed screen scrapers and may not require someone with mainframe expertise to manage or maintain the Web server side, they present other problems. The core issue is performance, coupled with the fact that no special attention is paid to any one operating system. Why run on the mainframe if you don't take advantage of the benefits of mainframe architectures and subsystems? This paradigm also risks taking the mainframe personnel out of their comfort zone by having them install, configure, and maintain a component designed and implemented on a Unix/NT platform within a mainframe address space. In a nutshell: the more moving parts and the more complex the architecture, the greater the likelihood of a point of failure. Worse yet is trying to track down where the failure occurred.

The most effective architecture will allow users and administrators to keep within their comfort zones and satisfy users on both sides of the imaginary "line." To the integrator or Web application developer, this means consistent standards such as JDBC and J2CA (the J2EE Connector Architecture) in a thin-client paradigm. These personnel do not want to get bogged down in terminology that is foreign to them. They also want assurance that the architecture they choose does not have a major performance impact on them, or on the rest of their environment, while maintaining WebLogic security, clustering, load balancing, and failover. Likewise, the mainframe systems programmer or DBA demands security, superior performance, and a resource-prudent architecture that can meet the needs of thousands of simultaneous user requests, along with the ability to monitor and control their own environment. A product of this nature needs a mainframe footprint that makes use of the subsystems and subtasking that have made the mainframe the cornerstone of reliability, availability, and security (RAS).

Remember the main goal: find a solution that allows developers with no knowledge of the mainframe or mainframe access technology to build effective applications across all components of the WebLogic Platform, allowing for the use of one technology across multiple projects. This isn't as easy as it sounds, but if you keep this objective in mind, you'll solve current and future "cross-the-line" problems without a detrimental effect on the business.

Don't Forget Their Needs
Let's take a quick look at what is needed on the mainframe side of the "line." The ideal solution fully understands application development and mainframe systems, leaving professionals on each side of the line to focus on the task at hand.

A true "cross-the-line" solution simplifies monitoring, management, and control of composite applications for mainframe systems administrators. While some organizations would prefer to avoid any component running on the mainframe, the reality is that such a component is absolutely required to ensure high performance, scalability to meet evolving business and technological needs, and a manageable operational environment. Ineffective solutions are developed and maintained independently of each other, are scattered throughout multiple mainframe subsystems, and do not provide a consistent or comprehensive integration infrastructure for the mainframe.

There is a reason why mainframes have been the de facto standard for security, availability, and reliability for more than 30 years. You need to deploy a solution that extends these features to composite applications, providing maximum reliability and minimum resource utilization without compromising its ability to support thousands of users and thousands of transactions per second. Most important, the solution must mask the complexity of the mainframe.

Once in a While, We Must Step Over the Line
There is one aspect of composite applications where it is absolutely required that developers and mainframe systems personnel step over "the line" - application debugging and problem resolution. In this situation, developers and systems administrators must work together to find the cause of application or performance problems. The process normally entails gathering and reviewing trace logs from the application, any midtier servers, the mainframe integration technology, and the mainframe resource itself.
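The mechanics of that joint review can be sketched simply: events collected on each side of "the line" are merged on a common timestamp so a single request can be followed from the application tier to the host and back. The log format, tier names, and events below are hypothetical, invented for illustration.

```python
# Illustrative sketch of cross-tier trace aggregation: application-side and
# host-side events are merged on a shared timestamp into one timeline.
app_trace = [
    (1.002, "app", "JDBC call issued"),
    (1.250, "app", "result set received"),
]
host_trace = [
    (1.010, "host", "DB2 query start"),
    (1.240, "host", "DB2 query end"),
]

# Tuples sort by their first element, so this orders events by timestamp.
merged = sorted(app_trace + host_trace)
for ts, tier, event in merged:
    print(f"{ts:.3f} [{tier}] {event}")
```

Even this toy timeline shows the payoff: the gap between "JDBC call issued" and "DB2 query start" is the integration layer's overhead, visible without anyone having to hand-correlate two separately formatted traces.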

Unfortunately, if there is a weak spot in many component architectures, it surfaces in the area of problem determination and diagnostics. The last thing an application developer wants to do is call a mainframe systems programmer and ask for a GTF trace to piece together with the information they have accumulated from the open Web side. Likewise, the systems programmer doesn't want to spend a lot of time explaining a trace that the application developer doesn't understand in the first place.

The ideal solution will provide a communication interface that shows application calls and mainframe responses, allowing administrators to change operational parameters, and providing "drill-down" capability that displays response time and other critical information that helps administrators maintain service levels and detect problems. In the end, simple problems are corrected automatically, and complex problems can be located and corrected with less effort and disruption.

This type of solution is equally important to application developers. Look for a solution that aggregates trace and diagnostic data across the mainframe and open-system components, presenting a view of events from the WebLogic Platform to the mainframe and back. This eliminates the dreaded call to the mainframe systems programmer to help piece information together and lets developers troubleshoot and tune applications with little or no assistance from mainframe systems administrators. A telltale sign of a good component architecture is little to no performance impact with full trace diagnostics enabled, which removes the need to replicate problems and speeds up the diagnostic process. The result: problems are found and fixed faster, and both developers and systems administrators remain productive within their comfort zones.

Cross-the-Line Communication Removes the Fear
True cross-the-line communication allows us to recognize the challenges that face our mainframe counterparts without leaving our comfort zone. There are many points to keep in mind when searching for a solution. After identifying the problem, spend time outlining an ideal solution that will solve immediate problems and evolve with the organization to meet future challenges.

Next, it's important to evaluate available solutions with a critical eye, knowing that the best solution will provide an agnostic platform for application developers and mainframe administrators to stay within their comfort zone.

By not forcing any organization to cross "the line," developers and systems personnel work well within their comfort zones, computing systems work effectively within standard application implementations, and the overall organization gains increased productivity and business agility.

More Stories By Doc Mills

Doc Mills is currently the director of product marketing for NEON Systems, the leader in enterprise-class mainframe adapters that reduce integration complexity. He has more than 18 years of experience with mainframe networking and integration.
