
Attention All BEA Developers: Stop Fearing Mainframe Integration

There are many reasons why organizations fear mainframe integration. Proprietary interfaces, radically different processing environments, lack of support for standard development APIs, and the fact that the people who created the applications have since "moved on" are the most common factors identified when an organization postpones a mainframe integration project.

These are all legitimate concerns, but they must be resolved before organizations can fully realize the value of their investment in mainframe data.

There are an equal number of compelling reasons why organizations should deploy mainframe resources as part of WebLogic application development and integration projects. In this article, I'll review the current state of the industry and discuss the solutions needed to create a comfort zone that allows application developers to access mainframe resources.

Current State of Affairs
The leading reasons for using mainframe resources in conjunction with WebLogic application development and integration projects include leveraging existing logic and data, increasing the value of current investments, and focusing development effort on solving new business problems. For users of the BEA WebLogic Platform who are building high-value applications that depend on IBM mainframe data and applications, getting this integration right means avoiding the headaches associated with a deluge of extra costs and specialized training.

There's a line of demarcation that divides WebLogic development organizations and mainframe systems management organizations. You can't see it, but you know it's there. From a productivity perspective, it's the boundary between comfort zones. WebLogic developers are resistant to learning about the mainframe, and the mainframe "glass house" personnel are focused on maintaining the security and stability of their tightly controlled production environment, preferring to avoid interaction with the chaotic world of open systems applications.

From a technological perspective, "the line" is an effectiveness threshold. Each computing environment has strengths and weaknesses. When a solution pushes the capabilities of a platform too far, problems are inevitable.

Fear and frustration often surround the idea of mainframe integration because most solutions force someone to cross "the line." The results are scalability and failover problems, limited feature set support, additional administrative headcount requirements, ongoing training, and maintenance rollout problems. For the organization, the results are development delays, a backlog of support calls, higher Total Cost of Ownership, and, more important, dissatisfied internal and external clients.

First Step: Identify the Problem
The first step in resolving any communication problem is always the hardest: we must take the time to identify the problem before we can begin to fix it.

In finding a solution to mainframe integration, the logical question is, "How can I move forward without crossing 'the line'?" As with any communication problem, one idea is to use an arbitrator to ensure that individual strengths are communicated and a common solution is achieved. This principle can be applied to mainframe integration as well.

Technologists are not always the best long-term strategists. The point is that we need to employ cross-communication solutions that not only solve present problems but also provide the foundation to address future obstacles.

Outline an Ideal Solution and Stay Within Your Comfort Zone
The next step in jumping the communication hurdle between WebLogic developers and mainframe administrators is to outline an ideal solution. This is an important step and requires the most effort. We can all identify the problem, and once we have the tools in place, we can start fixing it. The important step in between is ensuring that each group is comfortable with the proposed solution, and once that solution is in place, is committed to "cross-the-line" communication.

One idea is to place a communications component on each side of "the line." In this manner, the communications component, or arbitration component, presents each organization with an interface that looks and acts in a familiar manner. For WebLogic developers, this means working with standard development tools where mainframe resources behave like distributed relational databases. For mainframe administrators, the arbitration component provides the enterprise-class monitoring, management, and control facilities required to maintain system availability and keep CPU utilization to a minimum.

Here are several points to keep in mind when building an ideal solution. Remember to think long term. Put together a roadmap that evolves with changing business and technology needs. Find a solution that supports application integration and data integration. If your organization requires access to programs running under IMS/TM or CICS and access to the underlying data stored in DB2, IMS/DB, VSAM, and other databases, search for a solution that provides the WebLogic platform with standard API access to the most important mainframe transaction managers and databases. By doing this, you'll fix today's problems and have a plan to fix issues lurking around the corner, saving time and money along the way.

Communication problems arise when we are forced into uncomfortable situations. The same is true with mainframe integration. To solve this problem we must enable a "cross-the-line" solution to communicate on our behalf. This solution must be transparent to the WebLogic platform, exploiting native OS features for performance optimization and addressing the translation and connectivity issues that can be handled from the distributed platform. To developers, mainframe databases must appear as distributed systems' relational databases, and mainframe programs must look and behave like stored procedures.
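When mainframe programs look like stored procedures, a WebLogic developer can reach them through plain JDBC. The sketch below is illustrative only: the `jdbc:mainframe://` URL scheme and the `CUSTINQ` program name are invented stand-ins for whatever a given adapter product supplies, but the JDBC escape syntax and CallableStatement usage are standard.

```java
import java.sql.*;

public class MainframeCall {
    // Hypothetical adapter URL: a real "cross-the-line" product would
    // document its own driver class and connection string format.
    static final String URL = "jdbc:mainframe://mvs1:4100/CICSPROD";

    // Build the standard JDBC escape syntax for a procedure call.
    static String buildCall(String program, int paramCount) {
        StringBuilder sb = new StringBuilder("{call ").append(program).append("(");
        for (int i = 0; i < paramCount; i++) {
            sb.append(i == 0 ? "?" : ", ?");
        }
        return sb.append(")}").toString();
    }

    // Invoke a CICS/IMS program exactly as if it were a stored procedure
    // on a distributed relational database (CUSTINQ is hypothetical).
    static String lookupCustomer(Connection con, String custId) throws SQLException {
        try (CallableStatement cs = con.prepareCall(buildCall("CUSTINQ", 2))) {
            cs.setString(1, custId);                   // input: customer id
            cs.registerOutParameter(2, Types.VARCHAR); // output: customer name
            cs.execute();
            return cs.getString(2);
        }
    }

    public static void main(String[] args) {
        // Without a live adapter we can only show the call text the
        // developer would write; nothing here is mainframe-specific.
        System.out.println(buildCall("CUSTINQ", 2)); // prints {call CUSTINQ(?, ?)}
    }
}
```

The point of the pattern is that nothing in the developer's code betrays where the program actually runs.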

During the production phase, the solutions must support connection pooling and two-phase commit (2PC) transactions through extensions compatible with the Java Transaction Service (JTS) and the Java Transaction API (JTA). The solutions should also gather diagnostic and performance data, ideally through a choice of J2CA, JDBC, and ODBC adapters backed by an agent that collects the information needed for debugging and troubleshooting.
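Underneath JTA, two-phase commit follows the same prepare-then-commit sequence no matter which resources participate. The toy coordinator below is not a JTA implementation — in production you would use javax.transaction.UserTransaction with XA-capable drivers — it only illustrates the voting and ordering that the transaction manager drives across, say, a DB2 resource and a VSAM resource.

```java
import java.util.*;

public class TwoPhaseSketch {
    interface Participant {
        boolean prepare();  // phase 1: vote yes/no
        void commit();      // phase 2a: unanimous yes
        void rollback();    // phase 2b: any no
    }

    // Minimal resource stand-in; 'canPrepare' simulates the vote.
    static class Resource implements Participant {
        final String name; final boolean canPrepare;
        String state = "active";
        Resource(String name, boolean canPrepare) { this.name = name; this.canPrepare = canPrepare; }
        public boolean prepare()  { state = canPrepare ? "prepared" : "failed"; return canPrepare; }
        public void commit()      { state = "committed"; }
        public void rollback()    { state = "rolledback"; }
    }

    // Phase 1: ask everyone to prepare; phase 2: commit only if all voted yes.
    static String coordinate(List<Participant> parts) {
        for (Participant p : parts) {
            if (!p.prepare()) {                      // any "no" vote aborts all
                parts.forEach(Participant::rollback);
                return "rolled back";
            }
        }
        parts.forEach(Participant::commit);          // unanimous "yes"
        return "committed";
    }

    public static void main(String[] args) {
        Resource db2  = new Resource("DB2", true);
        Resource vsam = new Resource("VSAM", true);
        System.out.println(coordinate(Arrays.asList(db2, vsam)));   // prints committed

        Resource ims = new Resource("IMS", false);                  // votes "no" in phase 1
        System.out.println(coordinate(Arrays.asList(new Resource("DB2", true), ims))); // prints rolled back
    }
}
```

The guarantee the real protocol adds on top of this sketch is durability of the "prepared" state, so a crashed coordinator can still resolve in-doubt transactions.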

Part of the "fear factor" that mainframe integration has evoked over the years is self-inflicted: component architectures that were less than adequate caused problems and confusion from development through deployment, directly because they took individual users out of their comfort zone. In many instances these substandard architectures also had a direct impact on overall performance.

Screen scrapers usually fall into this category, as do mainframe integration paradigms that require multi-tiered server architectures. While screen scraping can offer some instant gratification, performance will always be a problem, and administrators can expect to keep adding network servers, with all their attendant complexity, as throughput demands increase. With screen scraping, XA two-phase commit and direct data access go by the wayside, limiting functionality. Lastly, this paradigm requires someone on the Web server side to understand the host application they are trying to access. Other issues soon surface, such as "recording" mainframe terminal sessions and the maintenance required when mainframe applications change. These tasks take individuals out of their comfort zone.

Likewise, there are problems with multi-tiered mainframe integration architectures. These architectures, usually conceived off the mainframe and eventually migrated to it, have one or more gateways or servers on the open Web server side. Some in this category even have one or more mainframe footprint requirements. While these may not be dubbed screen scrapers and may not require someone with mainframe expertise on the Web server side, they present other problems: poor performance, coupled with the fact that no special attention is paid to any one operating system. Why run on the mainframe if you don't take advantage of any of the benefits of mainframe architectures or subsystems? This paradigm also risks taking mainframe personnel out of their comfort zone by having them install, configure, and maintain a component designed and implemented on a Unix/NT platform within a mainframe address space. In a nutshell, the more moving parts and the more complex the architecture, the greater the likelihood of a point of failure. Worse yet is trying to track down where the failure occurred.

The most effective architecture will allow users and administrators to keep within their comfort zones and satisfy users on both sides of the imaginary "line." To the integrator or Web application developer, this means consistent standards such as JDBC and J2CA (the J2EE Connector Architecture) in a thin-client paradigm. These personnel do not want to get bogged down in terminology that is foreign to them. They also want assurance that the architecture they choose does not have a major performance impact on them, or on the rest of their environment, while maintaining WebLogic security, clustering, load balancing, and failover. Likewise, the mainframe systems programmer or DBA demands security and a product with superior performance and a resource-prudent architecture that can meet the needs of thousands of simultaneous user requests, as well as the ability to monitor and control their own environment. Products of this nature must have a mainframe footprint that makes use of the mainframe subsystems and subtasking that have made the mainframe the cornerstone of reliability, availability, and security (RAS).

Remember the main goal: find a solution that allows developers with no knowledge of the mainframe or mainframe access technology to build effective applications across all components of the WebLogic Platform, allowing for the use of one technology across multiple projects. This isn't as easy as it sounds, but if you keep this objective in mind, you'll solve current and future "cross-the-line" problems without a detrimental effect on the business.

Don't Forget Their Needs
Let's take a quick look at what is needed on the mainframe side of the "line." The ideal solution fully understands application development and mainframe systems, leaving professionals on each side of the line to focus on the task at hand.

A true "cross-the-line" solution simplifies monitoring, management, and control of composite applications for mainframe systems administrators. While some organizations would prefer a solution with no component running on the mainframe, the reality is that such a component is absolutely required to ensure high performance, scalability to meet evolving business and technological needs, and a manageable operational environment. Ineffective solutions are developed and maintained independently of each other, scattered across multiple mainframe subsystems, and do not provide a consistent or comprehensive integration infrastructure for the mainframe.

There is a reason why mainframes have been the de facto standard for security, availability, and reliability for more than 30 years. You need to deploy a solution that extends these features to composite applications, providing maximum reliability and minimum resource utilization without compromising its ability to support thousands of users and thousands of transactions per second. Most important, the solution must mask the complexity of the mainframe.

Once in a While, We Must Step Over the Line
There is one aspect of composite applications where it is absolutely required that developers and mainframe systems personnel step over "the line" - application debugging and problem resolution. In this situation, developers and systems administrators must work together to find the cause of application or performance problems. The process normally entails gathering and reviewing trace logs from the application, any midtier servers, the mainframe integration technology, and the mainframe resource itself.

Unfortunately, if there is a weak spot in many component architectures, it surfaces in the area of problem determination and diagnostics. The last thing an application developer wants to do is to call a mainframe systems programmer and ask for a GTF trace, to piece together with the information they have accumulated from the open Web side. Likewise, the systems programmer doesn't want to spend a lot of time explaining a trace that the application developer doesn't understand in the first place.

The ideal solution will provide a communication interface that shows application calls and mainframe responses, allowing administrators to change operational parameters, and providing "drill-down" capability that displays response time and other critical information that helps administrators maintain service levels and detect problems. In the end, simple problems are corrected automatically, and complex problems can be located and corrected with less effort and disruption.

This type of solution is equally important to application developers. Look for a solution that enables trace and diagnostic aggregation between mainframe and open-system components, eliminating the dreaded call to the mainframe systems programmer to help piece information together. A telltale sign of a good component architecture is little to no performance impact with full trace diagnostics enabled. This eliminates the need for problem replication, speeds up the diagnostic process, and allows both developers and systems administrators to remain productive and work within their comfort zones.
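Trace aggregation of this kind typically rests on a simple mechanism: tagging each request with a correlation id that both sides log, so the two halves of a trace can be merged automatically instead of being pieced together by hand. A minimal illustration follows; the event strings and the `CUSTINQ` program name are invented for the example.

```java
import java.util.*;

public class TraceCorrelation {
    // Merged trace store, keyed by correlation id. In a real product the
    // two sides would log independently and an agent would do the merge.
    static final Map<String, List<String>> trace = new LinkedHashMap<>();

    static void log(String corrId, String side, String event) {
        trace.computeIfAbsent(corrId, k -> new ArrayList<>())
             .add(side + ": " + event);
    }

    public static void main(String[] args) {
        String id = "req-0001";                      // one end-to-end request
        log(id, "weblogic",  "CALL CUSTINQ(42)");    // distributed side
        log(id, "mainframe", "CICS link to CUSTINQ");
        log(id, "mainframe", "response in 18 ms");
        log(id, "weblogic",  "result set returned");

        // One aggregated view of the whole round trip: no GTF trace,
        // no manual piecing-together across "the line."
        trace.get(id).forEach(System.out::println);
    }
}
```

Because both sides stamp the same id, either a developer or a systems programmer can read the full round trip without leaving their own tools.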

Aggregated diagnostic data presents a view of events from the WebLogic Platform to the mainframe and back, allowing developers to troubleshoot and tune applications with little or no assistance from mainframe systems administrators. This significantly reduces the time required to find and fix problems.

Cross-the-Line Communication Removes the Fear
True cross-the-line communication allows us to recognize the challenges that face our mainframe counterparts without leaving our comfort zone. There are many points to keep in mind when searching for a solution. After identifying the problem, spend time outlining an ideal solution that will solve immediate problems and evolve with the organization to meet future challenges.

Next, it's important to evaluate available solutions with a critical eye, knowing that the best solution will provide an agnostic platform for application developers and mainframe administrators to stay within their comfort zone.

By not forcing any organization to cross "the line," developers and systems personnel work well within their comfort zones, computing systems work effectively within standard application implementations, and the overall organization gains increased productivity and business agility.

More Stories By Doc Mills

Doc Mills is currently the director of product marketing for NEON Systems, the leader in enterprise-class mainframe adapters that reduce integration complexity. He has more than 18 years of experience with mainframe networking and integration.
