
Attention All BEA Developers: Stop Fearing Mainframe Integration

There are many reasons why organizations fear mainframe integration. Proprietary interfaces, radically different processing environments, lack of support for standard development APIs, and the fact that the people who created the applications have since "moved on" are the most common factors identified when an organization postpones a mainframe integration project.

These are all reasonable concerns, but at the same time they must be resolved before organizations can fully realize the value of their investment in mainframe data.

There are an equal number of compelling reasons why organizations should deploy mainframe resources as part of WebLogic application development and integration projects. In this article, I'll review the current state of the industry and discuss the solutions needed to create a comfort zone that allows application developers to access mainframe resources.

Current State of Affairs
The leading reasons for using mainframe resources in conjunction with WebLogic application development and integration projects include leveraging existing logic and data, increasing the value of current investments, and focusing development effort on solving new business problems. For users of the BEA WebLogic Platform who are building high-value applications that depend on IBM mainframe data and applications, getting this integration right means avoiding the headaches associated with a deluge of extra costs and specialized training.

There's a line of demarcation that divides WebLogic development organizations and mainframe systems management organizations. You can't see it, but you know it's there. From a productivity perspective, it's the boundary between comfort zones. WebLogic developers are resistant to learning about the mainframe, and the mainframe "glass house" personnel are focused on maintaining the security and stability of their tightly controlled production environment, preferring to avoid interaction with the chaotic world of open systems applications.

From a technological perspective, "the line" is an effectiveness threshold. Each computing environment has strengths and weaknesses. When a solution pushes the capabilities of a platform too far, problems are inevitable.

Fear and frustration often surround the idea of mainframe integration because most solutions force someone to cross "the line." The results are scalability and failover problems, limited feature set support, additional administrative headcount requirements, ongoing training, and maintenance rollout problems. For the organization, the results are development delays, a backlog of support calls, higher Total Cost of Ownership, and, more important, dissatisfied internal and external clients.

First Step: Identify the Problem
The first step in resolving any communication problem is always the hardest: we must take the time to identify the problem before we can begin to fix it.

In finding a solution to mainframe integration, the logical question is, "How can I move forward without crossing 'the line'?" As with any communication problem, one idea is to use an arbitrator to ensure that individual strengths are communicated and a common solution is achieved. This principle can be applied to mainframe integration as well.

Technologists are not always the best long-term strategists. The point is that we need to employ cross-communication solutions that not only solve present problems but also provide the foundation to address future obstacles.

Outline an Ideal Solution and Stay Within Your Comfort Zone
The next step in jumping the communication hurdle between WebLogic developers and mainframe administrators is to outline an ideal solution. This step requires the most effort. We can all identify the problem, and once we have the tools in place, we can start fixing it. The critical step in between is ensuring that each group is comfortable with the proposed solution and, once it is in place, committed to "cross-the-line" communication.

One idea is to place a communications component on each side of "the line." In this manner, the communications component, or arbitration component, presents each organization with an interface that looks and acts in a familiar manner. For WebLogic developers, this means working with standard development tools where mainframe resources behave like distributed relational databases. For mainframe administrators, the arbitration component provides the enterprise-class monitoring, management, and control facilities required to maintain system availability and ensure minimal CPU utilization.
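To make the idea concrete, here is a minimal sketch of the developer's side of such an arbitration component, written against nothing but the standard java.sql API. The driver class, JDBC URL, host, and table names are hypothetical stand-ins for whatever a given adapter actually provides:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Sketch: a mainframe file or table exposed by a hypothetical adapter
// looks like any other relational table to the WebLogic developer.
public class MainframeQuerySketch {
    public static void main(String[] args) throws Exception {
        Class.forName("com.example.mainframe.jdbc.Driver");   // hypothetical driver
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mainframe://mvshost:5000/PROD", "user", "pass");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT CUST_NO, BALANCE FROM CUSTOMER WHERE REGION = ?")) {
            ps.setString(1, "SOUTHWEST");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("CUST_NO") + " "
                            + rs.getBigDecimal("BALANCE"));
                }
            }
        }
    }
}

Nothing in that code betrays whether CUSTOMER lives in DB2, VSAM, or IMS/DB, and that is precisely the comfort zone the arbitration component is meant to preserve.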

Here are several points to keep in mind when building an ideal solution. Remember to think long term. Put together a roadmap that evolves with changing business and technology needs. Find a solution that supports application integration and data integration. If your organization requires access to programs running under IMS/TM or CICS and access to the underlying data stored in DB2, IMS/DB, VSAM, and other databases, search for a solution that provides the WebLogic platform with standard API access to the most important mainframe transaction managers and databases. By doing this, you'll fix today's problems and have a plan to fix issues lurking around the corner, saving time and money along the way.

Communication problems arise when we are forced into uncomfortable situations. The same is true with mainframe integration. To solve this problem we must enable a "cross-the-line" solution to communicate on our behalf. This solution must be transparent to the WebLogic platform, exploiting native OS features for performance optimization and addressing the translation and connectivity issues that can be handled from the distributed platform. To developers, mainframe databases must appear as distributed systems' relational databases, and mainframe programs must look and behave like stored procedures.
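For example, a mainframe program surfaced as a stored procedure could be invoked with an ordinary JDBC CallableStatement. A sketch, with a hypothetical JNDI data source name and procedure name:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Types;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Sketch: a CICS or IMS transaction invoked as if it were a stored procedure.
public class MainframeProgramSketch {
    public String lookupAccountStatus(String acctNo) throws Exception {
        DataSource ds = (DataSource) new InitialContext()
                .lookup("jdbc/MainframeCICS");              // hypothetical pool
        try (Connection conn = ds.getConnection();
             CallableStatement cs = conn.prepareCall("{call ACCT_INQUIRY(?, ?)}")) {
            cs.setString(1, acctNo);                        // input: account number
            cs.registerOutParameter(2, Types.VARCHAR);      // output: status code
            cs.execute();                                   // runs the host program
            return cs.getString(2);
        }
    }
}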

During the production phase, the solution must support connection pooling and two-phase commit (2PC) transactions through extensions compatible with the Java Transaction Service (JTS) and the Java Transaction API (JTA). It should also offer a choice of J2CA, JDBC, and ODBC adapters, along with an agent that gathers diagnostic and performance data for debugging and troubleshooting.
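In application code, a JTA transaction spanning a mainframe database and a distributed one might look like the sketch below. The JNDI names are hypothetical, and both data sources are assumed to be XA-enabled pools managed by WebLogic; the javax.transaction and JNDI calls themselves are standard:

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

// Sketch: one unit of work across a mainframe table and a local table.
public class TwoPhaseCommitSketch {
    public void shipOrder(String orderId) throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction tx =
                (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource mainframeDs = (DataSource) ctx.lookup("jdbc/MainframeDB2XA");
        DataSource localDs = (DataSource) ctx.lookup("jdbc/LocalOracleXA");

        tx.begin();
        try (Connection mf = mainframeDs.getConnection();
             Connection local = localDs.getConnection()) {
            try (PreparedStatement ps = mf.prepareStatement(
                    "UPDATE ORDERS SET STATUS = 'SHIPPED' WHERE ORDER_ID = ?")) {
                ps.setString(1, orderId);
                ps.executeUpdate();
            }
            try (PreparedStatement ps = local.prepareStatement(
                    "INSERT INTO SHIPMENTS (ORDER_ID) VALUES (?)")) {
                ps.setString(1, orderId);
                ps.executeUpdate();
            }
            tx.commit();     // 2PC: both resources commit, or neither does
        } catch (Exception e) {
            try { tx.rollback(); } catch (Exception ignored) { }
            throw e;
        }
    }
}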

Part of the "fear factor" that mainframe integration has evoked over the years has been self-inflicted by component architectures that were less than adequate and that caused problems and confusion from development through deployment. This is directly due to taking individual users out of their comfort zones. In many instances these substandard architectures have a direct impact on overall performance.

Screen scrapers usually fall into this category, but so do mainframe integration paradigms that require multi-tiered server architectures. While screen scraping can offer some instant gratification, performance will always be a problem, and administrators can expect to keep adding network servers, and the complexity that comes with them, as throughput demands increase. With screen scraping, XA two-phase commit and direct data access go by the wayside, limiting functionality. This paradigm also requires someone on the Web server side to understand the host application they are trying to access. Other issues soon surface, such as "recording" mainframe terminal sessions and the maintenance required when mainframe applications change. These tasks take individuals out of their comfort zones.

Likewise, there are problems with multi-tiered mainframe integration architectures. These architectures, usually conceived off the mainframe and then eventually migrated to it, have one or more gateways or servers on the open Web server side. Some in this category even require one or more mainframe footprints. While these may not be dubbed screen scrapers and may not require mainframe expertise to manage or maintain on the Web server side, they present other problems: poor performance, coupled with the fact that no special attention is paid to any one operating system. Why run on the mainframe if you don't take advantage of any of the benefits associated with mainframe architectures or subsystems? This paradigm also has the potential to take mainframe personnel out of their comfort zone by having them install, configure, and maintain a component designed and implemented on a Unix/NT platform within a mainframe address space. In a nutshell, the more moving parts and the more complex the architecture, the greater the likelihood of a point of failure. Worse yet is trying to track down where the failure occurred.

The most effective architecture will allow users and administrators to keep within their comfort zones and satisfy users on both sides of the imaginary "line." To the integrator or Web application developer, this means consistent standards such as JDBC and J2CA (the J2EE Connector Architecture) in a thin-client paradigm. These personnel do not want to get bogged down in terminologies that are foreign to them. They also want assurance that the architecture they choose will not have a major performance impact on them, or on the rest of their environment, while maintaining WebLogic security, clustering, load balancing, and failover. Likewise, the mainframe systems programmer or DBA demands security, superior performance, and a resource-prudent architecture that can meet the needs of thousands of simultaneous user requests, while retaining the ability to monitor and control their own environment. Products of this nature require a mainframe footprint that makes use of the subsystems and subtasking that have made the mainframe the cornerstone of reliability, availability, and security (RAS).
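On the J2CA side, the standard Common Client Interface (CCI) keeps the developer in that same thin-client comfort zone. A sketch, assuming the adapter vendor supplies the InteractionSpec implementation; the JNDI name, record name, and field names are hypothetical:

import javax.naming.InitialContext;
import javax.resource.cci.Connection;
import javax.resource.cci.ConnectionFactory;
import javax.resource.cci.Interaction;
import javax.resource.cci.InteractionSpec;
import javax.resource.cci.MappedRecord;
import javax.resource.cci.RecordFactory;

// Sketch: invoking a host function through the standard CCI contracts.
public class CciInteractionSketch {
    public MappedRecord callHostFunction(InteractionSpec spec, String custNo)
            throws Exception {
        ConnectionFactory cf = (ConnectionFactory) new InitialContext()
                .lookup("eis/MainframeAdapter");        // hypothetical J2CA pool
        Connection conn = cf.getConnection();
        try {
            Interaction ix = conn.createInteraction();
            RecordFactory rf = cf.getRecordFactory();
            MappedRecord in = rf.createMappedRecord("CustomerInquiry");
            in.put("CUST_NO", custNo);
            // execute() ships the request across "the line" and maps the reply
            return (MappedRecord) ix.execute(spec, in);
        } finally {
            conn.close();                               // return to the pool
        }
    }
}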

Remember the main goal: find a solution that allows developers with no knowledge of the mainframe or mainframe access technology to build effective applications across all components of the WebLogic Platform, allowing for the use of one technology across multiple projects. This isn't as easy as it sounds, but if you keep this objective in mind, you'll solve current and future "cross-the-line" problems without a detrimental effect on the business.

Don't Forget Their Needs
Let's take a quick look at what is needed on the mainframe side of the "line." The ideal solution accounts for both application development and mainframe systems, leaving professionals on each side of the line free to focus on the task at hand.

A true "cross-the-line" solution simplifies monitoring, management, and control of composite applications for mainframe systems administrators. While some organizations would prefer a solution that did not include a solution running on the mainframe, the reality is that this solution is absolutely required in order to ensure high performance, scalability to meet evolving business and technological needs, and a manageable operational environment. Ineffective solutions are developed and maintained independently, of each other and scattered throughout multiple mainframe subsystems, and do not provide a consistent or comprehensive integration infrastructure for the mainframe.

There is a reason why mainframes have been the de facto standard for security, availability, and reliability for more than 30 years. You need to deploy a solution that extends these features to composite applications, providing maximum reliability and minimum resource utilization without compromising its ability to support thousands of users and thousands of transactions per second. Most important, the solution must mask the complexity of the mainframe.

Once in a While, We Must Step Over the Line
There is one aspect of composite applications where it is absolutely required that developers and mainframe systems personnel step over "the line" - application debugging and problem resolution. In this situation, developers and systems administrators must work together to find the cause of application or performance problems. The process normally entails gathering and reviewing trace logs from the application, any midtier servers, the mainframe integration technology, and the mainframe resource itself.

Unfortunately, if there is a weak spot in many component architectures, it surfaces in the area of problem determination and diagnostics. The last thing an application developer wants to do is to call a mainframe systems programmer and ask for a GTF trace, to piece together with the information they have accumulated from the open Web side. Likewise, the systems programmer doesn't want to spend a lot of time explaining a trace that the application developer doesn't understand in the first place.

The ideal solution will provide a communication interface that shows application calls and mainframe responses, allowing administrators to change operational parameters, and providing "drill-down" capability that displays response time and other critical information that helps administrators maintain service levels and detect problems. In the end, simple problems are corrected automatically, and complex problems can be located and corrected with less effort and disruption.

This type of solution is equally important to application developers. Look for one that enables trace and diagnostic aggregation between mainframe and open-system components, eliminating the dreaded call to the mainframe systems programmer to help piece information together. Aggregated diagnostic data presents a view of events from the WebLogic Platform to the mainframe and back, allowing developers to troubleshoot and tune applications with little or no assistance from mainframe systems administrators. A telltale sign of a good component architecture is little to no performance impact with full trace diagnostics enabled. This eliminates the need for problem replication, significantly reduces the time required to find and fix problems, and allows both developers and systems administrators to remain productive within their comfort zones.
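Even plain JDBC offers a starting point on the WebLogic side of "the line." A minimal sketch, assuming the adapter's JDBC driver honors the standard DriverManager log writer; aggregating this output with mainframe-side traces remains the job of the vendor's tooling:

import java.io.PrintWriter;
import java.sql.DriverManager;

// Sketch: capture driver-level trace without touching the mainframe.
public class TraceSketch {
    public static void enableDriverTrace() throws Exception {
        PrintWriter log = new PrintWriter("mainframe-adapter-trace.log");
        DriverManager.setLogWriter(log);   // drivers that honor this now log calls
    }
}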

Cross-the-Line Communication Removes the Fear
True cross-the-line communication allows us to recognize the challenges that face our mainframe counterparts without leaving our comfort zone. There are many points to keep in mind when searching for a solution. After identifying the problem, spend time outlining an ideal solution that will solve immediate problems and evolve with the organization to meet future challenges.

Next, it's important to evaluate available solutions with a critical eye, knowing that the best solution will provide an agnostic platform for application developers and mainframe administrators to stay within their comfort zone.

By not forcing any organization to cross "the line," developers and systems personnel work well within their comfort zones, computing systems work effectively within standard application implementations, and the overall organization gains increased productivity and business agility.

More Stories By Doc Mills

Doc Mills is currently the director of product marketing for NEON Systems, the leader in enterprise-class mainframe adapters that reduce integration complexity. He has more than 18 years of experience with mainframe networking and integration.
