Canonical Message Formats

Avoiding the Pitfalls

As the scope of enterprise integration grows, IT organizations are demanding greater efficiency and agility from their architectures and are moving away from point-to-point integration, which is proving to be increasingly cumbersome to build and maintain.

They are migrating towards adaptive platforms such as BEA's 8.1 Platform and many-to-many architectures that support linear cost growth as well as simplified maintenance. To connect systems, WebLogic developers are shifting away from creating individual adapters between every pair of systems in favor of Web services. For data, they are shifting away from individual mappings between data sources and targets in favor of Liquid Data enhanced with canonical messages. But implementing canonical messages can be rife with problems, such as infighting between departments and rigid models that become obstacles to progress. Fortunately, these pitfalls can be avoided through proper architecture.

Introduction
As IT professionals engineer greater efficiency and flexibility into their enterprises, the industry has seen a shift from point-to-point connectivity to platforms such as BEA WebLogic that support many-to-many integration. The heart of this shift is a service-oriented architecture (SOA) that brings the world one step closer to IT infrastructure that is truly plug-and-play. However, any CIO who has implemented an SOA can tell you that the SOA still has some hurdles to clear before it delivers on the plug-and-play future. The most imposing of these hurdles is data interoperability.

Data documents are exchanged blindly because the SOA offers no systematic way to standardize how inbound data is interpreted and validated. Even if an organization standardizes on an industry document-type definition (DTD), there is still no automated way to reconcile, for example, the strings "two," "2," and "2.00."
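The SOA itself offers no hook for this kind of reconciliation, so each consuming application ends up owning code like the following. This is a minimal Java sketch, with all class and method names invented for illustration:

import java.math.BigDecimal;
import java.util.Map;

public class QuantityNormalizer {
    // Minimal lookup for spelled-out numbers; a real system would need a
    // far richer, locale-aware vocabulary.
    private static final Map<String, BigDecimal> WORDS = Map.of(
            "zero", BigDecimal.ZERO,
            "one", BigDecimal.ONE,
            "two", new BigDecimal("2"),
            "three", new BigDecimal("3"));

    /** Reconciles "two", "2", and "2.00" to the same canonical value. */
    public static BigDecimal normalize(String raw) {
        String s = raw.trim().toLowerCase();
        BigDecimal word = WORDS.get(s);
        if (word != null) {
            return word;
        }
        // stripTrailingZeros() makes 2 and 2.00 compare as the same value
        return new BigDecimal(s).stripTrailingZeros();
    }

    public static void main(String[] args) {
        System.out.println(normalize("two"));   // 2
        System.out.println(normalize("2"));     // 2
        System.out.println(normalize("2.00"));  // 2
    }
}

Without shared infrastructure, every application that receives the document must carry some variant of this logic.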

XML Schema Definitions (XSDs) and Liquid Data help with data typing and transformations, but that's just the tip of the iceberg. What if the value of 2 is derived from multiple data sources? What if another system reports that the value is really 3? When should the value 2.001 be rounded to 2? What if a business requirement dictates that the value must be greater than 5?
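To make those questions concrete, here is one hypothetical set of answers expressed as code. The policy choices (prefer the most recent report, a 0.01 rounding tolerance, a minimum of 5) are invented for illustration; the point is that today such policies live as hand code in each application:

import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.List;

public class ValueReconciler {
    private static final BigDecimal TOLERANCE = new BigDecimal("0.01");
    private static final BigDecimal MINIMUM = new BigDecimal("5");

    /** One possible policy when sources disagree: keep the newest report. */
    static BigDecimal reconcile(List<BigDecimal> reportsNewestFirst) {
        return reportsNewestFirst.get(0);
    }

    /** Rounds 2.001 to 2 only when the difference is within tolerance. */
    static BigDecimal round(BigDecimal v) {
        BigDecimal rounded = v.setScale(0, RoundingMode.HALF_UP);
        return v.subtract(rounded).abs().compareTo(TOLERANCE) <= 0 ? rounded : v;
    }

    /** Business rule: the value must be greater than 5. */
    static void validate(BigDecimal v) {
        if (v.compareTo(MINIMUM) <= 0) {
            throw new IllegalArgumentException("value must be greater than 5: " + v);
        }
    }

    public static void main(String[] args) {
        // Two sources disagree (3 reported most recently, 2 earlier): keep 3.
        System.out.println(reconcile(List.of(new BigDecimal("3"), new BigDecimal("2"))));
        System.out.println(round(new BigDecimal("2.001"))); // 2 (within tolerance)
        validate(new BigDecimal("7"));  // passes: 7 > 5
        validate(new BigDecimal("2"));  // throws: must be greater than 5
    }
}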

By most accounts, over 40% of the cost of integrating applications is spent writing point-to-point adapters for the transformation, aggregation, validation, and business operations needed to effectively exchange data - and much of the code is redundant, inconsistent, and costly to maintain. This approach results in a brittle architecture that is expensive to build and difficult to change without more hand coding.

Canonical models are a step in the right direction, but exchange models - a superset of canonical models - solve the entire problem with a many-to-many solution for modeling, automating, and managing the exchange of data across an SOA. They capture data reconciliation and validation operations as metadata, defined once but used consistently across the SOA. In this way, rules are enforced, reused, and easily modified without incurring additional integration costs. Without such infrastructure in place, data interoperability problems threaten the flexibility and reuse of the SOA - the incentives for implementing an SOA in the first place.

SOAs Ignore Underlying Data
Web services provide a standards-based mechanism for applications to access business logic and ship documents, but they provide nothing for interpreting and validating the data in the documents within the context of the local application. In addition to the simple examples given above, IT departments are left with the sizable task of writing code to interpret and convert data every time a document is received. The operations performed include:

  • Transformations: Schema mapping, such as converting credit bureau data into an application document
  • Aggregation: Merging and reconciling disparate data, such as merging two customer profiles with different and potentially conflicting fields
  • Validation: Ensuring data consistency, such as checking that the birth date precedes the death date (sketched after this list)
  • Business operations: Enforcing business processes, such as ensuring that all orders fall within corporate risk guidelines
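As an example of the validation operation, here is a minimal Java sketch (names invented) of the birth-date/death-date rule - written once here, but in practice re-coded in every receiving application:

import java.time.LocalDate;

public class LifecycleValidator {
    /** The rule, written once: a birth date must precede the death date. */
    public static void checkDates(LocalDate birth, LocalDate death) {
        if (death != null && !birth.isBefore(death)) {
            throw new IllegalArgumentException(
                    "birth date " + birth + " must precede death date " + death);
        }
    }

    public static void main(String[] args) {
        checkDates(LocalDate.of(1950, 3, 1), LocalDate.of(2020, 7, 4)); // passes
        try {
            checkDates(LocalDate.of(2021, 1, 1), LocalDate.of(2020, 7, 4));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // rule rejects the document
        }
    }
}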

Consider the apparently simple concept of "order status". This concept is vital to the correct execution of many business processes within the enterprise, but what are the true semantics of the term? Is it the status of issuing a request for a quote, or placing a purchase order? What about the status of a credit check on the customer? What limitations should be placed on the order if the credit check reveals anomalies? Perhaps "order status" refers to the manufacturing, shipping, delivery, return, or payment status. In practice, it's probably all of these things as well as the interdependencies between them. Applications in different functional areas of the corporation will have different interpretations of "order status" and different databases will store different "order status" values.

Each database and application within a given area of responsibility, such as order management, manufacturing, or shipping, uses a locally consistent definition of "order status". However, when an enterprise begins using an SOA to integrate processes across these areas, source systems must convert their local definitions to the local definitions of the target systems. And the problem increases with the number of target systems - order management systems may send messages to both manufacturing systems and shipping systems, each with a different definition of "order status".
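A hedged illustration of what those local-to-canonical conversions look like in code; the status codes and the canonical vocabulary below are invented:

import java.util.Map;

public class OrderStatusMapping {
    enum CanonicalStatus { QUOTED, ORDERED, CREDIT_HOLD, SHIPPED, DELIVERED, RETURNED }

    // Order-management codes -> canonical vocabulary
    static final Map<String, CanonicalStatus> FROM_ORDER_MGMT = Map.of(
            "RFQ", CanonicalStatus.QUOTED,
            "PO", CanonicalStatus.ORDERED,
            "HOLD", CanonicalStatus.CREDIT_HOLD);

    // Shipping-system codes -> canonical vocabulary
    static final Map<String, CanonicalStatus> FROM_SHIPPING = Map.of(
            "DISPATCHED", CanonicalStatus.SHIPPED,
            "POD", CanonicalStatus.DELIVERED,
            "RTS", CanonicalStatus.RETURNED);

    public static void main(String[] args) {
        // Each system maps once to the shared vocabulary rather than once per target.
        System.out.println(FROM_ORDER_MGMT.get("PO"));        // ORDERED
        System.out.println(FROM_SHIPPING.get("DISPATCHED"));  // SHIPPED
    }
}

With a shared vocabulary, each system owns one mapping; without it, each source must map directly to every target's definition.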

Converting these messages for the target system is labor intensive and goes beyond simple mappings. IT departments are often forced to alter applications and embed additional code in order to execute any of the operations listed above.

These tasks are often implemented with hand coding, which creates a brittle set of data services that scatter the semantics of reconciling data across the entire service network. Worse, this code is not reusable from project to project, and it produces an architecture that resists change: altering a single logical rule anywhere on the network sets off a domino effect in which every application service that has implemented the rule must also be changed - assuming each instance of the rule can even be found.

The interoperability of data plays just as big a role in SOAs as the interoperability of services, and if the mechanics of exchanging data are not addressed the two key reasons for implementing an SOA in the first place - flexibility and reuse - are severely threatened. Fortunately, this problem can be addressed through good architecture that includes an exchange model for data in the middle tier.

Migrating Point-to-Point to Many-to-Many
As the scope of enterprise integration grows, IT organizations are demanding greater efficiency and agility from their architectures. This demand has fueled a shift in the industry from point-to-point integration - which has proven to be increasingly cumbersome to build and maintain - to a many-to-many architecture that supports linear growth costs and simplified maintenance.

From EAI to SOA
Enterprise application integration (EAI) packages rely on point-to-point integration for interoperability between services and incorporate individual adapters between every pair of systems. The costs to build and maintain such an architecture become unmanageable as the number of systems grows; integrating n systems requires n(n-1) adapters (one per direction for each pair), so 10 systems already call for 90 of them.

To overcome this problem, IT departments are turning towards a many-to-many approach for integrating diverse systems. This shift involves moving away from individual adapters between every pair of services towards publishing Web services with standard interfaces on the enterprise message bus. This is the heart of an SOA.

From Mapping Tools to Canonical Messages
IT organizations are making the same transition in how they architect for data interoperability. Rather than creating individual mappings between every pair of systems, organizations are implementing canonical messages as a move towards a many-to-many data architecture. Canonical messages involve publishing standard schemas and vocabularies, and all systems map to the agreed-upon models to communicate. This is a major step towards controlling the initial costs of data interoperability.

Canonical message formats represent a many-to-many approach to sharing data across integrated systems, but they fall short of a complete data exchange solution. Early attempts to implement canonical message formats have often been rife with problems, such as forcing agreement between departments and creating rigid models that become obstacles to future growth. While they do simplify some aspects of integration, their shortcomings prevent them from being a silver bullet:

  • Requires agreement between departments: One of the biggest drawbacks of imposing common message formats or vocabularies across the enterprise is accommodating the diverse needs of different groups. Forcing groups with different needs and goals to agree on messages can cause infighting and usually results in a lowest-common-denominator model that inevitably hinders progress.
  • Inflexible: Because all systems write to the canonical model, it is impossible to change the model without considerable cost and disruption of service. A single change in a message schema requires code in all systems to be rewritten, tested, and redeployed.
  • Ignores interpretation and validation operations: Canonical messages do nothing to address the semantic differences discussed in the previous section or to make sure that the information is valid within the context of the application. Even with all systems using the identical schema for "order status," the rules that govern the usage of each field are not defined - and this ambiguity has to be handled with custom code.

From Canonical Messages to Data Exchange Models
Making a many-to-many SOA architecture deliver on the promise of simplified integration requires more than Web services and canonical message formats. The missing link is infrastructure that standardizes and administers all exchange operations, including data transformation, aggregation, validation, and business operations. Canonical messages enriched with metadata that defines such operations - together constituting an exchange model - complete the architecture and bridge the gap between diverse services that share information.
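The article does not prescribe an implementation, but the idea that exchange operations are metadata rather than hand code might be sketched as follows. In a real exchange model the rules would be loaded from a shared repository rather than compiled in; all names here are invented:

import java.util.List;
import java.util.function.Predicate;

public class ExchangeRules {
    /** A rule is data: a field name, a description, and a check. */
    record Rule(String field, String description, Predicate<String> check) {}

    // Defined once; loading these from a repository would let them change
    // without redeploying any of the services that apply them.
    static final List<Rule> RULES = List.of(
            new Rule("orderStatus", "must be a canonical status",
                    v -> v.matches("QUOTED|ORDERED|SHIPPED|DELIVERED")),
            new Rule("quantity", "must be a positive integer",
                    v -> v.matches("[1-9][0-9]*")));

    /** Every service on the bus applies the same checks to a field. */
    static void validate(String field, String value) {
        for (Rule r : RULES) {
            if (r.field().equals(field) && !r.check().test(value)) {
                throw new IllegalArgumentException(
                        field + " " + r.description() + ": " + value);
            }
        }
    }

    public static void main(String[] args) {
        validate("quantity", "42");              // passes
        try {
            validate("orderStatus", "PENDING");  // not in the canonical vocabulary
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}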

The Real Solution: Exchange Modeling
For an SOA to deliver on its promise, the architecture needs a mechanism for reconciling the inherent inconsistencies between data sources and data targets. Exchange modeling technology goes one step further than canonical message formats by creating an exchange model that captures the semantics and usage of data as metadata, in addition to the syntax and structure. Exchange modeling fits into the BEA 8.1 Platform (see Figure 1).

The metadata in an exchange model is used to deploy rich sets of shared data services that formalize and automate the interaction with data across an SOA - ensuring data interoperability and data quality.

Exchange models consist of three tiers (see Figure 2): one tier each for data sources, data targets, and the intervening canonical message formats. By having multiple tiers, each department can program to its own models, and changes can be made locally without disrupting other systems. Schemas are derived directly from existing systems, enterprise data formats (EDFs), existing canonical messages, or industry standard schemas such as RosettaNet, ACORD, or MISMO. They are then enriched with metadata that define the transformation, aggregation, validation, and business operations associated with each field, class, or the entire schema.
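A minimal sketch of the three tiers, assuming invented schemas: a source document is lifted into the canonical form and then lowered into the target form, so each department programs only to its own edge of the model:

import java.util.function.Function;

public class ThreeTierExchange {
    record SourceOrder(String localStatus) {}        // tier 1: source schema
    record CanonicalOrder(String status) {}          // tier 2: canonical message
    record TargetShipment(String shippingStatus) {}  // tier 3: target schema

    // Owned by the source department: local codes -> canonical vocabulary
    static final Function<SourceOrder, CanonicalOrder> LIFT =
            s -> new CanonicalOrder("PO".equals(s.localStatus()) ? "ORDERED" : "UNKNOWN");

    // Owned by the target department: canonical vocabulary -> local codes
    static final Function<CanonicalOrder, TargetShipment> LOWER =
            c -> new TargetShipment("ORDERED".equals(c.status()) ? "READY_TO_PICK" : "HOLD");

    public static void main(String[] args) {
        TargetShipment t = LIFT.andThen(LOWER).apply(new SourceOrder("PO"));
        System.out.println(t.shippingStatus()); // READY_TO_PICK
    }
}

Because the tiers are decoupled, the source department can change its local schema by revising only its own lift mapping; the canonical tier and every target remain untouched.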

At runtime, services within the SOA need only access the shared data services for data, rather than having to go to multiple individual data sources via tightly coupled integration channels. During each exchange, data is fully converted and documents are guaranteed to be valid before they are submitted to back-end systems.

In the "order status" example discussed earlier, a developer would create a set of rules on canonical messages sent from order management, manufacturing, and shipping systems. These conditions would fire based on the source system, the target system, and the canonical message type. In cases where a transformation is necessary, the fired condition would apply a mapping that described how to translate among the various definitions of status. Exchange models are a practical part of any SOA because:

  • Conditions and mappings are described as metadata: Unlike hand code embedded in applications, they can be changed dynamically at runtime.
  • Exchange operations are defined centrally: The transformation, aggregation, validation, and business operations are defined once and take effect throughout the enterprise without exception to eliminate redundant coding and ensure consistency.
  • Data services are deployed locally: There is no single point of failure or performance bottleneck.
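Returning to the "order status" example, the condition firing described above might be sketched as a lookup keyed by source system, target system, and canonical message type. Everything below - system names, status codes, and mappings - is invented for illustration:

import java.util.Map;
import java.util.function.UnaryOperator;

public class ConditionRouter {
    record Key(String source, String target, String messageType) {}

    // Each entry is a condition plus the mapping it applies when it fires.
    static final Map<Key, UnaryOperator<String>> MAPPINGS = Map.of(
            new Key("orderMgmt", "shipping", "OrderStatus"),
                    status -> "PO".equals(status) ? "READY_TO_PICK" : "HOLD",
            new Key("orderMgmt", "manufacturing", "OrderStatus"),
                    status -> "PO".equals(status) ? "SCHEDULE_BUILD" : "WAIT");

    static String exchange(String source, String target, String type, String payload) {
        UnaryOperator<String> mapping = MAPPINGS.get(new Key(source, target, type));
        return mapping == null ? payload : mapping.apply(payload);
    }

    public static void main(String[] args) {
        System.out.println(exchange("orderMgmt", "shipping", "OrderStatus", "PO"));      // READY_TO_PICK
        System.out.println(exchange("orderMgmt", "manufacturing", "OrderStatus", "PO")); // SCHEDULE_BUILD
    }
}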

Conclusion
IT organizations are architecting their infrastructures to support a greater level of integration and the flexibility to respond to an ever-changing business climate. For many, this includes a shift to an SOA for service interoperability, but an SOA does nothing to address data interoperability. Many organizations resort to adding data interoperability code to every application; others are implementing canonical data models but are running into their shortcomings.

Data exchange models go further by allowing data structures and message formats to be enriched with semantic information to fully describe how to interpret and convert the information that is shared between systems. Exchange models are also tiered so that organizations have control over their own data structures and are not forced to program to common models. As an added bonus, the change management facilities inherent in exchange models create infrastructure that can be responsive to the changing needs of business.

About the Author

Coco Jaenicke was, until recently, the XML evangelist and director of product marketing for eXcelon, the industry's first application development environment for building and deploying e-business applications. She is a member of XML-J's Editorial Advisory Board.


