XA Transactions

Needed More Often Than You Think

Most developers have at least heard of XA, the standard protocol for coordination, commitment, and recovery between transaction managers and resource managers.

Products such as CICS, Tuxedo, and even BEA WebLogic Server act as transaction managers, coordinating transactions across different resource managers. Typical XA resources are databases, message queuing products such as WebSphere MQ or any JMS provider, mainframe applications, ERP packages, or anything else that can be coordinated with the transaction manager. XA is used to coordinate what is commonly called a two-phase commit (2PC) transaction. The classic example of a 2PC transaction is when two different databases need to be updated atomically. Most people think of something like a bank that has one database for savings accounts and a different one for checking accounts. If a customer wants to transfer money between checking and savings accounts, both databases have to participate in the transaction or the bank risks losing track of some money.
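To make the example concrete, here is a rough sketch of that transfer as server-side Java code using the Java Transaction API (JTA). The JNDI names (jdbc/CheckingXADS, jdbc/SavingsXADS) and the table layout are hypothetical; the point is that because both data sources are backed by XA drivers and enlisted in the same global transaction, the debit and the credit either both commit or both roll back.

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class TransferService {

    // Hypothetical JNDI names; both data sources must be backed by XA drivers
    // and have "Honor Global Transactions" enabled.
    private static final String CHECKING_DS = "jdbc/CheckingXADS";
    private static final String SAVINGS_DS  = "jdbc/SavingsXADS";

    public void transfer(String customerId, double amount) throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction tx =
            (UserTransaction) ctx.lookup("java:comp/UserTransaction");

        tx.begin();
        try {
            update((DataSource) ctx.lookup(CHECKING_DS),
                   "UPDATE checking SET balance = balance - ? WHERE customer_id = ?",
                   amount, customerId);
            update((DataSource) ctx.lookup(SAVINGS_DS),
                   "UPDATE savings SET balance = balance + ? WHERE customer_id = ?",
                   amount, customerId);
            tx.commit();      // two-phase commit spans both databases
        } catch (Exception e) {
            tx.rollback();    // neither database is changed
            throw e;
        }
    }

    private void update(DataSource ds, String sql, double amount,
                        String customerId) throws Exception {
        Connection con = ds.getConnection();  // enlisted in the global transaction
        try {
            PreparedStatement ps = con.prepareStatement(sql);
            ps.setDouble(1, amount);
            ps.setString(2, customerId);
            ps.executeUpdate();
            ps.close();
        } finally {
            con.close();      // returns the pooled connection
        }
    }
}

Notice that nothing in this code is XA-specific; the coordination happens entirely in the transaction manager, based on how the data sources are configured.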

The problem is that most developers think, "Well, my application uses only one database, so I don't need to use XA on that database." This may not be true. The question that should be asked is, "Does the application require shared access to multiple resources that need to ensure the integrity of the transaction being performed?" For instance, does the application use J2EE Connector Architecture adapters, the BEA WebLogic Server Messaging Bridge, or the Java Message Service (JMS)? If the application needs to update the database and any of these other resources in the same transaction, then both the database and the other resource need to be treated as XA resources.

In addition to Web or EJB applications that may touch different resources, XA is often needed when building Web services or BEA WebLogic Integration applications. Integration applications often span disparate resources and involve asynchronous interfaces. As a result, they frequently require 2PC. An extremely common use case for WebLogic Integration that calls for XA is to pull a message from WebSphere MQ, do some business processing with the message, make updates to a database, and then place another message back on MQ. Usually this whole process has to occur in a guaranteed and transactional manner. There is a tendency to shy away from XA because of the performance penalty it imposes. Still, if transaction coordination across multiple resources is needed, there is no way to avoid XA. If the requirements for an application include phrases such as "persistent messaging with guaranteed once and only once message delivery," then XA is probably needed.

Figure 1 shows a common, though extremely simplified, BEA WebLogic Integration process definition that needs to use XA. A JMS message is received to start the process. Assume the message is a customer order. The order then has to be placed in the order shipment database and placed on another message queue for further processing by a legacy billing application. Unless XA is used to coordinate the transaction between the database and JMS, we risk updating the shipment database without updating the billing application. This could result in the order being shipped, but the customer might never be billed.

Once you've determined that your application does in fact need to use XA, how do you make sure it is used correctly? Fortunately, J2EE and the Java Transaction API (JTA) hide the implementation details of XA. Coding changes are not required to enable XA for your application. Using XA properly is a matter of configuring the resources that need to be enlisted in the same transaction. Depending on the application, the BEA WebLogic Server resources that most often need to be configured for XA are connection pools, data sources, JMS Servers, JMS connection factories, and messaging bridges. The entire configuration needed on the WebLogic side can be done from the WebLogic Server Console.
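For example, a message-driven bean that implements the queue-to-database-to-queue flow described earlier contains no XA-specific code at all. The following sketch is illustrative only: the JNDI names (jdbc/ShipmentXADS, jms/BillingXACF, jms/BillingQueue) are hypothetical, and the deployment descriptors are assumed to declare container-managed transactions with the Required attribute. Provided the data source and connection factory are configured for XA as described below, the container commits the message receipt, the database insert, and the outgoing send as a single global transaction.

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Assumes container-managed transactions (transaction-type Container,
// trans-attribute Required in the deployment descriptors).
public class OrderProcessorBean implements MessageDrivenBean, MessageListener {

    private MessageDrivenContext mdc;

    public void setMessageDrivenContext(MessageDrivenContext ctx) { this.mdc = ctx; }
    public void ejbCreate() {}
    public void ejbRemove() {}

    public void onMessage(Message message) {
        try {
            String order = ((TextMessage) message).getText();
            InitialContext ctx = new InitialContext();

            // 1. Record the order in the shipment database (XA data source).
            DataSource ds = (DataSource) ctx.lookup("jdbc/ShipmentXADS");
            Connection con = ds.getConnection();
            try {
                PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO shipments (order_data) VALUES (?)");
                ps.setString(1, order);
                ps.executeUpdate();
                ps.close();
            } finally {
                con.close();
            }

            // 2. Forward the order to the billing queue (XA connection factory).
            QueueConnectionFactory qcf =
                (QueueConnectionFactory) ctx.lookup("jms/BillingXACF");
            Queue billingQueue = (Queue) ctx.lookup("jms/BillingQueue");
            QueueConnection qc = qcf.createQueueConnection();
            try {
                QueueSession session =
                    qc.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                QueueSender sender = session.createSender(billingQueue);
                sender.send(session.createTextMessage(order));
            } finally {
                qc.close();
            }
            // A normal return lets the container commit the message receipt,
            // the database insert, and the send as one XA transaction.
        } catch (Exception e) {
            // Mark the transaction for rollback: the incoming message is
            // redelivered and neither the insert nor the send takes effect.
            mdc.setRollbackOnly();
        }
    }
}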

Before worrying about the WebLogic configuration for XA, we have to ensure that the resources we want to access are XA enabled. Check with the database administrator, the WebSphere MQ administrator, or whoever is in charge of the resources that are outside WebLogic. These resources do not always enable XA by default, nor do all resources support the X/Open XA interface, which is required to truly do XA transactions. For example, some databases require that additional scripts be run in order to enable XA.

For those resources that do not support XA at all, some transaction managers allow for a "one-phase" optimization. In a one-phase optimization, the transaction manager issues a "prepare to commit" command to all of the XA resources. If all of the XA resources respond affirmatively, the transaction manager will commit the non-XA resource. The transaction manager will then commit all of the XA resources. This allows the transaction manager to work with a non-XA resource, but normally only one non-XA resource per transaction is allowed. There is a small chance that something will go wrong after committing the non-XA resource and before the XA resources all commit, but this is the best alternative if a resource just doesn't support XA.

Connection pools are where most people start configuring WebLogic for XA. The connection pool needs to use an XA driver. Most database vendors provide XA drivers for their databases. BEA WebLogic Server 8.1 SP2 ships with a number of XA drivers for Oracle, DB2, Informix, SQL Server, and Sybase. We need to ensure that the Driver classname on the connection pool page of the BEA WebLogic Console is in fact an XA driver. When using the configuration wizards in BEA WebLogic Server 8.1, the wizards always note which drivers are XA enabled.

When more than one XA driver is available for the database involved, be sure to run some benchmarks to determine which driver gives the best performance. Sometimes different drivers for the same database implement XA in completely different ways. This leads to wide variances in performance. For example, the Oracle 9.2 OCI Driver implements XA natively, while the Oracle 9.2 Thin Driver relies on stored procedures in the database to implement XA. As a result, the Oracle 9.2 OCI driver generally performs XA transactions much faster than the Thin driver. Oracle's newest Type 4 driver, the 10g Thin Driver, also implements XA natively and is backwards compatible with some previous versions of the Oracle database. Taking the time to fully evaluate alternative drivers can lead to significant performance improvements.

If only some of the database access needs to be done under XA, create two connection pools for the same database. Use an XA driver on one of the connection pools and a non-XA driver on the other. This will avoid the performance overhead of XA transactions for database calls that don't need 2PC.
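At the code level, this pattern is just a matter of which data source gets looked up. A trivial sketch, with hypothetical JNDI names:

import javax.naming.InitialContext;
import javax.sql.DataSource;

public class OrderDataSources {

    // Hypothetical JNDI names: one data source backed by an XA driver for work
    // that participates in 2PC, one backed by a plain driver for
    // single-resource work that does not need it.
    private static final String XA_DS     = "jdbc/OrdersXADS";
    private static final String NON_XA_DS = "jdbc/OrdersDS";

    public DataSource lookup(boolean needsGlobalTransaction) throws Exception {
        InitialContext ctx = new InitialContext();
        return (DataSource) ctx.lookup(needsGlobalTransaction ? XA_DS : NON_XA_DS);
    }
}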

Closely related to the connection pools are the data sources. In order to use XA, a data source must have "Honor Global Transactions" set to true. Prior to BEA WebLogic Server 8.1, these data sources appeared in the Console under the heading "Tx Data Sources"; in 8.1, all data sources are under the same heading. Turning this flag on ensures that WebLogic Server's JTA implementation will automatically enlist the data source in an XA transaction whenever one is required. There are also situations where this flag should be set even if your application does not use XA: the "Honor Global Transactions" flag should also be enabled if your application makes any explicit JTA calls, uses container-managed transactions with EJBs, or issues multiple SQL statements within the same transaction. In these non-XA situations, BEA WebLogic Server ensures that the application keeps the same database connection from the connection pool for the duration of the transaction, preserving transactional integrity.

A second flag on the data source page that is occasionally used is the "Emulate Two-Phase Commit for Non-XA Driver" setting. This flag should only be used if an XA driver cannot be obtained for the database. When this flag is on, a one-phase optimization is used. BEA WebLogic Server will first issue the "prepare to commit" command to the XA resources, commit the database that has emulation enabled, and then commit the resources in the transaction that are XA enabled. As long as nothing goes wrong, the data will still be consistent.

There is a potential that WebLogic Server will commit the non-XA transaction, only to have the transaction on the XA resource fail. WebLogic Server allows only one data source using emulation per transaction. Given the availability of XA drivers for most databases and the potential for inconsistent data, this setting should rarely be used. Figure 2 shows a data source properly configured for XA.

Within BEA WebLogic Server, JMS Servers themselves are XA resources. There is nothing special that needs to be configured to XA enable a JMS Server, but there is one configuration item that seems counter-intuitive. When using a JDBC store for the JMS Server, you might think that the connection pool used by the JDBC store needs to use an XA driver. In fact, the exact opposite is true. The connection pool for the JDBC store should not use an XA driver. In this case, the XA resource is the JMS Server, not the database. For this reason, a JMS Server that uses a file store is still capable of participating in an XA transaction. The decision about whether to use a file store or a JDBC store for a JMS Server should not be based on whether or not an application will need to use XA.

The next step to ensure that you are using XA with your JMS-based application is to use an XA connection factory. Again, the application code does not change, but a configuration setting in the BEA WebLogic console needs to be checked. After creating a new connection factory, you need to go to the "Transactions" tab and check "XA Connection Factory Enabled". Changing this value will require a server restart. If only some of the application's work with JMS needs to use XA, you may want to create another connection factory that does not use XA. Figure 3 shows a JMS connection factory configured for XA transactions.

The last resource that deserves mention is the messaging bridge, which was originally introduced in BEA WebLogic Server 7.0 and is intended to make it easier to integrate WebLogic JMS with foreign JMS providers. The messaging bridge acts as a bidirectional store-and-forward mechanism to transfer messages back and forth between WebLogic JMS and another messaging product, such as WebSphere MQ. WebLogic applications do not interact directly with the messaging bridge. Instead, they interact in the normal manner with the local WebLogic JMS queues or topics. The local queue or topic is then bridged to the foreign JMS provider. However, there are several configuration settings for the messaging bridge that need to be set correctly to ensure "guaranteed once and only once" delivery between WebLogic and the other product. Some of the settings are on the JMS Bridge Destination and some are on the bridge itself. These settings determine whether or not BEA WebLogic Server will use XA transactions when it transfers messages.

When configuring a Bridge Destination, there are three settings that control whether or not WebLogic Server will treat the destination as an XA resource: the adapter, the Adapter JNDI Name, and the Connection Factory JNDI Name. BEA WebLogic Server uses J2EE Connector Architecture adapters to communicate with the Bridge Destinations. The adapter that is used must support XA. BEA WebLogic Server provides a generic XA adapter named jms-xa-adp.rar. The Adapter JNDI Name for this is eis.jms.WLSConnectionFactoryJNDIXA. Finally, the foreign JMS Server must have an XA-enabled connection factory, and the name of this connection factory is placed in the Connection Factory JNDI Name field.

The messaging bridge itself has different qualities of service available: Exactly-Once, Atmost-once, and Duplicate-okay. If "guaranteed once and only once" delivery is a requirement for the application, then the only acceptable setting for Quality of Service on the messaging bridge configuration page is "Exactly-Once". The QOS Degradation Allowed flag should also be unchecked. Checking this box allows BEA WebLogic Server to default to a lower quality of service if it is unable to get an XA connection to the foreign provider. This is usually a very bad idea. Qualities of service should be dictated by the business requirements. Business requirements are rarely flexible enough to switch back and forth between "Exactly-Once" and other service levels. Using the QOS Degradation Allowed flag means that no one can predict which quality of service WebLogic Server will be using at runtime.

Once you've configured XA for the resources involved in an application, how can you determine that everything is working properly? Under normal conditions, where all resources are available and operating correctly, a non-XA–enabled application will behave exactly the same as an XA-enabled one. XA proves its value when an application encounters unexpected situations. The test plan for an application should include scenarios where each resource is unavailable. Testing should also evaluate what happens when a resource becomes unavailable in the middle of processing transactions. Intentionally killing a BEA WebLogic Server instance, causing a duplicate key error, or restarting a database simulates situations that can happen in production. If XA has been properly configured, all resources should complete or roll back the same transactions.
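Alongside those operational tests, a simple programmatic check can confirm that a rollback really does undo the work in every enlisted resource. The following sketch is illustrative only, with hypothetical JNDI, queue, and table names: it sends a message and inserts a row inside one JTA transaction, rolls the transaction back to simulate a mid-flight failure, and then verifies that the message never becomes visible on the queue (a similar SELECT should find no row in the database).

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class RollbackCheck {

    // Hypothetical JNDI names for an XA data source and an XA connection factory.
    public boolean rollbackLeavesNothingBehind() throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction tx =
            (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource ds = (DataSource) ctx.lookup("jdbc/ShipmentXADS");
        QueueConnectionFactory qcf =
            (QueueConnectionFactory) ctx.lookup("jms/BillingXACF");
        Queue queue = (Queue) ctx.lookup("jms/BillingQueue");

        // Send a message and insert a row inside one global transaction,
        // then roll the whole thing back to simulate a failure.
        tx.begin();
        QueueConnection qc = qcf.createQueueConnection();
        try {
            QueueSession session =
                qc.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            session.createSender(queue)
                   .send(session.createTextMessage("test-order-42"));

            Connection con = ds.getConnection();
            try {
                PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO shipments (order_data) VALUES (?)");
                ps.setString(1, "test-order-42");
                ps.executeUpdate();
                ps.close();
            } finally {
                con.close();
            }
        } finally {
            tx.rollback();
            qc.close();
        }

        // If XA is configured correctly, the message never becomes visible.
        QueueConnection verify = qcf.createQueueConnection();
        try {
            verify.start();
            QueueSession session =
                verify.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            Message leaked = session.createReceiver(queue).receive(2000);
            return leaked == null;   // null means nothing was delivered
        } finally {
            verify.close();
        }
    }
}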

Conclusion
By now two things should be clear. XA transactions are needed more often than most developers realize, and XA is very easy to configure within BEA WebLogic Server. Always evaluate the configuration based on the application's business requirements and then choose the appropriate settings to make sure that transactions behave in the way they should.

More Stories By Wes Hewatt

Wes Hewatt has over fourteen years of experience designing and deploying mission critical applications for Fortune 1000 companies. As a Senior Systems Engineer for BEA Systems, Mr. Hewatt works with BEA's customers to develop J2EE applications for the WebLogic Platform. He specializes in web services and integration technologies.

Most Recent Comments
Thomas 09/27/07 09:44:01 AM EDT

Hi,

I am pretty new on this subject, but I don't get this sentence:

"This allows the transaction manager to work with a non-XA resource, but normally only one XA resource per transaction is allowed."

Shouldn't this be "but normally only one non-XA resource per transaction is allowed"?

Otherwise the "one-phase-commit" optimization would only be applicable for the case, that you have two resources, one XA and one non-XA. But there is no problem to expand the XA resources to as many as possible, because they all support the two-phase-commit, and I think this is done with the XA resources also in the one-phase-commit, because they respond to the commit-request of the transaction manager. The only thing that's different from a "real" two-phase-commit is the non-XA resource, which "prepared to commit" like the other XA resources. Is this correct?

Steve Kaminski 07/17/07 09:14:10 PM EDT

Hi, I am told that if you have a single XA datasource (using an XA driver) and you initiate a transaction on that datasource that involves only one resource, WLS JTA will perform the transaction as a local transaction and as such avoid the overhead involved in an XA (global) transaction. Is this true? And if so, why would you ever need one data source for non-XA and another for XA transactions, as you suggest in this article?

Very good article by the way, well done on a difficult subject. You have a skill.

Sudheer Bandaru 05/14/04 12:04:56 PM EDT

The article was excellent and well designed; it even covers many points that might be helpful for solving errors and warnings when using an application. It also explains clearly to a new user when and how to use XA drivers.

Gian Luca 05/12/04 06:26:52 AM EDT

The article is well done! But we were interested more specifically in what happens in case of system crashes or database failure... We encountered problems with in-doubt transactions (Oracle 8/9) not recovered by BEA's Transaction Recovery Service!! Do you have any information regarding this?
Thanks,

Gian Luca Paloni
