
XA Transactions

Needed More Often Than You Think

Most developers have at least heard of XA, the standard protocol that allows coordination, commitment, and recovery between transaction managers and resource managers.

Products such as CICS, Tuxedo, and even BEA WebLogic Server act as transaction managers, coordinating transactions across different resource managers. Typical XA resources are databases, message queuing products such as WebSphere MQ or any JMS provider, mainframe applications, ERP packages, or anything else that can be coordinated with the transaction manager. XA is used to coordinate what is commonly called a two-phase commit (2PC) transaction. The classic example of a 2PC transaction is when two different databases need to be updated atomically. Most people think of something like a bank that has one database for savings accounts and a different one for checking accounts. If a customer wants to transfer money between his checking and savings accounts, both databases have to participate in the transaction or the bank risks losing track of some money.
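To make the bank example concrete, here is a minimal sketch of what such a transfer might look like under the Java Transaction API, assuming two XA-enabled data sources bound at the hypothetical JNDI names jdbc/SavingsXA and jdbc/CheckingXA. The application code only demarcates the transaction; WebLogic enlists each connection with the transaction manager automatically.

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class TransferClient {
    // Debits savings and credits checking atomically. If either update
    // fails, the rollback undoes the work in both databases.
    public void transfer(String account, double amount) throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction ut =
            (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource savingsDs = (DataSource) ctx.lookup("jdbc/SavingsXA");   // hypothetical name
        DataSource checkingDs = (DataSource) ctx.lookup("jdbc/CheckingXA"); // hypothetical name

        ut.begin();
        Connection savings = null;
        Connection checking = null;
        try {
            savings = savingsDs.getConnection();   // enlisted in the global transaction
            checking = checkingDs.getConnection(); // enlisted in the same transaction

            PreparedStatement debit = savings.prepareStatement(
                "UPDATE savings SET balance = balance - ? WHERE account_id = ?");
            debit.setDouble(1, amount);
            debit.setString(2, account);
            debit.executeUpdate();

            PreparedStatement credit = checking.prepareStatement(
                "UPDATE checking SET balance = balance + ? WHERE account_id = ?");
            credit.setDouble(1, amount);
            credit.setString(2, account);
            credit.executeUpdate();

            ut.commit();   // drives the two-phase commit across both databases
        } catch (Exception e) {
            ut.rollback(); // neither database keeps its update
            throw e;
        } finally {
            if (savings != null) savings.close();
            if (checking != null) checking.close();
        }
    }
}

Note that nothing in this code mentions XA; whether the commit is a local commit or a coordinated 2PC is decided entirely by how the two data sources are configured.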

The problem is that most developers think, "Well, my application uses only one database, so I don't need to use XA on that database." This may not be true. The question that should be asked is, "Does the application require shared access to multiple resources that need to ensure the integrity of the transaction being performed?" For instance, does the application use Java 2 Connector Architecture adapters, the BEA WebLogic Server Messaging Bridge, or the Java Message Service (JMS)? If the application needs to update the database and any of these other resources in the same transaction, then both the database and the other resource need to be treated as XA resources.

In addition to Web or EJB applications that may touch different resources, XA is often needed when building Web services or BEA WebLogic Integration applications. Integration applications often span disparate resources and involve asynchronous interfaces. As a result, they frequently require 2PC. An extremely common use case for WebLogic Integration that calls for XA is to pull a message from WebSphere MQ, do some business processing with the message, make updates to a database, and then place another message back on MQ. Usually this whole process has to occur in a guaranteed and transactional manner. There is a tendency to shy away from XA because of the performance penalty it imposes. Still, if transaction coordination across multiple resources is needed, there is no way to avoid XA. If the requirements for an application include phrases such as "persistent messaging with guaranteed once and only once message delivery," then XA is probably needed.

Figure 1 shows a common, though extremely simplified, BEA WebLogic Integration process definition that needs to use XA. A JMS message is received to start the process. Assume the message is a customer order. The order then has to be placed in the order shipment database and placed on another message queue for further processing by a legacy billing application. Unless XA is used to coordinate the transaction between the database and JMS, we risk updating the shipment database without updating the billing application. This could result in the order being shipped, but the customer might never be billed.
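Outside of a WebLogic Integration process definition, the same flow could be sketched as an EJB 2.x message-driven bean with container-managed transactions, with the transaction attribute in ejb-jar.xml set to Required. The JNDI names below are hypothetical; the essential point is that the message receipt that triggers onMessage, the database insert, and the send to the billing queue are all enlisted in one XA transaction.

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.jms.*;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class OrderMDB implements MessageDrivenBean, MessageListener {
    private MessageDrivenContext mdc;

    public void setMessageDrivenContext(MessageDrivenContext mdc) { this.mdc = mdc; }
    public void ejbCreate() {}
    public void ejbRemove() {}

    public void onMessage(Message msg) {
        Connection con = null;
        QueueConnection qc = null;
        try {
            InitialContext ic = new InitialContext();

            // Insert the order into the shipment database.
            DataSource ds = (DataSource) ic.lookup("jdbc/ShipmentXA"); // hypothetical name
            con = ds.getConnection();
            PreparedStatement ps = con.prepareStatement(
                "INSERT INTO shipments (order_doc) VALUES (?)");
            ps.setString(1, ((TextMessage) msg).getText());
            ps.executeUpdate();

            // Forward the order to the legacy billing queue.
            QueueConnectionFactory qcf = (QueueConnectionFactory)
                ic.lookup("jms/OrderXACF");                        // hypothetical, XA-enabled
            Queue billing = (Queue) ic.lookup("jms/BillingQueue"); // hypothetical name
            qc = qcf.createQueueConnection();
            QueueSession session =
                qc.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            session.createSender(billing).send(msg);
        } catch (Exception e) {
            // Rolls back the database insert, the send, and the original
            // message receipt; the message will be redelivered.
            mdc.setRollbackOnly();
        } finally {
            try { if (con != null) con.close(); } catch (Exception ignore) {}
            try { if (qc != null) qc.close(); } catch (Exception ignore) {}
        }
    }
}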

Once you've determined that your application does in fact need to use XA, how do you make sure it is used correctly? Fortunately, J2EE and the Java Transaction API (JTA) hide the implementation details of XA. Coding changes are not required to enable XA for your application. Using XA properly is a matter of configuring the resources that need to be enrolled in the same transaction. Depending on the application, the BEA WebLogic Server resources that most often need to be configured for XA are connection pools, data sources, JMS servers, JMS connection factories, and messaging bridges. All of the configuration needed on the WebLogic side can be done from the WebLogic Server Console.

Before worrying about the WebLogic configuration for XA, we have to ensure that the resources we want to access are XA enabled. Check with the database administrator, the WebSphere MQ administrator, or whoever is in charge of the resources that are outside WebLogic. These resources do not always enable XA by default, nor do all resources support the X/Open XA interface, which is required to truly do XA transactions. For example, some databases require that additional scripts be run in order to enable XA.

For those resources that do not support XA at all, some transaction managers allow for a "one-phase" optimization. In a one-phase optimization, the transaction manager issues a "prepare to commit" command to all of the XA resources. If all of the XA resources respond affirmatively, the transaction manager commits the non-XA resource. The transaction manager then commits all of the XA resources. This allows the transaction manager to work with a non-XA resource, but normally only one non-XA resource per transaction is allowed. There is a small chance that something will go wrong after committing the non-XA resource and before the XA resources all commit, but this is the best alternative if a resource just doesn't support XA.
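The ordering is easier to see in code. The sketch below is conceptual only; the real logic lives inside the transaction manager, and error handling is heavily simplified. It uses the standard javax.transaction.xa.XAResource interface to show where the window of risk sits.

import java.sql.Connection;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

public class OnePhaseOptimizationSketch {
    // Phase 1: prepare every true XA resource. Then locally commit the
    // single non-XA resource. Phase 2: commit the prepared XA resources.
    static void commit(XAResource[] resources, Xid xid, Connection nonXa)
            throws Exception {
        for (int i = 0; i < resources.length; i++) {
            // Real transaction managers also handle XA_RDONLY votes;
            // here anything but XA_OK aborts the transaction.
            if (resources[i].prepare(xid) != XAResource.XA_OK) {
                nonXa.rollback();
                for (int j = 0; j < i; j++) {
                    resources[j].rollback(xid);
                }
                return;
            }
        }

        nonXa.commit(); // the point of no return for the non-XA resource;
                        // a failure between here and the loop below is the
                        // small window where data can become inconsistent

        for (int i = 0; i < resources.length; i++) {
            resources[i].commit(xid, false); // second phase
        }
    }
}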

Connection pools are where most people start configuring WebLogic for XA. The connection pool needs to use an XA driver. Most database vendors provide XA drivers for their databases. BEA WebLogic Server 8.1 SP2 ships with a number of XA drivers for Oracle, DB2, Informix, SQL Server, and Sybase. We need to ensure that the driver classname specified on the connection pool page of the BEA WebLogic Console is in fact that of an XA driver. When using the configuration wizards in BEA WebLogic Server 8.1, the wizards always note which drivers are XA enabled.

When more than one XA driver is available for the database involved, be sure to run some benchmarks to determine which driver gives the best performance. Sometimes different drivers for the same database implement XA in completely different ways. This leads to wide variances in performance. For example, the Oracle 9.2 OCI Driver implements XA natively, while the Oracle 9.2 Thin Driver relies on stored procedures in the database to implement XA. As a result, the Oracle 9.2 OCI driver generally performs XA transactions much faster than the Thin driver. Oracle's newest Type 4 driver, the 10g Thin Driver, also implements XA natively and is backwards compatible with some previous versions of the Oracle database. Taking the time to fully evaluate alternative drivers can lead to significant performance improvements.

If only some of the database access needs to be done under XA, create two connection pools for the same database. Use an XA driver on one of the connection pools and a non-XA driver on the other. This will avoid the performance overhead of XA transactions for database calls that don't need 2PC.

Closely related to the connection pools are the data sources. In order to use XA, a data source must have the value "Honor Global Transactions" set to true. Prior to BEA WebLogic Server 8.1, these data sources appeared in the Console under the heading "Tx Data Sources". In 8.1, all data sources are under the same heading. Turning this flag on means that BEA WebLogic Server will be able to correctly handle transactions in a number of different scenarios. Setting this flag will ensure that WebLogic Server's JTA implementation will automatically enroll the data source in an XA transaction if it is required. There are also situations where this flag should be set even if your application does not use XA. The "Honor Global Transactions" flag should also be enabled if your application makes any explicit JTA calls, uses container-managed transactions with EJBs, or issues multiple SQL statements within the same transaction. In these non-XA situations, BEA WebLogic Server will ensure that the application retains the proper database connection from the connection pool to ensure transactional integrity.
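As an illustration of the last point, here is a sketch of two SQL statements issued against a single data source, possibly with a non-XA driver, inside one JTA transaction. With "Honor Global Transactions" enabled, WebLogic hands back the same underlying connection for the duration of the transaction, so the statements commit or roll back together. The JNDI name is hypothetical.

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class OrderStatusUpdater {
    // Two statements, one database, one JTA transaction. "Honor Global
    // Transactions" pins both statements to the same pooled connection.
    public void markShipped(String orderId) throws Exception {
        InitialContext ic = new InitialContext();
        UserTransaction ut =
            (UserTransaction) ic.lookup("java:comp/UserTransaction");
        DataSource ds = (DataSource) ic.lookup("jdbc/OrdersTxDS"); // hypothetical name

        ut.begin();
        Connection con = null;
        try {
            con = ds.getConnection();
            PreparedStatement update = con.prepareStatement(
                "UPDATE orders SET status = 'SHIPPED' WHERE order_id = ?");
            update.setString(1, orderId);
            update.executeUpdate();

            PreparedStatement audit = con.prepareStatement(
                "INSERT INTO order_audit (order_id, event) VALUES (?, 'SHIPPED')");
            audit.setString(1, orderId);
            audit.executeUpdate();

            ut.commit();   // both statements commit together
        } catch (Exception e) {
            ut.rollback(); // both statements roll back together
            throw e;
        } finally {
            if (con != null) con.close();
        }
    }
}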

A second flag on the data source page that is occasionally used is the "Emulate Two-Phase Commit for Non-XA Driver" setting. This flag should only be used if an XA driver cannot be obtained for the database. When this flag is on, a one-phase optimization is used. BEA WebLogic Server will first issue the "prepare to commit" command to the XA resources, commit the database that has emulation enabled, and then commit the resources in the transaction that are XA enabled. As long as nothing goes wrong, the data will still be consistent.

There is a potential that WebLogic Server will commit the non-XA transaction, only to have the transaction on the XA resource fail. WebLogic Server allows only one data source using emulation per transaction. Given the availability of XA drivers for most databases and the potential for inconsistent data, this setting should rarely be used. Figure 2 shows a data source properly configured for XA.

Within BEA WebLogic Server, JMS Servers themselves are XA resources. There is nothing special that needs to be configured to XA-enable a JMS Server, but there is one configuration item that seems counter-intuitive. When using a JDBC store for the JMS Server, you might think that the connection pool used by the JDBC store needs to use an XA driver. In fact, the exact opposite is true. The connection pool for the JDBC store should not use an XA driver. In this case, the XA resource is the JMS Server, not the database. For this reason, a JMS Server that uses a file store is still capable of participating in an XA transaction. The decision about whether to use a file store or a JDBC store for a JMS Server should not be based on whether or not an application will need to use XA.

The next step to ensure that you are using XA with your JMS-based application is to use an XA connection factory. Again, the application code does not change, but a configuration setting in the BEA WebLogic console needs to be checked. After creating a new connection factory, you need to go to the "Transactions" tab and check "XA Connection Factory Enabled". Changing this value will require a server restart. If only some of the application's work with JMS needs to use XA, you may want to create another connection factory that does not use XA. Figure 3 shows a JMS connection factory configured for XA transactions.
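As with data sources, the JMS code itself does not change. The sketch below sends a message that will be enlisted in whatever JTA transaction is in progress, provided the connection factory it looks up (a hypothetical name here) has "XA Connection Factory Enabled" checked.

import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class OrderPublisher {
    // Nothing here is XA-specific; the factory's console configuration
    // alone decides whether this send joins a global transaction.
    public void publish(String orderXml) throws Exception {
        InitialContext ic = new InitialContext();
        QueueConnectionFactory qcf =
            (QueueConnectionFactory) ic.lookup("jms/OrderXACF"); // hypothetical name
        Queue queue = (Queue) ic.lookup("jms/OrderQueue");       // hypothetical name

        QueueConnection qc = qcf.createQueueConnection();
        try {
            QueueSession session =
                qc.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            TextMessage msg = session.createTextMessage(orderXml);
            session.createSender(queue).send(msg);
        } finally {
            qc.close();
        }
    }
}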

The last resource that deserves mention is the messaging bridge, originally introduced in BEA WebLogic Server 7.0 to make it easier to integrate WebLogic JMS with foreign JMS providers. The messaging bridge acts as a bidirectional store-and-forward mechanism that transfers messages between WebLogic JMS and another messaging product, such as WebSphere MQ. WebLogic applications do not interact directly with the messaging bridge. Instead, they interact in the normal manner with local WebLogic JMS queues or topics, which are then bridged to the foreign JMS provider. However, several configuration settings for the messaging bridge must be set correctly to ensure "guaranteed once and only once" delivery between WebLogic and the other product. Some of the settings are on the JMS Bridge Destination and some are on the bridge itself. These settings determine whether or not BEA WebLogic Server will use XA transactions when it transfers messages.

When configuring a Bridge Destination, there are three settings that control whether or not WebLogic Server will treat the destination as an XA resource: the adapter, the Adapter JNDI Name, and the Connection Factory JNDI Name. BEA WebLogic Server uses J2EE Connector Architecture adapters to communicate with the Bridge Destinations. The adapter that is used must support XA. BEA WebLogic Server provides a generic XA adapter named jms-xa-adp.rar. The Adapter JNDI Name for this is eis.jms.WLSConnectionFactoryJNDIXA. Finally, the foreign JMS Server must have an XA-enabled connection factory, and the name of this connection factory is placed in the Connection Factory JNDI Name field.

The messaging bridge itself has different qualities of service available: Exactly-once, Atmost-once, and Duplicate-okay. If "guaranteed once and only once" delivery is a requirement for the application, then the only acceptable setting for Quality of Service on the messaging bridge configuration page is "Exactly-once". The QOS Degradation Allowed flag should also be unchecked. Checking this box allows BEA WebLogic Server to default to a lower quality of service if it is unable to get an XA connection to the foreign provider. This is usually a very bad idea. Quality of service should be dictated by the business requirements, and business requirements are rarely flexible enough to switch back and forth between "Exactly-once" and other service levels. Using the QOS Degradation Allowed flag means that no one can predict which quality of service WebLogic Server will be using at runtime.

Once you've configured XA for the resources involved in an application, how can you determine that everything is working properly? Under normal conditions, where all resources are available and operating correctly, a non-XA–enabled application will behave exactly the same as an XA-enabled one. XA proves its value when an application encounters unexpected situations. The test plan for an application should include scenarios where each resource is unavailable. Testing should also evaluate what happens when a resource becomes unavailable in the middle of processing transactions. Intentionally killing a BEA WebLogic Server instance, causing a duplicate key error, or restarting a database simulates situations that can happen in production. If XA has been properly configured, all resources should complete or roll back the same transactions.

By now two things should be clear: XA transactions are needed more often than most developers realize, and XA is very easy to configure within BEA WebLogic Server. Always evaluate the configuration based on the application's business requirements and then choose the appropriate settings to make sure that transactions behave in the way they should.

More Stories By Wes Hewatt

Wes Hewatt has over fourteen years of experience designing and deploying mission critical applications for Fortune 1000 companies. As a Senior Systems Engineer for BEA Systems, Mr. Hewatt works with BEA's customers to develop J2EE applications for the WebLogic Platform. He specializes in web services and integration technologies.


Most Recent Comments
Thomas 09/27/07 09:44:01 AM EDT

I am pretty new to this subject, but I don't get this sentence:

"This allows the transaction manager to work with a non-XA resource, but normally only one XA resource per transaction is allowed."

Shouldn't this be "but normally only one non-XA resource per transaction is allowed"?

Otherwise the "one-phase-commit" optimization would only be applicable in the case where you have two resources, one XA and one non-XA. But there is no problem expanding to as many XA resources as you like, because they all support the two-phase commit, and I think this is done with the XA resources also in the one-phase commit, because they respond to the transaction manager's prepare request. The only thing that's different from a "real" two-phase commit is the non-XA resource, which cannot be "prepared to commit" like the other XA resources. Is this correct?

Steve Kaminski 07/17/07 09:14:10 PM EDT

Hi, I am told that if you have a single XA data source (using an XA driver) and you initiate a transaction on that data source that involves only one resource, WLS JTA will perform the transaction as a local transaction and as such avoid the overhead involved in an XA (global) transaction. Is this true? And if so, why would you ever need one data source for non-XA and another for XA transactions, as you suggest in this article?

Very good article by the way, well done on a difficult subject. You have a skill.

Sudheer Bandaru 05/14/04 12:04:56 PM EDT

The article was excellent and well designed. It covers many points that might be helpful in resolving errors and warnings in an application, and it clearly explains to a new user when and how to use XA drivers.

Gian Luca 05/12/04 06:26:52 AM EDT

The article is well done! But we were more specifically interested in what happens in the case of system crashes or database failures. We encountered problems with in-doubt transactions (Oracle 8/9) not being recovered by BEA's Transaction Recovery Service! Do you have any information about this?

Gian Luca Paloni
