XA Transactions

Needed More Often Than You Think

Most developers have at least heard of XA, the standard protocol that allows coordination, commitment, and recovery between transaction managers and resource managers.

Products such as CICS, Tuxedo, and even BEA WebLogic Server act as transaction managers, coordinating transactions across different resource managers. Typical XA resources are databases, message queuing products such as JMS or WebSphere MQ, mainframe applications, ERP packages, or anything else that can be coordinated with the transaction manager. XA is used to coordinate what is commonly called a two-phase commit (2PC) transaction. The classic example of a 2PC transaction is when two different databases need to be updated atomically. Most people think of something like a bank that has one database for savings accounts and a different one for checking accounts. If a customer wants to transfer money between his checking and savings accounts, both databases have to participate in the transaction or the bank risks losing track of some money.
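
To make the bank example concrete, here is a minimal sketch of such a transfer under the Java Transaction API (JTA). The JNDI names (jdbc/SavingsXADS, jdbc/CheckingXADS), table names, and SQL are hypothetical; the point is that both XA data sources are enlisted in the single transaction demarcated by UserTransaction, so either both updates commit or both roll back.

import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class TransferExample {
    public void transfer(String customerId, double amount) throws Exception {
        InitialContext ctx = new InitialContext();

        // Hypothetical JNDI names for two XA-enabled data sources
        DataSource savingsDS = (DataSource) ctx.lookup("jdbc/SavingsXADS");
        DataSource checkingDS = (DataSource) ctx.lookup("jdbc/CheckingXADS");

        UserTransaction tx =
            (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        tx.begin();
        Connection savings = null;
        Connection checking = null;
        try {
            // Both connections are enlisted in the same XA transaction
            savings = savingsDS.getConnection();
            checking = checkingDS.getConnection();

            PreparedStatement debit = savings.prepareStatement(
                "UPDATE savings SET balance = balance - ? WHERE customer_id = ?");
            debit.setDouble(1, amount);
            debit.setString(2, customerId);
            debit.executeUpdate();
            debit.close();

            PreparedStatement credit = checking.prepareStatement(
                "UPDATE checking SET balance = balance + ? WHERE customer_id = ?");
            credit.setDouble(1, amount);
            credit.setString(2, customerId);
            credit.executeUpdate();
            credit.close();

            // Two-phase commit across both databases
            tx.commit();
        } catch (Exception e) {
            tx.rollback();   // neither database keeps the change
            throw e;
        } finally {
            if (savings != null) savings.close();
            if (checking != null) checking.close();
        }
    }
}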

The problem is that most developers think, "Well, my application uses only one database, so I don't need to use XA on that database." This may not be true. The question that should be asked is, "Does the application require shared access to multiple resources that need to ensure the integrity of the transaction being performed?" For instance, does the application use J2EE Connector Architecture adapters, the BEA WebLogic Server Messaging Bridge, or the Java Message Service (JMS)? If the application needs to update the database and any of these other resources in the same transaction, then both the database and the other resource need to be treated as XA resources.

In addition to Web or EJB applications that may touch different resources, XA is often needed when building Web services or BEA WebLogic Integration applications. Integration applications often span disparate resources and involve asynchronous interfaces. As a result, they frequently require 2PC. An extremely common use case for WebLogic Integration that calls for XA is to pull a message from WebSphere MQ, do some business processing with the message, make updates to a database, and then place another message back on MQ. Usually this whole process has to occur in a guaranteed and transactional manner. There is a tendency to shy away from XA because of the performance penalty it imposes. Still, if transaction coordination across multiple resources is needed, there is no way to avoid XA. If the requirements for an application include phrases such as "persistent messaging with guaranteed once and only once message delivery," then XA is probably needed.

Figure 1 shows a common, though extremely simplified, BEA WebLogic Integration process definition that needs to use XA. A JMS message is received to start the process. Assume the message is a customer order. The order then has to be placed in the order shipment database and placed on another message queue for further processing by a legacy billing application. Unless XA is used to coordinate the transaction between the database and JMS, we risk updating the shipment database without updating the billing application. This could result in the order being shipped, but the customer might never be billed.

Once you've determined that your application does in fact need to use XA, how do you make sure it is used correctly? Fortunately, J2EE and the Java Transaction API (JTA) hide the implementation details of XA. Coding changes are not required to enable XA for your application. Using XA properly is a matter of configuring the resources that need to be enrolled in the same transaction. Depending on the application, the BEA WebLogic Server resources that most often need to be configured for XA are connection pools, data sources, JMS Servers, JMS connection factories, and messaging bridges. All of the configuration needed on the WebLogic side can be done from the WebLogic Server Console.
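
For example, with container-managed transactions the only "transaction code" is a deployment descriptor entry. The bean name below is hypothetical; the idea is that the container starts a JTA transaction around each business method, and any XA-configured data sources, JMS connection factories, or adapters the bean touches are enlisted automatically:

<!-- Hypothetical fragment of ejb-jar.xml -->
<assembly-descriptor>
  <container-transaction>
    <method>
      <ejb-name>OrderProcessorBean</ejb-name>
      <method-name>*</method-name>
    </method>
    <trans-attribute>Required</trans-attribute>
  </container-transaction>
</assembly-descriptor>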

Before worrying about the WebLogic configuration for XA, we have to ensure that the resources we want to access are XA enabled. Check with the database administrator, the WebSphere MQ administrator, or whoever is in charge of the resources that are outside WebLogic. These resources do not always enable XA by default, nor do all resources support the X/Open XA interface, which is required to truly do XA transactions. For example, some databases require that additional scripts be run in order to enable XA.

For those resources that do not support XA at all, some transaction managers allow for a "one-phase" optimization. In a one-phase optimization, the transaction manager issues a "prepare to commit" command to all of the XA resources. If all of the XA resources respond affirmatively, the transaction manager will commit the non-XA resource. The transaction manager will then commit all of the XA resources. This allows the transaction manager to work with a non-XA resource, but normally only one non-XA resource per transaction is allowed. There is a small chance that something will go wrong after committing the non-XA resource and before the XA resources all commit, but this is the best alternative if a resource just doesn't support XA.
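
The ordering is easier to see in code. The sketch below is purely illustrative (it is not WebLogic's transaction manager), but it shows the sequence just described using the standard javax.transaction.xa.XAResource interface, and where the small window of risk lies:

import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;
import java.sql.Connection;

// Illustrative sketch only: the commit ordering a transaction manager uses
// for the one-phase optimization with a single non-XA resource.
public class OnePhaseOptimizationSketch {
    public void commitTransaction(Xid xid, XAResource[] xaResources,
                                  Connection nonXaConnection) throws Exception {
        // Phase 1: ask every true XA resource to prepare
        for (int i = 0; i < xaResources.length; i++) {
            xaResources[i].prepare(xid);  // throws XAException if a resource votes no
        }

        // All XA resources voted yes, so commit the single non-XA resource
        // with an ordinary local commit
        nonXaConnection.commit();

        // The window of risk described above: a crash right here leaves the
        // non-XA resource committed while the XA resources are only prepared

        // Phase 2: commit the prepared XA resources
        for (int i = 0; i < xaResources.length; i++) {
            xaResources[i].commit(xid, false);
        }
    }
}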

Connection pools are where most people start configuring WebLogic for XA. The connection pool needs to use an XA driver. Most database vendors provide XA drivers for their databases. BEA WebLogic Server 8.1 SP2 ships with a number of XA drivers for Oracle, DB2, Informix, SQL Server, and Sybase. We need to ensure that the Driver classname on the connection pool page of the BEA WebLogic Console is in fact an XA driver. When using the configuration wizards in BEA WebLogic Server 8.1, the wizards always note which drivers are XA enabled.

When more than one XA driver is available for the database involved, be sure to run some benchmarks to determine which driver gives the best performance. Sometimes different drivers for the same database implement XA in completely different ways. This leads to wide variances in performance. For example, the Oracle 9.2 OCI Driver implements XA natively, while the Oracle 9.2 Thin Driver relies on stored procedures in the database to implement XA. As a result, the Oracle 9.2 OCI driver generally performs XA transactions much faster than the Thin driver. Oracle's newest Type 4 driver, the 10g Thin Driver, also implements XA natively and is backwards compatible with some previous versions of the Oracle database. Taking the time to fully evaluate alternative drivers can lead to significant performance improvements.

If only some of the database access needs to be done under XA, create two connection pools for the same database. Use an XA driver on one of the connection pools and a non-XA driver on the other. This will avoid the performance overhead of XA transactions for database calls that don't need 2PC.
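
As a minimal sketch of that arrangement (the JNDI names are hypothetical), the application simply chooses the appropriate data source for each piece of work:

import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;

public class PoolSelectionExample {
    private DataSource xaDS;     // backed by the pool with the XA driver
    private DataSource plainDS;  // backed by the pool with the non-XA driver

    public void init() throws Exception {
        InitialContext ctx = new InitialContext();
        // Hypothetical JNDI names for the two data sources
        xaDS = (DataSource) ctx.lookup("jdbc/OrdersXADS");
        plainDS = (DataSource) ctx.lookup("jdbc/OrdersDS");
    }

    // Work that must be coordinated with JMS or another resource: the XA pool
    public Connection getTransactionalConnection() throws Exception {
        return xaDS.getConnection();
    }

    // Simple lookups that never span resources: the non-XA pool, no 2PC overhead
    public Connection getReportingConnection() throws Exception {
        return plainDS.getConnection();
    }
}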

Closely related to the connection pools are the data sources. In order to use XA, a data source must have the value "Honor Global Transactions" set to true. Prior to BEA WebLogic Server 8.1, these data sources appeared in the Console under the heading "Tx Data Sources". In 8.1, all data sources are under the same heading. Turning this flag on means that BEA WebLogic Server will be able to correctly handle transactions in a number of different scenarios. Setting this flag will ensure that WebLogic Server's JTA implementation will automatically enroll the data source in an XA transaction if it is required. There are also situations where this flag should be set even if your application does not use XA. The "Honor Global Transactions" flag should also be enabled if your application makes any explicit JTA calls, uses container-managed transactions with EJBs, or issues multiple SQL statements within the same transaction. In these non-XA situations, BEA WebLogic Server will ensure that the application retains the proper database connection from the connection pool to ensure transactional integrity.
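
That last point is worth a small example. In the sketch below (the JNDI name, table names, and SQL are hypothetical), a data source with a non-XA driver but with "Honor Global Transactions" enabled is used twice inside one JTA transaction; WebLogic hands back the same underlying pooled connection both times, so the two statements commit or roll back together:

import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class SingleDatabaseTxExample {
    public void placeOrder(String orderId) throws Exception {
        InitialContext ctx = new InitialContext();
        // Hypothetical data source: non-XA driver, "Honor Global Transactions" on
        DataSource ds = (DataSource) ctx.lookup("jdbc/OrdersDS");
        UserTransaction tx =
            (UserTransaction) ctx.lookup("java:comp/UserTransaction");

        tx.begin();
        try {
            // First getConnection() inside the JTA transaction
            Connection c1 = ds.getConnection();
            PreparedStatement ps1 =
                c1.prepareStatement("INSERT INTO orders (order_id) VALUES (?)");
            ps1.setString(1, orderId);
            ps1.executeUpdate();
            ps1.close();
            c1.close();

            // Second getConnection() in the same transaction returns the same
            // underlying connection, so both inserts share one unit of work
            Connection c2 = ds.getConnection();
            PreparedStatement ps2 =
                c2.prepareStatement("INSERT INTO order_audit (order_id) VALUES (?)");
            ps2.setString(1, orderId);
            ps2.executeUpdate();
            ps2.close();
            c2.close();

            tx.commit();
        } catch (Exception e) {
            tx.rollback();
            throw e;
        }
    }
}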

A second flag on the data source page that is occasionally used is the "Emulate Two-Phase Commit for Non-XA Driver" setting. This flag should only be used if an XA driver cannot be obtained for the database. When this flag is on, a one-phase optimization is used. BEA WebLogic Server will first issue the "prepare to commit" command to the XA resources, commit the database that has emulation enabled, and then commit the resources in the transaction that are XA enabled. As long as nothing goes wrong, the data will still be consistent.

There is a potential that WebLogic Server will commit the non-XA resource, only to have the commit of an XA resource fail. WebLogic Server allows only one data source using emulation per transaction. Given the availability of XA drivers for most databases and the potential for inconsistent data, this setting should rarely be used. Figure 2 shows a data source properly configured for XA.

Within BEA WebLogic Server, JMS Servers themselves are XA resources. There is nothing special that needs to be configured to XA enable a JMS Server, but there is one configuration item that seems counter-intuitive. When using a JDBC store for the JMS Server, you might think that the connection pool used by the JDBC store needs to use an XA driver. In fact, the exact opposite is true. The connection pool for the JDBC store should not use an XA driver. In this case, the XA resource is the JMS Server, not the database. For this reason, a JMS Server that uses a file store is still capable of participating in an XA transaction. The decision about whether to use a file store or a JDBC store for a JMS Server should not be based on whether or not an application will need to use XA.

The next step to ensure that you are using XA with your JMS-based application is to use an XA connection factory. Again, the application code does not change, but a configuration setting in the BEA WebLogic console needs to be checked. After creating a new connection factory, you need to go to the "Transactions" tab and check "XA Connection Factory Enabled". Changing this value will require a server restart. If only some of the application's work with JMS needs to use XA, you may want to create another connection factory that does not use XA. Figure 3 shows a JMS connection factory configured for XA transactions.
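
Putting the pieces together, the sketch below mirrors the Figure 1 scenario: a database insert and a JMS send inside one JTA transaction. The JNDI names, queue, and SQL are hypothetical; the JMS code itself is ordinary, and because the connection factory is XA enabled, WebLogic enlists the JMS session alongside the XA data source:

import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class ShipAndBillExample {
    public void shipAndBill(String orderId) throws Exception {
        InitialContext ctx = new InitialContext();
        // Hypothetical JNDI names; the connection factory has
        // "XA Connection Factory Enabled" checked in the console
        QueueConnectionFactory qcf =
            (QueueConnectionFactory) ctx.lookup("jms/BillingXAConnectionFactory");
        Queue billingQueue = (Queue) ctx.lookup("jms/BillingQueue");
        DataSource shipmentDS = (DataSource) ctx.lookup("jdbc/ShipmentXADS");
        UserTransaction tx =
            (UserTransaction) ctx.lookup("java:comp/UserTransaction");

        tx.begin();
        QueueConnection qc = null;
        Connection db = null;
        try {
            // Plain JMS code; the XA-enabled factory lets WebLogic enlist the
            // session in the surrounding JTA transaction
            qc = qcf.createQueueConnection();
            QueueSession qs = qc.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = qs.createSender(billingQueue);
            TextMessage msg = qs.createTextMessage("BILL:" + orderId);

            db = shipmentDS.getConnection();
            PreparedStatement ps = db.prepareStatement(
                "INSERT INTO shipments (order_id) VALUES (?)");
            ps.setString(1, orderId);
            ps.executeUpdate();
            ps.close();

            sender.send(msg);

            // Both the database insert and the JMS send commit, or neither does
            tx.commit();
        } catch (Exception e) {
            tx.rollback();
            throw e;
        } finally {
            if (db != null) db.close();
            if (qc != null) qc.close();
        }
    }
}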

The last resource that deserves mention is the messaging bridge, which was originally introduced in BEA WebLogic Server 7.0 and is intended to make it easier to integrate WebLogic JMS with foreign JMS providers. The messaging bridge acts as a bidirectional store-and-forward mechanism to transfer messages back and forth between WebLogic JMS and another messaging product, such as WebSphere MQ. WebLogic applications do not interact directly with the messaging bridge. Instead, they interact in a normal manner with the local WebLogic JMS queues or topics. The local queue or topic is then bridged to the foreign JMS provider. However, there are several configuration settings for the messaging bridge that need to be set correctly to ensure "guaranteed once and only once" delivery between WebLogic and the other product. Some of the settings are on the JMS Bridge Destination and some of the settings are on the bridge itself. These settings affect whether or not BEA WebLogic Server will use XA transactions when it transfers messages.

When configuring a Bridge Destination, there are three settings that control whether or not WebLogic Server will treat the destination as an XA resource: the adapter, the Adapter JNDI Name, and the Connection Factory JNDI Name. BEA WebLogic Server uses J2EE Connector Architecture adapters to communicate with the Bridge Destinations. The adapter that is used must support XA. BEA WebLogic Server provides a generic XA adapter named jms-xa-adp.rar. The Adapter JNDI Name for this is eis.jms.WLSConnectionFactoryJNDIXA. Finally, the foreign JMS Server must have an XA-enabled connection factory, and the name of this connection factory is placed in the Connection Factory JNDI Name field.

The messaging bridge itself has different qualities of service available: Exactly-Once, Atmost-once, and Duplicate-okay. If "guaranteed once and only once" delivery is a requirement for the application, then the only acceptable setting for Quality of Service on the messaging bridge configuration page is "Exactly-Once". The QOS Degradation Allowed flag should also be unchecked. Checking this box allows BEA WebLogic Server to default to a lower quality of service if it is unable to get an XA connection to the foreign provider. This is usually a very bad idea. Qualities of service should be dictated by the business requirements. Business requirements are rarely flexible enough to switch back and forth between "Exactly-Once" and other service levels. Using the QOS Degradation Allowed flag means that no one can predict which quality of service WebLogic Server will be using at runtime.

Once you've configured XA for the resources involved in an application, how can you determine that everything is working properly? Under normal conditions, where all resources are available and operating correctly, a non-XA–enabled application will behave exactly the same as an XA-enabled one. XA proves its value when an application encounters unexpected situations. The test plan for an application should include scenarios where each resource is unavailable. Testing should also evaluate what happens when a resource becomes unavailable in the middle of processing transactions. Intentionally killing a BEA WebLogic Server instance, causing a duplicate key error, or restarting a database simulates situations that can happen in production. If XA has been properly configured, all resources should complete or roll back the same transactions.

Conclusion
By now two things should be clear. XA transactions are needed more often than most developers realize, and XA is very easy to configure within BEA WebLogic Server. Always evaluate the configuration based on the application's business requirements and then choose the appropriate settings to make sure that transactions behave in the way they should.

About the Author

Wes Hewatt has over fourteen years of experience designing and deploying mission critical applications for Fortune 1000 companies. As a Senior Systems Engineer for BEA Systems, Mr. Hewatt works with BEA's customers to develop J2EE applications for the WebLogic Platform. He specializes in web services and integration technologies.
