
Acid Reign

The transaction processing monitor is dead; long live the transaction processing monitor

In browsing around the Web, as one occasionally does in a free nanosecond, I read an interesting article about two-phase commit transactions by Gregor Hohpe of ThoughtWorks ("Your Coffee Shop Does Not Use Two-Phase Commit"). Gregor comes at the subject from the opposite direction to the one I usually take in this column, since I am of a TP persuasion, but he covers the same arguments I have explored in the past and reaches similar conclusions.

"Your Coffee Shop Doesn't Use Two-Phase Commit": Should You?
Briefly, two-phase commit can be costly from a performance standpoint - in terms of both end-to-end transaction time and throughput - so you need to cost-justify your decision whenever you use it. On a finer point of detail, I would take issue with the article's implication that there is a straight choice between synchronous processing and two-phase commit. Most of the transaction systems I have ever seen involve access to one database and one asynchronous reliable queue in the context of a single transaction - guaranteeing, as an atomic unit, that the updates happen and that a message will be delivered to the next step in the process at some time in the future - but that's another story.
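To make the mechanics concrete, here is a minimal sketch of the two-phase commit protocol itself - a prepare (voting) phase followed by a commit or rollback phase across all participants. The classes and method names are illustrative only, not any transaction manager's real API; the point is simply that a database and a reliable queue enlisted in the same transaction either both commit or both roll back.

```python
# Illustrative sketch of two-phase commit across a database and a message
# queue. All names here are hypothetical, not a real vendor's API.

class Participant:
    def __init__(self, name):
        self.name = name
        self.state = "active"

    def prepare(self):
        # Phase one: persist enough state to guarantee that a later
        # commit will succeed, then vote "yes".
        self.state = "prepared"
        return True

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled_back"


def two_phase_commit(participants):
    # Phase one: ask every participant to prepare (collect the votes).
    if all(p.prepare() for p in participants):
        # Phase two: every vote was "yes", so commit everywhere.
        for p in participants:
            p.commit()
        return "committed"
    # Any "no" vote aborts the whole transaction.
    for p in participants:
        p.rollback()
    return "rolled_back"


db = Participant("orders-db")
queue = Participant("fulfilment-queue")
outcome = two_phase_commit([db, queue])
```

The throughput cost discussed above lives in the gap between the two phases: each participant holds its locks from `prepare` until the coordinator's final decision arrives.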

So, back from my digression: the reason I wanted to write a follow-on to this article is that it illustrates a good point about the benefits of transactions and transaction management, particularly in the broadest sense of the term. To briefly summarize it so that the rest of my article makes sense, the thesis is that Starbucks accepts coffee requests into its main business system (the barista) asynchronously in order to maximize the potential throughput of coffee from the shop, and hence maximize revenue. The potential cost of this optimization for the "happy day" scenario - the assumption that all is going forward with no errors - is the odd need to pause and hold up the line while wrong drinks are disposed of and remade, or money is refunded to unhappy (and still thirsty) punters. Indeed, if that is the only cost, then the asynchronous case is clearly the correct design: throughput is maximized, and the cost of unwinding the odd failure is outweighed by the less costly, less complex system we have built.
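The coffee-shop model above can be sketched in a few lines: the front of the line never blocks on the back, and the occasional failure is compensated (a refund) rather than coordinated. Everything here - the function names, the refund-only compensation policy - is an assumption for illustration, not a claim about how any real shop or system works.

```python
# Hedged sketch of the asynchronous "coffee shop" model: accept requests
# immediately, process them later, and compensate the odd failure instead
# of coordinating every step synchronously. All names are made up.

from collections import deque

orders = deque()   # the row of cups on top of the machine
refunds = []       # compensations for drinks that went wrong


def accept_order(name, drink):
    # The cashier never waits for the barista: enqueue, take next customer.
    orders.append({"name": name, "drink": drink})


def barista_process(fail_for=()):
    # Work through the queue; a wrong drink triggers a compensation
    # (refund) rather than blocking the line.
    served = []
    while orders:
        order = orders.popleft()
        if order["name"] in fail_for:
            refunds.append(order)   # unwind the odd failure
        else:
            served.append(order)    # the happy-day path
    return served


accept_order("Alice", "latte")
accept_order("Bob", "espresso")
served = barista_process(fail_for={"Bob"})
```

The design choice is exactly the trade described above: acceptance throughput is maximized, and the cost is an explicit compensation path for the failures.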

However, back in the real world away from analogy land, this is often a better "project phase one" argument than a "deployment lifetime" argument. To return to the analogy by way of illustration, imagine that our coffee production line is now in place and we are happily raking in money for strangely named coffees, when suddenly we have a good idea: how about improved quality of service for regular customers? For those really important coffee drinkers - rendered hyper-impatient by the caffeine they are wired on - we want to say how long it will be until they can expect their coffee.

This poses a problem. From the moment a name was written on the empty coffee cup and it was queued on top of the coffee machine, we lost track of it. We are relying on the customer to hang about until his name is called, when customer and cup (now replete with coffee) are reunited. To track the cup in the queue, we need a new strategy - perhaps a yellow cup for the priority customer, so we can watch the yellow cups progress up the queue. That's fine until we have too many privileged customers and can no longer distinguish which yellow cup we are looking for. What now? Suddenly we start to wish we had a nice synchronous coffee production process, so we knew where we stood. Of course, the analogy is starting to creak a bit here - the time it takes to produce a coffee is relatively short, and there is only one business system in the process, so this is not really that great an issue (which is a pity, because I was just about to propose attaching RFID tags to each cup, to allow us to associate them with their intended recipients…) - but there is a core of truth here.
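The "yellow cup" strategy generalizes to tagging each queued request with a unique correlation ID - the software equivalent of the RFID tag - so that its queue position, and hence a rough ETA, can be reported at any time. The following sketch is purely illustrative: the names and the fixed per-drink service time are assumptions, not measurements.

```python
# Sketch of the "yellow cup" problem solved with explicit correlation IDs:
# tag each queued request so its position (and a rough ETA) can be
# reported. Names and the fixed per-drink time are assumptions.

import itertools
from collections import deque

SECONDS_PER_DRINK = 90      # assumed average service time per order
_ids = itertools.count(1)
queue = deque()


def place_order(name, drink):
    order_id = next(_ids)   # the "RFID tag" on the cup
    queue.append({"id": order_id, "name": name, "drink": drink})
    return order_id


def eta_seconds(order_id):
    # Position in the queue tells the caffeinated VIP how long to wait.
    for position, order in enumerate(queue):
        if order["id"] == order_id:
            return (position + 1) * SECONDS_PER_DRINK
    return None             # already served, or never ordered


place_order("Alice", "latte")
vip = place_order("Bob", "flat white")
wait = eta_seconds(vip)
```

Unlike the yellow cup, the ID scheme does not run out when there are many privileged customers - but note that it only works because someone recorded the tag at enqueue time, which is exactly the visibility the fire-and-forget design threw away.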

Attach an RFID Tag to Each Cup
Because of their apparent simplicity, asynchronous queuing-based systems are widely deployed. A widely felt pain they create, however, is the lack of visibility into "in-flight" business transactions. There is a clear and present demand for "executive dashboard" facilities, so that managers have some insight into how the systems they are responsible for are running - and some ability to foresee and forestall problems. This usually causes the nice, simple, phase-one MOM-based systems to be glued to a lot of information-gathering infrastructure (usually, more queues), with some kind of event-correlation machinery on the back end to give an indication of what is passing through the system and what looks unusual or possibly erroneous. Where did that original elegant simplicity go?

One oft-overlooked advantage of building systems with a transaction manager coordinating them is that we get an out-of-the-box central place that we can go to see which business event has touched which resource, and what the outcome is expected to be.

That said, we still have the bottleneck that the enforcement of ACID properties places on our throughput (chiefly because of the database locks that ACID implies). It is here that I can mount another of my favorite hobbyhorses. One way to get the benefits of central coordination of transactions (in the loose sense, simply meaning correlated business activities) without incurring the penalties of data locking and contention is to relax the (technical) strictures of the ACID rules and allow a more easygoing, business-reality-focused view of how transactions correlate business events. This is the idea behind "next-generation" transaction concepts such as cohesions, which first surfaced in the OASIS BTP standard and are now informing the debate around the proposed WS-BusinessActivity standard.
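A loose sketch of the cohesion idea follows: several participants are enrolled, but at close time the application confirms only the subset that makes business sense and cancels the rest, relying on compensation rather than held locks. The classes and method names are illustrative assumptions, not the actual BTP or WS-BusinessActivity APIs.

```python
# Loose sketch of a BTP-style "cohesion": the application, not an
# all-or-nothing protocol, picks which enrolled participants to confirm.
# No shared data locks are held while the business decision is pending.
# Class and method names are illustrative, not any standard's real API.

class BusinessParticipant:
    def __init__(self, name):
        self.name = name
        self.outcome = None

    def confirm(self):
        self.outcome = "confirmed"

    def cancel(self):
        # Compensation logic (e.g. releasing a tentative booking)
        # would run here instead of a lock-based rollback.
        self.outcome = "cancelled"


class Cohesion:
    def __init__(self):
        self.participants = {}

    def enroll(self, participant):
        self.participants[participant.name] = participant

    def close(self, confirm_set):
        # Unlike ACID atomicity, the outcome need not be uniform:
        # confirm the chosen subset, cancel (compensate) the rest.
        for name, p in self.participants.items():
            if name in confirm_set:
                p.confirm()
            else:
                p.cancel()
        return {name: p.outcome for name, p in self.participants.items()}


cohesion = Cohesion()
for name in ("airline", "hotel", "car-hire"):
    cohesion.enroll(BusinessParticipant(name))
result = cohesion.close(confirm_set={"airline", "hotel"})
```

The trade, of course, is that each participant must be able to compensate a tentative action later - the business-reality-focused view of correctness that the column argues for.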

Have Your Coffee and Drink It Too
Maybe, when these transaction standards mature, we will at last be able to have our coffee and drink it too?


More Stories By Peter Holditch

Peter Holditch is a senior presales engineer in the UK for Azul Systems. Prior to joining Azul he spent nine years at BEA Systems, going from being one of their first Professional Services consultants in Europe to finishing up as a principal presales engineer. He has an R&D background (having originally worked on BEA's Tuxedo product) and his technical interests are in high-throughput transaction systems. Off the pitch, Peter likes to brew beer, build furniture, and undertake other ludicrously ambitious projects - but (generally) not all at the same time!

