Freedom, Disasters, and Getting Something for Nothing

In most large-scale "mission-critical" systems, resistance to failure is high on the list of requirements. With the world living in fear of violent destruction post 9/11, it is increasingly common for "failure" in this context to mean the loss of an entire data-processing facility.

Providing a full failover site gives some good food for thought about the best way to architect a solution.

Having decided to split data processing across two sites, you then need to determine whether the sites operate in a dual-running mode, where both share the transaction load during normal operation, or in a master/backup configuration, where all processing occurs on the "master" site and fails over entirely to a backup site should disaster strike. In these parsimonious times, it is usual to want dual operation to cut down on the cost of the solution, so let's think about which parts of J2EE we need to employ to get dual sites running in parallel, with each able to fail over to the other in a disaster.

Before starting down the road of technology solutions, there is an architectural choice to be made, one that may in turn be driven by further business requirements: namely, how "synchronized" does the data on the two sites have to be during normal operation? Is it acceptable for the mirrored data on the two sites to be inconsistent, and if so, for how long?

If the two sites must be completely synchronized, then the good old transaction manager comes into play - you make all your database updates twice in the context of a JTA transaction, and the two databases are guaranteed to be synchronized to within a whisker of time. Of course, when disaster strikes and one of the databases becomes unavailable, one of the dual updates must be dropped - otherwise the whole system won't be able to function, since a transaction cannot commit if only one of the two resources it touched is available. So, you code some logic such that the application ceases to send the remote updates when the remote site is unavailable.

Of course, the apparent disappearance of the remote site may be caused by someone digging a trench through your fiber-optic cable rather than by a real disaster. In that case the systems are said to be partitioned, and after some interval the two (both of which have been gaily processing on the assumption that the other has died) will be reunited. At this point you have a bit of a headache, since the databases are now inconsistent in ways that need detecting and reconciling.
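To make the synchronous approach concrete, here is a minimal sketch, assuming a container-managed environment and two XA-capable DataSources; the JNDI names jdbc/localSiteDS and jdbc/remoteSiteDS and the account table are hypothetical, and the logic for dropping the remote update once the far site is declared dead is deliberately omitted.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class DualSiteUpdater {

    // Hypothetical JNDI names; both DataSources must be XA-capable so the
    // transaction manager can run a two-phase commit across them.
    private static final String LOCAL_DS  = "jdbc/localSiteDS";
    private static final String REMOTE_DS = "jdbc/remoteSiteDS";

    public void applyUpdate(String accountId, double amount) throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction utx =
                (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource local  = (DataSource) ctx.lookup(LOCAL_DS);
        DataSource remote = (DataSource) ctx.lookup(REMOTE_DS);

        utx.begin();
        try {
            // The same business update is applied to both replicas inside one
            // JTA transaction, so either both commit or neither does.
            String sql = "UPDATE account SET balance = balance + ? WHERE id = ?";
            try (Connection lc = local.getConnection();
                 Connection rc = remote.getConnection();
                 PreparedStatement lps = lc.prepareStatement(sql);
                 PreparedStatement rps = rc.prepareStatement(sql)) {
                lps.setDouble(1, amount);
                lps.setString(2, accountId);
                lps.executeUpdate();

                rps.setDouble(1, amount);
                rps.setString(2, accountId);
                rps.executeUpdate();
            }
            utx.commit();
        } catch (Exception e) {
            // Simplified error handling for the sketch: any failure rolls
            // back both updates.
            utx.rollback();
            throw e;
        }
    }
}
```

Because both connections come from XA DataSources, the transaction manager drives a two-phase commit across them - which, as we will see, is exactly where the transaction log comes from.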

Clearly, what I have described is a lot of design - and consequent lines of code - of an infrastructure nature. I guess that's okay, since the business requirements are also of an infrastructure nature. It's just as well that you had the transaction manager at hand, though - just think how much more you'd have had to worry about without it!

Usually, because of the onerous nature of maintaining such tight synchronization between sites, the requirements relax somewhat and some window of inconsistency is tolerated between the replica databases on the two sites, with updates on one propagated to the other asynchronously. In this scenario, the JTA transaction manager provides transactional consistency not between two databases, but between the local database and a JMS queue, which is used to propagate updates to the remote site as and when it can. Temporary connectivity losses between the two sites are now less of a problem, because the requirements already tolerate the data being out of synch: when the connection is lost, messages queue up locally, and when the network comes back transmission resumes, putting much less pressure on the management of inconsistency between the databases. (Of course, in practice the lower pressure is a direct result of the looser consistency requirement.) The good news is that the transaction manager is still there to ensure that updates are only published via the queue once they are committed to the local database.
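A sketch of this looser-coupled variant follows, assuming an XA-capable DataSource and an XA-enabled JMS connection factory; the JNDI names jdbc/localSiteDS, jms/XAConnectionFactory, and jms/SiteReplicationQueue are hypothetical. Because the database update and the enqueue are enlisted in the same JTA transaction, the replication message only becomes visible to the remote site's consumer if the local update commits.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class AsyncReplicatingUpdater {

    public void applyUpdate(String accountId, double amount) throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction utx =
                (UserTransaction) ctx.lookup("java:comp/UserTransaction");

        // Hypothetical JNDI names; the DataSource and the JMS connection
        // factory must both be XA-capable for the enlistment to work.
        DataSource local = (DataSource) ctx.lookup("jdbc/localSiteDS");
        QueueConnectionFactory qcf =
                (QueueConnectionFactory) ctx.lookup("jms/XAConnectionFactory");
        Queue replicationQueue = (Queue) ctx.lookup("jms/SiteReplicationQueue");

        utx.begin();
        QueueConnection qc = null;
        try (Connection lc = local.getConnection()) {
            // 1. Apply the update to the local replica.
            try (PreparedStatement ps = lc.prepareStatement(
                    "UPDATE account SET balance = balance + ? WHERE id = ?")) {
                ps.setDouble(1, amount);
                ps.setString(2, accountId);
                ps.executeUpdate();
            }

            // 2. Enqueue the same update for the remote site. With an XA
            // connection factory the session is enlisted in the global JTA
            // transaction, so the message is only published if step 1 commits.
            qc = qcf.createQueueConnection();
            QueueSession qs =
                    qc.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = qs.createSender(replicationQueue);
            TextMessage msg = qs.createTextMessage(accountId + "," + amount);
            sender.send(msg);

            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        } finally {
            if (qc != null) {
                qc.close();
            }
        }
    }
}
```

A consumer on the remote site (a message-driven bean, say) would then dequeue each message and apply the corresponding update to its own database, again under a JTA transaction so that a message is only removed from the queue once the remote update has committed.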

In noncatastrophic failure cases, it is likely that the database that was running on the failed system will need to be failed over to the system that is still running (maybe we're talking about a CPU failure now, not total destruction of a facility). This clearly requires that the physical disks the database is stored on are available to both the primary and the backup machine - in the first instance, the database engine will need to be restarted on the secondary node so that it can recover its state and pick up managing the data again. Likewise, the storage backing the JMS queues will need to be failed over too.

It is at this point that we must count the cost of using JTA to keep updates between databases, or between queues and databases, consistent - it isn't magically giving you something for nothing. The transaction manager itself needs to keep a persistent record of what is going on; this record is held in the transaction log. Therefore, to fail over the queued case, not only do the persistent stores behind the queues and databases need to be moved across and recovered, but the transaction log needs to be moved too. Moving it is achieved the same way you moved the database files - dual-ported disks, storage area networks, whatever you have at hand. Once this is done, the databases and queues need to be recovered, and recovery must be run against the migrated transaction log to complete in-flight transactions that had already been decided, while aborting and cleaning up after those that never got to the point of being decided. WebLogic's console allows the administrator to migrate the transaction recovery service from a failed server to a healthy one; doing this is what completes the in-flight transactions according to the content of the migrated log file.

Conclusion
In conclusion, there are two observations to be made here. The first is that a complete solution to this type of requirement requires design and implementation at multiple levels - the storage hardware, DBMS, application server, and application logic all need to work together to provide support for graceful failover. The second point, which to some extent follows from the first, is that (as with all architectural design) you have a choice as to the level at which to implement things. A hardware clustering solution could fail over a disk and restart all the application-level facilities, but such solutions tend to be regarded as expensive and operate most easily in master/hot-standby mode. Databases offer replication techniques too; however, they suffer from the same laws of physics discussed above. Changes are replicated asynchronously, allowing for windows of data inconsistency, with the added problem that the code you need to write to resolve conflicting updates will be tied to the particular database engine you have - this kind of thing is way beyond what standards in the database arena specify. Also, since the replication happens behind the scenes, diagnosing problems and determining the application's desired state from a set of inconsistent logs requires extremely deep knowledge of database internals, if it is possible at all.

Finally, you can go with the application-server approach, which has the benefit of being portable across application servers, since it's based on standard programming interfaces, and also provides transparency as to what is happening with the replication since it is all "above the covers." The downside of the application-server approach, arguably, is that the transaction log is yet one more thing to manage and fail over. To that, I'd say that you get what you pay for in terms of application manageability and transparency.

In reality, any failover solution of this nature will be designed to use different capabilities provided at different levels in the architecture. The choice of what to do at which level cannot be made generically, since the detailed requirements of any given application vary so widely. What is certain is that somewhere, at some level in your solution, whatever shape it takes, there is a transaction manager (or something that smells like one) holding things together.

More Stories By Peter Holditch

Peter Holditch is a senior presales engineer in the UK for Azul Systems. Prior to joining Azul he spent nine years at BEA Systems, starting as one of their first Professional Services consultants in Europe and finishing up as a principal presales engineer. He has an R&D background (originally having worked on BEA's Tuxedo product) and his technical interests are in high-throughput transaction systems. "Off the pitch," Peter likes to brew beer, build furniture, and undertake other ludicrously ambitious projects - but (generally) not all at the same time!
