Freedom, Disasters, and Getting Something for Nothing

In most large-scale "mission critical" systems, resistance to failure is high on the list of requirements. With the world living in fear of violent destruction post 9/11, it has become common for "failure" in this context to mean the loss of an entire data-processing facility.

The provision of a full failover site offers some good food for thought about the best way to architect a solution.

Having decided to split data processing across two sites, you then need to determine whether the sites operate in a dual-running mode, in which both share the transaction load during normal operation, or in a master/backup configuration, in which all processing occurs on the "master" site and fails over entirely to a backup site should disaster strike. In these parsimonious times it is usual to want dual operation to cut down on the cost of the solution, so let's think about which parts of J2EE we need to employ to get dual sites running in parallel, each capable of failing over to the other in a disaster.

Before starting down the road of technology solutions, there is an architectural choice to be made, a choice which in its turn may be driven by further business requirements: namely, how "synchronized" does the data on the two sites have to be during normal operation? Is it acceptable for the mirrored data on the two sites to be inconsistent, and if so for what duration?

If the two sites must be completely synchronized, then the good old transaction manager comes into play: you make all your database updates twice in the context of a JTA transaction, and the two databases are guaranteed to be synchronized to within a whisker of time. Of course, when disaster strikes and one of the databases becomes unavailable, one of the dual updates must be dropped, otherwise the whole system won't be able to function - how can a transaction commit if only one of the two resources it touched is available? So, you code some logic such that the application ceases to send the remote updates when the remote site is unavailable.

Of course, the apparent disappearance of the remote site may be caused by someone digging a trench through your fiber-optic cable rather than by a real disaster. In that case the systems are said to be partitioned, and after some interval the two sites (both of which have been gaily processing on the assumption that the other has died) will be reunited. At this point you have a bit of a headache, since the databases are now inconsistent in ways that need detecting and reconciling.
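
To make the dual-update pattern concrete, here is a minimal sketch, assuming a J2EE container with two XA-capable datasources bound in JNDI. The JNDI names and the SQL are illustrative placeholders, not taken from any particular product or deployment; the point is simply that one JTA transaction spans both resources.

    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.Status;
    import javax.transaction.UserTransaction;
    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class DualSiteUpdater {
        public void applyUpdate(int accountId, double amount) throws Exception {
            InitialContext ctx = new InitialContext();
            UserTransaction tx =
                (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            // Hypothetical JNDI names for two XA-enabled datasources.
            DataSource local  = (DataSource) ctx.lookup("jdbc/LocalDB");
            DataSource remote = (DataSource) ctx.lookup("jdbc/RemoteDB");

            tx.begin();
            try {
                try (Connection lc = local.getConnection();
                     Connection rc = remote.getConnection()) {
                    String sql = "UPDATE account SET balance = balance + ? WHERE id = ?";
                    // Make the same update on both sites inside one transaction.
                    for (Connection c : new Connection[] { lc, rc }) {
                        try (PreparedStatement ps = c.prepareStatement(sql)) {
                            ps.setDouble(1, amount);
                            ps.setInt(2, accountId);
                            ps.executeUpdate();
                        }
                    }
                }
                tx.commit(); // two-phase commit across both XA resources
            } catch (Exception e) {
                // If the failure happened before the commit decision, undo both.
                if (tx.getStatus() == Status.STATUS_ACTIVE) {
                    tx.rollback();
                }
                throw e;
            }
        }
    }

The logic the article describes for "ceasing to send the remote updates" would wrap the remote half of this in an availability check - which is exactly the infrastructure code referred to below.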

Clearly, what I have described is a lot of design - and consequent lines of code - of an infrastructure nature. I guess that's okay, since the business requirements are also of an infrastructure nature. It's just as well that you had the transaction manager at hand, though - just think how much more you'd have had to worry about without it!

Usually, because maintaining such tight synchronization between sites is so onerous, the requirements relax somewhat and some window of inconsistency is tolerated between the replica databases on the different sites, with updates on one propagated to the other asynchronously. In this scenario, the JTA transaction manager provides transactional consistency not between two databases, but between the local database and a JMS queue, which is used to propagate updates to the remote site as and when it can. Temporary connectivity losses between the two sites are now less of a problem, because the requirements already tolerate data being out of synch: when the connection is lost, messages queue locally, and when the network comes back, transmission resumes - putting much less pressure on the management of inconsistency between the databases. (Of course, in practice the lower pressure is a direct result of the looser consistency requirement.) The good news is that the transaction manager is still there to ensure that updates are only published via the queue once they are committed to the local database.
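
Here is a sketch of that looser variant in the same vein: one JTA transaction spanning the local database update and the JMS send. It assumes an XA-enabled connection factory so that the send enlists in the same transaction as the database work; again, the JNDI names and message format are invented for illustration.

    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.Status;
    import javax.transaction.UserTransaction;
    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class ReplicatingUpdater {
        public void applyAndPropagate(int accountId, double amount) throws Exception {
            InitialContext ctx = new InitialContext();
            UserTransaction tx =
                (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            // Hypothetical JNDI names for the local datasource, an XA-enabled
            // JMS connection factory, and the site-replication queue.
            DataSource local = (DataSource) ctx.lookup("jdbc/LocalDB");
            ConnectionFactory cf =
                (ConnectionFactory) ctx.lookup("jms/XAConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/SiteReplicationQueue");

            tx.begin();
            try {
                // 1. Apply the change to the local database.
                try (Connection c = local.getConnection();
                     PreparedStatement ps = c.prepareStatement(
                         "UPDATE account SET balance = balance + ? WHERE id = ?")) {
                    ps.setDouble(1, amount);
                    ps.setInt(2, accountId);
                    ps.executeUpdate();
                }
                // 2. Enqueue the same change for asynchronous delivery to the
                //    remote site; the send enlists in the same JTA transaction.
                javax.jms.Connection jmsConn = cf.createConnection();
                try {
                    Session session =
                        jmsConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(queue);
                    TextMessage msg =
                        session.createTextMessage(accountId + "," + amount);
                    producer.send(msg);
                } finally {
                    jmsConn.close();
                }
                tx.commit(); // row update and message commit or roll back together
            } catch (Exception e) {
                if (tx.getStatus() == Status.STATUS_ACTIVE) {
                    tx.rollback(); // the update is undone and the message never departs
                }
                throw e;
            }
        }
    }

A consumer on the remote site drains the queue and applies the updates there, at whatever pace connectivity allows - which is precisely the tolerated window of inconsistency.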

In noncatastrophic failure cases, it is likely that the database that was running on the failed system will need to be failed over to the system that is still running (maybe we're talking about a CPU failure now, not total destruction of a facility). This clearly requires that the physical disks the database is stored on be available to both the primary and the backup machine: in the first instance, the database engine will need to be restarted on the secondary node so that it can recover its state and pick up managing the data again. Likewise, the storage backing the JMS queues will need to be failed over too.

It is at this point that we must count the cost of using JTA to keep updates consistent between databases, or between queues and databases - it isn't magically giving you something for nothing. The transaction manager itself needs to keep a persistent record of what is going on; this record is held in the transaction log. Therefore, to fail over the queued case, not only do the persistent stores behind the queues and databases need to be moved across and recovered, but the transaction log needs to be moved too. Moving it is achieved the same way you moved the database files: dual-ported disks, storage area networks, whatever you have at hand. Once this is done, the databases and queues need to be recovered, and the migrated transaction log needs to be restarted so that in-flight transactions that were decided can complete, while those that never got to the point of being decided are aborted and cleaned up. WebLogic's console allows the administrator to migrate the transaction recovery service from a failed server to a healthy one; doing this is what completes the in-flight transactions according to the content of the migrated log file.
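
To illustrate what "restarting the migrated transaction log" actually involves, here is a self-contained sketch of a recovery pass under the presumed-abort convention that XA transaction managers generally follow. The log record type here is invented purely for illustration - WebLogic's real log format is internal to the product - and the println calls stand in for driving XAResource.commit or XAResource.rollback against the re-attached resources.

    import java.util.List;

    public class RecoveryPass {

        enum Outcome { COMMITTED, UNDECIDED }

        // One in-flight transaction as reconstructed from the migrated log
        // (an illustrative stand-in, not WebLogic's actual record format).
        record LoggedTx(String xid, Outcome outcome, List<String> resources) {}

        public static void recover(List<LoggedTx> log) {
            for (LoggedTx tx : log) {
                if (tx.outcome() == Outcome.COMMITTED) {
                    // The commit decision was logged before the failure:
                    // drive phase two forward on every enlisted resource.
                    tx.resources().forEach(r ->
                        System.out.println("commit " + tx.xid() + " at " + r));
                } else {
                    // No commit record survived: presume abort and clean up.
                    tx.resources().forEach(r ->
                        System.out.println("rollback " + tx.xid() + " at " + r));
                }
            }
        }

        public static void main(String[] args) {
            recover(List.of(
                new LoggedTx("tx-1", Outcome.COMMITTED,
                             List.of("localDB", "replicationQueue")),
                new LoggedTx("tx-2", Outcome.UNDECIDED,
                             List.of("localDB"))));
        }
    }

The essential point the sketch captures is that the log, not the resources, is the source of truth about which in-flight transactions were decided - which is why it must be migrated along with everything else.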

Conclusion
In conclusion, there are two observations to be made here. The first is that a complete solution to this type of requirement requires design and implementation at multiple levels: the storage hardware, DBMS, application server, and application logic all need to work together to provide support for graceful failover. The second point, which to some extent follows from the first, is that (as with all architectural design) you have a choice as to what level to implement things at. A hardware clustering solution could fail over a disk and restart all the application-level facilities, but such solutions tend to be regarded as expensive and operate most easily in master/hot-standby mode. Databases offer replication techniques too; however, they suffer from the same laws of physics discussed above. Changes are replicated asynchronously, allowing for windows of data inconsistency, with the added problem that the code you need to write to resolve conflicting updates will be tied to the particular database engine you have - this kind of thing is way beyond what standards in the database arena specify. Also, since the replication happens behind the scenes, diagnosing problems and determining the application's desired state from a set of inconsistent logs requires extremely deep knowledge of database internals, if it is possible at all.

Finally, you can go with the application-server approach, which has the benefit of being portable across application servers, because it is based on standard programming interfaces, and of providing transparency into what is happening with the replication, since it all occurs "above the covers." The downside of the application-server approach, arguably, is that the transaction log is yet one more thing to manage and fail over. To that, I'd say that you get what you pay for in terms of application manageability and transparency.

In reality, any failover solution of this nature will be designed to use different capabilities provided at different levels in the architecture. The choice of what to do at what level cannot be made in a generic way since all the detailed requirements of any given application will vary so widely. What is certain is that somewhere, at some level in your solution, whatever shape it takes, is a transaction manager (or something that smells like one) holding things together.

About the Author

Peter Holditch is a senior presales engineer in the UK for Azul Systems. Prior to joining Azul he spent nine years at BEA Systems, starting as one of their first Professional Services consultants in Europe and finishing up as a principal presales engineer. He has an R&D background (having originally worked on BEA's Tuxedo product) and his technical interests are in high-throughput transaction systems. Off the pitch, Peter likes to brew beer, build furniture, and undertake other ludicrously ambitious projects - but (generally) not all at the same time!


IoT & Smart Cities Stories
SYS-CON Events announced today that DatacenterDynamics has been named “Media Sponsor” of SYS-CON's 18th International Cloud Expo, which will take place on June 7–9, 2016, at the Javits Center in New York City, NY. DatacenterDynamics is a brand of DCD Group, a global B2B media and publishing company that develops products to help senior professionals in the world's most ICT dependent organizations make risk-based infrastructure and capacity decisions.
The standardization of container runtimes and images has sparked the creation of an almost overwhelming number of new open source projects that build on and otherwise work with these specifications. Of course, there's Kubernetes, which orchestrates and manages collections of containers. It was one of the first and best-known examples of projects that make containers truly useful for production use. However, more recently, the container ecosystem has truly exploded. A service mesh like Istio addr...
Nicolas Fierro is CEO of MIMIR Blockchain Solutions. He is a programmer, technologist, and operations dev who has worked with Ethereum and blockchain since 2014. His knowledge in blockchain dates to when he performed dev ops services to the Ethereum Foundation as one the privileged few developers to work with the original core team in Switzerland.
Cloud-enabled transformation has evolved from cost saving measure to business innovation strategy -- one that combines the cloud with cognitive capabilities to drive market disruption. Learn how you can achieve the insight and agility you need to gain a competitive advantage. Industry-acclaimed CTO and cloud expert, Shankar Kalyana presents. Only the most exceptional IBMers are appointed with the rare distinction of IBM Fellow, the highest technical honor in the company. Shankar has also receive...
Headquartered in Plainsboro, NJ, Synametrics Technologies has provided IT professionals and computer systems developers since 1997. Based on the success of their initial product offerings (WinSQL and DeltaCopy), the company continues to create and hone innovative products that help its customers get more from their computer applications, databases and infrastructure. To date, over one million users around the world have chosen Synametrics solutions to help power their accelerated business or per...
DXWorldEXPO LLC announced today that ICOHOLDER named "Media Sponsor" of Miami Blockchain Event by FinTechEXPO. ICOHOLDER gives detailed information and help the community to invest in the trusty projects. Miami Blockchain Event by FinTechEXPO has opened its Call for Papers. The two-day event will present 20 top Blockchain experts. All speaking inquiries which covers the following information can be submitted by email to [email protected] Miami Blockchain Event by FinTechEXPOalso offers sp...
Digital Transformation is much more than a buzzword. The radical shift to digital mechanisms for almost every process is evident across all industries and verticals. This is often especially true in financial services, where the legacy environment is many times unable to keep up with the rapidly shifting demands of the consumer. The constant pressure to provide complete, omnichannel delivery of customer-facing solutions to meet both regulatory and customer demands is putting enormous pressure on...
@DevOpsSummit at Cloud Expo, taking place November 12-13 in New York City, NY, is co-located with 22nd international CloudEXPO | first international DXWorldEXPO and will feature technical sessions from a rock star conference faculty and the leading industry players in the world. The widespread success of cloud computing is driving the DevOps revolution in enterprise IT. Now as never before, development teams must communicate and collaborate in a dynamic, 24/7/365 environment. There is no time t...
When talking IoT we often focus on the devices, the sensors, the hardware itself. The new smart appliances, the new smart or self-driving cars (which are amalgamations of many ‘things'). When we are looking at the world of IoT, we should take a step back, look at the big picture. What value are these devices providing. IoT is not about the devices, its about the data consumed and generated. The devices are tools, mechanisms, conduits. This paper discusses the considerations when dealing with the...
SYS-CON Events announced today that IoT Global Network has been named “Media Sponsor” of SYS-CON's @ThingsExpo, which will take place on June 6–8, 2017, at the Javits Center in New York City, NY. The IoT Global Network is a platform where you can connect with industry experts and network across the IoT community to build the successful IoT business of the future.