Freedom, Disasters, and Getting Something for Nothing


In most large-scale "mission critical" systems, high on the list of requirements is resistance to failure. With the world living in fear of violent destruction post 9/11, it is more common for the definition of "failure" in this context to be the loss of a whole data-processing facility.

Considering the provision of a full failover site gives some good food for thought about the best way to architect a solution.

Having decided to split data processing over two sites, you then need to determine whether the sites operate in a dual-running mode, whereby both share the transaction load during normal operation, or in a master/backup configuration, where all the processing occurs on the "master" site and fails over entirely to a backup site should disaster strike. In these parsimonious times, it is usual to want dual operation to cut down on the solution cost, so let's think about what parts of J2EE we need to employ to get dual sites running in parallel, with each having the capability to fail over to the other in disaster situations.

Before starting down the road of technology solutions, there is an architectural choice to be made, a choice which in its turn may be driven by further business requirements: namely, how "synchronized" does the data on the two sites have to be during normal operation? Is it acceptable for the mirrored data on the two sites to be inconsistent, and if so for what duration?

If the two sites must be completely synchronized, then the good old transaction manager comes into play - you make all your database updates twice in the context of a JTA transaction, and the two databases are guaranteed to be synchronized to within a whisker of time. Of course, when disaster strikes and one of the databases becomes unavailable, one of the dual updates must be dropped - otherwise the whole system won't be able to function - how can a transaction commit if only one of the two resources it touched is available? So, you code some logic such that the application ceases to send the remote updates when the remote site is unavailable. Of course, someone digging a trench through your fiber-optic cable, rather than a real disaster, may cause the apparent disappearance of the remote site. In that case the systems are said to be partitioned, and after some interval the two (both of which have been gaily processing on the assumption that the other has died) will be reunited. At this point, you have a bit of a headache since the databases are now inconsistent in ways that need detecting and reconciling.
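To make the fully synchronized case concrete, here is a minimal sketch assuming two XA-capable DataSources bound in JNDI under the illustrative names jdbc/LocalDS and jdbc/RemoteDS, with a bean-managed JTA transaction; a container-managed EJB method would achieve the same effect without the explicit begin/commit calls, and the SQL is purely illustrative.

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class DualSiteUpdater {

    // Apply the same update to the local and remote databases atomically.
    // JNDI names and SQL are illustrative; both DataSources must be XA-capable.
    public void applyUpdate(String accountId, double amount) throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction tx =
            (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource localDs  = (DataSource) ctx.lookup("jdbc/LocalDS");
        DataSource remoteDs = (DataSource) ctx.lookup("jdbc/RemoteDS");

        tx.begin();
        try {
            updateBalance(localDs,  accountId, amount);
            updateBalance(remoteDs, accountId, amount);
            tx.commit();   // two-phase commit spanning both databases
        } catch (Exception e) {
            tx.rollback(); // neither site sees the change
            throw e;
        }
    }

    private void updateBalance(DataSource ds, String accountId, double amount)
            throws Exception {
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(
                 "UPDATE account SET balance = balance + ? WHERE id = ?")) {
            ps.setDouble(1, amount);
            ps.setString(2, accountId);
            ps.executeUpdate();
        }
    }
}

If either update fails - including because the remote site is unreachable - the rollback leaves both databases untouched, which is precisely the behavior that forces you to write the partition-handling logic described above.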

Clearly, what I have described is a lot of design - and consequent lines of code - of an infrastructure nature. I guess that's okay, since the business requirements are also of an infrastructure nature. It's just as well that you had the transaction manager at hand, though - just think how much more you'd have had to worry about without it!

Usually, because of the onerous nature of maintaining such tight levels of synchronization between sites, the requirements relax somewhat and some window of inconsistency is tolerated between replica databases on different sites, with updates on one propagated to the other asynchronously. In this scenario, the JTA transaction manager provides transactional consistency not between two databases, but between the local database and a JMS queue, which will be used to propagate updates to the remote site as and when it can. In this case, temporary connectivity losses between the two sites are less of a problem because the requirements already tolerate data being out of sync. When the connection is lost, messages queue locally, and when the network comes back, transmission resumes - putting much less pressure on the management of inconsistency between the databases. (Of course, in practice the lower pressure is a direct result of the looser consistency requirement.) The good news is that the transaction manager is still there to ensure that updates are only published via the queue once they are committed to the local database.
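A minimal sketch of this looser arrangement follows, assuming an XA-capable JMS connection factory and queue bound under the illustrative JNDI names jms/XAConnectionFactory and jms/ReplicationQueue, alongside the local DataSource; the message format is also purely illustrative.

import java.sql.PreparedStatement;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class ReplicatedUpdater {

    // Commit the local database change and the replication message atomically.
    // JNDI names are illustrative; the connection factory must be XA-capable
    // so the JMS session enlists in the same JTA transaction as the database.
    public void applyAndReplicate(String accountId, double amount) throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction tx =
            (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource ds = (DataSource) ctx.lookup("jdbc/LocalDS");
        ConnectionFactory cf =
            (ConnectionFactory) ctx.lookup("jms/XAConnectionFactory");
        Queue replicationQueue = (Queue) ctx.lookup("jms/ReplicationQueue");

        tx.begin();
        try {
            // 1. Update the local database.
            try (java.sql.Connection con = ds.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                     "UPDATE account SET balance = balance + ? WHERE id = ?")) {
                ps.setDouble(1, amount);
                ps.setString(2, accountId);
                ps.executeUpdate();
            }

            // 2. Queue the same update for asynchronous delivery to the remote site.
            javax.jms.Connection jmsCon = cf.createConnection();
            try {
                Session session = jmsCon.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(replicationQueue);
                TextMessage msg = session.createTextMessage(accountId + ":" + amount);
                producer.send(msg);
            } finally {
                jmsCon.close();
            }

            // The message becomes visible to the remote site only if the
            // local database update commits as well.
            tx.commit();
        } catch (Exception e) {
            tx.rollback();
            throw e;
        }
    }
}

Because both resources enlist in the same JTA transaction, a crash between the database update and the message send cannot leave one without the other; the remote site simply consumes the queued messages and applies them whenever connectivity allows.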

In noncatastrophic failure cases, it is likely that the database that was running on the failed system will need to be failed over to the system that is still running (maybe we're talking about a CPU failure now, not total destruction of a facility). This will clearly require that the physical disks the database is stored on are available on both the primary and the backup machine - in the first instance, the database engine will need to be restarted on the secondary node so that it can pick up managing the data again after recovering its state. Likewise, the storage backing the JMS queues will need to be failed over too. It is at this point that we must count the cost of using JTA to keep updates between databases, or between queues and databases, consistent - it isn't magically giving you something for nothing. The transaction manager itself needs to keep a persistent record of what is going on; this record is held in the transaction log. Therefore, to do a failover for the queued case, not only do the persistent stores behind the queues and databases need to be moved across and recovered, but the transaction log needs to be moved too. Moving it is achieved the same way you moved the database files - dual-ported disks, storage area networks, whatever you have at hand. Once this is done, the databases and queues need to be recovered, and recovery must be run against the migrated transaction log so that in-flight transactions that were decided can complete, while those that never reached the point of decision are aborted and cleaned up. WebLogic's console provides for the administrator to migrate the transaction recovery service from a failed server to a healthy one; doing this is what completes the in-flight transactions according to the content of the migrated log file.

Conclusion
There are two observations to be made here. The first is that a complete solution to this type of requirement requires design and implementation at multiple levels - the storage hardware, DBMS, application server, and application logic all need to work together to provide support for graceful failover. The second point, which to some extent follows from the first, is that (as with all architectural design) you have a choice as to what level to implement things at. A hardware clustering solution could fail over a disk and restart all the application-level facilities, but these tend to be regarded as expensive and operate most easily in the master/hot standby mode. Databases offer replication techniques too; however, they suffer from the same laws of physics discussed above. Changes are replicated asynchronously, allowing for windows of data inconsistency, with the added problem that the code you need to write to resolve conflicting updates will be tied to the particular database engine you have - this kind of thing is way beyond what standards in the database arena specify. Also, since the replication happens behind the scenes, diagnosing problems and determining the application's desired state from a set of inconsistent logs requires extremely deep knowledge of database internals, if it's possible at all.

Finally, you can go with the application-server approach, which has the benefit of being portable across application servers, since it's based on standard programming interfaces, and also provides transparency as to what is happening with the replication since it is all "above the covers." The downside of the application-server approach, arguably, is that the transaction log is yet one more thing to manage and fail over. To that, I'd say that you get what you pay for in terms of application manageability and transparency.

In reality, any failover solution of this nature will be designed to use different capabilities provided at different levels in the architecture. The choice of what to do at what level cannot be made in a generic way since all the detailed requirements of any given application will vary so widely. What is certain is that somewhere, at some level in your solution, whatever shape it takes, is a transaction manager (or something that smells like one) holding things together.

More Stories By Peter Holditch

Peter Holditch is a senior presales engineer in the UK for Azul Systems. Prior to joining Azul he spent nine years at BEA Systems, starting as one of their first Professional Services consultants in Europe and finishing up as a principal presales engineer. He has an R&D background (originally having worked on BEA's Tuxedo product) and his technical interests are in high-throughput transaction systems. Off the pitch, Peter likes to brew beer, build furniture, and undertake other ludicrously ambitious projects - but (generally) not all at the same time!
