High Availability, Fault Tolerance and Disaster Recovery in AWS

Abbreviations used:

  • AWS - Amazon Web Services
  • AMI - Amazon Machine Image
  • DR - Disaster Recovery
  • FT - Fault Tolerance
  • HA - High Availability

Non-technical introduction.
High Availability and Fault Tolerance – the requirement that a computer application be seamlessly available to users without interruption; literally, "no (or very little) fault will be tolerated." In simple terms, this means that I am able to use a computer application even though in the background there may be outages, for example hardware failure, network congestion or maximum CPU utilization. The application is "highly available": it is available "(almost) all the time."

Disaster Recovery – what it takes for your organization to recover from a computer disaster and be operational again. A simple example: suppose a hard drive fails. Do you need a duplicate hard drive with an identical copy of the data available within microseconds? Or can your application be unavailable to end users while a new hard drive is installed and data is restored from a backup, knowing that data written between the time of the last backup and the failure of the hard drive will be lost?

Organizations such as an "online auction company" or a "major search engine" can tolerate neither downtime nor data loss. Your personal computer, though, can likely tolerate some downtime and even data loss. Sometimes DR is automatic: when the primary hardware/software fails, the secondary automatically takes over; other times DR requires manual intervention. Which you configure depends on your tolerance for downtime. When considering FT and DR, there are two important terms:

  • Recovery Time Objective (RTO): how long it takes your organization to recover from an outage.
  • Recovery Point Objective (RPO): how much data you can afford to lose.

What does Fault Tolerance, High Availability and Disaster Recovery cost?
Organizations that require almost 100% availability and little if any data loss build redundant datacenters: entire datacenters with hardware, software and identical copies of the application and data. Sometimes these extra datacenter(s) are used to offload traffic from the primary datacenter(s) during times of peak usage. For example, during the November/December shopping season or the March/April tax season, companies like an "online auction company" and an "online tax company" will offload some traffic from their primary servers to secondary servers. Other times, these extra datacenter(s) simply sit idle waiting for a disaster to occur at the primary datacenter. Hardware, software, electric power, air conditioning, building rent and physical security are all paid for "just in case" or "for when we need it." The investment in the secondary datacenter(s) is an up-front capital expenditure. Secondary datacenters are expensive for computing power that is often idle!

Introducing elastic cloud computing for Disaster Recovery, Fault Tolerance and High Availability.
What is an elastic band? A thin strip of rubber that stretches or contracts. If I need to bind three DVDs together before loaning them to a friend, the elastic band may stretch to perhaps 50% of its capacity. If I am loaning six or seven DVDs, the elastic band may stretch to maximum capacity, but either way I only need one elastic band; I don't need to buy two. This illustrates the elasticity of cloud computing: use and pay for computing services only when you need them.

In the example above, when tax season begins, the "online tax company" can request and purchase computing services from a provider such as Amazon Web Services, and then at the end of tax season cease using and paying for those computing services. Thus the "online tax company" can provide an HA tax-processing service to its customers, because its primary datacenter(s) will not be overloaded; it will automatically scale out to a cloud provider such as Amazon as needed. In addition, the extra computing services provide FT in case there is a disaster at its physical site during peak tax season.

High Availability, Fault Tolerance and Disaster Recovery in the Cloud
What about an organization that does not own a physical datacenter, but instead runs its entire operation in the cloud? How does its cloud provider deliver FT, HA and DR, and how does this compare to HA, FT and DR in physical datacenters?

What follows is a précis of two white papers from Amazon Web Services: Building Fault-Tolerant Applications on AWS and Using AWS for Disaster Recovery.

What follows is a technical overview of AWS's FT, HA and DR features compared to a physical environment, organized by type of failure or fault, contrasting the physical-server approach with the Amazon Web Services approach.

Server failure

  • Physical server: To provide fault tolerance I need a second server on standby with an identical copy of the application. I also need to redirect traffic to the second server by changing its IP address to the IP address of the primary server.
  • Amazon Web Services: I launch a new instance from the failed server's AMI template using a script, an API call or the web console, then map my Elastic IP address to the new instance. AWS CloudFormation allows me to create a collection of instances and related resources from a single template.
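
A minimal sketch of that recovery flow with boto3, the AWS SDK for Python; the region, AMI ID, instance type and Elastic IP allocation ID below are hypothetical placeholders, not values from the article:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a replacement instance from a pre-built AMI template.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]

    # Wait until the replacement is running, then re-map the existing Elastic IP
    # to it so clients keep using the same address.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    ec2.associate_address(
        InstanceId=instance_id,
        AllocationId="eipalloc-0123456789abcdef0",  # hypothetical Elastic IP allocation
    )

The same steps can equally be scripted with the AWS CLI or expressed declaratively in a CloudFormation template.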

Backups for FT

  • Physical server: Physical backups often use tape or other media for duplication, frequently stored offsite. Backup and restore can be time-consuming.
  • Amazon Web Services: AWS backups are snapshots of an AMI that can easily be restored using the command line or the web console in near-instantaneous time.
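
A sketch of that backup-and-restore cycle, again with boto3 and placeholder IDs; create_image captures an AMI (with EBS snapshots of the instance's volumes) and run_instances restores from it:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Back up: create an AMI of a running instance.
    image = ec2.create_image(
        InstanceId="i-0123456789abcdef0",     # hypothetical instance ID
        Name="web-server-backup-2013-06-01",  # hypothetical backup name
        NoReboot=True,
    )
    ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

    # Restore: launch a fresh instance from that backup image.
    ec2.run_instances(
        ImageId=image["ImageId"],
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )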

Storage

  • Physical server: If a physical server uses network-attached storage, the data is preserved if the server fails. If the server uses an internal drive, the drive is unavailable and potentially lost if the server fails.
  • Amazon Web Services: Elastic Block Store (EBS) volumes are separate from the instance and persist even if the instance does not. EBS is built on highly redundant storage with a failure rate of 0.1 to 0.5%, compared to roughly 4% for a standard hard drive.
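
A sketch of how EBS data outlives an instance: a new volume is created from an existing snapshot and attached to a replacement instance (all IDs are hypothetical):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Recreate a volume from an existing EBS snapshot, in the same AZ as the new instance.
    volume = ec2.create_volume(
        SnapshotId="snap-0123456789abcdef0",   # hypothetical snapshot ID
        AvailabilityZone="us-east-1a",
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

    # Attach the volume to a replacement instance; the data survives the original server.
    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId="i-0123456789abcdef0",      # hypothetical replacement instance
        Device="/dev/sdf",
    )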

IP addressing

  • Physical server: An IP address is bound to a physical server, and manual configuration is required to modify the IP address of a server.
  • Amazon Web Services: Elastic IP addresses are bound to an AWS account and are separate from any instance. They can be dynamically associated with, or disassociated from, an instance (as in the server-failure sketch above).

Scale/Growth for HA

  • Physical server: To extend the capacity of an application running on physical servers while maintaining high availability, I have to purchase more hardware, rack space, electrical power and cooling. When I have excess capacity, these servers sit idle, consume space and power, and waste money.
  • Amazon Web Services: To extend the capacity of an application running in AWS, I can use Auto Scaling, which automatically adds instances to my capacity based on rules. Similarly, I can terminate instances when they are no longer needed. This also allows me to refresh instances if they degrade.
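
A sketch of such a rule with boto3; the group name, launch configuration and CPU target are assumptions for illustration, not values from the article:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Keep between 2 and 10 instances running, spread across two Availability Zones.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-tier",
        LaunchConfigurationName="web-tier-launch-config",  # assumed to exist already
        MinSize=2,
        MaxSize=10,
        DesiredCapacity=2,
        AvailabilityZones=["us-east-1a", "us-east-1b"],
    )

    # Add or remove instances automatically to hold average CPU near 60%.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-tier",
        PolicyName="keep-cpu-near-60",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 60.0,
        },
    )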

Load Balancing for HA and FT

  • Physical server: A physical load balancer balances traffic across known physical servers. It can detect that a server is overloaded and direct traffic to other, less-utilized servers.
  • Amazon Web Services: Elastic Load Balancing distributes traffic across instances, detects which instances are unresponsive and redirects traffic away from them until they are restored.
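
A sketch of a load balancer with a health check, using the classic Elastic Load Balancing API in boto3 (names, ports and the instance ID are hypothetical):

    import boto3

    elb = boto3.client("elb", region_name="us-east-1")  # Classic Elastic Load Balancing

    # A load balancer listening on port 80 across two Availability Zones.
    elb.create_load_balancer(
        LoadBalancerName="web-elb",
        Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80,
                    "InstanceProtocol": "HTTP", "InstancePort": 80}],
        AvailabilityZones=["us-east-1a", "us-east-1b"],
    )

    # Health check: stop routing to an instance after 3 consecutive failed probes.
    elb.configure_health_check(
        LoadBalancerName="web-elb",
        HealthCheck={"Target": "HTTP:80/health", "Interval": 30, "Timeout": 5,
                     "UnhealthyThreshold": 3, "HealthyThreshold": 2},
    )

    # Register the instances that should receive traffic.
    elb.register_instances_with_load_balancer(
        LoadBalancerName="web-elb",
        Instances=[{"InstanceId": "i-0123456789abcdef0"}],  # hypothetical instance ID
    )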

Multiple Geographies for HA

  • Physical server: Physical servers are housed in datacenters around the world to provide HA.
  • Amazon Web Services: AWS is distributed across geographic regions, with multiple Availability Zones within each region, to provide HA.
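
The zones are discoverable programmatically, so a deployment script can spread instances across more than one Availability Zone; a minimal sketch with a placeholder AMI ID:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # List the Availability Zones in the current region.
    zones = [z["ZoneName"]
             for z in ec2.describe_availability_zones()["AvailabilityZones"]
             if z["State"] == "available"]

    # Launch one instance in each of the first two zones for redundancy.
    for zone in zones[:2]:
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
            InstanceType="t3.micro",
            MinCount=1,
            MaxCount=1,
            Placement={"AvailabilityZone": zone},
        )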

Guaranteed Failover and FT

  • Physical server: An organization that purchases its own physical servers is guaranteed failover to those servers.
  • Amazon Web Services: An AWS customer can purchase Reserved Instances that are guaranteed to the customer, regardless of what overall load AWS may experience.

For Disaster Recovery, I can maintain a physical datacenter and use AWS as needed. There are four configuration options:

  1. AWS as backup - I can back up my physical environment to AWS using AWS Direct Connect or AWS Import/Export, thus using AWS for backup.
  2. Minimal AWS - In this scenario I have core services permanently running in AWS, for example copies of my data/databases. If my physical datacenter fails, I need to start up instances (from AMIs) that contain my applications, connect them to my redundant datastores in AWS, and modify DNS settings to route traffic to AWS.
  3. Partial AWS - In this scenario I have a complete duplicate of my services permanently running in AWS, but on a minimal number of instances. If my physical datacenter fails, I scale up the configuration of my instances to cope with the increased load.
  4. Complete AWS - In this scenario I have a complete duplicated configuration of my physical datacenter in AWS. I can use the weighted Route 53 DNS service from AWS to redirect traffic, application logic to use the AWS datastores, and EC2 Auto Scaling to grow capacity within AWS (see the DNS sketch after this list).
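
For options 2 and 4, the DNS cut-over can be expressed as weighted Route 53 records; a sketch with boto3 in which the hosted zone ID, domain and IP addresses are placeholders, and the weights send most traffic to the physical datacenter until a failover shifts them:

    import boto3

    route53 = boto3.client("route53")

    # Two weighted A records for the same name: 90% of queries resolve to the
    # physical datacenter, 10% to AWS. Swapping the weights redirects traffic.
    for identifier, weight, ip in [("datacenter", 90, "203.0.113.10"),
                                   ("aws", 10, "198.51.100.20")]:
        route53.change_resource_record_sets(
            HostedZoneId="Z0123456789ABCDEFGHIJ",   # hypothetical hosted zone
            ChangeBatch={"Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com.",
                    "Type": "A",
                    "SetIdentifier": identifier,
                    "Weight": weight,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip}],
                },
            }]},
        )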

More Stories By Jonathan Gershater

Jonathan Gershater has lived and worked in Silicon Valley since 1996, primarily doing systems and sales engineering, specializing in Web applications, identity and security. At Red Hat, he provides Technical Marketing for Virtualization and Cloud. Prior to joining Red Hat, Jonathan worked at 3Com, Entrust (by acquisition), two startups, Sun Microsystems and Trend Micro.

(The views expressed in this blog are entirely mine and do not represent my employer - Jonathan).
