Lessons from the Amazon Cloud Outage

Best Practices for Resilient Cloud Applications

As reported in SYS-CON and elsewhere, Amazon's cloud crashed, taking down sites such as Reddit, Foursquare, Quora, Hootsuite, Indaba, GroupMe, Scvngr, Motherboard.tv and several others with it.

As reported, several components of the Amazon cloud portfolio were impacted, including EC2, Elastic Block Store (EBS), Relational Database Service (RDS), Elastic Beanstalk, CloudFormation and, later, Elastic MapReduce.

Amazon has given the following explanation for the crash:

"A networking event triggered a large amount of re-mirroring of EBS [Elastic Block Store] volumes ... This re-mirroring created a shortage of capacity ... which impacted new EBS volume creation as well as the pace with which we could re-mirror and recover affected EBS volumes."

While the immediate issue will be resolved, it has had a significant impact on cloud adoption by large enterprises. The traditional high-availability best practices, however, hold good for the cloud as well, and this incident should be seen as a failure of implementation rather than a failure of the cloud itself. The following best practices will guard cloud applications on top of the out-of-the-box high-availability options provided by a cloud provider like Amazon.

Ensure Application Controlled Scalability
Components like Auto Scaling, Elastic Load Balancing and CloudWatch help with scalability by monitoring resource usage and automatically allocating new instances.

However, this works best when the application itself is aware of its usage and scales accordingly.

One such implementation pattern is a Routing Server, where an application characteristic such as the type of user, the geography or the kind of transaction determines the target destination that will process the request, and the load is balanced accordingly.

Making the data-aware scaling rules configurable without restarting the servers goes a long way toward adjusting the routing to specific servers when regions or Availability Zones are down for unknown reasons. It also ensures that scalability rules can be altered dynamically in catastrophic situations, so that high-priority transactions continue to be served while low-priority transactions are put on hold.
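The routing-server idea above can be sketched as follows. This is a minimal illustration, not any AWS API: the `RoutingServer` class, the rule format and the zone names are all hypothetical.

```python
# Sketch of a routing server with dynamically reloadable rules.
# All names here (RoutingServer, zone ids, rule schema) are illustrative.

class RoutingServer:
    def __init__(self, rules):
        # rules: ordered list of (predicate, target_zone) pairs
        self.rules = rules

    def update_rules(self, rules):
        """Swap routing rules at runtime, with no server restart."""
        self.rules = rules

    def route(self, request):
        for predicate, target in self.rules:
            if predicate(request):
                return target
        return "default-zone"

# Initial rules: premium users go to zone A, EU traffic to zone B.
server = RoutingServer([
    (lambda r: r.get("tier") == "premium", "zone-a"),
    (lambda r: r.get("geo") == "EU", "zone-b"),
])
print(server.route({"tier": "premium"}))   # zone-a

# Zone A goes down: reroute premium traffic, hold low-priority requests.
server.update_rules([
    (lambda r: r.get("tier") == "premium", "zone-b"),
    (lambda r: r.get("priority") == "low", "hold-queue"),
])
print(server.route({"tier": "premium"}))   # zone-b
```

Because the rules are plain data swapped in at runtime, the same mechanism covers both cases in the text: steering around a failed Availability Zone and shedding low-priority traffic during a catastrophe.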

Stay Disconnected
Even though a typical application consists of multiple logical and physical components, it is best to decouple them so that each layer interacts with the next in an asynchronous manner.

While some applications, such as banking, stock trading and online reservations, require a real-time, always-connected nature, most applications today can still take advantage of a disconnected architecture.

Use a reliable messaging and request/response framework so that end users are never aware that their request has been queued; instead, they get the feeling that their request has been taken care of and received a satisfactory response. This ensures that even if some physical servers or logical components are down, the end user is not impacted.
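A minimal sketch of this decoupling, using the standard library's `queue.Queue` as a stand-in for a reliable broker such as SQS or RabbitMQ: the front-end layer acknowledges the user immediately, while a back-end worker drains the queue at its own pace.

```python
# Front end and back end communicate only through a queue, so either
# side can lag or restart without the user seeing a failure.
import queue
import threading

requests = queue.Queue()
responses = {}

def worker():
    """Back-end layer: processes requests whenever capacity allows."""
    while True:
        req_id, payload = requests.get()
        if req_id is None:              # sentinel: stop the worker
            break
        responses[req_id] = f"processed:{payload}"
        requests.task_done()

t = threading.Thread(target=worker)
t.start()

# Front-end layer: enqueue and acknowledge the user right away.
for i, payload in enumerate(["order-1", "order-2"]):
    requests.put((i, payload))
    print(f"request {i} accepted")      # user sees success immediately

requests.put((None, None))
t.join()
print(responses)
```

The acknowledgement is decoupled from the processing: if the worker layer were down, requests would simply accumulate in the queue and be processed on recovery.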

Keep Transactions Smaller
The best path to ensuring transparent application failover and recoverability is to keep transactions as small as possible, with each step forming a logically meaningful unit within the overall process from the end-user perspective.

Remember the legacy applications of a previous era that accepted transaction data across several fields and pages with a single Save button: if anything went wrong, the end user lost all the data and had to re-enter it. This must be avoided at all cost; systems should be designed as a combination of logically smaller steps tied together in a loosely coupled manner.
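The contrast with the single-Save-button design can be sketched as a workflow that commits each step independently, so a crash loses at most the step in progress. The step names and the in-memory store are made up for the example; a real system would persist to a database.

```python
# Sketch: a multi-page workflow saved step by step. A dict stands in
# for durable storage; each step is a small, independent transaction.

saved_steps = {}

def save_step(txn_id, step_name, data):
    """Commit one logically complete step on its own."""
    saved_steps.setdefault(txn_id, {})[step_name] = data

def resume_point(txn_id, steps):
    """After a failure, resume from the first unsaved step."""
    done = saved_steps.get(txn_id, {})
    for step in steps:
        if step not in done:
            return step
    return None  # workflow complete

steps = ["personal-info", "shipping", "payment", "confirm"]
save_step("t1", "personal-info", {"name": "Ada"})
save_step("t1", "shipping", {"zip": "10001"})
# The server dies here; on recovery the user continues at "payment"
# with the first two steps intact.
print(resume_point("t1", steps))   # payment
```

With a single monolithic save, the same crash would have discarded everything the user typed; here only the in-flight step needs to be repeated.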

VEET: The User Entered Data
In a disconnected environment, end users are not present to fix data entry errors or provide additional information. The most fault-tolerant systems are therefore designed so that the user enters minimal data and the VEET pattern (Validate, Extract, Enrich, Transform) is applied to it.

Validate: Once the transaction inputs are entered and accepted, they remain meaningful information across the system components, and no data needs to be corrected later.

Extract: Never accept information that can be derived; this avoids errors on data the system already knows.

Enrich: Derive additional information from what already exists, so that it need not be entered by the user. For example, if the user enters a zip code, the city, state and other details can be retrieved automatically.

Transform: Convert the data from one form to another as is meaningful to the system flow.

The above steps ensure that we can recover gracefully from failures in a way that is transparent to the user.
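The four VEET steps can be chained into a small pipeline. This is an illustrative sketch: the field names and the zip-code lookup table are invented for the example.

```python
# Illustrative VEET pipeline over a minimal order record.

ZIP_TABLE = {"10001": ("New York", "NY")}  # hypothetical lookup data

def validate(record):
    # Reject bad input at the door so it never propagates downstream.
    if record.get("zip") not in ZIP_TABLE:
        raise ValueError("invalid zip code")
    return record

def extract(record):
    # Drop fields the system can derive itself (e.g. a user-typed city).
    record.pop("city", None)
    record.pop("state", None)
    return record

def enrich(record):
    # Fill in derived fields from authoritative data, not user input.
    city, state = ZIP_TABLE[record["zip"]]
    record["city"], record["state"] = city, state
    return record

def transform(record):
    # Reshape into whatever form the downstream flow expects.
    return {"address": f'{record["city"]}, {record["state"]} {record["zip"]}'}

def veet(record):
    return transform(enrich(extract(validate(record))))

print(veet({"zip": "10001", "city": "NYC"}))
# {'address': 'New York, NY 10001'}
```

Note that the user-typed city ("NYC") is discarded and replaced by the derived value, which is exactly the Extract/Enrich point: derived data cannot carry user entry errors.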

Keep the Backup Data at the Lowest Granularity for Recovery
Storage mechanisms like Amazon EBS (Elastic Block Store) have built-in fault tolerance such that volumes are replicated automatically, which is a very good feature. But however much data is backed up as raw volumes, we should also think about the ability to recover quickly and get going again in case of a disaster.

Database instances typically take time to recover pending transactions or roll back unfinished ones; proper backup mechanisms can help recover from this scenario quickly.

The following options can be considered in order to recover quickly from a disaster scenario.

Alternative Write Mechanism: Log shipping, a standby database or simply mirroring the data to other Availability Zones is one of the best mechanisms to keep databases in sync and recover quickly when one zone is unavailable.

Implicit Raw Volume Backups: These are provided out of the box on most cloud platforms; however, the intelligence to quickly recover the raw volumes with automated scripts should also be in place.
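The "automated scripts" point can be sketched with the standard library, using local files to stand in for volumes and snapshots; a real script would call the cloud provider's snapshot API instead. The key ideas are the same: verify integrity at backup time, and make restore a plain copy that needs no transaction replay.

```python
# Sketch of an automated raw-volume backup with integrity verification.
# Local files stand in for volumes; names and layout are illustrative.
import hashlib
import shutil
import tempfile
from pathlib import Path

def checksum(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def snapshot(volume, backup_dir):
    """Copy the volume and verify its checksum before trusting it."""
    dest = Path(backup_dir) / (Path(volume).name + ".bak")
    shutil.copy2(volume, dest)
    if checksum(volume) != checksum(dest):
        raise IOError("backup corrupted in transit")
    return dest

def restore(backup, target):
    """Recovery is a plain copy: no pending-transaction replay needed."""
    shutil.copy2(backup, target)

tmp = Path(tempfile.mkdtemp())
vol = tmp / "volume.dat"
vol.write_bytes(b"customer data")

bak = snapshot(vol, tmp)
vol.write_bytes(b"corrupted!")      # simulate a disaster
restore(bak, vol)
print(vol.read_bytes())             # b'customer data'
```

Because the snapshot was verified when it was taken, the restore path is fast and predictable, unlike replaying a database log after a crash.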

Share Nothing
The Amazon experience makes it clear that, in spite of the best availability mechanisms adopted by the cloud provider, on rare occasions a few Availability Zones may still be struck by disaster.

In these scenarios, we want to ensure that not all our users are affected, but only a minimal number. This can be achieved by adopting the 'Shared Nothing' pattern, so that tenants are logically and physically separated within the cloud ecosystem.

This ensures that the failure of part of the infrastructure does not affect everyone in the system.
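The shared-nothing idea can be sketched as deterministic tenant-to-shard pinning: each tenant lives on exactly one shard, so losing a shard affects only the tenants pinned to it. The shard names and tenant ids are invented for the example.

```python
# Sketch of shared-nothing tenant isolation via hash-based sharding.
import hashlib

SHARDS = ["shard-a", "shard-b", "shard-c"]

def shard_for(tenant_id):
    """Deterministically pin a tenant to exactly one shard."""
    digest = hashlib.sha256(tenant_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

def affected_tenants(tenants, failed_shard):
    """Only tenants on the failed shard see an outage."""
    return [t for t in tenants if shard_for(t) == failed_shard]

tenants = [f"tenant-{i}" for i in range(9)]
down = affected_tenants(tenants, "shard-a")
print(f"{len(down)} of {len(tenants)} tenants affected")
```

Because no data or infrastructure is shared across shards, the blast radius of a zone failure is bounded by the tenants on that shard rather than the whole user base.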

Summary
The Amazon cloud outage is a wake-up call about how the cloud should be utilized. There is no automatic switch that provides all the fault tolerance a system needs. The event has, however, reinforced the strong fundamental principles with which applications need to be built in order to be resilient. The incident cannot be seen as a failure of the cloud platform itself, and there is plenty of room for improvement to avoid such situations in the future.

More Stories By Srinivasan Sundara Rajan

Highly passionate about utilizing digital technologies to enable the next-generation enterprise. Believes in enterprise transformation through the Natives (Cloud Native & Mobile Native).
