Cloud Computing: The SmugMug Approach to Using Amazon's EC2 and S3

SkyNet Lives! (aka EC2 @ SmugMug)

Don MacAskill's Blog

Everyone knows that SmugMug is a heavy user of S3, storing well over half a petabyte of data (non-replicated) there. What you may not know is that EC2 provides a core part of our infrastructure, too. Thanks to Amazon, the software and hardware that processes all of your high-resolution photos and high-definition video is totally scalable without any human intervention. And when I say scalable, I mean both up and down, just the way it should be. Here's our approach in a nutshell...

Overview

The architecture basically consists of three software components: the rendering workers, the batch queuing piece, and the controller. The rendering workers live on EC2, and both the queuing piece and the controller live at SmugMug. We don’t use SQS for our queuing mechanism for a few reasons:

  • We’d already built a queuing mechanism years ago, and it hasn’t (yet?) hit any performance or reliability bottlenecks.
  • SQS’s pricing used to be outta whack for what we needed. They’ve since dramatically lowered the pricing and it’s now much more in line with what we’d expect - but by then, we were done.
  • The controller consumes historical data to make smart decisions, and our existing queuing system was slightly easier to generate the historical data from.

Render Workers

Our render workers are totally “dumb”. They’re literally bare-bones CentOS 5 AMIs (you can build your own, or use RightScale’s, or whatever you’d like) with a single extra script on them which is executed from /etc/rc.d/rc.local. What does that script do? It fetches intelligence.

When that script executes, it sends an authenticated request to get a software bundle, extracts the bundle, and starts the software inside. That’s it. Further, the software inside the bundle is self-aware and self-updating, too, automatically fetching updated software, terminating older versions, and relaunching itself. This makes it super-simple to push out new SmugMug software releases - no bundling up new AMIs and testing them or anything else that’s messy. Simply update the software bundle on our servers and all of the render workers automatically get the new release within seconds.
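Here’s a minimal sketch of the idea (in Python, for illustration). The bundle URL, token file, and entry-point name are hypothetical stand-ins, but the shape is the same: authenticated fetch, extract, exec.

    #!/usr/bin/env python3
    """Sketch of a render-worker bootstrap, run once from /etc/rc.d/rc.local.
    The URL, token file, and entry-point name are hypothetical stand-ins."""
    import os
    import subprocess
    import tarfile
    import urllib.request

    BUNDLE_URL = "https://deploy.example.com/render-worker/latest.tar.gz"  # hypothetical
    TOKEN = open("/etc/worker-token").read().strip()                       # hypothetical auth
    WORK_DIR = "/opt/render-worker"

    def main():
        # Authenticated request for the current software bundle.
        req = urllib.request.Request(BUNDLE_URL,
                                     headers={"Authorization": "Bearer " + TOKEN})
        with urllib.request.urlopen(req) as resp, open("/tmp/bundle.tar.gz", "wb") as out:
            out.write(resp.read())

        # Extract the bundle and start the software inside. From here on the
        # software is self-aware and self-updating, so this script never runs again.
        os.makedirs(WORK_DIR, exist_ok=True)
        with tarfile.open("/tmp/bundle.tar.gz") as tar:
            tar.extractall(WORK_DIR)
        subprocess.Popen([os.path.join(WORK_DIR, "run-worker")])

    if __name__ == "__main__":
        main()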

Of course, worker instances might have different roles or be assigned to different SmugMug clusters (test vs production, for example), so we have to be able to give them instructions at launch. We do this through the “user-data” launch parameter you can specify for EC2 instances - it gives the software all the details needed to choose a role, get software, and launch it. Reading the user-data couldn’t be easier. If you haven’t done it before, just fetch http://169.254.169.254/latest/user-data from your running instance and parse it.
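For example (the metadata address is standard EC2; treating our user-data as JSON is just an assumption for illustration):

    import json
    import urllib.request

    # The instance metadata service lives at a fixed address on every EC2 instance.
    USER_DATA_URL = "http://169.254.169.254/latest/user-data"

    with urllib.request.urlopen(USER_DATA_URL, timeout=2) as resp:
        user_data = resp.read().decode("utf-8")

    # Hypothetical layout: role, cluster, and bundle location passed as JSON at launch.
    params = json.loads(user_data)
    role = params["role"]            # e.g. "render"
    cluster = params["cluster"]      # e.g. "test" vs "production"
    bundle_url = params["bundle_url"]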

Once they’re up and running, they simply ping the queue service with a “Hi, I’m looking for work. Do you have any?” request, and the queue service either supplies them with work or gives them some other directive (shutdown, software update, take a short nap, etc). Once a job is done (or has generated an error), the worker stores the result on S3, notifies the queue service that the job is finished, and asks for more work. Simple.
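In rough Python, the worker’s main loop looks something like this; the queue endpoints and message fields are hypothetical, since our queue service’s API isn’t public:

    import json
    import time
    import urllib.request

    QUEUE_URL = "https://queue.example.com"  # hypothetical

    def call(path, payload=None):
        """POST a JSON message to the queue service, return the JSON reply."""
        req = urllib.request.Request(QUEUE_URL + path,
                                     data=json.dumps(payload or {}).encode(),
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    def process_job(job):
        """Placeholder: do the actual render work and put the result on S3."""
        return "s3://results/" + str(job["id"])

    def work_loop():
        while True:
            reply = call("/get-work")            # "Hi, I'm looking for work."
            directive = reply.get("directive")
            if directive == "job":
                result = process_job(reply["job"])
                call("/job-done", {"id": reply["job"]["id"], "result": result})
            elif directive == "shutdown":
                break
            else:                                # software update, short nap, etc.
                time.sleep(reply.get("nap_seconds", 5))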

Queue Service

This is your basic queuing service, probably very similar to any other queuing service you’ve seen before. Ours supports job types (new upload, rotate, watermark, etc) and priorities (Pros go to the head of the line, etc), as well as other details. Upon completion, it logs historical data such as time to completion, and it supports time-based re-queueing in the event of a worker outage, miscommunication, error, or whatever. I haven’t taken a really hard look at SQS in quite some time, but I can’t imagine it would be very difficult to implement this on SQS for those of you starting fresh.
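Here’s a toy sketch of those semantics - typed jobs, priorities, and time-based re-queueing. The 10-minute re-queue deadline is an assumed value, not our real one:

    import heapq
    import time

    class JobQueue:
        """Toy queue: typed jobs, priorities, time-based re-queueing."""

        REQUEUE_AFTER = 600  # seconds before an unfinished job is handed out again (assumed)

        def __init__(self):
            self._heap = []        # (priority, seq, job) - lower priority number = sooner
            self._in_flight = {}   # job_id -> (job, priority, deadline)
            self._seq = 0

        def add(self, job_id, job_type, priority):
            self._seq += 1
            heapq.heappush(self._heap, (priority, self._seq, {"id": job_id, "type": job_type}))

        def get_work(self):
            self._requeue_stale()
            if not self._heap:
                return None
            priority, _, job = heapq.heappop(self._heap)
            self._in_flight[job["id"]] = (job, priority, time.time() + self.REQUEUE_AFTER)
            return job

        def mark_done(self, job_id):
            # Also the place to log time-to-completion for the historical data.
            self._in_flight.pop(job_id, None)

        def _requeue_stale(self):
            # Worker outage, miscommunication, error, or whatever: hand the job out again.
            now = time.time()
            for job_id, (job, priority, deadline) in list(self._in_flight.items()):
                if deadline < now:
                    del self._in_flight[job_id]
                    self._seq += 1
                    heapq.heappush(self._heap, (priority, self._seq, job))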

Controller (aka SkyNet)

For me, this was the fun part. Initially we called it RubberBand, but we had an unusual partial outage one day which caused it to go berserk and launch ~250 XL instances (~2000 normal EC2 instances) in a single call. Clearly, it had gained sentience and was trying to take over the world, so we renamed it SkyNet. (We’ve since corrected the problem and given SkyNet more reasonable thresholds and limits. And yes, I caught it within the hour.)

SkyNet is completely autonomous - it operates with zero human interaction, either watching or providing interactive guidance. No one at SmugMug even pays attention to it anymore (and we haven’t for many months) since it operates so efficiently. (Yes, I realize that means it’s probably well on its way to world domination. Sorry in advance to everyone killed in the forthcoming man-machine war.)

Roughly once per minute, SkyNet makes an EC2 decision: launch instance(s), terminate instance(s), or sleep. It has a lot of inputs - it checks anywhere from 30-50 pieces of data to make an informed decision. One of the reasons for that is we have a variety of different jobs coming in, some of which (uploads) are semi-predictable. We know that lots of uploads come in every Sunday evening, for example, so we can begin our prediction model there. Other jobs, though, such as watermarking an entire gallery of 10,000 photos with a single click, aren’t predictable in a useful way, and we can only respond once the load hits the queue.

A few of the data points SkyNet looks at are (see the sketch after this list):

  • How many jobs are pending?
  • What’s the priority of the jobs?
  • What type of jobs are they?
  • How complex are the pending jobs? (ex: HD video vs 1Mpix photo)
  • How time-sensitive are the pending jobs? (ex: Uploads vs rotations)
  • Current load of the EC2 cluster
  • Current # of jobs per sample processed
  • Average time per job per sample
  • Historical load and job performance
  • How close any instances are to the end of their 1-hour cost window
  • Recent SkyNet actions (start/terminate/etc)

... and the list goes on.
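Boiled way down, one SkyNet tick might look like the sketch below. Every field, weight, and threshold is illustrative (the real thing checks 30-50 inputs), but it shows the core moves: weigh demand by complexity and urgency, count capacity that’s still booting, aim for ~25% slack, and only terminate idle instances near the end of their billing hour.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Job:
        cores_needed: float   # complexity proxy (HD video >> 1Mpix photo)
        urgency: float        # time-sensitivity (uploads > rotations)

    @dataclass
    class Fleet:
        running_cores: float = 0.0
        pending_cores: float = 0.0    # launched but still booting (startup lag)
        idle_near_hour_end: List[str] = field(default_factory=list)  # instance ids

    SLACK = 1.25  # aim for ~25% excess capacity, averaged over the day

    def decide(pending_jobs: List[Job], fleet: Fleet):
        """One SkyNet tick (~once per minute): launch, terminate, or sleep."""
        demand = sum(j.cores_needed * j.urgency for j in pending_jobs)
        capacity = fleet.running_cores + fleet.pending_cores  # count in-flight launches
        target = demand * SLACK

        if capacity < target:
            return ("launch", target - capacity)          # cores to add
        if capacity > target and fleet.idle_near_hour_end:
            # Only kill instances approaching the end of their 1-hour cost window.
            return ("terminate", fleet.idle_near_hour_end)
        return ("sleep", None)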

Our goal is to keep enough slack around to handle surges of unpredictable batch operations, but not so much that it drains our bank account. We’ve settled on roughly 25% excess compute capacity when averaged over a full 24-hour period, and SkyNet keeps us remarkably close to that number. We always err on the side of more excess (so we get faster processing times) rather than less when we have to make a decision. It’s great to save a few bucks here and there that we can plow back into better customer service or a new feature - but not if photo uploads aren’t processing, consistently, within 5-30 seconds of upload.


Our workers like lots of threads, so SkyNet does its best to launch c1.xlarge instances (Amazon calls these “High-CPU Instances”), but is smart enough to request other, equivalent instance sizes (2 x Large, 8 x Small, etc) in the event it can’t allocate as many c1.xlarge instances as it would like. Our application doesn’t care how big or small the instances are, just that we get lots of CPU cores in aggregate. (We were in the Beta for the High-CPU feature, so we’ve been using it for months.)
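The fallback is roughly a greedy fill. The per-type core weights below just follow the ratios above (one c1.xlarge ~ two Large ~ eight Small); the exact weights and the launch mechanics are assumptions:

    CORES = {"c1.xlarge": 8, "m1.large": 4, "m1.small": 1}   # assumed weights
    PREFERENCE = ["c1.xlarge", "m1.large", "m1.small"]

    def plan_launch(cores_wanted, available):
        """Fill the request with the biggest instances EC2 will actually give us.
        `available` maps instance type -> how many we can currently allocate."""
        plan, remaining = {}, cores_wanted
        for itype in PREFERENCE:
            if remaining <= 0:
                break
            n = min(available.get(itype, 0), -(-remaining // CORES[itype]))  # ceil div
            if n:
                plan[itype] = n
                remaining -= n * CORES[itype]
        return plan

    # plan_launch(40, {"c1.xlarge": 3, "m1.large": 10})
    # -> {"c1.xlarge": 3, "m1.large": 4}   (3*8 + 4*4 = 40 cores)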

One interesting thing we had to take into account when writing SkyNet was the EC2 startup lag. Don’t get me wrong - I think EC2 starts up reasonably fast (~5 mins max, usually less) - but when SkyNet is making a decision every minute, you could launch too many instances if you don’t take recent actions into account to cover the startup lag (and, conversely, you need to start instances a little earlier than you might otherwise need them, or you fall behind).

The Money

SmugMug is a profitable business, and we like to keep it that way. The secrets to efficiently using EC2, at least in our use case, are as follows:

  • Take advantage of the free S3 transfers. This is a biggie. Our workers get and put almost all of their bytes to/from S3.
  • Make sure you have scaling down working as well as scaling up. At 3am on an average Wednesday morning, we have very few instances running.
  • Use the new High-CPU Instances. Twice the CPU resources for the same $$ if you don’t need RAM.
  • Amazon kindly gives you 30 days to monetize your AWS expenses. Use those 30 days wisely - generate revenues.

Why No Web Servers?

I get asked this question a lot, and it really comes down to two issues, one major and one minor:

  • No complete DB solution. SimpleDB is interesting, and the new EC2 Persistent Storage is too, but neither provides a complete solution for us. EC2 storage isn’t performant enough without some serious, painful partitioning to a finer grain than we use now - which comes with its own set of challenges - and SimpleDB both isn’t performant enough and doesn’t address all of our use cases. Since latency to our DBs matters a great deal to our web servers, this is a deal-killer - I can’t have EC2 web servers talking to DBs in my datacenters. (There are a few corner cases we’re exploring where we probably can, but they’re the exception, not the rule.)
  • No load balancing API. They’ve got an IP address solution in the form of Elastic IPs, which is awesome and a major step forward, but they don’t have a simple Load Balancer API that I can throw my web boxes behind. Yes, I realize I can do it manually using EC2 instances, but that’s more fragile and difficult (and has unknown scaling properties at our scale). If the DB issue were solved, I’d probably dig in and figure out how to do it ourselves - but since it’s not, I can keep asking for this in the meantime.

Let me be very clear here: I really don’t want to operate datacenters anymore, despite the fact that we’re pretty good at it. It’s a necessary evil because we’re an Internet company, but our mission is to be the best photo sharing site. We’d rather spend our time giving our customers great service and writing great software than managing physical hardware. I’d rather have my awesome Ops team interacting with software remotely for 100% of their duties (and mostly just watching software like SkyNet do its thing). We’ll get there - I’m confident of that - we’re just not there yet.

Until then, we’ll stick with a hybrid approach.

More Stories By Don MacAskill

Don MacAskill is CEO and Chief Geek of SmugMug.

