Hi, I’m Brad Feld, a managing director at the Foundry Group who lives in Boulder, Colorado. I invest in software and Internet companies around the US, run marathons and read a lot.

Learning the Right Lessons from the Amazon Outage

Comments (15)

As most nerds know, Skynet gained self-awareness last week and decided as its first act to mess with Amazon Web Services, creating havoc for anyone who wanted to check in to their current physical location on the Internet. In hindsight, Skynet eventually figured out this was a bad call on its part, as it actually wants to know where every human is at any given time. However, Skynet is still trying to get broader adoption of Xbox Live machines, so the Sony PlayStation Network appears to still be down.

After all the obvious “oh my god, AWS is down” articles, followed by the “see – I told you the cloud wouldn’t work” articles, some thoughtful analysis and suggestions have started to appear. Over the weekend, Dave Jilk, the CEO of Standing Cloud (I’m on the board), asked if I was going to write something about this and – if not – did I want him to write a guest post for me. Since I’d used my weekend excess of creative energy building a Thing-O-Matic 3D Printer in an effort to show the machines that I come in peace, I quickly took him up on his offer.

Following are Dave’s thoughts on learning the right lessons from the Amazon outage.

Much has already been written about the recent Amazon Web Services outage that has caused problems for a few high-profile companies. Nevertheless, at Standing Cloud we live and breathe the infrastructure-as-a-service (IaaS) world every day, so I thought I might have something useful to add to the discussion.  In particular, some media and naysayers are emphasizing the wrong lessons to be learned from this incident.

Wrong lesson #1: The infrastructure cloud is either not ready for prime time, or never will be.

Those who say this simply do not understand what the infrastructure cloud is. At bottom, it is just a way to provision virtual servers in a data center without human involvement. It is not news to anyone who uses them that virtual servers are individually less reliable than physical servers; furthermore, those virtual servers run on physical servers inside a physical data center. All physical data centers have glitches and downtime, and this is not the first time Amazon has had an outage, although it is the most severe.

What is true is that the infrastructure cloud is not and never will be ready to be used exactly like a traditional physical data center that is under your control. But that is obvious after a moment’s reflection. So when you see someone claiming that the Amazon outage shows that the cloud is not ready, they are just waving an ignorance flag.

Wrong lesson #2: Amazon is not to be trusted.

On the contrary, the AWS cloud has been highly reliable on the whole. They take downtime seriously and given the volume of usage and the amount of time they have been running it (since 2006), it is not surprising that they would eventually have a major outage of some sort. Enterprises have data center downtime, and back in the day when startups had to build their own, so did they. Some data centers are run better than others, but they all have outages.

Of more concern are rumors I have heard that Amazon does not actually use AWS for Amazon.com. That doesn’t affect the quality of their cloud product directly, but given that they have lured customers with the claim that they do use it, this does impact our trust in their marketing integrity. Presumably we will eventually find out the truth on that score. In any case, this issue is not related to the outage itself.

Having put the wrong lessons to rest, here are some positive lessons that put the nature of this outage into perspective, and help you take advantage of IaaS in the right way and at the right time.

Right lesson #1: Amazon is not infallible, and the cloud is not magic.

This is just the flip side of the “wrong lessons” discussed above. If you thought that Amazon would have 100% uptime, or that the infrastructure cloud somehow eliminates concerns about downtime, then you need to look closer at what it really is and how it works. It’s just a way to deploy somewhat less reliable servers, quickly and without human intervention. That’s all.  Amazon (and other providers) will have more outages, and cloud servers will fail both individually and en masse.

Your application and deployment architecture may not be ready for this. However, I would claim that if it is not, you are assuming that your own data center operators are infallible. The architectural changes required to accommodate the public IaaS cloud are a good idea even if you never move the application there. That’s why smart enterprises have been virtualizing their infrastructure, building private clouds, and migrating their applications to operate in that environment. It’s not just a more efficient use of hardware resources; it also increases the resiliency of the application.

Right lesson #2: Amazon is not the only IaaS provider, and your application should be able to run on more than one.

This requires a bias alert: cloud portability is one of the things Standing Cloud enables for the applications it manages. If you build/deploy/manage an application using our system, it will be able to run on many different cloud providers, and you can move it easily and quickly.

We built this capability, though, because we believed that it was important for risk mitigation. As I have already pointed out, no data center is infallible and outages are inevitable. Further, it is not enough to have access to multiple data centers – the Amazon outage, though focused on one data center, created cascading effects (due to volume) in its other data centers. This, too, was predictable.

Given the inevitability of outages, how can one avoid downtime? My answer is that an application should be able to run on more than one public cloud service, and ideally on many. This answer has several implications:

  • You should avoid reliance on unique features of a particular IaaS provider if they affect your application architecture. Amazon has built a number of features that other providers do not have, and if you are committed to Amazon they make it very easy to be “locked in.” There are two ways to handle this: first, use a least-common-denominator approach (see the sketch after this list); second, find a substitution for each such feature on a “secondary” service.
  • Your system deployment must be automated. If it is not automated, it is likely that it will take you longer to re-deploy the application (either in a different data center or on a different cloud service) than it will take for the provider to bring their service back up. As we have seen, that can take days. I discuss automation more below.
  • Your data store must be accessible from outside your primary cloud provider. This is the most difficult problem, and how you accomplish it depends greatly on the nature of your data store. However, backups and redundancy are the key considerations (as usual!). All data must be in more than one place, and you need to have a way to fail over gracefully. As the Amazon outage has shown, a “highly reliable” system like their EBS (Elastic Block Storage) is still not reliable enough to avoid downtime.
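
To make the least-common-denominator approach from the first bullet concrete, here is a minimal sketch in Python. Everything in it (the CloudProvider interface, the Ec2Provider and RackspaceProvider adapters, and launch_web_tier) is a hypothetical illustration rather than Standing Cloud’s actual code; the point is simply that deployment code talks to a small, portable interface instead of any single provider’s richer API.

```python
# Hypothetical sketch of a least-common-denominator provisioning layer.
# None of these names correspond to a real library or to Standing Cloud's code.
import itertools


class CloudProvider:
    """The small set of operations nearly every IaaS provider supports."""

    def create_server(self, image, size):
        raise NotImplementedError

    def destroy_server(self, server_id):
        raise NotImplementedError


class Ec2Provider(CloudProvider):
    """Adapter for the primary provider; a real version would wrap the EC2 API."""
    _ids = itertools.count(1)

    def create_server(self, image, size):
        return "ec2-%d" % next(self._ids)   # stand-in for a real API call

    def destroy_server(self, server_id):
        pass                                # stand-in for a real API call


class RackspaceProvider(CloudProvider):
    """Adapter for a secondary provider, kept ready for failover or migration."""
    _ids = itertools.count(1)

    def create_server(self, image, size):
        return "rax-%d" % next(self._ids)

    def destroy_server(self, server_id):
        pass


def launch_web_tier(provider, count):
    """Deployment code depends only on the common interface, so moving to a
    secondary provider is a one-line change at the call site."""
    return [provider.create_server(image="ubuntu-10.04", size="small")
            for _ in range(count)]


if __name__ == "__main__":
    primary = launch_web_tier(Ec2Provider(), count=3)
    backup = launch_web_tier(RackspaceProvider(), count=3)
    print(primary, backup)
```

Provider-specific conveniences (a managed load balancer, for example) would then sit behind their own small interface, with a substitute implementation chosen for the secondary service.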

Right lesson #3: Cloud deployments must be automated and should take cloud server reliability characteristics into account.

Even though I have seen it many times, I am still taken aback when I talk to a startup that has used Amazon just like a traditional data center using traditional methods.  Their sysadmins go into the Amazon console, fire up some servers, manually configure the deployment architecture (often using Amazon features that save them time but lock them in), and hope for the best.  Oh, they might burn an AMI and save it on S3, in case the server dies (which only works as long as nothing changes).  If they need to scale up, they manually add another server and manually add it to the load balancer queue.

This type of usage treats IaaS as mostly a financing alternative. It’s a way to avoid buying capital equipment and to conserve financial resources when you do not know how much computing infrastructure you will need. Even the fact that you can change your infrastructure resources rapidly really just boils down to not having to buy and provision those resources in advance. This benefit is a big one for capital-efficient lean startups, but on the whole the approach is risky and suboptimal. The Amazon outage illustrates this: companies that used this approach were stuck during the outage, but at another level they are still stuck with Amazon because their server configurations are implicit.

Instead, the best practice for deploying applications – in the cloud, but really anywhere – is to automate the deployment process. There should be no manual steps in the deployment. Although this can be done using scripts, even better is to use a tool like Chef, Puppet, or CFEngine to take advantage of abstractions in the process. Or use RightScale, Kaavo, CA AppLogic, or similar tools to not only automate but also organize your deployment process. If your application uses a standard N-tier architecture, you can potentially use Standing Cloud without having to build any automation scripts at all.
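
As a rough illustration of what “no manual steps” means, here is a bare-bones deployment script in Python. The hostname, repository URL, package names, and paths are invented for the example, and a real deployment would more likely be expressed as Chef or Puppet recipes; the point is that every step a sysadmin would otherwise perform by hand is captured in code and can be replayed against a fresh server in any data center.

```python
# Hypothetical deployment script: every configuration step lives in code.
# Hostname, repo URL, packages, and paths below are placeholders.
import subprocess

DEPLOY_STEPS = [
    "apt-get update && apt-get install -y nginx python-virtualenv",
    "git clone https://example.com/myapp.git /srv/myapp || (cd /srv/myapp && git pull)",
    "virtualenv /srv/myapp/env && /srv/myapp/env/bin/pip install -r /srv/myapp/requirements.txt",
    "cp /srv/myapp/deploy/nginx.conf /etc/nginx/sites-enabled/myapp",
    "service nginx restart",
]


def deploy(host):
    """Run every deployment step on the target host over SSH; nothing is done by hand."""
    for step in DEPLOY_STEPS:
        subprocess.check_call(["ssh", "root@" + host, step])


if __name__ == "__main__":
    # The same script works whether the host is an EC2 instance, a server at a
    # secondary provider, or a machine in your own rack.
    deploy("203.0.113.10")  # placeholder address of a freshly provisioned server
```

Because the script is the only way servers get configured, it also serves as documentation of the deployment, which is the maintainability benefit discussed below.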

Automating an application deployment in the cloud is a best practice with numerous benefits, including:

  • Free redundancy. Instead of having an idle redundant data center (whether cloud or otherwise), you can simply re-deploy your application in another data center or cloud service using on-demand resources.  Some of the resources (e.g., a replicated data store) might need to be available at all times, but most of the deployment can be fired up only when it is needed.
  • Rapid scalability. In theory you can get this using Amazon’s auto-scaling features, Elastic Beanstalk, and the like.  But these require access to AMIs that are stored on S3 or EBS.  We’ve learned our lesson about that, right? Instead, build a general purpose scalability approach that takes advantage of the on-demand resources but keeps it under your control.
  • Server failover can be treated just like scalability (sketched after this list). Virtual servers fail more frequently than physical servers, and when they do, there is less ability to recover them. Consequently, a good automation procedure treats scalability and failover the same way – just bring up a new server.
  • Maintainability. A server configuration that is created manually and saved to a “golden image” has numerous problems. Only the person who built it knows what is there, and if that person leaves or goes on vacation, it can be very time consuming to reverse-engineer it. Even that person will eventually forget, and if there are several generations of manual configuration changes (boot the golden image, start making changes, create a new golden image), possibly by different people, you are now locked into that image. All these issues become apparent when you need to upgrade O/S versions or change to a new O/S distribution. In contrast, a fully automated deployment is not only a functional process with the benefits mentioned above, it also serves as documentation.
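
To illustrate the point about treating failover like scalability (third bullet above), here is a small hypothetical sketch: one watchdog loop that both replaces failed servers and adds capacity through the same provision() call. The health check, the provisioning stub, and the polling interval are placeholders, not a description of how Amazon, BigDoor, or Standing Cloud actually operate.

```python
# Hypothetical sketch: failure recovery and scale-up are the same operation.
import socket
import time


def healthy(address):
    """Consider a server healthy if it accepts connections on port 80."""
    try:
        socket.create_connection((address, 80), timeout=5).close()
        return True
    except OSError:
        return False


def provision():
    """Bring up and configure a brand-new server from scratch (for example by
    running the deployment script sketched earlier) and return its address."""
    return "203.0.113.99"  # placeholder for a real cloud API call plus deploy()


def watchdog(servers, desired_count):
    """A failed server and a need for more capacity are handled identically:
    just bring up a new server."""
    while True:
        servers = [s for s in servers if healthy(s)]   # drop failed servers
        while len(servers) < desired_count:            # replace or scale up
            servers.append(provision())
        time.sleep(30)
```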

In summary, let the Amazon Web Services outage be a wake-up call… not to fear the IaaS cloud, but to know it, use it properly, and take advantage of its full possibilities.

  • http://www.facebook.com/people/Christopher-Knorr/100000281893641 Christopher Knorr

    Good post.

    Some of BigDoor’s MiniBar components seemed impacted as well, but not everything. I thought that was interesting, or maybe just a coincidence. I knew you would follow up on the Amazon outage, but any comments on what BigDoor had wrong or right to end up with a partially working MiniBar, or was it just coincidence?

    • DaveJ

      Chris, I’d have to look at what you’re doing specifically to understand that. A data center outage is a bit like brain damage – it’s very difficult to predict the specific cognitive deficits that will result. Send me a note if you’d like to discuss.

      • http://www.facebook.com/people/Christopher-Knorr/100000281893641 Christopher Knorr

        I am not using it on my site. I was just noting what seemed to be strange behaviors on the feld.com BigDoor MiniBar during the Amazon outage. I agree that it did act rather like brain damage, with a strange set of issues. I guess that is what you get with components glued together in ways that are semi-fault tolerant. It keeps functioning, but with degraded and sometimes unexpected failures – still kinda working. I wonder if this is a new normal: apps are not simply down but degraded due to partial failures in the network of distributed infrastructure.

        • http://www.feld.com bfeld

          Most well-architected components have a clear failover behavior. In BigDoor’s case, if the MiniBar isn’t available it just shouldn’t appear, nor should it impact the site. However, if some of the pieces of the MiniBar are available, I imagine there could be weird behavior. Can you explain more about what you saw that was weird while it was “sort of working” so we can drill down?

          • http://www.facebook.com/people/Christopher-Knorr/100000281893641 Christopher Knorr

            Let me first state that I love the MiniBar – I think it is a very cool concept and I have been actively engaging with your site more because of it. During the outage the MiniBar would show up on some page loads and not on others. It may have auto-checked-in once and I didn’t see the XP later, or it may be that the auto check-in just never succeeded; I honestly can’t recall. The loading time in some cases took longer. The variation in auto check-in is a perceptual flag for load issues, as it is very noticeable. The variance I see in the auto check-in firing is pretty high and was higher during the outage. The leaderboard didn’t load at least once. I will note that just about all of these things have happened to me outside of the AWS outage; the occurrence rate simply spiked during the outage. That made me curious as to the architecture that resulted in degraded and varying behavior instead of simply not showing up during the whole outage.

    • http://www.feld.com bfeld

      BigDoor was very open about this issue with their publishers while it was going on and then put up this post after things settled down explaining what had happened.

      http://www.bigdoor.com/blog/all-systems-go/

  • http://www.rassoc.com/gregr/weblog greinacker

    Good post. On your wrong lesson #2, that Amazon is not to be trusted… I agree with you. However, it certainly wouldn’t hurt for them to be a little more transparent about how their availability zones within a region are constructed, how they relate to each other, and how they deal with a complete failure of one AZ… hopefully something positive will come out of the post-mortem they do.

    • Dave Jilk

      Yeah, I have subsequently heard that they probably pitched the benefits of multiple availability zones harder than was warranted. Given my other comment about whether or not they use AWS for Amazon.com, I think the point should be “trust their data centers like you would any other data center, but their marketing, not so much.”

      • http://www.rassoc.com/gregr/weblog greinacker

        Well put. :-)

  • http://sco.tt Scott Yates

    I think you, me and Brad are the only ones who really wondered out loud about the nexus of AWS and Amazon.com. (I did it here: http://www.sco.tt/scott_yates/2011/04/big-day-for-the-future-of-water-and-for-blogmutt.html)

    As I wrote over there, I’m just glad Quora didn’t come back from the outage asking for the location of Sarah Connor.

  • http://www.softlayer.com Paul Ford

    The unfortunate AWS “incident” is just a big bump in the long road of technology. The cloud is not for everyone, and it’s not for everything – but it is an amazing evolution in computing, and it’s here to stay. The big lesson for many of the people who were affected (in my opinion, at least) is “Don’t put all of your eggs in one basket.” Applications and infrastructures, particularly in the cloud, should be designed to mitigate risk by redundancy across providers, among other things…

    Events like this definitely leave a mark on all of us IaaS/cloud providers. So, let’s take it for what it really was – part of the growing pains of living on the cutting edge. I commend Werner and the team at Amazon for their hard work and dedication to getting things right again.

    Thanks for the great post, Dave – and thanks for spreading the good word, Brad.

    Paul Ford
    SoftLayer Technologies

    • http://thesis911.com/ thesis

      I agree with you! I think it is quite difficult to learn something from mistakes… not just to say it, but to really learn and not repeat it…

  • http://twitter.com/DHendersonCO David Henderson

    Great points… totally agree.

    But…

    I’ve looked at clouds from both sides now
    From up and down, and still somehow
    It’s cloud illusions I recall
    I really don’t know clouds at all.

    Sorry… couldn’t resist.. :-)

