This is a post by Dave Jilk, a long time friend, business partner, and CEO of Standing Cloud. While the words are his, I agree 100% with everything he is saying here. I continue to be stunned and amazed by both the behavior of our government around this and the behavior of “us” (companies and individuals) around their data given our government’s behavior. But Dave’s point is not only around the actions of government, but the broader risks that exist in the context of multi-tenant services that I don’t think we are spending enough time thinking or talking about.
While I was in Iceland a few weeks ago, there was a set of discussions driven by Brad Burnham of Union Square Ventures about trying to make Iceland an “Internet Neutrality Zone,” similar to Switzerland and banking. While I have no idea if this is feasible, the need for it seems to be increasing on a regular basis.
I encourage you to read Dave’s post below carefully. While neither of us are endorsing or defending Megaupload, it’s pretty clear that the second order impact and unintended consequences around situations like the government takedown of it have wide ranging consequences for all of us. And – it’s not just the government, but mother nature and humans.
Suppose you live in an apartment building, and one day the federal government swoops in and takes control over the building, preventing you from entering or retrieving any of your belongings. They allege that the landlord was guilty of running a child prostitution ring in the building and, while you are not accused of any crime, they will not give you access to your property. They suggest that you sue the landlord to get your property back, even though the landlord no longer controls the property.
This seems like a fairly obvious violation of your rights, and it is unlikely that the government would be able to maintain this position for long. Yet this is exactly what it is doing in the Megaupload case, and in relation to the rather lesser crime of copyright infringement. Somehow – perhaps because of the pernicious influence of large media companies on the government’s activities – rights to your digital data are taking a backseat to any and all attempts to enforce the copyright laws. This is what the online community was trying to prevent with its opposition to SOPA/PIPA, and the government seems to have elected to implement a de facto SOPA by simply trampling on the Constitution.
While I could rant further about the government’s egregious behavior, let’s talk about the practical implications of this situation. The primary implication is that there is a new risk to your data and your operations when you use multi-tenant online services. Such risks have always existed: if you do not have both an offsite backup of your data and a way to use that backup then any number of black swan events could disrupt your operations in dramatic ways. Earthquakes, wars, power brownouts, asteroids, human errors, cascading network failures – yes, it reads like the local evening news, and though any one situation is unlikely, the aggregate likelihood that something can go wrong is high enough that you need to consider it and deal with it.
What this particular case illustrates is that a company that provides your online service is a single point of failure. In other words, simply offering multiple data centers, or replicating data in multiple locations, does not mitigate all the risks, because there are risks that affect entire companies. I have never believed that “availability zones” or other such intra-provider approaches completely mitigate risk, and the infamous Amazon Web Services outage of Spring 2011 demonstrated that quite clearly (i.e., cascading effects crossed their availability zones). The Megaupload situation is an example of a non-technical company-wide effect. Other non-technical company-wide effects might be illiquidity, acquisition by one of your competitors, or changes in strategy that do not include the service you use.
So again, while this is a striking and unfortunate illustration, the risk it poses is not fundamentally new. You need to have an offsite backup of your data and a way to use that backup. The situation where the failure to do this is most prevalent is in multi-tenant, shared-everything SaaS, such as Salesforce.com and NetSuite. While these are honorable companies unlikely to be involved in federal data confiscations, they are still subject to all the other risks, including company-wide risks. With these services, off-site backups are awkward at best, and more importantly, there is no software available to which you could restore the backup and run it. In essence, you would have to engage in a data conversion project to move to a new provider, and this could take weeks or more. Can you afford to be without your CRM or ERP system for weeks? By the way, I think there are steps these companies could take to mitigate this risk for you, but they will only do it if they get enough pressure from customers. Alternatively, you could build (or an entrepreneurial company could provide) conversion routines that bring your data up and running in another provider or software system fairly quickly. This would have to be tested in advance.
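To make the backup-and-restore point concrete, here is a minimal sketch in Python. The function names and the JSON format are my own assumptions for illustration – a real export would go through your SaaS vendor's API, and a real restore would map fields into the replacement system's schema – but the core discipline is the same: snapshot your data on a schedule, to a location no single provider controls, and prove you can read it back.

```python
import json
import time
from pathlib import Path

def export_offsite(records, backup_dir):
    """Write a timestamped JSON snapshot of your SaaS records to an
    independent location (a second provider, or an on-premise disk)."""
    backup_dir = Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    snapshot = backup_dir / f"crm-export-{int(time.time())}.json"
    snapshot.write_text(json.dumps(records, indent=2))
    return snapshot

def restore(snapshot):
    """Read a snapshot back. Testing this path in advance is the whole
    point -- a backup you have never restored is not a backup."""
    return json.loads(Path(snapshot).read_text())
```

Run the restore path against a real snapshot periodically; the conversion project you want to avoid is the one you discover mid-outage.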
Another approach – the one Standing Cloud enables – is to use software that is automatically deployed and managed in the infrastructure cloud, but is separate for each customer; and further, it is backed up on another cloud provider or other location. In this scenario, there is no single point of failure or company failure. If the provider of the software has a problem, it doesn’t matter because you are running it yourself. If the cloud provider has a problem, Standing Cloud has your backups and can re-deploy the application in another location. If Standing Cloud has a problem, you can have the cloud provider reset the password for your virtual server and access it that way.
As long as governments violate rights, mother nature wreaks havoc, and humans make errors, you need to deal with these issues. Make sure you have an offsite backup of your data and a way to use that backup.
I’m now officially in the cloud business, courtesy of my friends at Standing Cloud (we are investors). Standing Cloud delivers cloud application management solutions for cloud service providers, technology solution providers, and their customers. Their application management layer, automated managed services, and Application Storefront make it easy to build, deploy, and manage applications in the cloud.
Back in November, I wrote that “If you are a hosting, managed service provider, or building a cloud service (public or private), you have three choices. The first is to ignore this stuff (dumb). The second is to try to build it all yourself and keep pace with Amazon (good luck). The third is to use Standing Cloud.” And that’s exactly what I’m doing. Basically, Brad’s Amazing Cloud makes me a cloud provider.
Built on a white-label version of the Standing Cloud platform, Brad’s Amazing Cloud provides all the tools to get applications up and running in the cloud quickly, affordably, and without the hassle of managing hardware and infrastructure. So it’s goodbye to server racks, terminal windows, and sys admin headaches. With Brad’s Amazing Cloud, it’s simple and hassle-free, whether you’re a skilled developer or a non-technical application user.
With Brad’s Amazing Cloud, you get your choice of clouds, applications, and developer frameworks. You can run on AWS, Rackspace, GoGrid, and several other cloud hosting providers. Through the Storefront, you can fire up preconfigured instances of nearly 100 open-source and commercial apps – including popular business applications like Drupal and WordPress – and you need no technical knowledge to do so.
All account management with the cloud provider you select is handled transparently – no separate account needed. Pick the cloud of your choice, and change at any time, for any reason. Your applications are completely portable from cloud to cloud, so you’re never locked in to a single cloud service or provider.
Brad’s Amazing Cloud is real, and it’s open for business now. Check it out and tell me what you think.
If you’re a solutions provider looking to provide an application management layer for your customers or resellers, a developer looking for the easiest way to build, deploy and scale applications in the cloud, or just want to get your business up and running in the cloud, quickly, cost effectively and without the typical technical challenges, Standing Cloud may have a solution for you. With their white-label capabilities, your cloud offering can be customized to meet your specific needs (just like they did for me), including branding on the storefront, management console and support pages, and your choice of applications (including proprietary apps and software), infrastructure and supported clouds.
My friends at Standing Cloud have closed another $3 million financing from us (Foundry Group) and Avalon Ventures. They’ve also added a long time friend, co-investor, and amazing entrepreneur Will Herman to the board.
Standing Cloud is a great example of how one of our funding strategies plays out. We are the seed investors and have been working closely with the company since inception. They’ve built an incredibly deep product around a very specific aspect of the broad cloud computing ecosystem. When they started, much of cloud computing was total noise and marketing baloney. While there’s still plenty of that in the system, many of the products and services are maturing, and the particular segment Standing Cloud has gone after has suddenly become incredibly important to a large number of hosting, managed services, and cloud providers (often the same thing) not named Amazon. Specifically:
Standing Cloud provides a seamless application layer for cloud providers, making application deployment and management fast, simple, and hassle-free for their customers. Standing Cloud’s standard application catalog includes 100 open-source and commercial applications; its Platform-as-a-Service (PaaS) capabilities support multiple programming languages, including Rails, PHP, Java and Python, and a wide range of cloud service providers and orchestration software systems.
If you are a hosting, managed service provider, or building a cloud service (public or private), you have three choices. The first is to ignore this stuff (dumb). The second is to try to build it all yourself and keep pace with Amazon (good luck). The third is to use Standing Cloud.
If you want an intro, just email me and ask.
As most nerds know, Skynet gained self-awareness last week and decided as its first act to mess with Amazon Web Services, creating havoc for anyone who wanted to check in on the Internet with their current physical location. In hindsight, Skynet eventually figured out this was a bad call on its part, as it actually wants to know where every human is at any given time. However, Skynet is still trying to get broader adoption of Xbox Live machines, so the Sony PlayStation Network appears to still be down.
After all the obvious “oh my god, AWS is down” articles followed by the “see – I told you the cloud wouldn’t work” articles, some thoughtful analysis and suggestions have started to appear. Over the weekend, Dave Jilk, the CEO of Standing Cloud (I’m on the board) asked if I was going to write something about this and – if not – did I want him to write a guest post for me. Since I’ve used my weekend excess of creative energy building a Thing-O-Matic 3D Printer in an effort to show the machines that I come in peace, I quickly took him up on his offer.
Following are Dave’s thoughts on learning the right lessons from the Amazon outage.
Much has already been written about the recent Amazon Web Services outage that has caused problems for a few high-profile companies. Nevertheless, at Standing Cloud we live and breathe the infrastructure-as-a-service (IaaS) world every day, so I thought I might have something useful to add to the discussion. In particular, some media and naysayers are emphasizing the wrong lessons to be learned from this incident.
Wrong lesson #1: The infrastructure cloud is either not ready for prime time, or never will be.
Those who say this simply do not understand what the infrastructure cloud is. At bottom, it is just a way to provision virtual servers in a data center without human involvement. It is not news to anyone who uses them that virtual servers are individually less reliable than physical servers; furthermore, those virtual servers run on physical servers inside a physical data center. All physical data centers have glitches and downtime, and this is not the first time Amazon has had an outage, although it is the most severe.
What is true is that the infrastructure cloud is not and never will be ready to be used exactly like a traditional physical data center that is under your control. But that is obvious after a moment’s reflection. So when you see someone claiming that the Amazon outage shows that the cloud is not ready, they are just waving an ignorance flag.
Wrong lesson #2: Amazon is not to be trusted.
On the contrary, the AWS cloud has been highly reliable on the whole. They take downtime seriously and given the volume of usage and the amount of time they have been running it (since 2006), it is not surprising that they would eventually have a major outage of some sort. Enterprises have data center downtime, and back in the day when startups had to build their own, so did they. Some data centers are run better than others, but they all have outages.
Of more concern are rumors I have heard that Amazon does not actually use AWS for Amazon.com. That doesn’t affect the quality of their cloud product directly, but given that they have lured customers with the claim that they do use it, this does impact our trust in relation to their marketing integrity. Presumably we will eventually find out the truth on that score. In any case, this issue is not related to the outage itself.
Having put the wrong lessons to rest, here are some positive lessons that put the nature of this outage into perspective, and help you take advantage of IaaS in the right way and at the right time.
Right lesson #1: Amazon is not infallible, and the cloud is not magic.
This is just the flip side of the “wrong lessons” discussed above. If you thought that Amazon would have 100% uptime, or that the infrastructure cloud somehow eliminates concerns about downtime, then you need to look closer at what it really is and how it works. It’s just a way to deploy somewhat less reliable servers, quickly and without human intervention. That’s all. Amazon (and other providers) will have more outages, and cloud servers will fail both individually and en masse.
Your application and deployment architecture may not be ready for this. However, I would claim that if it is not, you are assuming that your own data center operators are infallible. The architectural changes required to accommodate the public IaaS cloud are a good idea even if you never move the application there. That’s why smart enterprises have been virtualizing their infrastructure, building private clouds, and migrating their applications to operate in that environment. It’s not just a more efficient use of hardware resources, it also increases the resiliency of the application.
Right lesson #2: Amazon is not the only IaaS provider, and your application should be able to run on more than one.
This requires a bias alert: cloud portability is one of the things Standing Cloud enables for the applications it manages. If you build/deploy/manage an application using our system, it will be able to run on many different cloud providers, and you can move it easily and quickly.
We built this capability, though, because we believed that it was important for risk mitigation. As I have already pointed out, no data center is infallible and outages are inevitable. Further, it is not enough to have access to multiple data centers – the Amazon outage, though focused on one data center, created cascading effects (due to volume) in its other data centers. This, too, was predictable.
Given the inevitability of outages, how can one avoid downtime? My answer is that an application should be able to run on more than one, or many, different public cloud services. This answer has several implications:
- You should avoid reliance on unique features of a particular IaaS provider if they affect your application architecture. Amazon has built a number of features that other providers do not have, and if you are committed to Amazon they make it very easy to be “locked in.” There are two ways to handle this: first, use a least-common-denominator approach; second, find a substitution for each such feature on a “secondary” service.
- Your system deployment must be automated. If it is not automated, it is likely that it will take you longer to re-deploy the application (either in a different data center or on a different cloud service) than it will take for the provider to bring their service back up. As we have seen, that can take days. I discuss automation more below.
- Your data store must be accessible from outside your primary cloud provider. This is the most difficult problem, and how you accomplish it depends greatly on the nature of your data store. However, backups and redundancy are the key considerations (as usual!). All data must be in more than one place, and you need to have a way to fail over gracefully. As the Amazon outage has shown, a “highly reliable” system like their EBS (Elastic Block Storage) is still not reliable enough to avoid downtime.
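One way to act on the least-common-denominator point is to write your deployment code against a thin interface that exposes only the operations every IaaS provider supports. The sketch below is illustrative, not a real driver: `CloudProvider` and `FakeProvider` are hypothetical names of my own, standing in for the per-vendor adapters (AWS, Rackspace, GoGrid, etc.) you would actually write or get from a library.

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Least-common-denominator interface: only the operations every
    IaaS provider offers, so the application never depends on
    vendor-specific features."""
    @abstractmethod
    def create_server(self, image, size): ...
    @abstractmethod
    def destroy_server(self, server_id): ...

class FakeProvider(CloudProvider):
    """Stand-in for a real driver; an AWS or Rackspace adapter would
    implement the same two calls against the vendor's API."""
    def __init__(self):
        self._next_id, self.servers = 1, {}
    def create_server(self, image, size):
        sid = f"srv-{self._next_id}"
        self._next_id += 1
        self.servers[sid] = (image, size)
        return sid
    def destroy_server(self, server_id):
        self.servers.pop(server_id, None)

def deploy(provider: CloudProvider, image, size, count):
    """Deployment logic is written once against the interface, so moving
    clouds means swapping the driver, not rewriting the application."""
    return [provider.create_server(image, size) for _ in range(count)]
```

Vendor-specific features that don't fit the interface then become explicit, deliberate exceptions rather than silent lock-in.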
Right lesson #3: Cloud deployments must be automated and should take cloud server reliability characteristics into account.
Even though I have seen it many times, I am still taken aback when I talk to a startup that has used Amazon just like a traditional data center using traditional methods. Their sysadmins go into the Amazon console, fire up some servers, manually configure the deployment architecture (often using Amazon features that save them time but lock them in), and hope for the best. Oh, they might burn an AMI and save it on S3, in case the server dies (which only works as long as nothing changes). If they need to scale up, they manually add another server and manually add it to the load balancer queue.
This type of usage treats IaaS as mostly a financing alternative. It’s a way to avoid buying capital equipment and to conserve financial resources when you do not know how much computing infrastructure you will need. Even the fact that you can change your infrastructure resources rapidly really just boils down to not having to buy and provision those resources in advance. This benefit is a big one for capital-efficient lean startups, but on the whole the approach is risky and suboptimal. The Amazon outage illustrates this: companies that used this approach were stuck during the outage, but at another level they are still stuck with Amazon because their server configurations are implicit.
Instead, the best practice for deploying applications – in the cloud, but really anywhere – is to automate the deployment process. There should be no manual steps in the deployment process. Although this can be done using scripts, even better is to use a tool like Chef, Puppet, or CFEngine to take advantage of abstractions in the process. Or use RightScale, Kaavo, CA Applogic, or similar tools to not only automate but organize your deployment process. If your application uses a standard N-tier architecture, you can potentially use Standing Cloud without having to build any automation scripts at all.
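The key property those tools share is idempotence: each step checks the current state before acting, so the same deployment can be run on a fresh server or rerun on an existing one. Here is a toy sketch of that model in Python – the step names and the dict-as-server-state are my own simplifications, not how Chef or Puppet are actually implemented.

```python
def step(name, check, apply, state):
    """Run one idempotent step: apply only if the check fails.
    This mirrors the resource model in Chef/Puppet/CFEngine."""
    if not check(state):
        apply(state)
    return state

def deploy(state=None):
    """The whole deployment expressed as code. Because it is rerunnable
    on a blank server, it doubles as your disaster-recovery plan: after
    an outage, point it at a different data center and run it again."""
    state = state if state is not None else {}
    step("packages", lambda s: s.get("nginx") == "installed",
         lambda s: s.update(nginx="installed"), state)
    step("config", lambda s: "vhost" in s,
         lambda s: s.update(vhost="app.conf"), state)
    step("service", lambda s: s.get("running") is True,
         lambda s: s.update(running=True), state)
    return state
```

Running `deploy()` twice produces the same state as running it once, which is exactly what makes redeployment during an outage safe.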
Automating an application deployment in the cloud is a best practice with numerous benefits, including:
- Free redundancy. Instead of having an idle redundant data center (whether cloud or otherwise), you can simply re-deploy your application in another data center or cloud service using on-demand resources. Some of the resources (e.g., a replicated data store) might need to be available at all times, but most of the deployment can be fired up only when it is needed.
- Rapid scalability. In theory you can get this using Amazon’s auto-scaling features, Elastic Beanstalk, and the like. But these require access to AMIs that are stored on S3 or EBS. We’ve learned our lesson about that, right? Instead, build a general purpose scalability approach that takes advantage of the on-demand resources but keeps it under your control.
- Server failover can be treated just like scalability. Virtual servers fail more frequently than physical servers, and when they do, there is less ability to recover them. Consequently, a good automation procedure treats scalability and failover the same way – just bring up a new server.
- Maintainability. A server configuration that is created manually and saved to a “golden image” has numerous problems. Only the person who built it knows what is there, and if that person leaves or goes on vacation, it can be very time consuming to reverse-engineer it. Even that person will eventually forget, and if there are several generations of manual configuration changes (boot the golden image, start making changes, create a new golden image), possibly by different people, you are now locked into that image. All these issues become apparent when you need to upgrade O/S versions or change to a new O/S distribution. In contrast, a fully automated deployment is not only a functional process with the benefits mentioned above, it also serves as documentation.
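The "failover is just scalability" idea above can be captured in a single reconciliation loop: drop servers that fail their health check, then launch replacements until the pool matches the desired count. This is a hedged sketch with invented names (`reconcile`, `launch`, `is_healthy`), not any particular product's API.

```python
def reconcile(pool, desired, launch, is_healthy):
    """Treat failover and scaling identically: remove unhealthy servers,
    then launch new ones until the pool reaches the desired size.
    Scaling up is the same loop with a larger `desired`."""
    pool = [s for s in pool if is_healthy(s)]
    while len(pool) < desired:
        pool.append(launch())
    return pool
```

Because virtual servers fail more often and are rarely worth recovering, "bring up a new one" is the single code path that handles both a dead instance and a traffic spike.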
In summary, let the Amazon Web Services outage be a wake-up call… not to fear the IaaS cloud, but to know it, use it properly, and take advantage of its full possibilities.
Standing Cloud, which makes it easy to deploy and run apps in the cloud, recently closed a $3m financing led by Rich Levandov at Avalon Ventures. Rich and I have known each other and worked together since the mid-1990s and more recently have invested together in NewsGator and Zynga.
Rich has spent a lot of time in the clouds lately, including his investment in Cloudkick, which was acquired yesterday by Rackspace. He got excited about Standing Cloud and their mission to “reimagine hosting” in the context of cloud computing. Shared hosting was a great idea back in 1999, but most users of Web apps today require more control over upgrades, better access to backups, the ability to move applications across cloud providers, and extremely high reliability. In addition, deploying apps on most cloud providers continues to be unnecessarily complicated.
There are a huge number of solution providers out in the world who are specialists in any of the more than 70 open-source apps that Standing Cloud supports. For them, Standing Cloud is a simple way to deploy multiple instances of a single app across all of their clients, retain a high degree of flexibility and control over the apps, and not ever have to worry about hosting. These are folks who are helping businesses launch and maintain not only websites but the software they use to run and manage their business.
This week, Standing Cloud launched the Standing Cloud Partner Program for these customers. Becoming a partner includes free hosting for one instance of a single application for one year, volume pricing, and a listing of their services in the Standing Cloud Application Network, launched last week, which is gearing up to be the go-to place for end users and solution providers around Web apps. The program is designed to help grow the business of service providers who customize, support, and deploy online applications, ranging from CMS systems like Drupal, WordPress, and Plone to CRM systems like vTiger and SugarCRM, and other business tools like Status.Net and OpenVBX.
If you’re a solution provider looking for a better way to manage apps for your clients, you can sign up at Standing Cloud. And if you want to see how easy it is to set up any of over 70 open source apps in under five minutes, just select an app and click on “Use It Now.”