I expect most of you know the fable of the scorpion and the frog, but if you don’t, it goes like this (quoted from Wikipedia):
“A scorpion asks a frog to carry him over a river. The frog is afraid of being stung during the trip, but the scorpion argues that if it stung the frog, both would sink and the scorpion would drown. The frog agrees and begins carrying the scorpion, but midway across the river the scorpion does indeed sting the frog, dooming them both. When asked why, the scorpion points out that this is its nature. The fable is used to illustrate the position that no change can be made in the behaviour of the fundamentally vicious.”
Over the weekend, there was some commentary on “AWS in fight of its life as customers like Dropbox ponder hybrid clouds and Google pricing.” Amazon turned in slightly declining quarter-over-quarter revenue on AWS, although significant year-over-year quarterly growth, as explained in “Sign of stress or just business as usual? AWS sales are off slightly.”
“Could Amazon Web Services be feeling the heat from new public cloud competitors? Maybe. Maybe not. Second quarter net sales of AWS — or at least the category in which it is embedded — were off about 3 percent sequentially to $1.168 billion from $1.204 billion for the first quarter. But they were up 38 percent from $844 million for the second quarter last year. In the first quarter, growth in this category year over year was 60 percent. So make of that what you will.”
Could Amazon’s nature be catching up with it, or is it just operating in a more competitive market? A set of emails went around from some of the CEOs of our companies talking about this followed by a broader discussion on our Foundry Group EXEC email list. It contained, among other comments:
- AWS is not the low price provider.
- AWS is not the best product at anything – most of their features are mediocre knock-offs of other products.
- AWS is unbelievably lousy at support.
- Once you are at $200k / month of spend, it’s cheaper and much more effective to build your own infrastructure.
While we are in the middle of a massive secular shift from owned data centers to outsourced data centers and hardware, anyone who remembers the emergence of outsourced data centers, shared web hosting, dedicated web hosting, co-location, and application service providers will recognize many of the dynamics going on. Predictably in the tech industry, what’s old is new again as all the infrastructure players roll out their public clouds and all the scaled companies start exploring ways to move off of AWS (and other cloud services) into much more cost effective configurations.
Let’s pick apart the four points above a little bit.
1. AWS is not the low price provider. When AWS came out, it was amazing, partly because you didn’t need to buy any hardware to get going, partly because it had a very fine-grained variable pricing approach, and mostly because these two things added up to an extremely low cost for a startup relative to all other options. This is no longer the case as AWS, Microsoft, and Google bash each other over the head on pricing, with Microsoft and Google willing to charge extremely low prices to gain market share. And, more importantly, see point #4 below. Being low priced is in Amazon’s nature, so this will be intensely challenging for them.
2. AWS is not the best product at anything – most of their features are mediocre knock-offs of other products. We’ve watched as AWS has aggressively talked to every company we know doing things in the cloud infrastructure and application stack, and then, rather than partnering, eventually rolled out low-end versions of competitive products. We used to think of Amazon as a potential acquirer for these companies, or at least a powerful strategic partner. Now we know they are just using the bait of “we want to work more closely with you” to gather market and product intelligence. Ultimately, when they come out with what they view as a feature, it’s a low-end, mediocre, and limited version of what these companies do. So they commoditize elements of the low end of the market, but don’t impact anything that actually scales. In addition, they always end up competing on every front possible, hence the chatter about Dropbox moving away from AWS since AWS has now come out with a competitive product. It appears that it’s just not in Amazon’s nature to collaborate with others.
3. AWS is unbelievably lousy at support. While they’ve gotten better at paid support, including their premium offerings, these support contracts are expensive. Approaches to get around support issues and/or lower long-term prices, like reserved instances, are stop gaps and often a negative benefit for a fast-growing company. I’ve had several conversations over the years with friends at Amazon about this and I’ve given up. Support is just not in Amazon’s nature (as anyone who has ever tried to figure out why a package didn’t show up when expected knows), and when a company running production systems on AWS is having mission-critical issues that are linked to AWS, it’s just painful. At low volumes, it doesn’t matter, but at high scale, it matters a huge amount.
4. Once you are at $200k / month of spend, it’s cheaper and much more effective to build your own infrastructure. I’ve now seen this over and over and over again. Once a company hits $200k / month of spend on AWS, the discussion starts about building out your own infrastructure on bare metal in a data center. This ultimately is a cost-of-capital discussion, and I’ve seen massive cost-of-capital leverage in moving away from AWS onto bare metal. When you fully load the costs at scale, I’ve seen gross margin moves of over 20 points (or 2,000 basis points – say from 65% to 85%). It’s just nuts when you factor in the extremely low cost of capital for hardware today against a fully loaded cost model at scale. Sure, the price declines from point #1 will impact this, but the operational effectiveness, especially given #3, is remarkable.
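To make the arithmetic concrete, here’s a quick sketch – with made-up numbers, since every company’s cost structure is different – of what a 20-point gross margin move looks like:

```python
# Illustrative only: hypothetical revenue and cost figures for a company
# spending roughly $200k/month on infrastructure.

def gross_margin(revenue, cogs):
    """Gross margin as a fraction of revenue."""
    return (revenue - cogs) / revenue

monthly_revenue = 1_000_000   # hypothetical monthly revenue
aws_cogs        = 350_000     # fully loaded cost of revenue on AWS -> 65% margin
bare_metal_cogs = 150_000     # same workload, fully loaded, on bare metal -> 85% margin

aws_margin  = gross_margin(monthly_revenue, aws_cogs)
bm_margin   = gross_margin(monthly_revenue, bare_metal_cogs)
move_in_bps = (bm_margin - aws_margin) * 10_000   # 1 point = 100 basis points

print(f"AWS gross margin:        {aws_margin:.0%}")
print(f"Bare-metal gross margin: {bm_margin:.0%}")
print(f"Margin move:             {move_in_bps:.0f} bps")
```

At these (hypothetical) numbers, the move from 65% to 85% is exactly the 2,000 basis points described above – an extra $200k/month dropping straight to gross profit.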
There are a number of things Amazon, and AWS, could do to address this if they wanted to. While not easy, I think they could do a massive turnaround on #2 and #3, which combined with intelligent pricing and better account management for the companies in #4, could result in meaningful change.
I love Amazon and think they have had an amazing impact on our world. Whenever I’ve given them blunt feedback like this, I’ve always intended it to be constructive. I doubt it matters at all to their long-term strategy whether they agree with, or even listen to, me. But given the chatter over the weekend, it felt like it was time to say this in the hope that it generates a conversation somewhere.
But I worry that some of the things they need to be doing to maintain their dominance are just not in their nature. In a lot of ways, it’s suddenly a good time to be Microsoft or Google in the cloud computing wars.
Our portfolio company JumpCloud is running a survey to dig deeper into the professional lives of IT folks and their move to DevOps. If you are open to sharing your thoughts and experiences, please take their survey. It’s only about five minutes long and they are sharing all of the raw data (anonymized, of course). The survey ends at the end of June.
The IT sector is undergoing some interesting transformations as a result of the cloud, DevOps, and mobile. I’m interested to see what the data tells us.
If you happen to have at least 100 servers, JumpCloud is looking to pick your brain about how you manage them. If you are open to it, let me know and I’ll connect you with them – I’m sure that they will make it worth your time (and I appreciate the help)!
As a bonus, JumpCloud is raffling off a Fitbit Flex (another one of our portfolio companies), an Amazon Fire TV, and a Samsung Gear 2 Neo smartwatch if you complete the survey. Please take a few minutes and help us get some interesting data on how the IT sector works.
This is the second year that TechStars is running a thematic accelerator in Texas focused just on cloud computing. At Foundry Group, we believe in thematic investing, both as a way to organize and filter the massive number of opportunities we look at and as a way to build a set of muscles around a sphere of knowledge. It’s been fun to experiment with this approach at TechStars.
While we recognize the tidal wave trend of all technology becoming ‘cloudy’, we are approaching TechStars Cloud with specific focus. The companies in TechStars Cloud are the ones enabling the trend of cloud computing and providing the underlying technology, versus just the ones that are being carried along with it.
An example of just such a company is Cloudability, a graduate of last year’s TechStars Cloud program that we subsequently funded this past summer. They are taking the pain out of managing and monitoring the dozens and often hundreds of individual cloud provider accounts that companies end up with. It’s a big need, and the early success of Cloudability validates this.
Cloud computing is still an amazingly nascent field with opportunities everywhere you look. From database technologies to network, big data to analytics, security to hosting platforms, documents to video, the next wave of companies are turning cloud technology into leverage for all businesses – tech and non-tech. The world now has an API and we call it cloud.
If you are a part of that landscape, or want to be, this is a great first step -> apply.techstars.com. Tell them I sent you.
The first cycle of the Microsoft Accelerator, powered by TechStars, is in its final run-up to demo day. The first program has focused on Kinect applications and has some super teams, such as Gestsure (they control operating rooms with motion control) and Ubi (they turn any surface into a touch screen).
Demo Day is in Seattle on June 28th. If you are an investor (angel or VC), send me an email and I’ll get you an invitation.
TechStars and Microsoft have been so pleased with the program that a second cycle of the Microsoft Accelerator in Seattle has been added, focusing on cloud-based applications. Applications are open now through July 13. Each company gets $20k in funding, mentorship from top entrepreneurs, investors, and Microsoft executives, $60k in Azure credit, office space, training and support, and a demo day to pitch to investors, media, and industry influencers.
As you may know, Microsoft has really made some awesome improvements to Windows Azure. Most notably, it’s much more open-source focused. Want to run Linux? No problem. Python? No problem. Microsoft has embraced open source with this update of Windows Azure. While you need not be using Azure to apply to the Microsoft Accelerator, if you’re playing in the Microsoft ecosystem at all I’d really encourage you to take a look at the latest news about Windows Azure.
If you are an entrepreneur working on something cloud computing related, especially in the Microsoft ecosystem, consider applying to the Microsoft Accelerator today.
I’m now officially in the cloud business, courtesy of my friends at Standing Cloud (we are investors). Standing Cloud delivers cloud application management solutions for cloud service providers, technology solution providers, and their customers. Their application management layer, automated managed services, and Application Storefront make it easy to build, deploy, and manage applications in the cloud.
Back in November, I wrote that “If you are a hosting, managed service provider, or building a cloud service (public or private), you have three choices. The first is to ignore this stuff (dumb). The second is to try to build it all yourself and keep pace with Amazon (good luck). The third is to use Standing Cloud.” And that’s exactly what I’m doing. Basically, Brad’s Amazing Cloud makes me a cloud provider.
Built on a white-label version of the Standing Cloud platform, Brad’s Amazing Cloud provides all the tools to get applications up and running in the cloud quickly, affordably, and without the hassle of managing hardware and infrastructure. So it’s goodbye to server racks, terminal windows, and sysadmin headaches. With Brad’s Amazing Cloud, it’s simple and hassle-free, whether you’re a skilled developer or a non-technical application user.
With Brad’s Amazing Cloud, you get your choice of clouds, applications, and developer frameworks. You can run on AWS, Rackspace, GoGrid, and several other cloud hosting providers. Through the Storefront, you can fire up preconfigured instances of a wide range of open source apps – and you need no technical knowledge to do so. Nearly 100 open-source and commercial apps are available, including popular business applications like Drupal and WordPress.
All account management with the cloud provider you select is handled transparently – no separate account needed. Pick the cloud of your choice, and change at any time, for any reason. Your applications are completely portable from cloud to cloud, so you’re never locked in to a single cloud service or provider.
Brad’s Amazing Cloud is real, and it’s open for business now. Check it out and tell me what you think.
If you’re a solutions provider looking to provide an application management layer for your customers or resellers, a developer looking for the easiest way to build, deploy and scale applications in the cloud, or just want to get your business up and running in the cloud, quickly, cost effectively and without the typical technical challenges, Standing Cloud may have a solution for you. With their white-label capabilities, your cloud offering can be customized to meet your specific needs (just like they did for me), including branding on the storefront, management console and support pages, and your choice of applications (including proprietary apps and software), infrastructure and supported clouds.
Are you building a cloud startup? If so, apply to TechStars Cloud today!
Earlier this month TechStars announced its newest accelerator program, TechStars Cloud, and we are looking for the best cloud startups we can find to go through the inaugural program.
We’ve gotten a lot of questions about what constitutes a “cloud startup”, so here is a discussion of what we think are cloud startups. We think we can do something special with this program and have big expectations for the results we’ll see when we connect early stage cloud startups to the best cloud mentors and companies.
If you haven’t heard, we have upped the initial funding in the program to $118k.
StillSecure has been nailing it in the service provider segment, with recent deals with XO, ViaWest, CoreSite, and others. StillSecure fundamentally believes that service providers – telcos, data centers, cloud providers – will be the channel to market for security solutions, and I agree. They have built an amazing set of solutions for colocation and dedicated server environments and have solutions that can apply to some higher-end cloud users. Today they are announcing a new host-based firewall management solution in conjunction with SoftLayer – a leader in the cloud market. Aimed at all cloud users, StillSecure’s new solution is the start of a major initiative for the company and is also a new category of solutions.
As most cloud users know, securing their systems is incredibly hard. The solutions are either just “cloud-washed” products that aren’t a fit, or they are so expensive that they cannot fit within the elastic cloud model. StillSecure has taken nearly 12 years of history and experience and has built a product from the ground up with the cloud user’s customer experience and profile in mind.
The solution, called Cloud SMS, is free today and will expand into premium offerings very quickly. StillSecure and Cloud SMS are in the SoftLayer Tech Partner Marketplace, being promoted to SoftLayer’s 23,000 customers. The two companies are also beginning to explore offering the complete spectrum of StillSecure’s managed security services into SoftLayer’s broader offerings.
I’m excited for the StillSecure and SoftLayer teams – building a secure cloud is an incredibly important goal and one that many companies can take advantage of. Do yourself a favor – if you have any cloud instances out there, go download StillSecure’s cloud security product and please secure them.
Before we invested in MakerBot, we bought and assembled a Thing-O-Matic. When I say we, I mean me, Jason, and Ross. It took us about 20 hours (Jason and I did the first half; Jason and Ross did the second half) and was a blast – think of it as an adult Lego project. Our Thing-O-Matic has been steadily printing stuff – you can play a game of chess with our Thing-O-Matic pieces the next time you are in my office.
As part of the endless series of Amazing Deals I bring you from my deal site, today’s offer is a fully assembled Thing-O-Matic. If you want your own 3D printer, but you don’t want to assemble it, you can buy it fully assembled for $2,500. But, through the magic of daily deals, there are 20 available for a 20% discount ($2,000). This is a one time offer from my friends at MakerBot so grab ’em while they are available.
And finally, for all of you that have written asking for a “Convertible Debt Series” like our term sheet series, we’ve just started one on AsktheVC.com. The first post is up and introduces the series – we’ll be working through all of the terms in a convertible debt deal over the next few weeks.
I find it endlessly entertaining that people say things like “I don’t need to back up my data anymore because it’s in the cloud.” These people have never experienced a cloud failure, accidentally deleted a specific contact record, or authenticated an app that messed up their account. They will. And it will be painful.
I became a believer in backing up my data when I was 17 years old and had my first data calamity. I wrote about the story on my post What Should You Do When Your Web Service Blows Up. I’ve been involved in a few other data tragedies over the past 28 years which always reinforce (sometimes dramatically) the importance of backups.
We recently invested in a company called Spanning Cloud Apps. If you are a Google Apps user, this is a must use application. Go take a look at Spanning Backup for Google Apps – your first three seats are free. It currently does automatic backup of your Google contacts, calendars, and docs at an item level allowing you to selectively restore any data that accidentally gets deleted or lost. I’ve been using it for a while (well before we invested) and it works great.
I’ve known the founder and CEO, Charlie Wood, for six years or so. Charlie was an early exec at NewsGator but left to pursue his own startup. I came close to funding another company of his in the 2005 time frame but that never came together. I’m delighted to be in business with him again.
Don’t be a knucklehead. Back up your data.
As most nerds know, Skynet gained self-awareness last week and decided as its first act to mess with Amazon Web Services, creating havoc for anyone who wanted to check in on the Internet with their current physical location. In hindsight, Skynet eventually figured out this was a bad call on its part, as it actually wants to know where every human is at any given time. However, Skynet is still trying to get broader adoption of Xbox Live machines, so the Sony PlayStation Network appears to still be down.
After all the obvious “oh my god, AWS is down” articles followed by the “see – I told you the cloud wouldn’t work” articles, some thoughtful analysis and suggestions have started to appear. Over the weekend, Dave Jilk, the CEO of Standing Cloud (I’m on the board) asked if I was going to write something about this and – if not – did I want him to write a guest post for me. Since I’ve used my weekend excess of creative energy building a Thing-O-Matic 3D Printer in an effort to show the machines that I come in peace, I quickly took him up on his offer.
Following are Dave’s thoughts on learning the right lessons from the Amazon outage.
Much has already been written about the recent Amazon Web Services outage that has caused problems for a few high-profile companies. Nevertheless, at Standing Cloud we live and breathe the infrastructure-as-a-service (IaaS) world every day, so I thought I might have something useful to add to the discussion. In particular, some media and naysayers are emphasizing the wrong lessons to be learned from this incident.
Wrong lesson #1: The infrastructure cloud is either not ready for prime time, or never will be.
Those who say this simply do not understand what the infrastructure cloud is. At bottom, it is just a way to provision virtual servers in a data center without human involvement. It is not news to anyone who uses them that virtual servers are individually less reliable than physical servers; furthermore, those virtual servers run on physical servers inside a physical data center. All physical data centers have glitches and downtime, and this is not the first time Amazon has had an outage, although it is the most severe.
What is true is that the infrastructure cloud is not and never will be ready to be used exactly like a traditional physical data center that is under your control. But that is obvious after a moment’s reflection. So when you see someone claiming that the Amazon outage shows that the cloud is not ready, they are just waving an ignorance flag.
Wrong lesson #2: Amazon is not to be trusted.
On the contrary, the AWS cloud has been highly reliable on the whole. They take downtime seriously and given the volume of usage and the amount of time they have been running it (since 2006), it is not surprising that they would eventually have a major outage of some sort. Enterprises have data center downtime, and back in the day when startups had to build their own, so did they. Some data centers are run better than others, but they all have outages.
Of more concern are rumors I have heard that Amazon does not actually use AWS for Amazon.com. That doesn’t affect the quality of their cloud product directly, but given that they have lured customers with the claim that they do use it, it does impact our trust in their marketing integrity. Presumably we will eventually find out the truth on that score. In any case, this issue is not related to the outage itself.
Having put the wrong lessons to rest, here are some positive lessons that put the nature of this outage into perspective, and help you take advantage of IaaS in the right way and at the right time.
Right lesson #1: Amazon is not infallible, and the cloud is not magic.
This is just the flip side of the “wrong lessons” discussed above. If you thought that Amazon would have 100% uptime, or that the infrastructure cloud somehow eliminates concerns about downtime, then you need to look closer at what it really is and how it works. It’s just a way to deploy somewhat less reliable servers, quickly and without human intervention. That’s all. Amazon (and other providers) will have more outages, and cloud servers will fail both individually and en masse.
Your application and deployment architecture may not be ready for this. However, I would claim that if it is not, you are assuming that your own data center operators are infallible. The architectural changes required to accommodate the public IaaS cloud are a good idea even if you never move the application there. That’s why smart enterprises have been virtualizing their infrastructure, building private clouds, and migrating their applications to operate in that environment. It’s not just a more efficient use of hardware resources, it also increases the resiliency of the application.
Right lesson #2: Amazon is not the only IaaS provider, and your application should be able to run on more than one.
This requires a bias alert: cloud portability is one of the things Standing Cloud enables for the applications it manages. If you build/deploy/manage an application using our system, it will be able to run on many different cloud providers, and you can move it easily and quickly.
We built this capability, though, because we believed that it was important for risk mitigation. As I have already pointed out, no data center is infallible and outages are inevitable. Further, it is not enough to have access to multiple data centers – the Amazon outage, though focused on one data center, created cascading effects (due to volume) in its other data centers. This, too, was predictable.
Given the inevitability of outages, how can one avoid downtime? My answer is that an application should be able to run on more than one, or many, different public cloud services. This answer has several implications:
- You should avoid reliance on unique features of a particular IaaS provider if they affect your application architecture. Amazon has built a number of features that other providers do not have, and if you are committed to Amazon they make it very easy to be “locked in.” There are two ways to handle this: first, use a least-common-denominator approach; second, find a substitution for each such feature on a “secondary” service.
- Your system deployment must be automated. If it is not automated, it is likely that it will take you longer to re-deploy the application (either in a different data center or on a different cloud service) than it will take for the provider to bring their service back up. As we have seen, that can take days. I discuss automation more below.
- Your data store must be accessible from outside your primary cloud provider. This is the most difficult problem, and how you accomplish it depends greatly on the nature of your data store. However, backups and redundancy are the key considerations (as usual!). All data must be in more than one place, and you need to have a way to fail over gracefully. As the Amazon outage has shown, a “highly reliable” system like their EBS (Elastic Block Storage) is still not reliable enough to avoid downtime.
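To make the first and third bullets concrete, here is a rough sketch of what a least-common-denominator abstraction might look like. Everything below is hypothetical – the provider adapters are in-memory stubs, not real SDK calls – but the shape is the point: the application codes against a narrow interface that any IaaS provider can satisfy, and every backup lands in more than one place.

```python
from abc import ABC, abstractmethod
import itertools

class CloudProvider(ABC):
    """The least-common-denominator interface the application codes against.
    Only operations every IaaS provider can perform belong here."""

    @abstractmethod
    def launch_server(self, image: str) -> str:
        """Boot a server from an image; return its ID."""

    @abstractmethod
    def store_blob(self, key: str, data: bytes) -> None:
        """Durable object storage -- the substitution point for
        provider-specific stores like EBS or S3."""

class FakeProvider(CloudProvider):
    """Stand-in adapter; a real one would wrap a provider's actual API."""
    def __init__(self, name: str):
        self.name = name
        self._ids = itertools.count(1)
        self.blobs = {}

    def launch_server(self, image: str) -> str:
        return f"{self.name}-server-{next(self._ids)}"

    def store_blob(self, key: str, data: bytes) -> None:
        self.blobs[key] = data

def backup_everywhere(providers, key, data):
    """All data must be in more than one place."""
    for p in providers:
        p.store_blob(key, data)

primary, secondary = FakeProvider("cloud-a"), FakeProvider("cloud-b")
backup_everywhere([primary, secondary], "db-snapshot", b"...")

# Primary goes down? The data is already on the secondary, so fail over there.
server_id = secondary.launch_server("app-image")
print(server_id)   # cloud-b-server-1
```

The design choice worth noting: the interface is deliberately small. Every provider-specific feature you leave out of it is one less thing locking you in.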
Right lesson #3: Cloud deployments must be automated and should take cloud server reliability characteristics into account.
Even though I have seen it many times, I am still taken aback when I talk to a startup that has used Amazon just like a traditional data center using traditional methods. Their sysadmins go into the Amazon console, fire up some servers, manually configure the deployment architecture (often using Amazon features that save them time but lock them in), and hope for the best. Oh, they might burn an AMI and save it on S3, in case the server dies (which only works as long as nothing changes). If they need to scale up, they manually add another server and manually add it to the load balancer queue.
This type of usage treats IaaS as mostly a financing alternative. It’s a way to avoid buying capital equipment and to conserve financial resources when you do not know how much computing infrastructure you will need. Even the fact that you can change your infrastructure resources rapidly really just boils down to not having to buy and provision those resources in advance. This benefit is a big one for capital-efficient lean startups, but on the whole the approach is risky and suboptimal. The Amazon outage illustrates this: companies that used this approach were stuck during the outage, but at another level they are still stuck with Amazon because their server configurations are implicit.
Instead, the best practice for deploying applications – in the cloud, but really anywhere – is to automate the deployment process. There should be no manual steps. Although this can be done using scripts, even better is to use a tool like Chef, Puppet, or CFEngine to take advantage of abstractions in the process. Or use RightScale, Kaavo, CA AppLogic, or similar tools to not only automate but also organize your deployment process. If your application uses a standard N-tier architecture, you can potentially use Standing Cloud without having to build any automation scripts at all.
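As a toy illustration of what “no manual steps” means, here is a minimal deploy script. The paths, names, and release layout are all hypothetical, and a real setup would use one of the tools above, but the principle is identical: the script is repeatable, safe to re-run, and doubles as documentation of how the server is configured.

```shell
#!/bin/sh
# Minimal sketch of a fully automated deployment. Everything a human would
# otherwise do by hand on the box is scripted, so redeploying on a fresh
# server (or a different cloud) is one command.
set -eu                                  # fail fast; never half-configure a server

APP_DIR="${APP_DIR:-/tmp/myapp-demo}"    # hypothetical install location
RELEASE="${RELEASE:-1.0.0}"

# 1. Create the directory layout (mkdir -p makes this step idempotent).
mkdir -p "$APP_DIR/releases/$RELEASE" "$APP_DIR/shared"

# 2. Fetch the application code (stubbed here; a real script would
#    git clone a tag or unpack a versioned tarball).
echo "app code for release $RELEASE" > "$APP_DIR/releases/$RELEASE/app.txt"

# 3. Generate configuration from variables -- never hand-edit it on the box.
cat > "$APP_DIR/shared/app.conf" <<EOF
release=$RELEASE
db_host=${DB_HOST:-db.internal.example}
EOF

# 4. Repoint the "current" symlink; rolling back is just re-pointing it.
ln -sfn "$APP_DIR/releases/$RELEASE" "$APP_DIR/current"

echo "deployed release $RELEASE to $APP_DIR"
```

Run it twice and nothing breaks; change `RELEASE` and you get a new deploy with the old one still on disk for rollback. That re-runnability is exactly what the manual-console approach lacks.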
Automating an application deployment in the cloud is a best practice with numerous benefits, including:
- Free redundancy. Instead of having an idle redundant data center (whether cloud or otherwise), you can simply re-deploy your application in another data center or cloud service using on-demand resources. Some of the resources (e.g., a replicated data store) might need to be available at all times, but most of the deployment can be fired up only when it is needed.
- Rapid scalability. In theory you can get this using Amazon’s auto-scaling features, Elastic Beanstalk, and the like. But these require access to AMIs that are stored on S3 or EBS. We’ve learned our lesson about that, right? Instead, build a general purpose scalability approach that takes advantage of the on-demand resources but keeps it under your control.
- Server failover can be treated just like scalability. Virtual servers fail more frequently than physical servers, and when they do, there is less ability to recover them. Consequently, a good automation procedure treats scalability and failover the same way – just bring up a new server.
- Maintainability. A server configuration that is created manually and saved to a “golden image” has numerous problems. Only the person who built it knows what is there, and if that person leaves or goes on vacation, it can be very time consuming to reverse-engineer it. Even that person will eventually forget, and if there are several generations of manual configuration changes (boot the golden image, start making changes, create a new golden image), possibly by different people, you are now locked into that image. All these issues become apparent when you need to upgrade O/S versions or change to a new O/S distribution. In contrast, a fully automated deployment is not only a functional process with the benefits mentioned above, it also serves as documentation.
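The scalability/failover symmetry in the list above is easy to sketch. This toy reconciliation loop (the launch function is a stub, not a real provider API) treats “a server died” and “we need more servers” as the same event – in both cases, just bring up a new one:

```python
import itertools

_ids = itertools.count(1)

def launch_server():
    """Stub for a provider API call that boots a fresh server."""
    return {"id": f"server-{next(_ids)}", "healthy": True}

def reconcile(servers, desired_count, is_healthy):
    """Drop unhealthy servers, then launch until we reach desired_count.
    One code path handles both a server failure and a scale-up event."""
    alive = [s for s in servers if is_healthy(s)]
    while len(alive) < desired_count:
        alive.append(launch_server())   # failover == scale-up
    return alive

fleet = [launch_server() for _ in range(3)]   # server-1, server-2, server-3
fleet[1]["healthy"] = False                   # a virtual server dies
fleet = reconcile(fleet, desired_count=4,     # ...and traffic grew, too
                  is_healthy=lambda s: s["healthy"])
print([s["id"] for s in fleet])
# ['server-1', 'server-3', 'server-4', 'server-5']
```

The dead server is never "recovered" – consistent with the point that failed virtual servers usually can't be – it is simply replaced, by the same code that handles growth.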
In summary, let the Amazon Web Services outage be a wake-up call… not to fear the IaaS cloud, but to know it, use it properly, and take advantage of its full possibilities.