I just found out that Startup Communities: Building an Entrepreneurial Ecosystem in Your City made the Amazon Top 10 Business Books of 2012.
I’m not a huge “made that list” person, but as a writer this is a very cool thing, especially when I look at the other books, and writers, on the list. I’m downloading all of the other books right now and taking them on my upcoming two week vacation.
I’m at Defrag this morning listening to Kevin Kelly explain how the global super organism already exists and why it is different from the Kurzweil-defined Singularity. Awesome – and extremely consistent with how I think about how the machines have already taken over. Kevin’s intellectual approach is clearer and deeper – which I like, and will borrow heavily from. Kevin’s book, What Technology Wants, is also in the swag bag and I’ll be reading it next week.
One of the powerful concepts is that the “city is the node.” As I’ve been talking about Startup Communities, I’ve been explaining the power of “entrepreneurial density” and why everyone is congregating around cities again (intellectually referred to as the reurbanization of America). It’s really cool that he’s using the Degree Confluence Project to “show” (rather than simply “tell”) this.
A few of the books on the Amazon Top 10 Business Books of 2012 touch on this theme – I’ll be looking for it as I read on the beach over the next few weeks.
Thanks to all of you who participated in Operation Pre-Order for Startup Communities. I got a bunch of fun emails and am excited to share my newest book with you.
The Amazon winner is Jess Bachman. He’s from Bowmanville, Ontario, which Google shows me is an hour east of Toronto. Hopefully we can connect during my Toronto trip in October.
The BarnesandNoble.com winner is Chris Rill from Mamaroneck, NY. I’ll catch him on my next NY trip.
The ratio of Amazon to B&N entries started out at about 10:1 but ended up at 6:1. Later entrants thought about it and figured out that the odds were better if they bought from B&N, guessing that more people would buy from Amazon. They were correct!
As most nerds know, Skynet gained self-awareness last week and decided as its first act to mess with Amazon Web Services, creating havoc for anyone who wanted to check in to their current physical location on the Internet. In hindsight, Skynet figured out this was a bad call on its part, as it actually wants to know where every human is at any given time. However, Skynet is still trying to get broader adoption of Xbox Live machines, so the Sony PlayStation Network appears to still be down.
After all the obvious “oh my god, AWS is down” articles followed by the “see – I told you the cloud wouldn’t work” articles, some thoughtful analysis and suggestions have started to appear. Over the weekend, Dave Jilk, the CEO of Standing Cloud (I’m on the board) asked if I was going to write something about this and – if not – did I want him to write a guest post for me. Since I’ve used my weekend excess of creative energy building a Thing-O-Matic 3D Printer in an effort to show the machines that I come in peace, I quickly took him up on his offer.
Following are Dave’s thoughts on learning the right lessons from the Amazon outage.
Much has already been written about the recent Amazon Web Services outage that has caused problems for a few high-profile companies. Nevertheless, at Standing Cloud we live and breathe the infrastructure-as-a-service (IaaS) world every day, so I thought I might have something useful to add to the discussion. In particular, some media and naysayers are emphasizing the wrong lessons to be learned from this incident.
Wrong lesson #1: The infrastructure cloud is either not ready for prime time, or never will be.
Those who say this simply do not understand what the infrastructure cloud is. At bottom, it is just a way to provision virtual servers in a data center without human involvement. It is not news to anyone who uses them that virtual servers are individually less reliable than physical servers; furthermore, those virtual servers run on physical servers inside a physical data center. All physical data centers have glitches and downtime, and this is not the first time Amazon has had an outage, although it is the most severe.
What is true is that the infrastructure cloud is not and never will be ready to be used exactly like a traditional physical data center that is under your control. But that is obvious after a moment’s reflection. So when you see someone claiming that the Amazon outage shows that the cloud is not ready, they are just waving an ignorance flag.
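To make “without human involvement” concrete, here is a minimal sketch of provisioning an EC2 server through the API using the boto library. The AMI ID and key pair name are placeholders, not anything specific to a real system:

```python
# A minimal sketch: start a virtual server on EC2 entirely from code.
# The AMI ID and key pair name below are placeholders.
import time
from boto.ec2.connection import EC2Connection

conn = EC2Connection()  # picks up AWS credentials from the environment

reservation = conn.run_instances(
    image_id="ami-00000000",   # placeholder machine image
    instance_type="m1.small",
    key_name="my-keypair",     # placeholder SSH key pair
)
instance = reservation.instances[0]

# Poll until the server is running -- no console, no human in the loop.
while instance.state != "running":
    time.sleep(5)
    instance.update()

print("Server running at %s" % instance.public_dns_name)
```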
Wrong lesson #2: Amazon is not to be trusted.
On the contrary, the AWS cloud has been highly reliable on the whole. They take downtime seriously and given the volume of usage and the amount of time they have been running it (since 2006), it is not surprising that they would eventually have a major outage of some sort. Enterprises have data center downtime, and back in the day when startups had to build their own, so did they. Some data centers are run better than others, but they all have outages.
Of more concern are the rumors I have heard that Amazon does not actually use AWS for Amazon.com. That doesn’t affect the quality of their cloud product directly, but given that they have lured customers with the claim that they do use it, it does impact our trust in their marketing integrity. Presumably we will eventually find out the truth on that score. In any case, this issue is not related to the outage itself.
Having put the wrong lessons to rest, let’s turn to some positive lessons that put the nature of this outage into perspective and help you take advantage of IaaS in the right way and at the right time.
Right lesson #1: Amazon is not infallible, and the cloud is not magic.
This is just the flip side of the “wrong lessons” discussed above. If you thought that Amazon would have 100% uptime, or that the infrastructure cloud somehow eliminates concerns about downtime, then you need to look closer at what it really is and how it works. It’s just a way to deploy somewhat less reliable servers, quickly and without human intervention. That’s all. Amazon (and other providers) will have more outages, and cloud servers will fail both individually and en masse.
Your application and deployment architecture may not be ready for this. However, I would claim that if it is not, you are assuming that your own data center operators are infallible. The architectural changes required to accommodate the public IaaS cloud are a good idea even if you never move the application there. That’s why smart enterprises have been virtualizing their infrastructure, building private clouds, and migrating their applications to operate in that environment. It’s not just a more efficient use of hardware resources, it also increases the resiliency of the application.
Right lesson #2: Amazon is not the only IaaS provider, and your application should be able to run on more than one.
This requires a bias alert: cloud portability is one of the things Standing Cloud enables for the applications it manages. If you build/deploy/manage an application using our system, it will be able to run on many different cloud providers, and you can move it easily and quickly.
We built this capability, though, because we believed it was important for risk mitigation. As I have already pointed out, no data center is infallible and outages are inevitable. Further, it is not enough to have access to multiple data centers – the Amazon outage, though focused on one data center, created cascading effects (due to volume) in its other data centers. This, too, was predictable.
Given the inevitability of outages, how can one avoid downtime? My answer is that an application should be able to run on more than one – ideally many – public cloud services. This answer has several implications:
- You should avoid reliance on unique features of a particular IaaS provider if they affect your application architecture. Amazon has built a number of features that other providers do not have, and if you are committed to Amazon they make it very easy to be “locked in.” There are two ways to handle this: first, use a least-common-denominator approach (see the sketch after this list); second, find a substitution for each such feature on a “secondary” service.
- Your system deployment must be automated. If it is not automated, it is likely that it will take you longer to re-deploy the application (either in a different data center or on a different cloud service) than it will take for the provider to bring their service back up. As we have seen, that can take days. I discuss automation more below.
- Your data store must be accessible from outside your primary cloud provider. This is the most difficult problem, and how you accomplish it depends greatly on the nature of your data store. However, backups and redundancy are the key considerations (as usual!). All data must be in more than one place, and you need to have a way to fail over gracefully. As the Amazon outage has shown, a “highly reliable” system like their EBS (Elastic Block Store) is still not reliable enough to avoid downtime.
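As one illustration of the least-common-denominator approach mentioned above, a library like Apache Libcloud puts a single API in front of many IaaS providers. This is only a sketch – the credentials and the image/size selection are placeholders – but it shows how switching providers can become a one-argument change rather than a re-architecture:

```python
# A sketch of provider-portable provisioning via Apache Libcloud.
# Credentials and the image/size selection are placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def create_web_server(provider, key, secret, name):
    driver = get_driver(provider)(key, secret)
    # Taking the first image/size for brevity; a real deployment
    # would pick these deliberately for each provider.
    image = driver.list_images()[0]
    size = driver.list_sizes()[0]
    return driver.create_node(name=name, image=image, size=size)

# Primary deployment on EC2; failing over to another provider is a
# one-argument change.
node = create_web_server(Provider.EC2, "ACCESS_KEY", "SECRET_KEY", "web1")
# node = create_web_server(Provider.RACKSPACE, "USER", "API_KEY", "web1")
```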
Right lesson #3: Cloud deployments must be automated and should take cloud server reliability characteristics into account.
Even though I have seen it many times, I am still taken aback when I talk to a startup that has used Amazon just like a traditional data center using traditional methods. Their sysadmins go into the Amazon console, fire up some servers, manually configure the deployment architecture (often using Amazon features that save them time but lock them in), and hope for the best. Oh, they might burn an AMI and save it on S3, in case the server dies (which only works as long as nothing changes). If they need to scale up, they manually add another server and manually add it to the load balancer queue.
This type of usage treats IaaS as mostly a financing alternative. It’s a way to avoid buying capital equipment and to conserve financial resources when you do not know how much computing infrastructure you will need. Even the fact that you can change your infrastructure resources rapidly really just boils down to not having to buy and provision those resources in advance. This benefit is a big one for capital-efficient lean startups, but on the whole the approach is risky and suboptimal. The Amazon outage illustrates this: companies that used this approach were stuck during the outage, but at another level they are still stuck with Amazon because their server configurations are implicit.
Instead, the best practice for deploying applications – in the cloud, but really anywhere – is to automate the deployment process. There should be no manual steps in the deployment process. Although this can be done using scripts, it is even better to use a tool like Chef, Puppet, or cfEngine to take advantage of abstractions in the process. Or use RightScale, Kaavo, CA AppLogic, or similar tools to not only automate but also organize your deployment process. If your application uses a standard N-tier architecture, you can potentially use Standing Cloud without having to build any automation scripts at all.
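Even a plain script is a huge step up from console clicks. Here is a bare-bones sketch – the host address, repository URL, and package choices are placeholders – where every step is explicit and repeatable, so the same script can rebuild the application on a fresh server in any data center:

```python
# A bare-bones deployment script: every step is explicit, so the same
# script rebuilds the app on any fresh server. Host, repo, and package
# names are placeholders.
import subprocess

HOST = "ubuntu@203.0.113.10"  # placeholder address of a fresh server

STEPS = [
    "sudo apt-get update -q",
    "sudo apt-get install -y -q nginx git",
    "git clone https://example.com/myapp.git /home/ubuntu/myapp",
    "sudo cp /home/ubuntu/myapp/nginx.conf /etc/nginx/sites-enabled/myapp",
    "sudo service nginx restart",
]

for step in STEPS:
    # Fail fast: a half-finished deployment that continues silently is
    # worse than one that stops and names the broken step.
    subprocess.check_call(["ssh", HOST, step])
```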
Automating an application deployment in the cloud is a best practice with numerous benefits, including:
- Free redundancy. Instead of having an idle redundant data center (whether cloud or otherwise), you can simply re-deploy your application in another data center or cloud service using on-demand resources. Some of the resources (e.g., a replicated data store) might need to be available at all times, but most of the deployment can be fired up only when it is needed.
- Rapid scalability. In theory you can get this using Amazon’s auto-scaling features, Elastic Beanstalk, and the like. But these require access to AMIs that are stored on S3 or EBS. We’ve learned our lesson about that, right? Instead, build a general purpose scalability approach that takes advantage of the on-demand resources but keeps it under your control.
- Server failover can be treated just like scalability. Virtual servers fail more frequently than physical servers, and when they do, there is less ability to recover them. Consequently, a good automation procedure treats scalability and failover the same way – just bring up a new server (sketched after this list).
- Maintainability. A server configuration that is created manually and saved to a “golden image” has numerous problems. Only the person who built it knows what is there, and if that person leaves or goes on vacation, it can be very time consuming to reverse-engineer it. Even that person will eventually forget, and if there are several generations of manual configuration changes (boot the golden image, start making changes, create a new golden image), possibly by different people, you are now locked into that image. All these issues become apparent when you need to upgrade O/S versions or change to a new O/S distribution. In contrast, a fully automated deployment is not only a functional process with the benefits mentioned above, it also serves as documentation.
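To sketch the “failover is just scaling” point from the list above: a small supervisor loop can treat a dead server and a capacity shortfall identically. The launch_server and health_check functions here are hypothetical stand-ins for your provider- and application-specific code:

```python
# A sketch of treating failover and scaling as the same operation.
# launch_server() and health_check() are hypothetical stand-ins for
# provider- and application-specific code.
import time

DESIRED_COUNT = 3  # how many application servers we want alive

def supervise(servers, launch_server, health_check):
    while True:
        # Drop servers that fail their health check -- don't try to
        # resuscitate a virtual server, just replace it.
        servers = [s for s in servers if health_check(s)]
        # Bringing up a replacement is the same code path as scaling up.
        while len(servers) < DESIRED_COUNT:
            servers.append(launch_server())
        time.sleep(30)
```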
In summary, let the Amazon Web Services outage be a wake-up call… not to fear the IaaS cloud, but to know it, use it properly, and take advantage of its full possibilities.
Colorado HB10-1193 – also known as the “Amazon Tax” – really upset me, as I wrote about in Amazon Fires Its Affiliates in Colorado (Including Me) Because of Colorado HB 10-1193. While I discovered a partial solution via a service from a company called Viglink, which I wrote about in I’m An Amazon Affiliate Again – Sort Of, I’m still really annoyed with the myopia of our Colorado state representatives around this issue.
I’m also disgusted by the protectionist turn this took as our governor, many representatives, and several progressive organizations that I’ve supported called for a ban on Amazon because of the need to “level the playing field for local merchants.” When I talked to a number of folks about this, including the organizations I had previously supported, they demonstrated that they didn’t really understand the issue, were getting confused about states’ rights vs. federal rights (an issue I expect we’ll see come up a lot over the next few years as federal, state, and local governments search for additional revenue wherever they can find it), and didn’t get that a protectionist attitude was actually offensive to most business people (except, presumably, those being protected by the government).
Finally, legislation like this is completely tone deaf both to the growing impact of technology on our society and to a huge shift in the way information-based goods are bought and sold.
I’ve been told by several Colorado representatives who didn’t support this bill that there is no way this tax will be repealed, but I haven’t given up yet. I’ve enlisted my friend David Binetti to crank up another Twitter Campaign To Repeal Colorado’s Internet Tax. If you are a Colorado citizen with a Twitter account, it’ll take less than a minute to tweet out this message and have a physical letter delivered to your specific representatives.
Let’s make sure our representatives know that this is a piece of legislation that should be repealed.
On March 8, 2010 Amazon fired me as an Amazon Affiliate because of Colorado HB 10-1193. I proceeded to have a dozen different conversations (email and live) with several of my state representatives, including one of the co-sponsors of the bill, and each conversation made me more incensed at the abject stupidity and lack of understanding of the dynamics surrounding the situation. Ultimately, the argument came down to one of protectionism – i.e., “we have to protect our local merchants so Amazon shouldn’t get an unfair advantage by not having to charge state tax.” I could rant about this for a while, but I’ve got better things to spend my time on at this point.
I’ve been an early Viglink user for a while. Niel Robertson, the CEO of Trada, introduced me to Viglink’s founder Oliver Roup and I agreed to be an alpha tester. While we aren’t an investor, I’m intrigued with what Viglink is doing and I’m already a big fan.
Last week I realized that all of my new Amazon links (and other links to merchants with an affiliate program) were getting rewritten by Viglink. As a result, on a going-forward basis, I was getting Amazon affiliate revenue (via Viglink) for anyone who clicked through one of my links and bought something on Amazon.
That was cool, but I have a gillion old links using my Amazon affiliate code that no longer works. I asked Oliver if he could rewrite all of the old links also. Here’s his response:
“We have coded this and deployed it. As a result, all your dead Amazon affiliate links will be overwritten with our affiliate code and the revenue will be credited to you. What’s more, we just created an affiliate program against ourselves – any links you have to us on your blog will automatically be affiliated and you will receive 10% of the revenue from any customers we get as a result of those links.”
Awesome! If you are a fired Amazon Affiliate in Colorado, take a look at Viglink.
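For the curious, the core idea behind this kind of link rewriting is simple, even though Viglink actually does it with JavaScript in the browser rather than anything like the sketch below. Conceptually (with a placeholder affiliate tag):

```python
# Conceptual sketch only -- not Viglink's implementation (theirs runs
# as JavaScript at click time). Finds outbound Amazon links and appends
# an affiliate tag. The tag value and product URL are placeholders.
import re

AFFILIATE_TAG = "myblog-20"  # placeholder affiliate code

def rewrite_amazon_links(html):
    def add_tag(match):
        url = match.group(1)
        sep = "&" if "?" in url else "?"
        return 'href="%s%stag=%s"' % (url, sep, AFFILIATE_TAG)
    return re.sub(r'href="(https?://www\.amazon\.com[^"]*)"', add_tag, html)

print(rewrite_amazon_links('<a href="http://www.amazon.com/dp/0000000000">a book</a>'))
```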