Maybe everyone knows this, but it took me a while to realize that almost all of my performance issues with Google Apps were related to my DNS configuration. Once I switched all my machines and routers to Google Public DNS all of my performance problems went away.
It’s remarkable. Simply hard code DNS to 8.8.8.8 and 8.8.4.4. Problem solved.
My office, condo, and house in Keystone are all on Comcast. For the last month I’ve been struggling in each of them. There are days that Gmail feels almost unusable – five to ten second waits between messages. Web performance was “good enough” so I assumed it was a Gmail problem.
Nope – it was a Comcast DNS problem.
In hindsight, this is kind of obvious. But wow, what a difference it made.
On Saturday, I read the final draft of a magnificent book by David Rose. The book is titled Enchanted Objects: Design, Human Desire and the Internet of Things.
I’ve known David for many years. I was a huge fan and an early customer (though not an investor) of one of his companies, Ambient Devices, and we share a lot of friends and colleagues from MIT and the Media Lab. I was happy to be asked to blurb his book and then absolutely delighted with the book. It captured so many things that I’ve been thinking about and working on in a brilliantly done 300 page manuscript.
The basic premise of the book is that ultimately we want “enchanted objects”, not “glass slabs”, to interact with. Our current state-of-the-art devices (iMacs, iPhones, Android stuff, Windows tablets, increasingly large TV screens) are all glass slabs. The concept of the “Internet of Things” introduces the idea of any device being internet connected, which is powerful, but enchanted objects take it one step further.
Now, the irony of it is that I read David’s book on a glass slab (my Kindle Fire, which is currently my favorite reading device.) But page after page jumped out at me with assertions that I agreed with, examples that were right, or puzzle pieces that I hadn’t quite put together yet.
And then on Saturday night it all hit home for me with a real life example. I was lying on the couch reading another book on my Kindle Fire at about 10pm. I heard a chirp. I tried to suppress it at first, but after I heard the second one I knew it was the dreaded chirp of my smoke detector. I continued to try to deny reality, but a few chirps later Amy walked into the room (she had already gone to bed) and said “do you hear what I hear?” Brooks the Wonder Dog was already having a spaz attack.
I got up on a chair and pulled the smoke alarm off the ceiling. I took out the 9V battery and was subject to a very loud beep. We scavenged around for 9V batteries in our condo. We found about 200 AAs and 100 AAAs but no 9Vs. Chirp chirp. We bundled up (it was 2 degrees out) and walked down the street to the Circle K to buy a 9V battery. They only had AAs. We walked back home, got in the car (with Brooks, who was now a complete mess from all the beeping) and drove to King Soopers. This time we got about 20 9Vs. We got home and I got back on the chair and wrestled with the battery holder. After the new battery was in the beeping continued. Out of frustration, I hit the “Test” button, heard a very loud extended beep, and then silence. At least from that smoke alarm.
Chirp. It turns out that I changed the battery in the wrong one. The one that was chirping was in another room. This one was too high for a chair, which resulted in us having to go into our storage cage in the condo basement and get a ladder. There was a padlock on our cage – fortunately the four digit code was one of the ones that everyone in the world who knows us knows. Eventually, with the ladder, the new batteries, and some effort I got the chirping to stop.
We have those fancy white smoke alarms that are wired directly into the power of the house. I have no idea why they even need a battery. The first thing they do when they want your attention is to make an unbelievably obnoxious noise. Then, they are about as hard as humanly possible to silence. They generate one emotion – anger.
Not an enchanted object.
In comparison, Nest is trying to make an enchanted object out of their new smoke detector product. After reading the Amazon reviews, I realize this is an all or nothing proposition, and after spending $30 on 9V batteries and then changing all of the ones in the existing smoke detectors I don’t feel like spending $550 to replace the four smoke detectors in my condo. Plus, the one I want – the wired one – isn’t in stock. So I’ll wait one product cycle, or at least until the beeping crushes my soul again.
We’ve got a bunch of investments in our human computer interaction theme that aspire to be enchanted objects, including Fitbit, Modular Robotics, LittleBits, Orbotix, and Sifteo. I’m going to start using David’s great phrase “enchanted objects” to describe what I’m looking for in this area. And while I’ll continue to invest in many things that improve our glass slab world, I believe that the future is enchanted objects.
I’ve been railing about the evils of software patents – how they stifle innovation and impose a massive tax on it – since I wrote my first post about it in 2006 titled Abolish Software Patents. Seven years ago this was a borderline heretical point of view, since it was widely asserted that VCs believed you should patent everything to protect your intellectual property. Of course, this was nonsense, and the historical myths surrounding intellectual property, especially the importance and validity of software and business method patents, have now been exploded.
My post from 2006 lays out my point of view clearly. If you don’t want to read it, here are a few paragraphs.
“I personally think software patents are an abomination. My simple suggestion on the panel was to simply abolish them entirely. There was a lot of discussion around patent reform and whether we should consider having different patent rules for different industries. We all agreed this was impossible – it was already hard enough to manage a single standard in the US – even if we could get all the various lobbyists to shut up for a while and let the government figure out a set of rules. However, everyone agreed that the fundamental notion of a patent – that the invention needed to be novel and non-obvious – was at the root of the problem in software.
I’ve skimmed hundreds of software patents in the last decade (and have read a number of them in detail.) I’ve been involved in four patent lawsuits and a number of “threats” by other parties. I’ve had many patents granted to companies I’ve been an investor in. I’ve been involved in patent discussions in every M&A transaction I’ve ever been involved in. I’ve spent more time than I care to on conference calls with lawyers talking about patent issues. I’ve always wanted to take a shower after I finished thinking about, discussing, or deciding how to deal with something with regard to a software patent.”
Companies I’ve been involved in have now been on the receiving end of around 100 patent threats or suits, almost all from patent trolls who like to masquerade behind names like non-practicing entities (NPEs) and patent assertion entities (PAEs). We have fought many of them and had a number of patents ultimately invalidated. The cost of time and energy is ridiculous, but being extorted by someone asserting a software patent for something irrelevant to one’s business, something completely obvious that shouldn’t have been patented in the first place, or something that isn’t unique or novel in any way, is really offensive to me.
In 2009, I got to sit in and listen to the Supreme Court hear the oral arguments on Bilski. I was hopeful that this could be a defining case around business method and software patents, but the Supreme Court punted and just made things worse.
Now that the President and Congress have finally started to try to figure out how to address the issue of patent trolls, the Supreme Court has another shot at dealing with this once and for all.
I’m no longer optimistic about any of this and just expect I’ll have to live – and do business – under an ever increasing mess of unclear legislation and litigation. That sucks, but maybe I’ll be pleasantly surprised this time around.
I woke up this morning to several articles about Bitcoins. From Dave Taylor’s explanation in the Boulder Daily Camera to a paywall article that you can’t buy with bitcoins (ironic) in the NY Times (A Bitcoin Puzzle) to Fred Wilson’s blog (A Note about Bitcoin), I was surrounded by words about them.
We have an awesome CEO list that covers plenty of topics. Early in the week I posted a link to Fred Wilson’s post Buying Your Holiday Gifts With Bitcoin. That generated a fun discussion including lots of “what are bitcoins and why do I care”; “here’s what they are” kind of things. And then Kwin Kramer of Oblong weighed in with a phenomenal essay. It follows.
I’m with Seth; I think bitcoin is interesting on several levels, including as a real-life experiment with a semi-decentralized currency.
Bitcoin is a software engineer’s implementation of money (as distinct from, for example, a politician’s, banker’s, or economist’s).
There’s a lot of overlap between bitcoin fans and folks with strongly libertarian views. Many of bitcoin’s most vocal proponents see bitcoin as a currency, a replacement for currencies that are created and managed by governments. These folks tend to view bitcoin as a sort of electronic version of gold, a new currency that’s not a “fiat” currency.
I’m deeply skeptical of this set of ideas. First, and very generally, I don’t tend to think that dis-intermediating government institutions is a useful goal in and of itself. I would describe a well-run central bank like the United States Federal Reserve the way Churchill described democracy: the worst solution to the problem of managing a monetary system, except for all those other forms that have been tried from time to time.
In addition, core design decisions in the bitcoin spec make bitcoin a pretty terrible store of value and unit of account, which are two things we expect from a currency.
As has been noted in this thread, the total number of bitcoins is capped at 21,000,000. Currently there are about half that number of bitcoins in circulation. The rate at which new bitcoins are mined is designed to decrease over time. This means the bitcoin market behaves more like a commodity market than like a currency market, prone to volatility and some specific kinds of market pathologies. In my view, the fact that the money supply can’t be “managed” by a central bank that is able to turn various “knobs” (interest rates of several kinds, the amount of money in circulation) is a bug, not a feature!
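That cap isn’t an arbitrary number written into the spec as “21 million” – it falls out of the issuance schedule: the block reward starts at 50 BTC and halves every 210,000 blocks. A quick back-of-the-envelope sketch in Python (the reward and halving constants are from the Bitcoin protocol; the loop itself is just illustrative):

```python
# Sketch of Bitcoin's supply schedule. Rewards are tracked in satoshis
# (1 BTC = 100,000,000 satoshis) so integer division mirrors the
# protocol's round-down behavior at each halving.
SATOSHIS_PER_BTC = 100_000_000
BLOCKS_PER_HALVING = 210_000

reward = 50 * SATOSHIS_PER_BTC  # initial block reward
total = 0
while reward > 0:
    total += BLOCKS_PER_HALVING * reward
    reward //= 2  # reward halves every 210,000 blocks

total_btc = total / SATOSHIS_PER_BTC
print(f"{total_btc:,.2f}")  # converges just under 21,000,000
```

The geometric series 210,000 × 50 × (1 + 1/2 + 1/4 + …) converges to exactly 21,000,000; the integer rounding leaves the actual total slightly below it.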
The cap also means that a bitcoin-denominated monetary system will be a system built around deflation — the opposite of how the monetary system we use today is constructed. Over time, prices will fall, rather than rise. Economists generally view deflation as a problem. If prices get cheaper over time, all the time, people have strong incentives to delay purchases and to save money. If everyone saves, rather than spends, economic growth is impossible.
Economists have lots of tools for talking about this stuff. And, while economists often disagree violently with each other, the collective knowledge in the field is important and valuable. To draw an analogy, non-programmers can and often do have very insightful things to say about digital technology. But it’s definitely worth talking to experienced programmers when trying to understand a particular platform, protocol, or application.
I’m not an economist, but I find convincing the economists’ consensus that deflation is “bad.” At the very least, I’d argue that we don’t know how to build a stable monetary system on top of a currency that is fundamentally deflationary.
On the other hand, even if bitcoin makes for a poor currency, it may well be a very useful payment mechanism. The original bitcoin paper focuses heavily on this aspect of the system design.
To explain this a little more, we can think about how we use US dollars in normal, every-day life. I usually keep some printed dollar banknotes in my pockets. These banknotes — these “dollars” — are a store of value. (They’re worth something in an economic sense.) The banknotes are also a unit of account. (Lots and lots of things I encounter every day have prices denominated in dollars.) Finally, each banknote is a payment mechanism — a transaction mechanism. I can hand over a banknote to most people I might want to buy something from. They’ll accept it. We’ll both know what that means.
But physically handing over a “dollar” isn’t the only payment mechanism I regularly use. I have credit cards, and checks (sort of — that’s kind of changing), and now some other electronic payment mechanisms like PayPal and Amazon points.
It’s possible to separate the functions of value store, unit of account, and transaction mechanism. They fit together neatly and are systemically related, but they’re three different things.
The bitcoin peer-to-peer transaction protocol is pretty cool. It’s basically strong cryptography, good timestamps, and a consensus protocol for blessing transaction reporting.
Which boils down to a way to “hand someone cash” electronically. With no trusted third party having to broker the handover. And, theoretically, anonymity for both the payer and the payee.
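To make that “timestamps plus cryptography” idea concrete, here’s a deliberately simplified toy in Python – a hash chain of timestamped transaction blocks. This is not Bitcoin’s actual block or transaction format (no signatures, no proof-of-work, no network), just a sketch of why chaining hashes makes the transaction log tamper-evident:

```python
import hashlib
import json
import time

def block_hash(block):
    # Hash the block's canonical (sorted-key) JSON encoding.
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def make_block(transactions, prev_hash):
    # Each block commits to its predecessor via prev_hash.
    return {"timestamp": time.time(), "transactions": transactions, "prev": prev_hash}

genesis = make_block(["coinbase -> alice: 50"], "0" * 64)
second = make_block(["alice -> bob: 10"], block_hash(genesis))

# The chain links up...
assert second["prev"] == block_hash(genesis)

# ...and rewriting history in an earlier block breaks every later link.
genesis["transactions"][0] = "coinbase -> mallory: 50"
assert second["prev"] != block_hash(genesis)
```

Real bitcoin layers digital signatures (to authorize spends) and proof-of-work mining (so the network can agree on one chain) on top of this basic structure – that’s how it avoids needing a trusted third party.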
As a software person, I think of this as a platform. A new electronic payment platform that may have significant advantages over most of the existing ones. To get broad adoption, platforms need killer apps. So far, there aren’t killer apps for bitcoin. But there are some possible raw materials for killer apps. Cheaper international payments. Completely anonymous electronic payments. But the great thing about platforms is that it’s often quite hard to predict early on what the killer apps might be. Particularly for the really disruptive ones.
A couple of final caveats. It’s not clear (at least to me) whether it’s possible to separate the currency aspects of bitcoin from the transaction platform aspects. If bitcoin does turn out to be a flawed currency, that could be a problem even if the transaction platform stuff is really useful.
Also, the bitcoin platform is pretty new and there may be some fatal flaws in the design of its anonymity features and its transaction log. For example, the transaction log is a global, permanent thing. To verify any bitcoin transaction you have to have a full record of every bitcoin transaction ever. That’s okay now; the system is small. Our computers and networks will keep getting faster as bitcoin use increases. But a broadly used currency will have to be able to support a lot of transactions. Maybe the design can be patched, either in a technical sense or in a social/institutional sense. But we don’t really know.
At dinner last week, my long time friend Dave Jilk (we just celebrated our 30th friendship anniversary) tossed a hypothesis at me that as people age, they resist adopting new technologies. This was intended as a personal observation, not an ageist statement, and we devolved into a conversation about brain plasticity. Eventually we popped back up the stack to dealing with changing tech and at some point I challenged Dave to write an essay on this.
The essay follows. I think he totally nails it. What do you think?
People working in information technology tend to take a producer perspective. Though the notion of a “lean startup” that uses both Agile and Customer Development approaches is ostensibly strongly customer focused, the purpose of these methodologies is for the company to find and maximize its market, not specifically to optimize the user experience. The following is an observation more purely from the perspective of the consumer of information technology.
On average, as people age they resist adopting new technologies, only doing so slowly and where the benefits compellingly outweigh the time cost and inevitable frustrations. This resistance is not necessarily irrational – after a number of cycles where the new technology proves to be a fad, or premature, or less than useful, we learn that it may behoove us to wait and see. We want to accomplish things, not spend time learning tools that may or may not help us accomplish something.
Consequently, for many decades the pattern has been that technology adoption is skewed toward younger people, not only because they have not yet built up this resistance, but also because they are immersed in the particular new technologies as they grow up.
But something new is happening today, and it is evidence of accelerating rather than merely progressive technology change. Discrete technology advances are giving way to continuous technology advances. Instead of making a one-time investment in learning a new technology, and then keeping up with the occasional updates, it is increasingly necessary to be investing in learning on a constant, ongoing basis.
I will provide three examples. First, application features and user interfaces are increasingly in a state of continuous flux. From a user perspective, on any given day you may connect to Facebook or Gmail or even a business application like Salesforce.com, and find that there are new features, new layout or organization of screen elements, new keystroke patterns, even new semantics associated with privacy, security, or data entered and displayed. This is most prominent in online systems, but increasingly software updates are automatic and frequent on mobile devices and even full computer systems. On any given day, one may need to spend a significant amount of time re-learning how to use the software before being productive or experiencing the desired entertainment.
My mother is 86 years old. For perspective, when she was 20, television was a new consumer technology, and room-sized digital computers had just been invented. She uses the web, Yahoo mail, and Facebook, impressive feats in themselves for someone her age. But every time Yahoo changes their UI, she gets frustrated, because from her perspective it simply no longer works. The changes neither make things better for her nor add capabilities she cares about. She wants to send email, not learn a new UI; but worse, she doesn’t really know that learning a new UI is what she is expected to do.
Middle-aged people like me are better prepared to cope with these changes, because we’ve gotten used to them, but we still find them frustrating. Perhaps it is in part because we are busy and we have things we need to get done, but it is interesting to see how much people complain about changes to the Facebook interface or iOS updates or what have you. We can figure it out, but it seems more like a waste of time.
Young people gobble up these changes. They seem to derive value from the learning itself, and keeping up with the changes even has a peer pressure or social esteem component. Yes, this is in part because they also have fewer responsibilities, but that cannot be the entire explanation. They have grown up in a world where technology changes rapidly. They didn’t just “grow up with social media,” they grew up with “social media that constantly changes.” In fact, not only do they keep up with the changes on a particular social media service, they are always exploring the latest new services. Several times a year, I hear about a new service that is all the rage with teens and tweens.
A second example, more esoteric but perhaps a leading indicator, is the rise of continuous integration in software development, not just with one’s own development team but with third-party software and tools. No longer is it sufficient to learn a programming language, its idiosyncrasies, its libraries, and its associated development tools. Instead, all of these tools change frequently, and in some cases continuously. Every time you build your application, you are likely to have some new bugs or incompatibilities related to a change in the language or the libraries (especially open source libraries). Thus, learning about the changes and fixing your code to accommodate them are simply part of the job.
This situation has become sufficiently common that some language projects (Ruby on Rails and Python come to mind) have abandoned upward compatibility. That’s right, you can no longer assume that a new version of your programming language will run your existing applications. This is because you are expected to keep up with all the changes all the time. Continuous integration, continuous learning. Older coders like me view this as a tax on software development time, but younger coders accept it as a given and seem to not only take it in stride but revel in their evolving expertise.
My final example, a little different from the others, is the pace of client device change. From 1981, when the IBM PC was introduced, until about 2005, one could expect a personal computer system to have a lifespan of 3-5 years. You could get a new one sooner if you wanted, but it would have reasonable performance for three years and tolerable for five. By then, the faster speed of the new machine would be a treat, and make learning the latest version of DOS, and later Windows, almost tolerable. Today, five years is closer to the lifespan of a device category. Your recent smartphone purchase is more likely to be replaced in 2017 by a smart watch, or smart eyewear, than by another smartphone. You won’t just have to migrate your apps and data, and learn the new organization of the screen – you will have to learn a new way to physically interact with your device. Hand gestures, eye gestures, speaking – all of these are likely to be part of the interface. Another five years and it is highly likely that some element of the interface will take input from your brain signals, whether indirectly (skin or electromagnetic sensors) or directly (implants). When you say you are having trouble getting your mind around the new device, you will mean it literally.
The foregoing is primarily just an observation, but it will clearly have large effects on markets and on sociology. It suggests very large opportunities but also a great deal of disruption. And this transition from generational learning to continuous learning is not the last word. Technology will not just keep advancing, it will keep accelerating. As the youth of today, accustomed to continuous learning, reach their 40s and beyond, they will become laggards and slow to adopt in comparison with their children. Even continuous learning will no longer be sufficient. What will that look like?