I talk about human computer interaction (HCI) a lot on this blog. We’ve invested in a number of companies in our HCI theme, including Oblong, Organic Motion, and EmSense, and have a few more in the works that hopefully will be announced shortly. When I think about the areas I’ve been paying the most attention to, and am most intrigued by, as an investor, HCI rises to the top of the list.
This morning I read an article on SeattlePI titled UW researchers look to reinvent the graphical user interface. While the headline is a bit sensational, the project (Prefab) is very cool. At first glance I thought it was simply rewriting HTML pages (clever, but not that big a deal) but then I realized it was doing something more profound. The five minute video is worth a look if you are into these types of things.
The bubble cursor and sticky icon examples are great ones. Starting at 1:45 you see the bubble cursor and sticky icons in action in Firefox on Vista. At 2:05 you see it on OS X. At 2:45 you see it in action on a YouTube player. The magic seems to be around pixel level mapping, which, as anyone working in adtech knows, is where the real action is. It’s pretty cool to see it being used to map UI functionality.
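The core idea behind the bubble cursor can be sketched in a few lines. This is my own illustrative guess at the technique, not Prefab’s actual code, and the targets here are hypothetical circles: instead of requiring the pointer to sit inside a target, you always select whichever target’s edge is closest to the pointer, which makes small or distant controls much easier to hit.

```python
import math

# Hypothetical circular targets: (name, center_x, center_y, radius)
TARGETS = [
    ("back", 20.0, 10.0, 8.0),
    ("reload", 60.0, 10.0, 8.0),
]

def edge_distance(cursor, target):
    """Distance from the cursor to the target's edge (0 if inside it)."""
    _, tx, ty, r = target
    cx, cy = cursor
    return max(0.0, math.hypot(cx - tx, cy - ty) - r)

def bubble_pick(cursor, targets):
    """Bubble-cursor selection: always pick the nearest target,
    so the pointer never has to be exactly on top of one."""
    return min(targets, key=lambda t: edge_distance(cursor, t))
```

With a pointer at (35, 10), `bubble_pick` returns the "back" target even though the pointer isn’t touching it, because its edge is 7 pixels away versus 17 for "reload".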
A week or so ago, Fred Wilson dictated a blog post on his Nexus One phone. He then discovered Swype, which now has an unofficial Android app. As usual, the comment threads on AVC were very active, with lots of thoughts about the future (and past) of voice and keyboard input.
When I talk about Human Computer Interaction, I regularly say that “20 years from now, we will look back on the mouse and keyboard as input devices the same way we currently look back on punch cards.”
While I don’t have a problem with mice and keyboards, I think we are locked into a totally sucky paradigm. The whole idea of having a software QWERTY keyboard on an iPhone amuses me to no end. Yeah – I’ve taught myself to type pretty quickly on it but when I think of the information I’m trying to get into the phone, typing seems so totally outmoded.
Last year at CES “gestural input” was all the rage in the major CE booths (Sony, Samsung, LG, Panasonic, …). In CES speak, this was primarily things like “changing the channel on a TV using a gesture”. This year the silly basic gesture crap was gone, replaced with IP everywhere (very important in my mind) and 3D (very cute, but not important). And elsewhere there was plenty of 2D multitouch, most notably front and center in the Microsoft and Intel booths. I didn’t see much speech and I saw very little 3D UI stuff – one exception was the Sony booth, where our portfolio company Organic Motion had a last minute installation, requested by Sony, showing off markerless 3D motion capture.
So – while speech and 2D multitouch are going to be an important part of all of this, it’s a tiny part. If you want to envision what things could be like a decade from now, read Daniel Suarez’s incredible books Daemon and Freedom (TM). Or, watch the following video that I just recorded from my glasses and uploaded to my computer (warning – cute dog alert).
I’ve introduced two new devices into my personal human instrumentation experiment. In addition to my Zeo, I am now carrying around a FitBit and using a Withings scale. I’ve discovered the mild embarrassment associated with having a scale mis-tweet your weight 10 pounds too high (e.g. “Brad – you gained a lot of weight recently – everything ok?”) But I suppose that is part of the experiment.
The comparison on the Zeo and FitBit sleep data is fascinating. Take a look. Zeo from last night first.
Now the FitBit from last night.
The Zeo breaks things down into four categories: Wake, REM, Deep Sleep, and Light Sleep. The FitBit only has two: Active and Asleep. My FitBit time setting is wrong (it has me going to sleep at 9:17 but I went to bed at 11:10 – I’ll need to figure out how to fix that). But both have me in bed for a little over 9 hours, although the FitBit thinks I was only asleep for 8:17 of it. The Zeo has me asleep for 97% of the time; the FitBit has me at a Sleep Efficiency of 95%.
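As a back-of-the-envelope check on those percentages (my own arithmetic, assuming sleep efficiency is simply time asleep divided by time in bed):

```python
def sleep_efficiency(asleep_min, in_bed_min):
    """Sleep efficiency as a percentage: time asleep / time tracked in bed."""
    return 100.0 * asleep_min / in_bed_min

# FitBit: asleep 8h17m out of roughly 9h in bed.
fitbit = sleep_efficiency(8 * 60 + 17, 9 * 60)  # ~92%
```

Taken at face value, 8:17 asleep out of 9:00 in bed works out to about 92%, not 95% – so the FitBit presumably computes its Sleep Efficiency against a somewhat shorter denominator (perhaps time from first falling asleep). One more difference to chase down with the comparative data.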
I need a few more nights of comparative data to completely understand the differences, but I thought I’d toss up a baseline to get started. Oh – and I slept in this morning – I felt kind of crummy and decided to just sleep to try to shake off whatever was creeping up on me.
One of my recent obsessions has become “human instrumentation.” I’ve always been really interested in the data that I generate (sleep, fitness, diet, medical) and in the past six months have started buying every personal measurement product or device I can find that is integrated with the web.
One of my favorites is the Zeo. We looked at investing a while ago and I got to play with one of the alpha prototypes. It was cool but we just didn’t get there on the investment, even though I loved the product and had a great impression of the founding team and what they were up to. We keep a list of “companies we hope we regret not investing in” which means (in English) that we are huge fans and will do whatever we can to help, even though we aren’t investors. Zeo is on that list for me.
But – back to my sleeping skills. Last night I set a new personal ZQ of 137. Here’s my sleep graph from last night.
Light green is REM – I had four REM cycles last night (I usually have one or two) and during the week my score is usually between 50 and 70. The red wake up spikes are bathroom trips (three last night – eek – getting older) and the last one on the far right is when Amy came into the room at 11am to make sure I was still alive.
I just got a Fitbit and I’m starting to use it as well, so at some point I’ll do a comparison of the Zeo vs. Fitbit sleep data. In the meantime, I think the Zeo is a great present – definitely consider it for any friends who either (a) love data or (b) have trouble sleeping. I get some kind of affiliate thingy if you click on that link above, so if you do buy a Zeo, help me fund my endless toy habit.
The Boulder Camera highlighted a few CU Boulder students and their newest project in the article CU-Boulder students create Pac-Man Roomba game. For anyone that played Pac-Man as a kid (as I did) or anyone that loves robots, it’s sheer brilliance.
Information about the entire project is up on the web at Roomba Pac-Man. Now they need to do a Roomba Ms. Pac-Man – that would be a nice marriage of technologies.