A week or so ago, Fred Wilson wrote Dictated a Blog Post, in which he described dictating a post on his Nexus One phone. He then discovered Swype, which now has an unofficial Android app. As usual, the comment threads on AVC were very active, with lots of thoughts about the future (and past) of voice and keyboard input.
When I talk about Human Computer Interaction, I regularly say that “20 years from now, we will look back on the mouse and keyboard as input devices the same way we currently look back on punch cards.”
While I don’t have a problem with mice and keyboards, I think we are locked into a totally sucky paradigm. The whole idea of having a software QWERTY keyboard on an iPhone amuses me to no end. Yeah – I’ve taught myself to type pretty quickly on it but when I think of the information I’m trying to get into the phone, typing seems so totally outmoded.
Last year at CES “gestural input” was all the rage in the major CE booths (Sony, Samsung, LG, Panasonic, …). In CES speak, this primarily meant things like “changing the channel on a TV using a gesture”. This year the silly basic gesture crap was gone, replaced with IP everywhere (very important in my mind) and 3D (very cute, but not important). And elsewhere there was plenty of 2D multitouch, most notably front and center in the Microsoft and Intel booths. I didn’t see much speech and I saw very little 3D UI stuff – one exception was the Sony booth, where our portfolio company Organic Motion had a last minute installation, at Sony’s request, that showed off markerless 3D motion capture.
So – while speech and 2D multitouch are going to be an important part of all of this, it’s a tiny part. If you want to envision what things could be like a decade from now, read Daniel Suarez’s incredible books Daemon and Freedom (TM). Or watch the following video that I just recorded from my glasses and uploaded to my computer (warning – cute dog alert).
I’ve introduced two new devices into my personal human instrumentation experiment. In addition to my Zeo, I am now carrying around a FitBit and using a Withings scale. I’ve discovered the mild embarrassment associated with having a scale mis-tweet your weight as 10 pounds too high (e.g. “Brad – you gained a lot of weight recently – everything ok?”). But I suppose that is part of the experiment.
The comparison on the Zeo and FitBit sleep data is fascinating. Take a look. Zeo from last night first.
Now the FitBit from last night.
The Zeo breaks things down into four categories: Wake, REM, Deep Sleep, and Light Sleep. The FitBit only has two: Active and Asleep. My FitBit time setting is wrong (it has me going to sleep at 9:17 but I went to bed at 11:10 – I’ll need to figure out how to fix that). But both have me in bed for a little over 9 hours, although the FitBit thinks I was only asleep for 8:17 of it. The Zeo has me asleep for 97% of the time; the FitBit has me at a Sleep Efficiency of 95%.
I need a few more nights of comparative data to completely understand the differences, but I thought I’d toss up a baseline to get started. Oh – and I slept in this morning – I felt kind of crummy and decided to just sleep to try to shake off whatever was creeping up on me.
One of my recent obsessions has become “human instrumentation.” I’ve always been really interested in the data that I generate (sleep, fitness, diet, medical) and in the past six months have started buying every personal measurement product or device I can find that is integrated with the web.
One of my favorites is the Zeo. We looked at investing a while ago and I got to play with one of the alpha prototypes. It was cool but we just didn’t get there on the investment, even though I loved the product and had a great impression of the founding team and what they were up to. We keep a list of “companies we hope we regret not investing in” which means (in English) that we are huge fans and will do whatever we can to help, even though we aren’t investors. Zeo is on that list for me.
But – back to my sleeping skills. Last night I set a new personal ZQ of 137. Here’s my sleep graph from last night.
Light green is REM – I had four REM cycles last night (I usually have one or two) and during the week my score is usually between 50 and 70. The red wake up spikes are bathroom trips (three last night – eek – getting older) and the last one on the far right is when Amy came into the room at 11am to make sure I was still alive.
I just got a Fitbit and I’m starting to use it also, so at some point I’ll do a comparison of the Zeo vs. Fitbit sleep data. In the meantime, I think the Zeo is a great present – definitely consider it for any friends who either (a) love data or (b) have trouble sleeping. I get some kind of affiliate thingy if you click on that link above, so if you do buy a Zeo, help me fund my endless toy habit.
The Boulder Camera highlighted a few CU Boulder students and their newest project in the article CU-Boulder students create Pac-Man Roomba game. For anyone that played Pac-Man as a kid (as I did) or anyone that loves robots, it’s sheer brilliance.
Information about the entire project is up on the web at Roomba Pac-Man. Now they need to do a Ms. Roomba Pac-Man – that would be a nice marriage of technologies.
I ingest a ton of information on a daily, weekly, monthly, quarterly, and annual basis. My process for doing it today is entirely manual. I’m starting to look around for a way to automate this using the metaphor of a “personal dashboard”, not dissimilar to the idea from the 1980’s of an EIS (“executive information system”). Let me explain.
- Daily: I have an information processing routine each morning that is web-based. I open a folder in Firefox that contains 14 tabs. I then go through all of them – most, but not all are news related. A few are interactive and require data from me. I then scan through my tweets from the previous night. I then review my “Daily” email folder – most of the items are “daily reports” from a variety of companies I’m an investor in. Next up, my RSS feeds. Finally, I process whatever email came in from the previous night.
- Weekly: I have a weekly tab in Firefox. There are only 5 tabs here and they shift around a little. But – they reference a variety of text and numerical data that I check on a weekly basis.
- Monthly: I get financial statements (balance sheet, income statement, cash flow statement) along with board packages from all of the companies I’m an investor in along with all of my personal financial information.
- Quarterly: Similar to monthly, but for the quarter.
- Annual: Similar to monthly, but for the year. I also generate a variety of other “annual data” much of it to do with either money or fitness.
My Daily routine takes around an hour. Weekly, which includes reviewing my upcoming calendar, takes about 30 minutes. I don’t know how long Monthly, Quarterly, or Annual take as they are usually spread out over multiple days.
In theory, I’m using Firefox and Outlook as my personal dashboards to get to this data and then viewing it in a variety of apps including Excel, Adobe, and Word. However, this is really unsatisfying as the data is (a) in different formats, (b) impossible to search effectively, (c) not persistent, and (d) difficult to handle or manipulate.
My guess is I need both an (a) ingestion and (b) presentation layer. The ingestion layer seems straightforward – the software I’d use for my personal dashboard should be able to generate an XML template for each “type of data”. I should be able to configure this (or – optimally – the ingestion layer should be able to figure this out automatically). The ingestion layer should be able to handle different types of inputs – html files, xml files, emails, or some other quasi-API. So – “Glue”.
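For the curious, here’s a toy sketch in Python of what I mean by “Glue” – none of this is real software, and the record shape, field names, and the “daily report” email format are all made up for illustration. The idea is just that every source (email, html, xml, whatever) gets normalized into one common record type before anything else happens:

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

# One common record type that every ingested source normalizes into.
# The fields here are illustrative, not a real schema.
@dataclass
class Metric:
    source: str   # where the data came from, e.g. "email:Acme Daily"
    name: str     # e.g. "revenue"
    value: float
    as_of: str    # ISO date string

def ingest_email_report(subject: str, body: str) -> list[Metric]:
    """Toy parser for a 'daily report' email with lines like 'revenue: 1234.5'."""
    metrics = []
    for line in body.splitlines():
        if ":" in line:
            name, _, raw = line.partition(":")
            try:
                value = float(raw.strip())
            except ValueError:
                continue  # skip lines whose value isn't numeric
            metrics.append(
                Metric("email:" + subject, name.strip(), value,
                       date.today().isoformat())
            )
    return metrics

report = "revenue: 1234.5\nheadcount: 42\nnotes: all good"
metrics = ingest_email_report("Acme Daily", report)
for m in metrics:
    print(json.dumps(asdict(m)))
```

The point of the common record type is that once a new source exists, only a new parser has to be written – everything downstream (search, persistence, presentation) stays the same.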
The presentation layer is a little harder for me to get my mind around. A year ago I would have said “html is fine – just give it to me in Firefox via a web page.” In some cases this is fine, but I want finer grained control over how this stuff is displayed. Some of the web pages I look at are formatted worse and are less flexible than the DEC-based EISes I played with in the 1980’s. In many cases we haven’t made any progress on the presentation layer, notwithstanding all the efforts of Edward Tufte. So – “HCI”.
I’m hopeful that in a decade I’ll have a much more effective way of dealing with my periodic information routine. Until then, I’m searching for companies working on both the ingestion layer and presentation layer (preferably both). Feel free to give me a shout if this is something you are working on.