I ingest a ton of information on a daily, weekly, monthly, quarterly, and annual basis. My process for doing it today is entirely manual. I’m starting to look around for a way to automate this using the metaphor of a “personal dashboard”, not dissimilar to the idea from the 1980’s of an EIS (“executive information system”). Let me explain.
- Daily: I have an information processing routine each morning that is web-based. I open a folder in Firefox that contains 14 tabs. I then go through all of them – most, but not all, are news-related. A few are interactive and require data from me. I then scan through my tweets from the previous night. I then review my “Daily” email folder – most of the items are “daily reports” from a variety of companies I’m an investor in. Next up, my RSS feeds. Finally, I process whatever email came in from the previous night.
- Weekly: I have a weekly tab in Firefox. There are only 5 tabs here and they shift around a little. But – they reference a variety of text and numerical data that I check on a weekly basis.
- Monthly: I get financial statements (balance sheet, income statement, cash flow statement) along with board packages from all of the companies I’m an investor in along with all of my personal financial information.
- Quarterly: Similar to monthly, but for the quarter.
- Annual: Similar to monthly, but for the year. I also generate a variety of other “annual data” much of it to do with either money or fitness.
My Daily routine takes around an hour. Weekly, which includes reviewing my upcoming calendar, takes about 30 minutes. I don’t know how long Monthly, Quarterly, or Annual take as they are usually spread out over multiple days.
In theory, I’m using Firefox and Outlook as my personal dashboards to get to this data and then viewing it in a variety of apps including Excel, Adobe, and Word. However, this is really unsatisfying as the data is (a) in different formats, (b) impossible to search effectively, (c) not persistent, and (d) difficult to handle or manipulate.
My guess is I need both an (a) ingestion and (b) presentation layer. The ingestion layer seems straightforward – the software I’d use for my personal dashboard should be able to generate an XML template for each “type of data”. I should be able to configure this (or – optimally – the ingestion layer should be able to figure this out automatically). The ingestion layer should be able to handle different types of inputs – html files, xml files, emails, or some other quasi-API. So – “Glue”.
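The shape of that ingestion layer – many input formats funneled through adapters into one common schema – can be sketched in a few lines. This is a minimal sketch, not any existing product; the `Item` record, the `ingest_rss` adapter, and the sample feed are all hypothetical:

```python
import json
from dataclasses import dataclass, asdict
from xml.etree import ElementTree

# Hypothetical common record that every input type gets normalized into.
@dataclass
class Item:
    source: str
    title: str
    body: str

def ingest_rss(source: str, xml_text: str) -> list[Item]:
    """One adapter: pull title/description pairs out of a bare-bones RSS 2.0 feed."""
    root = ElementTree.fromstring(xml_text)
    return [
        Item(source, item.findtext("title", ""), item.findtext("description", ""))
        for item in root.iter("item")
    ]

sample_feed = """<rss version="2.0"><channel>
  <item><title>Daily report</title><description>Revenue up.</description></item>
  <item><title>Board package</title><description>Q3 numbers attached.</description></item>
</channel></rss>"""

items = ingest_rss("example-feed", sample_feed)
print(json.dumps([asdict(i) for i in items], indent=2))
```

Additional adapters (html, email, etc.) would each parse their own format but emit the same `Item` records, which is what makes the downstream data searchable and persistent.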
The presentation layer is a little harder for me to get my mind around. A year ago I would have said “html is fine – just give it to me in Firefox via a web page.” In some cases this is fine, but I want finer-grained control over how this stuff is displayed. Some of the web pages I look at are formatted worse, and are less flexible, than the DEC-based EISes I played with in the 1980’s. In many cases we haven’t made any progress on the presentation layer notwithstanding all the efforts of Edward Tufte. So – “HCI”.
I’m hopeful that in a decade I’ll have a much more effective way of dealing with my periodic information routine. Until then, I’m searching for companies working on both the ingestion layer and presentation layer (preferably both). Feel free to give me a shout if this is something you are working on.
I love data. And I adore playing with it graphically, as I learn a lot from graphing longitudinal data about things I’m involved in. However, I find that almost all of the web services I use suck at providing visualization / graphing tools for their data. For example, I’ve never really found any of the graphing options in any of the running software I use satisfying or useful.
I’ve known about Tableau Software for a long time. The CEO and founder is Christian Chabot – we worked together at Softbank Venture Capital. Tableau has built a significant software company and when Christian called me up to ask if they could play around with some of my running data as part of their launch of their new web-based services, I agreed.
The hardest part of this exercise was getting granular running data out of the various systems that I keep it in. I use a Garmin watch and have very detailed GPS and heart rate data on every run I do. However, the two primary systems I store this data in (MotionBased and TrainingPeaks) have abysmal data export systems. After fighting with them for a while, I eventually did the equivalent of “scraping” the data by exporting the data underlying a bunch of individual runs.
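Once the per-run exports are on disk, the rest is just aggregation. A minimal sketch, assuming each run exports as a CSV of per-sample rows – the column names here are made up and won’t match any particular vendor’s format:

```python
import csv
import io

# Hypothetical export: one CSV per run, one row per GPS/heart-rate sample.
run_export = """time,heart_rate,distance_m
0,95,0
60,142,210
120,151,430
"""

def summarize_run(csv_text: str) -> dict:
    """Reduce one run's samples to the fields you'd plot longitudinally."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rates = [int(r["heart_rate"]) for r in rows]
    return {
        "samples": len(rows),
        "avg_hr": sum(rates) / len(rates),
        "total_m": int(rows[-1]["distance_m"]),
    }

print(summarize_run(run_export))
```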
Once I got the data out, Tableau was pretty amazing. It was extremely easy to use (in comparison – say – to Microsoft Excel where you can spend hours and still not get the format you want.) And – it was extremely fast.
After I played around with it some, the data wizards at Tableau took over and created the widget that you see above. There are a few things to note about it:
- It is a live exploratory visualization, not a static chart. You can select workout days, highlight across views to see heart rate, or filter to different kinds of activities.
- This was done with no custom development. Typically interactive visualizations like this take a lot of custom flash work; with Tableau anyone can create and publish an interactive visualization with drag & drop ease.
- Tableau’s vision with this product is to set data free on the web. They want to make real data, not just charts, accessible to people so they can question conclusions and offer their own analysis.
Tableau has been around for years and has thousands of customers, but visualizations like these are still in private beta as they make sure they hammer out all the bugs on their latest release. I’m not an investor, but based on what I see I wish I was. Nicely done Tableau (and Christian).
Well – that serves me right. If you requested a Gist beta invite, be patient. I’m grinding through my inbox and you’ll have your invite by tomorrow at the latest. Thanks to everyone who requested one, especially for all the kind feedback on the blog.
But that’s not what I’m thinking about this morning. Last week I read an intro O’Reilly book to HCI called Designing Gestural Interfaces: Touchscreens and Interactive Devices. It was ok, but one of the insights – that the public restroom has become a test bed for gestural interface technology – really stuck with me.
I found myself in a restroom at DIA last night before I got in my car for the hour long drive home. I generally hate public restrooms as my OCD kicks into high gear around everyone’s germs. I no longer think that bad things are going to happen to me if I don’t touch every street sign on a walk, nor do I get stuck in my house in the morning because I have to do everything in multiples of threes (and – if I blow it, then nines, and, if I blow that, then 27s – ugh, yuck.) However, I still dislike the idea of the public restroom. But sometimes you’ve just gotta go.
It was pretty late at night and I found myself in a recently cleaned and completely empty restroom at one end of Level 6 at DIA. I decided to perform an experiment – could I go about my business without touching a single thing other than myself or my clothes? I like to wash my hands before I go to the bathroom (You don’t? Think about it for five seconds. You’ve been shaking hands and touching things all day. C’mon.) The soap dispenser spit out soap after I put my hands under it. The sink automatically turned on when I put my hands under it (I had to move them around a little.) I walked up to the toilet, did my thing, and walked away to the sound of a toilet flushing. Back to the sink for a redo of the previous drill. I wandered over to the towel dispenser which automatically dispensed some towels when I waved my hands under it.
The only thing I had to touch was the door. Even that seems easy to solve – automatic opening and closing doors have been around forever. None of the gestures I did were particularly complex and – as I think about it – all were pretty obvious.
Life is a laboratory. Don’t forget to always be exploring and experimenting.
Ever type that into a pop-up box on your computer when installing software? If not, you’ve never installed anything from Microsoft (or many other companies) – at least not legally.
This morning I was copied on an email from my partner Ryan McIntyre to a company we are talking to about funding that said:
“I use Pro Tools and other pro audio software regularly and since the SW is quite expensive, the SW vendors go to great lengths to use copy-protection, and most audio plugins and applications (and there are dozens) have some sort of authorization code scheme, ranging from friendly to downright byzantine. It drives me nuts, but my constant exposure to it means I’ve formed some opinions about what is “easy” when it comes to entering authorization codes. The easiest plug-ins (authorization-wise) in the audio world use alphabetical codes broken up into strings of words, so instead of the long strings of numbers, you get long strings of words, which are much easier for a human to enter without a mistake. A couple code examples might be:
You get the idea. I’m assuming third-party auth-gen packages must exist to generate codes like these that give you a big enough address space yet also make guessing authorizations relatively difficult. And that you could relatively easily change your process at the manufacturer for associating MAC addresses with device IDs.”
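Ryan’s point about the address space is easy to sketch: a code of k words drawn from a wordlist of N words gives N^k possibilities, i.e. k·log2(N) bits of entropy. A toy version (the wordlist and function names are invented for illustration; a real scheme would use a curated list of a few thousand words):

```python
import math
import secrets

# Tiny hypothetical wordlist (16 words); a real generator would use thousands.
WORDLIST = [
    "amber", "basil", "cedar", "delta", "ember", "flint", "grove", "harbor",
    "ivory", "juniper", "kestrel", "lumen", "maple", "nectar", "onyx", "pollen",
]

def make_auth_code(words_per_code: int = 4) -> str:
    """Pick words uniformly at random and hyphenate them for readability."""
    return "-".join(secrets.choice(WORDLIST) for _ in range(words_per_code))

def address_space_bits(wordlist_size: int, words_per_code: int) -> float:
    """Entropy in bits: log2(wordlist_size ** words_per_code)."""
    return words_per_code * math.log2(wordlist_size)

code = make_auth_code()
print(code, f"({address_space_bits(len(WORDLIST), 4):.0f} bits)")
```

With this toy 16-word list, four words yield only 16 bits, but a 2,048-word list at four words per code gives 44 bits – a large enough space that guessing a valid code is impractical, while each individual word stays easy to type correctly.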
I prefer auth-codes that are haikus. I wonder if there’s a patent on this?
Wow – Operating System Interface Design Between 1981-2009 is awesome (thanks Dave). It collects screenshots of every major graphical OS since the Xerox Alto. Some old favorites of mine include the Apple Lisa (1983), the Amiga Workbench (1985), Geos (1986), NeXTSTEP (1989), and OS/2 2.0 (1992). At some point I had a computer that ran each of these, or installed them on a computer of mine.