Oblong – Seeing Is Believing

Comments (8)

At Foundry Group, we’ve been talking about human computer interaction (HCI) as one of our key investment themes.  Our premise behind HCI is that the way humans interact with computers is going to change radically over the next 20 years.  If you roll forward to 2028 and look back to today, the idea of being tethered to a computer via a mouse and keyboard is going to seem as "quaint" as using a punch card or a cassette tape as a primary data storage medium.

Rather than try to explain Oblong, I’ll let the video speak for itself – take a look (it’ll take three minutes – it’s worth it, I promise).


[Video: "g-speak overview 1828121108" by John Underkoffler on Vimeo]

We invested in Oblong a year ago, although, as I wrote in my post on their site titled Science Fact, my interaction with the people involved in the company dates back to 1984.  John Underkoffler, the original mind behind all of this, also writes about how Oblong came to be.

Oblong’s products are real and shipping today – take a look at the commercial overview as well as the description of the various layers of g-speak.

Now this is innovation with a capital I.

  • http://mindtouch.com/blog Steve Bjorg

    In the future, we will all have very muscular arms, obviously! :) All kidding aside, why is this cool? I don't get it. Flicking on the iPhone is cool because the ratio of work done by the machine to energy expended by the user is very high (i.e. a lot gets done with little effort). Ditto for the thumb-joysticks on gamepads. G-speak, on the other hand (pun intended), has a very poor effort/reward ratio. I remain skeptical.

    • http://www.feld.com Brad Feld

      Steve – I could give you piles of examples, but I’ll give you one. Imagine trying to do a 3d walkthrough using a mouse and keyboard. The experience basically sucks. I’m good with a mouse and keyboard but I’m a disaster trying to navigate 3-space effectively using just a mouse and keyboard (or a trackball, or a joystick). However, within 5 minutes, with g-speak, I am able to do 3-space navigations as effectively as I can move a mouse around in 2d.

  • http://experticity.com Baron

    Oblong has a very cool HCI application, which I am sure we will all embrace some time in the future.

    We came at the HCI issues from a different place. We (Experticity) started pre-Minority Report as well, so when I saw a simulation of our app in the film, I felt ill. Fortunately, we actually built it; we remain the leader in our space and have rolled out production product to a growing number of enterprises. Consumers and corporations love it, and we are producing results that are off the chart. While it is not as technically dazzling as Oblong, it is solving very real, pressing, and growing issues that we all face every day, now more than ever. If you want to learn more, go to experticity.com and/or contact me.

  • Rick Gregory

    Hmm. 10/10 for coolness. But I didn't see anything in that video that I can imagine doing in my day to day life even 20 years from now. Yes, I get that you can grab things and manipulate them and zoom around in 3D information environments – I just don't get why I'd do that outside of needing to demo a product.

    I'm not being snarky – I honestly don't see a lot of use cases for this. As an adjunct to other things? Yeah. As a primary way that I interact with computing? No. For example, how would this make your authoring of this post better? Or my reading it? Or commenting? Or online banking or a myriad of other things?

    On the positive side, I can imagine some amazing things you could do in data analysis with this, and I can see flicking a cool video onto a friend's picture with that gesture meaning 'send this to Ben'.

    The future of HCI doesn't have just one path and thinking it does holds us back – some things are better done with a keyboard, the Oblong UI will excel in other areas and a simpler mobile interface will be great in still other areas. People 20 years from now will fluidly move from one to another much as we turn on a radio, flick on a light and grab a remote to turn on the TV (all of which are interfacing to a technology) today.

    And finally, we'll have implicit HCI too, a la wearable computing. 20 years from now I'll be 70 (ack!) and fully expect to have detailed health monitoring that I don't even notice, with data flowing from a small data patch to my 'phone' and then being distributed to my home network, my doctor, etc. Emergencies will be handled automatically – a heart attack changes blood chemistry, so if the monitor detects that change it can alert 911 even if I can't. Mundane tasks too – automatic uploading of data from the patch to my fitness program, with recommendations on how to adjust my routine based on my results.

    Short version? As computing becomes ubiquitous and situational, HCI will fragment and adapt to those uses. g-speak is cool, but it is just one niche of what HCI will be.

    • http://www.feld.com Brad Feld

      Rick – I completely agree with you that the evolution of HCI will take a number of different paths. See http://www.foundrygroup.com/blog/archives/2008/03… We’ve already made several HCI investments; Oblong is only one of them.

      It’s also important to recognize that g-speak is a “spatial operating environment” (see http://oblong.com/article/086E19gPvDcktAf9.html)…. There are several key components to g-speak, including free-hand, three-space gestural input. Recognize that this is a superset of virtually all existing gestural input approaches, especially everything in the touch computing domain (e.g. it’s trivial to apply g-speak’s toolkit to a 2d device.) In addition, g-speak includes something the Oblong guys refer to as “recombinant networking” – this is a unique way of dealing with large real-time multi-person data sets. Recombinant networking is required to be able to do something interesting in 3-space with large amounts of data as well as multiple users simultaneously.

      There are already a number of real world implementations of Oblong’s spatial operating environment – both in production and prototype stage. Look for more tangible examples coming soon to a video stream near you.
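
An aside to make the "superset of touch" point concrete: a 2D touch or cursor position is just the special case of a three-space pointing ray intersected with a display plane. The sketch below is a generic, hypothetical illustration of that reduction – it is not Oblong's toolkit or API, and all function names and coordinates are invented for illustration.

```python
import numpy as np

def point_to_screen(hand_pos, hand_dir, screen_origin, screen_normal,
                    screen_x, screen_y):
    """Reduce a free-hand 3-space pointing ray to a 2D screen coordinate.

    hand_pos: hand position; hand_dir: unit pointing direction (both 3D).
    screen_origin: a point on the display plane; screen_normal: its unit
    normal; screen_x / screen_y: orthonormal in-plane axes.
    Returns (x, y) in plane coordinates, or None if the ray misses.
    """
    denom = np.dot(hand_dir, screen_normal)
    if abs(denom) < 1e-9:                 # ray parallel to the screen
        return None
    t = np.dot(screen_origin - hand_pos, screen_normal) / denom
    if t < 0:                             # pointing away from the screen
        return None
    hit = hand_pos + t * hand_dir         # 3D intersection point
    rel = hit - screen_origin
    return float(np.dot(rel, screen_x)), float(np.dot(rel, screen_y))

# Hand one meter in front of a wall display at z = 0, pointing straight at it.
print(point_to_screen(np.array([0.2, 0.1, 1.0]), np.array([0.0, 0.0, -1.0]),
                      np.zeros(3), np.array([0.0, 0.0, 1.0]),
                      np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])))
# -> (0.2, 0.1)
```

Run the same intersection against a plane under a fingertip and you have touch; run it against a wall-sized display across the room and you have the video above – the same input model, just a different plane.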

  • http://www.altgate.com/ Furqan Nazeeri

    I disagree with Rick in that there are very real current applications of this. It reminds me of that video (http://www.youtube.com/watch?v=Jd3-eiid-Uw) of Johnny Lee (PhD student, now at Microsoft) who hacked a Wii to create a sort of “poor man's g-speak”. And regarding the comment about “muscular arms”, I think that would be like saying “no thanks” to the Xerox/Apple GUI introduced in the '80s because of the risk of carpal tunnel. Personally, I can't wait to be freed from the office desk and mouse…

  • Sean Murphy

    This is a fascinating thread. Great points all around…

    At first glance, as Steve said, it does seem like Oblong creates a poor effort/reward ratio. Why exert all that effort if I can just move my mouse a few inches? However, I think the real heart of this discussion is the use cases and practical applications for Oblong today vs. tomorrow.

    HCI is one of several waves of the future. It represents a confluence of digital media and our desire/need to interact with it. I think Oblong’s take on this will evolve as people continue adopting new ways to use it. We’re already seeing this with mobile devices. As people have already said, a main theme here is ubiquity. No one wants to be chained to a computer – we want high interactivity from anywhere…

    This notion of “ubiquity” is everywhere. A great read is a Forrester Research article by James McQuivey called “How Video will take over the world.” I like his notion of “OmniVideo” – which I think plays nicely with the idea of people interacting with digital media beyond the computer.

    http://omnivideo.wordpress.com/about/

    I’m waiting for the complete eye replacement surgery from Minority Report – innovation with a capital Eye… haha.

  • Chad Moss

    This is an area where a lot of R&D is being funded; however, the larger research labs are attempting more basic gesture emulation rather than interpretation, and they seem to be simply changing or evolving the “input” rather than redefining it. The extended bandwidth opportunities for these applications are exciting. Just as this stretches HCI, it will demand networks far beyond what we have today once the full application set is realized. Taking this a step further – not only changing the HCI but also adapting the visual experience, as described below in some early thoughts from Jim Crowe – it gets very interesting.

    http://www.wired.com/wired/archive/6.11/crowe.htm

    When I couldn't find anyone working in neurophysiology or artificial intelligence who had an inkling about the bandwidth of the optic nerve, I approached the problem myself from a different angle. I calculated that to produce an encompassing stereoscopic, hemispherical image a foot away from the face, with 24-bit color, 2,400-pixel resolution, and 30 frames-per-second refresh, would take 15 terabits per second one way or 30 terabits full duplex.
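
Crowe's figure is easy to sanity-check. The back-of-the-envelope sketch below assumes – hypothetically, since the excerpt doesn't define it – that "2,400-pixel resolution" means 2,400 pixels per linear inch over a hemisphere one foot from the face. Under that reading the one-way number comes out around 7.5 Tbps rather than 15, so his exact assumptions must differ somewhat, but tens of terabits is clearly the right ballpark, which is the point.

```python
import math

# Back-of-the-envelope check of the optic-bandwidth estimate quoted above.
# Hypothetical reading: "2,400-pixel resolution" = 2,400 pixels per linear
# inch over a hemispherical display one foot (12 inches) from the face.

radius_in = 12.0                                 # one foot, in inches
hemisphere_area = 2 * math.pi * radius_in ** 2   # ~905 square inches

pixels_per_inch = 2400
pixels_per_eye = hemisphere_area * pixels_per_inch ** 2   # ~5.2e9 pixels

bits_per_pixel = 24    # 24-bit color
fps = 30               # 30 frames-per-second refresh
eyes = 2               # stereoscopic image

one_way_bps = pixels_per_eye * eyes * bits_per_pixel * fps
print(f"one way:     {one_way_bps / 1e12:.1f} Tbps")      # ~7.5 Tbps
print(f"full duplex: {2 * one_way_bps / 1e12:.1f} Tbps")  # ~15.0 Tbps
```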

