It says so much about TechCrunch that they're so easily impressed by flashy crap like this. The same kind of fawning occurred over that stupid site Qwiki that reads Wikipedia to you using text-to-speech while showing a Flickr slideshow.
Just because something would look cool in a movie doesn't mean it's "revolutionary" in the real world.
I still don't see gesture-based control as the 'future of computing'. I think it can have wide applications in gaming (it already does) and, as this shows, in presentation, or perhaps in short interactions with a home control system as you walk through rooms. But as a replacement for general computer interaction I can't see it, and the simple reason is: we humans are inherently lazy and ill-designed for this. Our shoulders get tired, our arms go numb; some of us do opt to stand, but most of us like to sit. There is a reason that our primary input methods to date have evolved around comfortable positions with minimal movement and muscle strain, and I believe the future will be the same.
Don't get me wrong, this is a VERY slick demo, kudos to the team, but as 'the future of computing', not so much. It will definitely have its uses, but I still believe that only some form of thought control, or some other as-yet-unimagined technique that allows us to maintain a non-fatiguing position, will truly be the next paradigm shift. We're lazy that way.
Just like installing 8 steering wheels in a car would give all passengers the opportunity to participate in a collaborative driving experience. Just what the world needs.
Interestingly, I saw this video 3 years ago, and yet this technology has yet to gain any significant traction. Sorry, but this is not the "near" future of computing. The primary reason is that "gesture"-based interaction is too complicated. You can't expect the user to remember the gestures and be able to operate the device, no matter how intuitive you make it, primarily because gestures are not universal. They are cultural, and even geographical. For example, think of the "come here" gesture: compare the Japanese and English versions and you have two very different motions. Even if you compare the gesture between two people of the same culture, you're going to get varied emphasis on parts of the motion.
Funny thing is, people invest so much time and effort into clunky "natural" UIs, yet don't want to improve the quality of the good old keyboard-and-mouse interface. There are so many things possible that aren't being done! Heck, even some useful things that are being done aren't very widespread.
Here is one simple example. Most apps today have tons of keyboard shortcuts. How about showing a list of possible shortcuts when I press Ctrl? I press Ctrl, the list appears. If I press Ctrl + A, the window is filtered down to show only shortcuts starting with A. When a shortcut is completed or when Ctrl is released, the window disappears. Simple, yet it would let people learn shortcuts without disrupting their workflow. The list could contain a shortcut for permanently disabling this functionality, so anyone who doesn't like it could instantly get rid of it. Isn't this "natural"?
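The core of that overlay is just prefix-filtering a shortcut registry by the keys currently held. A minimal sketch of that filtering logic (the `Shortcut` shape, `visibleShortcuts` function, and the example bindings are all hypothetical, not from any real app):

```typescript
// Hypothetical sketch of the Ctrl-overlay idea: a registry of shortcuts
// and a filter that narrows the visible list as more keys are pressed.

interface Shortcut {
  keys: string;        // e.g. "Ctrl+A"
  description: string;
}

const shortcuts: Shortcut[] = [
  { keys: "Ctrl+A", description: "Select all" },
  { keys: "Ctrl+Alt+A", description: "Archive" },
  { keys: "Ctrl+B", description: "Bold" },
];

// Given the keys held so far (e.g. ["Ctrl"] or ["Ctrl", "A"]),
// return only the shortcuts that still match the partial chord:
// either an exact match, or one that continues with more keys.
function visibleShortcuts(held: string[], all: Shortcut[]): Shortcut[] {
  const prefix = held.join("+");
  return all.filter(s =>
    s.keys === prefix || s.keys.startsWith(prefix + "+")
  );
}

// Pressing just Ctrl shows everything; adding "A" narrows the list.
console.log(visibleShortcuts(["Ctrl"], shortcuts).length);      // 3
console.log(visibleShortcuts(["Ctrl", "A"], shortcuts).length); // 1
```

In a real app this would hang off `keydown`/`keyup` listeners, with the overlay shown after a short delay so it never flashes during a fast chord.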
If you're interested in PC interfaces, computer games can be pretty inspiring because of their UI diversity. Some have really good ideas, others have obvious issues one can learn from.
It's difficult to discuss things like this without more context.
Hm... Draggable checkboxes. Let's say you have several checkboxes arranged in one column. The user points the cursor at the first one, presses the left mouse button, and then drags the cursor across the other checkboxes. When the pointer crosses a checkbox, it acquires the same state as the first checkbox clicked. The benefit over the typical solution (an extra 'select all' box) is that the user can skip one or several options by swerving the pointer, while maintaining the ability to easily select multiple things in a row.
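The state logic behind that interaction is tiny: the initial click toggles the first box, and its new state is painted onto every box the pointer subsequently crosses. A minimal sketch, with a made-up `dragToggle` function standing in for the real event handling:

```typescript
// Hypothetical sketch of "draggable checkboxes": the state of the first
// checkbox pressed spreads to every checkbox the pointer crosses.

// boxes: current checked state of each checkbox in the column.
// pressedIndex: where the drag started.
// crossed: indices the pointer passed over, in order; boxes the user
// swerved around are simply absent from this list.
function dragToggle(
  boxes: boolean[],
  pressedIndex: number,
  crossed: number[]
): boolean[] {
  const result = [...boxes];
  const paint = !boxes[pressedIndex]; // the click flips the first box...
  result[pressedIndex] = paint;
  for (const i of crossed) {
    result[i] = paint;                // ...and that state spreads
  }
  return result;
}

// Start on box 0 (unchecked -> checked), drag over boxes 1 and 3,
// swerving around box 2, which keeps its old state.
console.log(dragToggle([false, false, false, false], 0, [1, 3]));
// [ true, true, false, true ]
```

In the browser this maps naturally onto `pointerdown` on the first checkbox followed by `pointerenter` on each one crossed, with pointer capture held until `pointerup`.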
The urge to crap on this is so, so strong. Fortunately, I powered through and read the whole thing. As it turns out, they've actually sold some of this technology.
"With just the partnerships they have in place, Oblong is already cash-flow positive (and have been since last year). They haven’t raised money since the $8.8 million Series A in 2007."
Cash-flow positive? Hrm. I'm assuming that means profitable? Maybe not, but it's something. They've actually sold and installed this at some customer locations. That's pretty amazing. I'm curious what it's used for.
When I saw who it was, I immediately thought, "Oh no, not this crap again," but there is some new stuff here:
* The company is "cash-flow positive" (as mentioned above)
* They have an actual product, Mezzanine, that has sold to actual customers
* They have a software SDK that will (presumably) allow mere mortals to build actual apps for the thing; and there is also an app server
* It doesn't rely on the cumbersome gloves any more; it works with "wands" and iPhones/iPads
In addition to all this, it seems that Underkoffler has a somewhat realistic expectation of the time scale involved here: 3-years-ish.
"'All of us who read TechCrunch have a reasonably large amount of screens we use, so we need to get there,' he continues. At the same time, 'we’re not nearly finished,' Underkoffler adds. 'It’s fair to say it will be about three years until this is fully into consumer electronics devices,' he continues, noting that the biggest inhibitor is simply cost and bringing it down to a reasonable level for consumers."
Although I disagree with him about the biggest inhibitor. I think the biggest inhibitor is going to be finding actual uses for the thing. I'm trying to understand how this would fit into anything I do during the day. Rotating 3D models in real time using a wandy thing-ma-bob looks cool in a demo, but it's serious "navel contemplation" type stuff in the real world.
Right now, I'm sitting here typing text into a pretty plain-looking website, and I'm really, really into it. I could be doing other things, but this is what I choose to do. Why? Because the hope that I feel -- the hope that someone will read what I've said and respond to it -- isn't closely tied to the amount of wizardry used in the interface.
The ultimate control device is a brain-machine interface, and some products are already getting close. I don't see the reason why people would invest their time developing something else at this point in time.
Non-invasive BCIs will never be able to extract enough information to replace conventional interface elements. It's a basic physics thing (the inverse problem and all that).
Yeah, but how many people have nose piercings? I won't ever put something in my body to control a computer, but I really like the few gestures I can do on my mac trackpad. I would much rather have gestures.
Just because something would look cool in a movie doesn't mean it's "revolutionary" in the real world.
Imagine a tech blog written by technologists...