This is a project of Microsoft Research. It's purely academic at this point (done in collaboration with ETH Zürich), and at this stage it has nothing to do with the commercial side of the company, except being funded by them.
Apple flamewars and speculation about when this will hit the market are out of place here. Right now it's a bunch of academics showing off a cool prototype, which they couldn't be doing if this project had been developed with immediate commercialization in mind.
Brilliant! Not sure I like that particular implementation (with IR), but the idea of having the whole desktop keyboard work like a giant track pad is an enticing one.
Can a track pad surface be integrated into every keyboard key?
The problem with the implementation demonstrated in the video is that the gesture is recognised, and then the action happens.
There is no one-to-one mapping between the gesture in progress and action on screen, so I suspect it will not feel anywhere near as nice as a trackpad or touchscreen.
I think that could be improved with a higher resolution sensor, but the question is whether you could build such a thing while retaining space for physical keys.
IIRC, current gesture recognition still takes a couple hundred milliseconds on modern consumer touch screens (iPhones, etc.), especially for the "click" action.
A lot of plastic is already IR-transparent but totally opaque in the visible spectrum. You can hack a cheap camera and take out the IR filter to make a neat "X-ray camera".
Maybe you should do that and write a paper about it. :)
My initial thoughts: 1) is there space underneath the key for such a sensor? 2) would the infrared light bounce off of the plastic on the key? 3) would transparent keys affect chicken-peckers (and other users) adversely?
Not saying it's not possible, although I imagine there will be challenges.
This should improve once it's in use and they have enough usage reports. For now I'm guessing there are too many false positives during typing itself, e.g. moving your hand toward the mouse or something else. There are also all those hunt-and-peck typists out there to think about, doing weird and strange hovering motions. In the future, if they can map out enough false positives, they could potentially fix them.
Also, I'm sure someone with this tech could make use of a certain gesture that activates one-to-one style mapping for a certain period of time / until a button is pressed.
For that matter, even with a touch screen, a gesture is extracted/identified by touch algorithms (either in the touch controller or the SoC). Only later is an action taken. So there will definitely be an inherent delay. These algorithms involve classification methods similar to those mentioned in this demo.
But at least on a touch screen I can perform a partial gesture and see a partial result — i.e., the feeling is that the image zooms as my fingers pinch the screen, or the map pans as my fingers move. It's a very different feeling to the approach demonstrated in the video.
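To make that contrast concrete, here's a tiny sketch of the two styles for a pinch-zoom (the event format and all names are made up, purely illustrative):

    import math

    def dist(p, q):
        """Euclidean distance between two (x, y) fingertip positions."""
        return math.hypot(p[0] - q[0], p[1] - q[1])

    # Style A: classify-then-act (what the video appears to do).
    # Nothing happens on screen until the whole gesture is recognised.
    def classify_then_act(frames, threshold=0.5):
        start, end = dist(*frames[0]), dist(*frames[-1])
        if end < start * (1 - threshold):
            return "ZOOM_OUT"   # one discrete action, fired at the end
        return None

    # Style B: continuous mapping (what a touchscreen pinch feels like).
    # Every sensor frame immediately updates the on-screen scale factor.
    def continuous_zoom(frames):
        start = dist(*frames[0])
        for fingers in frames[1:]:
            yield dist(*fingers) / start   # partial gesture -> partial result

    # frames: one (finger1, finger2) position pair per sensor frame
    frames = [((0, 0), (10, 0)), ((1, 0), (8, 0)), ((2, 0), (6, 0))]
    print(classify_then_act(frames))      # -> ZOOM_OUT (only at the very end)
    print(list(continuous_zoom(frames)))  # -> [0.7, 0.4] (updates every frame)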
I always wanted this to exist. But of course, I never had the hardware skills to make one.
I have the same feeling as you about the IR thing. It makes me wonder: if the sensors are only BETWEEN keys, wouldn't that make the sensing inaccurate?
My HP laptop's mouse buttons are actually part of the trackpad itself. That is to say, you can even move your finger over them and it still functions as if you are using the trackpad. So, couldn't we extend this to the entire keyboard? It's just a really cool concept to have them merged as one. It wouldn't disrupt you each time you have to use the mouse. You could just do it right on the keyboard without moving your hands all the way to the mouse or keypad!
What's wrong with the IR implementation? I guess they could introduce capacitive sensors as well, but I'd imagine IR + capacitive sensing would only make it more accurate.
IR makes sense mainly because there has to be a simple way to know whether you're trying to type or trying to move the mouse/gesture. As soon as your fingers are a certain height off the keyboard, it can know you're not typing. Otherwise you'd probably want to hold down SHIFT or something to indicate you want to use the mouse/gesture.
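In pseudo-numbers, a first cut at that discriminator could be as dumb as this sketch (the heights and the 8 mm ceiling are entirely made-up values):

    def input_mode(finger_heights_mm, typing_ceiling_mm=8.0):
        """Crude typing-vs-gesturing discriminator from per-finger hover heights.
        The 8 mm ceiling is a made-up number, purely illustrative."""
        if min(finger_heights_mm) < typing_ceiling_mm:
            return "TYPING"      # at least one finger is down on/near a key
        return "GESTURING"       # the whole hand is hovering above the keys

    print(input_mode([2.0, 15.0, 20.0]))   # -> TYPING
    print(input_mode([25.0, 30.0, 28.0]))  # -> GESTURING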
Not sure how that would work in practice, but I personally work with a cup of tea between my hands most of the time. That could interfere. Leap Motion was popular some time ago, and it detected the cup as a cup, not as something bigger. I don't know if it used IR though.
As you said, when interpreting non-obvious probe outputs, several detection elements presumably help disambiguate: say, pressure and conductivity for trackpads. I believe 'Moves' (the app recently bought by Facebook) uses both the phone's cell-tower signal to triangulate ___location and its motion sensors to correct the occasional jump to the next neighbourhood when the nearest tower is suddenly busy.
For that 'virtual trackpad', using several frequencies or motion sensors might help. I'm not a big fan of your suggestion of a button: it seems… cumbersome, but it might actually work. I was made to realise today how I unconsciously use three- to four-key combinations most of the time.
I wonder which IR proximity sensor they used. I'm currently using the OSRAM SFH7773 in my projects — it's the cheapest one I could find, but it is still quite expensive ($1.20 at qty 25 from RS/Allied). They are using 64 of them.
I also wonder how they deal with interference. Each sensor emits IR pulses and measures the light that bounces back. On the SFH7773 at least, there is no way to synchronize the pulses to an external source, which means you will get light bleeding between sensors.
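One workaround, I'd guess, would be to time-multiplex the measurements from the host so that only one emitter pulses at a time. A rough sketch of the idea (the register addresses and the one-byte trigger/read protocol are placeholders, NOT the SFH7773's actual register map):

    # Round-robin polling so only one IR emitter is active at a time,
    # avoiding light bleed between neighbouring sensors.
    import time
    from smbus2 import SMBus

    REG_TRIGGER = 0x00   # hypothetical: write here to start one measurement
    REG_RESULT = 0x01    # hypothetical: read back the proximity count

    def poll_round_robin(bus, addresses, settle_s=0.001):
        """Trigger and read each sensor in turn; return one full 'frame'."""
        frame = []
        for addr in addresses:
            bus.write_byte_data(addr, REG_TRIGGER, 0x01)  # pulse this emitter only
            time.sleep(settle_s)                          # let the measurement finish
            frame.append(bus.read_byte_data(addr, REG_RESULT))
        return frame

    with SMBus(1) as bus:
        sensors = [0x38 + i for i in range(8)]  # one hypothetical bank of 8 sensors
        while True:                             # stop with Ctrl+C
            print(poll_round_robin(bus, sensors))

Strictly serialising 64 sensors would cap the frame rate, though, so a real design would probably pulse non-adjacent sensors in parallel.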
Sadly, the publication is not (yet) available.
On a side note: I noticed that ETH Zurich seems to be doing a lot of groundbreaking work recently in electronics- and robotics-related fields.
Where would you put a single camera to have a view of the fingers of both hands while they're on the keys? I imagine the 64 sensors are acting as a single camera [yeah, sorry, didn't RTFA].
It looks like they borrowed Apple's chiclet keys and frame; when he flips it over, the PCB has Microsoft written on it [1]. To be fair, for a prototype, why wouldn't they use it? It looks nice and sleek, it's flat and thin so sensors can be inserted between the gaps, and more or less they had one lying around in their office.
... umm... because it's white? And everything that's white is Apple?
If you actually look at it you'll see it's not printed anything like an Apple keyboard: the letters are offset, for one (they are centered on Apple keyboards), and it has Windows keys.
It's not an Apple keyboard; it's a keyboard made by some manufacturer other than Apple (it seems to be some cheap Apple keyboard knock-off), but with OS X modifier keys.
However, that’s barely relevant. They built a prototype, probably with some cheap keyboard they either had around or just quickly ordered online. The particular keyboard used for this particular prototype is literally of no relevance at all. It doesn’t even have anything to do with what they want to show.
I really don't get this whole fucking discussion. It's some of the most stupid stereotype pattern matching that I have ever seen on HN.
The default position that all innovation comes only out of Apple (viz. the Apple keyboard comment) suggests that:
1. Apple marketing is still doing a great job convincing everyone that they still have innovation at the heart of everything they do (I dare say it's improvement and miniaturisation rather than innovation)
2. Microsoft needs to do more to burnish its innovation credentials
As much as I want to agree with you, the keys have OS X labels on them, such as the Command key, so this is an actual Apple keyboard modified with sensors.
Not sure about this specific execution, but the idea of having a unified input surface is very appealing. 3D gestures have also been floating around, but this is the closest thing I have personally seen to the current reality of computer use.
That said, the rotation of the image gesture seems overly confusing. Mouse or trackpad seem like they would be an easier way to go...
I would have reached for the mouse and rotated the stupid picture in a fraction of the time, without even needing to remember application-specific gestures.
How insanely cool. I'm going to have a go at this with an Arduino - I recall from the last time I wired up a theremin that the right photoresistor can be quite accurate.
Microsoft R&D does some cool stuff. Kudos. I love where this is going—a laptop with this style tech and no trackpad at all would be very interesting, very minimal. At some level it seems like the kind of thing Apple would do—they've already removed the separate button from the trackpad, so removing the trackpad completely is almost a logical next step.
Looks like it might need to be a little higher res though, before it becomes productizable.
This looks really interesting. I was considering making something similar when my Myo [0] arrived (and probably still will).
The thing that I thought might make it difficult is the exaggerated movements required. All the demos I've seen seem to make the device appear a little insensitive. This demo (of the keyboard) has the same issue. It's probably that you need the exaggerated signals to be sure of the user's intent - or maybe that's just for the sake of the demo?
The idea of turning the surface of the keyboard into a trackpad is very cool; you could almost get rid of the mouse. Except: how do you click? Accidental presses of keyboard keys are probably what makes it impractical.
Use one of the keys on the keyboard? Even if you were manipulating the pointer by hovering one hand (instead of gesturing), you could still click a button with the other hand.
Imagine holding down SHIFT+SomeOtherKey (with your pinky and thumb, so maybe SHIFT+ALT), this could indicate you want to use the mouse. Then just use your right hand to control the mouse, any key you tap will be a single click, double-tap is a double click, and tapping any two keys will be a right-click. Feels pretty intuitive.
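As a toy sketch of that chord idea (the key-event stream and all names here are made up; a real version would also need a short debounce to tell one-key taps from two-key taps):

    DOUBLE_TAP_WINDOW = 0.3  # seconds

    class MouseMode:
        """Toy state machine for the SHIFT+ALT 'mouse mode' idea above."""
        def __init__(self):
            self.held = set()
            self.last_tap_time = float("-inf")

        def feed(self, key, is_down, now):
            """Consume one (key, is_down) event; maybe emit a click event."""
            if is_down:
                self.held.add(key)
            else:
                self.held.discard(key)
            chord_active = {"SHIFT", "ALT"} <= self.held
            if not chord_active or not is_down or key in ("SHIFT", "ALT"):
                return None
            # Any other key tapped while the chord is held acts as a click.
            if now - self.last_tap_time < DOUBLE_TAP_WINDOW:
                event = "DOUBLE_CLICK"
            elif len(self.held - {"SHIFT", "ALT"}) >= 2:
                event = "RIGHT_CLICK"   # two non-modifier keys down together
            else:
                event = "LEFT_CLICK"
            self.last_tap_time = now
            return event

    mm = MouseMode()
    mm.feed("SHIFT", True, 0.00)
    mm.feed("ALT", True, 0.05)
    print(mm.feed("J", True, 0.10))   # -> LEFT_CLICK
    mm.feed("J", False, 0.15)
    print(mm.feed("J", True, 0.25))   # -> DOUBLE_CLICK (within the window)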
With the old FingerWorks touch-pad keyboards [1], single-finger taps were for pressing keys, two-finger movements moved the mouse cursor, and two-finger taps were for clicks. It worked really well. The main annoyance was the lack of feedback when typing, a problem which this Microsoft version solves, provided the gesture & tap recognition is sufficiently good. FingerWorks went on to be acquired by Apple and its technology was incorporated into the first iPhone.
Using the trackpad on a MacBook Pro, I don't usually find myself clicking, but mostly using gestures such as tapping. Even two-finger tapping is recognized. I don't see why one of the crazy gestures this cool thing can recognize couldn't represent what we currently think of as left click, and another the right click.
Well, yes, most of us tap instead of clicking. The question is, how would it differentiate a "tap" from a "keypress" if the trackpad and keyboard are in the same place?
You could do some other gesture, like raising two fingers? We are trained to tap now, but people can figure out different gestures, like pinch-to-zoom, so I don't think clicking necessarily has to be a tapping gesture.
You could click buttons with your thumb behind the space bar. In fact, I guess for typing it would be better if you kept your fingers on the keyboard, so why not make the keyboard keys touch-sensitive?
Does anyone remember the FingerWorks keyboards that one could put into PowerBooks? Reminds me quite a bit of the stuff that they did with their multitouch keyboard.
(They got acquired by Apple in 2005)
Gotta say, not sure why some are shitting on MS on this one. Whether you like the keyboard or not, it shows that MS isn't all business and fuckups. Sometimes it's a smart idea, or one leading to an innovation.
Very cool. Did anyone else notice that the direction of many of the gestures was the reverse of the "natural" motion? I.e. to scroll left, the hand gestures to the left, rather than to the right as if you're dragging the viewport to the right. Also, to zoom in, the thumb and index finger are brought together, rather than spread apart as if you're dragging two points on the viewport away from each other.
Obviously this would all be configurable but it's strange to work on something to reduce HCI friction while using a design that's counterintuitive.
Hah, I HATE these, but I actually like the parent's idea of having a key be a small trackpad. It would have to know when to turn itself off, of course, unless you want to click some random thing every time you type 'j'...
Cool, I had the same basic idea at one point to do this with an Arduino. I wanted to have infrared emitters below the keys, and have an infrared sensor somewhere slightly in front of the keyboard (but at finger height). Your fingers should reflect infrared light back to the sensor. Not sure how accurate it would be, but I think it would be accurate enough to capture sweeping gestures such as your hand moving up/down for scrolling. Something like this wouldn't be precise enough to replace a mouse, but definitely useful enough to improve workflow (scrolling, switching between workspaces, apps, shortcuts). If the keys were clear, I'd imagine it would be even easier for the sensor to determine where your hand is based on how much infrared light your hand is reflecting back from different positions.
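For the sweep-detection part, here's a rough host-side sketch of what I mean (it assumes the Arduino just streams the two reflectance readings over serial as "left,right" lines; the port, threshold, and timing window are all made-up values):

    # Detect hand sweeps from two IR reflectance readings streamed by a
    # hypothetical Arduino as "left,right" lines over serial.
    import time
    import serial  # pyserial

    THRESHOLD = 200      # reflectance level that counts as "hand present"
    SWEEP_WINDOW = 0.5   # max seconds between the two sensors firing

    def detect_sweeps(port="/dev/ttyACM0", baud=9600):
        last_fired = {}  # sensor name -> last time it crossed the threshold
        with serial.Serial(port, baud, timeout=1) as ser:
            while True:  # stop with Ctrl+C
                line = ser.readline().decode(errors="ignore").strip()
                try:
                    left, right = (int(v) for v in line.split(","))
                except ValueError:
                    continue  # skip malformed lines
                now = time.monotonic()
                for name, value in (("left", left), ("right", right)):
                    if value > THRESHOLD:
                        other = "right" if name == "left" else "left"
                        if now - last_fired.get(other, float("-inf")) < SWEEP_WINDOW:
                            print(f"sweep towards the {name}")
                            last_fired.clear()
                        else:
                            last_fired[name] = now

    detect_sweeps()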
Very intuitive stuff, I hope to see this in laptops soon.
Really cool idea, but I think they have the implementation all wrong. I'm picturing something like a long Leap Motion sensor across the top (far side) of the keyboard. Such a keyboard needs the sensitivity of a trackpad to be much better than simple key commands, no?
The Type Cover doesn't, however it does have a neat trick where it seems to be able to sense your fingers nearby and activate the backlight. I always wondered what method it uses to do that and how hackable it is.
Conceptually this is a pretty cool idea! I easily see the applications around web browsing or reading a doc, but a few practical problems stand in the way.
Navigation and clicking with the keyboard will be a huge problem. How could I move the mouse pointer and simultaneously click? The demo video shows gaming, but most games still require pressing keys and clicking the mouse at the same time. If I tried to hold down a key on the keyboard but wanted to click at the same time... do I have one finger move in mid-air above some random part of the keyboard?
Hopefully there's continued refinement though...I'm surprised there hasn't been much thought about replacing extra keys with more customized multi-touch gestures.
I find this functionality very cool, but does it have to be motion sensing? I find physically moving your hand off the keyboard and back on to be somewhat tiring, especially since it apparently does the same thing as "scroll down". But since the hand stays on the left side, how about adding a button underneath the left palm, so that by moving the palm slightly the command is performed? This, I think, takes away any inconsistencies of "motion sensing" while still keeping your efficiency high (not to mention keeping your hands comfortable!).
Well, this is interesting. A while back I got a motion sensor to try some of this on. My current keyboard [1] was originally designed for the ThinkPad laptops/computers and has the joystick between the GHB cluster. The mouse buttons are below the space bar. That can speed some things up, but the trackpad on the MacBook is much more expressive. In the interest of full disclosure, though, I do tend to collect obscure HCI devices (like the Microsoft Commander, if you remember that one!)
I'd be so glad to own a laptop where the touchpad became unnecessary and got removed. I'm a laptop-only user, and the padding between the keys and the closer edge of the laptop is a source of pain for me, as it forces me to place my lower arm on the edge of the computer, which causes my lower forearm and carpus to hurt. A 15-inch laptop could be built with a keyboard that begins at the closest edge of the device's bottom, and the space between the screen joint and the end of the keyboard could be filled with... I don't know, a couple of large speakers and dedicated media buttons? A phone-charging unit? A coffee holder. But such a layout would definitely help my wrists and arms.
MS has been putting a lot of emphasis on hand-gesture recognition since their Kinect 'surfaced'. Meanwhile, they're still quite behind Apple when it comes to touchpads. I wonder whether it will pay off in the future - I, for one, would quite like a Surface-type tablet with this sort of keyboard, so that one doesn't have to lift a hand for gestures.
I've been messing around with a Leap Motion device lately, and while I was initially skeptical, there are some actions which are genuinely more pleasant than using a keyboard alone.
Sure, it may not be ideal for everyday computing, but it can be great for specific scenarios. Think a presentation, or something that has less frequent interactions than surfing the web.
If you currently have a Leap and you're interested in joining the private beta to provide feedback that will help us hit the mark, I encourage you to shoot me an email at [email protected].
The most recent commit to leapjs on GitHub was 7 days ago, and Wikipedia says they raised $30 million in January of 2013. Take that for what it's worth, but as far as I know they are doing just fine.
I think Leap Motion is great, but I think that same kind of sensing can be achieved through lower tech ways, like a multi-color glove plus a webcam.
It is a cool idea and I hope they succeed, but I'm working on the low-tech LEAP idea and I'd like to see how well that stacks up against their more complex system.
I would be interested to hear more about the low tech ways, but I don't find Leap to be overly complex. There is a lot of math if you're on the developer side, but I think you need that level of complexity to be able to customize it for whatever your needs are.
This looks very similar to the Leap Motion. Both use infrared sensing, although they seem to use it in different ways. I think the setup in the video could probably be recreated using a Leap Motion integrated into the spacebar or other keys. I'm curious to see what they'll do with it (whether it'll actually become a product.)
They could probably double their resolution with an IR sensor under the keys, transparent elastomer, and IR-transparent plastic for keys. But they did mention sampling at 300Hz was an achievement so maybe they're running into some embedded issues.
What about a capacitive touchscreen with increased sensing distance? That would be much more accurate, allowing the keyboard to be used as a trackpad (but it would probably require fancier algorithms to distinguish touch input from typing).
This looks like an awesome alternative to having to swipe the screen with your hands, and it would make Windows 8 much more palatable on a desktop machine. It would be great to see it actually running, though.
I want to know whether or not this is more fatiguing than using the Thinkpad pointing stick, and whether or not people prefer the tactility of the rubber cap over the swiping gestures on hard keys.
I did not enjoy working with it, one of the few times I had to use a Mac to do something. However, I think these new forms of interaction will definitely play an important role in the way we will use (mechanical) interfaces in the future.
I absolutely love using my Magic Mouse. Whenever I get on someone else's computer, I catch myself trying to swipe back and forth between web pages and desktops with the mouse.
The sideways scrolling seems like the only worthwhile addition to me; other than that, I kinda like the feel of physical buttons and a scroll wheel under my fingertips.
I used to think the same way, and when I bought an iMac about 3 years ago I just stuck to the Magic Mouse that came with it. I will never go back as long as I'm using a Mac now - it just became part of my workflow so seamlessly that using a regular mouse is out of the question. I get just about all of the gestures that the trackpad offers with the precision of the mouse. I love it.
When is someone going to start replacing parts of the keyboard? Remove the function keys and replace them with quick gestures? Is there a copy/paste gesture on the way?
Rather than removing functionality from already crammed laptop keyboards, I rather think this will be used to get rid of the touchpad and allow smaller laptop form factors that still have full-sized keyboards, e.g. new kinds of Surface devices.
On a laptop wouldn't it be far easier/cheaper to just add a keyboard facing camera in the bezel and use that to track gestures? Indeed you could probably retrofit that using a standard webcam.
Most of the time I'm not using my webcam (on a Kubuntu desktop machine). It would be great to point it at the keyboard, do a "touch the corners" setup routine and then be able to do gestures. Perhaps map CapsLock to "GestureLock". Seems do-able with processing.org, perhaps http://www.silentlycrashing.net/ezgestures/ is a starting point.
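A minimal sketch of the motion-tracking part with plain OpenCV (the ROI is hard-coded here; a real version would fill it in from the "touch the corners" calibration step):

    # Track the centroid of motion over a webcam view of the keyboard.
    import cv2

    KEYBOARD_ROI = (100, 200, 500, 400)  # x1, y1, x2, y2 - illustrative values

    cap = cv2.VideoCapture(0)
    prev = None
    try:
        while True:  # stop with Ctrl+C
            ok, frame = cap.read()
            if not ok:
                break
            x1, y1, x2, y2 = KEYBOARD_ROI
            gray = cv2.cvtColor(frame[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
            gray = cv2.GaussianBlur(gray, (21, 21), 0)
            if prev is not None:
                # Frame differencing: bright pixels = movement since last frame.
                diff = cv2.absdiff(prev, gray)
                _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
                m = cv2.moments(mask)
                if m["m00"] > 0:  # some motion detected
                    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
                    print(f"motion centroid in keyboard space: ({cx:.0f}, {cy:.0f})")
            prev = gray
    finally:
        cap.release()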
Your lack of imagination is disappointing. I didn't say that we need 24 unique gestures. I could throw out a straw man solution but let me simply say that some sort of convention is needed.
Strange that the "swiping & pinch to zoom" gestures (starting at 41) are exactly the opposite of how they work on an iPad.
I can understand swiping being backward - people can disagree about "move the camera" vs. "move the paper" - but pinch-to-zoom-in is wrong in all contexts.
Ah, Microsoft. You've got this cool research, but somehow you manage to get the usability all wrong. How did these people not notice that they implemented pinch-to-zoom backward from how it works on their own phones?
I'm sure they noticed, and totally didn't care. At this level of research/prototyping, it would be like worrying that your prototype electric car with 50% more range has the steering wheel on the wrong side.
> people can disagree about "move the camera" vs. "move the paper" - but pinch-to-zoom-in is wrong in all contexts.
It's not wrong, it's just another manner of perspective: "shrink the aperture" vs. "shrink the paper".
> how did these people not notice that they implemented pinch-to-zoom backward from how it works on their phones?
On a touchscreen, it feels like you're actually working with the content directly, but when it's a keyboard, touchpad, or scrollwheel on a mouse, you're really working with the "camera".
Sure, there is a model that can explain it. Same w/ camera-vs-paper in swipe.
By "wrong" I mean "does not work that way in any other device, and so violates the user's by-now-fairly-well-burned-in expectation". So, in terms of the principle of least surprise, wrong.
Wilfully so, given that (to my knowledge at least) pinch-to-zoom has only existed for a few years and has only worked in this one way.
Eh. I prefer natural scrolling. Took me two seconds to get used to and I don't like the 'normal' way anymore. On my work laptop, though, I keep it off because it reverses a mouse scroll wheel too, which I find abhorrent.
Smaller distance between fingertips == smaller abstract distance from what's shown on the screen. To "move farther away", you move your fingers farther apart.
It doesn't seem all that odd. Just requires shifting the mental model (as with swiping).
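Spelled out numerically (just to make the two mental models concrete), the same pinch maps to opposite scale factors:

    # Two consistent mental models for the same pinch-in gesture:
    d0, d = 10.0, 5.0           # fingertips moved closer together
    content_scale = d / d0      # "shrink the paper": zooms OUT (0.5x, iPad style)
    camera_scale = d0 / d       # "shrink the aperture": zooms IN (2.0x, as in the video)
    print(content_scale, camera_scale)  # 0.5 2.0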
Meanwhile, later in the same video (3:21), they show the same gesture following the more common behavior (widening the fingertip distance zooms into a document).
Seems like something that should be configurable per program (though I think cross-app consistency would make things easier for me).
This seems like one of those things people will eternally argue about, much like 3rd person video game camera controls and to a lesser extent, plane simulator controls in games viewed from a chase camera.
That doesn't mean they're allowed to get the terminology wrong. "Mechanical keyboard" is a very specific term: it means the switches used are of the Cherry MX type or similar. Scissor switches are totally different and do not fit the "mechanical keyboard" description at all.
Positional controls, like for adjusting the ___location or rotation of an element on the screen. It's hard to imagine somebody doing 3D animation or advanced audio editing using only keyboard shortcuts, but it could be feasible with this device.
However, their video seemed to focus on gestures only. I'm not sure why they aren't demoing it for positional controls; maybe they haven't exactly figured that out yet.
I'm really excited about this, and from now on I'm going to have trouble not thinking about it every time my hand moves from my keyboard to my mouse.
All well and good, but Microsoft refuses to let users of the Microsoft Natural Ergonomic Keyboard 4000 swap the middle "Zoom" slider to "Scroll"... so I don't see how Microsoft PMs will let this fulfill its promise.
PS: the keyboard drivers ship with key-remapping software; they just don't let you remap the zoom, though if a user is willing to hack the config they can do it manually :(
You give up the precision of a mouse though. I could maybe see it being used to rotate the viewport, but even there I think you'd want the precision.
> When doing presentations.
That's what god made the clicky devices for.
> When watching a movie/video on fullscreen.
You're going to be close enough to your keyboard to do that? And is adjusting the playback time to exactly the point you want, with the precision of the mouse, really going to be less convenient than scrubbing through with a gesture?
> Cooking by following a recipe on the computer and not having clean hands.
Maybe, maybe. I'm not sure how stuff dropping off your fingers might affect it, but yeah, I could see something like that, perhaps.
For most things, though, the precision you're giving up doesn't seem likely to be worth the time saved by not moving a hand to the mouse.
TLDR: Yay science!