Robotics has been trying the same ideas for who knows how many years now, and somehow they still believe it will work this time.
Perhaps it escapes the brightest minds at Google that people can grasp things with their eyes closed, that we don't need to see to grasp. But designing good robots with tactile sensors is too much for our top researchers.
This is a lack of impulse response data, which our motor control paradigms usually throw away. I reread Cybernetics by Norbert Wiener recently, and this is one of the fundamental insights he had. Once we go from Position/Velocity/Torque down to encoder ticks, resolver ADCs, and PWM, we will have proprioception as you'd expect. This also requires a cycle time improvement of several orders of magnitude, plus variable rate controllers.
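To make the distinction concrete, here is a minimal sketch of what I mean by owning the encoder ticks and PWM directly instead of handing a Position/Velocity/Torque setpoint to a black-box drive. The hardware hooks (read_encoder_ticks, read_current_adc, write_pwm_duty) are hypothetical stubs I made up for the example, and a real inner loop would live in firmware at tens of kHz rather than in Python:

    import time

    # Hypothetical hardware hooks, stubbed out so the sketch actually runs.
    def read_encoder_ticks():
        return 0            # would read a quadrature counter register

    def read_current_adc():
        return 0.0          # would read a motor-phase current ADC

    def write_pwm_duty(duty):
        pass                # would write a PWM compare register

    KP, KD = 0.002, 0.0001  # gains applied to raw ticks, not engineering units

    def inner_loop(target_ticks, steps=2000, period_s=5e-5):
        # A ~20 kHz variable-rate loop that owns ticks/ADC/PWM directly.
        last_ticks = read_encoder_ticks()
        last_t = time.perf_counter()
        log = []                              # raw telemetry the upper layers never see today
        for _ in range(steps):
            now = time.perf_counter()
            dt = now - last_t                 # variable rate: use measured dt, not nominal
            ticks = read_encoder_ticks()
            current = read_current_adc()
            err = target_ticks - ticks
            vel = (ticks - last_ticks) / dt if dt > 0 else 0.0
            duty = max(-1.0, min(1.0, KP * err - KD * vel))   # PD on raw signals
            write_pwm_duty(duty)
            log.append((now, ticks, current, duty))
            last_ticks, last_t = ticks, now
            time.sleep(max(0.0, period_s - (time.perf_counter() - now)))
        return log

The point is the log: once the loop runs at the tick/ADC/PWM level, you get impulse-response-grade data for free instead of a pre-filtered position number.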
I think this is correct, to an extent. But consider handling an egg while your arm is numb. It would be difficult.
But perhaps a great benefit of tactile input is its simplicity. Instead of processing thousands of pixels, which are susceptible to interference from changing light conditions, one only has to process perhaps a few dozen tactile inputs.
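As a toy illustration of how small that input space is (the taxel count and the slip heuristic here are made up for the example, and read_taxels is a stand-in for a real tactile array):

    import random

    NUM_TAXELS = 24   # a few dozen scalars, versus millions of pixels per camera frame

    def read_taxels():
        # stand-in for a real tactile array read; pretend these are normal forces
        return [random.uniform(0.0, 1.0) for _ in range(NUM_TAXELS)]

    def adjust_grip(prev, curr, grip_force, target=0.3, step=0.05):
        # crude slip heuristic: force dropping at once across a third of the taxels
        slipping = sum(c < 0.8 * p for p, c in zip(prev, curr)) > NUM_TAXELS // 3
        total = sum(curr)
        if slipping or total < target * NUM_TAXELS:
            return grip_force + step          # squeeze a little harder
        if total > 2 * target * NUM_TAXELS:
            return grip_force - step          # back off before crushing the egg
        return grip_force

    grip, prev = 0.2, read_taxels()
    for _ in range(100):                      # 100-cycle toy run
        curr = read_taxels()
        grip = adjust_grip(prev, curr, grip)
        prev = curr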
Ex-Googler so maybe I'm just spoiled by access to non-public information?
But I'm fairly sure there's plenty of public material of Google robots gripping.
Is it a play on words?
Like, "we don't need to see to grasp", but obviously that isn't what you meant. We just don't need to if we saw it previously, and it hadn't moved.
EDIT: It does look like the video demonstrates this, including why you can't forgo vision (changing conditions, see 1m02s https://youtu.be/4MvGnmmP3c0?t=62)
I think the point GP is raising is that most of the robotic development in the past several decades has been on Motion Control and Perception through Visual Servoing.
Those are realistically the 'natural' developments in the ___domain knowledge of Robotics/Computer Science.
However, what GP (I think) is raising is the blind spot that robotics currently has on proprioception and tactile sensing at the end-effector as well as along the kinematic chain.
As in, you can accomplish this with just kinematic position and force feedback and visual servoing. But if you think of any dexterous primate, they will handle an object and perceive texture, compliance, brittleness, etc. in a much richer way than any state-of-the-art robotic end-effector.
Unless you devote significant research to creating miniaturized sensors that give a robot an approximation of the information-rich sources in human skin, connective tissue, muscle, and joints (tactile sensors, tensile sensors, vibration sensors, force sensors), that blind spot remains.
Ah, that's a really good point, thank you. It makes me think of how little progress there's been in that ___domain, whether it's robots perceiving touch or machines tricking our own perception of it.
For the inverse of the robot problem: younger me, spoiled by youth and thinking multitouch was the beginning of a drumbeat of steady revolution, distinctly thought we were a year or two out from having haptics that could "fake" the sensation of feeling a material.
I swear there was stuff to back this up... but I was probably just on a diet of unquestioning, and projecting, Apple blogs when the Taptic Engine was released, and they probably shared one-off research videos.
I'm convinced the best haptics I use every day are the "clicks" on the MacBook trackpad. You can only tell they're not real because they don't work when it's beachballing.
Sorry, but I just burst out laughing at my own comment, when I considered the technical difficulties in trying to teach a robot to handle the change of context needed to balance on its hands, rather than its feet, let alone walk around on them. Ahaha.