Natural intelligence is too expensive. It takes too long to grow, and if things go wrong we have to jail it. With computers we just change the software.
I listened to Lex Fridman for a long time, and there were a lot of critiques of him (Lex) as an interviewer, but since the guests were amazing, I never really cared.
But after listening to Dwarkesh, my eyes are opened (or maybe my soul). It doesn't matter that I've heard of only a few of his guests, because he knows exactly the right questions to ask. He seems to have genuine curiosity about what the guest is saying, and will push back if something doesn't make sense to him. Very much recommend.
He is one of the most prepared podcasters I’ve ever come across. He puts all other mainstream podcasts to deep shame.
He spends weeks reading everything by his guests prior to the interview, asks excellent questions, pushes back, etc.
He certainly has blind spots and biases just like anyone else. For example, he is very AI scale-pilled. However, he will have people on, like today's guests, who contradict his biases. That is something a host like Lex apparently could never do.
Dwarkesh is up there with Sean Carroll's podcast as the most interesting and most intellectually honest, in my view.
Those were the days when you could only be robbed of what you had in your pockets.
Nowadays everyone walks around with a direct link to their entire bank account, and criminals in Brazil know this. They will rob your phone, but what they really want is to use you as an ATM. Private banks are not held accountable for the massive and rampant identity fraud in banking, which criminals use to launder their transactions.
The private sector does not care. Somebody who opened an account yesterday receiving R$5000,00 at 2 in the morning in the middle of the street? Nothing suspicious...
The same account cashing out at an ATM the very next day? Looks fine to me...
Brazilian banks need to be held to 'know your customer' laws ASAP and made liable for criminal activity carried out on their systems.
Robotics has been trying the same ideas for who knows how many years. They still believe it will work this time, somehow.
Perhaps it escapes the brightest minds at Google that people can grasp things with their eyes closed. That we don't need to see to grasp. But designing good robots with tactile sensors is too much for our top researchers.
This is a lack of impulse-response data, which is usually lost to today's motor-control paradigms. I reread Cybernetics by Norbert Wiener recently, and this is one of the fundamental insights he had. Once we go from Position/Velocity/Torque down to encoder ticks, resolver ADCs, and PWM, we will have proprioception as you'd expect. This also requires cycle times several orders of magnitude faster, plus variable-rate controllers.
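To make that concrete, here is a rough Python sketch of a joint loop that lives at the encoder-tick/PWM level and keeps the raw stream around as proprioceptive data. Everything here (names, fields, gains, rates) is an illustrative assumption, not any particular robot's real API:

    # Illustrative sketch: a low-level joint loop that exposes raw signals
    # (encoder ticks, resolver ADC counts, PWM duty) upstream instead of
    # hiding them behind a Position/Velocity/Torque abstraction.
    # All names and numbers are assumptions made for illustration.
    from dataclasses import dataclass

    @dataclass
    class RawJointSample:
        t: float            # timestamp in seconds
        encoder_ticks: int  # raw incremental encoder count
        resolver_adc: int   # raw resolver ADC reading
        pwm_duty: float     # last commanded duty cycle, -1.0..1.0

    class LowLevelJointLoop:
        def __init__(self, kp: float = 0.002, kd: float = 0.0001):
            self.kp, self.kd = kp, kd
            self.last_error = 0
            self.history: list[RawJointSample] = []  # proprioceptive record

        def step(self, t: float, target_ticks: int, encoder_ticks: int,
                 resolver_adc: int, dt: float) -> float:
            # PD control directly in tick space; the output is a PWM duty cycle.
            error = target_ticks - encoder_ticks
            duty = self.kp * error + self.kd * (error - self.last_error) / dt
            duty = max(-1.0, min(1.0, duty))
            self.last_error = error
            # Keep the raw sample: this stream is the impulse-response-style
            # data that higher P/V/T layers normally never see.
            self.history.append(RawJointSample(t, encoder_ticks, resolver_adc, duty))
            return duty

Run something like this at a high rate and log the history, and you have the kind of raw signal a learning system could actually use.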
I think this is correct, to an extent. But consider handling an egg while your arm is numb. It would be difficult.
But perhaps a great benefit of tactile input is its simplicity. Instead of processing thousands of pixels, which are susceptible to interference from changing light conditions, one only has to process perhaps a few dozen tactile inputs.
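As a toy illustration of that simplicity, here is a sketch of a gripper regulating its grip from a handful of pressure readings. The sensor count, thresholds, and gains are all invented:

    # Toy sketch: regulating grip from a few tactile readings instead of a
    # full camera frame. Thresholds and gains are invented for illustration.
    def grip_adjustment(pressures_kpa: list[float],
                        last_pressures_kpa: list[float] | None = None,
                        target_kpa: float = 20.0,
                        slip_jump_kpa: float = 5.0) -> float:
        """Return a small +/- change to the grip command from ~a dozen taxels."""
        mean_p = sum(pressures_kpa) / len(pressures_kpa)

        # Crude slip cue: a sudden pressure drop on any taxel since last sample.
        slipping = False
        if last_pressures_kpa is not None:
            slipping = any(prev - cur > slip_jump_kpa
                           for prev, cur in zip(last_pressures_kpa, pressures_kpa))

        if slipping:
            return +0.05                      # tighten quickly on slip
        return 0.002 * (target_kpa - mean_p)  # otherwise drift toward target force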
Ex-Googler so maybe I'm just spoiled by access to non-public information?
But I'm fairly sure there's plenty of public material of Google robots gripping.
Is it a play on words?
Like, "we don't need to see to grasp", but obviously that isn't what you meant. We just don't need to if we saw it previously and it hasn't moved.
EDIT: It does look like the video demonstrates this, including why you can't forgo vision (changing conditions, see 1m02s https://youtu.be/4MvGnmmP3c0?t=62)
I think the point GP is raising is that most of the robotic development in the past several decades has been on Motion Control and Perception through Visual Servoing.
Those are realistically the 'natural' developments in the ___domain knowledge of Robotics/Computer Science.
However, what GP (I think) is raising is the blind spot that robotics currently has on proprioception and tactile sensing at the end-effector as well as along the kinematic chain.
As in, you can accomplish this with just kinematic position and force feedback and visual servoing. But if you think of any dexterous primate, they will handle an object and perceive texture, compliance, brittleness, etc. in a much richer way than any state-of-the-art robotic end-effector.
Unless you devote significant research to creating miniaturized sensors that give a robot an approximation of the information-rich sources in human skin, connective tissue, muscle, and joints (tactile sensors, tensile sensors, vibration sensors, force sensors), that blind spot remains.
Ah, that's a really good point, thank you - makes me think of how little progress there's been in that ___domain, whether robots perceiving or tricking our perception.
For the inverse of the robot problem: younger me, spoiled by youth and thinking multitouch was the beginning of a drumbeat of steady revolution, distinctly thought we were a year or two out from having haptics that could "fake" the sensation of feeling a material.
I swear there was stuff to back this up...but I was probably just on a diet of unquestioning, and projecting, Apple blogs when the taptic engine was released, and they probably shared one-off research videos.
I'm convinced the best haptics that I use every day are the "clicks" on the Macbook trackpad. You can only tell they're not real because they don't work when it's beachballing.
Sorry, but I just burst out laughing at my own comment, when I considered the technical difficulties in trying to teach a robot to handle the change of context needed to balance on its hands, rather than its feet, let alone walk around on them. Ahaha.
It’s open loop in that the measurement of being in focus relies on the subject matter, and is a different measurement entirely.
I’m not making this up, camera manufacturers have told me to my face that focus is open loop, period. They can’t guarantee repeatable focus.
Notably the measurement isn’t of the state of the motor/gearing. Furthermore, being “in focus” means the subject matter’s out of focus blur is below some threshold; there is a range of focus states that qualifies — but those seemingly small differences can affect camera calibration, with >pixel-level differences in effective focal lengths.
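For a back-of-the-envelope sense of that threshold, here is a thin-lens sketch in Python. Every number is illustrative; the point is that any focus state keeping the blur circle under the acceptable circle of confusion counts as "in focus", so there is a whole band of qualifying states:

    # Thin-lens sketch: blur circle diameter for a subject at distance d when
    # the lens is actually focused at distance s. Numbers are illustrative.
    def blur_circle_mm(f_mm: float, n: float, focused_at_mm: float,
                       subject_at_mm: float) -> float:
        aperture_mm = f_mm / n                                     # entrance pupil
        v_focus = focused_at_mm * f_mm / (focused_at_mm - f_mm)    # sensor plane ___location
        v_subject = subject_at_mm * f_mm / (subject_at_mm - f_mm)  # subject's image ___location
        return aperture_mm * abs(v_subject - v_focus) / v_subject

    # Example: a 50 mm f/2.8 lens focused at 2.4 m with the subject at 2.3 m.
    print(blur_circle_mm(50, 2.8, 2400, 2300))  # ~0.017 mm, a few pixels at ~4 um pitch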
I can back you up on that. I work on a highly custom system that uses EF lenses through an EF-to-serial adapter. The lenses don't even know what position they're at when you power them on. If you want to move them to a specific focus step you have to go through a homing procedure that drives the focus motor to a stop (I don't recall if it's the near or far stop).
Lens-to-lens it's definitely not consistent. Step 142 on two different lenses will have a different focal depth. We calibrate each lens ourselves (thankfully you can read the serial number through the EF mount) and then still have to do closed-loop image-based sharpness estimation to guarantee that things are as good as we can get.
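For anyone curious, a minimal sketch of that kind of image-based refinement: score sharpness from the frame itself (variance of the Laplacian is a common metric) and do a coarse-then-fine sweep of focus steps. The move_to_step and capture_frame callables are hypothetical placeholders, not a real lens interface, and capture_frame is assumed to return a grayscale frame:

    # Sketch of closed-loop, image-based focus refinement around a calibrated
    # starting step. move_to_step() and capture_frame() are placeholders.
    import cv2
    import numpy as np

    def sharpness(gray: np.ndarray) -> float:
        # Variance of the Laplacian: higher means more high-frequency detail.
        return float(cv2.Laplacian(gray, cv2.CV_64F).var())

    def refine_focus(move_to_step, capture_frame, start_step: int,
                     span: int = 20, coarse: int = 5) -> int:
        best_step, best_score = start_step, -1.0
        for stride, center in ((coarse, start_step), (1, None)):
            center = best_step if center is None else center
            for step in range(center - span, center + span + 1, stride):
                move_to_step(step)
                score = sharpness(capture_frame())
                if score > best_score:
                    best_step, best_score = step, score
            span = coarse  # narrow the second pass to the coarse stride
        move_to_step(best_step)
        return best_step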
How could they then tell to what distance they're focused?
I used to have two Ultrasonic lenses, the 17-40/4L and the 17-55/2.8. They both had distance scales which would move around as the lens focused.
My current Olympus lenses have a focus-by-wire manual mode, with a distance scale on the barrel. The camera also reports the focus distance in the EXIF. Are these just vague ballparks?
Some - not all - of the EF lenses have a distance encoder as well. It's fairly approximate; I think it exists mostly for the benefit of the flash system, which needs a rough starting point for the distance in some of the open-loop modes.
For the current Olympus ones, I think there's a broadly similar encoder on the ones with proper manual focus scales, and the pure fly-by-wire ones reset focus at connection so the camera can work it out.
Sorry for the late reply. What we’ve found is that they’re all close but not identical. Our system has a point LIDAR pointed at the ground and then uses that to drive the focal distance for the lens (as well as input for some other sensor fusion stuff). We generally run the lenses wide open (f/2.0 or f/2.8). Mapping step counts to distance would get us close, but unless we did calibration for each individual lens it wasn’t close enough to get the focus tack sharp.

When you’re looking at the focal distance in your EXIF metadata you’re probably not validating whether that 2.4m distance to the subject was actually 2.3m, and for the vast majority of cases the absolute accuracy of it doesn’t matter. Same with the distance markers on the focus ring… you’re not setting the focus within a few cm based on those marks, just getting it close enough that you can do closed-loop focusing with an eyeball in the loop.
Once we did the per-lens calibration curve we didn’t have to do it again. I’m assuming the small differences are just manufacturing tolerance and they’re relying on the focus/phase sensors to get it the rest of the way.
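A minimal sketch of what a per-lens calibration curve like that could look like at runtime, interpolating a focus step from the LIDAR range. The serial numbers, distances, and step counts are all invented:

    # Sketch: per-lens table mapping subject distance (from the point LIDAR)
    # to a focus step. All values below are invented; a real table comes from
    # measuring each individual lens.
    import numpy as np

    CAL_TABLES = {
        "EF-SN-0001": {"dist_m": [0.8, 1.2, 2.0, 3.0, 5.0, 10.0],
                       "step":   [310, 255, 190, 155, 128, 110]},
        "EF-SN-0002": {"dist_m": [0.8, 1.2, 2.0, 3.0, 5.0, 10.0],
                       "step":   [322, 264, 197, 160, 131, 112]},
    }

    def focus_step_for_distance(serial: str, lidar_dist_m: float) -> int:
        cal = CAL_TABLES[serial]
        # Linear interpolation between measured points gets close; image-based
        # refinement (as above) finishes the job.
        return int(round(np.interp(lidar_dist_m, cal["dist_m"], cal["step"])))

    print(focus_step_for_distance("EF-SN-0001", 2.4))  # -> 176 with these made-up numbers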
Correct. The AF sensor continually takes more samples even while the motor is moving. You can tell because sometimes the motor overshoots and then you can see it come back.
Yup… but it’s not necessarily in the same state if you do it twice. It’s close, but not identical. Even if the image metadata claims the focus point is identical, the lens in all likelihood is not in the same state and will have some deviation in its intrinsics.