
Computing power absolutely does matter, because it allows us to run more complicated experiments in a reasonable amount of time, which is crucial for moving research forward. Today we work on the low-hanging fruit so that tomorrow we can reach for something higher. As a side note, your comment about runtime complexity does not make much sense when there exist problems which provably cannot be solved in linear time. It is dangerous to discourage research on that simplistic basis; we could have much more powerful POMDP solvers today (for instance) if people hadn't been scared off by overblown claims of intractability fifteen years ago.
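A concrete instance of a provable super-linear lower bound is comparison sorting: any comparison sort must distinguish between all n! possible orderings, and k yes/no comparisons can distinguish at most 2^k of them, so at least log2(n!) ≈ n·log2(n) comparisons are required. A minimal sketch of that counting argument (the function name is just illustrative):

```python
import math

def min_comparisons(n: int) -> int:
    """Information-theoretic lower bound on the number of comparisons
    any comparison-based algorithm needs to sort n distinct items:
    k comparisons distinguish at most 2**k orderings, and there are n!
    orderings, so k >= log2(n!)."""
    return math.ceil(math.log2(math.factorial(n)))

# The bound grows faster than n, i.e. no linear-time comparison sort exists.
for n in (8, 64, 1024):
    print(n, min_comparisons(n))
```

For n = 64 the bound is already close to 300 comparisons, comfortably above n itself, and the gap only widens from there.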



>> your comment about runtime complexity does not make much sense when there exist problems which provably cannot be solved in linear time.

Look, it's obvious the human mind manages to solve such problems in sub-linear time. We can do language, image processing and a bunch of other things still much better than our algorithms can. And that's because our algorithms go the dumb way, trying to learn approximations of probably infinite processes from data, which can't be done in linear time at best. In the short term, sure, throwing lots of computing power at that kind of problem speeds things up. In the long term it just bogs everything down.

Take vision, for instance (my knowledge of image processing is shaky, but still). CNNs have made huge strides in image recognition and the like, and they're wonderful and magickal, but the human mind still does everything a CNN does, in a fraction of the time and with context and meaning added on top. I look at an image of a cat and I know what a cat is. A CNN identifies an image as being the image of a cat and... that's it. It just maps a bunch of pixels to a string. And it takes the CNN a month or two to train at the cost of a few thousand dollars; it takes me a split second at the cost of a few calories.
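To make the "maps a bunch of pixels to a string" point concrete, here is a toy sketch of what a trained classifier's forward pass boils down to: a fixed function from pixel values to a label. The weights below are random stand-ins (a real network would have convolutional layers and weights from weeks of GPU training), and the label set is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

LABELS = ["cat", "dog", "aardvark"]  # hypothetical label set

# Random stand-ins for weights that, in a real network, would come
# from an expensive training run over a huge labelled dataset.
W = rng.normal(size=(len(LABELS), 32 * 32 * 3))
b = rng.normal(size=len(LABELS))

def classify(image: np.ndarray) -> str:
    """Map a 32x32 RGB image to a label string: flatten the pixels,
    apply one linear layer, and pick the highest-scoring label."""
    logits = W @ image.reshape(-1) + b
    return LABELS[int(np.argmax(logits))]

image = rng.random((32, 32, 3))
print(classify(image))  # prints one of the three labels
```

The output is just the string at the end of that pipeline; nothing in the function "knows" what a cat is beyond the arithmetic that produced the argmax.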

It would take me less than a second to learn to identify a new animal, or any new thing, from an image, and you wouldn't have to show me fifteen hundred different images of the same thing in different contexts, different lighting conditions or different poses. If you show me an image of an aardvark, even a bad-ish drawing of one, I'll know an aardvark when I see it _in the flesh_ with very high probability and very high confidence. Hell, case in point: I know what an aardvark is because I saw one in a Pink Panther cartoon once.

What we do when we train with huge datasets and thousands of GPUs is just wasteful: it's brute forcing, and it's dumb. We're only progressing because the state of the art is primitive and we can make baby steps that look like huge strides.

>> It is dangerous to discourage research on that simplistic basis

It's more dangerous to focus all research efforts on a dead end.


It takes many months to train a human brain to recognize what a cat is, and far more than a few calories - and it needs not only a huge amount of data, but also the ability to experiment; e.g. we have evidence that passive seeing, without any movement or interaction, is not sufficient for a mammalian brain to learn to "see" usefully.

Your argument about classifying images trivially excludes the large amount of data and training that any human brain experiences during early childhood.


>> Your argument about classifying images trivially excludes the large amount of data and training that any human brain experiences during early childhood.

Not at all. That's exactly what I mean when I say that the way our brain does image recognition also takes context into account.

Our algorithms are pushing the limits of our computing hardware, and yet they have no way to deal with the context a human toddler has already collected in his or her brain.

>> It takes many months to train a human brain so that it would recognize what a cat is, far more than a few calories

I noted it would take _me_ less than a second to learn to identify a new animal from an image. Obviously my brain is already trained, if you like: it has context, some sort of general knowledge of the world, that is still far, far beyond what a computer can handle.

I'm guessing you thought I was talking about something else, a toddler's brain maybe?





