Hacker News

While this is promising, there's a long way to go between this and the other things you mentioned. Go is very well-defined, has an unequivocal objective scoring system that can be run very quickly, and can be simulated in such a way that the system can go through many, many iterations very quickly.

There's no way to train an AI like this for, say, health: We cannot simulate the human body to the level of detail that's required, and we definitely aren't going to be able to do it at the speed required for a system like this for a very long time. Producing a definitive, objective score for a paper clip collection is very difficult if not impossible.

AlphaGo/DeepMind represents a very strong approach to a certain set of well-defined problems, but most of the problems required for a general AI aren't well-defined.




> most of the problems required for a general AI aren't well-defined.

Do you care to give an example? Are they more or less well defined than find-the-cat-in-the-picture problem?

> Producing a definitive, objective score for a paper clip collection is very difficult if not impossible.

Erm, producing an objective comparison of the relative values of Go board positions is still not possible.


> Do you care to give an example? Are they more or less well defined than find-the-cat-in-the-picture problem?

You mean like go over and feed the neighbor's cat while they're on vacation?

How about instead, being able to clean any arbitrary building?

Go isn't remotely similar to the real world. It's a board game. A challenging one, sure, and AlphaGo is quite a feat, but it's not exactly translatable to open ended tasks with variable environments and ill-specified rules (maybe the neighbor expects you to know to water the plants and feed the goldfish as well).


At this point, there is no evidence that the limiting factor in these cases is AI/software.

The limiting factor with the neighbor's cat is the robotics of having a robust body and arm attachment. We know that current AI can already:

1) Identify a request to feed a cat

2) Identify the cat, cat food and cat's bowl from camera data

3) Navigate an open space like a house

Being able to clean an arbitrary building is also more the challenge of building the robot than the AI identifying garbage on a floor or how to sweep something.

It is no longer clear that there are hard theoretical limits on AI. There are economic limits based on the cost of a programmer's attention, and there are plenty of hardware limits (including processor power).


In my opinion the deepest and most difficult aspect of this example is the notion of 'clean' which will be different across contexts. Abstractions of this kind are not even close to understood in the human semantic system, and in fact are still minimally researched. (I expect much of the progress on this to come from robotics, in fact.)


I remember seeing a demonstration by a deep learning guy of a commercially available robot cleaning a house under remote control. You are seriously underestimating the difficulty of developing software to solve these problems in an integrated way.


This. It is a lot like the business guy thinking it is trivial to program a 'SaaS business' because he has a high-level idea in his mind. Like all things in programming, the devil is in the details.


The hardware is certainly good enough to assist a person with a disability living in a ranch house with typical household tasks. As demonstrated by human in the loop operation.

https://www.youtube.com/watch?v=eQckUlXPRVk


We have rockets that can go to orbit, and we have submersibles that can visit the ocean floor. That does not mean the rocket-submarine problem is solved; doing both together is not the same problem as doing each separately.


It also doesn't mean that a rocket-submarine is the way to go.


The difference is that a Go AI can play billions of games, and a simple 20-line C program can check, for each game, who won.

For "cat in the picture", every picture must have the cat first identified by a person, so the training set is much smaller, and Google can't throw GPUs at the problem.
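The "cheap objective check" claim is the crux here, so it's worth seeing how small the checker really is. A minimal sketch in Python rather than C, using area (Chinese-style) scoring, and assuming dead stones have already been removed (which is the one genuinely hard part of scoring a real game):

```python
def score_go(board, komi=7.5):
    """Area-score a finished, square Go board.

    board: list of strings, one char per point: 'B', 'W', or '.'.
    Assumes all dead stones were already removed before scoring.
    Returns black_score - white_score (positive means Black wins).
    """
    n = len(board)
    seen = set()
    black = sum(row.count('B') for row in board)
    white = sum(row.count('W') for row in board)
    for i in range(n):
        for j in range(n):
            if board[i][j] != '.' or (i, j) in seen:
                continue
            # Flood-fill this empty region, noting which colors border it.
            region, borders, stack = [], set(), [(i, j)]
            while stack:
                x, y = stack.pop()
                if (x, y) in seen:
                    continue
                seen.add((x, y))
                region.append((x, y))
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < n and 0 <= ny < n:
                        c = board[nx][ny]
                        if c == '.':
                            stack.append((nx, ny))
                        else:
                            borders.add(c)
            # A region bordered by only one color is that color's territory.
            if borders == {'B'}:
                black += len(region)
            elif borders == {'W'}:
                white += len(region)
    return black - (white + komi)
```

That really is the whole referee, which is exactly why self-play training works: billions of games, each verified for free. No analogue exists for "is this building clean?".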


> Google can't throw GPUs at the problem.

The field progresses swiftly. https://arxiv.org/abs/1602.00955


The absolute value of any Go board position is well-defined, and MCTS provides good computationally tractable approximations that get better as the rest of the system improves but already start better than random.
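The "computationally tractable approximation" being referred to is just averaged random playouts: play the position out to the end many times with random moves and average the outcomes. A minimal sketch of rollout evaluation, using tic-tac-toe instead of Go purely to keep the example self-contained (board is a 9-character list; all names here are illustrative, not from any AlphaGo code):

```python
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def rollout_value(board, player, n_rollouts=2000, rng=random):
    """Estimate the value of `board` for `player` ('X' or 'O', with
    `player` to move) by averaging random playouts:
    +1 for a win, 0 for a draw, -1 for a loss."""
    total = 0
    for _ in range(n_rollouts):
        b, turn = list(board), player
        while winner(b) is None and '.' in b:
            move = rng.choice([i for i, c in enumerate(b) if c == '.'])
            b[move] = turn
            turn = 'O' if turn == 'X' else 'X'
        w = winner(b)
        total += 0 if w is None else (1 if w == player else -1)
    return total / n_rollouts
```

The estimate starts better than random guessing even with a uniform playout policy, and improves as the playout policy improves, which is the property the parent comment is pointing at.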


Check the Nature paper (and I think this is one of the biggest take-aways from AlphaGo Zero):

"Finally, it uses a simpler tree search that relies upon this single neural network to evaluate positions and sample moves, without performing any Monte Carlo rollouts."

In this new version, Monte Carlo rollouts are not even used to evaluate a position! Speaking as a Go player, the ability of the neural network to accurately evaluate a position without "reading" ahead is phenomenal (again, see the last page of the Nature paper for details).
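The structural change described in the quote is that leaf evaluation in the tree search becomes a single network call returning move priors and a position value, instead of playing games out to the end. A toy sketch of that interface only, with a tiny randomly initialized MLP standing in for the trained network (the real architecture is a large residual network; nothing here reflects actual AlphaGo Zero code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in parameters: 81-point (9x9) board in, 32 hidden units,
# a policy head over 81 moves and a scalar value head.
W1 = rng.normal(size=(81, 32))
W2_policy = rng.normal(size=(32, 81))
W2_value = rng.normal(size=(32, 1))

def evaluate(position):
    """position: length-81 array (+1 black, -1 white, 0 empty).
    Returns (priors over the 81 moves, value estimate in [-1, 1])."""
    h = np.tanh(position @ W1)
    logits = h @ W2_policy
    priors = np.exp(logits - logits.max())
    priors /= priors.sum()                # softmax over moves
    value = float(np.tanh(h @ W2_value))  # single scalar, no rollout
    return priors, value
```

The tree search then uses `priors` to bias which branches to expand and `value` in place of a rollout result, so one forward pass replaces an entire simulated game.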


the absolute value of a Go board position is well-defined? where?


As a human Go player, I can say that evaluating a board position is close to impossible.

You may have a seemingly good position, and two turns later it turns out you have already lost the game.


> We cannot simulate the human body to the level of detail that's required

A-ha! So we use AGI for this! :-)



