I would dispute that. I consider both of those examples to be AI, but not general AI and not particularly strong AI.

Meanwhile I do not consider gradient descent (or biased random walk, or any number of other algorithms) to be AI.
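
To be concrete about what I mean: below is a minimal Python sketch of gradient descent minimizing a toy objective f(x) = (x - 3)^2. The objective, starting point, and step size are all arbitrary illustrative choices.

    # Minimal gradient descent on f(x) = (x - 3)^2,
    # whose derivative is f'(x) = 2 * (x - 3).
    def grad_f(x):
        return 2 * (x - 3)

    x = 0.0    # arbitrary starting point
    lr = 0.1   # arbitrary step size (learning rate)
    for _ in range(100):
        x -= lr * grad_f(x)  # step downhill along the gradient

    print(x)  # approaches the minimizer x = 3

Nothing in that loop learns a representation or makes a decision; it just follows a slope downhill, which is why I file it under optimization rather than AI.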

The exact line is fuzzy. I don't feel like most simple image classifiers qualify, whereas style transfer GANs do feel like a very weak form of AI to me. But obviously it's becoming quite subjective at that point.




Yeah, that's moving the goalposts.


How? I agreed with the examples you gave, so that would seem to match your goalposts.

As to my own goalposts, those haven't changed in quite some time. And even if I did happen to disagree with you, someone disagreeing with you does not on its own imply that they've recently moved their own goalposts.

You seem to be assuming that there's some universally accepted definition that's been changing over time but that doesn't seem to be the case to me. What I see is a continual stream of bogus and overhyped claims that get rebutted.

If anything, it's the people trumpeting AI this and AI that who are trying very hard to shift the goalposts for their own benefit.


> I consider both of those examples to be AI, but not general AI and not particularly strong AI.

The original goalpost was AI. You moved the goalpost to general/strong AI. No one was claiming to be working on that back in the day, even if they hoped their efforts would eventually be a step along that path. If you had asked someone even just a few years ago when general AI would become possible, I think most people would have said a date after 2050, if they thought it was possible at all.

> You seem to be assuming that there's some universally accepted definition that's been changing over time but that doesn't seem to be the case to me

I can guarantee you that if you asked someone what AI meant in 1995, the answer would be radically different from what someone would give in 2025. Obviously at both periods of time the boundaries of the definition were fuzzy, but the entire fuzzy blob has undeniably shifted radically.

> What I see is a continual stream of bogus and overhyped claims that get rebutted. If anything, it's the people trumpeting AI this and AI that who are trying very hard to shift the goalposts for their own benefit.

People claiming AI is more capable than it really is move the goalposts further away. If I falsely claim I have an AI that can out-litigate the best lawyers in the world, that certainly makes a real AI that can get a passing grade in an introductory law school class a lot less impressive. No one makes any money from claiming their product will underdeliver compared to what people expect.


I plainly stated that I considered the examples to fall within the bounds of AI. Further clarifying my viewpoint does not change that. Hopefully you can appreciate the need for increased specificity on this subject now that such a wide variety of things exists, from hot dog classifiers, to LLMs that output beautiful but logically nonsensical prose, to KataGo, to whatever else.

Even if we happened to disagree about that clarification (although we appear to agree?), that would not on its own imply that any shifting of goalposts had occurred. The "original goalpost" of which you speak was never mine. To imply that I once held a particular view is to straw man me. If you want to know what I thought in the past, then just ask!

I'd also like to point out that a change in the common usage of a term is not the same thing as the shifting of goalposts. I don't believe that happened here, but it bears pointing out nonetheless.

> asked someone what AI meant in 1995

Who is the someone? I wouldn't expect a layman, either then or now, to have an even remotely rigorous working definition. This matters because if you are going to claim that goalposts have shifted, then you need to be clear about whose goalposts they were and what exactly they were.

> the boundaries of the definition were fuzzy, but the entire fuzzy blob has undeniably shifted radically

It seems we have a fundamental disagreement, then. From my perspective, the term "AI" in popular culture brought to mind an expert system conversing proficiently in natural language, both in the 90s and today. That is to say, a strong and general AI. I think that most laymen today would classify something like ChatGPT as AI, but if pressed with examples of some of its more egregious logical failures, they would probably become confused about the precise definition of the term.

Meanwhile, the technical definition has always been much more nuanced, and it too appears to me to have remained largely unchanged. If anything, I think the surprising revelation has been that you can have such extensive and refined natural language processing capabilities without the ability to reason logically.


You are switching back and forth between your personal definition, a rigorous working definition, and the pop culture definition of "AI". I am talking about the popular definition shifting further and further away from the technical definition, specifically in the context of "the AI effect" brought up in the root comment.

You consider the examples listed to be AI, but not general AI. You believe that the pop culture definition of AI is general AI. Thus you don't believe those things meet the pop culture definition of AI. The discrepancy between your definition of AI and the pop culture definition proves that somebody moved the goalposts.

I disagree with your assumption that the association of natural language processing with AI means that natural language processing has always been exclusively what AI referred to. However, even working with that assumption, you still acknowledge that when presented with a system that does exactly that, people feel their definition needs to change to exclude it.

Realistically though, I think pop culture largely thought of AI the way it was presented in sci-fi stories: a cold, calculating thing that analyzed data and came to decidedly non-human conclusions based on it. Skynet is probably the canonical example. The idea that an AI needs to understand information in the same way we do definitely wasn't always part of the definition.



