Knowing less about AI makes people more open to using it (theconversation.com)
103 points by botanicals6 3 months ago | 82 comments



This seems to be largely predicated on the idea that the more you understand how AI works the more the magic disappears[1], but I think the opposite is true -- I like to think I understand this stuff from the logic gates to backpropagation to at least BERT-era attention, and the fact that all of those parts come together into something I now spend hours a day conversing with in plain English to solve real problems is absolutely awe-inspiring to me.

[1] from the source article “we argue that consumers who perceive AI as magical will experience feelings of awe leading to greater AI receptivity”
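
A tangent for anyone who wants to peek behind the curtain: the "BERT-era attention" I mean is small enough to sketch in a few lines. Below is a toy scaled dot-product self-attention in NumPy (random weights and a made-up function name, purely illustrative, not any real model's code):

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        # Project each token embedding into query, key, and value vectors
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        # Scores say how much each token should attend to every other token
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        # Row-wise softmax turns scores into attention weights
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        # Each output is a weighted blend of the value vectors
        return w @ V

    rng = np.random.default_rng(0)
    X = rng.normal(size=(3, 4))           # 3 tokens, 4-dim embeddings
    Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv))  # (3, 4) output, one row per token

Stack a few dozen of these (plus MLPs and layer norms) and you get the thing I now talk to every day; that's the part that still floors me.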


I don't know. Maybe you're more in the magical group than you think. I spend hours a day helping my client with prompt engineering for customer service training, and in my experience most of the awe and positivity I see from people comes from a failure to rigorously evaluate the output they're getting back.


Also, for many people I know who don't properly understand "AI", the reaction is less "awe" and more "fear". It's a real challenge to get them to take any of the actual benefits of using it properly at face value, because of all the things they worry about due to sci-fi nightmare scenarios and Facebook disinformation (not to mention the genuine and valid fears piled on top of all that: misuse and abuse by corporations and governments, as well as other "bad actors" like spammers and scammers).


I find it's the opposite. A lot of people think it's like a Google/Wikipedia summarizer, and don't realize it can be wrong because they think it's something like the Watson computer from Jeopardy.


Yeah, that's definitely another sub-set of the "completely misinformed about / totally misunderstand AI" crowd. :)


I think there are a lot of technological advancements that are easy to "not like" when you know some select few details about them. Including literal sausage making as another commenter on here mentioned.

T-shirts are great until you hear about the conditions of where they're made and how their disposal is managed. Social media is great until you realize how much they know about you and how they use that knowledge. Modern medicine is easy to not like when you look at the animal experiments that made it happen. And again, sausages - I know some vegetarian folks who are vegetarian in protest of how most meat is produced.

I kind of wonder if there is a subset of comfortable modern society where every aspect is easily likable no matter how much you know about it. Bonus points if that society is environmentally sustainable.


Sure, but I think there is a key difference between those examples and LLMs. In those cases, people don't necessarily mind the machines involved in the process; they dislike the socioeconomic and labor structures around those machines, or the animal cruelty. That is a side effect of the social organization of labor, not of the machine itself. Here, by contrast, the claim is that the machine itself becomes less impressive once you know how it works. That is probably true of any machine, but the t-shirt and sausage examples are not about demystifying the technical function.


Yes good call, I do seem to have misinterpreted the text a bit.

I suppose using the word "impressive" as you did might make that misunderstanding a bit harder to run into.


The difference is that once you know how a t-shirt is made, it doesn’t change your perception of the functionality of a t-shirt.

Laymen will hear AI and imagine I, Robot and Terminator. They hear "neural networks" and think AI acts like a physical brain.

Once you understand how AI works, your perception of the functionality changes as well (e.g. from Skynet to reality).

I hear “sweat-shop factories” and, while it’s a disgusting practice, the t-shirt still covers my torso.


Life is [generally] inherently exploitative. Vegetarianism and veganism are a first-world luxury; the bottom line is that animal protein is a readily available, high-quality source of calories for helping children grow into healthy adults*.

Learning to exist in the world and hold the uncomfortable parts of being a human being has been a valuable and useful skill in my life across many dimensions.

* I'm not advocating for eating excessive meat, but going to extremes to avoid it does suggest some other factor is in play.


> Vegetarianism and veganism is a first world luxury

"Luxury" makes me choke : animal protein has always been seen as a luxury meal in most cultures, one that you don’t have every day, as found in any authors from XIX or early XX and you’ll never seen animal protein as a poor food, let alone meat [0]

For 1 kg of plant protein fed to livestock, you get back about 230 g of milk protein, 220 g of egg protein, 150 g of chicken protein, 120 g of pork protein, or 70 g of beef protein. There are differences in protein bioavailability, but on a much smaller scale. Regarding quality: most traditional societies have combined plant proteins in their daily diets: corn and beans (Latin America), chickpeas and bulgur (Middle East), rice and lentils (India), soy foods and rice (China, Japan, Korea), millet and peanuts (Africa), rice and tempeh (Java, a very poor place with one of the highest population densities in the world). The list goes on. Modern science has shown that these combinations dramatically improve the balance of limiting amino acids, like lysine and cystine/methionine for soy and brown rice.

Plants have always been the poor's staple protein because they are dirt cheap, nutritious and convenient. More importantly for the future: they require many times less land to grow, because of the ratio of protein consumed to protein returned by animals (see above). Wild fish is an exception, but relative yields have been decreasing for decades, compensated only by more and bigger boats. Also, a fifth of the world's wild catch goes to feeding livestock and farmed fish.
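
To make the land-use point concrete using the ratios above:

    1 kg plant protein -> ~0.15 kg chicken protein  =>  ~6.7 kg of feed protein per kg of chicken protein
    1 kg plant protein -> ~0.07 kg beef protein     =>  ~14 kg of feed protein per kg of beef protein

Eating the plant protein directly skips that multiplier entirely.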

It's true (and sad, in my opinion) that plant proteins are now a luxury in certain cities and places. The reasons are habits and taste preferences, global wealth imbalance (Europe imports 60 kg/person/year of Brazilian soy just for livestock feed, a quantity that would cover 50% of people's protein needs if consumed directly), and the subsidies and price regulations that prop up an unsustainable dairy industry.

I'm not advocating for everyone eating soy, but let's face it: feeding human-edible protein to livestock is not the most efficient way to eat.

[0] L'Assommoir by Émile Zola, loosely translated by myself: "On Sundays, when there was work at the mine, we'd have a bit of bacon, and on the rare good days, a chunk of beef the size of your fist. It was a real treat."


The vast numbers of low income vegetarians in India would no doubt find this argument very novel.


..if it weren't so regularly rolled out by the ignorant.


There's probably a reversal of that trend when you know a lot about it. I don't know how you could go through building your own GPT-2 and not see it as an incredible technology and a useful advancement that is going to give us an enormous number of insights and tools.

I still can't get over how soon people just accepted that you can universally translate all languages, that audio can be accurately transcribed, that you can speak a sentence and get pretty good research on a topic. These things are all actually insane and everyone has just decided to ignore this and focus on the aesthetic grossness of some of the things people are trying to get them to do.


Because the only time I can be confident in the answers it gives me is when I already knew the answer before asking, which makes using the LLM pointless. Anything else would require me to do research to fact-check it anyway. Might as well skip the LLM and do the research directly.

This is what a lot of people forget about LLMs: if you didn't already know what the answer should be, you won't know when it's giving you crappy responses. And if you do know what you're talking about and actually catch it giving wrong information (or think you did), it'll just say "whoops... you're right!". Sometimes even if you aren't.

The only things I can feel confident asking an LLM is stuff that I already know.


I mean, if you're willing to accept mistakes, then a lot of this was doable a while ago. Google Translate has been around a long time (it has improved thanks to transformers, but it was pretty good long before that too). Wikipedia has been around forever. Siri and other such agents have been doing audio transcription reasonably well for years.

So all of these have improved. But they aren't completely novel.


There were shareware Windows programs for audio transcription and even controlling the computer with your voice back in the 90s, I had some fun playing with one of them around 1999/2000. You had to train it to your voice by reading some text it gave you, but after that was pretty good. The modern stuff is definitely better untrained, but I'm not so sure about after training, though it has been a long time since I've used it.


Yeah I remember discovering the voice controls built into Windows Vista when I was a kid and being blown away by it.


Or on the far end of the literacy spectrum, we have serious techno-optimists who understand the fullest potential of lifesaving research (AlphaFold, etc.).


Well, that end of the spectrum isn't exactly language models.


Both AlphaFold and LLMs are built around transformers.


That's pretty much where the similarity stops, though.


I wonder if the "literacy" about AI is not so much about the inner workings of the technology, but about the broad ramifications of its use.

It takes at least a wee bit of sophistication to go from "this is a neat search engine, and it can help me with my writing tasks" to "the stuff that's generated by this thing is going to affect society in unpredictable ways that are not necessarily positive."

A similar spectrum of attitudes may already exist for social media, where the term "algorithm" predated the term "AI."


> These insights pose a challenge for policymakers and educators. Efforts to boost AI literacy might unintentionally dampen people’s enthusiasm for using AI by making it seem less magical. This creates a tricky balance between helping people understand AI and keeping them open to its adoption. To make the most of AI’s potential, businesses, educators and policymakers need to strike this balance.

Why would we avoid educating people, in order to keep them willing to use AI? Why is getting people to use AI seen as a good in itself? Did AI write this article? (Don’t answer that.)


You're striking at the fundamental irrationality behind the current hype cycle. It is still, in the vast majority of cases, a solution looking for a real problem. Since there isn't yet a clear benefit or problem to solve that justifies the high costs, people are trying to make their (silly) bets come true by brute force (get everyone to use this thing so I get my ROI, whether it makes any actual sense or not).


I'm sure it's already a solution to a lot of things that just aren't sexy or justifiable at the current cost. For instance, if it were cheap enough, I would never have to use a keyboard again, which, due to a disability I have, is becoming more difficult. That would almost certainly help a lot of people, but since it isn't, like, the next iPhone, people turn their noses up at it.

Solutions like Neuralink, if they mature, could probably use some sort of LLM-powered translation layer to machine instructions (neither technology is near perfect yet). Things like that I think we can technically do already; it's a matter of refining. The magical stuff, though, I'm less sure of.


Have you tried dictation apps built on Whisper? They're pretty damn magic to me. Also they are often small enough to run locally.
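
For anyone curious, local transcription is only a few lines with the open-source whisper package. A minimal sketch (the file name is a placeholder; you'd need "pip install openai-whisper" plus ffmpeg installed for audio decoding):

    import whisper

    # Load a small model locally; "medium"/"large" are more accurate but slower
    model = whisper.load_model("base")

    # Transcription runs entirely on-device, so no audio leaves the machine
    result = model.transcribe("dictation.wav")  # placeholder file name
    print(result["text"])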


Due to quirks in how I speak, most dictation apps fail me miserably, and I don't like giving apps a lot of microphone access. I think dictation would be great, but it's sometimes hard to dictate at the speed one can think and type. I am hopeful for more technologies in the future that help disabled people gain more technical access, whether that's sci-fi stuff like Neuralink or web protocols friendlier to screen readers and the like. LLMs seem to slot into some layers there, somewhere, to me. One of the only great uses I get out of them is that code completion + vim greatly saves on my typing. Fewer keystrokes is good for anyone's hands, and it seems good at predicting what you want to do rather than what it thinks you should do.


I would love to understand your thinking around not wanting to give apps microphone access while simultaneously being excited about the prospect of giving apps direct access to your literal brain.


If my choice is between two invasive technologies, I'd probably end up choosing the one that works better, and I think it will be done irresponsibly; but if it means retaining access to the world around me, it's difficult to see any real choice other than slowly becoming locked in one's body. I have a degenerative muscle disease and only about 20 years left before my voice muscles fail me, if my heart does not give out first. So that is my thinking. Right now the real issue with adoption (for me) is that these tools simply don't work well enough to justify giving up that privacy. If I regress further and that's all I can reach for, I would keep trying; it's just very frustrating to use.


That makes sense, thanks for your response!

I’m truly sorry to hear about the challenges you’re going through, and I genuinely hope improvements in the technological landscape afford you greater access to the world in each coming year than is taken away.


The issue is I slur badly due to multiple disorders and these apps can't figure it out. People with many other disorders struggle too.


> For instance, if it were cheap enough, I would never have to use a keyboard again, which, due to a disability I have, is becoming more difficult. That would almost certainly help a lot of people

I mean, sure, but it's a disability aid which isn't useful to the average person. I don't think most people are saying that 'AI' is entirely useless, but you're mostly talking about niche uses like this, not the world-changing thing it is being sold as.


I don't understand how this facilitates or admits any discussion.

If I tried to learn more about how you feel, anything I said that questioned what you wrote would be cast as support for something vapid, negative, and worthless that doesn't solve anything: a "solution" looking for a problem.


> Why would we avoid educating people, in order to keep them willing to use AI? Why is getting people to use AI seen as a good in itself?

because you can't extract a few tenths of a cent every time someone engages their brain


> you can't extract a few tenths of a cent every time someone engages their brain

Don't give the startups any ideas


A non cynical answer: probably to try and encourage people to find new and useful ways to use AI, while learning about their limitations (e.g. hallucinations) and their strengths (e.g. rephrasing, autocomplete, concept art).


> (Don’t answer that.)

It is nevertheless important to say out loud:

> Why is getting people to use AI seen as a good in itself?

Because user counts pump up the stock price. And that is all AI has.

Whether you believe the claims that inference is profitable or not (and there are good reasons to distrust them), AI does not live up to the financial hype.

AI cannot stand on its own merits. It's not acceptable to let history run its course and let the AI skeptics be proven wrong in due time, because that would dampen the hype, and perhaps the skeptics aren't so wrong after all. The people can't be educated into a healthy skepticism of AI, because then they wouldn't use it enough.

It's readily obvious that the emperor has no clothes. The actions of the companies and executives involved betray their statements about how great AI is.

AI is forced into products at deeply subsidized prices. You wouldn't do that if the tech were that big a deal. Apple charged premium prices for the iPhone.

Benchmarks are aggressively cheated. OpenAI funding FrontierMath with only a verbal agreement not to train on it, after having already broken so many such agreements, is a joke. If the systems actually worked as promised, there would be no reason for this mess, and every reason in the world to gather accurate data on the generality of the intelligence.

And biggest of all: this entire mess has the implied framing of the Manhattan Project. That it's all a big race towards AGI, and whoever develops AGI will win capitalism forever. So important that they're getting support from the US government with their "Stargate" project. And until rather recently, everyone was making lots of noise about AI safety and the world-destroying dangers of letting someone else develop AGI.

In 1942 Georgii Flyorov deduced the Manhattan Project's existence from the sudden silence in published nuclear fission research.

Today, despite stakes that are proclaimed to be even higher, all the big players will not shut up about their accomplishments. Everything is aggressively published and propagandized. Every single fart an AI model makes is spun into a research paper. You might as well mail the model weights directly to Beijing.

Those are not the actions of companies trying to win an R&D race. Those are the actions of companies pushing up their stock price by any means necessary.


Thanks for the sober perspective.

I really wonder what "losing the AI race" (typically meaning USA vs China) is supposed to indicate.

They have a better LLM or something... and then what? A rogue chatbot takes over the world?

We're two-plus years into being a few months away from LLMs taking every office job, and I'm still at a total loss as to where this is all supposed to go or what I'm even supposed to be sold on.


None of this thread means anything at all; it's 90% nihilistic cynicism wedded to 10% regurgitated talking points from their training data.

The real high-school-sophomore smelling thing, which you'd miss through all the purple prose about the Manhattan Project, is "open research is bad and proves it's fake bunko crap...that the Chinese are stealing(!?)"

I've been here for 15 years and am shuddering to think there are commentators here who would start selling you "open is bad" the instant they had a soapbox to pound their chest on.


That's not what they wrote; you've misinterpreted it. The point was that if the claim were true, namely that we're close to AGI and we (the US, or EU, or China, etc., depending on where you live) must create it first or someone else will beat us to it with no recovery from that, then the current behaviour makes no sense.

So someone is lying somewhere, and the end result is pumping stock prices.


Sure, if we only admit facts from its cinematic universe, it makes sense. It's incoherent as soon as you take a step out of the trees and into the forest. For instance, we also know there's a well-made and popular argument that OpenAI is closed.

It's bizarre to flatten it to "they're giving it away because it's worthless but they're lying and saying its worth something so their stock goes higher", even setting aside the questionable premises that relies on.


Is there already something more advanced than LangChain? I haven't really seen many integrated AI applications, aside from bots of course.


We lose the race when politicians platform to bring AI jobs back from China.



Titled: "Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible?"


the profit motive


The thought that there's a magical machine that I can just ask for an answer and it's going to provide the correct one is absolutely thrilling, but then I remember that at least 1/3rd of LLM-provided answers in my own domains of knowledge turn out to be wrong upon closer inspection.

I don't think the human brain has evolved to deal with a machine that promises to have all the answers, always speaks with an authoritative tone, and is trained to be agreeable. As a species we like shortcuts and we don't like to think critically; an easy-to-get answer is always going to be infinitely more appealing than a correct answer, no matter how wrong it is.

Right now there's a generation of kids growing up who believe that they don't have to learn anything because LLMs have all the answers. World leaders don't seem concerned about this, likely because a dumb population who doesn't know how to think critically is easy to control.


It’s just like politics!

The less you know about any politician, the more you like them!


As Phillip Kotler says marketing is cosmetic cheating AI marketing wants all not to know the intricacies and risks by analysing it.They want to celebrate what we get.Sounds like bounded rationalities canvassed by marketing cheaters.


>As Phillip Kotler says marketing is cosmetic cheating AI marketing wants all not to know the intricacies and risks by analysing it.They want to celebrate what we get.Sounds like bounded rationalities canvassed by marketing cheaters.

Your comment is incomprehensible. Please use commas.


Here’s a try:

> As Phillip Kotler says: “marketing is cosmetic cheating”. AI marketing wants [us] all not to know the intricacies and risks [that we would find] by analysing [the AI]. [The AI labs] want us to celebrate what we get.

Sounds like bounded rationalities [that have been] canvassed by marketing cheaters.


Americans tend to like their own House representatives far more than they like Congress writ large.


And sausage-making...


Eh, politics ain't that special when it comes to this phenomenon. "Don't meet your heroes" works for pretty much every field.


What I find fascinating is how much it feels like a conversation, but also like a conversation with yourself, since you're asking the questions and the AI echoes them back at you. So I guess one way it can be used is to figure out things about yourself, like a journal, except the AI helps you explore the questions you have.

"The Socratic Method involves a shared dialogue between teacher and students. The teacher leads by posing thought-provoking questions. Students actively engage by asking questions of their own. The discussion goes back and forth."

So AI can be an awesome learning tool, for anyone, at any level. Like many technologies, it is what you make it to be. Even things like Instagram can be educational places.


It's kind of weird how AI isn't yet about robots or reasoning but merely the last phase in the evolution of search engines. An LLM is essentially an extremely user-friendly search engine without the commercial/product aspect. It's an ideal information engine.

It's understandable why LLMs pose a major threat to Google. Google search is essentially an information engine where the information is intermingled with junk commercial content.

These days I only use Google for cases where I want to go to a website I already know but I'm too lazy to type in the URL or use a bookmark. The utility it provides now is merely a convenience.


Right, until they start stuffing the junk commercial content into the LLM interfaces, which will happen eventually.


Reminds me of the saying (internet meme):

"Give a man a game and he'll have fun for a day. Teach a man to make games and he'll never have fun again"


> balance between helping people understand AI and keeping them open to its adoption.

Why is the latter a goal?


The paper referenced is behind a paywall. This article is really hard to understand because the paper it is reporting on uses "AI literacy" as the determinant of "openness to AI". I'm very curious about what they mean by "AI literacy".



That's how scams work.


I don't know how my iPad works but I like using it a lot. I have no clue how strawberries get from a field to my grocery store but I really like eating them.

There’s a limit to how much I am willing to learn and there’s a heck of a lot of things I’ll just accept at face value because I believe they make my life better.

So far AI makes my life better so I don’t particularly care to learn about it.


The assertion here is that there is a negative correlation between knowledge of AI and enjoyment of AI. For both iPads and strawberries, I expect you would not like them less were you to learn more about them.


I would also venture a guess that someone on Hacker News who claims they know nothing about how an iPad or produce transport works isn't necessarily someone to pay close attention to.


I can kinda bullshit an answer about chips and whatever or that there’s fertilizer and migrant labor involved.

But do I really know anything beyond what I’ve read in a New Yorker article that I skimmed? Not really.


How fun do you think the iPad factories and strawberry fields are to work in? How many pesticides are on the strawberries? How many people in Africa get killed so some warlord can run the cobalt mines?


That's a fine and reasonable take, but that's not what the article is about at all.

> those with less understanding may see AI as magical and awe inspiring. We suggest this sense of magic makes them more open to using AI tools.

> this link between lower literacy and higher receptivity persists even though people with lower AI literacy are more likely to view AI as less capable, less ethical, and even a bit scary


Ignorance is bliss.


[flagged]


Given the paper, you must really understand AI.

I feel like I have a good grasp of the math behind LLMs, but I don't understand them in the sense that I don't understand why people find them useful. I was part of a trial using them at work. The chat-based ones were worthless for any problem harder than a 400-level CS class exercise, and harder problems are what we're actually paid to solve. The autocomplete ones produced code so poor it was constantly yanking me out of flow.


Just release an AI agent that replaces lawyers.


The fact that anyone thinks this will ever happen reveals a fundamental misunderstanding of what a lawyer does. They are responsible and liable for how your case is presented to a judge. An AI can't be liable and responsible for that. It will still need someone to rubber-stamp its output, and that person will be a credentialed lawyer.


True, but one lawyer can do a huge amount of rubber stamping. That takes the price of billable hours to nearly zero.

Don’t get me started on replacing judges with AI agents. At the end of the day they’re just lawyers as well.


You… may misunderstand what the 'rubber stamping' is actually about. A lawyer signing off on something, at least theoretically, means that someone has read it, understood it, considered how it pertains to [whatever], and can stand over it. An LLM pretending to do the above is not a reasonable substitute.


> A lawyer signing off on something, at least theoretically, means that someone has read it, understood it, considered how it pertains to [whatever], and can stand over it.

How often does the lawyer/judge actually read everything as opposed to trusting the work of their paralegals/clerks?


My wife was a clerk. The judges read every line multiple times because most clerk submissions go through multiple drafts


"Rubber stamping" is an English idiom that specifically means to approve without reading or understanding it.


Haha, what? You have to read the entire thing to rubber-stamp it and then fix any issues. It's really not going to take billable hours to nearly zero at all. And it will be even more difficult for judges, whose rulings set precedent and thus must be carefully worded.


First release one that replaces programmers then we'll talk :).


I think we can all agree the first to go should be the VCs.


"Thou shalt not make a machine in the likeness of a human mind."



