
"For the few seconds people brush their teeth"? That's not how fluoridated drinking water works. Fluoridated water works all of the time, not just when brushing teeth, and it's not a vital chemical that the body craves.

You are missing something. If you're this confused about a topic, you should at a bare minimum read the Wikipedia page.


I've read it and am not convinced we need to be ingesting fluoridated water. Neither are Europe and most other countries, and their dental health is fine.

Yeah because in Europe we add fluoride and iodine to table salt, as well as to our toothpaste.

Also, we don't have anywhere close to the sugar consumption the US has, which keeps both our diabetes and dental health issues at rates far below the US.


The questions you posed are not questioning fluoride, they're asking what the basic premise even is. If you don't understand that, you are far from being in a position to evaluate and analyze its necessity or benefits.

The Wikipedia page you mentioned reading also points out that it's not only a US thing. Or even a water-only thing.


When I see an argument with a phrase like "basic premise" I know I'm reading some mumbo jumbo; otherwise the author would just give their summary of that "basic premise" instead of dead-linking it (referring to something without actually spelling it out).

You don't have an argument yourself; you just wanted to share that you support some position.


It is a severe reach to say that AI can provide human connection. At best, it can provide the illusion, for some. And for those people, I'm not going to say that that's not valuable or legitimate for them. But it's pushing it one step too far to imply that that's a good idea for everyone.

God help us if we determine there is nothing special about human cognition. A lot of people are putting a lot of faith in what amounts to the soul. I’m not at all sure it exists.

Existence of a/the soul isn't the only reason machines cannot replace people in the context of something like this.

One reason is that the human experience is dependent upon the biological nature of man. The biological systems color the experience. The pumping of blood, the nervous system, the heart beating, and ultimately, one's awareness of the specific type of mortality inherent to biological organisms, are integral to the experience. If you accurately reproduce that experience then perhaps you've simply made a human rather than a machine. Of course that claim spurs many subsequent philosophical arguments. Ultimately though, a video game console emulator is not the literal console no matter how accurate it is.

A second reason is simply the subjective experience of a person. Regardless of how accurate the simulation is, ultimately, if the person is aware the other end isn't human, the experience is tainted (for better or worse depending on the individual's opinion - but tainted nonetheless). Knowledge of the truth will necessarily affect the experience. The alternative - being in the dark or outright deception - raises other questions of genuineness that taint the experience.

A conversation with a human, by another human, will never be the same as a conversation with a machine - by definition.


Both can be true. There can be nothing physically or metaphysically "special" about human cognition, and at the same time, we can also be very, very far away from creating even a holistic facsimile. We've got echoes of it in statistical, predictive models, though, and that's shoved the idea into the discourse far before its time.

I agree. My point is that if there’s no mystic soul, it’s probably a mistake to say AI can’t provide the same actual connection that humans do. Today’s AI can’t, for most people, but it’s a statement about maturity of the tech, not human special-ness.

I also maybe agree with “very very far away”, in the 20ish year range. Farther than some people think, closer than others do.

If and when we get to a place where AI reaches that holistic facsimile, I’m not sure what I’ll think of humans who reject the idea and insist that we are qualitatively different because (insert biological or spiritual rationale here). I suspect it will feel like seeing someone mistreat a call center employee because they happen to be in India, or sound like a disliked minority.


And the illusion only has to last the duration of a phone call. I think it's a reasonable bar that can be passed today.

There is absolutely no reason to jump immediately to conspiracy here.

I've had to yank tokens out of the mouths of too many thinking models stuck in loops of (internally, within their own chain of thought) rephrasing the same broken function over and over again, realizing each time that it doesn't meet constraints, and trying the same thing again. Meanwhile, I was sat staring at an opaque spinner wondering if it would have been easier to just write it myself. This was with Gemini 2.5 Pro for reference.

Drop me a message on New Year's Day 2027. I'm betting I'll still be using them optionally.


I've experienced Gemini get stuck as you describe a handful of times. With that said, my prediction is based on the observation that these tools are already force multipliers, and they're only getting better with each passing quarter.

You'll of course be free to use them optionally in your free time and on personal projects. It won't be the case at your place of employment.

I will mark my calendar!


Not even front end, unless it literally is a dumb thin wrapper around a back end. If you are processing anything on that front end, AI is likely to fall flat as quickly as it would on the backend.

based on what?

My own experience writing a web-based, SVG-based 3D modeler. No traditional back end, but when working on the underlying 3D engine it shits the bed from all the broken assumptions and uncommon conventions used there. And in the UI (the case I have in mind involved pointer capture and event handling), it chases down phantoms, declaring it's working around behavior that isn't in the spec. I bring it the spec, I bring it minimal examples producing the desired behavior, and it still can't produce working code. It still tries to critique existing details that aren't part of the problem, as evidenced by the fact it took me 5 minutes to debug and fix myself when I got tired of pruning context. At one point it highlighted a line of code and suggested the problem could be a particular function getting called after that line. That function was called 10 lines above the highlighted line, in a section it re-output in a quote block.

So yes, it's bad for front end work too if your front end isn't just shoveling data into your back end.

AI's fine for well-trodden roads. It's awful if you're beating your own path, and especially bad at treading a new path just alongside a superhighway in the training data.


it built the meat of the code, you spent 5 minutes fixing the more complex and esoteric issues. is this not the desired situation? you saved time, but your skillset remained viable

> AI's fine for well-trodden roads. It's awful if you're beating your own path, and especially bad at treading a new path just alongside a superhighway in the training data.

I very much agree with this, although I think that it can be ameliorated significantly with clever prompting


I sincerely wish that had been the case. No, I built the meat of the code. The most common benefit is helping to reduce repetitive typing, letting me skip writing 12 minor variations of `x1 = sin(r1) - cos(r2)`.

Similar to that, in this project it's been handy translating whole mathematical formulas to actual code processes. But when it comes out of that very narrow box it makes an absolute mess of things that almost always ends in a net waste of time. I roped it into that pointer capture issue earlier because it's an unfamiliar API to me, and apparently for it, too, because it hallucinated some fine wild goose chases for me.
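
To give a flavor of the repetitive trig I mean, here's an illustrative sketch (not code from my project, just the general pattern):

```ts
// Standard axis rotations of a 3D point: every line is a minor variation of the
// same sin/cos pattern, which is exactly the boilerplate an LLM fills in reliably.
type Pt = { x: number; y: number; z: number };

function rotate(p: Pt, rx: number, ry: number, rz: number): Pt {
  // rotate about Z
  const x1 = p.x * Math.cos(rz) - p.y * Math.sin(rz);
  const y1 = p.x * Math.sin(rz) + p.y * Math.cos(rz);
  // rotate about Y
  const x2 = x1 * Math.cos(ry) + p.z * Math.sin(ry);
  const z2 = -x1 * Math.sin(ry) + p.z * Math.cos(ry);
  // rotate about X
  const y2 = y1 * Math.cos(rx) - z2 * Math.sin(rx);
  const z3 = y1 * Math.sin(rx) + z2 * Math.cos(rx);
  return { x: x2, y: y2, z: z3 };
}
```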


wrt unfamiliar APIs, I don't know if this would have worked in your case (perhaps not), but I find that most modern LLMs are very comfortable with simply reading and using the docs or sample code on the spot if you pass them the link, a copy-paste, or the HTML containing the relevant info.

I'm not sure that's a convincing argument given that crypto heads haven't just been enthusiastically chatting about the possibilities in the abstract. They do an awful lot of that, see Web3, but they have been using crypto.

Capturing HTML as scalable SVGs is huge. How do you manage condensing all of CSS and its quirks into an SVG? Do you only support a subset of styling properties and rely on the browser to calculate layouts for you?

I was unhappy with the size of the generated SVG file because at first all styles were inlined on each element. So I created a function that generates mini CSS classes (.c1, .c2, .c3, ...), and the final size is quite small.
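
Roughly, the idea looks like this (a simplified sketch with invented names, not the exact code): collect each unique inline style into one tiny class and emit a single `<style>` block.

```ts
// Sketch: replace repeated inline style attributes with mini CSS classes (.c1, .c2, ...).
function condenseStyles(svg: SVGSVGElement): void {
  const classByStyle = new Map<string, string>();
  let rules = "";
  svg.querySelectorAll<SVGElement>("[style]").forEach((el) => {
    const style = el.getAttribute("style")!;
    let cls = classByStyle.get(style);
    if (!cls) {
      cls = `c${classByStyle.size + 1}`;  // .c1, .c2, .c3, ...
      classByStyle.set(style, cls);
      rules += `.${cls}{${style}}`;       // one tiny rule per unique style
    }
    el.removeAttribute("style");
    el.classList.add(cls);
  });
  const styleEl = document.createElementNS("http://www.w3.org/2000/svg", "style");
  styleEl.textContent = rules;
  svg.prepend(styleEl);
}
```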

My guess is that those questions are very typical and follow very normal patterns and use well established processes. Give it something weird and it'll continuously trip over itself.

My current project is nothing too bizarre, it's a 3D renderer. Well-trodden ground. But my project breaks a lot of core assumptions and common conventions, and so any LLM I try to introduce—Gemini 2.5 Pro, Claude 3.7 Thinking, o3—they all tangle themselves up between what's actually in the codebase and the strong pull of what's in the training data.

I tried layering on reminders and guidance in the prompting, but ultimately I just end up narrowing its view, limiting its insight, and removing even the context that this is a 3D renderer and not just pure geometry.


> Give it something weird and it'll continuously trip over itself.

And so will almost all humans. It's weird how people refuse to ascribe any human-level intelligence to it until it starts to compete with the world's top elite.


Yeah, but humans can be made to understand when and how they're wrong and narrow their focus to fixing the mistake.

LLMs apologize and then proudly present the exact same output as before, repeatedly, forever spinning their wheels at the first major obstacle to their reasoning.


> LLMs apologize and then proudly present the exact same output as before, repeatedly, forever spinning their wheels at the first major obstacle to their reasoning.

So basically like a human, at least up to young adult years in a teaching context[0], where the student is subject to the authority of the teacher (parent, tutor, schoolteacher) and can't easily weasel out of the entire exercise. Yes, even young adults will get stuck in a loop, presenting "the exact same output as before, repeatedly, forever spinning their wheels at the first major obstacle to their reasoning", or at least until something clicks, or they give up in shame (or the teacher does).

--

[0] - Which is where I saw this first-hand.


As someone currently engaged in teaching the Adobe suite to high school students, that doesn't track with what I see. When my students are getting stuck and frustrated, I look at the problem, remind them of the constraints and assumptions the software operates under. Almost always they realize the problem without me spelling it out, and they reinforce the mental model of the software they're building. Often noticing me lurking and about to offer help is enough for them to pause, re-evaluate, and catch the error in their thinking before I can get out a full sentence.

Reminding LLMs of the constraints they're bumping into doesn't help. They haven't forgotten, after all. The best performance I got out of the LLMs in my project I mentioned upthread was a loop of trying out different functions, pausing, re-evaluating, realizing in its chain of thought that it didn't fit the constraints, and trying out a slightly different way of phrasing the exact same approach. Humans will stop slamming their head into a wall eventually. I sat there watching Gemini 2.5 Pro internally spew out maybe 10 variations of the same function before I pulled the tokens it was chewing on out of its mouth.

Yes, sometimes students get frustrated and bail, but they have the capacity to learn and try something new. If you fall into an area that's adjacent to but decidedly not in their training data, the LLMs will feel that pull from the training data too strongly and fall right into that rut, forgetting where they're at.


A human can play tictactoe or any other simple game a few minutes after having the game described to them. AI will do all kinds of interesting things that are either against the rules or extremely poor choices.

Yeah, I tried playing tictactoe with chatGPT and it did not do well.


Most humans I played against did not do well either :)

Definitely matches my experience as well. I've been working away on a very quirky, non-idiomatic 3D codebase, and LLMs are a mixed bag there. Y is down, there's no perspective distortion or Z buffer, there are no meshes, it's a weird place.

It's still useful to save me from writing 12 variations of x1 = sin(r2) - cos(r1) while implementing some geometric formula, but absolutely awful at understanding how those fit into a deeply atypical environment. Also have to put blinders on it. Giving it too much context just throws it back in that typical 3D rut and has it trying to slip in perspective distortion again.


Yeah I have the same experience. I’ve done some work on novel realtime text collaboration algorithms. For optimisation, I use some somewhat bespoke data structures. (Eg I’m using an order-statistic tree storing substring lengths with internal run-length encoding in the leaf nodes).

ChatGPT is pretty useless with this kind of code. I got it to help translate a run-length-encoded B-tree from Rust to TypeScript. Even with a reference, it still introduced a bunch of new bugs. Some were very subtle.
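
For anyone curious, here's a rough sketch of the shape of that structure (invented names, much simplified from the real thing):

```ts
// Order-statistic tree over substring lengths, with run-length encoding in the leaves.
// Each node caches the total length of its subtree so a character offset can be
// located in O(log n).
type Run = { length: number; visible: boolean };

type LeafNode = { kind: "leaf"; runs: Run[]; totalLength: number };
type InternalNode = { kind: "internal"; children: TreeNode[]; totalLength: number };
type TreeNode = LeafNode | InternalNode;

// Descend by cached subtree totals. Assumes 0 <= offset < node.totalLength.
function findLeaf(node: TreeNode, offset: number): { leaf: LeafNode; offset: number } {
  if (node.kind === "leaf") return { leaf: node, offset };
  for (const child of node.children) {
    if (offset < child.totalLength) return findLeaf(child, offset);
    offset -= child.totalLength;
  }
  throw new Error("offset out of range");
}
```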


It’s just not there yet, but I think it will handle translation-type tasks quite capably in the next 12 months, especially if asked to translate a single file, or a selection in a file, line by line. Right now it’s quite bad, which I find surprising. I have less confidence we’ll see whole-codebase or even module-level understanding for novel topics in the next 24 months.

There’s also a question of the quality of the source data. At least in TypeScript/JavaScript land, the vast majority of code appears to be low quality and buggy, or ignores important edge cases, so even when working on “boilerplate” it can produce code that appears to work but will fall over in production for 20% of users (for example, string-handling code that tears Unicode graphemes like emoji).
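
To make the emoji example concrete (a quick illustration, not code from any particular codebase):

```ts
// Naive UTF-16 truncation tears surrogate pairs and ZWJ emoji sequences:
const s = "👨‍👩‍👧 hi";
const torn = s.slice(0, 1);   // "\ud83d", a lone surrogate that renders as �

// Grapheme-aware truncation via Intl.Segmenter (modern browsers, Node 16+):
const seg = new Intl.Segmenter(undefined, { granularity: "grapheme" });
const first = [...seg.segment(s)][0].segment;   // "👨‍👩‍👧", the whole family emoji
```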


I gotta ask what are you actually doing because it sure sounds funky

Working on extending the [Zdog](https://zzz.dog) library, adding some new types and tooling, patching bugs I run into on the way.

All the quirks inherit from it being based on (and rendering to) SVG. SVG is Y-down, Zdog only adds Z-forward. SVG only has layering, so Zdog only z-sorts shapes as wholes. Perspective distortion needs more than dead-simple affine transforms to properly render beziers, so Zdog doesn't bother.

The thing that really throws LLMs is the rendering. Parallel projection allows for optical 2D treachery, and Zdog makes heavy use of it. Spheres are rendered as simple 2D circles, a torus can be replicated with a stroked ellipse, a cylinder is just two ellipses and a line with a stroke width of $radius. LLMs struggle to even make small tweaks to existing objects/renderers.
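
To give a flavor of it, here's a minimal sketch against the public Zdog API (simplified; the `declare` is just to keep the snippet self-contained):

```ts
declare const Zdog: any; // loaded from https://zzz.dog; no official type defs assumed

// Parallel projection lets Zdog fake solids with 2D strokes.
const illo = new Zdog.Illustration({ element: ".zdog-canvas", rotate: { x: -0.5 } });

// "Sphere": a single point with a fat stroke renders as a flat circle.
new Zdog.Shape({ addTo: illo, stroke: 80, color: "#c25" });

// "Torus": just an ellipse path drawn with a thick stroke.
new Zdog.Ellipse({ addTo: illo, diameter: 80, stroke: 20, translate: { x: 120 }, color: "#ea0" });

illo.updateRenderGraph();
```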


That's been said since at least 2021 (the release date for GitHub Copilot). I think you're overestimating the pace.

