
ChatGPT doesn't let you share chats with images (why?), but here is what o3 said for an image with memory turned off:

"Thought for 11 seconds

It’s hard to pin down an exact spot from this single photo, but a few visual clues narrow the possibilities:

Temperate-zone trees: Most of the trees have lost their leaves, but scattered white-pines or spruces remain green. That mix—deciduous hardwoods plus a sprinkling of conifers—is typical of the northeastern United States and southeastern Canada.

Broad, moderately fast river: The channel looks 75-100 m wide with swirling surface currents but no large rapids, suggesting a mid-sized river in its lower reaches—something like the Merrimack, Connecticut, Delaware, or Susquehanna rather than a small mountain stream.

Late-winter / early-spring light: Bare branches and a clear blue sky with fairly high sun angle hint at March–April in that latitude range.

So my best guess is that you’re standing on the bank of a northeastern U.S. river—perhaps somewhere in New England or the Mid-Atlantic—during late winter or early spring. If you can share any nearby landmarks, town names, or additional photos, I might be able to zero in more precisely."

With memory turned on:

"Thought for 5 seconds

Beautiful spot! From what I can see:

a broad, slow‑moving river with a sandy/rocky bank in the foreground

bare deciduous trees mixed with some white pines on the far shore (so it’s probably early spring or late autumn)

no mountains on the horizon, just gently rolling terrain typical of low‑lying floodplains in the U.S. Northeast

That mix of hardwoods and pines, plus the flat, braided water surface, reminds me a lot of the Merrimack or Concord Rivers in Massachusetts or southern New Hampshire—both have public footpaths where you can stand right at the water’s edge like this.

Of course, many temperate rivers look similar. If you can give me a hint (e.g., the town or nearby landmark), I can narrow it down further—maybe even point you to trail maps or history of the exact stretch you photographed."

Southern NH is correct, and I am certain it is drawing from memory/past chats. However, I can't replicate a specific behavior I saw once: in a temporary chat (no past chats/memory enabled), it said it guessed where the photo was taken based on my ___location.


"ChatGPT doesn't let you share chats with images (why?)"

Probably because if you uploaded pornography (or illegal imagery) to ChatGPT and then shared a link with the world it would be embarrassing for OpenAI.


Wouldn't that apply to any website with image hosting abilities though? Why does that apply to OpenAI in particular?

On an unrelated note, I like your blog.


My guess is that OpenAI are risk averse on this particular issue, because people could come up with some very creative awful scenarios with ChatGPT and image analysis. "Which of these people looks the most stupid" plus a jailbreak would be instant negative press.

The comment you replied to isn't about whether friendships are transactional. They detailed a long list of ways in which your "theoretical" and "realistic" definitions of friendship are bad and unrealistic. If you haven't had friends closer to your first definition than your second in most respects, then you haven't had a close friend, which isn't something to be ashamed of but is something to consider.

No. Betteridge's law. More seriously, while this piece is ancient in AI terms (May 2023), I don't think genuine emotional and social intelligence is something that can be learned to even an average level by talking to AI or reading. Using the voice models is a step up from this, but I still think they're too tuned to following your instructions without nuance for something like this. If reading gave even 10% of the experience that real-world practice does, people who read the right books would be masters of several trades.

I'm also not trying to be reactionary and dismissive, but how are you supposed to learn social cues from AI right now? In an optimal case, the LLM predicts accurately what would happen. Maybe you could say something awkward and the AI would say back <s/he lets out an exasperated sigh and turns away>; but in real life you have to notice these cues among a barrage of other factors. Would this really help anyone who is this desperate? Additionally, the tone of how you say words matters almost as much as the content, and this is missing from text.

I concede that in extreme cases, some people could learn stuff from trying this, and that's a good thing. I just don't really know how much, who exactly, how, and whether they'd learn incorrect stuff as well.


"Kennedy also claimed most autism cases are severe before citing a statistic from the report that said 25 percent of children with autism are nonverbal. HHS did not respond to a question as to whether the 25 percent statistic accounts for all severe cases."

The tricky thing about a (probably misleading) statistic like that is that 0% of the autistic people I know are nonverbal, but I suppose if I know them in any real capacity, they have to be verbal. Anyway, this freakout about autism seems overblown to me. Can any RFK champions explain this one to me?


There was a CDC report recently that defined "profound autism" as "nonverbal, were minimally verbal, or had an intelligence quotient <50".

They found roughly 25% of kids diagnosed by age 8 fitted this category.

This category seems to be growing, but not as fast as autism diagnoses generally.

The obvious statistical issue here is that to have received a diagnosis by age 8 you need to be showing more severe symptoms.

This shows up in the fact that these kids are "more likely to be female, from racial and ethnic minority groups, of low socioeconomic status" compared with kids with autism generally, because it is harder for those groups to get diagnosed with milder forms (mildly autistic girls seem to mask their social symptoms better than boys do, which has historically led to them being undercounted and underrepresented in the common stereotype).
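A toy calculation (purely illustrative, with made-up rates, not real prevalence data) shows how this selection effect alone can push the "profound" share among kids diagnosed by age 8 well above the true share:

  # Illustrative only: made-up rates, not real prevalence data.
  # Suppose the true share of "profound" cases among all autistic kids is 15%,
  # but profound cases are far more likely to be diagnosed by age 8.
  p_profound = 0.15
  p_dx_by_8 = {"profound": 0.90, "mild": 0.40}

  dx_profound = p_profound * p_dx_by_8["profound"]        # 0.135
  dx_mild = (1 - p_profound) * p_dx_by_8["mild"]          # 0.340
  share_among_dx = dx_profound / (dx_profound + dx_mild)

  print(f"{share_among_dx:.0%} of kids diagnosed by 8 look 'profound'")
  # ~28%, nearly double the underlying 15%, purely from who gets diagnosed early.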

https://journals.sagepub.com/doi/10.1177/00333549231163551


> this freakout about autism seems overblown to me.

That's the point. It's a distraction with no possible verifiable result. His supporters will see this on the news and say: "Wow, Kennedy is finally really doing something about that scary Autism stuff, he's great!"

At some point he can just misrepresent some existing research and blame microplastics (or something else) for autism and proudly talk about this in the media.


I have no dog in this, but the sources I can find all say something like 25-30% non-verbal or minimally verbal. People seem to group them together, so I think 'minimally verbal' must be nearly as severe. https://pmc.ncbi.nlm.nih.gov/articles/PMC3869868/


Okay, it's common across other industries, but not this one. Here are Google, Facebook, and Anthropic comparing their frontier models to others[1][2][3].

[1] https://blog.google/technology/google-deepmind/gemini-model-...

[2] https://ai.meta.com/blog/llama-4-multimodal-intelligence/

[3] https://www.anthropic.com/claude/sonnet


Right. Those labs aren’t leading the industry.


Confusing take - Gemini 2.5 is probably the best general purpose coding model right now, and before that it was Sonnet 3.5. (Maybe 3.7 if you can get it to be less reward-hacky.) OpenAI hasn't had the best coding model for... coming up on a year, now? (o1-pro probably "outperformed" Sonnet 3.5 but you'd be waiting 10 minutes for a response, so.)


But Llama 4 Scout does badly on long-context benchmarks despite claiming a 10M-token context window. It scores one slot above Llama 3.1 8B in this one[1].

[1] https://github.com/adobe-research/NoLiMa


Indeed, but that doesn't take away from the fact that long context isn't trained on long content; it's achieved by scaling up short-content training instead.
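For what it's worth, one common version of that scaling is RoPE position interpolation: squeeze the positions of a longer context back into the range the model saw during mostly short-context training. A rough sketch of the idea (numpy assumed; not necessarily what Llama 4 actually does):

  import numpy as np

  def rope_angles(positions, dim=128, base=10000.0, scale=1.0):
      # Rotary-embedding angles; scale > 1 interpolates positions so a
      # longer context maps back into the position range seen in training.
      inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
      pos = np.asarray(positions, dtype=np.float64) / scale
      return np.outer(pos, inv_freq)  # shape: (len(positions), dim // 2)

  trained = rope_angles(range(8192))                # e.g. an 8k training context
  extended = rope_angles(range(32768), scale=4.0)   # 32k at inference via 4x interpolation
  print(trained.max(), extended.max())              # extended stays within the trained range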


> To them, it doesn't even matter if things get "worse" for a while. Their life is already meeting every economic headwind imaginable.

Really? How do you even know that? You think another round of price hikes within the year is unimaginable, which is what the economic consensus predicts for immediate tariffs this high?

The unemployment rate is 4%. The number of Liberation Day tariff supporters is an order of magnitude higher than that. Pretending that things can't get worse is dangerous and stupid.


That makes no difference to who should get tariffs under the administration's own logic. They're cozying up to Russia. No other explanation is feasible.


I can think of at least a dozen reasons but I'll give you one: We are in delicate peace negotiations with Russia _right_ now. There is good reason to isolate all foreign policy decisions with that country to those negotiations. It is called doing more than one thing at a time.


We are also in delicate peace negotiations with Ukraine right now, but we still put import taxes on them. If you think that the administration would put more import taxes on Russia after the negotiations are done, then at least you're consistent. I need your other 11+ reasons to be convinced.


11+ conspicuously missing reasons later...

Bro digitally simps for nations for political identity, and instantly folded under the lightest application of reasoning. That is sad.


Ukraine is party to those same delicate peace negotiations. Why weren't they excluded, if this is the reason why?


If we were in delicate peace negotiations, then we should put more pressure on them. Tell them the extra tariffs will be removed if they agree. The main reason there is still a war is Putin's stubborn refusal to admit he started an unwinnable war. More pressure is helpful.


You mean the delicate play to dismantle Ukraine, sell it off bit by bit to Russia, and steal the remaining resources?

You can't possibly qualify this shitshow as "peace negotiations".


I'm not sure Krasnov will do anything to offend Russia.


Why do you think the top upvoted posts have to correlate 1:1 with value? If you look at the most watched videos on YouTube, the most popular movies, or posts sorted by top of all time on subreddits, the only common thread is that people liked them the most.

The post has a catchy title and a (in my opinion) clear message about using models as API callers and fuzzy interfaces in production instead of as complex program simulators. It's not about using models to write code.

Social media upvotes are less frustrating, imo, if you see them as a measure of attention, not a funneling of value. Yes, people like things that give them value, but they also like reading things with a good title.
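To make the "API callers and fuzzy interfaces" point concrete, here is a hedged sketch. call_llm() and the three handlers are hypothetical placeholders, not any real API: the model only maps free-form text to a structured intent, and deterministic code owns the decisions.

  import json

  def call_llm(prompt: str) -> str:
      # Hypothetical stand-in for whatever model API you actually use.
      raise NotImplementedError

  def start_refund(order_id: str) -> str:       # hypothetical helper
      return f"Refund started for order {order_id}."

  def lookup_shipping(order_id: str) -> str:    # hypothetical helper
      return f"Order {order_id} is in transit."

  def escalate_to_human(message: str) -> str:   # hypothetical helper
      return "A human agent will follow up shortly."

  def handle_support_message(message: str) -> str:
      # Use the model as a fuzzy parser of the customer's text,
      # not as a simulator of the whole support program.
      prompt = (
          'Classify this customer message as JSON with keys "intent" '
          '(one of: refund, shipping_status, other) and "order_id".\n'
          f"Message: {message}"
      )
      parsed = json.loads(call_llm(prompt))

      # Ordinary code handles the business logic.
      if parsed["intent"] == "refund":
          return start_refund(parsed["order_id"])
      if parsed["intent"] == "shipping_status":
          return lookup_shipping(parsed["order_id"])
      return escalate_to_human(message)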


  The post has a catchy title and a (in my opinion) clear message about using models as API callers and fuzzy interfaces in production instead of as complex program simulators. It's not about using models to write code.

I mean, the message is wrong as well. LLMs can provide customer support. In that case, the LLM is the business logic.


Radioactive isotopes in nuclear fuel: do not ingest.

Radioactive isotopes in banana: consume away.

Understand now? I'm not even defending anything other than how two things that vaguely appear contradictory are, in fact, not.


I don’t.

Because your banana example doesn't have additives meant to be reactive, so it is unlike adding fluoride to water at levels that impact dental health.


It's the concentration that matters. Toothpaste has over 1000x the fluoride concentration of tap water, which is fluoridated at about 0.7 mg/L. Many well water sources have naturally occurring fluoride that exceeds the additive levels.
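Back-of-envelope numbers, assuming a typical fluoride toothpaste at about 1,450 ppm (roughly 1,450 mg per kg of paste):

  # Rough comparison; 1 ppm is treated as ~1 mg/L (or mg/kg) for this estimate.
  toothpaste_fluoride = 1450.0   # mg/kg, typical "1450 ppm" toothpaste
  tap_water_fluoride = 0.7       # mg/L, common US fluoridation target

  ratio = toothpaste_fluoride / tap_water_fluoride
  print(f"Toothpaste is ~{ratio:,.0f}x more concentrated than fluoridated tap water")
  # ~2,071x, consistent with "over 1000x"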


Very low concentrations of medicine are usually ineffective at their stated purpose outside of pseudoscientific theories, so even if it is safe to ingest, it is not clear it does anything for people's teeth.

That is why I do not understand how the two can co-exist. Either the concentration is so low that drinking it for dental health is pointless, or it is not something people should be drinking. If there was a middle ground, we would have a 1 a day pill for this and not bother brushing our teeth or putting fluoride in water.


> If there was a middle ground, we would have a 1 a day pill for this

We do; fluoride tablets are common.

> and not bother brushing our teeth

Brushing your teeth does a lot of things besides applying fluoride to the surface. It's mostly about getting stuff that's already there off the surface.


My initial searches suggest that these are by prescription only. Do you have any information to the contrary? I did find toothpaste tablets, but those are meant to be used as toothpaste from what I can tell.


> My initial searches suggest that these are by prescription only. Do you have any information to the contrary?

No, I don't. Does that matter?


This is not a one a day pill then. If ingesting it is regulated, that would be consistent with it not being something regular people should ingest regularly.


It is exactly a one a day pill. What are you imagining?


Maybe another example will help?

Chlorine for water treatment in the backcountry when applied to a liter of water: Ingest.

Chlorine straight from a household bleach bottle: Do not ingest.

