> A customer who wants to track the status of their order will tell you a story about how
I build NPCs for an online game. A non-trivial percentage of people are more than happy to tell these stories to anything that will listen, including an LLM. Some people will insist on a human, but an LLM that can handle small talk is going to satisfy more people than you might think.
> Grahn said TikTok "has never received a request for European user data from the Chinese authorities, and has never provided European user data to them."
lol, ok. Chinese intelligence services must be completely asleep at the wheel (doubtful)
The Chinese government literally forces direct integration into every messaging and financial service. If you talk shit about the government on WeChat, the police will call you within minutes.
Fun fact: this is also why iCloud in China isn't actually run by Apple at all, but by a company called AIPO Cloud (Guizhou) Technology Co. Ltd, and the Terms of Service/Privacy Policy are different from the rest of iCloud.
I am convinced they would absolutely tell the truth if they had handed over any data to the Chinese government and would never just lie about it to protect their business interests /s
Every coding assistant or LLM I've used generally makes a real hash of TypeScript's types, so I'm a little skeptical, but also:
> RustAssistant is able to achieve an impressive peak accuracy of roughly 74% on real-world compilation errors in popular open-source Rust repositories.
74% feels like just the right amount to make people keep hitting "retry" without thinking about the error at all. I've found LLMs great for throwing together simple scripts in languages I don't know or can't look up the syntax for, but I'm still struggling to get serious work out of them in languages I know well when I'm trying to do anything vaguely complicated.
Worse, they often produce plausible code that does something in a weird or suboptimal way: tests that don't actually test anything, or subtler but real bugs in logic that you wouldn't write yourself but have to be very on the ball to catch in code you're reviewing.
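To illustrate the kind of useless test I mean (a hypothetical sketch, assuming vitest; the function and names are made up), the pattern is usually something like this:

```typescript
import { describe, it, expect, vi } from 'vitest';

// Imagine this is the real implementation, with a subtle bug:
// it ignores the qty field entirely.
function orderTotal(items: { price: number; qty: number }[]): number {
  return items.reduce((sum, item) => sum + item.price, 0);
}

describe('orderTotal', () => {
  it('sums the order', () => {
    // The generated "test" stubs out the very function under test...
    const stubbedTotal = vi.fn().mockReturnValue(30);
    // ...so the assertions exercise the stub, not orderTotal, and can
    // never fail no matter how broken the real code is.
    expect(stubbedTotal([{ price: 10, qty: 3 }])).toBe(30);
    expect(stubbedTotal).toHaveBeenCalled();
  });
});
```

It looks like coverage, it's green in CI, and it checks nothing.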
74% feels way too low to be useful, which aligns with my limited experience trying to get any value from LLMs for software engineering. It's just too frustrating making the machine guess-and-check its way to an answer you already know.
I don't think argument by assertion is appropriate when there are a lot of people who "clearly" believe it's a good approximation of human intelligence. Given that we don't understand how human intelligence works, asserting that one plausible model (a continuous journey through an embedding space held in neurons) that works in machines isn't how humans do it seems too strong.
It is demonstrably true that artificial neurons have nothing to do with cortical neurons in mammals[1], so even if this model of human intelligence is useful, transformers etc. aren't anywhere close to actually implementing the model faithfully. Perhaps, by Turing completeness, o3 or whatever has stumbled into a good implementation of this model, but that is implausible. o3 still wildly confabulates, worse than any dementia patient, and still lacks the robust sense of folk physics we see in infants. (This is even more apparent in video generators: Veo2 is SOTA and it still doesn't understand object permanence or gravity.) It does not make sense to call something a model of human intelligence if it can do PhD-level written tasks but is outsmarted by literal babies (also chimps, dogs, pigeons...).
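For context, here is roughly the entire computation a single "artificial neuron" performs (a simplified sketch; real layers do this with vectors and matrices, but the abstraction is the same): a weighted sum pushed through a fixed nonlinearity, with nothing resembling spiking, dendritic computation, or neuromodulation.

```typescript
// The whole of an "artificial neuron": weighted sum of inputs, plus a bias,
// passed through a fixed nonlinearity (ReLU here). Nothing about spikes,
// dendrites, or neurotransmitters survives the abstraction.
function artificialNeuron(inputs: number[], weights: number[], bias: number): number {
  const preActivation = inputs.reduce((sum, x, i) => sum + x * weights[i], bias);
  return Math.max(0, preActivation); // ReLU
}

// e.g. artificialNeuron([0.2, -1.3, 0.7], [0.5, 0.1, -0.4], 0.05)
// => 0 (the weighted sum is negative, so ReLU clamps it)
```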
AI people toss around the term "neuron" way too freely.
The article explicitly calls out that there are valid use cases (although it doesn't enumerate them). Federated sign-on and embedded videos seem like obvious examples.
I have no trouble believing a gay boomer from the South instinctively cares about personal privacy; he will have spent much of his early life needing to be very protective of his.
I would agree that most people with that exact background would have learned the hard way to care about privacy.
The single example who ascended to be CEO of Apple, though? That selection process would seem more relevant than any personal background.
My base assumption is that any impressions we have about Tim Cook (or any other executive of a company that size) are a carefully crafted artifact of marketing and PR. He may be a real person behind the scenes, but his (and everyone's) media persona is a fictional character he is portraying.
It feels like if you'd expect someone to be something based on their background, _and_ they profess to be that thing, then the onus is on the person disputing it to come up with evidence to the contrary?
> any impressions we have about Tim Cook ... is a fictional character he is portraying
The relevant ones here are that he's gay, of a certain age, and from the South, and that he heads a company that appears to invest heavily, over a long period of time, in privacy protections -- these all feel like they'd be easy to falsify if there were evidence to the contrary.
For sure. If I want feedback on some writing I’ve done these days I tell it I paid someone else to do the work and I need help evaluating what they did well. Cuts out a lot of bullshit.
Definitely has a bit of a Capoeira "trust me bro this would work great in a real fight" vibe in all the videos. Would be interesting to see more full-speed sparring, and also see how it would evolve with protective gear and stand-in weapons that let them really go at each other.
Letting party members rather than party elites choose the party leader always sounds good in theory but often ends badly in practice (Truss, Corbyn, etc.).