Hacker News | codydh's comments

These are really fun, I’m on the fourth book now.


I was really drawn in and thinking deeply about this, until I got to the end:

The content analysis of each film and TV show was generated with the ChatGPT 4o large language model. [..] So the content analysis for each film and TV show – as well as the short explanation of its answer – are based on ChatGPT 4o’s responses.

I immediately discounted everything I'd just read. I cannot imagine doing this work manually, but I also cannot imagine believing that these effects are real based on ChatGPT's determination.

Maybe that's just indicative of my current "AI" cynicism, but it was remarkable to me how immediately my opinion changed.


What if the creator didn't do the reveal?

Would you still think about this and feel that you have learnt something?

Disclaimer: I don't think it is a good idea not to reveal usage of AI. I just feel it is wrong to dismiss what you learnt just because you read something from an LLM. It is not about something you read being absolutely true or having mistakes or whatever, it is about what happened in your mind, what you take from that experience.


> it is about what happened in your mind, what you take from that experience

Is this fiction or non-fiction? If it's fiction, then sure, yes, enjoy the ride. You can let it change the way you think a little.

If you're going to act on it? No. It's like reading a scientific paper and discovering an error in the statistical work: it invalidates the whole thing and you need to forget it.

Too many people these days are taken in by clever little just-so stories on social media that have no more validity than Rudyard Kipling's animal stories.

(On the other hand, as a piece of media criticism it's totally fine! The observation about pessimism is not new and seems valid even if it isn't and can't be "objective". Separate issue.)


Even in truth there are grey areas. You read a scientific paper and discover a serious error in a side topic the paper only had a short digression about. Would that invalidate the whole thing?

And where do you draw the line of invalidating errors and not so important errors (Sorites Paradox)?

This said, there are cases where a contradiction undermines the whole thing. I have a legal background, and one law professor warned us students that if we argue both that A is valid and that A is not, he will dismiss the complete answer.

Once I got contradictory information from a government agency and was forced to follow up with a request for clarification. That's bad.


I could see there being value in the discussion and thinking rather than in drawing a definitive conclusion.

After all, animal stories like Rudyard Kipling's have a long history of providing people with tools for thinking about life and social issues.


I think people's concept of how many items in a spreadsheet you can work through is often miscalibrated.

Did the author of this piece post their dataset? That would allow people to cross-check the results from GPT and see to what extent they think it got things right or wrong.

Categorizing 200 films on ~10 attributes by reading their Wikipedia overviews feels like it wouldn't be significantly more work than building the fancy scrolling visualisation itself.

If there were 1000 films being analysed then I'd definitely be looking for some sort of tooling to help (or crowd sourcing the analysis)
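If the dataset were posted, a spot-check could be as simple as comparing the LLM's labels against a hand-labelled sample for one attribute. A minimal sketch (the label values and lists here are hypothetical, not from the article's actual data):

```python
# Sketch: measure how often hypothetical GPT labels agree with hand labels
# for one attribute (e.g. "setting"), plus Cohen's kappa to correct for
# the agreement you'd expect by chance alone.

def agreement(gpt_labels, hand_labels):
    # Fraction of items where the two label lists match.
    matches = sum(g == h for g, h in zip(gpt_labels, hand_labels))
    return matches / len(gpt_labels)

def cohens_kappa(a, b):
    # Chance-corrected agreement: (observed - expected) / (1 - expected).
    n = len(a)
    categories = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (po - pe) / (1 - pe) if pe < 1 else 1.0

gpt = ["dystopian", "utopian", "dystopian", "neutral", "dystopian"]
hand = ["dystopian", "neutral", "dystopian", "neutral", "utopian"]
print(agreement(gpt, hand))
print(cohens_kappa(gpt, hand))
```

Even a random sample of 30-50 films checked this way would give a rough sense of whether the GPT classifications are trustworthy.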


I am left with questions...

Is a content analysis of films supposed or expected to be Truth - with a capital T? Can it be? We are in the realm of fictional narrative as far as the films are concerned. Granted, all fiction and narrative must have a connection with Reality - i.e. 'the real world' (however that may be defined) - anything else is impossible. However, fiction is a playground where you can play with the concepts that exist in Reality, in ways we cannot in Reality itself [note]. So it is by nature 'detached' from (but coupled to) Reality.

As ChatGPT is an AI language model, is it reasonable to assume that what it can and will produce is always a narrative? Even when asked to make a computer program, is ChatGPT producing 'factual code' or a narrative written as code? Albeit a narrative that can actually be run through a compiler?

If ChatGPT produces narratives, then the question is not whether an analysis of science fiction films is True or False, it is a question of how good the narrative is - i.e. its quality ... I think?

And here only humans can judge the quality of the narrative. It is currently impossible for an AI language model to evaluate a narrative (I contend), as its evaluation will itself be a narrative... (this may be where our human minds are currently being tricked by our own human invention, the 'AI'...)

So in a way, does a film analysis narrative made by an AI language model about a science fiction film not become science fiction in itself? It is the knowledge about the premise of the analysis narrative that becomes important. But then the premise is important everywhere we meet narratives? Is this obvious or non-obvious?

My head is slightly spinning here... as it seems to me that the film analysis is in itself a narrative and is visually presented in a format that shows and indicates a narrative...?

[note] This 'playing' with reality may actually feed back into Reality, and often does. Here I am not referring to the entire branches of narrative that are intentionally made to feed back into reality, e.g. so-called 'news' (at least modern news and journalism) and political influence/propaganda.


I'm not quite sure what you mean by "narrative" here; is it what post-structuralism would call a "text" https://en.wikipedia.org/wiki/Text_(literary_theory) ?


Today, the narrative is to lie about everything. Our salespeople call it "aligning the truth" so we can receive a paycheck, but they are lying again; they just need to lie about everything. In the end you can see everybody does it. Marketing is leading, politicians are following, and adjusting LLMs to present data the way they want is just fine-tuning us to accept more lies and get used to them.


If you're using the same model for all of the analyses would it not give reasonably consistent results?


Who cares if the results are "consistent"? Whatever that means. The question is whether the results are true and correct. If they are erroneous, consistency is useless.


If they are consistently wrong within a given margin of error, then they are still very useful. I agree that we'd need to define our terms here to have a meaningful debate.


I mean, they're not useful at all, let alone "very useful". Can you explain what the use of this article is assuming the classification of movies in various categories (e.g. future, present, past; world better, same, worse) is consistently wrong? To me it seems clear we can throw it all away, as it is now just a bunch of meaningless words.


I think my concern is less about consistency, and more that I don't know how the LLM is deciding what is, for example, a "dystopian setting." Sure, it could be true that the percent of sci-fi movies since the 1950s that have a dystopian setting has increased from 11% to 46%. An alternate explanation is that the LLM is trained on more modern language, and so is better at detecting modern dystopian storylines. And with this method, we cannot know.


> Maybe that's just indicative of my current "AI" cynicism, but it was remarkable to me how immediately my opinion changed.

The "you read that on the internet?" attitude, aka "wikipedia isn't a source"


Discrimination towards AI.

This feels like some sort of sci-fi film already.


I think that's unfair and definitely too cynical. I'm pretty skeptical about LLM-AI in general too, and yet can readily admit it's highly effective at certain limited tasks, such as summarizing things. And I'm sure the author did at least some cursory quality control to make sure it was doing what he expected. Hell, this is LLM-AI being used exactly as it should be - to automate away the scutwork it can do well, freeing up the human to synthesize something good out of it.

Frankly, my "input" related criticisms are more around IMDB rankings and whatever criteria ended up with "Despicable Me" being considered a science fiction film.


Because it's a children's film? The movie is about a technological plot to steal the moon. How is that not science fiction?


So you read something insightful and thought-provoking, and then realised that the words you read had been generated in a way that you don’t respect. Did the words themselves, or their objective value, change due to your changed perspective?


Of course their objective value changed. I’m not sure why you seem to be taking issue with that. If you heard an impassioned appeal to your emotions and then immediately afterwards discovered the appeal came from a pathological liar, would it affect your opinions?

Words don’t change depending whether they’re true or false but I would hope it’s clear their value does.


You're literally asking what is wrong with lies. Plenty, think about it.


Ah, so you drive a Tesla.


Their mouse and all of their trackpads support right click without extra buttons.


Technically the trackpads have had zero buttons for years now. It's all haptics and pressure sensors. It's a major step up from the old method, since you can reliably click anywhere on the pad. (They also support a hard-press gesture, but not many applications use that out of the box.)


There's a good guide to the prioritization (QCI) of various carriers here: https://www.reddit.com/r/NoContract/comments/oaophe/data_pri...


It feels crazy-making to me that information like this only exists in Reddit threads. Prioritization is, IMO, a totally valid way to price differentiate. But it should be clearly stated when you’re buying in the same way GB data limits are.


I believe if you get the higher-priced Visible plan (Visible+), you have priority equal to or higher than postpaid. I switched to this plan a year ago from Verizon, in an area where being deprioritized on any of the carriers makes service useless much of the day, and it's been great.


Presumably the same reason all Teslas have weird door handles: "Oooh, different!" Despite the fact that they're kind of miserable to use and less reliable than a standard handle.


The door handles probably shave a percent or two off range due to aerodynamics; most EVs have recessed door handles of some sort now.


I've read this trope many times, but honestly I don't buy it. At highway speeds, conventional door handles, which have tiny frontal area and a fairly low-drag shape to begin with, sit in the side mirror's turbulent wake anyway. I wish some high-budget YouTuber would test this in a wind tunnel so we can put it to rest.


From what I've read, since door handles are generally placed in the path of turbulent air that has been disturbed by the mirrors, it doesn't really make much of a difference.


Most door handles are considerably lower than the mirrors. Door handles are normally several inches down from the window line while mirrors are usually at or above that line.

The Mach E's front door handle things are practically at the window line to take advantage of being in the mirror turbulence area, a handle several inches down probably wouldn't be in it very much.


Do you have a source for this claim? Many EVs I see on the road, especially performance ones, do not have recessed door handles.


Approaching this from a US perspective:

Pretty much all Teslas have recessed door handles.

The Mach E has buttons to open the doors. The Lightning has regular door handles.

The Hyundai Ioniq 5/6 have recessed door handles.

The Mercedes EQS has recessed door handles.

The Polestar 1 and 3 have recessed, but the Polestar 2 does not.

The Volvo XC40 has recessed handles. The XC30 and XC90 have regular handles.

VW's EVs tend to have regular door handles.

Seeing as Teslas make up the majority of EVs on the roads in the US, and they all have recessed handles, by definition the majority of EVs in the US have recessed handles. By number of models, it's kind of mixed, but there seem to be more models with recessed handles than regular ones.


The claim was for the aerodynamics, not what models have which.


Serial Chesterton's Fence violators, due to arrogance and ego.


Agreed. There's a lot of blaming 18-year-olds for taking out loans (which many of them were encouraged to take) to pursue degrees that might not earn enough to pay them off (while they were encouraged to follow their heart).

The real issue is that schools are too expensive. It shouldn't be a mistake that takes a lifetime to pay back if you pursue a degree that "the marketplace" doesn't reward with a six- or seven-figure salary.


If I'm understanding what you're looking for, a straightforward version of this is built into macOS: https://support.apple.com/guide/mac-help/replace-text-punctu...


I thought that too, but on second reading, I think they want text they didn't type to be expanded. IOW, they want "IOW" to be replaced by "in other words", even if it appears in email messages they receive.

A way to do that would be with a font that has ligatures for the abbreviations you want to see expanded, but that would only work if you can control what fonts applications use (in theory, replacing all fonts on your system with ones containing those ligatures would help, but that already wasn't very feasible before Apple locked down the OS).

I also think they would want something more intelligent, as many, many abbreviations have multiple expansions. They wouldn't want to see "in other words" if the writer meant the Chiwere language or the Isle of Wight (https://en.wikipedia.org/wiki/IOW).


It seems that many of the arguments against this system rely on believing governments (or similarly powerful groups) will strong-arm Apple and fabricate evidence.

I fail to see how this system (which I'm not a big fan of, to be clear) makes it easier if you assume governments can and will fabricate evidence. Doesn't seem you need an iPhone, smartphone, or to have owned or done anything at all in that case.

