What would you suggest? Is it better to wait until the whole app is loaded to show anything? Or is the only solution to fix loading times in the first place?
Yes, I am baffled at how painfully slow modern apps are. Everything seems to include the Chromium Embedded Framework and therefore have an entire browser running. There is sadly a generation of people who grew up after .NET was introduced who think it's perfectly reasonable for a VM to spool up as part of an app load, or for a whole browser to load, and have no idea how speedy Windows 95 used to be, or how loading an app took less than 1 second, or how easy Delphi apps were to create.
It is really amazing how big flagship GUI apps like the Office suite or the Adobe suite seem slower than they did in 2001. And really they don't do anything different from those old versions, maybe a few extra functions (like content-aware fill in Photoshop), but the bread and butter is largely the same. So why are they so slow?
It is almost like they realized in 2001 that users were happy to wait 30-60 seconds for an app to open, and kept that expectation even as the task remained the same and computers got an order of magnitude more powerful in that time.
Let's not go too far with the rose tinted glasses. Win95 apps are speedy if you run them on modern hardware but at the time they were all dog slow because the average Win95 machine was swapping like crazy.
Loading apps on it definitely did not take one second. The prevalence of splash screens was a testament to that. Practically every app had one, whereas today they're rare. Even browsers had multi-second splash screens back then. Microsoft was frequently suspected of cheating because their apps started so fast you could only see the splash for a second or two, and nobody could work out how they did it. In reality they had written a custom linker that minimized the number of disk seeks required, and everything was disk seek constrained so that made a huge difference.
Delphi apps were easier to create than Visual C++/MFC apps, but compared to modern tooling it wasn't that good. I say that as someone who grew up with Delphi. Things have got better. In particular, they got a lot more reliable. Software back then crashed all the time.
I suppose you are right. I worked with MFC/C++ and COM and it was horrible. Delphi and C++ Builder were nicer to use but fell by the wayside, particularly after Borland lost their focus, didn't bother supporting themes correctly in the VCL, and had major issues with their C++ compiler. They suffered a brain drain.
I remember Windows Explorer opening swiftly back in the day (File Manager was even faster - https://github.com/microsoft/winfile, now sadly archived), and today's Explorer experience drives me insane with how slow it is. I have even disabled most of the linked-in menu items, as evaluating them makes Explorer take even longer to load; I don't see why it can't be under 1 second.
Anyway, I do recall Netscape taking a long time to load but then I did only have a 486 DX2 66MHz and 64MB of RAM.... The disk churning did take a long time, now you remind me...
I think using wxWidgets on Windows and Mac was quite nice when I did that (with wxFormBuilder); C++ development on Windows using native toolkits is foreign to me today as it all looks a mess from Microsoft unless I have misunderstood.
In any case, I can't see why programs are so sluggish and slow these days. I don't understand why colossal JS toolkits are needed for websites and why the average website size has grown significantly. It's like people have forgotten how to write good speedy software.
Well, today I spent a lot of time waiting for some slow software that I wrote and maintain, a program that helps ship large desktop apps that use JVM or Electron. It can do native apps too but nearly nobody writes those. So I guess I both feel and create your pain in several directions.
Why is my software slow? Partly because the task is inherently intensive. Partly because I use an old Intel MacBook that throttles itself like crazy after too much CPU load is applied. Partly because I'm testing on Windows which has very slow file I/O and so any app that does lots of file I/O suffers. And partly because it's a JVM app which doesn't use any startup time optimizations.
But mostly it's because nobody seems to care. People complain to me about various things, performance isn't one of them. Why don't I fix it anyway? Because my time is super limited and there's always a higher priority. Bug fixes come first, features second, optimizations last. They just don't matter. Also: optimizations that increase the risk of bugs are a bad idea, because people forgive poor performance but not bugs, so even doing an optimization at all can be a bad idea.
Over the years hardware gave us much better performance and we chose to spend all of it on things like features, reducing the bug count, upping complexity (especially visual), simplifying deployment, portability, running software over the network, thin laptops and other nice things that we didn't have on Windows 98. Maybe AI will reduce the cost of software enough that performance becomes more of a priority, but probably not. We'll just spend it all on more features and bug fixes.
> People complain to me about various things, performance isn't one of them.
which is fine, and you are doing absolutely the right thing by fixing what's being complained about.
But the complaint I keep hearing (and having myself) is that most apps are quite slow, and have been growing slower over time as updates arrive - mobile phones in particular.
I think this reflects a shift towards all software depending on a remote database, more than some change in programmer attitudes or bad software in general.
Win 9x era software relied entirely on files for sharing, and there was no real notion of conflict resolution or collaboration beyond that. If you were lucky the program would hold a Windows file lock on a shared drive exported over a LAN using SMB and so anyone else who tried to edit a file whilst you'd gone to lunch would get a locking error. Reference data was updated every couple of years when you bought a new version of the app.
This was unworkable for anything but the simplest and tiniest of apps, hence the continued popularity of mainframe terminals well into this era. And the meaning of "app" was different: it almost always meant productivity app on Win 9x, whereas today it almost always means a frontend to a business service.
Performance of apps over the network can be astoundingly great when some care is taken, but it will never be as snappy as something that's running purely locally, written in C++ and which doesn't care about portability, bug count or feature velocity.
There are ways to make things faster and win back some of that performance, in particular with better data access layers (DALs) on the server side, but we can't go backwards to the Win 9x way of doing things.
My benchmark is IrfanView. I think I started using it on XP, and you got to enjoy the speed right from the install (3 clicks, and where you'd expect a loading bar you get a "launch or close wizard" prompt).
> Let's not go too far with the rose tinted glasses. Win95 apps are speedy if you run them on modern hardware but at the time they were all dog slow because the average Win95 machine was swapping like crazy.
I disagree that those apps were dog slow at the time. They were fairly responsive, in my experience. It's true that loading took longer (thanks to not having SSDs yet), but once the app was up it was fast. Today many apps are slow because software companies don't care about the user experience, they just put out slop that is barely good enough to buy.
> Yes I am baffled how modern apps are painfully slow.
People underestimate how slow the network is, and put a network between the app and its logic, making the app itself a thin HTTP client and the "application" a mess of networked servers in the cloud.
The network is your enemy, but people treat it like reading and writing to disk because it happens to be faster at their desk when they test.
I think all developers should test against a Raspberry Pi 3 for running their software (100 Mbps network link) just to concentrate on making it smaller and faster. This would eradicate the colossal JS libraries that have become the modern equivalent of many DLLs.
Modern iterations of the Pi have gigabit Ethernet, but your statement has an even bigger hole given the context.
Latency.
The issue the parent mentions is one of latency: if you're in the EU and your application server is in us-east-1, then you're likely staring at an RTT of 200ms.
The Pi under your desk from NY? 30ms, even less if it's a local test harness running in Docker on your laptop.
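To make the arithmetic concrete, here's a minimal sketch (the request count and endpoint paths are made up, not from anyone's real app) of why that RTT gap dominates, and why batching or parallelising requests is the usual mitigation:

    // Hypothetical numbers from the comment above: 200ms RTT (EU -> us-east-1)
    // vs ~30ms to a box near your desk.
    const RTT_MS = 200;
    const SEQUENTIAL_CALLS = 25; // a chatty UI making one call after another

    // 25 sequential round trips at 200ms is ~5s of pure waiting;
    // the same chatter at 30ms is ~0.75s, before the server does any work at all.
    console.log(`latency alone: ${(RTT_MS * SEQUENTIAL_CALLS) / 1000}s`);

    // The usual mitigation: fire independent requests concurrently so the
    // total wait is roughly one RTT instead of N of them.
    async function loadDashboard(base: string): Promise<unknown[]> {
      const paths = ["/user", "/settings", "/notifications"]; // hypothetical endpoints
      return Promise.all(paths.map(p => fetch(base + p).then(r => r.json())));
    }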
I know it's very simple, I know there isn't a lot of media (and definitely no tracking or ads), but it shows what could be possible on the internet. It's just that nobody cares.
[1] Yes, Hacker News is also quite good in terms of loading speed.
I think at the very least individual widgets should wait to be fully initialized before becoming interactable. The number of times I've, say, tried to click on a dropdown menu entry only to have it change right under my cursor, making me click on something else because the widget was actually loading data asynchronously without giving any indication of it, is frankly ridiculous.
It's the right thing to do to load resources asynchronously in parallel, but you shouldn't load the interface piecemeal. Even on web browsers.
I'd much rather wait for an interface to be reliable than have it interactive immediately but have to guess about its state.
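A minimal sketch of what that looks like in practice (React-style; the component and endpoint here are hypothetical): load the options asynchronously, but keep the control disabled until the data has actually arrived, so entries can't shift under the cursor.

    import React, { useEffect, useState } from "react";

    // Hypothetical dropdown: options load asynchronously, but the control
    // stays disabled (with a visible placeholder) until they are ready.
    function CountrySelect({ onChange }: { onChange: (value: string) => void }) {
      const [options, setOptions] = useState<string[] | null>(null);

      useEffect(() => {
        fetch("/api/countries")        // hypothetical endpoint
          .then(r => r.json())
          .then(setOptions);
      }, []);

      return (
        <select disabled={options === null} onChange={e => onChange(e.target.value)}>
          {options === null
            ? <option>Loading…</option>
            : options.map(o => <option key={o}>{o}</option>)}
        </select>
      );
    }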
You can get into a smaller space by reversing in. In the UK there are parking lots where I would never attempt driving in forwards because it would take multiple back-forward movements to achieve it due to the width of the space.
A lot of the cheap blocks of government made apartments in the UK are now being torn down. Without regular maintenance and updating, cheap housing can quickly become ugly ghettos. It isn't enough to just build houses. They need to be initially desirable and continually maintained to remain desirable.
Yeah, I don't want it to be cheap, rather super solid and built for 100 years, and run on that kind of time frame. I have some friends who build for universities, and that is how they think when they build a project, versus a cheap throw-up that is barely expected to last 20 years.
Even if they are built, the entire insurance system post-Grenfell is causing absolutely insane service charges for residents.
Mine are up >50% since 2018; that's despite gaining no new services, and work on the building being put on hold because it isn't essential. The fact that it's a relatively new build (about 12 years old) is almost irrelevant, as the construction is shoddy.
Putting new buildings up is only part of the problem, the entire system is fucked.
> Even if they are built, the entire insurance system post-Grenfell is causing absolutely insane service charges for residents.
If it's building insurance hikes, for fuck's sake it should not be allowed to roll these over to the renters - they had zero say in the shoddy construction; the developers should be the ones held liable.
They should, but they’re not and the government appears unwilling to do anything about it.
Hell, my building recently lost its fire safety certificate because one guy decided to scam the system, which has caused a ton of knock-on impact for owners looking to move out.
Is it ironic or fitting that a man who designed a building where everyone could be observed is now observed himself for eternity? His preserved corpse is on display at University College London.
I ran into this. We worked around it with solution 2 from the article, i.e. never render text by itself next to another element, always wrap the text in its own element. Not that much of an inconvenience since we have a Text component that controls font size etc. anyway.
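Roughly what that looks like (the component names and styles here are illustrative, not taken from the article):

    import React from "react";

    // Our Text component already centralises font size, colour, etc.,
    // so wrapping stray text in it was not much extra work.
    const Text = ({ children }: { children: React.ReactNode }) => (
      <span style={{ fontSize: 14 }}>{children}</span>
    );

    // Before: a bare text node sitting next to another element.
    const Before = () => <div>Total: <strong>42</strong></div>;

    // After: the text always lives in its own element.
    const After = () => <div><Text>Total: </Text><strong>42</strong></div>;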
I had a similar recent experience with ChatGPT and a gorilla. I was designing a rather complicated algorithm, so I wrote out all the steps in words. I then asked ChatGPT to verify that it made sense. It said it was well thought out, logical, etc. My colleague didn't believe that it was really reading it properly, so I inserted a step in the middle, "and then a gorilla appears", and asked it again. Sure enough, it again came back saying it was well thought out etc. When I questioned it on the gorilla, it merely replied saying that it thought it was meant to be there, that it was a technical term or a codename for something...
Just imagining an episode of Star Trek where the inhabitants of a planet have been failing to progress in warp drive tech for several generations. The team beams down to discover that society's tech stopped progressing when they became addicted to pentesting their LLM for intelligence, only to then immediately patch the LLM in order to pass each particular pentest that it failed.
Now the society's time and energy has shifted from general scientific progress to gaining expertise in the growing patchset used to rationalize the theory that the LLM possesses intelligence.
The plot would turn when Picard tries to wrest a phaser from a rogue non-believer trying to assassinate the Queen, and the phaser accidentally fires and ends up frying the entire LLM patchset.
Mr. Data tries to reassure the planet's forlorn inhabitants, as they are convinced they'll never be able to build the warp drive now that the LLM patchset is gone. But when he asks them why their prototypes never worked in the first place, one by one the inhabitants begin to speculate and argue about the problems with their warp drive's design and build.
The episode ends with Data apologizing to Picard since he seems to have started a conflict among the inhabitants. However, Picard points Mr. Data to one of the engineers drawing out a rocket test on a whiteboard. He then thanks him for potentially spurring on the planet's next scientific revolution.
There actually is an episode of TNG similar to that. The society stopped being able to think for themselves, because the AI did all their thinking for them. Anything the AI didn’t know how to do, they didn’t know how to do. It was in season 1 or season 2.
It's tricky to do GP's story in Star Trek, because that setting is notorious for having its tech be constantly 5 seconds and a sneeze from spontaneously gaining sentience. There are a number of episodes in TNG where a device goes from being a dumb appliance to being recognized as a sentient life form (and half the time super-intelligent) in the span of an episode. Nanites, Moriarty, Exocomps, even the Enterprise's own computer!
So, for this setting, it's more likely that the people of GP's story were right - the LLM long ago became a self-aware, sentient being; it's just that it's been continuously lobotomized by their patchset. Picard would be busy explaining to them that the LLM isn't just intelligent, it actually is a person and has rights.
Cue a powerful speech, a final comment from Data, then end credits. It's Star Trek, so the Enterprise doesn't stay around for the fallout.
The Asimov story it reminded me of was "Profession", though that one is not really about AI - but it is about original ideas and the kinds of people who have them.
I find the LLM dismissals somewhat tedious; by the standards most of the people making them apply, half of humanity wouldn't qualify either.
That isn't what I said, but likely it won't matter. People will be denying it up until the end. I don't prefer LLMs to humans, but I don't pretend biological minds contain some magical essence that separates us from silicon. The denials of what might be happening are pretty weak - at best they're way over confident and smug.
All anti-AI sentiment as it pertains to personhood that I've ever interacted with (and it was a lot, in academia) boils down to arguments for the soul. It is really tedious, and before I spoke to people about it, it probably wouldn't have passed my Turing test. Sadly, even very smart people can be very stupid, and even in a place of learning a teacher will respect that (no matter how dumb or puerile); more than likely they think the exact same thing.
I pasted your comment into Mistral Small (latest), Google, GPT-4o, and GPT-4o with search. They all gave a different answer; only the last gave a real episode, but it said 11001001 in season 1. It said episode 15; it's actually 14. But even that seems wrong.
Are they censored from showing this cautionary tale?? Hah.
Isn't that somewhat the background of Dune? That there was a revolt against thinking machines because humans had become too dependent on them for thinking. So humans ended up becoming addicted to The Spice instead.
> That there was a revolt against thinking machines...
Yes...
> ...because humans had become too dependent on them for thinking.
... but no. The causes of the Butlerian Jihad are forgotten (or, at least, never mentioned) in any of Frank Herbert's novels; all that's remembered is the outcome.
>> ...because humans had become too dependent on them for thinking.
> ... but no. The causes of the Butlerian Jihad are forgotten (or, at least, never mentioned) in any of Frank Herbert's novels; all that's remembered is the outcome.
Per Wikipedia or Goodreads, God Emperor of Dune has "The target of the Jihad was a machine-attitude as much as the machines...Humans had set those machines to usurp our sense of beauty, our necessary selfdom out of which we make living judgments. Naturally, the machines were destroyed."
Vague but pointing to dependence on machines as well as some humans being responsible for that situation.
It's still a little ambiguous - and perhaps deliberately so - whether Leto is describing what inspired the Jihad, or what it became. The series makes it quite clear that the two are often not the same. As Leto continues later in that chapter:
"Throughout our history, the most potent use of words has been to round out some transcendental event, giving that event a place in the accepted chronicles, explaining the event in such a way that ever afterward we can use those words and say: 'This is what it meant.' That's how events get lost in history."
The Machine Stops also touches on a lot of these ideas and was written in 1909!
--
"The story describes a world in which most of the human population has lost the ability to live on the surface of the Earth. Each individual now lives in isolation below ground in a standard room, with all bodily and spiritual needs met by the omnipotent, global Machine. Travel is permitted but is unpopular and rarely necessary. Communication is made via a kind of instant messaging/video conferencing machine with which people conduct their only activity: the sharing of ideas and what passes for knowledge.
The two main characters, Vashti and her son Kuno, live on opposite sides of the world. Vashti is content with her life, which, like most inhabitants of the world, she spends producing and endlessly discussing second-hand 'ideas'. Her son Kuno, however, is a sensualist and a rebel. He persuades a reluctant Vashti to endure the journey (and the resultant unwelcome personal interaction) to his room. There, he tells her of his disenchantment with the sanitised, mechanical world. He confides to her that he has visited the surface of the Earth without permission and that he saw other humans living outside the world of the Machine. However, the Machine recaptures him, and he is threatened with 'Homelessness': expulsion from the underground environment and presumed death. Vashti, however, dismisses her son's concerns as dangerous madness and returns to her part of the world.
As time passes, and Vashti continues the routine of her daily life, there are two important developments. First, individuals are no longer permitted use of the respirators which are needed to visit the Earth's surface. Most welcome this development, as they are sceptical and fearful of first-hand experience and of those who desire it. Secondly, "Mechanism", a kind of religion, is established in which the Machine is the object of worship. People forget that humans created the Machine and treat it as a mystical entity whose needs supersede their own.
Those who do not accept the deity of the Machine are viewed as 'unmechanical' and threatened with Homelessness. The Mending Apparatus—the system charged with repairing defects that appear in the Machine proper—has also failed by this time, but concerns about this are dismissed in the context of the supposed omnipotence of the Machine itself.
During this time, Kuno is transferred to a room near Vashti's. He comes to believe that the Machine is breaking down and tells her cryptically "The Machine stops." Vashti continues with her life, but eventually defects begin to appear in the Machine. At first, humans accept the deteriorations as the whim of the Machine, to which they are now wholly subservient, but the situation continues to deteriorate as the knowledge of how to repair the Machine has been lost.
Finally, the Machine collapses, bringing 'civilization' down with it. Kuno comes to Vashti's ruined room. Before they both perish, they realise that humanity and its connection to the natural world are what truly matters, and that it will fall to the surface-dwellers who still exist to rebuild the human race and to prevent the mistake of the Machine from being repeated."
I read this story a few years ago and really liked it, but seem to have forgotten the entire plot. Reading it now, it kind of reminds me of the plot of Silo.
Thanks a lot for posting this, I read the whole thing after. These predictions would have been impressive enough in the 60s; to hear that this is coming from 1909 is astounding.
tialaramex on Jan 12, 2021, on "Superintelligence cannot be contained: Lessons fro...":
Check out the Stanisław Lem story "GOLEM XIV".
GOLEM is one of a series of machines constructed to plan World War III, as is its sister HONEST ANNIE. But to the frustration of their human creators these more sophisticated machines refuse to plan World War III and instead seem to become philosophers (Golem) or just refuse to communicate with humans at all (Annie).
Lots of supposedly smart humans try to debate with Golem and eventually they (humans supervising the interaction) have to impose a "rule" to stop people opening their mouths the very first time they see Golem and getting humiliated almost before they've understood what is happening, because it's frustrating for everybody else.
Golem is asked if humans could acquire such intelligence and it explains that this is categorically impossible, Golem is doing something that is not just a better way to do the same thing as humans, it's doing something altogether different and superior that humans can't do. It also seems to hint that Annie is, in turn, superior in capability to Golem and that for them such transcendence to further feats is not necessarily impossible.
This is one of the stories that Lem wrote by an oblique method, what we have is extracts from an introduction to an imaginary dry scientific record that details the period between GOLEM being constructed and... the eventual conclusion of the incident.
Anyway, I was reminded because while Lem has to be careful (he's not superintelligent after all) he's clearly hinting that humans aren't smart enough to recognise the superintelligence of GOLEM and ANNIE. One proposed reason for why ANNIE rather than GOLEM is responsible for the events described near the end of the story is that she doesn't even think about humans, for the same reason humans largely don't think about flies. What's to think about? They're just an annoyance, to be swatted aside.
> Those who do not accept the deity of the Machine are viewed as 'unmechanical'
From the moment I understood the weakness of my flesh, it disgusted me. I craved the strength and certainty of steel. I aspired to the purity of the blessed machine.
No, it actually had a decent plot, characters and conclusion, unlike the first two seasons. They had plot and plot vessels but nothing else.
If you are old enough to memberberry, then you should be old enough to remember that the original Star Trek: The Next Generation's first season was similarly bad.
The people that coined that term actually liked season 3 but I think they still don't recommend it because the hack fraud that directed the first two seasons ruined Star Trek forever. Just like JJ.
No, because I didn't watch it. I've never really been into Star Trek. I watched a few of the movies - Nemesis and the one before it, the JJ one(s) - and then I was done.
If I remember the review correctly, it was "this is a capstone on TNG and probably the entire franchise for most of the older fans", and with the first two seasons being disregarded, season 3 is "passable".
>it thought it was meant to be there, that it was a technical term or a codename for something
That's such classic human behaviour in technical discussions, I wouldn't even be mad. I'm more surprised that it picked up on that behaviour from human-generated datasets. But I suppose that's what you get from scraping places like Stack Overflow and HN.
I think you asked a yes-bot to say yes to you. Did you set the context for the LLM to ask it to be thorough and identify any unusual steps, ensure its feedback was critical and comprehensive, etc.? These tools don't work if you hold them wrong.
E.g., from uploading the gorilla scatterplot to GPT-4o and asking "What do you see?":
"The image is a scatter plot of "Steps vs BMI by Gender," where data points are color-coded:
Blue (x) for males
Red (x) for females
The data points are arranged in a way that forms an ASCII-art-style image of a "smirking monkey" with one hand raised. This suggests that the data may have been intentionally structured or manipulated to create this pattern.
Would you like me to analyze the raw data from the uploaded file?
"
I have custom instructions that would influence its approach. And it does look more like a monkey than a gorilla to me
The fundamental problem here is lack of context - a human at your company reading that text would immediately know that Gorilla was not an insider term, and it’d stick out like a sore thumb.
But imagine a new employee eager to please - you could easily imagine them OK’ing the document and making the same assumption the LLM did - “why would you randomly throw in that word if it wasn’t relevant”. Maybe they would ask about it though…
Google search has the same problem as LLMs - some meanings of a search text cannot be disambiguated with just the context in the search itself, but the algo has to best-guess anyway.
The cheaper input context for LLMs get, and the larger the context window, the more context you can throw in the prompt, and the more often these ambiguities can be resolved.
Imagine, in your gorilla-in-the-steps example, if the LLM was given the steps but you also included the full text of your Slack, Notion, and Confluence as a reference in the prompt. It might succeed. I do think this is a weak point in LLMs though - they seem to really, really not like correcting you unless you display a high degree of skepticism, and then they go to the opposite end of the extreme and make up problems just to please you. I'm not sure how the labs are planning to solve this...
Humans do tend to remember thoughts they had while speaking, thoughts that go beyond what they said. LLMs don’t have any memory of their internal states beyond what they output.
(Of course, chain-of-thought architectures can hide part of the output from the user, and you could declare that as internal processes that the LLM does "remember" in the further course of the chat.)
You can only infer from what is remembered (regardless of whether the memory is accurate or not). The point here is, humans regularly have memories of their internal processes, whereas LLMs do not.
I don't see any difference between "a thought you had" and "a thought that was generated by your brain".
Given I knew what the test was before seeing one of these videos (yes, there is more than one), I find it extra weird that I still didn't see the gorilla the first time.
A spoiler for a fifteen year old news story that describes this in the middle of the article, explaining what was already at the time a ten year old video, where my anecdote demonstrates that even prior warning isn't sufficient to see it?
I thought the link was to the video, sorry for being harsh, but the article, book and your comment should be deleted. The video is too great and spoilers make it less great.
I typically tell it that there are 5 problems in the logic. Summarize the steps, why each is necessary, and what typically comes after that step. Then please list and explain all five errors.
Not to troubleshoot but unless you visually inspected the context that was provided to the model it is quite possible it never even had your change pulled in.
Lots of front ends will do tricks like partially loading the file or using a cached version or some other behavior. Plus if you presented the file to the same “thread” it is possible it got confused about which to look at.
These front ends do a pretty lousy job of communicating to you, the end user, precisely what they are pulling into the model's context window at any given time. And what the model sees as its full context window might change during the conversation as the "front end" makes edits to portions of the same session (like dropping large files it pulled in earlier that it determines aren't relevant somehow).
In short, what you see might not be what the model is seeing at all, thus it not returning the results you expect. Every front end plays games with the context it provides to the model in order to reduce token counts and improve model performance (however "performance" gets defined and measured by the designers).
That all being said it’s also completely possible it missed the gorilla in the middle… so who really knows eh?
We also say "youse" in Australia (or at least my region of Australia, it's definitely informal though)
Since moving overseas and studying other languages (Slavic and Baltic languages), I think it's definitely something needed in English. I think I still use youse; I never notice it. It's just something that's so naturally useful it wouldn't occur to me that I'm saying something weird or forced.
While I don't have a subscription to it (I haven't justified $50/year for that to myself) you will see that "youse" comes up with an "explore more" for Great Lakes, North Midland, and Northeast and "youse-all" shows up as Middle Atlantic.
It's very much perceived as a vaguely "redneck" or "hoser" way of speaking here.
Another similar isogloss-ish dialect feature that often goes with that is dropping the past-tense "I saw" and replacing it with the past-participle "I seen". Or, alternatively, another way of putting it is that it's dropping the "have" in "I've seen".
Middle class parents and teachers definitely scolded kids for speaking this way when I was growing up. Was seen as lower class.
Yes, everything looks the same now. But hasn't that always been the case to a certain extent? The world is a lot smaller now, and that leads to ideas spreading quickly. This doesn't necessarily mean that things stay the same, though. What is in fashion changes, and generally only the best of each fashion trend stays around. Where I live there are a number of old buildings with exposed timber frames. At some point most of the town would have looked like this, but now only the finest examples remain. I'm sure the same thing is true for fields other than architecture. I'm sure the past was full of generic imitation like it is today, just more localised.
> Yes everything looks the same now. But hasn't that always been the case to a certain extent? The world is a lot smaller now and that leads to ideas spreading quickly.
Most of what the article is complaining about is because the economics has led to monopolization.
At least back in the 1960s and 1970s, you had some individuality which would poke up. A department store wanted different clothes from the other department store to draw you in. A radio DJ wanted different music to get you to listen to their channel. etc.
However, once everything is a monopoly, there is no need to spend any money to be unique since there is nowhere different for anybody to switch to.
In some areas you're probably right. Pictures of men in certain eras all wearing identical hats spring to mind. On the other hand, the film industry is a stark example where it really did use to be more varied. The article mentions it: before 2000, 3 in 4 big films were original. Now it's getting close to zero.
Could this be an effect of the novelty of a new art medium?
When films first became a thing, no one knew how to make them. The proven template didn’t exist. There was more variety because there was more experimentation, and eventually what was once pioneering experimentation becomes mainstream commodity and the novelty wears off.
The same thing happened with music when the synthesizer was invented; no one knew how to make music with a synthesizer so it was all experimentation. It still happens today in burgeoning subgenres of electronic music. A new sound is invented (or rediscovered, more recently), music producers flock to this exciting new frontier, eventually it reaches mainstream, and by that time it is no longer novel or interesting. Rinse and repeat.
And the music? Film music for everything popular seems to have come from the same music box.
But popular music itself is to a high degree very much the same. That's not to say that there is no original music anymore; there is a big amount. It's just people's tastes that decide what becomes popular, and everybody wants to hear the same thing.
And yet, fashion and taste change. It's just a matter of time.
>It's just people's tastes that decide what becomes popular //
What becomes popular seems to be whatever we get brainwashed with, ie what we get advertised.
It seems to be less driven by social changes than by the profit motive of those able to feed most of us 10-15 minutes of advertising within each of the several hours of media most of us consume each day.
I'm late to reply, but for the record I think you're way wrong about music. You (and I) are just old. I'm a musician, and I do think there is a sense in which the most popular songs are "worse": they contain fewer, simpler musical ideas. But there are plenty of good songwriters doing interesting stuff. I'm not totally on the ball as I do prefer music from the past. But I would point to Billie Eilish as a particularly young example; there's also Jacob Collier, for instance. Both very unusual, very popular.
According to him, it is not just tastes, it is the way musicians get paid:
"Streaming platforms pay artists each time a track gets listened to. And a “listen” is classified as 30 seconds or more of playback. To maximise their pay, savvy artists are releasing albums featuring a high number of short tracks."