Love the site, and impressed by what they generated there. With that said... I'm starting to feel like music might be the last thing to be affected by Generative AI.

I IV V with different accents over the music and different drum sounds is fine, but that's not really music. It's pretty bad when you can pick out the chord progression in 5 seconds. Cue the infamous 4-chord song skit by Axis of Awesome.
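
For the non-musicians: I, IV and V just name the chords built on the first, fourth and fifth notes of the major scale, so in C major that progression is C, F and G. Here's a rough Python sketch of how mechanical it is (pure scale arithmetic, no music library; the sharps-only spelling of note names is my own simplification):

    # Spell out a I-IV-V progression in any major key.
    # Note names are simplified to sharps only.
    NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # interval pattern of a major scale

    def triad(key, degree):
        """Build the triad on a 1-based scale degree of a major key."""
        root = NOTES.index(key)
        scale = [(root + step) % 12 for step in MAJOR_STEPS]
        return [NOTES[scale[(degree - 1 + offset) % 7]] for offset in (0, 2, 4)]

    for degree in (1, 4, 5):  # the I-IV-V progression
        print(degree, triad("C", degree))
    # 1 ['C', 'E', 'G']   4 ['F', 'A', 'C']   5 ['G', 'B', 'D']

Swap the accents and the drum sounds and those same three triads are still cycling underneath, which is exactly why they're so easy to pick out.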






I'm actually of the exact opposite opinion. With images / text / video, I am much more able to differentiate between AI and human work.

However, in music there is so much badly done human music as well that for me it's nearly impossible to tell the difference between badly done human music and high-fidelity AI music (predictable chord progressions happen just as often in human music). Moreover, I have put Suno AI on playlist mode before and it's actually been enjoyable, and I am a big AI sceptic! Sometimes even more enjoyable than Spotify's own playlists (although Spotify has been accused of putting AI music on playlists as well - but I am fairly sure the weak stuff that put me off was by humans. Did I mention I cannot tell the difference?).

This is especially true in genres like Japanese Vocaloid, power metal, and some country, where genre-specific conventions dominate the piece: AI does a very good job of mimicking the best of the best, and it puts meagre efforts to shame.

Here is one AI song I generated in an earlier version of Suno - let me know if anything stands out as AI: https://www.youtube.com/watch?v=I5JcEnU-x3s

and another I recorded in my studio with an artist: https://www.youtube.com/watch?v=R6mJcXxoppc


Agreed. I'm a musician too, and especially for popular music, AI music is as complex as, and often indistinguishable from, music created by humans.

Kind of sad, especially for composers (which is what I'm trying to be). Ah well, can only keep moving forward.


I'm only an occasional hobbyist, but I'm super excited about how AI can empower me to realize ideas I want but which are beyond my ability and/or not possible using normal tools. I really think we'll see a revolution in music theory once it's easy to incorporate microtonal scales, multiple tempos, and other crazy stuff.

Also, as we blur the line between instrument and audio: why can't my piano morph into an organ over the course of a piece? (I'm familiar with the Korg Morpheus and similar; I mean this in a much more real sense.)


I do agree with you about AI music powering a revolution - it really does create some amazing music, and it's still early in the technology. But for those of us musicians who studied for literally decades to learn all these techniques (harmony, counterpoint, musical form, polyrhythms, piano, partimento...), it's painful to see others create in seconds a piece of music that took us a lifetime to learn how to write. (I write mainly classical music, and it was a punch in the gut when I first heard Udio's classical music generation. Very impressive - sigh.)

And no disrespect towards anyone using AI to create music - it is here and unstoppable - but I don't currently use generative AI in my own music. I think that for works performed for a live audience (at least in classical music), most people still want to hear music composed by humans. Hopefully it will stay that way for a while; otherwise I've been going down a road that leads nowhere. Ah well, wouldn't be the first time :)


100% agreed. But I think it is like any other revolution that pushes human creativity to a higher level of abstraction.

Someone on HN, don’t remember who, made the observation that some artists mistake mastery of tools for the art, whereas artists who focus on the actual art can roll with changes to the tools.


Agree with this for the most part. In fact, I write my pieces by hand on paper, use notation software (Dorico) to create the sheet music, and listen to the piece using playback software (NotePerformer) connected to multiple virtual instruments (BBC Symphony Orchestra, etc.). These are all wonderful tools and make writing music easier. For me at least, they're not absolutely necessary, but they're definitely helpful.

But AI isn't just a tool; it's actually generating musical ideas at a highly finished level. For the first time, we have something that takes over a substantial amount of the creativity used to write a piece - a process which has always been the ___domain of people - and it's doing it at levels close to what the most skilled humans can do. Yeah, this isn't just something that aids creation; it's doing the creating itself.

Maybe one day I'll use AI to create substantial amounts of the music I write, but I'm not nearly at that point yet - I don't think most classical concert audiences want to go to a concert hall to hear AI-generated music, but that may change. Guess we'll have to wait and see.


> I IV V with different accents over the music and different drum sounds is fine, but that's not really music

Music is more about the human who made it and their relation to you than about the sound properties themselves. Same as other art. The more indirect the music-making process, and the further you are from the lived experience of the human creator, the less it resembles art. I feel art is a spectrum rather than a binary switch, and the metric is how much direct human involvement the listening experience had, in terms you can relate to.

Remove the human completely and you just have sound. It is likely that something like bebop, gabber or industrial synthwave would have been considered "sounds" rather than art by medieval folks or Mesopotamian people if they heard it without knowing whether the source was human or not. Same with us: if we were to hear some music from the year 3200 or 4500, we would likely not consider it music.


That's exactly what I'm feeling. GenAI for images or text is useful - it feels like the output can be added to things or accomplish a purpose. GenAI music feels like sound (as you put it): great, it's there, but it's not music.

Suno is way more than that. Listen to the jazz tags or something; you're being way too dismissive here.

But why would I ever want to listen to something generated by someone else when I can just generate an infinite amount of the same stuff myself?

Because it sounds good. And you can do both.

But what's the point of spending time and thought on music prompted by others? What I can generate sounds exactly as good and has the exact same value...

Can't you say the same thing about why you'd listen to human jazz musicians?

I am not a jazz musician. I can't generate their music - all the nuances, the improvisation, the feeling, in other words the human factor of their craft - one-to-one with the press of a button.

It's not replacing the music I listen to, but it's definitely capable of replacing the random music I hear on the radio.


