
Friday Fun Thread for March 22, 2024

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), nor is it for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


Looks like AI Music is having its ChatGPT moment: https://app.suno.ai/

Lyrics, long (2-minute) songs, different languages, quite high quality. If you're too lazy to write your own lyrics, ChatGPT will do it for you, which gives the results a whiff of terminal genericness. My personal favourites:

Bean Soup: https://app.suno.ai/song/524dd0c0-c4e3-4c94-9968-aaa7c93c6fbc

LOOK MOM I AM A MUSICIAN!!!: https://app.suno.ai/song/aada88a3-9d7f-422a-843d-2a544379d059

You could take this and just slap it in a game, I reckon - not the greatest video game OST of all time, but perfectly decent: https://app.suno.ai/song/4c95e7de-8d99-4db0-af7d-922c274569bd

Per their terms of use, you fully own anything you make while you have their $10 subscription. I think a lot of people are going to lose work over this.

Holy shit. In some sense it was inevitable that this moment would come fast, but it still caught me off guard to listen to the sample songs and hear just how coherent they all are. All the previous AI-generated music I remember hearing was permeated by the stench of AI: weird sonic artifacts, vestiges of some unnatural process taking place in the frequency domain (similar to the artifacts you hear when you watch a video at 2x speed), the equivalent of image generation models’ screwed-up hands.
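For the curious, here’s what that “unnatural frequency-domain process” sounds like in miniature. This isn’t anything Suno does (their internals aren’t public); it’s just a toy sketch in numpy/scipy, with a chirp as a stand-in signal, of reconstructing audio from STFT magnitudes alone, which is where older spectrogram-based generators picked up their phasey, smeared sound:

```python
import numpy as np
from scipy.signal import stft, istft

sr = 22050
t = np.linspace(0, 2.0, int(sr * 2.0), endpoint=False)
x = np.sin(2 * np.pi * (220 + 110 * t) * t)  # upward chirp as a stand-in signal

# Keep only the STFT magnitudes, throwing the phase away -- roughly the
# information older spectrogram-based models had to reconstruct audio from.
_, _, Z = stft(x, fs=sr, nperseg=1024)
mag = np.abs(Z)

# Naive reconstruction with zero phase: buzzy and metallic.
_, x_zero_phase = istft(mag, fs=sr, nperseg=1024)

# Griffin-Lim: iteratively re-estimate a plausible phase from the magnitudes.
# Better, but still audibly smeared next to the original.
rng = np.random.default_rng(0)
est = mag * np.exp(1j * 2 * np.pi * rng.random(mag.shape))
for _ in range(32):
    _, y = istft(est, fs=sr, nperseg=1024)
    y = y[: len(x)]                          # keep length fixed across iterations
    _, _, Y = stft(y, fs=sr, nperseg=1024)
    est = mag * np.exp(1j * np.angle(Y))
_, x_griffin_lim = istft(est, fs=sr, nperseg=1024)
```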

But from the few songs I’ve listened to here, none of that is present whatsoever. It actually sounds like distinct instruments are playing distinct notes. I’m floored. Just from a technical perspective, I gotta wonder how they made such an improvement. The same company apparently released an open text-to-speech model almost a year ago, so I would imagine the overall architecture and pipeline are probably similar, but who knows.

One minor flaw that I noticed is that sometimes the model “loses the plot” and forgets about longer-term structure. Here’s some random song I found on the “Explore” page. If you pay attention, you’ll notice that there’s this neat descending bass thing going on in the intro: BbM7, A7 (with half a bar of the tritone sub Eb7), Am, Dm. The progression continues for four more bars and then repeats, so it still remembers the structure at this point, nice. But then, after the intro ends, the model forgets this initial complexity in the chord progression and instead switches to a more pedestrian “royal road progression” (as I’ve heard it called): BbM7, C, Am, Dm. Goodbye, borrowed chord A7, goodbye tritone sub, goodbye subtle jazzy touches! Looks like human composers will still live another day!…

…Nah, no way. This thing is insane.
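For anyone who wants to sanity-check the tritone-sub claim in the harmony rant above: A7 and Eb7 contain the very same tritone (C#/Db and G), just with the roles of 3rd and 7th swapped, which is what lets one stand in for the other. A tiny toy check in Python:

```python
PITCHES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

def dom7(root: int) -> dict:
    """Root, 3rd, and 7th of a dominant 7th chord on the given pitch class."""
    return {deg: PITCHES[(root + iv) % 12]
            for deg, iv in [("root", 0), ("3rd", 4), ("7th", 10)]}

a7 = dom7(PITCHES.index("A"))
eb7 = dom7(PITCHES.index("A") + 6)  # tritone sub: root a tritone (6 semitones) up

print(a7)   # {'root': 'A', '3rd': 'C#', '7th': 'G'}
print(eb7)  # {'root': 'Eb', '3rd': 'G', '7th': 'C#'}  (C# here = Db enharmonically)
```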

EDIT: Listening to some more songs, there’s gotta be more to the architecture/pipeline than the company’s previous TTS model. Take the clarity of the vocals: it seems like there’s a separate model that generates the vocal track, which is then mixed in with the other tracks. Or maybe not? Maybe you don’t need this inductive bias to generate such clear vocals, and one model can do it all?
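If it really is stems, the mixing step itself would be the trivial part; all the magic is in generating the stems. Purely as a toy illustration of that hypothesized stem-then-mix step (to be clear, nothing about Suno’s internals is public, and the function below is made up for illustration):

```python
import numpy as np

def mix_stems(vocals: np.ndarray, instrumental: np.ndarray,
              vocal_gain_db: float = 0.0) -> np.ndarray:
    """Sum a (hypothetical) generated vocal stem over an instrumental bed.

    Illustrates the stem-then-mix idea speculated about above; not Suno's
    actual pipeline, which isn't public.
    """
    n = min(len(vocals), len(instrumental))   # align stem lengths
    gain = 10.0 ** (vocal_gain_db / 20.0)     # dB -> linear gain
    mix = gain * vocals[:n] + instrumental[:n]
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix  # normalize only if clipping
```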