
Friday Fun Thread for March 22, 2024

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that); this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.

Holy shit. In some sense, it was inevitable that this moment would come fast, but it still caught me off-guard listening to sample songs and hearing just how coherent they all are. All the previous AI-generated music I remember hearing was permeated with the stench of AI: weird sonic artifacts that were vestiges of some unnatural process taking place in the frequency domain (similar to the artifacts you hear when you watch a video at 2x speed), the equivalent of image generation models’ screwed-up hands.
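
For anyone who hasn't heard what I mean by those frequency-domain artifacts: my guess is most of those older systems were generating magnitude spectrograms and approximating the phase, which gives exactly that smeared, metallic quality. You can recreate both flavors of it on any recording you have lying around (just a sketch, assuming librosa and soundfile are installed; "song.wav" is a placeholder path):

```python
# Sketch: recreate the two classic "frequency domain" artifacts described above.
# "song.wav" is a placeholder for whatever audio file you have on hand.
import librosa
import numpy as np
import soundfile as sf

y, sr = librosa.load("song.wav", sr=None, mono=True)

# 1) Phase-vocoder time stretch -- the "video at 2x speed" smearing.
stretched = librosa.effects.time_stretch(y, rate=2.0)
sf.write("song_2x.wav", stretched, sr)

# 2) Throw away the phase and reconstruct it with Griffin-Lim -- the metallic,
#    underwater quality a lot of spectrogram-based generators used to have.
mag = np.abs(librosa.stft(y))
reconstructed = librosa.griffinlim(mag)
sf.write("song_griffinlim.wav", reconstructed, sr)
```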

But from the few songs I’ve listened to here, none of that whatsoever is present. It actually sounds like distinct instruments are playing distinct notes. I’m floored. Just from a technical perspective, gotta wonder how they made such an improvement. The same company apparently released an open text-to-speech model almost a year ago, so I would imagine that the overall architecture and pipeline are probably similar, but who knows.
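
If it really is a descendant of the cascaded token-LM recipe (which, as I understand it, is roughly how their TTS model works, and how a bunch of other recent audio models work), the overall shape would be something like the sketch below. Every class here is a stand-in that spits out dummy data; this is only to show the structure of such a pipeline, not anyone's actual code:

```python
# Hypothetical sketch of a cascaded text-to-audio token pipeline.
# All three stages are placeholders returning random data.
import numpy as np

class SemanticLM:
    """Autoregressive model: prompt text -> coarse 'semantic' token stream."""
    def generate(self, prompt: str, n_tokens: int = 512) -> np.ndarray:
        return np.random.randint(0, 10_000, size=n_tokens)          # placeholder

class AcousticLM:
    """Second stage: semantic tokens -> quantized codec tokens (several codebooks)."""
    def generate(self, semantic: np.ndarray, n_codebooks: int = 8) -> np.ndarray:
        return np.random.randint(0, 1024, size=(n_codebooks, len(semantic) * 2))

class NeuralCodecDecoder:
    """Final stage: codec tokens -> waveform (an EnCodec-style decoder, say)."""
    def decode(self, codes: np.ndarray, sr: int = 32_000) -> np.ndarray:
        return np.random.uniform(-1, 1, size=codes.shape[1] * 320)  # placeholder audio

prompt = "upbeat jazz waltz with female vocals"   # made-up prompt
semantic = SemanticLM().generate(prompt)
codes = AcousticLM().generate(semantic)
audio = NeuralCodecDecoder().decode(codes)
print(audio.shape)
```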

One minor flaw I noticed is that sometimes the model “loses the plot” and forgets about longer-term structure. Here’s some random song I found on the “Explore” page. If you pay attention, you’ll notice there’s this neat descending bass thing going on in the intro: BbM7, A7 (with half a bar of the tritone sub Eb7), Am, Dm. The progression continues for four more bars and then repeats, so it still remembers the structure at this point, nice. But then, after the intro ends, the model forgets this initial complexity in the chord progression and instead switches to a more pedestrian “royal road progression” (as I’ve heard it called): BbM7, C, Am, Dm. Goodbye, secondary dominant A7, goodbye tritone sub, goodbye subtle jazzy touches! (Both progressions are spelled out in the snippet below.) Looks like human composers will still live another day!…

…Nah, no way. This thing is insane.
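
To make the chord-progression gripe concrete: the intro's color comes from tones that sit outside the key (the C# in the secondary dominant A7, the Db and Eb in the tritone sub), while the royal road version is entirely diatonic. Quick sanity check, with the voicings being just my own reading of the clip, nothing official:

```python
# Spelling out the two progressions (key of F major / D minor) to show what gets lost.
F_MAJOR = {"F", "G", "A", "Bb", "C", "D", "E"}

progressions = {
    "intro": {"BbM7": ["Bb", "D", "F", "A"],
              "A7":   ["A", "C#", "E", "G"],     # secondary dominant of Dm
              "Eb7":  ["Eb", "G", "Bb", "Db"],   # tritone sub of A7
              "Am":   ["A", "C", "E"],
              "Dm":   ["D", "F", "A"]},
    "later": {"BbM7": ["Bb", "D", "F", "A"],     # the "royal road": IVM7-V-iii-vi
              "C":    ["C", "E", "G"],
              "Am":   ["A", "C", "E"],
              "Dm":   ["D", "F", "A"]},
}

for section, chords in progressions.items():
    for name, tones in chords.items():
        outside = [t for t in tones if t not in F_MAJOR]
        print(f"{section:5s} {name:5s} out-of-key tones: {outside or 'none'}")
```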

EDIT: Listening to some more songs, there’s gotta be more to the architecture/pipeline than the company’s previous TTS model. Take the clarity of the vocals: it seems that there’s a separate model that generates the vocal track, which is then mixed in with other tracks. Or maybe not? Maybe you don’t need this inductive bias to generate such clear vocals, and one model can do it all?
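
If the separate-vocal-model guess is right, the last step could be as mundane as gain-staging two independently generated stems into a single mix. Totally made-up toy sketch (noise standing in for the stems), purely to illustrate the shape of that hypothesis:

```python
# Toy illustration of the "separate vocal model, mixed in afterwards" hypothesis.
# The stems here are synthetic noise; a real system would produce actual audio.
import numpy as np

SR = 32_000          # assumed sample rate
N = SR * 4           # four seconds

rng = np.random.default_rng(0)
instrumental = rng.normal(0, 0.1, N)   # stand-in for the backing-track model's output
vocal        = rng.normal(0, 0.1, N)   # stand-in for a dedicated vocal model's output

# Simple gain-staged sum; a real pipeline might also align, EQ, and compress the stems.
mix = 0.8 * instrumental + 0.6 * vocal
mix /= np.max(np.abs(mix))             # normalize to avoid clipping

print(mix.shape, mix.dtype)
```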