Friday Fun Thread for September 26, 2025

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.

I actually found it comparable to a lot of the early GPT-3-written content. The models could produce grammatically coherent text that mostly stayed on topic; it just didn't go anywhere, and after reading a few paragraphs you'd get pissed at having been duped into reading what was essentially verbal diarrhea.

This music is much the same: it has no purpose and it doesn't go anywhere. It's like 10-second segments stitched together without much thought beyond having smooth transitions and staying in roughly the same genre.

Perhaps decent songs are being produced and it comes down to prompting and iteration, but I haven't heard them yet.

This music is much the same: it has no purpose and it doesn't go anywhere.

What do you mean? Why does music need a purpose? It just needs to sound good. Purpose and meaning are optional.

Now the lyrics, when they're AI-written, can have a samey, obvious vibe. But that's just the lyrics.

Principles such as tension and release, foreshadowing, thematic development, and chord resolution are essential tools for virtually every composer, whether or not the composer is consciously aware of them. While often discussed in a classical context, they apply to pretty much every genre.

This seems like a consistent weakness in current AI-generated music, at least: it often handles short-term musical grammar well but fails to build a cohesive, long-form structure, resulting in a piece that feels aimless and unsatisfying, i.e. "it doesn't go anywhere."

These same concepts of building expectation and providing resolution are fundamental to other time-based art forms, such as literature and film, and to effective communication in general.

I think this feels similar to how earlier text-generating models wrote, and to how current models still struggle with longer-form text. Perhaps the issue is an insufficiently large context window, or perhaps it's something more fundamental to how the models work; I don't know.

It's like 10-second segments stitched together

I'm pretty sure it's more or less exactly that: short segments stitched together.