
Friday Fun Thread for September 8, 2023

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.

I wanted to test your theory about trivia questions, so I ran a little experiment: I asked it to give me some trivia questions on Roman history, and then to give me a harder set. The first set was very simple (amusingly, two consecutive questions had the same answer, Julius Caesar, and two questions in the second set also had Caesar as the answer), but the second was more interesting. One or two were hard enough to stump me, which made sense, but at least three were historically inaccurate as questions, and so were the answers ChatGPT gave. The most glaring error was its claim that damnatio memoriae was instituted by Domitian, but it also made mistakes reflecting a pop-historical conception of what the Roman Empire was. I guess this is an example of ChatGPT's tendency to repeat consensus, and that tendency makes it inherently bad at writing good trivia questions.

Speaking as a poet: it's also awful at poetry. It writes painful doggerel about whatever subject you choose, and it seems incapable of following instructions about form, meter, or length. A while back I tried to get it to write a Spenserian stanza, and it produced the same ABAB four-line stanzas over and over again, no matter how hard I tried to explain what a Spenserian stanza is (nine lines, rhymed ABABBCBCC).

Poetry isn't my forte, but GPT-4 doubled down and insisted this was proper:

In tavern's heart, a figure stands with grace,
The Bartender Venator, known by name.
His eyes, a mirror of the human race,
Reflect the joy, the sorrow, and the game.
Each glass he fills, a story to embrace,
In liquid form, no two tales are the same.
Yet, in his hands, each patron finds their place,
And in his presence, life's not quite so tame.
So raise a toast, to him who knows your face.

Yep, this is an example of GPT insisting on something like an ABABABABA rhyme scheme (grace/race, name/game, etc.). That's quite an odd scheme that you'd rarely see a person use, since reusing the same rhyme sound so often makes it hard to find good rhymes (see: rhyming "race" with "embrace"). My theory is that, beyond whatever is going on under the hood that causes trouble with sticking to a form, GPT is bad at selecting good rhymes, because a good rhyme is generally unexpected in some way and presents novelty to the reader - i.e. the opposite of predicting the most likely next token.
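
If you want to check that scheme yourself, here's a rough sketch in Python using the pronouncing library (a wrapper around the CMU pronouncing dictionary; assuming it's installed and covers these words). It groups the line-final words by their rhyming part and prints the resulting scheme:

    import pronouncing

    # Final word of each line in the stanza quoted above
    last_words = ["grace", "name", "race", "game", "embrace",
                  "same", "place", "tame", "face"]

    def rhyme_key(word):
        # Rhyming part = phonemes from the last stressed vowel onward,
        # taken from the word's first CMU-dictionary pronunciation.
        phones = pronouncing.phones_for_word(word)
        return pronouncing.rhyming_part(phones[0]) if phones else word

    # Assign a letter to each distinct rhyme sound, in order of appearance
    letters = {}
    scheme = ""
    for word in last_words:
        key = rhyme_key(word)
        if key not in letters:
            letters[key] = chr(ord("A") + len(letters))
        scheme += letters[key]

    print(scheme)  # prints ABABABABA for the stanza above

Of course this only confirms the pattern - it can't tell a good rhyme from a lazy one, so grace/embrace counts as an A just like any other.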