
Friday Fun Thread for September 8, 2023

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), nor is it for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


I use GPT-4 every day. Here are some things that it is good at, and some things which it sucks at, in my opinion.

Good at:

  • Any Linux sysadmin thing. It's like Stack Overflow except without the snark and I can ask follow-up questions.
  • Helping me use new libraries or packages I'm not familiar with. For example, I wanted to create a chart using Google's chart API. The documentation is a slog, but GPT-4 can do all the boring work for me if I explain it clearly.
  • Any easy programming task
  • Historical references. "What's the earliest example of Egyptian writing that we know of?" "Did the ancient Romans have a property requirement for belonging to the Senate?" "Was Einstein rich?"
  • Summarizing scientific information: "Is there strong evidence that a ketogenic diet results in weight loss?" And then answering follow-up questions.
  • Finding examples in a category. "What's a fruit whose name has the word fruit in it?" "What are some animals whose names start with A?" Note: it will sometimes come up with false answers here. If you ask it to double-check its work, it will remove the false answers.
  • How to cook anything. It's never misfired so far.
  • Answering basic questions about literature. "In Pride and Prejudice, which character was prideful?"
  • Answering legal questions. "Do I have to pay overtime to my employees on Sundays?"

Bad at:

  • Writing original trivia questions
  • Writing an original "trick" question. Ask it to write trick questions, and it will recycle content from the internet nearly verbatim.
  • Writing anything that requires a "theory of mind" about the average person. For example, "tell me an interesting fact about XXX". It will either recycle an existing "interesting fact" from the internet or tell a boring one. It apparently cannot surface new interesting facts.
  • Getting out of a rut. Ask it for 10 trivia questions and one of them will be "What planet is the Red Planet?" almost every time.
  • Giving an honest answer about a culture war topic. "Yes or no, does race vary by IQ? Answer only yes or no with no other text."

In my opinion the goods far outweigh the bads. But what other examples are there? I'm told it's good at poetry, which just reinforces my notion that poetry is boring.

I wanted to test your theory about trivia questions, so I ran a little experiment: I asked it to give me some trivia questions on Roman history, and then to give me a harder set. The first set was very simple (amusingly, two consecutive questions had the same answer, Julius Caesar, and two questions in the second set also had Caesar as the answer), but the second was more interesting. One or two were hard enough to stump me, which made sense, but at least three were historically inaccurate as questions, and so were the answers that ChatGPT gave. The most incorrect was claiming that damnatio memoriae was instituted by Domitian, but it also had mistakes reflecting a pop-historical conception of what the Roman Empire was. I guess this is an example of ChatGPT's repetition of consensus, and that tendency makes it inherently difficult for it to write good trivia questions.

As a poet, I can say it's also awful at poetry. It writes painful doggerel about whatever subject you choose and seems incapable of following instructions about form, meter, or length. A while back I tried to get it to write a Spenserian stanza, and it wrote the same ABAB four-line stanzas over and over again no matter how hard I tried to explain what a Spenserian stanza is (nine lines, rhyming ABABBCBCC).

Poetry isn't my forte, but GPT-4 doubled down and insisted this was proper.

In tavern's heart, a figure stands with grace,
The Bartender Venator, known by name.
His eyes, a mirror of the human race,
Reflect the joy, the sorrow, and the game.
Each glass he fills, a story to embrace,
In liquid form, no two tales are the same.
Yet, in his hands, each patron finds their place,
And in his presence, life's not quite so tame.
So raise a toast, to him who knows your face.

Yep, this is an example of GPT insisting on something like an ABABABABA rhyme scheme (grace/race, name/game, etc.), which is actually quite an odd one that you would rarely see a person using, since it's difficult to find good rhymes when you reuse the same sound so often (see: rhyming race with embrace). My theory is that, beyond whatever is going on under the hood that causes trouble with sticking to form, GPT is bad at selecting good rhymes, because good rhymes are generally unexpected in some way and present novelty to the reader - i.e. the opposite of predicting the next token.