
Friday Fun Thread for February 24, 2023

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), nor is it for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


I just managed to get access to Bing Chat. My experiences thus far:

  • We've been talking about getting a mirror installed in our home (we already have the mirror; it's just a matter of mounting it on a wall so it doesn't fall on our children or anything), so I ask it about this. It first gives me installation tips. Fair enough; I gave it too little information about what I actually wanted done, so I prompt it about finding someone to install a mirror. It recommends a company that seems to manufacture mirrors, but their website is unclear on whether they actually install pre-owned ones. In any case, I had already found a guy through a simple Google search.

  • I've been testing OpenAI on some questions about slightly obscure Finnish history, many of which it gets egregiously wrong. I run some of these questions on Bing. It gets them more correct, evidently thanks to its web search capability, but still commits some flagrant errors. Perhaps more on this later.

  • I ask it for a Chinese restaurant in my hometown (as a test; I'm not actually feeling like Chinese at the moment). It gives me the top-listed restaurants on Tripadvisor. Fair enough; I haven't actually tried the top-listed Tripadvisor Chinese restaurant in my hometown, so I can't say whether it's good or not.

  • I ask it for things to do with kids in my home district. It recommends some stuff in the city centre and... also mentions the home district's actual landmark, a huge, cheap, and forbidding student housing building that (during my student days) was known as the "biggest contraceptive in the world" (you're chatting up a girl, see, and you ask her to come to your place in the building, and it's guaranteed you're not getting laid). This is probably one of the worst places one could think of to take kids for fun, barring, like, actual drug dens or such.

  • Okay, maybe it's indicating that there's actually nothing for kids to do in our district, so I ask it about amusement parks instead. It recommends outdoor parks that are closed in the winter. I prompt it about one of them ("Flowpark") and ask it to recommend something that is open in the winter. It says that Flowpark X (the name of my city) is indeed closed, but that the same city has Flowpark Y, which is open. These are the same park.

As one can see, the practical applications of Sydney in my life have so far been fairly limited.

> I've been testing OpenAI on some questions about slightly obscure Finnish history, many of which it gets egregiously wrong. I run some of these questions on Bing. It gets them more correct, evidently thanks to its web search capability, but still commits some flagrant errors. Perhaps more on this later.

I suppose this is the equivalent of doing the "rigorous research" of literally looking up the first Google result, but this failure mode worries me. I've already heard a bunch of people say "I don't even use Google anymore, I just go to ChatGPT", but when Google serves you an answer, you at least know who published it; you can take their bias into account and decide whether you want to keep looking. If ChatGPT becomes good enough to be trusted 99.9% of the time, and it's only the remaining 0.1% of cases that it gets egregiously wrong (whether by design or by accident), will people be able to spot indications that the information is inaccurate, or will they just accept the false information as true?

Again, the problem itself is nothing new, but I think we're pushing things in a direction that makes people even easier to manipulate by the powers that be (or by stupid mistakes on obscure topics).

Bing tries to provide references.