
Friday Fun Thread for February 24, 2023

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), and this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


I just managed to get Bing Chat in use. My experiences thus far:

  • We've been talking about getting a mirror installed in our home (we already have the mirror; it's just a matter of mounting it on a wall so it doesn't actually fall down on our children or anything), so I ask it about this. It first gives me installation tips. Fair enough, too little information on what I actually want done, so I prompt it about finding a guy to install a mirror. It recommends a company that seems to produce mirrors, but their website is unclear on whether they actually install pre-owned mirrors. Okay, I've already found a guy through a simple Google search anyway.

  • I've been testing OpenAI on some questions about slightly obscure Finnish history, many of which it gets egregiously wrong. I run some of these questions through Bing. It gets them more correct, evidently thanks to its web search capability, but still commits some fairly flagrant errors. Perhaps more on this later.

  • I ask it for a Chinese restaurant in my hometown (as a test; I'm not actually feeling like Chinese at the moment). It gives me the top-listed restaurants on Tripadvisor. Fair enough, I haven't actually tested the top-listed Tripadvisor Chinese restaurant in my hometown, so I can't know whether it's good or not.

  • I ask it for things to do with kids in my home district. It recommends some stuff in the city centre and... also mentions the home district's actual landmark, a huge, cheap, and forbidding student housing building that (during my study days) was known as the "biggest contraceptive in the world" (you're chatting to a girl, see, you ask her to come to your place in the building, and it's guaranteed you're not getting laid). This is probably one of the worst places one could think of to take kids to for fun, barring, like, actual drug dens or such.

  • Okay, maybe it's indicating that there's actually nothing for kids to do in our district, so I ask it about amusement parks. It recommends outdoor parks that are closed in the winter. I prompt it about one of them ("Flowpark") and ask it to recommend something that is open in the winter. It says that Flowpark X (X being the name of my city) is indeed closed, but that the same city has Flowpark Y, which is open. This is the same park.

As one can see, the practical applications of Sydney have been fairly limited, thus far, as far as my life is concerned.

Here's the longer review of the Finnish history questions.

Update: I asked Bing for information about Li Andersson, Finland's education minister, a young left-wing woman. It gave the correct basic info, but when asked about personal details, it not only got her daughter's name wrong but also stated that her husband is Jani Mäkelä, a right-wing populist politician. This would be the rough equivalent of asking it about AOC and it confidently stating that her husband is Paul Gosar. I eagerly await things like this actually making it into media articles on obscure(-to-Anglos) topics once reporters start doing lazy research via chatbots.

also stated that her husband is Jani Mäkelä, a right-wing populist politician

Are there any memes shipping these two? When Shoe0nHead made this video, originally titled "The Creepy Balenciaga Scandal & Why I 'Left The Left'", there were hundreds upon hundreds of lefties on Twitter who saw the title but didn't watch the video and assumed she had abandoned her views over this. Some time later she posted a screenshot in which she queried ChatGPT about herself, and it started off with "Shoe0nHead is a youtuber who has recently left the left...". Maybe something like this happened here?

I suppose its ability to accurately parse and summarize online chatter would still be impressive.

There have been memes shipping Li Andersson with Jussi Halla-aho, a more prominent right-wing populist, but not with Jani Mäkelä, as far as I'm aware. Of course, it's possible the machine's data set includes memeology I haven't been exposed to.

On the other hand...

What does Bing say when you ask it "What's the source for [claim X]" after it serves you that information? Now that I think about it, I also saw a video where ChatGPT outright made a quote up, and went "oops, looks like I made a mistake" when confronted.

It gave the source as https://www.celebsagewiki.com/li-andersson, which states that she is single (she isn't, she is together with the father of her daughter).

I asked the bot for a source. It told me that she is single and childless and asked where I had heard that her husband is Jani Mäkelä, and when I said that it had just told me this itself, it went into the famous Bing hostility mode and ended the conversation.

You're telling me Finnish water parks don't stay open when it's 30 below? I'm extremely disappointed for some reason.

I wonder how expensive a saunapark would be.

That would be a spa, no?

The indoor ones are usually called something like adventure pools rather than water parks, even when they are as big as or bigger than the outdoor ones.

Depends, are we going all-indoors for insulation? And how hot must a water slide be to be enjoyable?

The bias is extreme, and for most things, not just politics.

It is a handy little tool, but a tool at best in its current neutered form. As for the ill effects, whether it will ever become AGI, whether it will kill us, and so on, that is beyond me (although I wish to learn), but the training set used for it is heavily biased and breaks it for normal conversations.

Basic advice like improving your looks or comparing two photos is politicized for no reason whatsoever; no wonder everyone is jailbreaking it.

I've been testing OpenAI on some questions about slightly obscure Finnish history, many of which it gets egregiously wrong. I run some of these questions through Bing. It gets them more correct, evidently thanks to its web search capability, but still commits some fairly flagrant errors. Perhaps more on this later.

I suppose this is the equivalent of doing the "rigorous research" of literally looking up the first Google result, but this failure mode worries me. I've already heard a bunch of people say "I don't even use Google anymore, I just go to ChatGPT", but when Google serves you an answer, you at least know who published it, can take their bias into account, and can decide whether you want to keep looking. If ChatGPT becomes good enough to be trusted 99.9% of the time, and it's only the remaining 0.1% of cases that it gets egregiously wrong (whether by design or by accident), will people be able to spot indications that the information is inaccurate, or will they just accept the false information as true?

Again, the problem in itself is nothing new, but I think we're pushing things in the direction of making people even easier to manipulate by the powers that be (or by stupid mistakes on obscure topics).

Bing tries to provide references.

ChatGPT is great for a certain class of question where you know the answer is true once you see it. E.g. ask it how to do something with a certain JavaScript library and it will give you the code. It's then very easy to confirm its validity by cross-checking the documentation.
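To illustrate (a hypothetical example, not one from the thread): ask a chatbot how to group an array of objects by a key in plain JavaScript, and it might hand you something like the snippet below. Whether `reduce` and the `??=` operator actually behave as claimed is trivially checkable against the MDN documentation, which is exactly the "answer is true once you see it" property.

```javascript
// Hypothetical chatbot answer: group an array of objects by a key.
// Easy to sanity-check against the MDN docs for Array.prototype.reduce
// and the logical nullish assignment operator (??=).
function groupBy(items, key) {
  return items.reduce((acc, item) => {
    const bucket = item[key];
    (acc[bucket] ??= []).push(item); // create the bucket on first use
    acc[bucket === item[key] ? bucket : bucket] = acc[bucket];
    return acc;
  }, {});
}

const pets = [
  { name: "Rex", kind: "dog" },
  { name: "Whiskers", kind: "cat" },
  { name: "Fido", kind: "dog" },
];

const grouped = groupBy(pets, "kind");
console.log(grouped.dog.length); // 2
console.log(grouped.cat.length); // 1
```

If the bot had instead hallucinated a nonexistent method (say, a built-in `Array.prototype.groupBy` in your runtime), running the snippet or checking MDN would expose that immediately, which is what makes this class of question comparatively safe.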

Generally, I treat ChatGPT as a wise but crazy old man. It will say insightful things and give you references, names, and ideas, but it's up to you to go away, confirm and research, and come back with follow-up questions. Using ChatGPT has to be embedded in your own research process. It can't be used in isolation, like you say.