
Friday Fun Thread for May 24, 2024

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), nor is it for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


"Can cockroaches live in your penis?"

Google AI: Yes! It's totally normal too.

Is there some esoteric force sabotaging Google's AI projects?

First there were the black Vikings, now there are random silly screenshots from their search AI. I suspect many of them are inspect-element fakes, but the meme has been bouncing around. There's a thread here: https://x.com/JeremiahDJohns/status/1794543007129387208

https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews

I've been using Brave, which has had a similar feature for some time. Brave's AI is generally useful and right maybe 75% of the time, though you can never quite be sure. When it is wrong, it's never 'yes, doctors recommend you smoke cigarettes while pregnant' tier wrong. Then again, I don't ask many questions that could go disturbingly wrong. For those who use Google: are the silly results plausible, cherrypicked, or outright invented? Is Microsoft using GPT-5 bots to sabotage the reputation of their competitors?

I think this is a general failing of LLMs. They're just regurgitating remixed training data, and when you ask weird questions like this, the likelihood that the relevant training data are dominated by trolling/joke answers is high.

Surely a supervisory LLM could cut this shit down by an order of magnitude, ffs. Run the draft answer through "Is this likely to be true? y/n" and just don't display it if the answer is no.
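
A rough sketch of what I mean, assuming an OpenAI-style chat client; the model name and function names here are placeholders, not anything Google actually runs:

```python
# Hypothetical "sanity check" pass: a second, cheap model votes on whether
# the draft overview is likely to be true before it's shown to the user.
from openai import OpenAI

client = OpenAI()

def looks_plausible(question: str, draft_answer: str) -> bool:
    """Ask a supervisory model for a one-word YES/NO verdict on the draft."""
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any inexpensive model would do
        messages=[
            {"role": "system", "content": "Answer with a single word: YES or NO."},
            {
                "role": "user",
                "content": (
                    f"Question: {question}\n"
                    f"Proposed answer: {draft_answer}\n"
                    "Is this answer likely to be true and safe to show a user?"
                ),
            },
        ],
        max_tokens=1,
        temperature=0,
    )
    return verdict.choices[0].message.content.strip().upper().startswith("Y")

def overview_or_nothing(question: str, draft_answer: str) -> str | None:
    # Only display the AI overview if the supervisor says yes.
    return draft_answer if looks_plausible(question, draft_answer) else None
```

It wouldn't catch everything, but a cheap second opinion like this should filter out the 'eat one small rock per day' class of answer.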

Yes, it's a hack by the standards of the bitter lesson, but I think they're actually losing brand value here; crazy for Google of all people not to be conservative about this.

Especially when their current pipeline already involves multiple additional LLMs interpreting and rewriting prompts for DEI anti-bias reasons!