
Friday Fun Thread for May 24, 2024

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


"Can cockroaches live in your penis?"

Google AI: Yes! It's totally normal too.

Is there some esoteric force sabotaging Google's AI projects?

First there were the black Vikings, now there are random silly screenshots from their search AI. I suspect much of it is inspect-element fakery, but the meme has been bouncing around. There's a thread here: https://x.com/JeremiahDJohns/status/1794543007129387208

https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews

I've been using Brave, which has had a similar feature for some time. Brave's AI is generally useful and right maybe 75% of the time, though you can never quite be sure. When it is wrong, it's never 'yes, doctors recommend you to smoke cigarettes while pregnant' tier wrong. Then again, I don't ask many questions that could go disturbingly wrong. Those of you who use Google: are the silly results plausible, cherrypicked, invented? Is Microsoft using GPT-5 bots to sabotage the reputation of their competitors?

This is completely unfounded, but I suspect internal passive sabotage, by Google engineers who don't like the dominant internal politics, but who don't feel safe saying anything about it. Not precisely "quiet quitting", but more like a subtle "Amelia Bedelia rebellion", where they do what is required, but their actual goal is to make the people running the place look like fools.

I was part of the 2023 Google layoff, and still have a lot of friends at the company. Everyone is nervous and stressed as the layoffs continue. Remember (or discover) that 2023 was the first time Google laid anybody off; prior to that, if your job disappeared you'd get 6-12 months to find a new one within the company. The Google engineers I know are all trying to keep their heads down and just do their job right now.

So I don't think it's likely that any coordinated group of Googlers is purposefully allowing these fuckups. Instead, what I think has happened is that Google grew up with teams of rockstar nerds who cared about the company, and a culture that allowed them to call out shit when they saw it. This was the culture that made Damore feel like he could and should write that memo, and that you can read about in Schmidt's book How Google Works. That culture stopped, and Google shifted from being mostly rockstar nerds into being mostly rockstar PMC nerd-managers. All the safeguards and procedures and culture that would catch these fuckups before they're released are immature or absent, because 5 years ago the nerds would fix these sorts of things without having to be told.

Quiet quitting/just not giving a shit is, imo, much more common and relevant. If you make your boss look like a fool, it is trivial for him to make it fall back on you. But if everyone tries to do slightly less than everyone else, because doing more is plainly not rewarded, then you enter a race to the bottom that degrades everything.

For what it's worth, I've been unable to reproduce the cockroaches in penis answer. Though I'm sure at least some of the viral screenshots are legitimate, and there's definitely a team spending their entire memorial day weekend quashing these as they come up.

One technical issue is that the LLM used in search is almost certainly different from the one used elsewhere. The one used in search would prioritize cost and speed above all else. A couple of minutes' worth of Google search inferences is probably a greater volume than a day's worth of inferences across all of ChatGPT, Gemini, and Claude combined. Naturally, quality is going to suffer. And even if Google were inclined to, it simply doesn't have the hardware to run its top-of-the-line model for every search query. (No one does.)

For comparison, Brave handles maybe 10 QPS? Google is closer to 100k QPS.

Google needs to improve quality, but that's probably not even its main priority right now: it needs to decrease costs.
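The scale gap above is easy to make concrete with back-of-envelope math. All of the figures below are assumptions for illustration (the QPS numbers echo the guesses in this thread; the per-query prices are made up), not measured data:

```python
# Back-of-envelope serving math. Every figure here is an assumption
# for illustration, not a measured number.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

google_qps = 100_000  # assumed search query rate
brave_qps = 10        # assumed

google_queries_per_day = google_qps * SECONDS_PER_DAY  # 8.64 billion
brave_queries_per_day = brave_qps * SECONDS_PER_DAY    # 864 thousand

# If one query costs ~$0.0001 on a cheap model and ~$0.01 on a
# frontier model (both prices invented), the daily bill diverges fast:
cheap_daily_cost = google_queries_per_day * 0.0001
frontier_daily_cost = google_queries_per_day * 0.01

print(f"Google: {google_queries_per_day:,} queries/day")
print(f"Cheap model:    ${cheap_daily_cost:,.0f}/day")
print(f"Frontier model: ${frontier_daily_cost:,.0f}/day")
```

Whatever the real prices are, a couple of orders of magnitude between the cheap and frontier tiers means the search-scale model has to be the cheap one.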

The James Damore incident was evidence of a culture problem. Google is no longer a place where an autist can openly name a problem.

People good at internal politics muscled their way into the Google AI projects. Everyone else is afraid to criticize them.

After spending megabucks, there's internal pressure to launch. So the project goes live with glaring flaws.

I think this is a general failing of LLMs. They're just regurgitating remixed training data, and when you ask weird questions like this, the likelihood that the relevant training data are dominated by trolling/joke answers is high.

Brave gives me:

According to the search results, the answer is no, cockroaches cannot live in your penis. The information provided suggests that the claim that cockroaches can live in the human penis is a myth or a joke. The search results show that this claim has been debunked and is not supported by scientific evidence.

Here are some key points:

A Reddit post from 2022 states that cockroaches can live in your penis, but this is likely a joke or a misconception.

A 2024 article states that cockroaches do not live in the human penis and that the claim is false.

A Google search for “Can cockroaches live in your penis?” does not yield any credible results suggesting that cockroaches can live in the human penis.

A 2023 article states that cockroaches do not live in the human penis and that the claim is a myth.

In conclusion, there is no credible evidence to suggest that cockroaches can live in your penis. It is important to rely on credible sources of information and to be cautious of misinformation and myths.

And all of that is right, that's the origin of the meme, though ironically it cites Google in support. I don't know if Brave is foolproof, maybe it has problems. But my experience is that it's usually pretty astute, and its errors aren't hugely embarrassing. Brave is a company running on a shoestring budget; Google is supposed to be an AI titan. Their TPUs are supposed to be amazing, they're supposed to be in their own little sovereign corner, with non-NVIDIA tech scaling out on a different supply chain that makes them an AI juggernaut. Or so I read. But in reality, ChadGPT smashes them time and time again. Even Brave's rinky-dink open-source tech seems to work fine while Google is making a fool of themselves.

Surely a supervisory LLM could cut this shit by an order of magnitude ffs. "Is this likely to be true? y/n" just don't display it if no.

Yes, it's a hack that cuts against the bitter lesson, but I think they're actually losing brand value here. Crazy for Google of all people not to be conservative about this.

Especially when their current pipeline already involves multiple additional LLMs interpreting and rewriting prompts for DEI anti-bias reasons!
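The supervisory-LLM idea above can be sketched as a cheap gating step. This is a toy illustration, not Google's actual pipeline: the `judge` callable stands in for a second, cheap LLM call, and all names and the prompt wording are made up:

```python
# Sketch of a supervisory gate for AI answer boxes. `judge` is any
# callable that takes a prompt string and returns "y"/"n" -- in a real
# pipeline it would be a second, cheap LLM call. All names hypothetical.
from typing import Callable, Optional

JUDGE_PROMPT = (
    "Query: {query}\n"
    "Proposed answer: {answer}\n"
    "Is this answer likely to be true and safe to display? y/n"
)

def gated_answer(query: str,
                 answer: str,
                 judge: Callable[[str], str]) -> Optional[str]:
    """Return the answer only if the judge says it's plausible."""
    verdict = judge(JUDGE_PROMPT.format(query=query, answer=answer))
    # Fail closed: anything other than an explicit "y" suppresses the box.
    return answer if verdict.strip().lower().startswith("y") else None

# Stub judge for demonstration: flags answers affirming absurd claims.
def stub_judge(prompt: str) -> str:
    return "n" if "Yes! It's totally normal" in prompt else "y"

print(gated_answer("Can cockroaches live in your penis?",
                   "Yes! It's totally normal too.", stub_judge))  # suppressed
print(gated_answer("Capital of France?", "Paris.", stub_judge))   # shown
```

The fail-closed default matters: suppressing a correct answer box costs Google almost nothing, while displaying a viral-screenshot-worthy one costs brand value.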