Culture War Roundup for the week of February 6, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I will admit this isn't an effortpost:

As is common knowledge and more deeply discussed elsewhere in this very comment section (e.g. https://www.themotte.org/post/349/culture-war-roundup-for-the-week/62270?context=8#context), Google got "scooped" by ChatGPT not because they were beat on the technology side, but because they were beat on the productization side. Some are comparing this to Xerox PARC, where Xerox invented or incubated many elements of modern computer technology -- the GUI, the mouse, etc. -- but, being blind to their actual utility, got "scooped" by Apple and others and subsequently lost out on trillions of dollars of market value.

What's deeply, deeply hilarious to me is: during this entire time, Google management were so busy posturing, and their internal A.I. safety teams were so busy noisily complaining about sexism / racism / phobias of various sorts (not so much human extinction), and the company developed such a reputation for being a place to coast, that despite 130,000 elite-educated, overpaid people sitting around ostensibly unleashing their brilliance, they're now in a position where Microsoft has a puncher's chance (realistically, maybe 5 - 10%) of catching up to and even surpassing Google's decades-long search dominance. Even better, competing with Microsoft now means that Google might have to cannibalize a $100B / yr line of business, whereas Microsoft cannibalizing Bing means it sacrifices maybe a ham sandwich / year line of business.

DEI nonsense probably had something to do with this, but mostly it looks like plain old "innovator's dilemma" stuff. Fear of self-disruption.

Google makes most of its money from search. Search has a property that makes it an especially valuable segment of the ad market — showing an ad for X to someone specifically searching for X right now (that is, who has purchase intent) is many times more effective than showing an ad to someone who some algorithm guesses might be the sort of person who might have an interest in X (e.g. what Facebook mostly has to settle for).

Conversational AI potentially pulls users away from search, and it's not clear it really has a direct equivalent of that property. Sure, people might use conversational AI to decide what products to buy, and it should be able to detect purchase intent, but exactly what do you do with that, and how effective is it?

It's not hard to generate high-level ideas here, but none are proven. Search and conversation have different semantics. User expectations will differ. "Let advertisers pay to have the AI recommend their products over others," for instance, might not be tolerated by users, or might perform worse than search ads do for some reason. I don't know. Nobody does. Product-market fit is non-trivial (the product here being the ads).

On top of this, LLMs require a lot more compute per interaction than search.

So in pushing conversational AI, Google would have been risking a proven, massively profitable product in order to bring something to market that might make less money and cost more to run.

Now, this was probably the right choice. You usually should self-disrupt, because of exactly what's happened here — failing to do so won't actually keep the disruptive product off the market, it'll just let someone else get there first. But it's really, really hard in most corporate cultures to actually pull the trigger on this.

Fortunately for Google, they've split the difference here. While they didn't ship a conversational AI product, they did develop the tech, so they can ship a product fairly quickly. They now have to fend off competition that might not even exist if they'd shipped 18 months ago, but they're in a fairly strong position to do so. Assuming, of course, the same incentives don't also cause them to slow-walk every iterative improvement in this category.

Why don't you think they killed AI because it threatened their profits? There is no guarantee chatbots will generate the same profit as search, which Google could load with ads.

This looks like a classic case of a low-end new entrant to a market: a cheaper and better technology gets developed, but the dominant incumbent can't enter the new market, because establishing the lower-end market would kill its cash cow.

I think it’s quite possible these chatbots end search and no one gets to dump ads on you.

More likely just auto-generation of the current click-farm "review" sites, where ten different options are given facially valid reviews... with affiliate links to each one. No reason not to play the field on this one.

That's what I mean -- if Google can autogenerate this sort of thing, what's to stop them from just putting it at the top of the search results (customized on the fly based on all of their personal data) and reaping the rewards from whatever product the sucker user ends up buying? Advertisers love "pay per sale".