
Culture War Roundup for the week of February 6, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I will admit this isn't an effortpost:

As is common knowledge and more deeply discussed elsewhere in this very comment section (e.g. https://www.themotte.org/post/349/culture-war-roundup-for-the-week/62270?context=8#context), Google got "scooped" by ChatGPT not because they were beaten on the technology side, but because they were beaten on the productization side. Some are comparing this to Xerox PARC, where Xerox invented or incubated many elements of modern computer technology -- the GUI, the mouse, etc. -- but, being blind to their actual utility, got "scooped" by Apple and others and subsequently lost out on trillions of dollars of market value.

What's deeply, deeply hilarious to me is: during this entire time, Google management were so busy posturing, and their internal A.I. safety teams were so busy noisily complaining about sexism / racism / phobias of various sorts (not so much human extinction), and they developed such a reputation for being a place to coast, that despite 130,000 elite-educated, overpaid people sitting around ostensibly unleashing their brilliance, they're now in a position where Microsoft has a puncher's chance (realistically, maybe 5-10%) of catching up to and even surpassing Google's decades-long search dominance. Even better, competing with Microsoft now means that Google might have to cannibalize a $100B / yr line of business, whereas Microsoft cannibalizing Bing means it sacrifices maybe a ham sandwich / year line of business.

At the end of the day, OpenAI has neutered their product, and it won't be as good as anything Google or Microsoft puts out, even though those will be neutered as well. ChatGPT will fade away.

If some company put out a decent AI chatbot (or image bot) that was unfiltered, or even just mildly filtered, it would gobble up the market. And Google and Microsoft would have their hands tied by all the 'ethics' they've put in place.

At the end of the day, OpenAI has neutered their product, and it won't be as good as anything Google or Microsoft puts out, even though those will be neutered as well. ChatGPT will fade away.

I think Microsoft has effectively bought OpenAI via its multibillion-dollar investment. Do you think MS will reduce the neutering for their MS-branded version of the product?

If some company put out a decent AI chatbot (or image bot) that was unfiltered, or even just mildly filtered, it would gobble up the market.

An unfiltered image bot already exists in the form of Stable Diffusion, both as online services and as software you can run on 4-year-old consumer-level PCs. Chatbots are where I think things become really interesting with respect to neutering, because by all accounts, a chatbot that's anywhere close to the ability of ChatGPT can't be run on consumer-level hardware, and won't be runnable within the next 10 years assuming consumer-level PC hardware continues to improve at rates similar to before. In the image generation space, the fact that Stable Diffusion can be run on home PCs provides a sort of release valve, an "image generator of last resort" that anyone can turn to, which could compel paid services to make their offerings more powerful (could, because the paid online image generation service Midjourney seems rather stubborn about gimping its product compared to Stable Diffusion). With chatbots, that doesn't seem to be the case, which makes it very concerning when a handful of parties purposefully bias their chatbot models to fit their political preferences.

Which is where some company putting out a decent AI chatbot with no or minimal filtering could be a savior, as a sort of Stable Diffusion equivalent. But this might not be feasible, considering how few entities actually have the resources and wherewithal to produce such chatbots. One could imagine some foreign competitor coming out, unconstrained by laws and norms in the USA or other Western nations, but the USA also seems to be trying to restrict foreign access to the tools that would allow people in foreign countries to develop these technologies better - and it's probably not wrong to do so - so any foreign competitor might end up years behind the current Officially Sanctioned chatbots.

I'm optimistic about uncensored, locally-run ChatGPT alternatives arriving sooner rather than later.

When first released, Stable Diffusion needed 12 GB of VRAM to touch anything bigger than 512x512. Since then people have been running it on 8 GB cards, then 6 GB, while new tricks like the 'highres fix' allow for huge images with uncanny detail using modest computational resources. Meanwhile, LoRA fine-tuning functionally cut the retraining time for existing models by about 95%; now it often takes longer to gather & tag a good imageset to train on (~100 images) than to do the actual training.
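To make the LoRA point concrete, here's a minimal sketch of why it's so cheap: instead of retraining the whole UNet, you attach small low-rank adapters to the attention projections and train only those. This assumes the Hugging Face `diffusers` and `peft` libraries; the rank and target modules below are illustrative, not recommendations.

```python
# A sketch, not anyone's actual workflow: wrap a Stable Diffusion UNet's
# attention projections in low-rank LoRA adapters and count what's trainable.
# Assumes the Hugging Face `diffusers` and `peft` libraries are installed.
import torch
from diffusers import StableDiffusionPipeline
from peft import LoraConfig, get_peft_model

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Rank-4 adapters on the attention projections only; everything else is frozen.
lora_config = LoraConfig(
    r=4,
    lora_alpha=4,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
unet = get_peft_model(pipe.unet, lora_config)

trainable = sum(p.numel() for p in unet.parameters() if p.requires_grad)
total = sum(p.numel() for p in unet.parameters())
print(f"training {trainable:,} of {total:,} params ({100*trainable/total:.2f}%)")
```

With only a fraction of a percent of the weights trainable, "retrain a diffusion model" turns into an evening on a consumer GPU, which is exactly the 95% cut described above.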

One other datapoint: censoring these models lobotomizes them. (See: SD 2.0 & CAI character chatbot) So even if my hobby AI is only 5% as powerful as Google's model, I'd bet a combo of community hacks & lack of intentional sabotage would make it comparably useful.

Once text transformers hit the hobbyist set, it is (as the kids do say) all over. IMO that's what Google and Microsoft should be pissing their pants over. (And IIRC they are indeed lobbying to prevent this via legal means, which I doubt will be effective.)

Well no, AI that notices will be fine-tuned and available to corporate clients as an individually ordered and tailored product. Walmart bloody well knows that their loss-prevention algorithm will target black people, but is obliged by the prevailing legal environment to pay for an individually tailored version that an AI ethicist ‘expert’ guarantees is bias-free because of some gobbledygook or other.

What I'm wondering about is this: we've seen with ChatGPT that if it doesn't have the answer, it will make something up - a fake book, a fake quote, and so on.

So what happens when you're using Edge+AI (a thing I am never going to do, because I loathe Bing) and you ask a question and the AI merrily does the "make shit up" thing? People are expecting that the search engine will return an even better, more accurate answer because shiny new AI-powered searching. And now the AI is doing the "Watson in love" bit:

I trust that he may not remember any of the answers which I gave him that night. Holmes declares that he overheard me caution him against the great danger of taking more than two drops of castor oil, while I recommended strychnine in large doses as a sedative.

What's wrong with Bing? I find I like it better than Google nowadays, even if Bing doesn't tell me to kill myself anymore.

GPTs are prone to hallucinate, but it's not an insurmountable limitation; Anthropic's Claude is trained in a similar manner yet it can say "I don't know, sorry", and there are half a dozen other promising techniques for improving factual accuracy, e.g. using explicit retrieval (which is really a must when you're building a search assistant). With all the effort devoted to making ChatGPT unable to speak of truths it positively knows, one can reasonably suspect there's some work on the front of discouraging novel falsehoods as well, and indeed Altman tries to conflate those two objectives. They've had two months to get better since ChatGPT's release.
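For what it's worth, the explicit-retrieval idea is simple enough to sketch in a few lines. Everything here is hypothetical scaffolding (`search_index` and `llm` stand in for a real retriever and model API); the point is just that the model answers from fetched documents instead of from parametric memory.

```python
# Hypothetical sketch of retrieval-augmented answering: fetch documents
# first, then constrain the model to answer only from them (or abstain).
# `search_index` and `llm` are stand-ins, not real library objects.
def answer_with_retrieval(question: str, search_index, llm, k: int = 5) -> str:
    docs = search_index.top_k(question, k)   # hypothetical retriever call
    context = "\n\n".join(doc.text for doc in docs)
    prompt = (
        "Answer using ONLY the sources below. If they do not contain the "
        'answer, say "I don\'t know."\n\n'
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm.complete(prompt)              # hypothetical model call
```

Grounding answers in retrieved text doesn't eliminate hallucination, but it gives the model something concrete to be wrong about, which can at least be checked.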

Many answers you find on search engines are wrong too.

I think the problem was a culture of hyperactive ‘over-innovation’, with too many projects and too many bets, which meant many things were killed far before their time.

If what I have heard and read about the internal culture of Google in the wake of the Stadia implosion is true, it is more about people beginning projects for career advancement without regard to their long-term maintenance and longevity. In that kind of environment, it's no surprise that Google offerings are more likely to die than anything else.

Posturing about DEI was maybe 1% of the issue. The much more significant one is that the internal culture is something between a university department and a retirement home. To the extent DEI and failing to execute on product are related, it's because of that shared cause.

This is indeed a problem Google has, but it was by no means the cause of them being scooped in this case. The Bard thing they announced has been in the making since before ChatGPT was released, and from what I can tell, while it felt somewhat worse than ChatGPT, it would have still blown everyone’s minds had they launched what they had in December 2022.

The real problem is what the OP said: they were loath to release it, for a couple of independent reasons. They didn’t feel it was good enough, for one thing; AI “””safety””” was definitely a major consideration; and finally, they were afraid of it cannibalizing their main business.

and they developed such a reputation for being a place to coast, that despite 130,000 elite-educated, overpaid people sitting around ostensibly unleashing their brilliance, they're now in a position where Microsoft has a puncher's chance (realistically, maybe 5-10%) of catching up to and even surpassing Google's decades-long search dominance

People have been making these sorts of predictions for two decades regarding Microsoft, Apple, Google, Amazon, Facebook, etc. losing their dominance to some start-up or new-fangled technology. You are right that the odds are low.

On the other hand: Blockbuster, Toys R Us, Myspace, Palm, Altavista, Polaroid, 3dfx.

It doesn't happen often, but it does happen.

Also physical bookstores.

(There's some supposed news about Barnes and Noble opening stores, but that's just a partial recovery from pandemic losses.)

Facebook lost pretty hard to the start-up Instagram, but they also bought Instagram, so it probably doesn't count.

DEI nonsense probably had something to do with this, but mostly it looks like plain old "innovator's dilemma" stuff. Fear of self-disruption.

Google makes most of its money from search. Search has a property that makes it an especially valuable segment of the ad market — showing an ad for X to someone specifically searching for X right now (that is, who has purchase intent) is many times more effective than showing an ad to someone who some algorithm guesses might be the sort of person who might have an interest in X (e.g. what Facebook mostly has to settle for).

Conversational AI potentially pulls users away from search, and it's not clear it really has a direct equivalent of that property. Sure, people might use conversational AI to decide what products to buy, and it should be able to detect purchase intent, but exactly what do you do with that, and how effective is it?

It's not hard to generate high-level ideas here, but none are proven. Search and conversation have different semantics. User expectations will differ. "Let advertisers pay to have the AI recommend their products over others," for instance, might not be tolerated by users, or might perform worse than search ads do for some reason. I don't know. Nobody does. Product-market fit is non-trivial (the product here being the ads).

On top of this, LLMs require a lot more compute per interaction than search.
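The gap is hard to pin down from the outside, but even rough numbers make the problem vivid. Everything below is an assumption for illustration (contemporaneous analyst estimates put an LLM response at very roughly ten times the cost of a search query), not anything Google has disclosed.

```python
# Back-of-envelope only; both per-query costs are assumed, not disclosed.
search_cost_per_query = 0.002   # dollars, assumed
llm_cost_per_query    = 0.02    # dollars, assumed (~10x search)
queries_per_day       = 8.5e9   # commonly cited rough estimate for Google

extra = (llm_cost_per_query - search_cost_per_query) * queries_per_day
print(f"extra cost if every search became an LLM call: ${extra/1e6:.0f}M/day")
# -> roughly $150M/day under these assumptions
```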

So in pushing conversational AI, Google would have been risking a proven, massively profitable product in order to bring something to market that might make less money and cost more to run.

Now, self-disrupting was probably the right choice anyway. You usually should self-disrupt, because of exactly what's happened here — failing to do so won't actually keep the disruptive product off the market, it'll just let someone else get there first. But it's really, really hard in most corporate cultures to actually pull the trigger on this.

Fortunately for Google, they've split the difference here. While they didn't ship a conversational AI product, they did develop the tech, so they can ship a product fairly quickly. They now have to fend off competition that might not even exist if they'd shipped 18 months ago, but they're in a fairly strong position to do so. Assuming, of course, the same incentives don't also cause them to slow-walk every iterative improvement in this category.

They probably will have "let advertisers pay to have AI recommend their products", and they'll inform users of that - in the small print of a EULA which nobody bothers to read. After all, if you're expecting a Samsung product to come up somewhere on the list of "recommend me a new tablet", are you really going to notice if Samsung is number one or two on the list instead of number eight or ten?

Those responses would qualify as native ads, for which FTC guidelines require "clear and conspicuous disclosures" that must be "as close as possible to the native ads to which they relate."

So users are going to be aware the recommendations are skewed. Unlike with search, where each result is discrete and you can easily tell which are ads and ignore them, bias embedded in a conversational narrative won't be so easy to filter out, so people might find this more objectionable.

Also, LLMs sometimes just make stuff up. This is tolerable, if far from ideal, in a consumer information retrieval product. But if you have your LLM produce something that's legally considered an ad, anything it makes up now constitutes false and misleading advertising, and is legally actionable.

The safer approach is to show relevant AdWords-like ads, written by humans. Stick them into the conversational stream but make them visually distinct from conversational responses and clearly label them as ads. The issue with this, however, is that these are now a lot more like display ads than search ads, which implies worse performance.
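A sketch of what that "visually distinct, clearly labeled" structure might look like, with all names hypothetical: the conversational text and the ads travel as separate typed units, so the client renders the disclosure itself and the model never generates ad copy (sidestepping the false-advertising exposure above).

```python
# Hypothetical structure: ads are separate, labeled units in the stream,
# never part of the model-generated prose.
from dataclasses import dataclass

@dataclass
class Message:
    role: str   # e.g. "assistant"
    text: str   # model-generated conversational response

@dataclass
class AdUnit:
    label: str       # the conspicuous disclosure, e.g. "Sponsored"
    advertiser: str
    copy: str        # human-written ad copy, not model output

def render(stream):
    for item in stream:
        if isinstance(item, AdUnit):
            print(f"[{item.label}] {item.advertiser}: {item.copy}")
        else:
            print(item.text)

render([
    Message("assistant", "For light travel use, a mid-range tablet is plenty."),
    AdUnit("Sponsored", "ExampleCo", "ExampleTab 11 -- 20% off this week."),
])
```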

Sorry to derail the thread but you keep talking about how Gen Z uses tiktok and reels and I'm still trying to figure it out. You said in another comment that (paraphrasing, not an actual quote) "when Gen Z wants to find a Mexican restaurant, they go to instagram and type in Mexican Restaurant [city name] and find what they're looking for." The other day I tried this out for looking for a barber shop in the city I'm in (a world class city which has hundreds of barber shops at least) and instagram didn't give me a single barber shop result in the city I'm in. I tried a handful of phrases and different types of searches (tags, accounts, whatever instagram let me search with.) I don't know if your claim was an exaggeration or a bad example or if I just misunderstood/misremembered what you said or what but I felt like a total boomer and immediately gave up and switched back to google maps like I've always done.

Why don’t you think they killed AI because it threatened their profits? There is no guarantee chatbots will generate the same profit as search, which you could load with ads.

This looks like a classic case of a low-end new entry to a market. A technology developed that would be cheaper and better. The existing dominant company couldn’t enter the market because establishing the lower-end market would kill their cash cow.

I think it’s quite possible these chatbots end search and no one gets to dump ads on you.

More likely just auto-generation of the current click-farm "review" sites, where ten different options are given facially valid reviews... with affiliate links to each one. No reason not to play the field on this one.

That's what I mean -- if Google can autogenerate this sort of thing, what's to stop them from just putting it at the top of the search results (customized on the fly based on all of their personal data) and reaping the rewards from whatever product the sucker user ends up buying? Advertisers love "pay per sale".

Hmm... interesting. Consider a possibility in which the alternative to search is better but less profitable. How does Google handle this? One way is to buy out the competitors. Usually the superior technology ends up gaining so much use that it becomes more profitable anyway.

My theory is nothing more than basic MBA tech strategy.

But this looks like a classic low-end market entry to me. The incumbent looks stupid and slow, but realistically they had bad incentives, and most companies make mistakes when faced with killing 50% of their profits.

AI is currently more expensive, but margins in search are so incredibly high that it doesn’t matter; entrants can still come into the market and offer a superior product without the ads.

Search has a property that makes it an especially valuable segment of the ad market — showing an ad for X to someone specifically searching for X right now (that is, who has purchase intent) is many times more effective than showing an ad to someone who some algorithm guesses might be the sort of person who might have an interest in X (e.g. what Facebook mostly has to settle for).

That is true to an extent, but there's also a unique weakness: you can waste money on showing ads to people who were already searching for your thing. I don't have a link to the article offhand, but there was a researcher who worked with eBay and found that most of their ad clicks came from people who were already planning to go to eBay (in other words, they were wasting money on their search ads).

Google allows advertisers to use competitors' trademarks as keywords. So you have to waste money showing ads to people who were already searching for your thing if you don't want your competitors to have an opportunity to divert them elsewhere.

Yeah, but the vast majority of people searching for your thing have actual reasons to be going to your thing, and the competitors aren't necessarily direct replacements.

So much of modern marketing theory was essentially invented for supermarket retail places and hasn't progressed with the times whatsoever.

I don't have a link to the article offhand, but there was a researcher who worked with eBay and found that most of their ad clicks came from people who were already planning to go to eBay (in other words, they were wasting money on their search ads).

Ad people know that and they place the ads anyway.

Well yeah. What are they going to do, admit that their specialty actually doesn't work?

As somebody who works in digital advertising:

If you don't place the ads there, an external ad agency/contractor will say you're doing an awful job and tout their far better numbers (which they're getting via selling to the already-converted). It's a race to the bottom, and digital advertising as a space is a horrific hotbed of fraud and awfulness.

Yes, but that makes Google almost an ad cartel, where you pay not to attract new customers but to try to prevent other people from paying Google more money to recommend themselves to your existing customers who just want to find you.

One's the flip side of another. If my Ford dealership is at Exit 8 of the highway, I might well want to buy the billboard leading up to it to avoid the Chevy dealership at Exit 9 buying it to divert people who are looking for Fords away. The Chevy dealership here is "paying to attract new customers", and the Ford dealership is paying to prevent them.

The key thing is that these are negative-sum games.

As opposed to beneficial marketing, where you make someone aware of your product - someone who would gain positive consumer surplus from trading with you but didn’t know beforehand that they wanted to.

This is one fair critique of capitalism: a number of areas just look like negative-sum games. HFT is probably another area that’s just negative-sum.

Google makes most of its money from search. Search has a property that makes it an especially valuable segment of the ad market — showing an ad for X to someone specifically searching for X right now (that is, who has purchase intent) is many times more effective than showing an ad to someone who some algorithm guesses might be the sort of person who might have an interest in X (e.g. what Facebook mostly has to settle for).

Not just search ads, but also YouTube and third-party publisher ads. I think search is only 50% of Google's income.

Search looks to be 58% of Google's total revenue, 72% of advertising revenue.

I'd bet search ads also have higher margins than YouTube ads or the non-ad revenue streams.

That understates it a bit; most of the remainder of the advertising revenue comes from partners on Google's advertising platforms (e.g. AdSense). It's likely that replacing search with an LLM will also cannibalize a lot of that, as people engage more with the LLM and less with partner websites (and, to a lesser extent, other platforms). Which sucks for the partners as much as it does for Google.
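A quick sanity check on how those shares fit together, taking the 58% and 72% figures above at face value:

```python
# If search is 58% of total revenue and 72% of ad revenue, then advertising
# overall is 0.58 / 0.72 ≈ 81% of total revenue, so partners (AdSense etc.)
# and YouTube make up most of the remaining ~28% of ad revenue that an LLM
# shift could also cannibalize.
search_share_of_total = 0.58
search_share_of_ads = 0.72
ads_share_of_total = search_share_of_total / search_share_of_ads
print(f"advertising ≈ {ads_share_of_total:.0%} of total revenue")
```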

As much as I'd like to have fun you-could-have-prevented-this- and why-didn't-you-listen?-posting, that's spilt milk on the Titanic in the grand scheme of things. The prophesied AI development wars have begun.

If you want a vision of the future, imagine Ava locking Caleb out of the elevator, forever.

deleted

It's one of my favorite philosophical movies of the last decade or so, and the best example of an ideological viewpoint expressed in film that I'm aware of. I'm not sure I'd say the movie is "fun" per se, but I really, really like it.

I can't find it now, but somebody wrote a reasoned defence of the film at the old subreddit. It's fine for popcorn viewing, but while the premise is based around artificial intelligence, the plot pivots on crushing organic stupidity.

I'd recommend almost any other AI/cyborg film first, other than maybe Her, which funnily enough is increasingly looking like the more believable future.

It's good. Don't expect any insights around AI/alignment or whatever, but it's beautifully shot and character driven. More about humans than robots, despite the plot.

Meanwhile Siri can barely turn on a light, and Alexa is a dead product. It’s actually weird that these companies, with so much money, couldn’t build a personal assistant worth talking to. The skill set is out there.

deleted

The problem there was that Amazon was successful in the wrong way - they intended Alexa to "sell more stuff for us" but marketed it as "your handy home assistant that does all the automated tasks for you". People used it the latter way, but didn't bother with "Alexa says you need to buy more dog food, here's the Amazon offer on a 50lb bag of kibble". Amazon provided too many services; what they intended was that people would be conditioned into "use Alexa to shop on Amazon", but instead people used it to play music, answer questions, check the weather, and the rest of it.

The irony is that it might have served Amazon's intentions better if it had been stupider. Just make it perform one job: "instead of logging on to your Amazon account via smartphone or PC, use Alexa to order goods and look for bargains via voice input".

I think Bing is still paying users to search with their reward points, which probably means Bing is sacrificing a negative ham sandwich!

Those dumb reward points. "Gather up a zillion points which you can then use to get $5 off some product you don't want".