
Culture War Roundup for the week of January 20, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


To continue the drama around the stunning Chinese DeepSeek-r1 accomplishment, the ScaleAI CEO claims DeepSeek is being coy about their 50,000 H100 GPUs.

I realize now that DeepSeek is pretty much the perfect Chinese game theory move: let the US believe a small AI lab full of cunning Chinese matched OpenAI, with a tiny fraction of the compute budget, with no ability to get SOTA GPUs. Let the US believe the export regime works, but that it doesn't matter, because Chinese brilliance is superior, demoralizing efforts to strengthen it. Additionally, it would make the US skeptical of big investment in OpenAI capital infrastructure because there's no moat.

Is it true? I have no idea. I'm not really qualified to analyze the DeepSeek results and confirm it was really the run of a small, scrappy team on a shoestring budget end-to-end. What we also don't see are the potentially 100-1000 other labs (or previous iterations) that tried and failed.

The results we have now are that the r1 14b and 32b distills are fairly capable on commodity hardware, and it seems one could potentially run the full 671b model, which is kinda maybe but not actually on par with o1, on something that costs about as much as a tinybox ($15k). That's a remarkable achievement, but at what total development cost? $5 million in compute plus a hundred Chinese researchers would be stunningly impressive. But if the true cost is actually a few OOMs higher, it would mean the script has not been completely flipped.
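If you want to poke at the "commodity hardware" claim yourself, here's a minimal sketch of querying one of the distills through a local Ollama install; the deepseek-r1:14b tag and the dict-style response access are assumptions based on the current ollama-python client, so adjust for whatever runner you prefer:

```python
# Minimal sketch: query a locally served DeepSeek-r1 distill via Ollama.
# Assumes you've already done `ollama pull deepseek-r1:14b` and installed
# the Python client (pip install ollama).
import ollama

response = ollama.chat(
    model="deepseek-r1:14b",  # assumed tag; swap in the 32b if your GPU fits it
    messages=[
        {"role": "user", "content": "Summarize the trade-offs of MoE models in two paragraphs."},
    ],
)

# The r1 distills emit their chain-of-thought in <think> tags before the answer.
print(response["message"]["content"])
```

The quantized 14b and 32b should fit on a single consumer GPU; it's the full 671b model that needs the tinybox-class hardware.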

I maintain that a lot of OpenAI's current position derives from the period when they still published their research. You even have Andrej Karpathy teaching you, in a YouTube lecture series, how to build GPT from scratch, walking you through the series of papers that led to it. It's not a surprise that competitors can catch up quickly if they know what's possible and what the target is. Given that they're more like ClosedAI these days, would any novel breakthroughs be as easy to catch up on? They've certainly got room to explore them with a $500b commitment to play with.

Anyway, do you believe DeepSeek?

Increasingly I think I agree with Dase that R1 seems much closer to AGI, possibly at it, than previous models. Its prose is raw, but narratively and stylistically superior to other models. It is capable of genuinely great writing with complex prompts. I think it’s the first model that clearly outcompetes me in terms of verbal IQ. Eerie in a way, but hardly a surprise; if anything in early 2023 I assumed it would take even less time.

I think it’s the first model that clearly outcompetes me in terms of verbal IQ.

Are you sure that you're not selling yourself short? My very brief interaction with R1 (just now, on openrouter.ai) shows that, while verbally skilled, it still has that noticeable AI-ism where it makes everything sound like a high school essay written by a teacher's pet, and if you prompt it not to act that way, it tries, but deep down it still sounds like that. Can you suggest how to prompt it to seem more interesting?

It's certainly very impressive if it runs much more cheaply than ChatGPT, but so far I haven't seen a reason to think that it's actually more interesting to interact with than ChatGPT is.

Or should I try to run it somewhere other than openrouter.ai?

I find that asking it to specifically emulate the style of a human author works well. Who that author is, is up to you, but I usually try Peter Watts or Iain Banks for starters, as they have very distinctive voices.
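And to the openrouter.ai question upthread: no need to run it anywhere else. Something like this against their OpenAI-compatible endpoint is roughly how I'd wire the author instruction in; the deepseek/deepseek-r1 slug and the exact prompt wording are my guesses, so check their model list:

```python
# Sketch: style-emulation prompt against OpenRouter's OpenAI-compatible API.
# Assumes the openai client (pip install openai) and an OpenRouter API key.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # your OpenRouter key
)

completion = client.chat.completions.create(
    model="deepseek/deepseek-r1",  # assumed slug; verify on OpenRouter
    messages=[
        {
            "role": "system",
            "content": (
                "Write in the voice of Peter Watts: terse, clinical, darkly funny. "
                "No five-paragraph-essay structure, no concluding summary."
            ),
        },
        {"role": "user", "content": "Describe a deep-sea research station losing contact with the surface."},
    ],
)

print(completion.choices[0].message.content)
```

Naming a concrete author seems to work better than abstract instructions like "don't sound like an essay", presumably because it gives the model a dense cluster of real examples to anchor on.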

I would say I have a very distinct voice in my fiction. There's nobody else quite like it. Just today, I was thinking about taking a web serial of mine off hiatus, and had gotten through most of a chapter before I felt less than happy with the overall flow of a few paragraphs and the overarching structure of the entire chapter, and was too tired to think of better options.

I fed the entire thing into R1, told it I was unhappy with a few bits, and asked it to try to rewrite the last few paragraphs, in my style, while maintaining the quality of the strong start. It did wonders. I found myself nodding my head and thinking yep, that's how I write, and that would be an example of what I consider good writing from myself. Except it wasn't me doing more than hinting.

For the record, Claude 3.5 Sonnet is just as good (or so close I can't call it), and I ended up doing a final edit while taking inspiration from both.

You might get some mileage out of asking it to emulate Yudkowsky, Gwern or even Scott.