Culture War Roundup for the week of July 14, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Periodic Open-Source AI Update: Kimi K2 and China's Cultural Shift

(yes yes, another post about AI, sorry about that). The link above goes to the standalone thread, so as not to clutter this one.

Two days ago, the small Chinese startup Moonshot AI released the weights of the base and instruct versions of Kimi K2, the first Chinese LLM (open, and probably closed too) to clearly surpass DeepSeek's efforts. It's roughly comparable to Claude Sonnet 4 without thinking (pay no mind to the horde of reasoners at the top of the leaderboard; reasoning is a cheap-ish capability extension that doesn't convey the underlying experience, though it is relevant to utility). It's a primarily agentic non-reasoner, somehow exceptionally good at creative writing, and offers a distinct "slop-free", disagreeable but pretty fun conversation, with the downside of hallucinations. It adopts DeepSeek-V3's architecture wholesale (literally "modeling_deepseek.DeepseekV3ForCausalLM") but, with a number of tricks, gets maybe 2-3 times as much effective compute out of the same allowance of GPU-hours; the rest we don't know yet, because the team has just finished a six-month marathon and hasn't published a tech report.
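
You can see the inherited architecture right in the released config. A minimal sketch, assuming the Hugging Face repo id moonshotai/Kimi-K2-Instruct (my guess at the published path, not something stated in this post) and a recent transformers install:

```python
# Minimal sketch: confirm that Kimi K2 ships DeepSeek-V3's architecture.
# The repo id "moonshotai/Kimi-K2-Instruct" is an assumption; swap in the
# actual Hugging Face path if it differs.
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "moonshotai/Kimi-K2-Instruct",
    trust_remote_code=True,  # the repo ships its own modeling_deepseek.py
)
print(config.architectures)  # expected: ['DeepseekV3ForCausalLM']
```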

I posit that this follows a cultural shift in China's AI ecosystem that I've been chronicling for a while, and it provides a nice illustration by contrast. Moonshot and DeepSeek were founded at the same time and have near-identical scale and resources, but they were built on different visions. DeepSeek's Liang Wenfeng (hedge fund CEO with a master's in engineering, idealist, open-source advocate) couldn't procure funding in the Chinese VC world with his inane pitch of "long-termist AGI research driven by curiosity" or whatever. Moonshot's Yang Zhilin (Carnegie Mellon Ph.D., serial entrepreneur, pragmatist) succeeded at that task, reached a peak $3.3 billion valuation with the help of Alibaba and Sequoia, and spent heavily on ads and traffic acquisition throughout 2024, building the nucleus of another super-app with chatbot companions, assistants and similar trivialities at a comfortable pace. However, DeepSeek R1, on the merits of a vastly stronger model, was a breakout success that redefined the Chinese AI scene and made people question the point of startups like Kimi. Post-R1, Zhilin pivoted hard to prioritize R&D spending and core model quality over apps, adopting open weights as a forcing function for basic progress. This seems to have inspired the technical staff: "Only regret: we weren't the ones who walked [DeepSeek's] path."

Other Chinese labs (Qwen, Minimax, Tencent, etc.) now also emulate this open, capability-focused strategy. Meanwhile, Western open-source efforts are even more disappointing than last year: Meta's LLaMA 4 failed, OpenAI's open model is delayed again, and only Google and Mistral release sporadically, with no promise of competitive results.

This validates my [deleted] prediction: DeepSeek wasn’t an outlier but the first swallow and catalyst of China’s transition from fast-following to open innovation. I think Liang’s vision – "After hardcore innovators make a name, groupthink will change" – is unfolding, and this is a nice point to take stock of the situation.

Can someone explain to me why these companies are open sourcing their models? Developing/training this stuff seems enormously costly; what's the business case for just giving it away?

There's a big breakdown here, but my summary take:

Business case:

  • Models aren't useful products on their own; there's too much competition, too shallow a moat, and few buyers have the skillset and equipment to run a model themselves. Selling runtime on effective models is where these businesses expect to make their money, made more convenient by their familiarity with the optimal operation and tuning of their own models, and by the giant sack of GPUs they happen to have sitting available.
  • This is especially true when (as now) model creators don't have a good understanding of all, or even a large portion, of the use cases for a model. Whereas exposing only an API, as increasingly many Western LLM-makers do, limits users to prompt engineering, an open-weights model can be rapidly tuned or modified in a pretty wide variety of ways (see the sketch after this list). You can't necessarily learn from everything someone else has done with an open-source model, or even what they've done without breaking the license, but you can learn a lot.
  • Businesses producing open-source models can attract specialized workers: not just skilled ones, but ones with a very specific type of ideology, similar to how Linux (or Rust) devs tend to be weird in useful ways.
  • 'Sticky' open-source licenses have the additional benefit of letting most innovations by other smart people filter back in. (In more legally-minded jurisdictions, they also lay beartraps for other developers who would love to borrow a great implementation without complying with the license.)
  • (Cynically, they can only succeed with government backing, and open sourcing a model makes them politically indispensable.)
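
To make the "rapidly tuned or modified" point above concrete, here is a minimal sketch of attaching LoRA adapters to an open-weights model with Hugging Face's peft library; the model id and hyperparameters are illustrative placeholders, not anyone's actual recipe:

```python
# Sketch of lightweight fine-tuning on open weights: LoRA adapters train a
# tiny fraction of parameters, which is exactly what an API-only model forbids.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("some-org/some-open-model")  # placeholder id
lora = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # which projections get adapters
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of the base model
```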

Philosophical arguments:

  • Open-sourcing a model is Better for what it allows: interaction with academic communities, rapid iteration, and so on. A business that emphasizes these things might not be the most remunerative, but it'll be better at its actual goal.
  • (Optimistically, some devs want to get to the endgame of AGI/ASI as soon as possible, and see the API business model as distracting from that even if it does work.)

Pragmatic argument:

  • The final models fit on a single thumb drive. It's not clear any company running this sort of thing can seriously prevent leaks over a long enough time for it to be relevant. There's an argument that China is more vulnerable to this sort of unofficial espionage, but we've also had significant leaks from Llama, Midjourney, etc.
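
A quick back-of-envelope on the thumb-drive claim; the 1T-total-parameter figure for Kimi K2 and the 671B figure for DeepSeek-V3 come from public reporting, not from this thread, so treat them as assumptions:

```python
# Back-of-envelope: weight-file size is just parameter count x bytes per weight.
def weights_gb(params: float, bytes_per_weight: float) -> float:
    return params * bytes_per_weight / 1e9

print(weights_gb(1e12, 1))   # ~1000 GB: a 1T-param model at 8-bit fits on a 2 TB flash drive
print(weights_gb(671e9, 1))  # ~671 GB: DeepSeek-V3/R1 scale at 8-bit
```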

It's not clear any company running this sort of thing can seriously prevent leaks over a long enough time for it to be relevant.

With sufficient will, they could do just this. This is a choice they actively make one way or another.

Not if you want to keep highly skilled researchers and programmers working for you: it would mean locking down the systems so hard that daily work becomes a chore, and the sort of people you need for that level of work hate working under such restrictions.

Yeah, I have a friend who works in a very sensitive area of banking and it’s a nightmare:

  • Four layers of security before he can get to his desk
  • Everything on the computer is absolutely locked down, and the software is rubbish, as is the authentication system
  • Constant surveillance from cameras absolutely everywhere

I think there's other stuff too, but I forget the details.

Even in finance, the logic is that it's always impossible to prevent a willing employee from committing a crime and leaking sensitive information; the monitoring systems are instead set up so that, if and when it happens, (1) they can trace it to the source and (2) they can convince the regulator that they did everything they could and reported it as soon as possible.

And also to minimise the scale of the breach, right? It's bad if an employee tells me that BigCorp and BiggerCorp are expected to finalise their merger by May, but it's worse if they give me 2000 pages of detail on the subject including all the due diligence on both parties.