This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Yes, but we aren't quite there yet. Not even close, in my opinion, at least when we're talking about serious job displacement. Unless there's a phase change, we're looking at years of more insane capex and opex.
And those trillions will need to be paid back with interest. We're not talking about a Netflix or Office 365 license that every office drone just has. For millions of workers, access to those tools will rival transportation and housing in ongoing cost.
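To put rough numbers on "rival transportation and housing": a back-of-envelope sketch in Python. Every figure in it is an assumption picked for illustration (total capex, cost of capital, payback window, number of paying seats), not a sourced number; the point is just the arithmetic from aggregate spend to per-seat monthly cost.

```python
# Back-of-envelope sketch: what per-seat price does the capex imply?
# All figures below are illustrative assumptions, not sourced numbers.

capex = 2e12           # assumed total AI infrastructure spend, in dollars
annual_rate = 0.08     # assumed cost of capital
years = 5              # assumed payback window
paying_users = 50e6    # assumed number of professional seats paying full freight

# Standard annuity formula: the annual payment that retires `capex`
# over `years` at `annual_rate`, i.e. principal plus interest.
r = annual_rate
annual_payment = capex * r / (1 - (1 + r) ** -years)

per_user_per_month = annual_payment / paying_users / 12
print(f"Required revenue: ${annual_payment / 1e9:.0f}B/year "
      f"-> roughly ${per_user_per_month:.0f}/user/month")
# With these assumptions, the answer lands in the high hundreds of dollars
# per seat per month: the same ballpark as rent or a car payment,
# not a $20 SaaS subscription.
```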
No problem, if your employer already has half the staff on SolidEdge/Ansys/etc. licenses and generally does not care what toolboxes everybody gets. For the rest? Small businesses, low-productivity labor, labor limited by hardware throughput (classic example: radiology)? They won't really contribute to paying off that debt, so they won't get a lot of tokens, and none from the good models.
We've gotten used to tech, especially software, being cheap. For the current economics to make any sense, this will come to a hard stop. On the cost side, AI is much more like an excavator than like a shovel, and it really needs to replace just as many workers for building it to make sense.
And I can well imagine this never happening. Maybe they'll never get reliable enough for that much unsupervised work, especially work you can't write unit tests for.
I should have said "post-bankruptcy, the AI labs could be happy, fat inference providers." I actually think the current SOTA models, if we worked a lot on improving harnesses and the like, could still be a huge change on their own. Obviously a smarter model = better, but just what we have now with a ton of scaffolding can do a lot.
But yes, the amount of capex they have all spent means they now need something way better than "Excel helper" to pay it back. But in a non-hyperscale world, LLMs as a normal technology could be a profitable medium/large SaaS industry.
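Concretely, by "harness" or "scaffolding" I mean something like the loop below. This is a rough sketch, not anyone's actual product; `call_model` is a hypothetical placeholder for whichever provider API you'd plug in. The pattern is: generate, run a machine-checkable test, feed the failure back, retry. Which is also why it only helps for work where you *can* write those tests.

```python
# Minimal sketch of an LLM "harness": wrap an unreliable generator in a
# machine-checkable retry loop. `call_model` is a hypothetical stand-in
# for whatever LLM client you actually use.

import subprocess
import tempfile

def call_model(prompt: str) -> str:
    """Hypothetical: send `prompt` to some LLM endpoint, return its text reply."""
    raise NotImplementedError("plug in your provider's client here")

def generate_checked_code(task: str, test_code: str, max_attempts: int = 3) -> str | None:
    """Ask the model for code, run the supplied tests against it, retry on failure."""
    feedback = ""
    for _ in range(max_attempts):
        candidate = call_model(f"{task}\n{feedback}\nReturn only Python code.")
        # Write candidate plus tests to a temp file and execute it.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(candidate + "\n\n" + test_code)
            path = f.name
        result = subprocess.run(["python", path], capture_output=True, text=True, timeout=60)
        if result.returncode == 0:
            return candidate  # tests passed: accept this attempt
        # Otherwise, show the model what broke and try again.
        feedback = f"Your last attempt failed these tests:\n{result.stderr[-2000:]}"
    return None  # give up after max_attempts
```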