This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
AI is Too Big to Fail
I don't have the expertise needed to evaluate the economic arguments, so I'm mainly posting this here to solicit feedback on the linked article.
It's probably too late to avoid a future of "brutal serfdom" regardless of what happens, even if we reach singularity escape velocity. Power will do what it has always done, which is centralize in the hands of a few to the detriment of the many; turning every human into a cyborg god won't change that (you simply have the problem of organizing the coexistence of cyborg gods rather than the problem of organizing the coexistence of baseline humans). To think otherwise is to rely implicitly on a Rousseauian (and anti-Hobbesian, channeling Hlynka) presupposition that people are basically good and just, and that suffering is merely an incidental byproduct of material lack - a presupposition we have reason to be skeptical of. The second half of the 20th century provided what were probably the most fertile material and social conditions for freedom ever seen in human history; regardless of where we're going now, we're leaving freedom in the rear-view mirror.
"That's a no from me, dawg" as the great bard of our time, Randy Jackson, would say.
You actually did the article's author a big favor with your down-selection of sections. Buried in a lot of emotionality are some interesting economic and geopolitical points for debate.
But so much of the article consists of this kind of thing:
How can a company "demonstrate integrity"? This is the same wishy-washy style of assertion as "be an ally" or "speak truth to power." It's just so sophomoric.
If I had to pick just one cognitive and logical failing from the article, it's that it falls directly into the fundamental attribution error trap, multiple times:
and
and
and
Then you also have these kinds of whoppers:
and, in the "conclusion."
Then, there's the truly tinfoil hat level of conspiracy thinking:
Team Trump (which is really being controlled by the Silicon Valley oligarchs) is going to revamp the IRS in order to support a crypto investment scheme? They're going to pull this off under the radar, yet in plain sight? And the tens of thousands of bureaucrats at the IRS, FTC, SEC, etc. who would need to be "in" on this scheme are just going to be unaware of it happening? Or are they in on it? And what about when the Big Banks get wind of this? I thought they controlled Congress. No, wait, that's Silicon Valley. Or Big Oil. No, I meant Big Pharma.
While it's above the median level of "orange man bad / big tech bad," it isn't much above that level. I don't know what this author's politics are and, unlike him, I will not presume to know his personal cognitive state or full internal belief and value structure.
On a content-only level, I look at this as another flavor of AI doomerism. This isn't paperclip-machine doomerism; this is economic-theory doomerism: "We've put so much money into AI that it has to work out!" But money doesn't just disappear if a business fails. If the business burnt through all its money, that's probably bad for that business's particular investors, but it also means the money went somewhere - other vendors, other businesses. The market moves the money as best it can. Of course, I'll admit this isn't necessarily a great outcome; it's not as if bubbles and overinvestment are good things in the long run, right? Regardless, while growth may flatline (which is bad), the money is still moving. The reason 2008 was so frightening was that it looked like money might actually stop moving. A system-level credit crunch means that even really good and obvious investments, or simple spending, can't happen because of a lack of liquidity.
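To make the "money doesn't evaporate" point concrete, here's a toy ledger (every name and number is invented for illustration): when a startup burns its funding and folds, the investors' equity goes to zero, but the cash itself just ends up in other hands.

```python
# Toy ledger: cash is transferred, not destroyed, when a startup burns out.
# All parties and amounts are made up for illustration.

balances = {"investor": 100.0, "startup": 0.0, "gpu_vendor": 0.0, "employees": 0.0}

def transfer(src, dst, amount):
    """Move cash between parties; total cash is conserved."""
    balances[src] -= amount
    balances[dst] += amount

transfer("investor", "startup", 100.0)   # funding round: investor buys equity
transfer("startup", "gpu_vendor", 80.0)  # startup buys hardware
transfer("startup", "employees", 20.0)   # startup pays salaries
# The startup is now broke and shuts down; its equity is worth $0.

print(balances)                # {'investor': 0.0, 'startup': 0.0, 'gpu_vendor': 80.0, 'employees': 20.0}
print(sum(balances.values()))  # 100.0 - all the cash went somewhere
```

The investor's loss is real, but it's a transfer, not a bonfire.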
But back to the main economic point: are we so "all in" on AI that if it "fails to deliver" we're 100% giga-fucked? Sure, if we keep all of these definitions slippery and uncertain, why not. On the "failure to deliver" point, I don't see any real rubric or threshold from the author beyond "you better hope and pray that AI delivers a magical transformation." Okay, so we need the ROI on AI to be approximately one Abracadabra. Got it. And if we don't get to this magical level of returns, what, exactly, happens? All the BigAI firms go insolvent overnight? Locked-out employees, broken keycards? And the new datacenters and chip fabs just immediately fall into disrepair and end up looking like the steel mills outside of Youngstown, Ohio? Again, I'll be charitable here and say that if the BigAI bubble bursts hard, it probably is recession time for a while. But the money doesn't evaporate, and all of the human capital doesn't commit suicide. There is a VERY direct line to be drawn from the dot-com bubble of the late '90s and early 2000s all the way to the Silicon Valley rocketship beginning in... 2009? Or earlier? Google IPO'ed in 2004, IIRC.
Doomerism isn't better than irrational exuberance just because it is the inverse. This is the cowardice of cynicism, and of pessimism more generally. "I hope I'm wrong, but I'm probably not (unsaid: because I'm just so dang smart!)" isn't the flex people think it is. You're prognosticating a negative outcome, probably as a means of preemptive emotional self-satisfaction. I'm not against hearing about downsides to AI; in fact, I've posted about them myself at least twice. All I'm looking for is a cogent argument on the hows of Things Falling Apart.
An excellent takedown of a really dumb article. Good job. I hate it when pundits try to describe and predict our unbelievably complex national/global economy using a couple of pithy ideas ("We're investing too much into AI!" and "Silicon Valley is allied with Trump!"), exaggerated beyond the point of all usefulness. If we overspend on AI, well then darn, we've somewhat misallocated our abundant resources. Maybe we'll optimize better next decade.
Part of the point about the money in AI is that the current financial engineering means a lot of money is coming into existence on paper (especially in these circular 'Company A invests in Company B, but Company B then immediately uses that money to buy compute from Company A, valued in accordance with the new benchmark' deals) and never really escaping the loop. This is now somewhat obscuring a state of pretty clear stagnation and weakness in the broader economy.
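A toy sketch of the loop (all figures invented): one round trip leaves net cash roughly unchanged, while booked revenue and the mark-to-market valuation both jump.

```python
# Toy model of a circular "invest, then buy compute back" deal.
# All figures are invented; this only shows the shape of the loop.

a_cash, b_cash = 0.0, 0.0
a_booked_revenue = 0.0

# Step 1: Company A invests $10B in Company B for a 10% stake,
# implying a $100B post-money valuation for B.
investment, stake = 10.0, 0.10
a_cash -= investment
b_cash += investment
b_paper_valuation = investment / stake  # 100.0 - set by the round itself

# Step 2: Company B immediately spends the $10B on compute from Company A.
b_cash -= investment
a_cash += investment
a_booked_revenue += investment          # A books $10B of revenue

print(a_cash, b_cash)       # 0.0 0.0 - net cash went nowhere
print(a_booked_revenue)     # 10.0    - but A reports $10B in sales
print(b_paper_valuation)    # 100.0   - and B is "worth" $100B on paper
```

Note how $10B of round-tripped cash supports $10B of reported revenue and $90B of valuation that never existed as cash.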
This is a good point. Valuation voodoo can actually lead to meaningful damage when a bubble pops because it isn't an actual representation of cash flow.