This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Can We Circle Back With Rome On This?
WSJ Article on Pope Leo and his concern about AI
Request: Tech ninjas of The Motte, find the non-paywalled version of the above.
The article states that Pope Leo has a specific interest in AI and its potential impact on humanity. This makes Pope Leo perhaps one of only a few billion people who are concerned about AI and its potential impact on humanity.
There's some background about Francis, brief commentary on Catholic Social Teaching, and some pithy quotes. I'd like to avoid the surface-level discussion of "Well, what does the Catholic Church think of AI?" and try to poke at the deeper issue here -
Why does Silicon Valley feel the need to build a lobbying strategy for the Vatican? Obviously the Vatican does not have the legislative or regulatory authority of the United States Government or the EU. They aren't going to try to fine Big Tech for anything. If there is a condemnation of "AI" (a term becoming more meaningless by the day), it's going to be predictable - we must respect human dignity, people should not be commoditized, avoiding sin on the internet is as important as avoiding sin elsewhere.
Looking at it from a positive endorsement perspective, perhaps Big Tech thinks they can get the Vatican to offer a milquetoast endorsement of AI? We know there are dangers and we must be wary and ask for Christ's help, but AI is a liberating technology for the masses (or something along those lines). But does Big Tech think that this would actually significantly help their bottom line?
I'd hazard a guess that it has nothing to do with the bottom line. And this is my worry. As a free-markets, pro-growth believer, I've always thought we should let corporations be corporations and do what they are designed to do: make money. Civil liberties, the vision for society, etc. are what should be left to government and culture (and war about both we shall!). Corporations, in my view, should just be big dumb money-makers. "All they care about is money!" says the sophomore year self-proclaimed communist. To which I have always said, "Good! Then they're staying focused on their job."
This seems different. This seems like an ideological campaign. It's setting off a lot of tropey conspiracy theories in my head about Silicon Valley transhumanist techno-religion beliefs. Is this a trojan horse where the Zuckerbergian Lizard People smile to the people's faces while plotting to replace them? Perhaps that's too dramatic.
So, I offer it up to the Motte. Looking for explanations and perspectives on this while positing, at the outset, that this isn't just about the money. Which makes it a lot more important.
Spoken like a sophomore year self-proclaimed capitalist.
Depending on the circumstances, an entity whose purpose is to make money can act in ways that make society better or worse. Thus, they have to be aligned to the values of society through laws and regulations. For example, protection rackets are highly profitable, but we judge them net negative and thus they are forbidden, hopefully with penalties large enough to turn the expected value negative. Likewise for environmental or workplace safety regulations.
But regulations are always either overbearing or incomplete. The solution here is that people can also treat corporations as entities capable of moral behavior - a fiction that is also commonly applied to other people, with great success. When Google had the motto "don't be evil", this was an implicit acknowledgement that corporations can be seen as moral entities.
This framework allows us, when we learn that a corporation has just invested in hunting street urchins in Somalia for their organs, not to shrug and go "well, EvilCorp's sole purpose is to make money, so there is nothing to complain about". Instead, we can go "EvilCorp is clearly evil, and I will not do business with them". Collectively, this affects their bottom line (depending on how consumer-facing they are), and serves to deter some unethical behavior.
Then there is the consideration that multiple companies competing with each other is not the ground state in the absence of regulations. The ground state is instead monopoly and regulatory capture. For a policy which would change the bottom line of one person by plus one million dollars and the bottom lines of a million people by minus one dollar each, it is clear that the one person (or corporation, or special interest association) will put a lot more effort into lobbying than the million people will.
I think that takes like "corporations are the real unaligned ASIs" are obviously stupid, because corporations are not superintelligent. But it is certainly a good idea to keep in mind that unless you are their sole shareholder, the corporation has fundamentally different goals than you have.
This is my main complaint about libertarians generally. They don’t understand the nature of power, and they don’t understand the connection between money and power. Once a corporation gets big enough it is going to start exercising power by whatever means available to it, including access to state power. If it gets really big it’s going to start trying to exercise state power of its own, with all the restrictions on other people’s liberty that that implies.