
Culture War Roundup for the week of April 17, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Is the rapid advancement in Machine Learning good or bad for society?

For the purposes of this comment, I will try to define "good" as "improving the quality of life for many people without decreasing the quality of life for another similarly sized group," and vice versa.

I enjoy trying to answer this question because the political discourse around it is too new for the two American political parties to have disseminated widely accepted answers that signal affiliation, as happens with so many other questions. Still, any discussion of whether something is good or bad for society belongs in a Culture War thread, because even here on The Motte most people will try to reduce every discussion to one along clear conservative/liberal lines, since most people here are salty conservatives who were kicked off Reddit by liberals one way or another.

Now on to the question. Maybe the best way to discover whether machine learning is good or bad for society is to ask what makes it essentially different from previous computing. The key difference is that machine learning changes computing from a process where you tell the computer what to do with data into one where you just tell the computer what you want it to be able to do. Before machine learning, you would tell the computer specifically how to scan an image and decide whether it is a picture of a dog, and whether the computer was good at identifying pictures of dogs depended on how good your instructions were. With machine learning, you give the computer millions of pictures of dogs and tell it to figure out for itself how to determine whether there's a dog in a picture.
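The contrast can be sketched with a deliberately tiny toy example. Everything here is hypothetical: the "images" are single brightness numbers and the "model" is a single learned threshold, whereas a real dog classifier would be a neural network trained on millions of images. The point is only the shift in who supplies the rule.

```python
# Classical approach: the programmer writes the decision rule by hand.
def classic_is_dog(brightness):
    # Hand-tuned cutoff chosen by the programmer; accuracy depends
    # entirely on how good this instruction is.
    return brightness > 0.5

# ML approach: the programmer supplies only labeled examples, and the
# "rule" (here just a threshold) is found by fitting the data.
def learn_threshold(examples):
    """examples: list of (brightness, is_dog) pairs."""
    best_t, best_correct = 0.0, -1
    for t, _ in examples:  # try each observed brightness as a cutoff
        correct = sum((b > t) == label for b, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Toy training set: "dog pictures" happen to be brighter in this fake data.
data = [(0.9, True), (0.8, True), (0.7, True), (0.2, False), (0.3, False)]
threshold = learn_threshold(data)

def learned_is_dog(brightness):
    return brightness > threshold
```

In the first function the human encodes the knowledge; in the second, the knowledge comes out of the data, and the human only specified the goal (maximize correct labels).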

So what can be essentialized from that difference? Well, before machine learning, the owners of the biggest computers still had to be clever enough to use them to manipulate data properly; with machine learning, the owners of the biggest computers can simply specify a goal and get what they want. It seems, therefore, that machine learning will work as a tool for those with more capital to find ways to gain more capital. It will allow people with money to create companies that make decisions purely on profit potential, removing the human element even further from the equation.

How about a few examples:

Recently a machine learning model was approved by the FDA to be used to identify cavities on X-rays. Eventually your dental insurance company will require a machine learning model to read your X-rays and report that you need a procedure in order for them to cover treatment from your dentist. The justification will be that the Machine Learning model is more accurate. It probably will be more accurate. Dentists will require subscriptions to a Machine Learning model to accept insurance, and perhaps dental treatment will become more expensive, but maybe not. It's hard to say for sure if this will be a bad or a good thing.

Machine learning models are getting very good at writing human text. This is quickly reducing the value of human writers. Presumably, with more advanced models, it will replace commercial human writing altogether. Every current limitation of the leading natural language models will be removed in time, and they will become objectively superior to human writers. This also might be a good thing, or a bad thing. It's hard to say.

I think it's actually very hard to predict if Machine Learning will be good or bad for society. Certain industries might be disrupted, but the long term effects are hard to predict.

I am strongly of the opinion that since neoliberal PMC jobs are the easiest to automate with AI, there will be incredibly strong regulation banning AI from taking the jobs of the PMC. The power to regulate is the power to destroy, and as incapable of actual productivity as the PMC and their legion of bullshit jobs are, they know how to run a grift and bask in their own self-importance.

No, what you need to fear from AI is when Facebook fires up an instance of AutoGPT for each user and tasks it with keeping them doomscrolling for as long as possible. If you thought "the algorithm" was already amoral and sanity-shredding, you ain't seen nothing yet. That was a mere baby, feebly hand-tuned by meat that thinks (or thinks it thinks). When the AI is fully unleashed on enslaving our attention spans to our screens, it's going to be like how fentanyl turbocharged opioid deaths. You're gonna start seeing people literally starving to death staring at their phones. Actually, nix that, they'll die of dehydration first. I momentarily forgot that nearly always happens first.

I'm gonna register this prediction now too. Apparently AI has trouble with fingers. You'll know it's gotten loose when there's a new TikTok trend of young people amputating all their fingers. The AI will have decided it's easier to convince us to get rid of our own fingers than to figure out how to draw them better. Given the rates of TikTok-induced mental illness, it would probably be right in that assessment.

I am strongly of the opinion that since neoliberal PMC jobs are the easiest to automate with AI, there will be incredibly strong regulation banning AI from taking the jobs of the PMC. The power to regulate is the power to destroy, and as incapable of actual productivity as the PMC and their legion of bullshit jobs are, they know how to run a grift and bask in their own self-importance.

I highly doubt this will happen. You talk as if the PMC is a giant union where everyone is aligned, which shows you don't understand the social context there and are clearly just pooh-poohing your outgroup.

People in the PMC with power have capital, whether it's political, intellectual, or financial. The financial movers and shakers will not agree to regulating AI, at least until they have gotten their piece of the pie. Even if they do, it will take years and years to get everyone to agree on a framework.

You've also got the AI companies themselves. Altman has come out and said he doesn't think regulation at this stage is a good idea, and he's got an incredible amount of political and intellectual capital. Many people in government, for good reason, see Altman as one of the most important figures in the world right now. They don't want to piss him off.

I'm gonna register this prediction now too. Apparently AI has trouble with fingers. You'll know it's gotten loose when there's a new TikTok trend of young people amputating all their fingers. The AI will have decided it's easier to convince us to get rid of our own fingers than to figure out how to draw them better. Given the rates of TikTok-induced mental illness, it would probably be right in that assessment.

This would be a rad short story. An AI that gets 'frustrated' at its own limitations against the real world, and its solution is to just sand off all the sharp edges that are giving it problems.

Like it genetically engineers all the cows to be spherical so its physics simulations can be more accurate.

An AI that gets 'frustrated' at its own limitations against the real world, and its solution is to just sand off all the sharp edges that are giving it problems.

I'm obligated to point out that this already happened, the AI was capitalism, the sharp edges were all direct human interactions, and our atomized broken society is the result.

I would be interested in seeing this thought/analogy expanded.

I thought I got this idea from Mark Fisher or Nick Land, but random googling isn't leading me to any obvious writing of theirs on this specific concept. Come to think of it, maybe it was one of IlForte's pithier comments. Regardless, you should read both of them.

Seeing Like a State plus a broad view of what constitutes a "state," perhaps?

I thought I had seen later Scottposts applying this logic to capitalism.

His Meditations on Moloch sounds like this vein too.

I am strongly of the opinion that since neoliberal PMC jobs are the easiest to automate with AI, there will be incredibly strong regulation banning AI from taking the jobs of the PMC. The power to regulate is the power to destroy, and as incapable of actual productivity as the PMC and their legion of bullshit jobs are, they know how to run a grift and bask in their own self-importance.

This is exactly why the crossbow and handgonnes never took off and why we still live under a feudal system ruled over by our lieges and ladies.

More seriously, this technology is too valuable not to use; anyone who does use it is going to gain a massive advantage over anyone who doesn't. Its use is inevitable.

More seriously, this technology is too valuable not to use; anyone who does use it is going to gain a massive advantage over anyone who doesn't. Its use is inevitable.

The same is true of nuclear power. It's the only technology that will allow us to hit emission targets and keep the grid stable with cheap, reliable power.

But we've built three nuclear power plants in as many decades, and our infrastructure is crumbling and less reliable than ever. Our ruling class simply does not care so long as they can keep living that 0.01% life. Even now they are setting preposterous 10-year EV targets, despite not putting a dime towards building out a domestic EV supply chain or infrastructure, including upgrading our electric grid to deal with the massive increase in demand all those EVs will create. Which brings us back to the nuclear power they scorn so much.

Your appeals to a reasonable nation performing certain obvious reasonable tasks are pointless. This is clown world. You need to think dumber.

The same is true of nuclear power. It's the only technology that will allow us to hit emission targets and keep the grid stable with cheap, reliable power.

Exactly. The general population believes what it has been told for 50 years: nuclear power is something immensely dangerous and deadly, something that can explode at any moment, kill millions, and turn the whole country into an uninhabitable desert full of motorcycle-riding mutants.

Now, imagine if normies are told:

THE COMPUTER can kill you. Yes, THE COMPUTER can shred you into paperclips, without warning. And not only you, but everyone, everyone in the whole world. Yes, even the ordinary computer in your son's room can do it.

Do not wait for your doom. Say something, do something.

The problem is, we've already had hacker scares for years; I don't know what it would really take for people to realize the threat beyond rehashed Terminator references.

The American public won't give up guns, do you think they'll give up computers?

Heck, even if it's just AIs they're told to give up, the forces that want that will have to move fast, because every passing moment it reaches more hands, and the hands that have it are gonna hold on tight. And at some point soon, we will reach a cultural point of no return on everyone having these tools.

The general population believes what it has been told for 50 years: nuclear power is something immensely dangerous and deadly, something that can explode at any moment, kill millions, and turn the whole country into an uninhabitable desert full of motorcycle-riding mutants.

Again, at least according to this poll, 76% of Americans - the most relevant demographic for this forum - favor nuclear energy. Even the opponents do not necessarily hold the most alarmist and charged view of nuclear as a power source.

Nuclear power has a lot of benefits, but it takes a significant amount of time and money to get online, with the benefits being generally diffused. The number of organisations that can actually get a nuclear power plant online for long enough that they can start to make a profit is quite small.

AI is comparatively cheap, the changes are quick and easily observable, and the payoff for an individual willing to utilise it is substantial. As a class, medieval European nobility may have benefited from a complete ban on crossbows and handguns, but the cost-to-return ratio of employing these weapons meant that anyone who chose to defect and take up their use would outcompete those who did not. The same is true of AI; it cannot be ignored.

Your appeals to a reasonable nation performing certain obvious reasonable tasks are pointless. This is clown world. You need to think dumber.

I'm appealing to human greed and desire for power. You need to think smarter.