Culture War Roundup for the week of April 17, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Is the rapid advancement in Machine Learning good or bad for society?

For the purposes of this comment, I will try to define good as "improving the quality of life for many people without decreasing the quality of life for another similarly sized group," and vice versa.

I enjoy trying to answer this question because the political discourse around it is too new for the two American political parties to have disseminated widely accepted answers that signal affiliation, as they have with so many other questions. However, any discussion of whether something is good or bad for society belongs in a Culture War thread because, even here on The Motte, most people will try to reduce every discussion to clear conservative/liberal lines, since most people here are salty conservatives who were kicked off Reddit by liberals one way or another.

Now on to the question. Maybe the best way to discover whether machine learning is good or bad for society is to ask what makes it essentially different from previous computing. The key difference is that machine learning turns computing from a process where you tell the computer exactly what to do with data into a process where you just tell the computer what you want it to be able to do. Before machine learning, you would tell the computer specifically how to scan an image and decide whether it is a picture of a dog, and how good the computer was at identifying pictures of dogs depended on how good your instructions were. With machine learning, you give the computer millions of pictures of dogs and tell it to figure out how to determine if there's a dog in a picture.
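The rules-versus-learning contrast above can be sketched in a toy Python example. To be clear, this is a deliberately simplified illustration and not how real image models work: the "image" is reduced to a single made-up fur-texture score, and the "learner" just picks the threshold that best separates the labeled examples.

```python
def rule_based_is_dog(fur_score):
    # Classical programming: a human supplies the decision logic directly.
    # The cutoff 0.5 is something the programmer had to guess.
    return fur_score > 0.5

def train_threshold(examples):
    # "Machine learning" in miniature: derive the decision logic from
    # labeled data (score, is_dog) by trying each observed score as a
    # candidate threshold and keeping the one with the best accuracy.
    best_t, best_acc = 0.0, -1.0
    for t in sorted(score for score, _ in examples):
        acc = sum((score > t) == label for score, label in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Hypothetical labeled training data: (fur-texture score, is it a dog?)
data = [(0.9, True), (0.8, True), (0.7, True), (0.2, False), (0.3, False)]

t = train_threshold(data)          # the learned threshold, here 0.3

def learned_is_dog(fur_score):
    # The decision rule now comes from the data, not the programmer.
    return fur_score > t
```

The point of the sketch is only the division of labor: in the first function the human encodes the rule, in the second the program extracts it from examples, and the programmer never states the rule at all.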

So what can be essentialized from that difference? Well, before machine learning, the owners of the biggest computers still had to be clever enough to use them to manipulate data properly; with machine learning, the owners of the biggest computers can simply specify a goal and get what they want. It seems, therefore, that machine learning will work as a tool for those with more capital to find ways to gain more capital. It will allow people with money to create companies that enhance the ability to make decisions purely on profit potential, removing the human element even further from the equation.

How about a few examples:

Recently a machine learning model was approved by the FDA to be used to identify cavities on X-rays. Eventually your dental insurance company will require a machine learning model to read your X-rays and report that you need a procedure in order for them to cover treatment from your dentist. The justification will be that the Machine Learning model is more accurate. It probably will be more accurate. Dentists will require subscriptions to a Machine Learning model to accept insurance, and perhaps dental treatment will become more expensive, but maybe not. It's hard to say for sure if this will be a bad or a good thing.

Machine learning models are getting very good at writing human text. This is quickly reducing the value of human writers. Presumably, more advanced models will replace commercial human writing altogether. Every current limitation of the leading natural language models will be removed in time, and they will become objectively superior to human writers. This also might be a good thing, or a bad thing. It's hard to say.

I think it's actually very hard to predict if Machine Learning will be good or bad for society. Certain industries might be disrupted, but the long term effects are hard to predict.

Not my article but: https://www.rintrah.nl/the-end-of-the-internet-revisited/

I'm not sure the machine learning/AI revolution will end up being all it's hyped up to be. For local applications like identifying cavities, sure. For text generation, however, it seems much more likely to make the internet paradoxically much more addictive and completely unusable. There's so much incentive (and ability) to produce convincing scams, and ChatGPT has proved so easy to jailbreak and to clone, that any teenager in his basement can create convincing emails/phone calls/websites to scam people out of their money. Even without widespread AI adoption, this is already happening to some extent. I've had to make a second email account because the daily spam (that gets through all the filters) has made using the first one impossible, and Google search results have noticeably decayed over the course of my lifetime. On the other side of the coin, effectively infinite content generation, tailored specifically to you, seems likely to exacerbate the crazy amount of time people already spend online.

Another thing I'm worried about with the adoption of these tools is a loss of expertise. Again, this is already happening with Google; I just expect it to accelerate. One of the flaws of the argument that the knowledge base on the internet allows us to offload our memorization and focus on the big picture is that you need to have the specifics in your mind to be able to think about them and understand the big picture. The best example of this in my own life is python: I would say I don't know python, I know how to google how to do things in python. This doesn't seem like the kind of knowledge that programmers in the past, or even the best programmers today, have. ChatGPT is only going to make this worse: you need to know even less python to actually get your code to do what you want it to, which seems good on the surface, but increasingly it means that you are offloading more and more of your thinking onto the machine and thus becoming further and further divorced from what you are actually supposed to be an expert in. Taken to the extreme, in a future where no one knows how to code or do electrical engineering, asking GPT how to do these things is going to be more akin to asking the Oracle to grant your ships a favorable wind than to talking to a very smart human about how to solve a problem.

I'm not sure I really like what I see to be honest. AI has the potential to be mildly to very useful, but the way I see it being used now is primarily to reduce the agency of the user. For example, my roommate asked us for prompts to feed to stable diffusion to generate some cool images. He didn't like any of our suggestions, so instead of coming up with something himself, he asked ChatGPT to give him cool prompts.

The best days of the internet are behind us. I think it's time to start logging off.

ChatGPT is only going to make this worse: you need to know even less python to actually get your code to do what you want it to, which seems good on the surface, but increasingly it means that you are offloading more and more of your thinking onto the machine and thus becoming further and further divorced from what you are actually supposed to be an expert in. Taken to the extreme, in a future where no one knows how to code or do electrical engineering, asking GPT how to do these things is going to be more akin to asking the Oracle to grant your ships a favorable wind than to talking to a very smart human about how to solve a problem.

We have been offloading thinking to tools forever; I highly doubt we will reach some breaking point now. We absolutely do lose some knowledge in the trade, but we gain efficiency. Is it bad that we have calculators everywhere?

I'm not sure I really like what I see to be honest. AI has the potential to be mildly to very useful, but the way I see it being used now is primarily to reduce the agency of the user.

I agree with this on the advertising portion. I'm becoming increasingly concerned that targeted advertising could lead to terrifying outcomes, like a small group controlling public opinion. (actually that already exists, but still)