Culture War Roundup for the week of February 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I'm going to shamelessly steal @Scimitar's post from the Friday Fun thread because I think we need to talk about LLMs in a CW context:


A few months ago OpenAI dropped their API price, from $0.06/1000 tokens for their best model, to $0.02/1000 tokens. This week, the company released their ChatGPT API which uses their "gpt-3.5-turbo" model, apparently the best one yet, for the price of $0.002/1000 tokens. Yes, an order of magnitude cheaper. I don't quite understand the pricing, and OpenAI themselves say: "Because gpt-3.5-turbo performs at a similar capability to text-davinci-003 but at 10% the price per token, we recommend gpt-3.5-turbo for most use cases." In less than a year, the OpenAI models have not only improved, but become 30 times cheaper. What does this mean?

A human thinks at roughly 800 words per minute. We could debate this all day, but it won't really affect the math. A word is about 1.33 tokens. This means that a human, working diligently 40-hour weeks for a year, fully engaged, could produce about: 52 * 40 * 60 * 800 * 1.33 = 132 million tokens per year of thought. This would cost $264 out of ChatGPT.

https://old.reddit.com/r/singularity/comments/11fn0td/the_implications_of_chatgpts_api_cost/

...or about $0.13 per hour. Yes, technically this overlooks the fact that OpenAI charges for both input and output tokens, but this is still cheap and the line is trending downwards.
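For anyone who wants to check Scimitar's arithmetic, here is the back-of-the-envelope calculation, using the same assumed rates from the post (800 words per minute of "thought", 1.33 tokens per word, $0.002 per 1,000 tokens):

```python
# Assumed rates, all taken from the post above.
PRICE_PER_1K_TOKENS = 0.002   # gpt-3.5-turbo pricing, March 2023
WORDS_PER_MINUTE = 800        # rough estimate of human "thinking speed"
TOKENS_PER_WORD = 1.33        # common rule of thumb for English text

hours_per_year = 52 * 40                                   # full-time work year
tokens = hours_per_year * 60 * WORDS_PER_MINUTE * TOKENS_PER_WORD
cost = tokens / 1000 * PRICE_PER_1K_TOKENS                 # dollars per year
per_hour = cost / hours_per_year                           # dollars per hour

print(f"{tokens / 1e6:.0f}M tokens, ${cost:.0f}/year, ${per_hour:.2f}/hour")
```

The exact figure comes out to about 133 million tokens and $266/year; the post rounds to 132 million and $264, but the conclusion is the same either way: roughly thirteen cents an hour.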

Full-time minimum wage is ~$20k/year. GPT-3.5-turbo is 100x cheaper and vastly outperforms the average minimum wage worker at certain tasks. I dunno, this just feels crazy. And no, I won't apologize for AI posting. It is simply the most interesting thing happening right now.



I strongly agree with @Scimitar: this is the most interesting thing happening right now. If you haven't been following AI/LLM progress the last month, it has been blazingly fast. I've spent a lot of time in AI doomer circles, so I have a layer of cynicism around people talking about the Singularity, but I'll be damned if I'm not starting to feel a bit uncomfortable that they may have been right.

The CW implications seem endless - low-skill jobs will be automated, but which tribe's first? Will HR admins who spend all day writing two emails be the first to go? Fast food cashiers who are already on their way out through self-ordering consoles?

Which jobs will be the last to go? The last-mile problem seems pretty bad for legal and medical professionals (i.e. if an LLM makes up an answer, the consequences could be severe), but theoretically we could use them to generate copy or ideas, then run a final check by a professional.

Outside of employment, what will this do to human relations? I've already seen some (admittedly highly autistic) people online saying that talking to ChatGPT is more satisfying than talking to humans. Will the NEET apocalypse turn into overdrive? Will the next generation even interact with other humans, or will people become individualized entirely and surround themselves with digital avatars?

Perhaps I'm being a bit too optimistic on the acceleration, but I can't help but feel that we are truly on the cusp of a massive realignment of technology and society. What are your thoughts on AI?

I had my first experience this week using ChatGPT for my job. I'm a web developer. I manage all of the web properties for a small non-profit.

They have an old WordPress site that is bloated with a lot of plugins. It's often impossible to get support from any plugin developer, because they take one look at the mess that I'm managing and nope their way out of helping.

ChatGPT was actually helpful. It helped me diagnose the exact problem. Then it helped me locate an area to implement solutions. Then it gave me a solution. Then it told me how to test the solution. And then the solution failed to do anything. I went through all the steps a few times, and was able to realize it was giving me some bad code. I kept prompting it in different ways and asking for explanations of its code. Eventually I got to a valid solution.

Anyways it was weird as hell. I've worked with junior programmers underneath me before. And I'd be happy to have someone like ChatGPT as a junior programmer beneath me. I'd never recommend they advance past junior programmer. But they basically make for a super googler + semi-dumb code thief.

And I'm not saying that to be like "oh look how crap AI is". It's more like "shit, it's too far gone".


I have two young daughters. They are, by most standard metrics, pretty smart, well-adjusted little kids. I can say with strong confidence that my wife and I are better at our jobs than the AI (both our jobs involve a fair bit of text manipulation + talking to people). But in twenty years, when my daughters are entering the workforce, I don't really have much confidence that they will be better at a text manipulation job than the current generation of AIs.

Forget future advancements; just the current level of AI will eventually crowd out a bunch of entry-level text manipulation jobs.

I look at my daughters playing at night and I think of what world they might grow up in. Right now they love playing a make-believe version of day care. They tuck the babies in for nap time, feed them, and then spend an inordinate amount of time giving them diaper changes (including reactions to poopy diapers: "eww stinky", or "oh good, just pee"). I can't help but think that "daycare professional" might be an oddly resilient career path in the future. It's not like anyone is gonna submit their kids to surveillance in a daycare setting to train up a set of AIs.

Hopefully we'll get to a time where they are wondering about what useful jobs they can have. I was a bit of an AI apocalypse skeptic a few years ago. Most of my skepticism is gone.

My prediction is if one of your daughters is in daycare, she may be in charge of watching AIs who are watching the children.

20 years ago we couldn't imagine today, even though nothing has really changed... except literally the entirety of human interaction in the Western world has shifted to a small device in your hand. It's why I'm shocked when someone like Chuck Klosterman doesn't feel like much has changed in our culture in 20 years (not his exact point). The way culture is produced, shared, and taken in has so completely changed that it feels like nothing has.

And when AIs are watching our children in two decades, it'll feel just like yesterday.