This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I'm going to shamelessly steal @Scimitar's post from the Friday Fun thread because I think we need to talk about LLMs in a CW context:
A few months ago OpenAI dropped their API price from $0.06/1000 tokens for their best model to $0.02/1000 tokens. This week, the company released their ChatGPT API, which uses their "gpt-3.5-turbo" model, apparently the best one yet, for the price of $0.002/1000 tokens. Yes, an order of magnitude cheaper. I don't quite understand the pricing, and OpenAI themselves say: "Because gpt-3.5-turbo performs at a similar capability to text-davinci-003 but at 10% the price per token, we recommend gpt-3.5-turbo for most use cases." In less than a year, the OpenAI models have not only improved, but become 30 times cheaper. What does this mean? https://old.reddit.com/r/singularity/comments/11fn0td/the_implications_of_chatgpts_api_cost/
...or about $0.13 per hour. Yes, technically this overlooks the fact that OpenAI charges for both input and output tokens, but this is still cheap, and the line is trending downwards.
Full-time minimum wage is ~$20k/year. GPT-3.5-turbo is roughly 100x cheaper and vastly outperforms the average minimum-wage worker at certain tasks. I dunno, this just feels crazy. And no, I won't apologize for AI posting. It is simply the most interesting thing happening right now.
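To spell out the arithmetic behind those figures: the tokens-per-second number below is my own assumption, picked to reproduce the quoted $0.13/hour, and per the caveat above it ignores input-token charges.

```python
# Back-of-envelope cost comparison. PRICE_PER_1K_TOKENS is OpenAI's
# published rate; TOKENS_PER_SECOND is an assumed sustained generation
# speed, not an official spec.
PRICE_PER_1K_TOKENS = 0.002   # USD, gpt-3.5-turbo
TOKENS_PER_SECOND = 18        # assumption: ~18 tokens/s sustained

tokens_per_hour = TOKENS_PER_SECOND * 3600            # 64,800 tokens
cost_per_hour = tokens_per_hour / 1000 * PRICE_PER_1K_TOKENS
print(f"${cost_per_hour:.2f}/hour")                   # ~$0.13/hour

MIN_WAGE_ANNUAL = 20_000      # USD/year, rough full-time minimum wage
HOURS_PER_YEAR = 2_000
model_annual_cost = cost_per_hour * HOURS_PER_YEAR    # ~$260/year
print(f"{MIN_WAGE_ANNUAL / model_annual_cost:.0f}x cheaper")  # ~77x; "100x" rounds up
```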
I strongly agree with @Scimitar: this is the most interesting thing happening right now. If you haven't been following AI/LLM progress over the last month, it has been blazingly fast. I've spent a lot of time in AI doomer circles, so I've built up a layer of cynicism toward people talking about the Singularity, but I'll be damned if I'm not starting to feel a bit uncomfortable that they may have been right.
The CW implications seem endless: low-skill jobs will be automated, but which tribe's jobs go first? Will HR admins who spend all day writing two emails be the first to go? Fast-food cashiers, who are already on their way out thanks to self-ordering kiosks?
Which jobs will be the last to go? The last-mile problem seems pretty bad for legal and medical professionals (i.e., if an LLM makes up an answer, the consequences could be very bad), but theoretically we could use LLMs to generate copy or ideas and then have a professional do a final check.
Outside of employment, what will this do to human relations? I've already seen some (admittedly highly autistic) people online saying that talking to ChatGPT is more satisfying than talking to humans. Will the NEET apocalypse go into overdrive? Will the next generation even interact with other humans, or will people withdraw into entirely individualized worlds, surrounding themselves with digital avatars?
Perhaps I'm being a bit too optimistic on the acceleration, but I can't help but feel that we are truly on the cusp of a massive realignment of technology and society. What are your thoughts on AI?
Many businesses don't even know how to use Excel or Adobe correctly. I work in one such business. There's just no interest in doing new things; everyone is paid by the hour. And apparently the quality of our products and user experience (we serve authors in the publishing industry) is well ahead of our peers, who are much larger and wealthier; they are presumably even less organized than we are. I suggested using AI-generated images for our more abstract book covers, but got shot down for a fake reason. Nobody wants change. This may well be different in other fields, notably tech, but I suspect it holds for most sectors.
I subscribe to Yudkowsky's school of thought that self-driving cars will not be seen before the apocalypse, because our regulatory institutions are so incompetent and slow. I think widespread white-collar job automation won't happen within our dwindling lifespans, because society is not run by intelligent, energetic, innovative thinkers. It's run by people with Essence of Baby-Boomer: people who simply prefer doing things as they used to be done, people who don't comprehend that there could be a faster way to do things, even when the Excel tool designed for their exact purpose is a single search away. Fairly often our website doesn't load, and it looks like it was made in 2004, despite our failed attempt to modernize it. How hard is it to set up a modern website that lets people reliably order books? Not very hard in the universal sense of human capacity; very hard for us.
If there is any risk of unemployment, I imagine many jurisdictions will manipulate requirements and definitions to prevent machines from being used to replace politically influential constituencies. HR doesn't do much functional work today, but it has political power: the power to influence people. It's easiest to stop new things from happening, to use delaying tactics or raise objections. See the OSS's wonderful guide to institutional sabotage, which seems to have been widely adopted: https://twitter.com/CityBureaucrat/status/1450240118195986437
It seems so easy to raise some specious reason why AI couldn't be used for a given job, or to cherry-pick some failure and pounce on it, as if humans never made egregious failures before. The EU, for instance, demands that all AI decisions be legible and explainable, which is basically impossible given the fundamental nature of the technology.
I cannot imagine how this ends well for Western civilization. How can our governing institutions, which have failed all but one of their tests, manage to get this right? They failed on nuclear energy and on waging idiotic wars in the Middle East. They failed on China: publicizing a plan to lure them toward liberal democracy via free trade and openness is like bluffing after showing your hand, and they let China develop its industries while it was weak only to face its challenge now that it is strong, ceding the initiative completely. They failed on climate change: if it's a serious matter, we should have gone nuclear decades ago, and if it isn't, we shouldn't be squandering trillions on renewables. Gain-of-function research - big fat failure. Not knowing for sure whether masks do anything against airborne pandemics - big fat failure.
The only thing our leaders managed to do right was not killing everyone in a nuclear war, and even that may have been luck, given how irresponsible they were. How can they possibly get AI right? It's not a simple matter like 'not choosing to destroy the world in a great-power war'.
Tax software?