This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which in turn becomes a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I think what you're missing about AI is that the investment of time and money is so incredibly low for what it offers. I pay 20 bucks a month for the paid ChatGPT and it's among the very best time-to-money investments I've ever made, and it's not even mandatory: you can do fine hooking a chat app up to an API key and paying cents after a $5 initial up-front deposit, or bounce between free versions, or buy a subscription to a model aggregator for 7 bucks a month that makes an arbitrage play on the gap between raw API cost and your predicted usage.
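To make the subscription-vs-API arbitrage concrete, here's a back-of-envelope sketch. Every number in it is an illustrative placeholder I made up, not any provider's actual rate card:

```python
# Illustrative comparison: flat monthly subscription vs. pay-per-token API billing.
# All prices below are hypothetical placeholders; check your provider's real rates.
SUBSCRIPTION_PER_MONTH = 20.00   # flat fee for a paid chat plan (placeholder)
API_PRICE_PER_M_INPUT = 0.50     # dollars per million input tokens (placeholder)
API_PRICE_PER_M_OUTPUT = 1.50    # dollars per million output tokens (placeholder)

def api_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one month's API usage at the placeholder rates above."""
    return (input_tokens / 1e6) * API_PRICE_PER_M_INPUT \
         + (output_tokens / 1e6) * API_PRICE_PER_M_OUTPUT

# A light user might send ~200k tokens and receive ~100k tokens in a month.
monthly = api_cost(200_000, 100_000)
print(f"API: ${monthly:.2f}/mo vs subscription: ${SUBSCRIPTION_PER_MONTH:.2f}/mo")
```

At rates anywhere near these placeholders, a casual user pays cents on the API while the flat plan costs the same whether you use it or not - which is exactly the spread an aggregator can sit in the middle of.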
Fundamentally, what might take a human a significant time investment is just gone completely with an LLM. You will never annoy it with stupid questions. You can ask and rephrase the same question multiple times without sounding stupid. You can send it off to do research on something you are mildly curious about but too lazy to synthesize yourself. You can converse with it in a foreign language to practice. You can quickly sanity check a potential action without judgement, and without waiting for a friend to text you back. You can have it reformat text or perform an annoying repetitive text-based task. You can have it write test questions for you to practice on. You can have it pretend to be the other side of a job interview and give you feedback on your answers. You can have it add some comments to your code. You can have it write a skeleton for a program. You can have it quickly give you a summary of a PDF you feed it, tailored to your problem. I could go on. It's truly a fundamental change, and it's as potentially useful in the workplace as it is in everyday life.
Yes, there are pitfalls and dangers in all of those, but in terms of risk-reward? You are risking almost nothing but a tiny bit of your own time, and getting back something potentially very valuable. The "task annoyance" that you wouldn't even inflict on an intern is suddenly a non-issue. I find many (though far from all) of the issues people encounter with AI stem from either misunderstanding what AI can actually DO and what it's best at, or from a lack of imagination in prompting. Very few of the things I listed above need checking so extensive that you might as well have done the work yourself. Many of them are things for which there is no adequate replacement IRL, or at least not at remotely the same price point or time commitment.
People go through tons of effort to set up language-learning pen-pals, as an example. You can have AI do that now. You can even tell it what language level you're on, or what country to pretend to be from, or tell it to introduce new concepts to you slowly. And worst case, even if the AI makes a few grammar mistakes, so do real people. There's very little downside!
I will admit that the Hugo use-case honestly seems a bit ill-suited to AI. It isn't that great at free-wheeling internet navigation, so using it for vetting seems like a bad idea. Now, if you instructed it to go through self-submissions or resumes? With a testing and verification step to dial in the right detailed prompt, that could be very effective as a screening tool.