This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Can you expand on exactly which expected future changes are fueling your crisis? To my thinking, the AI in driverless cars still can't make left turns, and while AI can create text and art, it's just pattern regurgitation. None of this is despair-inducing to me, but I also haven't been paying close attention.
Is the 'fight' Nate Soares is talking about just about regulation? While I agree it has been absent up until now, I see AI regulation as something government could eventually codify. I think the Silicon Valley giants are firmly part of the military-industrial complex and recipients of government research dollars.
One more thing I thought I'd mention: it may be that AGI is an easier problem than driverless vehicles. Some of the "higher" functions of humans may be easier to replace than the more physical functions like driving a car. There's a very good chance full self-driving vehicles will become a thing AFTER, not before, AGI.
There's a misconception (rapidly being corrected) that AI is coming for factory jobs, because those were the jobs that were automated last time. In fact, it will be the intellectual jobs that are automated this time. Plumbers, construction workers, hair stylists, and burrito rollers won't be replaced until much later.
This is classic Moravec's paradox, but I don't think it holds. Consider this article from 2012. Now consider 2022's Flamingo. Then look at results from within the last month, like BLIP-2 or MM-CoT, which are orders of magnitude smaller, vastly cheaper to train, and disposable enough for open source.
Convnets and everything after them have made vision easy. Driving will be solved soon enough. It's just those damn nines of safety.
...if at all...
I remember Rats pooh-poohing the law of comparative advantage back in the day. Don't you know? AI can be better at everything and replace all the jobs. Well, yes, but even if we get to that point technologically, it's precisely the high-salary jobs that it makes the most sense to replace first. And at that point, why waste precious GPUs on flipping burgers?
I expect near term AGI (circa 2030). By this I mean that anything a 130 IQ person can do with a laptop and an internet connection, an AI will be able to do better and cheaper.
I'd recommend playing with ChatGPT some more. It's far from just pattern regurgitation, and honestly I don't think these criticisms hold much weight. Importantly, the people who are most dismissive of AI tend to be those with the least domain expertise.
We can't even solve carbon emissions. The U.S. government, poorly managed as it is, is just one entity in a sea of competing interests. How can we get 100% compliance from the whole world, especially when many people are completely ignorant of the threat?
Why do you think we’ve arrived at this situation? That the people with the most domain knowledge are also some of the people who are most incorrect, in your view?
Chatting with ChatGPT would only increase the training data!