This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I assume chatbots won’t have the same network effects as social media so we will end up with all sorts of chatbots.
I understand that making and training a chatbot currently requires some hardware, manpower, time and knowledge (which is probably patented up to the high heavens). So I'm not sure how high the barrier to entry is there - i.e. would something akin to Parler/Gab/Rumble be possible?
Can you even patent things like this? I've never heard of, say, calculus being patented, and the hardware is to a great extent commercially available.
Even if you could patent something specific - these guys don't even know what the AI does to produce a response, do they? So it would be a vague patent.
Why not? Software and algorithmic patents are still very much a thing. In fact, I happen to be on a couple of them myself (through my employer, so I don't really own anything). You can't patent facts, but you can patent algorithms and means of achieving a goal - which is what training a model is. You can't patent calculus, but you can patent something like this: https://patents.google.com/patent/US7840625B2/
Vague patent + expensive lawyers = trouble for the newcomers. It's like the rhino case - rhinos are said to have poor eyesight, but with their mass it's not really a problem for them. They can trample you in so many ways that do not require precision. Avoiding them would require precision. Same here - not being smashed by a vague patent would be a problem for newbies, not for the incumbents.
Also, I think you are not entirely correct about the epistemic situation there. What is going to be patented is how you code and train the model, and this is pretty well understood. What is not very well understood is how exactly each piece of training data influences the model parameters, what exactly you need to do to make a given model return a given result, and, given the result, how to trace its emergence from the training data. I.e. if your model says the cat is a dog, it's hard to figure out why that happened and which parameters you need to update to fix it. At least this is my understanding of it. But it's a hard problem in general - e.g. see the halting problem and its various consequences, or in general how hard it is to debug programs. That doesn't prevent anyone from patenting algorithms.
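To make the distinction concrete, here is a minimal sketch (purely illustrative, not any real patented method): the training *procedure* below is fully specified and mechanical, which is the kind of thing a patent can cover, yet explaining *why* the resulting weights produce a particular answer, or which training example to blame for a wrong one, is a separate and much harder problem. The toy task, data, and all names here are made up for the example.

```python
import math

def train(data, lr=0.5, steps=2000):
    """Plain gradient descent on logistic loss for a 1-D classifier.
    Every step here is a well-defined recipe - the 'patentable' part."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / len(data)
        b -= lr * gb / len(data)
    return w, b

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5

# Toy data: label 1 ("dog") for large inputs, 0 ("cat") for small ones.
data = [(0.0, 0), (1.0, 0), (3.0, 1), (4.0, 1)]
w, b = train(data)

# The trained numbers (w, b) are opaque: if predict() were wrong on some
# input, nothing in them tells you which training example caused it or
# which direction to nudge them - and this is two parameters, not billions.
print(predict(w, b, 3.5))
```

Even in this two-parameter case, the mapping from training data to behavior is indirect; scale that up to billions of parameters and the asymmetry the comment describes - a well-specified procedure producing a poorly-understood artifact - is the whole interpretability problem.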
Interesting, but that's past me.
Are these things open sourced? That might prevent patenting, if the first discoverer open-sources them.
There could be some interesting legal challenges. And if the GOP can win elections, they could always just pass laws getting rid of these patents. It would seem wrong to have AI locked up in a few megacorps, with everything controlled by lawyers.
So maybe change the law. I don’t think society would like 3 megacorps controlling AI and 350 million Americans with no AI.
Some of it is open source, but open source does not prevent patents. I mean, a patent holder would not likely hunt down small hobbyists, but the moment a project gets bigger and starts earning money, you get the lawyers on your doorstep.
The GOP is about three or four conceptual levels behind being able to do that. I would love to have a GOP that could understand problems like that and efficiently design strategies to handle them, but they are not even on the same continent with it. They still can't figure out how to make transparent and secure elections happen - something that countries using ink-stained fingers pull off successfully. If they can't figure out 19th century technology, how good would they be with 21st century technology? Especially when 99% of the people who understand this technology also hate the GOP.
They probably wouldn't. But that doesn't mean that a) it wouldn't happen and b) the solution we'll arrive at wouldn't be worse than the initial problem. Remember the last big problem our system handled, aka the wu-flu? Are you impressed with how efficiently, sanely, competently, honestly and non-harmfully it was (is being?) handled? And Western societies have been dealing with infections for centuries - we at least have some experience of how that works. How an AI revolution would work, nobody has the slightest idea. Adjust your priors on how competently, efficiently and sanely any problems arising from it would be handled. I am not exactly optimistic there. At least it won't be boring, I guess.
I'm less sanguine. The resources necessary to train a 10 trillion parameter AI won't be available to scrappy young startups. Governments and woke megacorps seem poised to dominate the AI landscape.