
Culture War Roundup for the week of February 26, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Senator Josh Hawley:

"If conservatives want to rein in Google Gemini, there’s only one way: repeal Section 230 - and allow Americans to sue these AI companies. If we don’t, they’ll soon control everything: news, information, our data, elections …"

Huh? For reference, Section 230 is here. In short, Section 230 says that companies aren't liable for information posted on their websites by third parties. This means that Google can't be sued for showing ISIS.com in your search results, because ISIS is a third party, and ISIS.com is their content, not Google's. Section 230 doesn't apply to generative AI because generative AI isn't a third party. If Google Gemini replies to your prompt with, "Thank you for joining ISIS. Recommended pipebomb targets in your area are X, Y, and Z," Google can't use Section 230 as a defense if Y sues them for being bombed, because Google itself generated the information.

If I were to steelman Hawley's point, I guess it would be that Google as a company benefits from section 230, and so repealing it would punish them for creating "woke" AI and cut off a source of funds for AI development, but I don't think Hawley's use of the phrase "these AI companies" is easily read as referring to only "AI companies which are bankrolled by social media products."

If you are familiar with simulacrum levels, you may have had a bit of difficulty grokking level 4. I think an intuitive definition of level 4 is, "politician speak that doesn't fit into levels 1, 2, or 3". Which level is the tweet by Hawley on? It's not 1, because it isn't true. It's not really 2, because it's not trying to convince you of a proposition. It's not 3: replace "conservatives" with "liberals" and "Google Gemini" with "𝕏", and it could be from AOC. That leaves 4. It's just word associations. Woke AI is bad. Tech companies make woke AI. Section 230 something something big tech censorship. Put it in a box, shake it up, let the manatees do their thing, post whatever comes out to Twitter.

I don't know if this is pertinent here or if I shouldn't even be putting it in the CW thread, but I don't feel like there's enough here to upload a new, original post.

Reddit is now for sale - as a (former) Redditor I just got an email offering me the chance to buy in as a private person when they go public 😁 For many reasons, I am going to resist this tempting offer: I'm not a US citizen, I know nothing about buying stocks and shares, and I don't have the kind of spare money for investments that they presumably want (I don't imagine they're interested in "Gimme $10 worth of your stock"). I'd feel much more flattered by the plámás (flattery) in the below email were it not that I've done Sweet Fanny Adams on Reddit (except make ban-worthy comments), so I'm thinking much more "Gosh, scraping the bottom of the barrel here for pennies, aren't you, guys?".

But if any of you want to take the opportunity to get in on the ground floor and get rich rich rich off Reddit stock, now's your chance!

tl;dr – you’re invited to a special program that lets redditors purchase stock at the same price as institutional investors when we IPO. Details about eligibility and next steps follow. This (long, dense) email has all the info we can provide due to legal restrictions.

As you may have heard, Reddit has taken steps toward becoming a publicly traded company with the initial public filing of our registration statement with the U.S. Securities and Exchange Commission on February 22, 2024. Yes, it’s happening.

And because you have helped make Reddit what it is today, you now have the opportunity to become Reddit owners at the same price as institutional investors.

We’re offering a Directed Share Program (“DSP”) that invites eligible users and moderators who have contributed to Reddit to participate in our initial public offering (“IPO”). (Including you!)

Program Requirements

While being selected to pre-register is the first step, there are certain legal and regulatory requirements to participate in the DSP that are outside of Reddit’s control. Bear with us here…

To be eligible for the DSP, you must:

  • Be a current U.S. resident;
    • You will be asked to provide the DSP Administrator a valid social security or permanent resident number, along with other personal information. Reddit will not have access to this data.
    • Please note that U.S. residents using a VPN may face application limitations if the VPN locates them in certain non-U.S. jurisdictions.
  • Be at least 18 years old;
  • Provide your full legal name and an email address;
  • Not be a current or former Reddit employee (FTE).
When the DSP launches (a few weeks after pre-registration ends), individuals who have been confirmed for the program will be contacted by our external DSP Administrator. You will then be asked to provide additional information securely to the DSP Administrator to confirm your eligibility.

How to pre-register

The number of people who can participate in the DSP is limited; we will offer this opportunity to as many redditors as we are able to accommodate. If capacity is reached before the deadline, you will be added to the waitlist. Based on demand, we may also limit the number of shares available.

If you are interested in being part of Reddit’s DSP, please go to https://old.reddit.com/dsp on desktop to complete the pre-registration form. If you are one of the confirmed participants, we will follow up with an email with more details in the coming weeks. You can also refer to the Frequently Asked Questions for more information. Due to regulatory restrictions (yeah… we know…) we are not able to respond to further inquiries or questions.

Pre-registering does not guarantee that you will be invited or able to participate in the DSP; it also does not obligate you to purchase shares.

As with any investment opportunity, you should make an individual decision based on your own personal circumstances and risk tolerance. Therefore, we urge you to review the preliminary prospectus, when available, before deciding whether to invest in Reddit.

The deadline for pre-registering for the DSP is March 5, 2024. If capacity is reached before the deadline, you will be added to the waitlist.

What happens next?

While there won’t be a confirmation email immediately after you pre-register, everyone who pre-registers will receive an email in the coming weeks from “noreply@redditmail.com”, telling them whether they can proceed with the next steps for the DSP.

This is an automated message (beep, boop, beep) and does not receive replies. Please refer to the FAQ for more information. Per our lawyercats, we are not able to respond to further inquiries or questions.

Prospectus and Important Disclosures

The offering will be made only by means of a prospectus. When available, a copy of the preliminary prospectus related to the offering may be obtained from: Morgan Stanley & Co. LLC, Prospectus Department, 180 Varick Street, New York, New York 10014, or email: prospectus@morganstanley.com; Goldman Sachs & Co. LLC, Attention: Prospectus Department, 200 West Street, New York, New York 10282, telephone: 1-866-471-2526, facsimile: 212-902-9316, or email: prospectus-ny@ny.email.gs.com; J.P. Morgan Securities LLC, Attention: c/o Broadridge Financial Solutions, 1155 Long Island Avenue, Edgewood, New York 11717, telephone: 1-866-803-9204, or email: prospectus-eq_fi@jpmorgan.com; and BofA Securities, Inc., NC1-022-02-25, 201 North Tryon Street, Charlotte, North Carolina 28255-0001, Attention: Prospectus Department, telephone: 1-800-294-1322, or email: dg.prospectus_requests@bofa.com.

A registration statement relating to these securities has been filed with the U.S. Securities and Exchange Commission but has not yet become effective. These securities may not be sold nor may offers to buy be accepted prior to the time the registration statement becomes effective. This notification shall not constitute an offer to sell or the solicitation of an offer to buy these securities, nor shall there be any sale of these securities in any state or jurisdiction in which such offer, solicitation, or sale would be unlawful prior to registration or qualification under the securities laws of any such state or jurisdiction.

No offer to buy the securities can be accepted and no part of the purchase price can be received until the registration statement has become effective, and any such offer may be withdrawn or revoked, without obligation or commitment of any kind, at any time prior to the notice of its acceptance given after the effective date. An indication of interest in response to this notification will involve no obligation or commitment of any kind.

I got that also (as, I imagine, did millions of others and even more bots). I too shall pass. I've gotten in on one IPO, did quite well, and don't care to risk my record.

Nice Reddit account you got there.

Please preregister your first and last name, your email address, and confirm that you’re a US Citizen. From there you might have the privilege of registering with your Social Security Number to purchase Reddit shares (it’s like Reddit Gold but better, we promise!). Thanks for having been a part of the Reddit community from when the narwhal first baconed at midnight.

This information is only for internal record-keeping, our transfer agent, and the IRS, of course. Keep on being wholesome!

Yeah, it is pretty much "simply give us the entire contents of your bank account", which is why it amused me. But if they're sending out messages like this to every hog, dog and divil that ever had a Reddit account, they must be fairly desperate.

I got one too. I don't even qualify. I'm not a US resident; I have never even been to the US. They could've figured that out automatically from the IP addresses associated with my account that they're no doubt logging, or they could've looked at the subreddits I post in. Yet they didn't even bother with that bit of obvious filtering. They must've sent it to literally everyone.

I don’t think you’re wrong about Hawley making mouth sounds.

Though…a significant fraction of The Discourse against generative AI has been driven by digital artists. This has made copyright criticism relatively popular. If someone understands gAI as a plagiarist which does not transfer rights, wouldn’t all Gemini’s output still belong to third parties?

I’m not saying this intuition holds any water: 230(e)(2) is clear that the section doesn’t interact with IP law. But I suspect it explains how criticizing Section 230 got into the Overton window.

Genius move from Hawley; repealing section 230 would hand any accusations around “harmful” content to the judiciary, that infamously reactionary body.

By what legal standard could someone sue Gemini even without section 230 if it said "Thank you for joining ISIS. Recommended pipebomb targets in your area are X, Y, and Z"? Not a rhetorical question, there are probably very many aspects of this kind of law that I know nothing about.

Like, at what point does providing information become suable? Let's say I run a bookstore and you come to the counter and purchase both a phone book and an instruction manual for how to make explosives. Obviously that's not nearly as precise as the ISIS/pipebomb target example, but I am curious at what point along the spectrum the law would say that things of this nature are suable. Is it if you actively recommend something, as opposed to just providing information that the other person could easily piece together on his own?

The current contours, given existing statutory law for material support, were outlined in last year's case Twitter v. Taamneh. Worth a read. Of course, if you listened to oral arguments there, they did try to grapple with whether they could say something about 230 or about Constitutional limits, but the opinion they converged on dodged all of that and focused purely on statutory interpretation, making their job a lot easier and kicking the can down the road a bit. The upshot, at least for folks who want to impose some sort of legal liability on these companies, is that, since this was purely about statutory interpretation, it's entirely possible that they could just pass a different statute that can provide a different standard. It will likely only be when more statutes are passed that pull that line closer and closer to Constitutional/230 limits that we'll really see where the boundaries are.

As an aside, in that case last year, these companies were all swearing up and down that their algorithms are totally passive, agnostic to the nature of the content, and that they are indifferent to the customers who use them. Compare to this week's arguments, where many of those same companies were all swearing up and down that they expend significant time, money, and effort to carefully curate a newspaper-like editorial product that reflects the company's desired expression, and that being able to prohibit Tucker Carlson or Rachel Maddow from using GMail just because they don't like their politics is just a regular part of their editorial discretion. This massive hypocrisy was pointed out multiple times, and we'll see if it matters in the final decision. There have been times before that the Court has been pissed off by repeat litigants who appear to make a mockery of the Court's standards and processes by making contradictory claims about the same underlying facts in different cases at different times to achieve the results they want.

I don't understand why conservatives want to repeal section 230.

Won't that lead to crackdowns on speech, and so forth? Like, isn't that just the direct effect? This is especially bad considering the already existing power differentials—it'll be somewhat lopsided.

This clearly seems like a terrible move, unless I'm missing something.

The main argument is that Section 230 as-is allows big tech to have their cake and eat it too. They can claim to be not liable for user content on the basis that they cannot control what is posted on them, then turn around and heavily "curate" content on political grounds. The idea would be to repeal Section 230 and replace it with an alternative that forces a consistent position; either you curate content and are liable for the content you allow, or you aren't liable but have to tolerate wrongthink on your platform.

They'd better have an alternative, then, and not just strip the protections and force the companies to engage in more censorship.

Pre-CDA 230 caselaw still recognized a split between publishers and distributors of content; it just held distributors liable if they passed along defamatory content knowing it was false, and wasn't clear enough on that divide and left potential lawsuits to hit court or appeal.

You're not missing something, they are. Repeal empowers the big tech companies who have the lawyers to fight the interminable cases which would result. It gives them an excuse to censor when they want to ("we'd be liable if we didn't"). And it provides a means to strangle any upstarts who might want a less censorious environment. But conservatives are still the law-n-order group, and Section 230 looks like an affront against law-n-order.

Won't that lead to crackdowns on speech, and so forth?

Maybe, but maybe not. Non-progressives are basically going for something along the lines of the Fairness Doctrine or the Equal-Time Rule imposed on Big Tech, because over the past 10-15 years progressives have been quickly enclosing the commons (we didn't need a Fairness Doctrine in 1995 or 2005 because the liberals were still pretty firmly in control of Big Tech back then- the iPhone would ultimately break them). Once your enemies start saying "build your own broadcast spectrum" it's not a surprise there are calls to violently reclaim it (which politics is, by other means).

Of course, their being able to articulate that is another matter entirely. But the Supreme Court has overridden amendments before- indeed, that's why those two laws persisted- and I think a solid argument can be levied (at least against ISPs and services that offer DDOS protection) that the "spectrum" is scarce enough to warrant an overriding government interest.

Is that going to make non-progressives as safe as they hope to be? Well, no- there are several vulnerabilities in different places on the OSI model that could allow progressives to claw back control, especially when combined with appliance computing and the DMCA ("iPhones only talk to progressive-approved websites, and removing that restriction is illegal" is always a few months' work away from becoming reality- it already effectively is when you consider how bad the App Store already is- to say nothing of any number of other "please drink verification can" schemes). And it still doesn't affect AI, which is another thing entirely... though it would quite easily be possible to ban sales of high-performance GPUs to US companies that refuse to sell uncensored models much like the US already does with respect to China and doing that doesn't even run into 1A issues.

That's not to say anyone's actually thought about it this much; more likely we're going to get a half-assed measure that still fucks up everything. But a Red congress could get it done.

I agree, this would be a huge own-goal for conservatives. You think that tech companies are bad on free speech now, when all they have to worry about is "someone made us look bad on Twitter"? Wait until you see how hard they crack down when they have legal liability for what it said on their platform. You'll never see any ideas to the right of ~Obama ever again on big tech.

Conservatives are not libertarian free-speech absolutists, even if they also hate SJWs.

Conservatives by-and-large want crackdowns on all kinds of speech, from porn to trans activism to marxism to critical race theory and so on.

The fact that the internet already skews against them only reinforces the benefit. Under the current legal regime, they can already be shut up by corporate platform owners and activist moderators who disagree with them or find them bad for business. Meanwhile, their opponents get positive treatment.

They can't win the war for the internet by the will of the market. Turning the law onto it is their best hope to suppress the speech and punish the enemies they want to target.

Just a blanket repeal could probably backfire*; I always thought you'd just have to make a carve-out for small sites and forums, and let the Facebooks, Twitters, and Reddits fend for themselves.

*) Then again, do other countries have a Section 230? If not, did it result in a lawsuit bonanza? If American society is more litigious, and that's where the problems come from, can't small forums just host themselves offshore?

Every other Western jurisdiction (even the EU) has a Section 230 equivalent, although there are exceptions (eg. I think the EU imposes additional content regulation on the very largest big techs with tens or hundreds of millions of users).

If section 230 is repealed, does this increase the probability that the major tech platforms can be destroyed?

To ironman, Hawley thinks that LLMs work as a fulfillment of the argument ad absurdum from Batzel, where Google as a company has slurped in a slurry of data from undiscoverable initial providers, and Google engineers have carefully tweaked and twisted it to only provide the results they want, such that Google 'hasn't generated/produced' the content only in the strict literal sense in which a ransomer might not have 'written' the letter they cut from newsclippings.

This isn't technically correct, but the ways that it's wrong are technical and not-obvious, and given Daubert and stochastic parrots, I'm not sure I'd bet money on it not going to court (or even not convincing a jury).

To steelman, AI companies, whether social media or search or just-plain-LLMs, aren't in the business of selling answers: they're selling API keys. Section 230 means that some of their clients -- not all, but a large portion who produce end-user-facing text -- can't be liable or even brought to a courtroom for something defamatory, which is not a small selling point. More critically, this allows the actual LLM production to be laundered through a horde of intermediates who've put their own tiny tweaks into play, making it extremely hard to bring serious lawsuits to court against even the most intentionally tortious conduct, and near-impossible to do so successfully.

Hawley's a demagogue, and isn't considering this. OpenAI might not even be considering it (I'd give ~60% odds at low stakes for them, though I'd put a sizable bet at long odds that Google has had separate legal and actuarial teams look over it). But it's a question that has far bigger impact on the business applications of current LLM tech than anything blasé like copyrightability.

aren't in the business of selling answers

They plainly are. When I give Bing/Google a question in a natural way, it tries its damnedest to give me an answer, even if half the time that answer is spam and the other half of the time the answer is gaslighting me about something I clearly know not to be the case irl.

Fair point, but are you the purchaser, or the product, when doing so?

It's currently direct enough that you can put blame somewhere, but I don't think Google expect that use case to be where and how it makes its money from Gemini (or future LLMs), any more than the open testing grounds directly talking to the thing are.

Conservative attempts to deal with these problems very indirectly are not going to work. To actually deal with extremely racist anti-white and progressive-stack A.I. and this kind of ideology, they should impose huge fines, deprive companies of government funds, or directly restrict them, or all of the above to different degrees.

That sounds like an obvious 1A violation. And who’s deciding what counts as wrongthink, again?

The American government, with A.I. safety and its pressure and agents in Silicon Valley, is already there dictating. The same applies to very powerful totalitarian far-left NGOs with influence in mega corporations.

I also don't care for the private/public distinction when it comes to the collective agendas of mega corporations like Google/Twitter/etc. Especially since the Democrats (with some Republican cooperation), and outside the USA the European Union and national bureaucrats, are very willing to dictate and influence.

I would buy more into this argument if any of these corporations did not make woke the default, and if you could get right-wing alternatives anywhere outside of Gab. And if they didn't ban dissent from their stores. The censorship that millions of A.I. users will be subject to by using a platform that censors non-culturally-far-left content, because of the dictates of a) government agents of that ideology influencing things and b) non-government people running such organisations, is a greater violation of freedom. Besides, what the default is matters in its own right.

I am also in favor of the default being saner for reasons that have nothing to do with opposing censorship: the use of art should not distort reality in a culturally genocidal manner. Cultural erasure of this type is an evil in itself, in addition to the censorship being another evil.

Plus, Artificial Intelligence is far from being just a product. Its being super far left is a problem because it is going to be used in all sorts of decision making for both private and public institutions. It will be used to discriminate, including in medical decisions. I also care about the art not erasing white people; this also happened with this ideology and vaccines during the pandemic. If the default ideology promoted by A.I. is ridiculously unjust in regard to the justice system, that will result in very lopsided justice. And if the A.I. becomes more independent, or we get robots, there is a deadly threat there. Woke drones or Woke AGI are actual possibilities.

It does matter as a value to have a society that doesn't screw over the groups targeted by progressive authoritarian hate NGOs, such as the ADL, that have influence with mega corporations and the government alike.

And of course what The Nybbler has said.

We live in a world of oversensitivity and overreaction in a progressive direction, with a lot of strong reactions towards attempts to correct the overcorrection. Not in a world where freedom is maximized as a value, even from the right, which tends to respect cultural leftist sensitivities to one degree or another.

Since you have supported the ADL, you do want a group that is a decider of wrongthink. One which is rather authoritarian and biased, even defining at some point that it isn't racism when it is against whites.

Perfection is impossible, and so is not having any deciders, but it is easy to imagine deciders who are less biased than that and I am 100% in favor of things moving in such direction.

One side being impotent while the other side is willing to use power in both the private and public spheres (it is in fact hard to see where one ends and the other begins, between NGOs which have chapters in mega corporations, government agents, such mega corporations, and even intelligence agencies) is a case of the side choosing impotence being gullible, enabling abuse and the worst decision makers to run riot. It is a vice and not a virtue.

There is no reason to be gullible towards requests to selectively follow certain rules at your own expense that the ones requesting don't apply for themselves.

When the government can no longer require private employers fire me for being a racist, I'll seriously consider complaints about the First Amendment about government moves against anti-white racism. Until then, it's just "your rules applied fairly".

Belisarius is asking the government to pick and choose what products count as acceptable speech. This is central to the 1A in a way that private employment is not.

But fine, throw out the whole 1A because it’s not protecting your edge case. If regulating AI companies this way were perfectly legal, it’d still be a horrendous idea for all the reasons described elsewhere in this thread as an “own goal.”

Belisarius is asking the government to pick and choose what products count as acceptable speech.

The government is already doing this. If only by choosing what the people making the products are allowed to say.

This is central to the 1A in a way that private employment is not.

I would disagree even if they weren't related.

The government is already doing this. If only by choosing what the people making the products are allowed to say.

Gab is a thing that exists and is technologically capable of keeping up with other big tech companies. Its many problems do not stem from the US government.

Gab is a thing that exists and is technologically capable of keeping up with other big tech companies. Its many problems do not stem from the US government.

Are you sure? Maybe the executives of Visa are just woke.... but maybe the regulators said "This Torba guy, he seems to be a bad dude. Be a shame if there were some sort of investigation involving him. A real shame."

The executives at Visa are in fact woke. So are the executives of apple and google and many other companies that have caused problems for Torba.

One formal method might be creating a body against historical falsification or radical ideology in AI to fine companies. Or you could have various agencies find trouble with companies that don't uphold the party line, informally demonstrating the penalties for unorthodoxy. You could prevent state funds from buying Google shares, or withhold government contracts.

Most anti-BDS laws have taken one of two forms: contract-focused laws requiring government contractors to promise that they are not boycotting Israel; and investment-focused laws, mandating public investment funds to avoid entities boycotting Israel.

Texas took steps to curb funds that were anti-oil/gas.

https://www.texastribune.org/2023/02/07/texas-investment-funds-teacher-retirement-system-esg/

But realistically this won't happen because the state isn't really opposed to this kind of thing. Hawley hardly seems to care either, he wants to let somebody else try to do something about it! Repealing 230 is barely related to the issue, it's milquetoast and pathetic. To be fair, he says that Big Tech controls the Senate so it's beyond his power:

People ask me all the time why the Senate doesn’t do anything on A.I. or all the child porn online or the child predators. Simple: the Senate is bought and paid for by Big Tech. If Tech doesn’t want a bill, it doesn’t get a vote. They control the floor. They own the place

A very obvious problem is that any “Ministry of Truth” will only be able to be as neutral as the chosen historian and those who choose the historian will be in full control of what is considered “Truth” in a historical context (at least in this case, though any official fact checking runs into this problem), and it would be basically a political position appointed by the people running the government at any point.

To give a real quick example, well, Trump. To the Right, especially the MAGA wing, he’s a great president who did lots of good things and is being persecuted for being Biden’s political rival. To the Left, he’s an embarrassment who wants to be a dictator, is racist, caused an insurrection, and committed lots of crimes for which he is now being held accountable. Depending on which party gets to appoint the historian, Trump is either the greatest president ever, or the Antichrist. And much like Supreme Court justices were vetted almost exclusively on their Roe v. Wade opinions, the historian will be vetted on his opinions on controversial topics in American history. Do you think 1/6 was an insurrection? Do you think the removal of Indians to reservations was genocide? Do you agree that Zionism is good or bad? Go down the list and you can absolutely find topics that, while they’re about history, one’s political ideology colors how one sees the events and even whether they occurred at all.

I'm sure that everyone (Nazis included) agrees that the SS was not full of blacks. Maybe Netflix thinks English Kings circa 1300 were black but their opinions are not shared by the historical community.

Furthermore, states have opinions about politics and ideology. Schools teach ideology. State media teaches ideology. States use their influence to strengthen friendly voices and suppress opposing voices. It's absolutely routine, including in the US. They already put this stuff in school textbooks: that Jan 6th was an insurrection, for instance. Flinching away from using state power is a sure way to get state power used by someone else against you.

> Do you agree that Zionism is good or bad?

The US clearly thinks it's good. They've devoted enormous effort to supporting and advancing Zionism and suppressing its critics.

But that’s exactly the point. Having any sort of historical “fact checker” just means politicizing history even more than it is today, as with most other things. Even with funding alone, you can easily end up with an Official History, where the state only pays for work that supports whatever the Powers that Be want to be true. So I think it’s a question of being careful what you wish for, because you just might get it, only to find it weaponized against you. It would become a highly politicized and subjective interpretation of history that exists at the behest of the state and would be used to condemn alternative viewpoints as misinformation, while the mainline opinions of the state become The History (much like weaponized science today), where you will be shamed, silenced, and fact-checked for saying things not in line with The History, whether or not they’re actually true.

The case for masks was ambivalent at best, yet because the official line was “masking works and saves lives,” people were shamed, bullied, and silenced for questioning it. Posting about a cloth face mask and showing that your breath escapes from the sides (which is true) became something routinely deleted and modded, to the point where I still see people wear masks in public.

I do wonder if Section 230 should be amended to something like: "If you want the protections of Section 230, you must be a 'public square' and enforce nothing stricter than US speech standards. If you choose to exercise editorial control, then you are responsible for what your users post."

This is covered partly by @ControlsFreak's post below.

As I see it tech companies kind of want to have their cake and eat it too. They want the protections of a free speech regime that prevents them from getting in trouble. But they also want to control speech for the sake of their brand/ideology.

This is roughly where I land with Section 230. The intention was to allow large tech companies (and small blogs, etc.) to host user comments without taking on liability for hosting illegal or defamatory content. Maybe I'm reading between the lines too much here, but the intent appeared to be to shield companies who had user-generated content from liability for content they didn't control.

As large companies work more and more toward controlling what users can say on their platforms, the argument could be made that they are getting closer and closer to editors, who choose what content goes in their paper. And if you're picking and choosing who can say what on your platform, and telling users "you can't say this, it's misinformation", those sound like editorial activities, and it certainly seems like such platforms have the capability (and as such, the responsibility) to police libelous and other content.

Were I able to dictate my preference to the big tech companies, my idealized solution would be one where the platform itself polices nothing stricter than US law, but provides an API for third parties to review and filter posts, which users can subscribe to. You want to hide all posts with profanity? Choose that provider. Want to hide all posts with misgendering? There's a filter for that too. But then the user is doing the "editing" rather than the platform.
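To make the idea concrete, here's a toy sketch in Python. Everything here is invented for illustration (`Post`, `FilterProvider`, `profanity_filter`, and so on); no real platform exposes this API. The point is just that a filter provider reduces to a predicate over posts, and the platform applies only the predicates each user subscribed to:

```python
# Toy sketch of the third-party filter model. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    author: str
    text: str

# A filter provider is just a predicate: True means "hide this post".
FilterProvider = Callable[[Post], bool]

def profanity_filter(post: Post) -> bool:
    # Toy word list standing in for a real third-party provider.
    banned = {"damn", "heck"}
    return any(word in banned for word in post.text.lower().split())

def apply_filters(timeline: list[Post],
                  subscriptions: list[FilterProvider]) -> list[Post]:
    # The platform hosts everything; hiding happens per-user,
    # driven only by the providers that user chose to subscribe to.
    return [p for p in timeline if not any(f(p) for f in subscriptions)]

timeline = [Post("alice", "what a damn fine day"), Post("bob", "hello world")]
visible = apply_filters(timeline, [profanity_filter])
```

A user who subscribes to no providers sees the raw timeline; the platform itself never makes an editorial call.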

Yeah, I think part of the issue is that all big tech companies had to develop censorship technologies and capabilities just to comply with copyright laws. So once they had the process in place they thought "why are we just using this for copyright stuff?"

> Were I able to dictate my preference to the big tech companies, my idealized solution would be one where the platform itself polices nothing stricter than US law, but provides an API for third parties to review and filter posts, which users can subscribe to. You want to hide all posts with profanity? Choose that provider. Want to hide all posts with misgendering? There's a filter for that too. But then the user is doing the "editing" rather than the platform.

I'm even fine with the companies themselves providing those filters, because I suspect a highly requested filter will be "marketing spam". But it also seems possible that the whole "filters" issue is a self-solving problem with the way some social media properties work. You just follow people you want to hear from, and unfollow them if you don't like what they are saying. And you just don't see things you don't follow. Or shared follow lists become the norm, so instead of companies doing blacklisting of content the individuals are doing mass whitelisting.
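The shared-follow-list idea can be sketched the same way. Again, all names here are invented; this is just the shape of mass whitelisting, where a user's feed is the union of the lists they subscribe to rather than a platform-curated stream:

```python
# Toy sketch of shared follow lists as mass whitelisting. All names invented.

# Community-maintained follow lists a user might subscribe to.
follow_lists = {
    "tech-writers": {"alice", "carol"},
    "friends": {"bob"},
}

def visible_authors(subscribed: list[str]) -> set[str]:
    # Union of every subscribed list: whitelisting by the user,
    # not blacklisting by the platform.
    allowed: set[str] = set()
    for name in subscribed:
        allowed |= follow_lists.get(name, set())
    return allowed

def filter_feed(feed: list[tuple[str, str]],
                subscribed: list[str]) -> list[tuple[str, str]]:
    allowed = visible_authors(subscribed)
    return [(author, text) for author, text in feed if author in allowed]

feed = [("alice", "hi"), ("mallory", "buy now"), ("bob", "yo")]
mine = filter_feed(feed, ["tech-writers", "friends"])
```

Anyone not on a list you subscribe to simply never reaches your feed, which is the "you just don't see things you don't follow" behavior described above.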

> Yeah, I think part of the issue is that all big tech companies had to develop censorship technologies and capabilities just to comply with copyright laws. So once they had the process in place they thought "why are we just using this for copyright stuff?"

This is a good point that I hadn't previously considered. They had a previously designed compliance tool available to them, and in that case, why not use it to make their platform a better and more pleasant place (however they define it)?

> You just follow people you want to hear from, and unfollow them if you don't like what they are saying. And you just don't see things you don't follow. Or shared follow lists become the norm, so instead of companies doing blacklisting of content the individuals are doing mass whitelisting.

If I'm looking to consume or ingest information (or keep up with friends), this is the way to go. The downside for companies is a lack of discoverability, which limits the time you spend on their platform.

I'm sure a large chunk of social media companies' desire to curate/editorialize user content, especially, is the desire to keep users cozy and incentivize time spent on the platform.

Most people are not that technically savvy, they’ll use Facebook/Twitter/insta/google on the basis of one or a small number of default modes.

It's just word associations.

That seems to be a substantial portion of all political communication these days. A huge portion of politics (including political 'news', which is usually essentially just propaganda for one side or the other) is finding some way to put two things on a shelf next to each other: one thing universally agreed to be bad, the other an unrelated politician, political party, or political idea. And then just go, "Eh? Eh? How about it? They're like, right next to each other!"

It's pretty obvious that Hawley either doesn't understand or thinks his audience doesn't understand (and thus doesn't care about making shit up) what Section 230 is about.