
Freedom of speech has been poisoned and we need to reframe it

felipec.substack.com

I've written about freedom of speech extensively in all manner of forums, but one thing that has become clear to me lately is that people are genuinely uninterested in the philosophical underpinnings of freedom of speech. Today they would rather quote an XKCD comic than John Stuart Mill's seminal work On Liberty.

Because of this, I've decided to try to reframe the original notion of freedom of speech into a term I coined: Open Ideas.

Open Ideas is nothing more than what freedom of speech has always been historically: a philosophical declaration that the open contestation of ideas is the engine of progress that keeps moving society forward.

Today the tyranny of the majority believes freedom of speech is anything but that. They believe that "freedom of speech doesn't mean freedom from consequences", despite the fact that this phrase came from nowhere, has no identifiable author, and contradicts what all the great free speech thinkers argued. Those thinkers argued that if people are afraid of expressing unpopular opinions, the effect is functionally the same as government censorship: ideas are suppressed, society stagnates, and progress halts.

So far I have not heard any sound refutation of these ideas. All people do is repeat the aforementioned dogmatic slogan, which has zero philosophical foundation, or cite details of the First Amendment, which is obviously not the same thing as freedom of speech itself.

How is anything I've stated in any way an inaccurate assessment of what is happening?


I think you're missing the point. If you wanted to talk to your mother, would it be okay for me to decide what you were allowed to say? For Google? For the government? As far as I'm concerned, nobody has the right to hinder communication between anyone else. The fact that Google can even read my emails is already a disaster; I'm quite sure reading someone's physical mail is highly illegal, and the reasons behind that law are no less valid for email.

The one who listens has as much freedom as the one who speaks

This sounds like freedom of association? I like that concept. What I dislike is when companies try to decide who I can associate with, as well as who can associate with me.

The internet didn't work like this before the fallacy of association took hold. The fallacy takes the form "If illegal content ends up on Google, Google is guilty", "If a person writes a slur in your game chat, your game is guilty", "If you're friends with a sexist, you're likely a sexist yourself", etc. You might have heard other versions of it, like "Roblox is guilty because pedophiles use it" and "Guns should be illegal because criminals use them". The idea is sometimes mocked as "Hitler drank water once, therefore you're a Nazi for enjoying water". I believe that a large chunk of all conflict in the world, and the biggest reason that ideological bubbles have become such a problem, is this very fallacy.

No, I’m not missing the point. The freedom of speech people are not talking about you talking to your mother. They’re talking about being throttled or “censored” so their content doesn’t go as far as they would like it to.

You been to London? There’s a place called Speaker’s Corner in Hyde Park. I can go there, get up on a wooden box, and speak about whatever I want. If what I’m saying is well constructed and delivered, a small crowd will inevitably gather round. If not, or I’m crazy, no one will listen to me.

It seems like a ridiculous analogy, but the freedom of speech brigade think Hyde Park should usher in a hundred people and encourage them to stand there and listen to me.

In London, random people decide whether you're worth listening to. On social media sites, random people never get to decide, because the algorithm simply blacklists posts containing words that align with specific ideologies. The algorithm doesn't even know who I am, nor has it learned that people don't want to hear what I'm saying; it's simply manually coded to prevent people from talking about certain ideas, even between two people who both like said ideas.

The mechanism you're describing is an algorithm which favors the content that people enjoy the most and filters away the content that people enjoy the least, but this is not how social media algorithms work. If a website implements a neutral algorithm and simply steps back to let things take care of themselves, it'll get into big trouble. Maybe porn will be on the frontpage, maybe one of Hitler's speeches will blow up because it's interesting, maybe bots will successfully game it, whatever. Every algorithm which exists today is carefully engineered to do specific things, and it's not true that your online following is a function of the proportion of people who want to associate with you and hear your ideas. The owners manually "correct" it every time content they don't like becomes popular. I wouldn't mind a truly neutral algorithm much, as it would at least be technically fair to all users.
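
To make that distinction concrete, here's a minimal sketch in Python; the post data, scores, and BLACKLIST set are all invented for illustration, not any platform's actual code:

```python
# Minimal sketch (all data and names invented) contrasting a purely
# engagement-ranked feed with one that applies a manually coded keyword
# blacklist before ranking. Real platform rankers are far more complex.

BLACKLIST = {"forbidden_topic"}  # hypothetical manually curated suppression list

def engagement_score(post: dict) -> float:
    """Score a post purely by how much its audience engaged with it."""
    views = max(post["views"], 1)
    return (post["likes"] + 2 * post["shares"]) / views

def rank_feed(posts: list, apply_blacklist: bool = True) -> list:
    """Rank posts by engagement, optionally dropping blacklisted topics first."""
    if apply_blacklist:
        posts = [p for p in posts
                 if not any(word in p["text"].lower() for word in BLACKLIST)]
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"text": "popular take", "likes": 900, "shares": 300, "views": 5000},
    {"text": "a forbidden_topic rant", "likes": 990, "shares": 500, "views": 5000},
]

# With the blacklist the most-engaged post reaches nobody, no matter how
# many readers would have chosen to see it; without it, engagement decides.
print([p["text"] for p in rank_feed(posts)])                         # ['popular take']
print([p["text"] for p in rank_feed(posts, apply_blacklist=False)])  # blacklisted post first
```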

But

it's simply manually coded to prevent people from talking about certain ideas, even between two people who both like said ideas.

in order to get your idea in front of other people who might like it, the platform has to distribute your message to a proportion of the available people who might like it. My point is, this distribution, if it happens, is a bonus. Nobody, you included, is entitled to it. People who complain that their reach is getting throttled are complaining that they’re not getting wider distribution, and then claim their freedom of speech is being unlawfully restricted. It’s not, because they were never entitled to that distribution in the first place.
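
As a rough illustration of what "distribution as a bonus" means: in the hypothetical sketch below, the distribute function, its throttle parameter, and all the numbers are my inventions, not any platform's real mechanism. The point is only that reach is a dial the platform turns, separate from your ability to speak:

```python
# Hypothetical sketch: the platform estimates how many of the available users
# might like a message, then chooses what fraction of them actually receive it.

def distribute(predicted_appeal: float, audience_size: int,
               throttle: float = 1.0) -> int:
    """Return how many users a message is shown to.

    predicted_appeal: fraction of the audience expected to like it (0..1)
    throttle: platform-chosen multiplier; 1.0 = full distribution,
              0.05 = heavily throttled, 0.0 = effectively invisible.
    """
    interested = predicted_appeal * audience_size
    return int(interested * throttle)

# Same message, same potential audience; only the platform's dial differs.
print(distribute(0.2, 10_000))                 # 2000 people reached
print(distribute(0.2, 10_000, throttle=0.05))  # 100 people reached
```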

But if, to take an extreme example, I lock you in a soundproof box in the basement of a castle for spreading incendiary rumours, it seems very peculiar to say, ‘no, you have free speech, I’m just not helping you distribute that speech to others’. I think we agree on that much.

To take a less extreme example, if there are two speakers on Speakers Corner, and I give a giant megaphone to the other one that totally drowns out your voice, that doesn’t exactly seem like free and fair speech either.

In actual real life, there is some level of ‘not helping you distribute your ideas’ that is equivalent to ‘shutting you up’.

Maybe it doesn’t mean you have to give everyone a big megaphone, but maybe you do have to give them all a soundproof room, make it known where the rooms are, direct people to them on request, and not actively direct people away.

I think the soundproof room in a dungeon is another false equivalency. But for the sake of a civil argument let’s agree on the loudspeaker question. Do we actually think some people are getting a helping hand through a shiny new loudspeaker, with the twin express aims of promoting their ideas and drowning out ideas someone doesn’t like? Or is it just the case that people whose ideas get more reach have skilfully figured out the content algorithm game?

Personally I think that it’s convenient for some people to claim that they’re being throttled by some nefarious group of actors, rather than build the skills necessary to be more successful in the algorithm.

I think the second reality is much more likely, but accepting it means turning down the dial on conspiratorial thinking. Messy reality is scarier than a perception of victimhood, which appeals because it moves the locus of control away from you and me.

TLDR - it’s more attractive for us to believe our freedom of speech is being limited by bad actors than to accept that we’re not skilled enough or that our ideas aren’t very good.

Do we actually think some people are getting a helping hand through a shiny new loudspeaker, with the twin express aims of promoting their ideas and drowning out ideas someone doesn’t like? Or is it just the case that people whose ideas get more reach have skilfully figured out the content algorithm game?

If you look at any public statements made by Google, pre-acquisition Twitter, etc., they are absolutely clear that they are attempting to promote trusted, authoritative sites (as judged by them). There is a reason that Google medical searches always route to Healthline and WebMD: ever since the 'Google Medic Update', Google has routed medical, financial and disaster-related queries to trusted and usually governmental partners.
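
A crude sketch of what such "authoritative routing" might look like; the domain allowlist, the keyword classifier, and the boost value below are my assumptions for illustration, not Google's actual ranking code:

```python
# Hypothetical sketch: if a query is classified as medical, results from an
# allowlist of trusted domains are boosted above everything else.

TRUSTED_HEALTH_DOMAINS = {"healthline.com", "webmd.com", "cdc.gov"}
MEDICAL_TERMS = {"symptom", "treatment", "vaccine", "diagnosis"}

def is_medical_query(query: str) -> bool:
    """Naive keyword classifier standing in for a real query categorizer."""
    return any(term in query.lower() for term in MEDICAL_TERMS)

def rank_results(query: str, results: list) -> list:
    """Sort results by relevance, boosting trusted domains for medical queries."""
    def key(result: dict) -> float:
        trusted = result["domain"] in TRUSTED_HEALTH_DOMAINS
        boost = 10.0 if (is_medical_query(query) and trusted) else 0.0
        return result["relevance"] + boost
    return sorted(results, key=key, reverse=True)

results = [
    {"domain": "random-blog.example", "relevance": 0.9},
    {"domain": "webmd.com", "relevance": 0.6},
]
# For a medical query the trusted site outranks a more relevant untrusted one.
print(rank_results("flu symptom advice", results)[0]["domain"])  # webmd.com
```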

Google also engages in strategic banning, throttling and promotion of information:

Tackling misinformation online is an ongoing challenge that Google continues to invest in, including researching novel approaches to the problem. For the past several years, Jigsaw, a unit within Google focused on threats to open societies, has collaborated with researchers from universities of Cambridge and Bristol and American University to study the effectiveness of a tactic called prebunking. While a more commonly known tactic, “debunking” seeks to correct false claims already in popular discourse, prebunking helps individuals build psychological resilience to misleading narratives before they ever encounter them.

Prebunking works by alerting individuals to attempts to manipulate them, along with the false claims or tactics likely to be used to do so, coupled with a thorough refutation of those claims or tactics. The approach is highly flexible and can be adapted to a wide range of media, from a simple billboard to long-form narrative content. Academic research has shown it to be effective against a variety of false narratives, from white supremacist hate speech to COVID vaccine misinformation. In the fall of 2022, Jigsaw ran a large-scale trial of prebunking seeking to counter anti-refugee narratives in Central and Eastern Europe following the mass influx of Ukrainians in the wake of Russia’s invasion of the country.

We will continue to fight this pernicious problem by taking a flexible approach to tackle misinformation across different products, taking action at scale, elevating trustworthy information and equipping people with skills to detect misinformation.

https://web.archive.org/web/20231001021910/https://safety.google/intl/en_us/stories/fighting-misinformation-online/

Google literally says here that they are promoting some material in order to drown out and make ineffective other material. I can't even say that all of this is wrong; I am not necessarily keen on a laissez-faire approach to, e.g., selling medical products in all cases. But it is absolutely manipulation of the discourse by promoting favoured voices and banning, shadow-banning or drowning out others.

The Dept. of Health and Human Services also cites communiqués from Facebook, Twitter and TikTok to state that:

Some technology platforms have improved efforts to monitor and address misinformation by reducing the distribution of false or misleading posts and directing users to health information from credible sources

https://www.hhs.gov/sites/default/files/surgeon-general-misinformation-advisory.pdf (page 6)

I can't even say that all of this is wrong

Exactly. Things like the Trusted News Initiative. I don’t like this, and we should not trust every word the mainstream media says, or even trust ANY of it blindly, but it’s a damn sight better than loads of far-left and far-right crackpots producing their own propaganda and all of it being given equal billing with the FT, BBC, NYT, Economist, etc.

I don’t like this, and we should not trust every word the mainstream media says, or even trust ANY of it blindly, but it’s a damn sight better than loads of far-left and far-right crackpots producing their own propaganda and all of it being given equal billing with the FT, BBC, NYT, Economist, etc.

Why is it preferable? Because such propaganda might lead to people believing absurdities and following them off a cliff?
