I've written about freedom of speech extensively in all manner of forums, but one thing has become clear to me lately: people are genuinely uninterested in the philosophical underpinnings of freedom of speech. Today they would rather quote an XKCD comic than John Stuart Mill's seminal work On Liberty.
Because of this, I've decided to try to reframe the original notion of freedom of speech into a term I coined: Open Ideas.
Open Ideas is nothing more than what freedom of speech has always been historically: a philosophical declaration that the open contestation of ideas is the engine of progress that keeps moving society forward.
Today the tyranny of the majority believes freedom of speech is anything but that. They believe that "freedom of speech doesn't mean freedom from consequences", despite the fact that the phrase came from nowhere, has no author, and that all the great free speech thinkers argued precisely the opposite. Those thinkers argued that if people are afraid of expressing unpopular opinions, the result is functionally the same as government censorship: ideas are suppressed, society stagnates, and progress is halted.
So far I have not heard any sound refutation of these ideas. All people do is repeat the aforementioned dogmatic slogan with zero philosophical foundation, or bring up First Amendment details, and the First Amendment is obviously not the same thing as freedom of speech.
How is anything I've stated in any way an inaccurate assessment of what is happening?

Some people seem to equate “freedom of speech” with “freedom of reach”. You can say whatever you want. That doesn’t mean that whatever you say must be published by loudspeaker media institutions and promoted by social media algorithms.
If social media algorithms are made to filter certain ideas, you have censorship. And that's not how the algorithms are meant to be used in the first place: algorithms are supposed to personalize your feed so that content you're interested in is shown to you. The argument you're using here is often used to promote censorship, and it's often combined with the argument that "freedom of speech is only a protection against government censorship". Here I'd say the same thing as OP: it's a poisoned version of the actual concept, and poorly thought out.
Of course, this leaves some ambiguity in the definition of free speech, but I think that can be fixed if we borrow the concept of positive vs. negative rights.
This is obviously wishful thinking. We don’t own the algorithms. The tech companies that own the platforms own the algorithms. Companies operate in service of shareholder value. Therefore algorithms are constructed for the set of events, circumstances and behaviours that creates most shareholder value. Definitely not for you or me, no matter how many “For You” tabs you see.
The point stands. You can say whatever you want. No media / social media company is obliged to take that and bring it to one or a hundred or a million other people.
Also, and this is the most important point of all: having the freedom to say whatever you want is good, because you might win someone (or a whole society) over to your way of thinking. But if the opposite happens, and someone or society decides you’re a crackpot and they don’t want to hear from you ever again, that’s okay too. The one who listens has as much freedom as the one who speaks.
I think you're missing the point. If you wanted to talk to your mother, would it be okay for me to decide what you were allowed to say? Would it be okay for Google? The government? As far as I'm concerned, nobody has the right to hinder communication between anyone else. The fact that Google can even read my emails is already a disaster; I'm quite sure reading your physical mail is highly illegal, and the reasoning behind that law is no less valid for digital mail.
This sounds like the freedom of association? I like that concept. What I dislike is when companies try to decide who I can associate with, as well as who can associate with me.
The internet didn't work like this before the fallacy of association took hold. The form of the fallacy is "If illegal content ends up on Google, Google is guilty", or "If a person writes a slur in your game chat, your game is guilty", or "If you're friends with a sexist, you're likely a sexist yourself". You might have heard other versions of it, like "Roblox is guilty because pedophiles use it" and "Guns should be illegal because criminals use them". The idea is sometimes mocked as "Hitler drank water once, therefore you're a Nazi for enjoying water". I believe that a large chunk of all conflict in the world, and the biggest reason that ideological bubbles have become such a problem, is this very fallacy.
No, I’m not missing the point. The freedom of speech people are not talking about you talking to your mother. They’re talking about being throttled or “censored” so that their content doesn’t go as far as they would like it to.
You been to London? There’s a place called Speakers’ Corner in Hyde Park. I can go there, get up on a wooden box, and speak about whatever I want. If what I’m saying is well constructed and delivered, a small crowd will inevitably gather round. If not, or if I’m crazy, no one will listen to me.
It seems like a ridiculous analogy, but the freedom of speech brigade think Hyde Park should usher in a hundred people and encourage them to stand there and listen to me.
In London, random people decide whether you're worth listening to. On social media sites, random people never get to decide anything if the algorithm simply blacklists things containing words that align with specific ideologies. The algorithm doesn't even know who I am, nor has it learned that people don't want to hear what I'm saying; it's simply manually coded to prevent people from talking about certain ideas, even between people who both like the idea in question.
The mechanism you're describing is an algorithm which favors the content that people enjoy the most and filters away the content that people enjoy the least, but this is not how social media algorithms work. If a website implements a neutral algorithm and simply steps back to let things take care of themselves, it will get in big trouble. Maybe porn will be on the frontpage, maybe one of Hitler's speeches will blow up because it's interesting, maybe bots will successfully game it, whatever. Every algorithm which exists today is carefully engineered to do specific things, and it's not true that your online following is a function of the proportion of people who want to associate with you and hear your ideas. The owners manually "correct" it every time content they don't like becomes popular. I wouldn't much mind a truly neutral algorithm, since it would at least be fair to all users.
But in order to get your idea in front of other people who might like it, the platform has to distribute your message to some proportion of the available people who might like it. My point is that this distribution, if it happens, is a bonus. Nobody, you included, is entitled to this distribution. People who complain that their reach is getting throttled are complaining that they’re not getting wider distribution, and then claim that their freedom of speech is being unlawfully restricted. It’s not, because they were never entitled to that distribution in the first place.
The difference between talking outside and talking online is that real spaces aren't moderated or owned by other people. The supermarket cannot stop other people from hearing your voice; your local park cannot make you invisible to other people. Your distribution is only hindered by the laws of physics. Imagine if, in real life, you were told "You aren't entitled to use the sidewalk", or "If nobody lets you use your local bridge, maybe you should reflect on your behaviour", or perhaps "Your local water company can refuse to sell you water if your political views do not align with theirs". This is the important difference, and it feels like you're brushing over it or not noticing it.
I’m not brushing over it or not noticing. You’re making completely false equivalencies between publicly owned and privately owned.
Now you might argue that X or YouTube etc. should be publicly owned (i.e. commandeered by the state). But that’s a completely different argument.
If you gave a speech in the liquor aisle about the dangers of alcohol, you’d be removed. You’ll also be removed for causing a disturbance. It happens all the time. Homeless people yelling at the voices in their heads get kicked out quite often.
But if, to take an extreme example, I lock you in a soundproof box in the basement of a castle for spreading incendiary rumours, it seems very peculiar to say, ‘no, you have free speech, I’m just not helping you distribute that speech to others’. I think we agree on that much.
To take a less extreme example, if there are two speakers at Speakers’ Corner, and I give a giant megaphone to the other one that totally drowns out your voice, that doesn’t exactly seem like free and fair speech either.
In actual real life, there is some level of ‘not helping you distribute your ideas’ that is equivalent to ‘shutting you up’.
Maybe it doesn’t mean you have to give big megaphones to everyone, but maybe you do have to give them all a soundproof room, make it known where they are, direct people to them on request, and not actively direct people away.
I think the soundproof room in a dungeon is another false equivalency. But for the sake of a civil argument let’s agree on the loudspeaker question. Do we actually think some people are getting a helping hand through a shiny new loudspeaker, with the twin express aims of promoting their ideas and drowning out ideas someone doesn’t like? Or is it just the case that people whose ideas get more reach have skilfully figured out the content algorithm game?
Personally I think that it’s convenient for some people to claim that they’re being throttled by some nefarious group of actors, rather than build the skills necessary to be more successful in the algorithm.
I think the second reality is much more likely, but that turns down the dial on conspiratorial thinking. Messy reality is scarier than a perception of victimhood, which appeals because it moves the locus of control away from you and me.
TLDR - it’s more attractive for us to believe our freedom of speech is being limited by bad actors than to accept that we’re not skilled enough or our ideas aren’t very good.
This is a straw man, because nobody is saying anything remotely close to that.
If I say something controversial about COVID-19, freedom of speech would dictate that I shouldn't be banned from Facebook for it. That's not "freedom of reach"; that's "let my friends and family who have accepted me read what I wrote".
Same thing on YouTube, reddit, X/Twitter, and so on. My followers follow me for a reason.
But more importantly: why should I be fired from my job because of something I posted on social media? This has nothing to do with "freedom of reach", this is punishing people for challenging established dogma.
Agree with this, for sure. When I say “freedom of speech doesn’t mean freedom of reach”, I don’t at all mean you should be thrown off the platform. What I do mean is the platform has no obligation to algorithmically promote what you say to other people on the internet.
The question of the people who follow you is an interesting one, and a much thornier one when it comes to Internet user preferences.
There’s too much content, and we “follow” too many accounts, for reverse-chronological feeds to work. If that ever worked, it certainly cannot work now, with AI helping everyone pump out 10x to 100x more content and content variations than before. So there’s just too much. Some sorting algorithm is required, but…
I hate the algorithms. They act like crack and plug me in intravenously (figuratively speaking) to the brain-rotting content I can’t stop myself from consuming. I know I am better off without the algorithms (as they’ve evolved in 2022-25). In 2016-21, the algorithms brought me interesting, heterodox ideas and content, and I was psychologically and epistemologically better off as a result. That has flipped since 2022 on almost every platform. (YouTube seems to be an exception.)
Why put so much effort into pretending you're not exercising power?
Megaphone media has the excuse of limited resources, but social media platforms have no justification for manipulating the kind of content their users will see. Technologically it's perfectly possible to let every user write their own algorithm, but at the very least people should be given the option to switch to a basic "people who liked X also liked Y" algorithm.
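A baseline like that is simple enough to sketch. Here's a minimal, purely illustrative version in Python, a naive item co-occurrence recommender; the data and names are invented, and no platform's real system looks like this:

```python
# Toy "people who liked X also liked Y" recommender (item co-occurrence).
from collections import Counter
from itertools import combinations

def build_cooccurrence(likes_by_user):
    """likes_by_user maps each user to the set of item ids they liked."""
    co = Counter()
    for items in likes_by_user.values():
        for a, b in combinations(sorted(items), 2):
            co[(a, b)] += 1  # count both directions so lookups stay simple
            co[(b, a)] += 1
    return co

def recommend(user_likes, co, top_n=10):
    """Score unseen items by how often they co-occur with the user's likes."""
    scores = Counter()
    for liked in user_likes:
        for (a, b), count in co.items():  # naive full scan; fine for a toy
            if a == liked and b not in user_likes:
                scores[b] += count
    return [item for item, _ in scores.most_common(top_n)]

likes = {"alice": {"x", "y"}, "bob": {"x", "y", "z"}, "carol": {"y", "z"}}
print(recommend({"x"}, build_cooccurrence(likes)))  # -> ['y', 'z']
```

The point isn't that this is good (it's trivially gameable and doesn't scale as written); it's that it applies the same rule to every user, with no manual thumb on the scale.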
I think the technical hurdles to this are a lot higher than you expect. I'd like to see someone take a shot at doing it anyway, but I'm confident it will come with some significant trade-offs. A basic algorithm is probably more likely.
The main problem is that you need to run this somewhere and neither of your choices are good.
Running this on company hardware brings large performance and safety risks. Safety because some guy is going to try to get you to run an infinite loop or virus. Performance because search algorithms over large datasets are computationally intensive at the best of times, and code written by random strangers is not the best of times. Solving both of these without severely limiting the ability to create an algorithm would be a miracle.
Running this on a user's computer instead raises challenges around getting the data onto the user's computer to be searched. If you're looking at Twitter and want today's tweets from the accounts you follow, that could be thousands of records. Download speed limitations will ensure you will never be able to run your algorithm on more than a tiny fraction of the full site.
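For rough scale, with all numbers assumed rather than measured: at ~2 KB per tweet with metadata, 5,000 tweets from your follows is ~10 MB per refresh, which is tolerable; but a full day of the whole site is hundreds of millions of tweets, on the order of a terabyte, which no home connection is pulling down for a local search.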
In the most general technical sense, sure, the Halting Problem is unsolvable: no matter how long you let some arbitrary algorithm run you can't always be sure of whether it's going to keep going forever or whether it's actually just about to finish.
In a slightly less general technical sense: here you don't need to support arbitrary algorithms just to do a better version of an ordered search, so you can restrict your users to a non-Turing-complete language on which the Halting Problem is solvable.
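A hypothetical sketch of that restriction in Python: accept only arithmetic expressions over a post's fields, so the "language" has no loops or calls and every formula trivially terminates. The field names and the whole mini-API are invented for illustration:

```python
# Hypothetical: vet a user-supplied ranking formula by whitelisting AST nodes.
# With no loops, calls, or assignments allowed, every formula terminates.
import ast

ALLOWED = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
           ast.Name, ast.Load, ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub)

def compile_formula(src):
    tree = ast.parse(src, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED):
            raise ValueError(f"disallowed construct: {type(node).__name__}")
    return compile(tree, "<formula>", "eval")

def score(code, post):
    # The post's fields are the only visible names; builtins are stripped.
    return eval(code, {"__builtins__": {}}, dict(post))

code = compile_formula("likes + 2 * replies - age_hours")
posts = [{"likes": 10, "replies": 3, "age_hours": 5},
         {"likes": 4, "replies": 9, "age_hours": 1}]
print(sorted(posts, key=lambda p: score(code, p), reverse=True))
```

The eval is only safe here because the whitelist already rejects calls and attribute access; the vetting step does the real work, not the emptied-out builtins.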
Practically speaking, you just do what any automated test suite does: you define "infinite" to be 5 minutes, or 5 seconds, or however much you expect you can spare per run at most, and if the algorithm isn't done by then it gets killed anyway.
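A sketch of that cutoff, with stand-in function names rather than any real platform's API: run the user's ranking code in a separate process and kill it when the budget expires.

```python
# Kill user code that overruns its time budget, test-suite style.
import multiprocessing

def user_ranking(posts, out):
    out.put(sorted(posts))  # stand-in for whatever the user submitted

def rank_with_budget(posts, seconds=5):
    out = multiprocessing.Queue()
    proc = multiprocessing.Process(target=user_ranking, args=(posts, out))
    proc.start()
    proc.join(timeout=seconds)  # wait at most `seconds`
    if proc.is_alive():         # still running: declare it "infinite"
        proc.terminate()
        proc.join()
        return posts            # fall back to the unranked feed
    return out.get()

if __name__ == "__main__":
    print(rank_with_budget(["b", "a", "c"]))  # -> ['a', 'b', 'c']
```

The process boundary matters here: a thread can't be forcibly killed mid-loop, but a process can.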
Sandboxing, on the other hand, has been solved even in the technical sense. Even if you're going Turing-complete, you don't have to let your users specify a program in C or binary or something, or run it unsandboxed in the same address space or with the same kernel privileges. Your browser has probably run a hundred little arbitrary Javascript programs so far today, and the worst they could have done would have been to churn your CPU until you closed a tab, because anything more serious is sufficiently restricted. Crooks sending you links to rnicrosoft.com still depend on you typing in your credentials or downloading and running something heinous afterward, even though the second you click a link like that they get to send your computer arbitrary programs that it will immediately run.
Firefox released a patch to fix a sandbox escape* just a few days ago. Properly sandboxing a program has not been solved; it is an active problem that consumes a lot of developer time and current solutions likely still have many holes to be found.
Crooks mostly rely on users downloading and running scripts because it's easy and it works. Writing exploits against browsers isn't worth the effort when you can socially engineer people and get the same results.
Most sandboxing is also bad for performance. Javascript on a random webpage generally doesn't need to perform well but a recommendation algorithm will.
Any cut-off aggressive enough to meaningfully restrict denial-of-service attacks would make algorithm-writing functionally impossible for the majority of users and probably also prevent most of the possible algorithms people would like to write.
* I can't see the bug report but based on the reported severity this appears to be a between-page sandbox escape rather than fully leaving the browser.
I agree. And surely legislation that forces tech platforms to give users control over (or at least MUCH more transparency about) the algorithms they use can’t be far away.
It’s impossible to imagine the tech cos doing this themselves. They would be slaughtering the greatest golden goose that ever was. Their hand must be forced.
Yup, that or nationalize them outright, or provide a public platform.
Ultimately, I think it comes down to not allowing social media companies to have their cake and eat it too. It's perfectly valid for them to only allow what they want to allow on their platform. But then they cannot claim that they are unable to block content they can be held liable for.
If your algorithm is making opinionated editorial decisions, you are fully responsible, as a publisher, for what it shows. If it's only making technical editorial decisions, or none at all, then you can enjoy the protections that currently exist. I think that's the only way to thread the needle between freedom of association and freedom of speech.
I deactivated Twitter/X a few months ago. I got tempted back in last week and went through the ordeal of solving 10 visual puzzles (weirdly cryptic and very difficult) to prove I was human. Within three minutes, and WITHOUT A SINGLE POINT OF PREFERENCE FROM ME (no likes, no comments, no follows…), I was being shown loathsome racist material. Somebody somewhere wants this to happen, and set it up precisely this way. The base algorithm of X is racist, bigoted, hateful, angry and divisive, and it’s radicalizing people’s opinions every second of every day. And I say this as someone who thought Jack’s original app, from ~2010 right up to the way it introduced dissenting voices during COVID-19, provided an indispensable service to humanity. There was obvious censorship and bias that Elon set out to fix. But he’s made it 100 times worse in the opposite direction.
I'm not seeing how this particular belief would increase your credibility on the issue.
Bad phrasing. Basically:
This has been going on for far longer than that. Tristan Harris's TED talk, outlining how he, as a Google employee, explicitly aimed to manipulate you to maximize your "Time On Site", came out in 2016, and his original internal talk on the subject dates back to 2013.
A few additional data points:
Zvi Mowshowitz published his delenda est post on the Facebook algorithm in 2017. So the situation was already bad enough by then to provoke a generally mild-mannered New York Jewish quant into making a public delenda est post.
Fair points. My point is that it’s got a lot, LOT worse since ~2021/22. This was absolutely influenced by TikTok’s algorithm, which meant it didn’t matter who you followed: it gave you what your behaviour told it you wanted. The others followed suit, and the consequences are / will be disastrous.
I largely agree with you. I think the difference is probably (and we may never know for sure) what they are optimizing for now, more than how they are going about it.
I think the 2015/2016 social media companies were optimizing for maximizing attention as their one true goal. By the time we were deep into the COVID years, they were seeking to metacognitively reflect their understanding of you back at you, while continuing to optimize for attention.