I've written extensively about freedom of speech in all manner of forums, but one thing has become clear to me lately: people are genuinely uninterested in the philosophical underpinnings of freedom of speech. Today they would rather quote an XKCD comic than John Stuart Mill's seminal work On Liberty.
Because of this, I've decided to try to reframe the original notion of freedom of speech into a term I coined: Open Ideas.
Open Ideas is nothing more than what freedom of speech has always been historically: a philosophical declaration that the open contestation of ideas is the engine of progress that keeps moving society forward.
Today the tyranny of the majority believes freedom of speech is anything but that. They believe that "freedom of speech doesn't mean freedom from consequences", despite the fact that the phrase came from nowhere and has no identifiable author, and that all the great free-speech thinkers argued precisely the opposite. The great thinkers argued that if people are afraid of expressing unpopular opinions, that is functionally the same as government censorship: ideas are suppressed, society stagnates, and progress is halted.
So far I have not heard any sound refutation of any of these ideas. All people do is repeat the aforementioned dogmatic slogan, which has zero philosophical foundation, or bring up First Amendment details, which is obviously not the same thing as freedom of speech.
How is anything I've stated in any way an inaccurate assessment of what is happening?

Why put so much effort into pretending you're not exercising power?
Megaphone media has the excuse of limited resources, but SocMeds have no justification for manipulating the kind of content their users will see. Technologically it's perfectly possible to let every user write their own algorithm, but at the very least people should be given the option to switch to a basic "people who liked X also liked Y" algorithm.
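For what it's worth, the "people who liked X also liked Y" baseline is just co-occurrence counting over likes. A minimal sketch in TypeScript (the data shapes and names are my own illustration, not any platform's actual API):

```typescript
// likes: map from user id to the set of item ids that user has liked
function alsoLiked(likes: Map<string, Set<string>>, item: string): string[] {
  const counts = new Map<string, number>();
  for (const liked of likes.values()) {
    if (!liked.has(item)) continue; // only users who liked X count
    for (const other of liked) {
      if (other === item) continue;
      counts.set(other, (counts.get(other) ?? 0) + 1); // co-occurrence tally
    }
  }
  // rank each Y by how many X-likers also liked it
  return [...counts.entries()].sort((a, b) => b[1] - a[1]).map(([id]) => id);
}
```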
I think the technical hurdles to this are a lot higher than you expect. I'd like to see someone take a shot at it anyway, but I'm confident it will come with some significant trade-offs. A basic algorithm is probably more likely.
The main problem is that you need to run this somewhere, and neither of your choices is good.
Running this on company hardware brings large performance and safety risks. Safety because some guy is going to try to get you to run an infinite loop or virus. Performance because search algorithms over large datasets are computationally intensive at the best of times, and code written by random strangers is not the best of times. Solving both of these without severely limiting the ability to create an algorithm would be a miracle.
Running this on a user's computer instead raises the challenge of getting the data onto the user's computer to be searched. If you're looking at Twitter and want today's tweets from the accounts you follow, that could be thousands of records. Download speed limitations will ensure you can never run your algorithm on more than a tiny fraction of the full site.
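Rough numbers to make that concrete (my own assumptions, not measured figures): at roughly 2 KB per tweet including metadata, 5,000 tweets from your follows is only about 10 MB, which is feasible. But an algorithm that wants to search beyond your follow graph needs some slice of the whole firehose, historically on the order of hundreds of millions of tweets per day, i.e. hundreds of gigabytes daily, which no home connection will sync.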
In the most general technical sense, sure, the Halting Problem is unsolvable: no matter how long you let an arbitrary algorithm run, you can't always be sure whether it's going to keep going forever or whether it's just about to finish.
In a slightly less general technical sense, though, you don't need to support arbitrary algorithms just to do a better version of an ordered search, so you can restrict your users to a non-Turing-complete language in which the Halting Problem is solvable.
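The simplest such restriction is to allow no user-written loops at all: the user supplies only declarative scoring rules, and the platform runs one bounded pass over them. A hypothetical sketch (the field names and data shapes are invented for illustration):

```typescript
// A user "algorithm" is just a list of weighted features; the language has
// no way to express a loop or recursion, so evaluation always terminates.
interface ScoringRule {
  field: "likes" | "recency" | "followsAuthor";
  weight: number;
}

interface Post {
  likes: number;
  ageHours: number;
  followed: boolean;
}

function score(post: Post, rules: ScoringRule[]): number {
  let total = 0;
  for (const r of rules) {
    const value =
      r.field === "likes" ? Math.log1p(post.likes) :
      r.field === "recency" ? -post.ageHours :
      post.followed ? 1 : 0;
    total += r.weight * value; // each rule is constant-time, so the pass is O(rules)
  }
  return total;
}
```

Sorting the feed by score(post, userRules) then gives each user their own ranking, while the platform keeps a hard guarantee that every "program" halts.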
Practically speaking, you just do what any automated test suite does: you define "infinite" to be 5 minutes, or 5 seconds, or however much you expect you can spare per run at most, and if the algorithm isn't done by then it gets killed anyway.
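In code, that kill-switch is just a watchdog timer around an isolated worker. A minimal Node.js sketch using worker_threads (the script path and data shape are placeholders I've invented):

```typescript
import { Worker } from "node:worker_threads";

const TIMEOUT_MS = 5_000; // our working definition of "infinite"

// Runs a user-supplied ranking script in a separate thread; if it hasn't
// posted a result within TIMEOUT_MS, it gets terminated unconditionally.
function runUserAlgorithm(scriptPath: string, feedItems: object[]): Promise<object[]> {
  return new Promise((resolve, reject) => {
    const worker = new Worker(scriptPath, { workerData: feedItems });
    const timer = setTimeout(() => {
      void worker.terminate(); // kill it whether it was looping or almost done
      reject(new Error(`user algorithm exceeded ${TIMEOUT_MS} ms`));
    }, TIMEOUT_MS);
    worker.once("message", (ranked: object[]) => {
      clearTimeout(timer);
      resolve(ranked);
    });
    worker.once("error", (err) => {
      clearTimeout(timer);
      reject(err);
    });
  });
}
```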
The safety risk, on the other hand, has been solved even in the technical sense. Even if you're going Turing-complete, you don't have to let your users specify a program in C or raw binary, or run it unsandboxed in the same address space or with the same kernel privileges. Your browser has probably run a hundred little arbitrary Javascript programs so far today, and the worst they could have done is churn your CPU until you closed a tab, because anything more serious is sufficiently restricted. Crooks sending you links to rnicrosoft.com still depend on you typing in your credentials or downloading and running something heinous afterward, even though the second you click a link like that they get to send your computer arbitrary programs that it will immediately run.
Firefox released a patch to fix a sandbox escape* just a few days ago. Properly sandboxing a program has not been solved; it is an active problem that consumes a lot of developer time and current solutions likely still have many holes to be found.
Crooks mostly rely on users downloading and running scripts because it's easy and it works. Writing exploits against browsers isn't worth the effort when you can socially engineer people and get the same results.
Most sandboxing is also bad for performance. Javascript on a random webpage generally doesn't need to perform well but a recommendation algorithm will.
Any cut-off aggressive enough to meaningfully restrict denial-of-service attacks would make algorithm-writing functionally impossible for the majority of users and probably also prevent most of the possible algorithms people would like to write.
* I can't see the bug report but based on the reported severity this appears to be a between-page sandbox escape rather than fully leaving the browser.
I agree. And surely legislation that forces tech platforms to give users control over (or at least MUCH more transparency about) the algorithm can't be far away.
It's inconceivable that the tech cos will do this themselves. They would be slaughtering the greatest golden goose that ever was. Their hand must be forced.
Yup, that or nationalize them outright, or provide a public platform.
Ultimately, I think it comes down to not allowing social media to have their cake and eat it too. It's perfectly valid for them to allow only what they want on their platforms. But then they cannot claim that they are unable to block content they could be liable for.
If your algorithm is making opinionated editorial decisions, you are fully responsible as a publisher for what it shows. If it's making only technical editorial decisions, or none at all, then you can enjoy the protections that currently exist. I think that's the only way to thread the needle between freedom of association and freedom of speech.
I deactivated twitter/x a few months ago. Got tempted back in last week and went through the ordeal of solving 10 visual puzzles (weirdly cryptic and very difficult) to prove I was human. Within three minutes, and WITHOUT A SINGLE POINT OF PREFERENCE FROM ME (no likes, no comments, no follows…) I was being shown loathsome racist material. Somebody somewhere wants this to happen, and set it up precisely this way. The base algorithm of X is racist, bigoted, hateful, angry and divisive, and it’s radicalizing people’s opinions every second of every day. And I say this as someone who thought Jack’s original app, from ~2010 and right up to the way it introduced dissenting voices during COVID-19, provided an indispensable service to humanity. There was obvious censorship and bias that Elon set out to fix. But he’s made it 100 times worse in the opposite direction.
I'm not seeing how this particular belief would increase your credibility on the issue.
Bad phrasing. Basically:
This has been ongoing for far longer than that. Tristan Harris's TED talk, outlining how he, as a Google employee, explicitly aimed to manipulate you to maximize your "Time On Site", came out in 2016, and his original internal talk on the subject dates back to 2013.
A few additional data points:
Zvi Mowshowitz published his delenda est post on the Facebook algorithm in 2017. So the situation was already bad enough by then to provoke a generally mild-mannered New York Jewish quant into making a public delenda est post.
Fair points. My point is it's got a lot LOT worse since ~2021/22. This was absolutely influenced by TikTok's algorithm, under which it didn't matter who you followed: it gave you whatever your behaviour told it you wanted. The others followed suit, and the consequences are / will be disastrous.
I largely agree with you. I think the difference is probably (and we may never know for sure) what they are optimizing for now, more than how they are going about it.
I think 2015/2016 social media companies were really optimizing for maximizing the attention as their one true goal. Whereas by the time we were deep in the covid years, they were seeking to metacognitively reflect their understanding of you back to you, while continuing to optimize for attention.