I'm sure you could find a listener who's interested in hearing the nuclear codes, or, as another user put it more saliently, the coordinates of a military unit at the frontline that you're entrusted with. The listener's right and interest to hear things is not exactly under question.
I mean, I still support contractual rights to restrict the spread of information, such as nondisclosure agreements and even certain forms of copyright.
The tension between the "INFORMATION WANTS TO BE FREE!" philosophy and "some information can cause harm" is ever-present.
I approach you in a dark alley from behind and tell you to empty your pockets with my hand half a second away from retrieving my open carry gun and shooting at you. That should be legal, shouldn't it? All I did was speak to you. If you felt threatened, that's entirely on you. And besides, don't you have the right to hear what I have to say?
Ahh, this takes me back to arguing this stuff on 4chan and reddit back in the day.
What is your specific intent in uttering these words? Is it to give me some useful information that I desire to hear or that I requested to hear? If not, then surely I am entitled to take that into account when I judge how to respond to your speech.
"It should be legal for you to utter those words" and "it should also be legal for me to shoot you on the spot if you utter those words" are not in fact in tension.
In this case, the restriction on speech is more practical than anything. You wouldn't utter those words for fear of being shot. No third party needs to 'interfere.'
"unrestricted free speech should be legal" is no different from "you can't defend yourself from my swing until and unless it connects".
I would characterize it more as "It should be legal for any given person to speak to any given willing audience without interference." Trebly so on the internet, where generally an audience seeks out a speaker and the speech doesn't interfere with anyone who hasn't actively sought it out.
The right to free speech has as a necessary corollary the right to hear. As in, a speaker and a listener/the audience both have an interest in the right to free speech, and both are 'infringed' when a speaker is censored.
That's less the case when someone starts throwing punches; there is no consent, implied or otherwise, to receive a punch, unlike the consent to hear a given speaker. Unless it's in an agreed-upon boxing match, of course.
We've never been in a situation where elections are "mostly" decided by corporate censorship, nor will we ever be. However, it could push things slightly at the margins. But this is really no different from what the media was always capable of doing.
Right in the lead-up to the 2020 election, the New York Post's Twitter account got suspended for publishing a story about the contents of Hunter Biden's laptop. The rationale at the time was 'misinformation,' but pretty much every aspect of the story and the contents of the laptop have since been verified as true and accurate.
The story was clearly newsworthy. And yet it was censored, at what we know now was likely the request of state actors.
Was the media always capable of crushing the spread of a story that a different media outlet published?
Could the marginal effects of this story spreading have impacted the outcome of the 2020 election?
A few disparate thoughts.
I suspect that the arrest of the Telegram guy in France was a trial balloon/shot across the bow to show that Western countries can use a, for lack of a better term, "Chinese-style" authority to physically detain extremely wealthy oligarchs and celebrities to try to rein in their open resistance to government edicts. Compare the "Russian-style" authority, where they just chuck you out a window or crash your plane.
My model of how centralized governments think holds that NO such government will tolerate a serious power base outside of its own control, which includes any 'platform' or organization that, if activated, could attempt to seize political control of said government from the current holders (organizing to vote for particular candidates counts too!). The instant such an alternative power base seems to arise, the existing government will seek to either seize it, destroy it, or disrupt it.
They will do so with even more urgency in times of war or serious unrest, and we're sliding into such times.
It was all fun and games when tech companies were helping produce more wealth and providing said government with neat tools to e.g. surveil the public and detect crimes, or analyze economic data, or better weapons to fight their enemies. But the balance of power in the relationship is becoming untenable... from the government's point of view.
I believe the U.S. and European governments strongly feel that the tech industry represents such a power base, or at least that it provides the platforms that dissidents and political opponents can use to organize their supporters into effective movements that can then undermine existing power bases. And said governments can pay lip service to classical liberal ideals while plotting to disrupt those opponents and bring those platforms to heel all the same. At the end of the day, this will mean threatening the people in charge of and operating those platforms with serious consequences. Which is hard to do if those people are extremely wealthy and generally popular, and your country has laws that inhibit the government from arresting citizens and taking their stuff on a whim.
The one thing I know for certain is that they will NOT simply stand by and allow power to accrue outside their hands until it actually destabilizes their authority.
Finally, I have literally never felt quite this much shivering terror at the realization that the group who believes in something like unrestricted free speech even and ESPECIALLY against the efforts of government to 'protect' us... is a tiny school of fish in a sea of indifference, patrolled by many censorious sharks.
I was aware that globally the concept or ideal of free speech is very much a minority preference, but I didn't have much concern about what a Cameroonian or Indonesian thought was okay to say or not say. But even in the West, even in the United States itself, it feels like maybe 20% of the population would honestly vote for a provision protecting free speech if one didn't already exist.
The left was never in favor of it, and now they've gained enough institutional control to silence enemies on various platforms; the liberals have abandoned it in the name of stopping or getting Trump; the moderates just want to grill; and the conservatives/MAGA are generally shaky allies on this particular point.
With all the tools for censorship now turnkey-ready to implement across the board, it starts to feel like it's just a question of who will be in charge when the governments of the world lock down speech entirely.
Great point.
It's also the fact that those financial fraudsters benefit immensely from their crimes. We can measure the benefit they got for causing harm to others, too.
As far as I know, most academic fraudsters, ironically, don't become fabulously wealthy, but may gain a lot of status and acclaim.
So that both makes it even less sensible that they'd commit fraud, and makes it harder to articulate the nature of the harm. As you say, "that man lied, and as a result got dozens of speaking slots at conferences, became the envy of graduate students in his field of study, and was able to afford an upper-middle-class lifestyle" doesn't seem as legible as "that man lied and made $80 million."
Enough of a problem that the State of Florida initiated a new pilot program to attempt to make it easier to detect early, and possibly prevent.
Although it adds friction to legitimate real estate transactions too, which is frustrating.
No, you've added VERY useful context!
And this is what I mean. If no one person's neck is on the line for a screwup, then it's not surprising they'll just passively approve whatever the underlings scrounge up, and not question the incentives of said underlings.
It really annoys me, because I work in a small office with assistants who handle a lot of the work, and I am the one who signs off on everything at the end of the day, so I am the one eating crow/falling on my sword if there is a serious screwup.
I just want to believe that other people take their jobs and the accuracy of their output half as seriously as I do!
I've noticed this so much in nonfiction books I've read lately, and a few fiction ones too!
100-120 pages of really amazing insights that are explained and applied in intuitive ways. Then another 100 or so pages of banal platitudes that vaguely follow from the rest. A big one is applying whatever insights they've made to social issue du jour. "Here's how my groundbreaking research into quantum hyperlinking across nonlocal space can help address... climate change." (I made that up, to be clear)
Big ideas don't necessarily need a novel-length treatment to be explained in full, even addressing all the possible implications. But selling books is one of the few proven ways to make a buck from specialized academic research (until you have a saleable product, I guess), so that's the mold they're trying to fill.
I have an internal feeling of justice that calls for extremely severe penalties for these people. I guess I'm in the minority, since it doesn't happen.
Samesies.
I say slash and burn: take their money away, give them humiliating tattoos, and make them work at a McDonald's somewhere far away from all their friends, or worse. Normal criminals couldn't do that much harm in a lifetime.
I'm not saying to impose the death penalty on the guy.
What always impresses me is how the system seems to have evolved into such a highly polished and lubricated machine that you can sling blame all you like, it won't stick to any individual component.
Almost everyone in the chain of decisions that led to the outcome can just say, "Well, it's not MY fault, I was just relying on [other link in chain], which is what the best practices say!"
Maybe even the guy who produced the fraudulent research can say "I was relying on inexperienced lab assistants/undergraduates who produced faulty data!" I don't know.
But there has to be some method of accountability. Like you say:
However, when it comes to mechanical engineering, we've learned to build bridges that stay up.
The (apocryphal) story about Roman architects being required to sleep under the bridges or arches they built is on point here. Bridges stay up (except when they don't) because there's a close enough loop between the decisionmaker and the consequences for failure. It maybe doesn't have to be quite as tight as "you must be directly harmed if your decisions harm others," like in the bridge story, but it has to make them afraid, on some level, of being punished if they screw up.
I'm not entirely sure how to bring the consequences for screwing with academic research into medical treatments into a tight loop. One might hope it would be enough to say "If you ever end up in a hospital needing treatment, YOUR RESEARCH is going to be used to treat you." And thus they should have some concern about getting it right. But that's a very distant, uncertain threat. What would be a (proportional) threat you could make to bring down punishment on them the very instant the misconduct is uncovered? And how can you trust the entity that claims to have uncovered misconduct?
Prediction Markets offer one way to put more skin in the game, but it doesn't quite satisfy me that it would be a significant deterrent for others attempting fraudulent research.
And if we set up some organization whose specific objective was punishing those whose academic fraud causes harm to others, that institution would simply become another target for capture by vested interests. I think it has to be something a bit more 'organic.'
If the model you're using allows you to upload information, it HAS helped to simply give it access to the corpus of laws that you're working with.
Digging deeper into regulatory law, beyond just the high-level statutes, where the rules and rulings may not have been part of the training data, does seem like it would require heavier specialization.
Do you know how many kids who weren't able to afford tutoring are using it for tutoring?
It's a very strong claim that "AI is currently quite incapable of educating children."
Ironically, I don't feel that accomplished, because I consistently hang out in places where advanced degrees and incredibly intelligent professionals are almost the baseline.
I have my moments, though.
You'd be surprised (or not) how badly a layperson can fuck things up even using a basic form with instructions in plain English.
As usual, the argument assumes no improvement in the models or any well-designed and marketed product to gain acceptance.
The unanswered question is whether kids who can't afford $300/hr tutors are seeing benefits from LLM tutors.
I will use it for tasks where the scope of work is easily defined.
I've found it makes researching unfamiliar areas of law about 5x faster. It really seems to "understand" statutory interpretation and I've yet to catch it making a material mistake. It does a good job comparing and explaining two different statutes that are on point, especially when provided with current caselaw.
I have also used it to fine-tune an appellate brief. It is at least as helpful as having a particularly bright law student with access to massive knowledge reserves to bounce ideas off of.
A year or so back I suggested that any person who is currently in Law School should drop out. I think it is still good advice, although I understand the counter-arguments. It will not be long before these things are smarter and faster than any first-year associate.
If you can dictate notes for the contract in about 5 minutes, you'll have a first draft from ChatGPT much more quickly than you could create one yourself, even using existing forms.
Teach it how to use your forms and it'd be even better.
I've been hearing rumblings that it has made an impact in the legal profession; big law firms are probably supplementing their paralegal staff, at least, and likely their attorneys.
We hear about the obvious ones that get caught fabricating caselaw wholesale, but I'd bet it's effective enough that most of the time no outsider notices.
Wasn't this explicit when OpenAI announced Sora but gave no public access?
They surely have some impressive stuff that isn't even revealed yet.
To a large extent I would expect them to keep ChatGPT as the flagship product and any upgrades would just be added on as features rather than "separate" models.
I am wondering if they've lost their first-mover advantage, since any other companies that were sniffing around the same research space surely know where to direct their efforts now, even if they haven't stolen IP directly.
My bet is on OpenAI having a couple sizeable rabbits to pull out of their hat, but they could be true witchcraft or mere illusions.
Here ya go:
https://www.axios.com/2024/02/16/gen-z-gender-gap-political-left-women
Limit it to 18-29 year olds (the ones who will be defining politics over the next couple decades) and there is in fact a sizeable gap that emerged in the last decade.
Here's the data source:
https://news.gallup.com/poll/609914/women-become-liberal-men-mostly-stable.aspx
It gets worse if you separate out unmarried women from the Pew study you cite.
Women who have never been married are three times as likely to associate with the Democratic Party as with the Republican Party (72% vs. 24%).
WSJ did an article on it.
https://www.wsj.com/politics/policy/men-women-vote-republican-democrat-election-7f5f726c
We discussed the whole issue about a month ago.
https://www.themotte.org/post/1100/culture-war-roundup-for-the-week/235167?context=8#context
THERE'S your gender polarization.
Notice that fewer people are getting married.
Notice that the median age of first marriage is rising.
There are a couple of different ways to interpret all this, but I doubt it supports your argument that things aren't getting more polarized, and it surely suggests that this will get worse over the next 10 years.
There's probably a lot to fight over what exactly makes a weapon 'fully automatic' in a mechanical sense, but yes, as far as I know rate of fire has never been a subject of regulation PRIOR to the NFA.
The excitement is warranted because this may be the first time that the NFA and the regulation of machine guns have been found unconstitutional at all, and thus the ball has shifted to the government's court to dig into the archives and research the historical basis for restrictions of this sort.
I'm not entirely sure what they'll find, but it seems like a heavy lift at first glance.
Whether the government "barely tried" due to laziness or they did try and realized they had a losing argument and thus simply threw in the towel might be up for debate.
"Evolutionary psychology says women are irrational and neurotic and conformist and maybe not even actually sentient* so they shouldn't have rights" as being a lot more sophisticated than "Cave men stronger than cave women, therefore they shouldn't have rights."
It's certainly more valid when examined critically.
Especially when the best science and studies we can muster on the topic indicate that yes, women are more neurotic, much more conformist, and have less overall awareness of or acceptance of opinions other than their own. This isn't a claim about any specific women, but it's the sort of thing we'd look at when determining what sort of factors make the sexes different.
So if the reasons for that AREN'T evolutionary to some large degree, from whence do they come?
The argument also has to account for the fact that men have been the primary political and military leaders for literal millennia, and almost NO societies anywhere in history were governed by females.
And how that might impact our culture and social norms.
Afghanistan has been Islamist for a long time, so the reversion under Taliban rule is not much of a change.
The U.S. didn't recognize the right of women to vote for the majority of its history. Most of the pro-female policy changes in this country were enacted post-WWII, and mostly since the '60s.
Surely it would only take like 1 generation at most to revert back, if there were an organized movement for it?
Perhaps 'war' overstates it, but they're certainly a replacement for intra-tribal squabbles where two members of the tribe and their retainers battle for the leadership position.
But the 4chan argument is just another iteration of the very unsophisticated premise we see repeated here all the time: "Women are weaker than men, therefore men should control women."
Uh, the premise I usually see is pointing out the EVOLUTIONARY reasons women are weaker than men, and how that has massive implications about things like the ability to engage in abstract reasoning, to commit to true beliefs vs socially popular beliefs, and to make self-sacrificing decisions rather than those that provide short term personal benefits.
Which is more sophisticated than the idea that because Cave MEN could overpower Cave WOMEN that's why men should be in control.
Where are you seeing that argument promulgated around here?
The increasing hostility between the sexes is certainly a problem, but to believe that the solution is for women to accept a subordinate role without political autonomy requires believing either (a) that women could be persuaded to accept this or (b) that men could be persuaded to revert to treating women as property.
My friend, Afghanistan is literally doing option (b) as we speak. All it took was the removal of the U.S. military to reassert the general status quo that women can't do much to unseat.
To think that it requires an apocalypse is a bit hyperbolic.
Ironically, you've presented a scenario that I can claim expertise in, since one of my jobs is in fact self-defense instructor. This precise scenario is one I have thought about and trained on literally hundreds of times.
The calculation I have to make is based on whether I think your gun is real, whether it is loaded, whether you have the wherewithal to pull the trigger, and, ultimately, if I'm faster than you. Which I probably am because, as stated above, I train for this.
And in the vast majority of hypothetical cases I would... hand over my stuff without a protest and let you go on your way. Simply the easiest resolution once you've pressed the matter. But you have acted in such a way that I will consider ALL options on the table. And my calculation will adjust based on whether I have loved ones with me and whether I have reason to believe you would kill anyway.
Simply put, YOU have to make a calculation too, and if your calculation has already included the possibility of being shot yourself and you STILL take this action, I can't speak well of your judgment.
And once YOU have made a statement that shows you are willing to kill me (or someone else) to obtain mere possessions, by my perfectly, coldly rational logic you have forfeited any argument for why you shouldn't be killed in return, so the only question is whether I think that is necessary to protect myself.
Similarly, if you claim that you want to suppress the speech of others, I would HAPPILY support restricting your speech, because you can't really complain about being treated the way you've already agreed it's fair to treat others.
Symmetry is nice, like that.
It sure would. But you've already stated that it's on Twitter, so the means to do so would certainly not be present unless you go to the effort of locating and hunting that person down, which seems like a LOT OF FUCKING EFFORT when you could just walk away from the screen. Or you could just use Twitter's own tools to mute the words you don't want to hear/read and block the people you don't want to interact with.
So there's a certain level of implied consent if you consider a particular set of words offensive enough to kill over... and yet you don't avail yourself of readily available tools that will prevent you from seeing those words at all if you don't wish.