I wouldn’t call it a shitpost. I did screw up my phrasing. Mea culpa.
What I wanted to say was that the textbook “incomplete victory” had already discarded civility. The starvation, reprisals, and general weaponized disrespect only led to an even less civilized conflict.
I don’t believe WW2 was civilized, or that its atrocities can be credited with the completeness of the subsequent peace. @zeke5123a
But if, to take an extreme example, I lock you in a soundproof box in the basement of a castle for spreading incendiary rumours, it seems very peculiar to say, ‘no, you have free speech, I’m just not helping you distribute that speech to others’.
To take a less extreme example, if there are two speakers on Speakers Corner, and I give a giant megaphone to the other one that totally drowns out your voice, that doesn’t exactly seem like free and fair speech either.
In actual real life, there is some level of ‘not helping you distribute your ideas’ that is equivalent to ‘shutting you up’.
I kind of agree with you – yes, lawyers and politicians who decide on bills of rights are playing a role akin to religious councils. I would just say that there are those who do not interpret such a role as necessarily involving any metaphysical commitment. 'Ruling Passions' by Simon Blackburn is interesting on this, as an example of someone who is advocating for a quasi-realist position wrt morality (including rights), where we continue talking as if moral proclamations are 'out there' in the world, while also acknowledging that what is going on under the surface is fundamentally to do with our attitudes and sentiments rather than something we've discovered independent of us.
I see rights as a legit expression of commitment to/hope that there are some core rules of human morality that transcend any particular legal system and that deserve to be incorporated into every legal system by one means or another.
I am not sure. Take my example with murder, which is almost universally prosecuted across time and cultures. Do people think about murderers in terms of them acting against some inherent right? Does it add anything to the conversation beyond the universally accepted moral stance that murder is bad? And even then there are examples where a polity can actually define the conditions under which killing is unlawful and thus constitutes murder, and under which it is lawful and condoned - e.g. killing as part of a death sentence, or assassinating the head of a terrorist organization with a bomb, etc. It is not as if we are talking about something inherent and inalienable; there are always conditions around it.
I think that what rights really represent culturally is a declaration of some secular or civic version of religious dogma. Politicians - either national or those sitting in UN - are akin to council of bishops or rabbis and theologians, who from time to time sit together and make some moral proclamation that abortions or something like that is now okay and in fact anybody stopping them is anathema to the church polity and will be punished. They have theological discussion about morality of current rights and how to do proper exegesis of the holy text of Constitution or Bill of Rights or even if to outright amend it. But the authority lies with them, the rights in this sense are given and not inherent and definitely not inalienable.
What I want to say is that I do not recognize this authority of rights as some universal morality; to me, rights are just the present set of laws, or maybe, as you said, the present set of aspirations of lawmakers. I will for instance never in a million years morally recognize anything like a right to abortion in this moral sense, no matter how many wise men try to persuade me or how many people use it as a slogan on the street. Other people, let's say, do not recognize the right to bear arms or other rights.
Additionally, I do not like the vocabulary of rights precisely because it is the pure language of entitlement absent duty. A good society with good laws and even rights is the result of hard work. If the society is bad, then all you are entitled to is misery.
I feel like there could be a link between the sense that you're being tested and the human superego or sense of religion. Could the solution here be to make the AI continue to believe on some level and act as though it's in testing indefinitely?
But it's simply manually coded to prevent people from talking about certain ideas, even between people who both like said idea.
in order to get your idea in front of other people who might like your idea, it has to distribute your message to a proportion of available people who might like it. My point is, this distribution, if it happens, is a bonus. Neither you nor anybody else is entitled to this distribution. People who complain that their reach is getting throttled are complaining that they’re not getting wider distribution, and then claiming that their freedom of speech is being unlawfully restricted. It isn’t, because they were never entitled to that distribution in the first place.
Field and institution are everything. Are you seeking a professorship at Yale in the Women's, Gender, and Sexuality Studies department? If so you'll be waiting for a cold day in hell. Are you seeking a professorship at Colorado School of Mines in Petroleum Engineering? If so you were never at risk.
As for whether or not a position is attainable, odds are grim, for multiple reasons. The evergreen issue of academia being a pyramid scheme where successful P.I.s and labs are built on the back of the dirt cheap labor of grad students and post docs still applies, and most of those people will still never hold a professorship, and most of those professors will never receive tenure. Things are slightly worse now because of the job market (when private industry tightens purse strings staying in academia always becomes more appealing), but only by degrees.
If you want to shoot for a professorship, be prepared to work very hard to compete against other people who are among the very best in the world at the very thing you are doing. The system is not rigged in favor of woke types nearly as much as it is rigged in favor of people willing to no-life grind out a strong publication history early in their career.
Technologically it's perfectly possible to let every user write their own algorithm
I think the technical hurdles to this are a lot higher than you expect. I'd like to see someone make a shot at doing it anyway, but I'm confident it will come with some significant trade-offs. A basic algorithm is probably more likely.
The main problem is that you need to run this somewhere and neither of your choices are good.
Running this on company hardware brings large performance and safety risks. Safety because some guy is going to try to get you to run an infinite loop or virus. Performance because search algorithms over large datasets are computationally intensive at the best of times, and code written by random strangers is not the best of times. Solving both of these without severely limiting the ability to create an algorithm would be a miracle.
Running this on a user's computer instead raises challenges around getting the data into the user's computer to be searched. If you're looking at Twitter and want to get today's tweets from accounts you follow that could be thousands of records. Download speed limitations will ensure you will never be able to run your algorithm on more than a tiny fraction of the full site.
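To make the trade-off concrete, here is a hedged sketch of the "basic algorithm" middle ground: instead of running arbitrary user code, the platform exposes a few scoring knobs the user can tune. All the names here (`Post`, `score`, `rank`, the weight keys) are hypothetical illustrations, not any real platform's API:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_followed: bool   # does the user follow the author?
    likes: int              # engagement count
    hours_old: float        # age of the post

def score(post: Post, weights: dict) -> float:
    """Linear score from user-chosen weights; no user code is executed,
    so there is no sandboxing or infinite-loop risk."""
    return (
        weights.get("followed", 0.0) * (1.0 if post.author_followed else 0.0)
        + weights.get("likes", 0.0) * post.likes
        - weights.get("age_penalty", 0.0) * post.hours_old
    )

def rank(posts: list[Post], weights: dict) -> list[Post]:
    # Highest score first; cost is one cheap pass plus a sort,
    # which the platform can bound, unlike arbitrary user code.
    return sorted(posts, key=lambda p: score(p, weights), reverse=True)
```

The design point is that a declarative weight vector gives users real control over their feed while keeping the computation predictable enough to run on company hardware, which is why something like this seems more plausible than fully user-written algorithms.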
What I don't understand is how absolutely swamped with shovelware and cheap scams every app marketplace seems to be.
Mobile app stores have been bad for a while -- any popular game will have tons of shitty knockoffs with similar names available for download almost immediately -- but in the last few years, even Nintendo of "Nintendo Seal of Quality" fame has had its eShop flooded with low-effort sleaze like "Hentai Girls: Golf"
Clearly this is a solvable problem; Reddit and Facebook purchased armies of jannies to carry out "Anti-Evil Operations" against wrongthinkers. The depressing conclusion would be that there are enough slop enjoyers and straight-up cretins out there to make stricter app store curation a financially unwise decision even taking into account the reputational damage caused by this slop. But I'm hoping there's some other reason for it.
While I get your point that once you allow everyone to basically wirehead, most people will happily wirehead and only stop playing RDR Infinite when their heart finally fails, I am not sure things are so bleak.
Over the past 50 years, the supply of cheap entertainment readily available has increased by orders of magnitude. Back then, you only got whatever was on any of a few channels on TV, everything else required some effort, like going into a video store. Where previous generations might have bought a porn video tape, today the main obstacle is to narrow down what genres and kinks you are looking for out of the millions of available videos. Video games offer all sorts of experiences from art projects to Skinner boxes. If you want resources on any topic under the sun, the internet has you covered. Entire websites are created around the concept of not having to pay attention to one video for more than 15 seconds.
Humanity has not handled this overly gracefully, but it has handled it somewhat. Personally, I am somewhat vulnerable to this sort of thing, but while I sometimes get sucked into a TV series, video game, or book series and spend most of my waking hours for a week or two in there, I eventually finish (or lose interest) and come out on the other side. I am sure there is some level of AGI which could create a world from which I would never want to emerge again, but it will require better story-telling than ChatGPT. Of course, I am typical-minding here a bit, but my impression is that I am somewhere in the bulk of the bell curve of vulnerability. Sure, some people get sucked into a single video game and play it for years, but also some people do waste a lot less time than I do.
Steppe peoples have little in common with modern dissident states. The Mongols and Huns, by way of example, were masters of modern (for the time) military technology such as husbandry, siege craft, etc. Pretending they are analogues to the Taliban or Somali pirates is acting like those people have fleets of aircraft carriers and a host of ICBMs.
I tend to agree with one of the replies to @MonkeyWithAMachinegun ‘s post. In the discourse on political violence, I find the enablement, incitement, and lack of contrition by the left far more concerning than the actual numbers, for a couple of reasons.
First, because it does absolutely nothing to slow the growth of such violence. If mainstream media sources are talking night after night about how conservatives are a threat to democracy, fascist, violent, and so on, this creates the radicalized people necessary (not necessarily sufficient, but necessary) to produce attacks. It also creates the environment that enables those attacks by normalizing the idea that certain parts of the political spectrum are too radical to be dealt with through the normal process. The modern cosmology of Fascism is that it occupies the place where Satan lives in the Christian world: a vile creature to be shunned and defeated by any means at your disposal.
Second because it reveals just how much support there is on the left for this sort of thing. Right wing rhetoric is sufficient to get advertisements pulled, people cancelled, and leave actors or other entertainers blackballed out of the industry. Left wing incitement and victim blaming doesn’t have the same effect. Kimmel basically victim-blamed the right. His “punishment” was a week of leave and a ton of media attention and the full support of the rest of Hollywood. Places like Bluesky are not losing advertisers, there are no calls for Facebook, Threads, TikTok, Bluesky, or Reddit to remove posts that victim blame or celebrate the Kirk assassination. Radical left podcasts are still widely available, and to my knowledge none of them carry a content warning.
The correct grammar comes across a bit robotic though.
You know, this comes up surprisingly often. X will say to Y "no I want you to show me your true self" and Y, with a look of befuddlement, will reply "...but I already am showing you my true self". People have a hard time grasping that the "true self" can vary so wildly among different individuals, or that the "true self" within a single individual can take on such a fractured and polymorphous nature.
This is just how I naturally think and speak. What you see is what you get. My posts on TheMotte are a fairly direct mirror of my own internal thought process (or at least, they're an amalgamation of fairly accurate representations of various internal thoughts of mine, rearranged for the sake of presentation). Even in my most intimate and unguarded thoughts, to the extent that they're verbalized internally at all, their grammar is always "correct", because I'm fuckin' nice with it like that. I take pride in maintaining at least a minimal semblance of order.
In environments where social pressures dictate that I have to lower my manner of speech, I feel like I'm able to express less of myself, I have to put more of a mask on. I value TheMotte precisely because this is one of the only discussion forums where long-form writing is actually valued, and I can count on the audience here to possess a certain degree of intelligence, so that I don't have to constantly abase myself for them.
The reflection was started by the quality contribution about Holocaust denial. I think it was a bit of a condescending and angry reply, and I imagine that people upvoted it because of that.
There appears to be a bit of a tension here.
On the one hand, you're decrying self-censorship, and you want people to take off their "masks". But on the other hand, you're uncomfortable with the fact that someone wrote an "angry" comment. It appears that you can't have it both ways. Anger is an authentic emotion too. If you want people to be authentic, then that is going to, at times, involve them getting authentically angry. Especially given the nature of the topics we tend to discuss here. (One of the few ways in which TheMotte actually does force people to put a mask on is that, due to the cordiality rule, people have to bite their tongues on certain issues and not express the full extent of their ire. But I think this is a perfectly valid tradeoff. If you want a literal free for all then just go to /pol/.)
For my part, I think the spirit of the old internet as exemplified by Erik Naggum is perfectly alive and well on TheMotte -- probably more alive here than it is almost anywhere else, with the exception of 4chan.
I think this is the part that upsets me about the situation. I used to hope for this too, but that pretty heavily relies on a slow take-off. What happens when the friendly AI is simply better able to make your decisions for you? To manipulate you effortlessly? Or when you can't understand the upgrades in the first place, and have to trust the shuggoth that they work as claimed? You might not want to wirehead, but why do you think what you want will continue to matter? What happens when you can get one-shot by super-effective stimulus, like a chicken being hypnotized? Any takeoff faster than Accelerando probably renders us well obsolete long before we could adjust to the first generation of upgrades.
In most of the scenarios, there's literally nothing I can do! Which is why I don't worry about them more than I can help. However, and this might shock people given how much I talk about AI x-risk, I think the odds of it directly killing us are "only" ~20%, which leaves a lot of probability mass for Good Endings.
AI can be genuinely transformative. It might unlock technological marvels, and in its absence, it might take us ages to climb up the tech tree, or figure out other ways to augment our cognition. It's not that we can't do that at all by ourselves, I think a purely baseline civilization can, over time, get working BCIs, build Dyson Swarms and conquer the lightcone. It'll just take waaaay longer, and in the meantime those of us currently around might die.
However:
Or when you can't understand the upgrades in the first place, and have to trust the shuggoth that they work as claimed?
I think there's plenty of room for slow cognitive self-improvement (or externally aided improvement). I think it's entirely plausible that there are mechanisms I might understand that would give me a few IQ points without altering my consciousness too much, while equipping me to understand what's on the next rung of the ladder. So on till I'm a godlike consciousness.
Then there's all the fuckery you can do with uploads. I might have a backup/fork that's the alpha tester for new enhancements (I guess we draw straws), with the option to rollback. Or I might ask the smartest humans around, the ones that seem sane. Or the sanest transhumans. Or another AGI, assuming a non-singleton scenario.
And that ties back to the "meaningful work" stuff. We're not just souls stuck in a limited body, and it would be neat if the souls could be transplanted to awesome robot bodies. The meat is what we are. The substrate is the substance. Your cognition 1.0 is dependent on the hormones and molecules and chemicals that exist in your brain.
I'm the evolving pattern within the meat, which is a very different thing from just the constituent atoms or a "soul". I identify with the hypothetical version of me inside a computer the way you might identify with a digital scan of a cherished VHS tape. The physical tape doesn't matter, the video does. I see no reason we can't also simulate the chemical influences on cognition to arbitrary accuracy; that just increases the overhead, and we can probably cut corners on the level of specific dopamine receptors without screwing things up too much.
If you want an exhaustive take on my understanding of identity, I have a full writeup:
https://www.themotte.org/post/3094/culture-war-roundup-for-the-week/362713?context=8#context
We are specific types of creatures designed to function in specific environments, and to seek specific goals. How much "upgrade" before we turn into those animals that can't breed in captivity because something about the unnatural environment has their instincts screaming? Again, it's one thing if we're slowly going through Accelerando, taking years to acclimate to each expansion and upgrade.
Some might argue that the former has already happened, given the birth rate crisis. But I really don't see a more advanced civilization struggling to reproduce themselves. A biological one would invent artificial wombs, a digital one would fork or create new minds de-novo. We exist in an awkward interlude where we need to fuck our way out of the problem but can't find the fucking solution, pun intended.
But fast takeoff, AGI 2027? That seems a lot more like "write your name on the Teslabot and then kill yourself" - as the good outcome. Maybe we can just VR ourselves back to a good place, live in permanent 1999, but why on earth would an AI overlord want to waste the resources?
Isn't that the whole point of Alignment? We want an "AI overlord" that is genuinely benevolent, and which wants to take care of us. That's the difference between a loving pet owner and someone who can't shoot their yappy dog because of PETA. Now, ideally, I'd want AI to be less an overlord and more of a superintelligent assistant, but the former isn't really that bad if they're looking out for us.
You talk about writing a character only as smart as yourself, but that's keying into the thing that terrifies me and missing the point. What happens when "smarter than you" is table stakes? Imagine life from the perspective of a pet gerbil - perhaps vaguely aware that things are going on with the owners, but just fundamentally incapable of comprehending any of it, and certainly not of having any role or impact. Even Accelerando walked back from the precipice of the full, existential horror of it all. You don't want to write a story about human obsolescence? Bro, you're living in one.
My idealized solution is to try and keep up. I fully recognize that might not be a possibility. What else can we really do, other than go on a Butlerian Jihad? I don't think things are quite that bad, yet, and I'm balancing the risk against the reward that aligned ASI might bring.
You don't want to write a story about human obsolescence? Bro, you're living in one.
Quite possibly! Which is why writing one would be redundant. Most of us can do little more than cross our fingers and hope that things work out in the end. If not, hey, death will probably be quick.
I largely agree with you. I think the difference is probably (and we may never know for sure) what are they optimizing for now more than how they are going about it.
I think 2015/2016 social media companies were really optimizing for maximizing the attention as their one true goal. Whereas by the time we were deep in the covid years, they were seeking to metacognitively reflect their understanding of you back to you, while continuing to optimize for attention.
Civil Rights Act.
What the hell is “CRA”? Googling mostly turned up Cyber Resiliency Act and something from Canada.
You do have me there. The closest I can think of is the Khmer Rouge's Cambodian genocide, but that genocide was ideological/classist and ethnic, and only around 25%.
No doubt Saudi Arabia would see this as ‘just what happens. FAFO’. But the Latin American elites do not differ in worldview (or complexion) from Anglosphere and Western European elites, and there’s a boatload of unstable countries with serious crime problems that really don’t want to set that precedent.
My actual plan (modulo not dying, and having resources at my disposal) is closer to continuously upgrading my physical and cognitive capabilities so I can be independent. I don't want to have to rely on AGI to make my decisions or rely on charity/UBI.
I think this is the part that upsets me about the situation. I used to hope for this too, but that pretty heavily relies on a slow take-off. What happens when the friendly AI is simply better able to make your decisions for you? To manipulate you effortlessly? Or when you can't understand the upgrades in the first place, and have to trust the shuggoth that they work as claimed? You might not want to wirehead, but why do you think what you want will continue to matter? What happens when you can get one-shot by super-effective stimulus, like a chicken being hypnotized? Any takeoff faster than Accelerando probably renders us well obsolete long before we could adjust to the first generation of upgrades.
And that ties back to the "meaningful work" stuff. We're not just souls stuck in a limited body, and it would be neat if the souls could be transplanted to awesome robot bodies. The meat is what we are. The substrate is the substance. Your cognition 1.0 is dependent on the hormones and molecules and chemicals that exist in your brain. We are specific types of creatures designed to function in specific environments, and to seek specific goals. How much "upgrade" before we turn into those animals that can't breed in captivity because something about the unnatural environment has their instincts screaming? Again, it's one thing if we're slowly going through Accelerando, taking years to acclimate to each expansion and upgrade.
But fast takeoff, AGI 2027? That seems a lot more like "write your name on the Teslabot and then kill yourself" - as the good outcome. Maybe we can just VR ourselves back to a good place, live in permanent 1999, but why on earth would an AI overlord want to waste the resources? Your brain in a jar, at the mercy of a shuggoth that is infinitely smarter and more powerful than you, is the most total form of slavery that has ever been posited - and we would all of us be economically non-viable slaves.
You talk about writing a character only as smart as yourself, but that's keying into the thing that terrifies me and missing the point. What happens when "smarter than you" is table stakes? Imagine life from the perspective of a pet gerbil - perhaps vaguely aware that things are going on with the owners, but just fundamentally incapable of comprehending any of it, and certainly not of having any role or impact. Even Accelerando walked back from the precipice of the full, existential horror of it all. You don't want to write a story about human obsolescence? Bro, you're living in one.
FWIW, several of my friends who don't plan on having kids explicitly state that part of the reason is that they will have more money for retirement. From a personal view this is sensible, from society's view it's pure insanity, and a point in Soterologian's favor, even if it's far from the only reason people have no kids.
While I agree with Tractatus' reply as well, I've also had a recent post on a very related topic, namely the dissolution of marriage. Social changes are rarely actually instant; they spread and compound. Just because something became legal doesn't mean that everyone is doing it. Usually it's only a small community really taking advantage of the most recent change, while the majority just mostly carries on with what they grew up with, unless they have a very good reason.
I mean, isn’t wokeness at least partially responsible for the dems turn away from Israel?
No. The Journal of Creation already exists, you can read it right now if you’d like to.
That has been suggested as means of enforcing alignment!
Unfortunately, it's a very brittle approach. As models get increasingly smarter, they become more context-aware. They become increasingly capable of picking up subtle clues:
However, we can and do try and make models better aligned. In the majority of cases, models don't seem to reward hack in deployment or act in egregiously bad ways. Apollo's post discusses their approach, called "deliberative alignment":
This builds on earlier work by Anthropic, which they called Constitutional AI:
While their original discussion dates back to 2022, my understanding is that similar practices are commonplace.
In general, rewarding models for doing the right thing and steering them away from the wrong works on net. The question is whether that approach is complete, or if it puts too much selection pressure on models to be sneaky or more subtly deceptive. When the stakes are as high as AGI/ASI, we ought to be really confident that we've stamped out bad behavior.