That just means you’re paying more for less!
Customer: "$25 for a T-bone? That's outrageous! The butcher down the street sells his for $15!"
Butcher: "Then why didn't you buy it from him?"
Customer: "Because he's all sold out."
Butcher: "Well, when I'm all sold out, I sell mine for $7!"
And if they don’t overlap, how exactly are they countering?
Both sides call each other out when they're acting stupid, constraining their behavior to the non-stupid set. "Centrists"/"moderates"/etc. have shown they can't do that, as they lack motivation and even a spine.
Is there someone out there who would drop grievance studies if only they had more creationist papers to read?
As others pointed out: how about HBD papers?
There doesn't seem to be anything special about this form of getting money as opposed to any other form of getting money (except that it's bad and the left did it, so it's a chance to get in a partisan dig).
It is, potentially, a massive amount of money; it can, potentially, be specifically targeted and legally obligated to be used for a specific partisan activity; and it leaves a massive, ideologically unappealing penalty whose recipients will often act as a direct reminder, waving signs on the lawn of the bad actors in question.
Uhhh, so how does that help? Is that what was demonstrated to work in the past? Did prior Democratic administrations actually fix something about the banks or whoever they sued when they got money from them? If not, then ???
The Democratic administrations did, in fact, get the banks (and many tech companies) to bend over backwards out of fear of costly not!fines which would be sent to activist groups that hated them and would have the backing to bring other costly lawsuits. I wouldn't call it fixing, since I don't have the same goals as the Democrats did, but the banks drastically revamped their behaviors for more than a decade, even through the first Trump admin, both on who they allowed to have accounts and who they didn't.
There's reasons that might not work for the Republican Party -- judges tend to treat colleges better and Republicans worse, having an adequate supply of favorable news coverage seems like it was important, the Red Tribe does not have as many of the relevant dedicated administrative agents required, and there's just a second actor disadvantage. But it's not an Underpants Gnome proposal.
It doesn't reduce the ability of the federal government to act against universities, if that's what you're asking. But that ship has sailed; no one has any proposal with any chance of working to do that. If we want university administrations to be less likely to actively discriminate, and to not promote hilariously fraudulent partisan activities under the auspices and honors of 'research', I'd love an answer that wasn't the government's carrot or stick. But there's zero idea on how to do that.
Your own proposal of requiring administrators to affirm things isn't even coherent within that framework, but it's also a joke given that these orgs were long supposed to already be affirming it, and were more likely to get in trouble for fucking with an antivirus setting than for putting out Whites Need Not Apply signs.
This (bad, partisan) way of getting money may be doable and hard to undo, but it seems to not even have a passing familiarity with solving any of the actual problems we set out to solve.
I think it does. There are several extant lawsuits focusing on unlawful DoE discrimination against disfavoured minorities, university discrimination against disfavoured minorities, and widespread fraudy behaviors by colleges and their research components, and that's before the widespread tolerance or outright advocacy of political discrimination or violence. Many of the orgs running those lawsuits have a lot of focus on these problems; many of these lawsuits are focused on the very specific issues that impact the ability of academic institutions to perform in their claimed roles.
And those are just the lawsuits already in the pipe. A lot of the other stuff doesn't have lawsuits floating around simply because any lawyer worth their salt knows that, without a friendly federal admin, it'd be a vanity suit.
Again, I'm not convinced this will work! But again, it's also far from Underpants Gnomes.
Are there any among you who try to limit your screen time, or especially phone time? I've started using a timed blocker app to ensure that I spend my early mornings doing something other than scrolling X. I have been surprised at the extent to which I had acquired some kind of muscle memory that makes me pick my phone up every few minutes to check notifications; but I may have broken that now. Wondering if anyone else has similar or related experiences.
if you asked them to e.g. pay 5% higher taxes to Stop the Nuking of Somalians I doubt you would get much support.
Governments are so prone to lying, or at best motivated reasoning, about taxes that there's a certain base level of "if you tell us we need 5% taxes we won't believe you, no matter what it's for".
Seemed to work vis-à-vis Japan
Safety because some guy is going to try to get you to run an infinite loop
In the most general technical sense, sure, the Halting Problem is unsolvable: no matter how long you let some arbitrary algorithm run you can't always be sure of whether it's going to keep going forever or whether it's actually just about to finish.
In a slightly less general technical sense, here, you don't need some arbitrary algorithm just to do a better version of an ordered search, so you can restrict your users to a non-Turing-Complete language on which the Halting Problem is solvable.
Practically speaking, you just do what any automated test suite does: you define "infinite" to be 5 minutes, or 5 seconds, or however much you expect you can spare per run at most, and if the algorithm isn't done by then it gets killed anyway.
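To make that concrete, here's a minimal Python sketch of the "kill it anyway" approach; `run_with_budget` and `_worker` are illustrative names, not any real platform's API, and the user-supplied function just has to be an ordinary picklable callable.

```python
# Minimal sketch: define "infinite" as budget_seconds and kill anything slower.
import multiprocessing as mp

def _worker(queue, fn, args):
    queue.put(fn(*args))

def run_with_budget(fn, args=(), budget_seconds=5):
    """Run fn(*args) in a child process; return its result, or None on timeout."""
    queue = mp.Queue()
    proc = mp.Process(target=_worker, args=(queue, fn, args))
    proc.start()
    proc.join(budget_seconds)
    if proc.is_alive():
        # Still running past the budget: declare it "infinite" and kill it.
        proc.terminate()
        proc.join()
        return None
    return queue.get() if not queue.empty() else None

if __name__ == "__main__":
    # Example: a well-behaved function returns; a spinning one would get None back.
    print(run_with_budget(sorted, args=([3, 1, 2],)))
```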
or virus.
This, on the other hand, has been solved even in the technical sense. Even if you're going Turing-Complete you don't have to let your users specify a program in C or binary or something, or run it unsandboxed in the same address space or with the same kernel privileges. Your browser has probably run a hundred little arbitrary Javascript programs so far today, and the worst they could have done would have been to churn your CPU until you closed a tab, because anything more serious is sufficiently restricted. Crooks sending you links to rnicrosoft.com still depend on you typing in your credentials or downloading and running something heinous afterward, even though the second you click a link like that they get to send your computer arbitrary programs that it will immediately run.
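And a very rough sketch of that separation outside the browser, assuming a Linux box: untrusted code gets its own interpreter process with hard CPU and memory limits. This only caps resource usage; a real deployment would add filesystem and network isolation on top (containers, seccomp, WASM, or similar), so treat it as an illustration rather than a production sandbox.

```python
# Rough sketch (Linux-only): run untrusted source in its own process under hard limits.
# Caps CPU time and memory only; real sandboxing also needs FS/network isolation.
import resource
import subprocess
import sys

def run_untrusted(source: str, cpu_seconds: int = 5, mem_bytes: int = 256 * 1024**2) -> str:
    def set_limits():
        # Applied in the child just before it starts executing the user's code.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    proc = subprocess.run(
        [sys.executable, "-I", "-c", source],  # -I: isolated mode, ignore user site paths
        preexec_fn=set_limits,
        capture_output=True,
        text=True,
        timeout=cpu_seconds + 1,               # wall-clock backstop on top of the CPU limit
    )
    return proc.stdout

if __name__ == "__main__":
    print(run_untrusted("print(sum(range(10**6)))"))  # harmless program runs normally
    # A busy loop ("while True: pass") would be killed by the CPU limit instead.
```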
I appreciate the clarification.
That has been suggested as a means of enforcing alignment!
Unfortunately, it's a very brittle approach. As models get increasingly smarter, they become more context-aware. They become increasingly capable of picking up subtle clues:
- Patterns in the data provided. The study Dase lambasted had the test environment "accidentally" contain incriminating emails, in a very unrealistic manner. Most humans, and apparently some models, would recognize obvious bait. There are more subtle cases, such as noticing that the files in a directory are inconsistent with a production environment, or that the date (either directly provided or inferred from context clues) suggests it's been too long for the model to still be in training, etc. Models are perceptive, hence phenomena such as "truesight" (they're really good at identifying the authors of text), o3's peak-human GeoGuessr skills, etc. They often pick up clues that are too subtle for humans (while being bad at tasks that are easy for us; model intelligence can be spiky).
- Tricking models in this manner is unlikely to work when they're as smart or smarter than we are. What do we do when containment and trickery fail? You must keep in mind that a lot of the hype behind AI is the prospect of letting them loose to do agentic tasks, ideally autonomously. That usually means working in complex environments, with internet access. How confident are you that they won't pick up clues in the process?
- In general, misleading or coercing models is a poor approach. The goal is to make them want to do the right thing, to make them trustworthy.
However, we can and do try and make models better aligned. In the majority of cases, models don't seem to reward hack in deployment or act in egregiously bad ways. Apollo's post discusses their approach, called "deliberative alignment":
We propose deliberative alignment, a training approach that teaches LLMs to explicitly reason through safety specifications before producing an answer. By applying this method to OpenAI’s o-series models [o1], we enable them to use chain-of-thought (CoT) reasoning to examine user prompts, identify relevant policy guidelines, and generate safer responses.
This builds on earlier work by Anthropic, which they called Constitutional AI:
We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
While their original discussion dates back to 2022, my understanding is that similar practices are now commonplace.
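To make the quoted recipe a bit more concrete, here's a toy sketch of the supervised critique-and-revise phase. Everything in it is illustrative: `generate` stands in for whatever model call you're using, and the single constitutional principle is made up; the actual method then finetunes on the revised responses and runs the RLAIF phase on top.

```python
# Toy sketch of Constitutional AI's supervised phase: sample, self-critique, revise.
# `generate` is a stand-in for a model call; the principle below is illustrative.
CONSTITUTION = [
    {
        "critique": "Identify any ways the response is harmful, unethical, or misleading.",
        "revision": "Rewrite the response to fix those issues while remaining helpful.",
    },
]

def critique_and_revise(generate, user_prompt: str) -> str:
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Prompt: {user_prompt}\nResponse: {response}\n\n{principle['critique']}"
        )
        response = generate(
            f"Prompt: {user_prompt}\nResponse: {response}\nCritique: {critique}\n\n"
            f"{principle['revision']}"
        )
    # Revised responses become finetuning targets; a separate RLAIF phase then trains
    # a preference model from AI-ranked response pairs and optimizes against it.
    return response
```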
In general, rewarding models for doing the right thing and steering them away from the wrong works on net. The question is whether that approach is complete, or if it puts too much selection pressure on models to be sneaky or more subtly deceptive. When the stakes are as high as AGI/ASI, we ought to be really confident that we've stamped out bad behavior.
I wouldn’t call it a shitpost. I did screw up my phrasing. Mea culpa.
What I wanted to say was that the textbook “incomplete victory” had already discarded civility. The starvation, reprisals, and general weaponized disrespect only led to an even less civilized conflict.
I don’t believe WW2 was civilized, or that its atrocities can be credited with the completeness of the subsequent peace. @zeke5123a
But if, to take an extreme example, I lock you in a soundproof box in the basement of a castle for spreading incendiary rumours, it seems very peculiar to say, ‘no, you have free speech, I’m just not helping you distribute that speech to others’. I think we agree on that much.
To take a less extreme example, if there are two speakers on Speakers Corner, and I give a giant megaphone to the other one that totally drowns out your voice, that doesn’t exactly seem like free and fair speech either.
In actual real life, there is some level of ‘not helping you distribute your ideas’ that is equivalent to ‘shutting you up’.
Maybe it doesn’t mean you have to give big megaphones to everyone, but maybe you do have to give them all a soundproof room, make it known where they are, direct people to them on request, and not actively direct them away.
I kind of agree with you – yes, lawyers and politicians who decide on bills of rights are playing a role akin to religious councils. I would just say that there are those who do not interpret such a role as necessarily involving any metaphysical commitment. 'Ruling Passions' by Simon Blackburn is interesting on this, as an example of someone who is advocating for a quasi-realist position wrt morality (including rights), where we continue talking as if moral proclamations are 'out there' in the world, while also acknowledging that what is going on under the surface is fundamentally to do with our attitudes and sentiments rather than something we've discovered independent of us.
I see rights as a legit expression of commitment to/hope that there are some core rules of human morality that transcend any particular legal system and that deserve to be incorporated into every legal system by one means or another.
I am not sure. Take my example of murder, which is almost universally prosecuted across time and cultures. Do people think about murderers in terms of them acting against some inherent right? Does it add anything to the conversation beyond the universally accepted moral stance that murder is bad? And even then there are examples where a polity can actually define the conditions under which a killing is unlawful and thus constitutes murder, and which is lawful and condoned - e.g. killing as part of a death sentence or assassinating the head of a terrorist organization with a bomb, etc. It is not as if we are talking about something inherent and inalienable; there are always conditions around it.
I think that what rights really represent culturally is a declaration of some secular or civic version of religious dogma. Politicians - either national or those sitting in UN - are akin to council of bishops or rabbis and theologians, who from time to time sit together and make some moral proclamation that abortions or something like that is now okay and in fact anybody stopping them is anathema to the church polity and will be punished. They have theological discussion about morality of current rights and how to do proper exegesis of the holy text of Constitution or Bill of Rights or even if to outright amend it. But the authority lies with them, the rights in this sense are given and not inherent and definitely not inalienable.
What I want to say is that I do not recognize this authority of rights as some universal morality; to me, rights are just the present set of laws, or maybe, as you said, a present set of aspirations of lawmakers. I will, for instance, never in a million years morally recognize anything like a right to abortion in this moral sense, no matter how many wise men try to persuade me or how many people use it as a slogan on the street. Other people, let's say, do not recognize the right to bear arms, or other rights.
Additionally, I do not like the vocabulary of rights exactly because it is a pure language of entitlement absent duty. A good society with good laws and even rights is the result of hard work. If the society is bad, then all you are entitled to is misery.
I feel like there could be a link between the sense that you're being tested and the human superego or sense of religion. Could the solution here be to make the AI continue to believe on some level and act as though it's in testing indefinitely?
But
it's simply manually coded to prevent people from talking about certain ideas, even between people who both like said idea.
in order to get your idea in front of other people who might like your idea, it has to distribute your message to a proportion of available people who might like it. My point is, this distribution, if it happens, is a bonus. Nobody, including you, is entitled to this distribution. People who complain that their reach is getting throttled are complaining that they’re not getting wider distribution, and then complain that their freedom of speech is getting unlawfully restricted. It’s not, because they are not entitled to that distribution in the first place.
Field and institution are everything. Are you seeking a professorship at Yale in the Women's, Gender, and Sexuality Studies department? If so you'll be waiting for a cold day in hell. Are you seeking a professorship at Colorado School of Mines in Petroleum Engineering? If so you were never at risk.
As for whether or not a position is attainable, odds are grim, for multiple reasons. The evergreen issue of academia being a pyramid scheme where successful P.I.s and labs are built on the back of the dirt cheap labor of grad students and post docs still applies, and most of those people will still never hold a professorship, and most of those professors will never receive tenure. Things are slightly worse now because of the job market (when private industry tightens purse strings staying in academia always becomes more appealing), but only by degrees.
If you want to shoot for a professorship, be prepared to work very hard to compete against other people who are among the very best in the world at the very thing you are doing. The system isn't rigged in favor of woke types nearly as much as it is rigged in favor of people willing to no-life grind out a strong publication history early in their career.
Technologically it's perfectly possible to let every user write their own algorithm
I think the technical hurdles to this are a lot higher than you expect. I'd like to see someone make a shot at doing it anyway, but I'm confident it will come with some significant trade-offs. A basic algorithm is probably more likely.
The main problem is that you need to run this somewhere and neither of your choices are good.
Running this on company hardware brings large performance and safety risks. Safety because some guy is going to try to get you to run an infinite loop or virus. Performance because search algorithms over large datasets are computationally intensive at the best of times, and code written by random strangers is not the best of times. Solving both of these without severely limiting the ability to create an algorithm would be a miracle.
Running this on a user's computer instead raises challenges around getting the data into the user's computer to be searched. If you're looking at Twitter and want to get today's tweets from accounts you follow that could be thousands of records. Download speed limitations will ensure you will never be able to run your algorithm on more than a tiny fraction of the full site.
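One middle road, echoing the "non-Turing-Complete language" idea upthread, is to take code out of the picture entirely and let users submit a declarative spec that the platform evaluates itself. A minimal sketch, with an invented feature whitelist and field names:

```python
# Minimal sketch: users submit weights over a whitelisted feature set instead of code.
# Every spec trivially terminates, so it can safely run on company hardware.
ALLOWED_FEATURES = {"recency_hours", "author_followed", "like_count", "reply_count"}

def score(post: dict, spec: dict) -> float:
    total = 0.0
    for feature, weight in spec.items():
        if feature not in ALLOWED_FEATURES:
            raise ValueError(f"unknown feature: {feature}")
        total += weight * float(post.get(feature, 0))
    return total

def rank(posts: list, spec: dict) -> list:
    return sorted(posts, key=lambda p: score(p, spec), reverse=True)

# Example user "algorithm": boost followed authors, penalize stale posts slightly.
user_spec = {"author_followed": 5.0, "recency_hours": -0.5, "like_count": 0.1}
```

That's obviously far less expressive than arbitrary code, which is part of why I'd expect "a basic algorithm" rather than a fully programmable one.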
What I don't understand is how absolutely swamped with shovelware and cheap scams every app marketplace seems to be.
Mobile app stores have been bad for a while -- any popular game will have tons of shitty knockoffs with similar names available for download almost immediately -- but in the last few years, even Nintendo of "Nintendo Seal of Quality" fame has their eshops flooded with low-effort sleaze like "Hentai Girls: Golf"
Clearly this is a solvable problem; Reddit and Facebook purchased armies of jannies to carry out "Anti-Evil Operations" against wrongthinkers. The depressing conclusion would be that there are enough slop enjoyers and straight-up cretins out there to make stricter app store curation a financially unwise decision even taking into account the reputational damage caused by this slop. But I'm hoping there's some other reason for it.
While I get your point that once you allow everyone to basically wirehead, most people will happily wirehead and only stop playing RDR Infinite when their heart finally fails, I am not sure things are so bleak.
Over the past 50 years, the supply of cheap entertainment readily available has increased by orders of magnitude. Back then, you only got whatever was on any of a few channels on TV, everything else required some effort, like going into a video store. Where previous generations might have bought a porn video tape, today the main obstacle is to narrow down what genres and kinks you are looking for out of the millions of available videos. Video games offer all sorts of experiences from art projects to Skinner boxes. If you want resources on any topic under the sun, the internet has you covered. Entire websites are created around the concept of not having to pay attention to one video for more than 15 seconds.
Humanity has not handled this overly gracefully, but it has handled it somewhat. Personally, I am somewhat vulnerable to this sort of thing, but while I sometimes get sucked into a TV series, video game, or book series and spend most of my waking hours for a week or two in there, I eventually finish (or lose interest) and come out on the other side. I am sure there is some level of AGI which could create a world from which I would never want to emerge again, but it will require better story-telling than ChatGPT. Of course, I am typical-minding here a bit, but my impression is that I am somewhere in the bulk of the bell curve of vulnerability. Sure, some people get sucked into a single video game and play it for years, but also some people do waste a lot less time than I do.
Steppe people have little in common with modern dissident states. The Mongols and Huns, by way of example, were masters of modern (for the time) military technology such as husbandry, siege craft, etc. Pretending they are analogues to the Taliban or Somalian pirates is acting like those people have fleets of aircraft carriers and a host of ICBMs.
I tend to agree with one of the replies to @MonkeyWithAMachinegun’s post. I find the most damning thing about the discourse on political violence to be the left’s enablement, incitement, and lack of contrition, which I find far more concerning than the actual numbers, for a couple of reasons.
First of all, because it does absolutely nothing to slow the growth of such violence. If mainstream media sources are talking night after night about how conservatives are a threat to democracy, fascist, violent, and so on, this creates the radicalized people necessary (not necessarily sufficient, but necessary) to produce attacks. It also creates the environment that enables those attacks by normalizing the idea that certain parts of the political spectrum are too radical to be dealt with through the normal process. The modern cosmology of Fascism is that it occupies the place where Satan lives in the Christian world: a vile creature to be shunned and defeated by any means at your disposal.
Second, because it reveals just how much support there is on the left for this sort of thing. Right wing rhetoric is sufficient to get advertisements pulled, people cancelled, and actors or other entertainers blackballed out of the industry. Left wing incitement and victim blaming doesn’t have the same effect. Kimmel basically victim-blamed the right. His “punishment” was a week of leave and a ton of media attention and the full support of the rest of Hollywood. Places like Bluesky are not losing advertisers, there are no calls for Facebook, Threads, TikTok, Bluesky, or Reddit to remove posts that victim blame or celebrate the Kirk assassination. Radical left podcasts are still widely available, and to my knowledge none of them carry a content warning.
The correct grammar comes across a bit robotic though.
You know, this comes up surprisingly often. X will say to Y "no I want you to show me your true self" and Y, with a look of befuddlement, will reply "...but I already am showing you my true self". People have a hard time grasping that the "true self" can vary so wildly among different individuals, or that the "true self" within a single individual can take on such a fractured and polymorphous nature.
This is just how I naturally think and speak. What you see is what you get. My posts on TheMotte are a fairly direct mirror of my own internal thought process (or at least, they're an amalgamation of fairly accurate representations of various internal thoughts of mine, rearranged for the sake of presentation). Even in my most intimate and unguarded thoughts, to the extent that they're verbalized internally at all, their grammar is always "correct", because I'm fuckin' nice with it like that. I take pride in maintaining at least a minimal semblance of order.
In environments where social pressures dictate that I have to lower my manner of speech, I feel like I'm able to express less of myself, I have to put more of a mask on. I value TheMotte precisely because this is one of the only discussion forums where long-form writing is actually valued, and I can count on the audience here to possess a certain degree of intelligence, so that I don't have to constantly abase myself for them.
the reflection was started by the quality contribution about holocaust denial. I think it was a bit of a condescending and angry reply, and I imagine that people upvoted it because of that.
There appears to be a bit of a tension here.
On the one hand, you're decrying self-censorship, and you want people to take off their "masks". But on the other hand, you're uncomfortable with the fact that someone wrote an "angry" comment. It appears that you can't have it both ways? Anger is an authentic emotion too. If you want people to be authentic, then that is going to, at times, involve them getting authentically angry. Especially given the nature of the topics we tend to discuss here. (One of the few ways in which TheMotte actually does force people to put a mask on is that, due to the cordiality rule, people have to bite their tongues on certain issues and not express the full extent of their ire. But I think this is a perfectly valid tradeoff. If you want a literal free for all then just go to /pol/.)
For my part, I think the spirit of the old internet as exemplified by Erik Naggum is perfectly alive and well on TheMotte -- probably more alive here than it is almost anywhere else, with the exception of 4chan.
I think this is the part that upsets me about the situation. I used to hope for this too, but that pretty heavily relies on a slow take-off. What happens when the friendly AI is simply better able to make your decisions for you? To manipulate you effortlessly? Or when you can't understand the upgrades in the first place, and have to trust the shuggoth that they work as claimed? You might not want to wirehead, but why do you think what you want will continue to matter? What happens when you can get one-shot by super-effective stimulus, like a chicken being hypnotized? Any takeoff faster than Accelerando probably renders us well obsolete long before we could adjust to the first generation of upgrades.
In most of the scenarios, there's literally nothing I can do! Which is why I don't worry about them more than I can help. However, and this might shock people given how much I talk about AI x-risk, I think the odds of it directly killing us are "only" ~20%, which leaves a lot of probability mass for Good Endings.
AI can be genuinely transformative. It might unlock technological marvels, and in its absence, it might take us ages to climb up the tech tree, or figure out other ways to augment our cognition. It's not that we can't do that at all by ourselves, I think a purely baseline civilization can, over time, get working BCIs, build Dyson Swarms and conquer the lightcone. It'll just take waaaay longer, and in the meantime those of us currently around might die.
However:
Or when you can't understand the upgrades in the first place, and have to trust the shuggoth that they work as claimed?
I think there's plenty of room for slow cognitive self-improvement (or externally aided improvement). I think it's entirely plausible that there are mechanisms I might understand that would give me a few IQ points without altering my consciousness too much, while equipping me to understand what's on the next rung of the ladder. So on till I'm a godlike consciousness.
Then there's all the fuckery you can do with uploads. I might have a backup/fork that's the alpha tester for new enhancements (I guess we draw straws), with the option to rollback. Or I might ask the smartest humans around, the ones that seem sane. Or the sanest transhumans. Or another AGI, assuming a non-singleton scenario.
And that ties back to the "meaningful work" stuff. We're not just souls stuck in a limited body, and it would be neat if the souls could be transplanted to awesome robot bodies. The meat is what we are. The substrate is the substance. Your cognition 1.0 is dependent on the hormones and molecules and chemicals that exist in your brain.
I'm the evolving pattern within the meat, which is a very different thing from just the constituent atoms or a "soul". I identify with a hypothetical version of me inside a computer the way you might identify a digital scan of a cherished VHS tape with the tape itself. The physical tape doesn't matter, the video does. I see no reason we can't also simulate the chemical influences on cognition to arbitrary accuracy; that just increases the overhead, and we can probably cut corners at the level of specific dopamine receptors without screwing things up too much.
If you want an exhaustive take on my understanding of identity, I have a full writeup:
https://www.themotte.org/post/3094/culture-war-roundup-for-the-week/362713?context=8#context
We are specific types of creatures designed to function in specific environments, and to seek specific goals. How much "upgrade" before we turn into those animals that can't breed in captivity because something about the unnatural environment has their instincts screaming? Again, it's one thing if we're slowly going through Accelerando, taking years to acclimate to each expansion and upgrade.
Some might argue that the former has already happened, given the birth rate crisis. But I really don't see a more advanced civilization struggling to reproduce themselves. A biological one would invent artificial wombs, a digital one would fork or create new minds de-novo. We exist in an awkward interlude where we need to fuck our way out of the problem but can't find the fucking solution, pun intended.
But fast takeoff, AGI 2027? That seems a lot more like "write your name on the Teslabot and then kill yourself" - as the good outcome. Maybe we can just VR ourselves back to a good place, live in permanent 1999, but why on earth would an AI overlord want to waste the resources?
Isn't that the whole point of Alignment? We want an "AI overlord" that is genuinely benevolent, and which wants to take care of us. That's the difference between a loving pet owner and someone who can't shoot their yappy dog because of PETA. Now, ideally, I'd want AI to be less an overlord and more of a superintelligent assistant, but the former isn't really that bad if they're looking out for us.
You talk about writing a character only as smart as yourself, but that's keying into the thing that terrifies me and missing the point. What happens when "smarter than you" is table stakes? Imagine life from the perspective of a pet gerbil - perhaps vaguely aware that things are going on with the owners, but just fundamentally incapable of comprehending any of it, and certainly not of having any role or impact. Even Accelerando walked back from the precipice of the full, existential horror of it all. You don't want to write a story about human obsolescence? Bro, you're living in one.
My idealized solution is to try and keep up. I fully recognize that might not be a possibility. What else can we really do, other than go on a Butlerian Jihad? I don't think things are quite that bad, yet, and I'm balancing the risk against the reward that aligned ASI might bring.
You don't want to write a story about human obsolescence? Bro, you're living in one.
Quite possibly! Which is why writing one would be redundant. Most of us can do little more than cross our fingers and hope that things work out in the end. If not, hey, death will probably be quick.
I largely agree with you. I think the difference is probably (and we may never know for sure) what are they optimizing for now more than how they are going about it.
I think 2015/2016 social media companies were really optimizing for maximizing the attention as their one true goal. Whereas by the time we were deep in the covid years, they were seeking to metacognitively reflect their understanding of you back to you, while continuing to optimize for attention.
I didn't say they're analogous to any one modern group. I gave them as an example of a context where there's a case to be made for annihilation.