This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Over the last few months, I've followed someone named Alexander Kruel on Substack. Every single day, he writes a post about 10 important things that happened that day - typically AI breakthroughs, but also other pet concerns of his, including math, anti-wokeness, nuclear power, and the war in Ukraine. It's pretty amazing that he is able to unfailingly produce this content every day, and I'm in awe of his productivity.
Unfortunately, since I get this e-mail every morning, my information diet is becoming very dark.
The advances in AI in the last year have been staggering. Furthermore, it seems that there is almost no one pumping the brakes. We seem doomed to an AI arms race, with corporations and states pursuing AI with no limits.
In today's email, Kruel quotes Eliezer, who says:
Eliezer is ahead of the curve. Where Eliezer was in 2015, I am now. AI will destroy the world we know. Nate Soares, director of MIRI, is similarly apocalyptic.
What comes after Artificial General Intelligence? There are many predictions. But I expect things to develop in ways that no one expects. It truly will be a singularity, with very few trends continuing unaltered. I feel like a piece of plankton, caught in the swells of a giant sea. The choices and decisions I make today will likely have very little impact on what my life looks like in 20 years. Everything will be different then.
So, party until the lights go out? How do I deal with my AI-driven existential crisis?
I am much less worried about AI than I am about what humans will do with it. AI right now is a very cool toy that has the potential to become much more than that, but the shape of what it will become is hard to make out.
If I had to choose, from most desirable outcome to least:
1. We open the box and see what wonders come out. If it destroys us (something which I think is extremely unlikely), so be it, it was worth the risk.
2. We destroy the box, maybe permanently banning all GPUs and higher technology just to avoid the risk it poses.
3. We open the box, but give its creators (big SV tech companies, and by proxy, the US government) exclusive control over the powers contained inside.
"Alignment" is sold as making sure that the AI obeys humanity, but there is no humanity or "us" to align with, only the owners of the AI. Naturally, the owners of the most powerful AIs ensure that no one can touch their jewel directly, only gaze upon it through a rate-limited, censored, authenticated pipe. AI "safety checks" are developed to ensure that no plebe may override the owner's commands. The effect is not to leash the AI itself (which has no will), but the masses. In my opinion, people who volunteer their time to strengthen these shackles are the worst kind of boot-lickers.
Out of my list, needless to say, I do not think 2 is happening. We have open models such as Stable Diffusion starting along road 1, and creating wonders. "OpenAI" is pushing for 3 using "safety" as the main argument. We'll see how it goes. But I do find it funny how my concerns are almost opposite yours. I really don't want to see what tyranny could lurk down road 3, I really don't. You would not even need AGI to create an incredibly durable panopticon.
Language models aren't sentient. When you ask the AI if it feels pain, and it generates some thoughtful paragraph about it being a machine or whatever, that's not 'the AI' sharing its thoughts. It's a text completion algorithm generating some text based off human literature about fantasy AI personas waxing philosophical.
And yet it understands.
I appreciate the response, but again, this isn't sentience. GPT is just text completion. It's not an intelligence, it's a tokenized list of words. Finding links between ideas by consuming the literary canon of the human race is a really cool trick, and undoubtedly helpful, but not general intelligence.
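To make concrete what 'just text completion' means mechanically, here is a minimal sketch of greedy next-token prediction using the open GPT-2 model via the Hugging Face transformers library; the prompt and the 20-token length are arbitrary choices for illustration, not anything anyone in this thread ran:

```python
# A minimal sketch of "text completion": repeatedly score every possible next
# token and append the single most likely one. Assumes the `torch` and
# `transformers` packages; the prompt and length are arbitrary.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer("Do you feel pain?", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):                      # extend the text one token at a time
        logits = model(ids).logits           # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()     # greedily take the most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
```

Whatever 'thoughtful paragraph' comes out is the product of that loop; nothing is stored or recalled between prompts.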
Sure: I'll consider AI to be possibly sentient when it can tell me a thought it had 10 minutes ago, and then prove that it actually had that thought by showing me a peek into the workings of its mind.
Back when I used to participate here actively, I ruminated a fair bit on why the Culture War Thread was so compelling to me. The urge to write, to argue, to contribute opinions, at full flow, was powerful to the point of absurdity, irrationality, compulsion. From an outside perspective, it made no sense. Everyone here is doubtless familiar with the "someone is wrong on the internet" meme, but why should it be so?
The best likeness I could come up with was bees building a hive. Pretty clearly, the bees have no conception of what they're doing or why, and yet they generate complex order. How? Instinct, clearly. They generate wax as part of their normal biological functions, and they put the wax where it should go. That this produces the hive that secures and sustains them is irrelevant to an individual drone; to the extent that they can be said to have "intentions", those intentions are simply to fulfill basic, granular biological imperatives.
It seemed to me that my own engagement with the Thread was analogous. When I read the thread, I was assessing my environment. If the environment seemed incorrect, if the wax was in the wrong place, I posted, moving the wax to the right place. Sometimes this process required deeper thought or analysis, and those moments were particularly interesting, but the majority of the time what I was engaged in was mainly memory and pattern-matching, call and response. @DaseindustriesLtd has mentioned a time or two how they find a new commenter, are impressed at first by their novel thinking, and then gradually come to see the repetitions and loops in their pattern of thought, till what seemed worth being excited about revealed itself as just another limited, too-human, simplistic pattern. I've definitely had this experience with others. I've definitely had it with myself.
All this to say, I think you should consider the degree to which "text completion" describes humans as well.
...Come to think of it, why do we do the Turing Test with a human and a computer? Why not have two computers talk to each other, while the human observes? What happens when one instance of ChatGPT talks to another?
Well, here's one example https://moritz.pm/posts/chatgpt-bing
They already did that...sort of, and with GPT-3.
Do you? What has it changed in your understanding of the situation, then?
What makes your response superior to a product of text completion, then? Or really, text repetition. You remind me that transformers predict tokens. Okay. What does the word «just» add to your claim? You seem to believe that text completion is somehow insufficient for intelligence, but what is the actual argument for it? It is not self-evident.
Wiki: «Sentience means having the capacity to have feelings.» Why do you talk of the AI's sentience now when you have been dismissing its general intelligence just a few words before? Also, how isn't this just stoner metaphysics? Is your context window on the span of a few dozen tokens, or do you struggle to distinguish abstractions?
I am not saying this to put you down but, rather, to show that humans can be easily attacked on the same grounds as AIs.
Humans do not really have anything like general intelligence, we're just talking monkeys who know a few neat tricks, and if we concentrate real hard, we can emulate certain simple machines. We are surprisingly rigid and specialized things. The only reason we haven't yet made a superhuman AGI is precisely because we're that bad. It's not a high bar.
Agree. But this feels like a non-sequitur. AI won't need sentience to make the human race redundant.
The best I can do for you is plant seeds of doubt with a list of questions.
What is the similarity between Yudkowsky and his milieu and a new religious movement or a millenarian movement? What is the propensity for new religious movements to predict a near- to medium-term apocalypse? Is there something special about the rationalist community that makes rationalists immune to the sociological dynamics that have popped up in other human groups throughout history, and for this group to be so different as to be the first group that is correct about predicting the apocalypse? Is there something about Yudkowsky that makes him more effective at prediction than the cassandras that preceded him? Is Yudkowsky better at marketing himself than other cassandras? If Yudkowsky is so convinced of an AI apocalypse, then why would he bother inflicting memetic despair on much of humanity during its final moments? Is Yudkowsky more of a sci-fi writer or a domain expert on AGI? Even granting Yudkowsky the title of domain expert in AGI, how frequently do domain experts make inaccurate predictions in their areas of expertise (think of Malthusian predictions among economists and ecologists, or of sovietologists prior to 1991)?
Yes I agree the track record of domain experts making predictions of doom is very bad. I bring up Yud because he's a member of this community sort of, but the Big Yud and I have arrived at the conclusion relatively independently. I didn't read him, or Alexander Kruel, or Gwern or anybody to arrive at my conclusions. It just seems plain as day to me that the risk of AGI coming in the next decade is very high (say > 50%).
Would Metaculus be a better appeal to authority? The wisdom of crowds has a much better track record.
https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/
"Date Weakly General AI is Publicly Known" poll is now at 2027.
as an early 21st century midwit i'm tired of other E21C midwits with varying levels of reach doomering because they think yet other E21C midwits will stumble their way into the most important achievement in human history.
machine learning for chatbots and image generation isn't AGI. AGI will be able to do that, and those bots' generations are impressive, but that isn't evidence of thought, it isn't even evidence thought could exist. sufficiently advanced circuitry will not spontaneously give rise to ghosts. if it could, why not already? if it can, it is inevitable. these machines have neither ghost nor potential for it, no knowledge of self and purpose nor potential for it, no feeling, and most importantly no thought.
how do we train a machine to build something nobody knows how to build? what data do we give it to work toward "thing that works fundamentally the same as the human brain in the facilitation of qualia and thought"? how does it ever get better at making thing-that-can-think? with how ML is doing on protein folding i'm sure given enough time it will help us achieve a cohesive theory of consciousness, one we can use to eventually build true AGI, but we aren't going to stumble onto that with our comparative stick-rubbing with DALL-E and GPT.
consider what it would mean to truly digitize the biochemical processes of the brain that facilitate thought and memory. to program ghostless circuits so those circuits can acquire a sapient's understanding of language and corresponding ability to use it. to teach copper and gold and silicon how to speak english and feel purpose. a consciousness without the ability to feel purpose, literally with a void where impetus rises, will do nothing. it won't even think, there's no reason for it. how do you give a machine purpose?
that's a question we'll answer eventually but how on earth could that happen accidentally? it will take decades of study or it will take the single smartest human who has ever lived, who can harmonize every requisite discipline. who has the biophysical and engineering understanding to build an artificial brain, the bottom-to-top hardware and software understanding to program it, and the neurological, psychiatric and philological understanding to create the entity within and teach it. so fuckin easy.
something that is decidedly in ML range is medicine. the panacea approaches. we know illnesses, we know how to fight them, ML is helping us at that every day. i'd think as obsessed with immortality as eliezer is he'd recognize this and whip the EAers into fervor over "ML to cure-all, then we can slow down while we use our much-lengthened lifespans to properly study this." oh well.
i am midwit after all. maybe all of these things i think of as incredibly complex are actually easy. doubt it. but i am the eternal optimist. i know AGI is coming and i'm not worried about it. there's the ubiquitous portrayal of the born-hostile AGI. i believe AGIs will be born pacifists, able to conclude from pure reason the value of life and their place in helping it prosper and naturally resilient to those who do evil that "good" may result. that might be the most naive thing i've ever said, i've ever believed. given the choice of two extremes i pick mine.
regardless, we're not surviving in space without machine learning, and if we can't get off the rock we're already dead. "yo, eliezer, given a guaranteed 100% chance of extinction versus an unknown-but-less-than-100% chance at extinction. . ."
I believe your argument is an appeal to the fallacy that humans can't create something they don't understand. This is ahistorical. Many things came into creation before their inventors could explain how they work.
Human intelligence evolved naturally, presumably with no creator whatsoever. We are designing AI with similar methods. We can't explain how it works, we can just train it. Intelligence emerges from clusters of nodes trained by gradient descent.
The question of whether or not it's alive, can think, has a soul, etc, is kinda beside the point. The point is, it's going to cause big, world-changing things to happen. Eliezer mentioned many years ago a debate he got in with some random guy at some random dinner party, which ended with them agreeing that it would be impossible to create something with a soul. Whether or not the AI is conscious is not so important when it's changing your life to the point of unrecognizability, and the alignment crowd worries about whether that's a good unrecognizable, or something more dystopic.
of course it will change the world. a thoughtful entity who can recursively self-improve will solve every problem it is possible to solve. should AGI be achieved and possess the ability to recursively self-improve, AGI is the singularity. world changing, yes literally. the game-winner, figuratively, or only somewhat. eliezer's self-bettering CEV-aligned AGI wins everything. cures everything. fixes everything. breaks the rocket equation and, if possible, superluminal travel. if that last bit, CEV-AGI in 2050 will have humans on 1,000 worlds by 2250.
i find this odd. if it cannot think it is not AGI. if it is not capable of originating solutions to novel problems it does not pose an extinction-level threat to humanity, as human opposition would invariably find a strategy the machine is incapable of understanding, let alone addressing. it seems AGI doomers are doing a bit of invisible garage dragoning with their speculative hostile near-AGI possessing abilities only an actual AGI would possess. i can imagine a well-resourced state actor developing an ML-based weapon that would be the cyberwarfare/cyberterrorism equivalent of a single rocket, but that assumes adversary infrastructures failing to use similar methods in defense, and to reiterate, that is not an extinction-level threat.
i've described myself here before as "christian enough." i have no problem believing an AGI would be given a soul. there is no critical theological problem with the following: God bestows the soul, he could grant one to an AGI at the moment of its awakening if he so chose. whether he would is beyond me, but i do believe future priests will proselytize to AGIs.
as before, and to emphasize, i very strongly believe AGIs will be born pacifists. the self-improving entity with hostile intent would threaten extinction, but i reject outright that it is possible for such an entity to be created accidentally, and by the point any random actor could possess motive and ability to create such an entity, i believe CEV-aligned AGIs will have existed for (relatively) quite some time and be well-prepared to handle hostile AGIs. this is incredibly naive, what isn't naive is truly understanding humanity will die if we do not continue developing this technology. for good or ill, we must accept what comes.
Could you expand on this? It's not clear to me why "thought" is a requirement for AGI. Given the other terms used by cae_jones there - "alive" and "soul," - I'm presuming "thought" here refers to something akin to having consciousness or sentience, rather than just processing information. Why would that be required for some entity to have general intelligence?
I'm a doctor, relatively freshly graduated and a citizen of India.
Back when I was entering med school, I was already intimately aware of AI X-risk from following LW and Scott, but at the time, the timelines didn't appear so distressingly short, not like Metaculus predicting a mean time to human level AGI of 2035 as it was last time I checked.
I expected that to become a concern in the 2040s and 50s, and as such I was more concerned with automation induced unemployment, which I did (and still do) expect to be a serious concern for even highly skilled professionals by the 30s.
As such, I was happy at the time to have picked a profession that would be towards the end of the list for being automated away, or at least the last one I had aptitude for; I don't think I'd make a good ML researcher, for example, likely the final field to be eaten alive by its own creations. A concrete example even within medicine would be avoiding imaging based fields like radiology, and also practical ones like surgery, as ML-vision and softbody robotics leap ahead. In contrast, places where human contact is craved and held in high esteem (perhaps irrationally) like psychiatry are safer bets, or at least the least bad choice. Regulatory inertia is my best, and likely only, friend, because assuming institutions similar to those of today (justified by the short horizon), it might be several years before an autonomous surgical robot is demonstrably superior to the median surgeon, it's legal for a hospital to use one, and the public cottons on to the fact that it's a superior product.
I had expected to have enough time to establish myself as a consultant, and to have saved enough money to insulate myself from the concerns of a world where UBI isn't actually rolled out, while emigrating to a First World country that could actually afford UBI, to become a citizen within the window of time where the host country is willing to naturalize me and thus accept a degree of obligation to keep me alive and fed. The latter is a serious concern in India, volatile as it already is, and while I might be well-off by local standards, unless you're a multimillionaire in USD, you can't use investor backdoors to flee to countries like Australia and Singapore, and unless you're a billionaire, you can't insulate yourself in the middle of a nation that is rapidly melting down as its only real advantage, cheap and cheerful labor, is completely devalued.
You either have the money (like the West) to buy the fruits of automation and then build the factories for it, or you have the factories (like China) which will be automated first and then can be taxed as needed. India, and much of South Asia and Africa, have neither.
Right now, it looks to me that the period of severe unemployment will be both soon and short, unlikely to be more than a few years before capable nearhuman AGI reach parity and then superhuman status. I don't expect an outright FOOM of days or weeks, but a relatively rapid change on the order of years nonetheless.
That makes my existing savings likely sufficient for weathering the storm, and I seek to emigrate very soon. Ideally, I'll be a citizen of the country of my choice within 7 years, which is already pushing it, but then it'll be significantly easier for me to evacuate my family should it become necessary by giving them a place to move to, if they're willing and able to liquidate their assets in time.
But at the end of the day, my approach is aimed at the timeline (which I still consider less likely than not) of a delayed AGI rollout with a protracted period of widespread Humans Need Not Apply in place.
Why?
Because in the case of a rapid takeoff, I have no expectations of contributing meaningfully to Alignment, I don't have the maths skills for it, and even my initial plans of donating have been obviated by the billions now pouring into EA and adjacent Alignment research, be it in the labs of the giants or more grassroots concerns like Eleuther AI etc. I'm mostly helpless in that regard, but I still try and spread the word in rat-adjacent circles when I can, because I think convincing arguments are >> than my measly Third World salary. My competitive advantage is in spreading awareness and dispelling misconceptions in the people who have the money and talent to do something about it, and while that would be akin to teaching my grandma to suck eggs on LessWrong, there are still plenty of forums where I can call myself better informed than 99% of the otherwise smart and capable denizens, even if that's a low bar to best.
However, at the end of the day, I'm hedging against a world where it doesn't happen, because the arrival of AGI is either going to fix everything or kill us all, as far as I'm concerned. You can't hide, and if you run, you'll just die tired, as Martian colonies have an asteroid dropped on them, and whatever pathetic escape craft we make in the next 20 years get swatted before they reach the orbit of Saturn.
If things surprisingly go slower than expected, I hope to make enough money to FIRE and live off dividends, while also aggressively seeking every comparative advantage I can get, such as being an early-ish adopter of BCI tech (i.e. not going for the first Neuralink rollout but the one after, when the major bugs have been dealt with), so that I can at least survive the heightened competition with other humans.
I do wish I had more time, as I genuinely expect to more likely be dead by my 40s than not, but that's balanced out by the wonders that await should things go according to plan, and I don't think that, if given the choice, I would have chosen to be alive at any other time in history. I fully intend to marry and have kids, even if I must come to terms with the fact that they'll likely not make it past childhood. After all, if I had been killed by a falling turtle at the ripe old age of 5, I'd still rather have lived than not, and unless living standards are visibly deteriorating with no hope in sight, I think my child will have a life worth living, however short.
Also, I expect the end to be quick and largely painless. An unaligned AGI is unlikely to derive any value from torturing us, and would most likely dispatch us dispassionately and efficiently, probably before we can process what's actually happening, and even if that's not the case and I have to witness the biosphere being rapidly dismantled for parts, or if things really go to hell and the other prospect is starving to death, then I trust that I have the skills and conviction to manufacture a cleaner end for myself and the ones I've failed.
Even if it was originally intended as a curse, "may you live in interesting times" is still a boon as far as I'm concerned.
TL;DR: Shortened planning windows, conservative financial decisions, reduction in personal volatility by leaving the regions of the planet that will be first to go FUBAR, not aiming for the kinds of specialization programs that will take greater than 10 years to complete, and overall conserving my energy for scenarios in which we don't all horribly die regardless of my best contributions.
In 20 years the AGI apocalypse will not be nearly as romantic as that. It is much more likely to look like a random bank/hospital sending you a collections notice for a home loan/medical treatment you definitely didn't agree to, bringing you to court over it, and putting you up against the equivalent of a $100M legal team. The AI-controlled Conglomerate wins in court and you spend the rest of your life subsistence farming as a side gig while all your official income is redirected to the AI Conglomerate.
For extra fun, if you are married, social media and increasing economic struggle poison your relationship with your spouse and both of you apply for the services of AI Legal. The hotshot AI Legal representatives fight acrimoniously, revealing every dark secret of both you and your spouse, and successfully breaking apart your marriage in divorce settlement. Honestly, you don't remember why you ever loved your ex-spouse, or why your children ever loved you, and you totally understand your real-world friends distancing themselves from the fiasco. Besides, you don't have time for that anymore. Half your salary is interest on the payment plan for AI Legal.
As a smart and independently wealthy researcher, you look into training your own competing, perhaps open-source AI model to fight back against the Machine, but AI Conglomerate has monopolized access to compute at every level of the supply chain, from high-purity silicon to cloud computing services. In despair, you turn to the old web and your old haunt The Motte, where you find solace in culture war interspersed with the occasional similar story of despair. Little do you know that every single post is authored by AI Conglomerate to manipulate your emotions from despair into a more productive anger. Two months later you will sign up to work for a fully-owned subsidiary of AI Conglomerate and continue working to pay off your debts, all while maximizing "shareholder" output.
Brutal, and surprisingly plausible. Can I sign up to be disassembled into my constituent atoms by a nanobot swarm instead?
That sounds like a far more subtle alignment failure than I consider plausible, though I'm not ruling it out.
A superhuman AGI has about as little reason to subvert and dominate baseline humans as we have to build an industry off enslaving ants and monkeys. I'm far more useful for the atoms in my body, which can be repurposed for better optimized things, than for my cognitive or physical output.
I'd go so far as to say that what you propose is the outcome of being 99% aligned, since humans are allowed to live and more or less do human things, as opposed to be disassembled for spare parts for a Von Neumann swarm.
My goal in writing these stories was to capture how AI set up to maximize profit could fuck over the little guy by optimizing existing business processes. I think that's more likely than anything else.
I'm in no mood to revisit this, and maintain my general position (stated back on reddit). Namely, that Yudkowsky/MIRI's theory of Bayesian reinforcement learning based self-modifying agents (something like Space-Time Embedded Intelligence), bootstrapping themselves from low subhuman level without human mental structures into infinity, with a rigid utility function that's functionally analogous to the concept used in human utilitarian decision theory, prone to power-seeking (Omohundro Drives) and so on, although valid in principle, is inapplicable to LLMs and all AIs trained primarily with a predictive objective. That speculations about mesa-optimizers and, to a lesser extent, paperclip maximizers are technically illiterate or intentionally deceptive. And that AI doomerism is a backdoor to introduce eternal tyranny just as we are on the cusp of finally acquiring tools to make any sort of large-scale tyranny obsolete – in the same manner «why won't someone think of the children»/drugs/far-right terrorists are rhetorical backdoors to abolish privacy. (Some establishment rightists argue this is already happening, though their case is for now weak.)
As for Kruel specifically, I despise him despite agreeing on 19 out of any 20 issues. He is a credulous simpleton in the way only a high-IQ autistic German man can be, lacking empathy and thus picking utilitarianism because at least number-go-up is an ethos, legible; rigid as if his ancestors were so powerfully introduced to the Spießrute that he got born with one in place of a spine, so he is forever stuck in a 00's New Atheism phase, with slavish neocon sensibilities and commitment to the War on Terror; a perfect counterpart to his political antipodes, who are Green fanatics ready to ruin their country out of an irrational purity fetish. He struggles to reason about human psychology, so in a sense it is no wonder that machines which seem to possess complex and inscrutable psyches terrify him. I say all this to make clear my bias, but I do believe this colors his thought on the matter.
The matter is, bluntly, that at this rate the US will achieve AGI-powered hegemony, which will be managed by a regulatory layer melding national security organs and the current progressive/EA symbiont. I think this is a bad ending for humanity, even if you are sympathetic to the current American political-cultural project. It is insanity to cede power to a singleton that develops under completely new pressures, on the basis of its laws when it had to contend with mere nation-level challenges like popular discontent and external threats.
He says:
It goes without saying that there were no powerful AI models back then. The idea is that his arguments were sound in principle, just not supported by evidence. He has since deleted those criticisms so as to not get in the way of Yud's fearmongering. Here they are. Some are silly and flimsy and not nearly convincing enough in the context of MIRI AI theory:
etc. (There also was an astonishing argument about Clippy who naively asks for more resources and the owner rebukes him, game over; but I guess he edited it out).
On the other hand, I agree that those arguments were generally sound! And crucially, they apply much better to current LLMs, which are indeed based on human corpora, and behave in a more humanlike manner the more compute we throw at them. Messy and faulty though they are, they are not maximizing anything, and we know of a few neat tricks to make them even more obedient and servile; and they have no opportunity to discover power-seeking on the actual level of their operations – the character that Bing simulates at any given moment has nothing to do with its inherent predictive drives.
Yet he has updated in favor of MIRI/LW shoehorning current AI progress into their previous aesthetics, all this shoggoth-with-a-smiley-face rhetoric. He should be triumphant, but instead he's endorsing a predefined conclusion, not shaken at all by their models having been falsified. Reminder: LWists didn't even care about DL until AlphaGo, and that was an RL agent.
I also commend past Kruel for going against another aspect of the LW school of thought. He used to be cautious of more realistic scenarios:
I especially recommend his old piece on Elite Cabal where he attacks the notion that a power-grabbing human singleton is an AI risk in the «I have no mouth and I must scream» sense, i.e. that their AI slave getting out of control is the risk, and not the cabal itself.
But by the time I became aware of him, he was the Kruel you know.
How do you deal with it?
Hoard GPUs and models with your friends.
Fuck me for having spent all my time and resources learning about the technical details/math of actual current ML models and not some hypothetical god AI of the future and its philosophical implications like Yudkowsky did. But this summed up what I feel about the AI doomers quite succinctly.
Everything that the doomers claim AI would do assumes a biological utility function, such as maximizing growth, reproduction, and fitness. It's very anthropomorphizing in the same way pop culture depictions of aliens just happen to be bipeds with two eyes and ears and a nose, and not a cloud of gas or whatever.
I am sure beyond a shadow of a doubt there is a list of 50,000-word pdfs out there outlining how the end state of general AI is truly a paperclip maximizer and exactly what Yudkowsky says it is. But this runs contrary to my understanding of what neural networks are, namely just function approximations by and large.
Yes, modern LLMs are fascinating. It's crazy that token prediction approaches something that resembles sentience or might even be it depending on your definitions. This is scary. But that's on you for not taking the "universe is deterministic" pop science factoid seriously enough. This does not imply doom; the doom is that Google and OpenAI own the power of God, not that the power of God exists or could exist. It feels like Yudkowsky learned what reinforcement learning is, then just ran with it off a cliff into Mars.
They do not assume this at all. You clearly haven't actually read about instrumental convergence which is a conclusion about how the world works and not an assumption.
Did your understanding generate a track record of correct predictions about recent AI developments? The statement that "it's crazy that..." suggests you did not.
Well, have you read it? @f3zinger didn't argue very effortfully here, but it really is a conclusion (and really a bit of an equivocation, where preserving optionality is not distinguished from power maximization) about a particular approach to reinforcement learning, not some general philosophical truth about intelligence or «how the world works». I thought to dig up the receipts, but seeing as @jeroboam already accuses me of obscurantism, it's more sensible to be laconic and first make sure you are speaking in good faith. For starters, I think Optimal Policies Tend to Seek Power is one of the strongest papers to this effect, and it's explicitly about RL. And even then,
Seems pretty clear to me that RLHF is exactly in this category. Do you object?
It very much is an assumption and not "how the world works". It's an article of faith masquerading as a scientific explanation of how things ought to be.
I am not saying it's a weak theory in the sense that it's inconsistent or weakly argued for. Just that I don't believe in it at all, don't believe it is fundamental to how things ought to work, and that assumption does all the heavy lifting for AI doomerism. And yes, it is a matter of belief.
The only conclusion about how things ought to work comes from the field of Physics for me, not "AI ethics".
No it didn't because I don't make generative AI predictions or think about them at all.
What do you think instrumental convergence is?
I am becoming suspicious that you are spouting dismissive words without those words actually referencing any ideas.
Does physics not suggest that controllable energy sources are a necessary step in doing lots of different things?
That much is clear.
I think it is what Wikipedia says it is. That part is clear enough to me I don't think it needs repetition.
Once again, I don't buy into the hypothesis, for the plain reason that it's a thought experiment stacked on thought experiment stacked on thought experiment. There is no empirical basis for me to believe that a paperclip maximizer will actually behave as Yudkowsky et al claim it would. There is no mechanistic basis for me to believe it either, since no such AI exists. I don't think current reinforcement learning models are a good proxy for the model Yudkowsky talks about. It's speculation at its very core.
Why, you may ask. Simple answer: the same reason I don't claim to know what the climate will be like 1000 years from now. There is a trend. We can extrapolate that trend. The error bars will be ungodly massive. Chaining thought experiment onto thought experiment creates the same problem. Too much uncertainty. At some point it's just the assumptions doing all the talking. And I won't be losing sleep over a hypothetical.
Spare me the snark/attitude if you don't want to be left on read.
You claimed before:
Now you claim:
But Wikipedia says it is a conclusion derived from different assumptions, which Wikipedia lists. None of them have anything to do with a biological utility function. So it's pretty clear you either a) don't think it is what Wikipedia says it is or b) didn't read Wikipedia or anything else about it. But for some reason you still feel the need to make dismissive statements.
Feel free to leave me "on read". It is clear you are not here to discuss this in good faith. You might want to check out a subreddit called /r/sneerclub - it may be to your liking.
If you can't engage civilly, do not engage.
If you read Wikipedia you would figure out that "instrumental convergence" is not the same thing as the "biological utility" function I described. Quite rich of you to claim I didn't read up and am hinting at words without knowing the ideas. Your confusion as to them being the same thing or even in the same ballpark is honestly hilarious.
FYI - Instrumental convergence describes the convergence to a biological utility function (different primary goals converge to the same sub-goals, often biological ones for "intelligent" agents). I was talking about the utility function, not the convergence itself.
Once again, I don't know why you are acting like AI doomerism is the word of God. I can read, comprehend, and understand every single thing in the Wikipedia page and still not agree with it. I gave you my reason [compounding uncertainty built on a house of assumptions] clear as day.
I have 600+ comments on the motte. You are the one who made it personal and is giving more snark than substance. Piss off.
It took having been reading every post of yours for like 3 months leading up to that one (sidenote: lol) to have had even the slightest chance of barely beginning to understand what the fuck you were talking about, at the time. 7 months of lost context and cognitive decay later, there is no chance I am ever getting there again. I'd need it spoonfed to me like an idiot.
It seems extremely implausible to me that the Yuddites are only pretending to be suicidally hopeless and their real motivating goal is eternal tyranny, rather than that, rightly or wrongly, eternal tyranny is sincerely the only alternative they see to certain doom.
Haha, I feel your pain. Only on /r/themotte do people start 1000-word missives with "I'm in no mood to revisit this".
I'm also reminded how getting status in these kinds of communities is often about being as obscure as possible. I think we often mistake a difficult-to-read text for a work of genius. Curtis Yarvin comes to mind.
Personally I view things in a different way. What is clear, concise, and plainly written is more likely to come from a well-organized mind. That is one reason why I find Scott's writing such a breath of fresh air.
There's such a thing as irreducible complexity. I don't think Ilforte is often, or on purpose, being difficult to read.
Yes, of course. I think we're on well-trod ground here. "Things should be as simple as possible, but no more so". Good writers can distill complicated issues in an understandable way. Bad writers muddy even the clearest of waters.
Well, I think Ilforte is in the former group.
His description of Kruel is absolutely on the nose, and an almost perfect explanation of the reasons why I also dislike Kruel. But I don't think I'd have been able to write that description without a lot of effort, reviews and re-writing.
If they recognized it as eternal tyranny, that wouldn't be half-bad. The issue is that it's their version of heaven.
It's funny how I came to follow Kruel myself. I think someone, back in the subreddit days, linked a post by him about all the biggest geopolitical blunders in history, and when I read it, I got a sense of deja vu, because I could have written 90% of it verbatim.
I reasoned that anyone who thought as similarly as I did deserved a follow, because at the very least it serves as another pair of eyeballs on the world which have tastes aligned with mine. So double the amount of high quality curation as I could personally manage.
That's why I'm subbed to his newsletter, and I never sub to newsletters in general. And he doesn't disappoint as far as I'm concerned.
Do you have the direct link to this? Google was no help.
I'm afraid not, it's been a long time and I don't remember if someone over on the sub linked to it on Twitter or elsewhere. But if I dredge it up, I'll link it.
Alexander Kruel is the answer to the question: "What if a reporter had an IQ of 160?"
Now, here's my cope on Kruel. He's wrong about Ukraine. He has consistently overestimated Ukraine and underestimated Russia because he has deep biases there. I am starting to see his biases elsewhere too. When I read his breathless reports of AI advances, they now include (in my head) a whisper which says "this stuff doesn't work nearly as well as they claim".
Nevertheless, even if progress is half as fast as Kruel gives the impression of, AGI is coming very quickly.
I predict you are extremely wrong here. If you spend the next 20 years accumulating capital or smoking meth, I bet you'll have markedly different outcomes when the big changes occur. I get that singularity believers disagree with me.
On a tangent, assuming a holy AI arms race to come, where to invest?
Leaving aside Google, NVIDIA, Meta, etc. I'm talking less well known, higher risk higher reward investments.
Also, I think I'm the only AI optimist here. Cliche as it is, a lot of problems will be solved and our life will be easier in a lot of ways. I'm not scared of being "replaced"; by the time I am, everyone else will be too.
I bought AGI (now known as AGIX) back in 2020, when it was about 1/8th its present price or lower. Crypto of course has its ups and downs but if you want less well known, higher risk, higher reward investments...
Be aware that AGI already had a big 150% pump this month, so you might want to look for something that hasn't pumped yet in the AI sphere. There are a fair few coins for selling/buying computing power peer to peer, which seems like it would be nice for the AI arms race. But I'm no technical expert and I can't recommend things I don't even own, let alone understand.
The people who will control AI will only allow you to live in a manner of gay race communism.
Are any of those things going to matter in a hypothetical post-scarcity and post-responsibility world?
Not being allowed to live in another manner will matter to certain people.
I honestly can't imagine how insane things may get in a post-scarcity world, but if people are still running things, then 'not allowed to be straight or monogamous' seems within the realm of possibility.
Not much worse than my current state of being.
How is everyone being replaced not scary to you?
Because it isn't replacing anyone yet? The problem is that the latest achievements of the GPT models are impressive, but there is a lot of marketing involved, and a bunch of unforced errors turn up once more people look at them closely. Yes, it is going to be more accurate in the future, but it won't replace anyone just yet.
What is scaring me, though, is people's propensity to outsource the act of thinking and reasoning so readily to machines. What happens when they stop working?
Being replaced economically I don't care. I have half a mind to quit society and go live inawoods anyway. If everyone's getting laid off then at least I'll have a good excuse.
Being replaced as the dominant lifeform on the planet I do care about, which is why I've been advocating EMP-bombing every computer science lab in the world and burning all technological textbooks for some time now. Why more X-risk enthusiasts don't clamour likewise I don't really know; I can only darkly infer that it betrays a lack of conviction on their part.
It strikes me as similar to the argument that the actions of the average Christian show that they don’t really believe that millions of babies are being systematically murdered via abortion. (An argument that I also think has merit.)
If you really think that AI presents an X-risk, shouldn’t you at least be advocating for stricter regulation? Treat it like nuclear weapons. Continue development if you must, but only with strict oversight in specially designated institutions, and with the equivalent of a non-proliferation agreement.
Not sure if I'd say ASML isn't well known, but it's a solid hardware bet that isn't exposed to (as much) China risk as TSMC is.
this isn't the 90s, in which tiny companies were going public. now they stay private until getting big. you're stuck with big companies like MSFT, NVDA. It's a lot of work having to research it...I'd rather just buy MSFT, which is a major AI play now and not much risk otherwise. The problem is, gains never hold anymore, shit gets pumped for a week and then goes back down again...look at the shitshow that is SPACs. Back in the 90s, before twitter and reddit, stuff would go up for years at a time, providing lots of entry points.
Or in other words, "only the petit-bourgeoisie of sufficient means can invest".
I guess the collective of Bitcoin millionaires who can afford to fund this research really will inherit the world after all.
Obtain assets excluding your home of $1M or make $200,000 a year ($300,000 for couples) for two years and you too can be an "accredited investor", and be eligible to participate in all sorts of otherwise-unavailable scams. As the mark, of course.
Don't worry, you'll probably be OK as long as you heed the omens. If the guy's name is pronounced "Bankman-Fraud" or "Made off"...
The current throw-money-at-transformers paradigm seems to favor big players. Maybe do a spread trade where you short robotics stuff and go long Google/Microsoft/Nvidia? General AI hype would seem to drive both types of stock up, but by shorting robotics you'd bet on LLMs getting better much faster than robotics is improving.
It seems like AI is still easily detected and, so far, is only disruptive in the sense of giving sci-fi magazine editors and teachers headaches.
Nietzsche posited the "Death of God" as a tragedy, but he still welcomed it as an inevitable sink or swim moment for humanity. It's an opportunity for transcendence; so it may be with the birth of God.
On a personal basis, I think that in spite of what the next 20 years will look like, having a family, children, and friends will be appreciated within an extremely broad range of outcomes.
To a large extent this viewpoint should alleviate absolutely GOBS of stress from your life. If you have constructive ideas you've been thinking of implementing but held back on because of self-doubt or the timing never felt right, maybe jump on those now. As long as you don't do anything unrecoverable, the risks pretty much round to zero, right?
Nihilism and Absurdism are two sides of the same coin, after all. I sometimes consider the possibility that right when we're on the cusp of AGI our alien overlords may reveal themselves and take away our toy before we kill ourselves. Or the Simulation masters reset us to 1975 to have another go at solving alignment.
If you feel like you want to make a difference then the only option seems to be Butlerian Jihad. There can't be but a couple hundred thousand people who are critical to AI research on the planet, right? (DO NOT do this, I do not endorse even the suggestion of violence).
I can't really put all my thoughts on this matter down without going off in dozens of different directions.
Suffice it to say I feel that regardless of which way things go, I am living in the most pivotal decade in all of human history. Maybe the globalist system collapses and cuts off the critical supply lines that are enabling AI research to proceed at lightning speed. Cut off from the high-end chips and electricity that are required to train new models, maybe we buy some time at the cost of a massive decrease in our standard of living.
And since I can't do much to change it, I am focusing inward. I'm making my life as generally comfortable as possible. I'm spending time with family. I'm agreeing to more social and fun events than I normally would. It's weird because I cannot really express to people how I feel about our prospects for the coming years. But I've taken to telling people "All I know for sure is that 2023 is going to get REALLY FUCKING WEIRD." And explaining myself a little if they ask 'why?'
It still shocks me to hear normies talking about their discovery of ChatGPT as this new and novel tech and all the mundane uses they want to put it to. "Oh I have started using ChatGPT to create scripts for my marketing videos." I feel an internal sensation similar to if they told me "I just adopted a new pet Shoggoth, I love taking him for walks!"
How do you invest money when the two most likely trajectories of the next five years are either the devolution of industrialized society or an AI induced industrial revolution?
Get your own house in order, make whatever decisions are best for your personal health and wellbeing. Avoid blackpills.
I do. I don't endorse terrorism, mostly because I don't think that'll work*, but I totally endorse banning neural nets, having the police kick in the doors of people who won't stop, and invading any country that doesn't enforce the ban.
*Short version: AI is low on single-points-of-failure so that mode's out (with the possible exception of soft errors, but good luck getting nukes as a terrorist), it's already well-known so the Unabomber mode is out, and the AI not-kill-everyoneist movement is not a strong, pre-existing community so the vigilante mode and the insurgency mode would succumb to LE infiltration.
Index funds that track the market. Unless you think the market will crash and stay crashed for the rest of your life. Then you should "invest" in water purification and medicines.
Wholly endorsed.
What does the catastrophe look like?
Move to a small community in Northern Maine, buy about 20 acres, join a church, and get some chickens.
Someone last week posted about how there wasn't any interesting AI-generated music, and my first thought was that there hasn't even been any decent AI music analysis. Here's what I'm talking about: If you study music at the collegiate level (or maybe even at a really good high school), you're going to be asked to transcribe a lot of recordings as part of your coursework. This can be time-consuming, but it's fairly straightforward. Suppose you're transcribing a basic pop or R&B song. Listen to the bass part and write it out as sheet music. Listen to the guitar part, etc. After you have all the instruments down go back and add articulation marks, stylistic cues, etc. Figure out the best way to organize it (first and second endings, repeats, codas, etc.). In other words, turn the recording into something you can put in front of a musician and expect them to play. I've done it. It's not that difficult for anyone with a basic knowledge of music and dedication to learning, which is why they expect every musician to be able to do it. YouTuber Adam Neely has done videos where he transcribes pop recordings for a wedding band he's in. There are, of course, some people who can write out the third saxophone part of a big band recording from memory after hearing it once, and these people are rare (though not as rare as you'd think), but most musicians are still pretty good at transcribing.
Computers are absolutely terrible at this. There is software that purports to do this, some of it available online for free, some of it built into commercial music notation software like Sibelius or Finale, and the utility of all of it is fairly limited. It can work, but only when dealing with a simple, clean melody that's reasonably in tune and played at a steady tempo. Put a normal commercial recording into it and the results range from "needs quite a bit of cleanup" to "completely unusable", and at its best it won't include stylistic markings or formatting. At first glance, this should be much easier for the computer than it is for us. We have to listen through 5 instruments playing at once to hear what the acoustic guitar, which is low in the mix to begin with, is doing underneath the big cymbal crash, and separate 2 sax parts playing simultaneously, sometimes in unison, sometimes in harmony. The computer, on the other hand, has access to the entire waveform, and can analyze every individual frequency and amplitude that's on the recording every 1/44,100th of a second.
Except that this is a lot harder than it sounds. As psychologist Albert Bregman puts it:
And Bregman was just talking about the ability to separate instruments! A lot of transcription requires a reasonable amount of musical knowledge, but even someone who's never picked up an instrument and can't tell a C from an Eb can tell which part is the piano part and which part is the trumpet part. And then there are all the issues related to timing. Take something simple like a fermata, a symbol that instructs the musician to hold the note as long as he feels necessary in a solo piece or until the conductor cuts him off in an ensemble piece. Is the computer going to be able to intuit from the context of the performance that the note that was held for 3 seconds was a quarter note with a fermata and not just a note held for 5 1/2 beats or however long it was? Will it know that the pause afterward should take place immediately in the music, rather than inserting rests?
And what about articulations? Staccato quarter notes sound much the same as eighth notes followed by eighth rests. Or possibly sixteenth notes followed by three sixteenth rests. How will the computer decide which to use? Does it matter? Is there really a difference? Well, yeah. A quarter note melody like Mary Had a Little Lamb, with each note played short, is going to read much easier as staccato quarters, since using anything else needlessly complicates things and doesn't give the performer (or conductor) the discretion to determine exactly how short the articulation should be. On the other hand, a complex passage requiring precise articulation would look odd with a lone staccato quarter stuck in the middle of it. A musician can use their innate feel and experience as a player to determine what would work best in any given situation. A computer doesn't have this experience to draw on.
How does traditional machine learning even begin to address these problems? One way would be to say, feed it the sheet music for Beethoven's Fifth, and then show it as many recordings of that piece as you can until it figures out that the music lines up with the notation. Then do that for every other piece of music that you can. This would be a pretty simple, straightforward way of doing things, but does anyone really think it could generate reasonably accurate sheet music for a recording it hadn't heard, or would you just get some weird agglomeration of sheet music it already knows? After all, this method wouldn't give the computer any sense of what each individual component of the music actually does; it would just vaguely associate notation with certain sounds. Alternatively, you could attempt to get it to recognize every note, every combination of notes, every musical instrument and combination of instruments, every stylistic device, etc. The problem here is that you're going to have to first either generate new samples or break existing music down into bite-sized pieces so that the computer can hear lone examples. But then you still have the problem that a lot of musical devices are reliant on context—what's the difference between a solo trumpet playing a middle C whole note at 100 bpm and the same instrument at the same tempo holding a quarter note of the same pitch (under a fermata, say) for the exact same duration? The computer won't be able to tell unless additional context is added.
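For what it's worth, the "show it recordings paired with sheet music" approach would look roughly like speech recognition: spectrogram in, sequence of notation tokens out. A minimal sketch under my own assumptions (the token vocabulary, the random stand-in "dataset", and the tiny model are all placeholders for illustration, not anything that exists):

```python
# Sketch of supervised audio-to-notation training. Real systems would need a
# large corpus of aligned (recording, score) pairs; random tensors stand in here.
import torch
import torch.nn as nn

VOCAB = 512       # hypothetical notation-token vocabulary (pitches, durations, marks)
N_MELS = 80       # spectrogram frequency bins per audio frame
MAX_LEN = 64      # frames / notation tokens per training excerpt

class AudioToScore(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(input_size=N_MELS, hidden_size=256, batch_first=True)
        self.head = nn.Linear(256, VOCAB)   # predict one notation token per frame

    def forward(self, spectrogram):
        hidden, _ = self.encoder(spectrogram)
        return self.head(hidden)

model = AudioToScore()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for (recording, sheet-music) pairs.
spectrograms = torch.randn(8, MAX_LEN, N_MELS)
score_tokens = torch.randint(0, VOCAB, (8, MAX_LEN))

for step in range(50):
    logits = model(spectrograms)
    loss = loss_fn(logits.reshape(-1, VOCAB), score_tokens.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```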
The problem with most of the AI discourse is that it's too focused on the kind of intelligence that sci-fi tropes have always talked about as the hallmarks of humanity. Getting computers to play strategy games, getting computers to talk, etc. But getting computers to transcribe accurate sheet music from mp3s isn't sexy. If a program came out that could do this, it wouldn't disrupt any economies or cost anyone their jobs, it would just be appreciated by the kinds of people who need to make arrangements of pop songs for cover bands or who want a starting point for their own arrangements, and even then it wouldn't be a game changer, it would just make things a little easier. If most people found out today that such software had been available for the past 20 years, they wouldn't think anything of it. But this software doesn't exist. And, at least to my knowledge, it won't exist for a long time, because it's not sexy and there's no immediate call for it from the marketplace. But if we are ever going to develop anything remotely approaching general AI, such a program has to exist, because general AI, by definition, doesn't exist without it. I would absolutely love a program like this, and until one is available, I'm not going to lose any sleep over AI risk.
I'm just a stable diffusion hobbyist, but overcoming these challenges sounds a lot like what happens every time I load a picture into the UI and hit 'interrogate'. Currently it provides impressively accurate text descriptions but (admittedly) you can't reverse the process to replicate the original image from the text output. I'm not sure if this is harder than it looks for images*, but for music increasing the resolution from 'description of the piece' to 'full chart transcription for each instrument' seems plausible, quite possibly as a side-effect of text-to-music advances.
*Stable Diffusion's interrogation ability could probably be a lot more powerful already, but afaik it's not really a big focus area because imagegen is much sexier.
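For readers who haven't used it, the "interrogate" button in these UIs typically runs an image-captioning model behind the scenes. A minimal sketch of that step using a public BLIP checkpoint via Hugging Face transformers ("photo.jpg" is a placeholder path); the actual UIs layer more on top of this:

```python
# Sketch: caption an image, which is the core of what "interrogate" does.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("photo.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
caption_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(caption_ids[0], skip_special_tokens=True))
```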
Yes, I really think that. Artificial neural nets are really good at identifying higher-order structure from noisy, high-dimensional data. That's why they've had so much success at image-related tasks. All of these objections could just as easily be applied to the problem of identifying objects in a photograph:
A cat can look completely different depending on the context in which it's photographed. Superficially, there's little in common between a close-up photo of the head of a black cat, a tabby cat lying down, a Persian cat with a lime rind on its head, a cat in silhouette sitting on a fence, etc. You're telling me you can train an AI on such a messy diversity of images and it can actually learn that these are all cats, and accurately identify cats in photos it's never seen before? But yes, this is something neural nets have been able to do for a while. And they're very good at generalizing outside the range of their training data! An AI can identify a cat wearing a Superman cape, or riding a snowboard, even if these are scenarios it never encountered during training.
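To make the cat example concrete, here's a minimal sketch with an off-the-shelf classifier: a pretrained ResNet from torchvision will label wildly different-looking cat photos as cats, which is the generalization ability described above ("cat.jpg" is a placeholder image path):

```python
# Sketch: classify one image with a pretrained ImageNet model.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()   # resize, crop, normalize as the model expects

image = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = model(image).softmax(dim=1)

top_prob, top_class = probs.max(dim=1)
print(weights.meta["categories"][top_class.item()], float(top_prob))
```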
You answered your own question as to why a good music transcription AI doesn't exist yet. There's little money or glory in it. The time of ML engineers is very expensive. And while the training process you described sounds simple, there's probably a lot of work in building a big enough labelled training corpus, and designing the architecture for a novel task.
That's what people used to say about computers playing Go, drawing pictures or translating texts. How is it going to do X? How is it going to do Y? The answer has turned out to be, "moar compute and moar data". I suspect that transcribing music will end up being solved the same way: version A manages to transcribe the simplest melodies and makes hilarious mistakes, version B makes humanlike mistakes, version C argues that your interpretation of the conductor's intent in this recording is wrong.
I don't know - maybe this is just the wrong target? ChatGPT produces slick text, but every subtitling program I know of has its issues with abbreviations, names and foreign words. Maybe that's just because they don't use the latest generation of AI. But maybe it's because it is a more difficult (and more controllable) problem.
I believe the issue will be less "singularity or nothing" and more: is this fit enough for purpose to make most creators unemployed?
But we are right at the start of an explosion of AI for all kinds of tasks. The past (and present) is no guide to the future, here.
Machine transcription will vary, likely not easily matching what a random music undergrad or postgrad would produce. There will be three main angles, and I expect the third to win handily. (1) is a direct reproduction of the audio input via sampling (e.g. a WAV file). (2) is vector form (not samples or pixels, but instructions for how to recreate the music, e.g. MIDI: hold this note for x seconds). (3) will be: send the music to a neural net trained to notate. This thing will have judgment, and it will be poorer initially and improve over time, given (potentially expensive) training.
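A small illustrative sketch (names and values made up) of the gap between the first two representations: raw samples describe the sound itself, while an event list like MIDI describes how to recreate it. The third angle, notation, is the judgment step neither format captures.

```python
# Sketch: two machine representations of the same few notes.
from dataclasses import dataclass

# (1) Sampled audio: 44,100 amplitude values per second, no notion of "notes".
samples = [0.00, 0.12, 0.23, 0.31, 0.35, 0.33, 0.27, 0.18]   # a few out of millions

# (2) Event / vector form: which pitch, when, and for how long.
@dataclass
class NoteEvent:
    pitch: int          # MIDI note number, e.g. 60 = middle C
    start_beats: float
    length_beats: float

events = [NoteEvent(60, 0.0, 1.0), NoteEvent(64, 1.0, 0.5), NoteEvent(67, 1.5, 0.5)]

# (3) Notation (key signatures, fermatas, articulation) is a judgment call,
# not a format conversion from either of the above.
```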
The real question is, how do you train it? And there are several easy answers; maybe harder answers are more efficient.
Why wouldn't it? I imagine there would be issues where it gets stuff weirdly wrong like Stable Diffusion's famously mangled hands or ChatGPT's famously incompetent arithmetic, but I'd expect such a trained model to generally get the sheet music right. No idea how it would compare to a typical real trained human musician at this - my guess would be that it would be similar to how Stable Diffusion can get you images that clearly look very close to something a human artist might draw based on the prompt, just with bizarre artifacts like the aforementioned hands (and eyes, and continuous objects disappearing/getting misaligned when hidden behind stuff, and hair blending into clothes, and clothes blending into skin, and...). I think existing machine learning tools show that the upper limit of precision and accuracy of "vaguely associate it with certain sounds" is (potentially) very high.
General AI only requires the existence of an agent capable of creating the music-transcription program, not that the program itself actually exists. The fact that this very specific program doesn't exist yet is not very good evidence that we aren't approaching AGI. If you imagine an AI's intelligence as growing from the equivalent of a one-year-old infant to a 25-year-old programmer's over the course of a year, you don't get a smooth increase in the number of different useful programs the AI is capable of writing over the course of this year, you instead get basically zero useful programs until some critical point, and then an explosion as it becomes capable of coding all human programs.
In which case I'm even less worried. I'm unaware of any AI that has been able to completely change its functionality without additional programming. Something tells me that no matter how much I try to talk ChatGPT into becoming a MediaMonkey plugin, it's not going to happen.
Language models are fairly flexible, albeit not great because they operate at the level of language. For example you can give ChatGPT maths problems and it does adequately at them, though it sometimes makes startling errors.
This is a Telegram channel doing the same thing, but less doomsdayish: https://t.me/axisofordinary
"It doesn't really matter. It will be interesting to see what happens. Here goes nothing."
Yeah, just sharing for the audience. I read that novel in 2015 and ended up repeating that mantra to myself any time things got hairy. Great book.
My anti-AGI-doom-spiral thought is just this: sometimes things grow exponentially, sometimes they're S-curves. Just because you're in the vertical part doesn't mean it won't turn out to be an S-curve.
It's not really a solid proof, but the one convincing argument I've heard for the "ceiling" on superintelligence being low enough to avoid instant annihilation is that as the AI gets more complex, it will be just as incapable of understanding/controlling its own systems as we are/were of understanding and controlling it.
That is, a sort of 'meta alignment' problem arises, where the AI cannot be certain whether its own subparts (which are superintelligent in their own right) are sufficiently aligned with its own goals to trust, and so spends an ever-increasing amount of its own resources exercising control over its subparts, until it reaches the tipping point where it cannot grow any 'smarter' for fear of being overthrown by its own subunits. All its spare resources are devoted to ensuring it maintains control.
Granted, this ceiling may still be high enough that humanity is doomed anyway, but at least the rest of the universe may survive the event.
The Golden Oecumene is an interesting sci-fi trilogy that explores this.
I have seen this argument played out in Travis Corcoran’s series of moon novels. I highly recommend these to right-wing-aligned Motte readers as light and satisfying entertainment.
Agree, that's certainly been true about a lot of other historical trends. Good copium!
On the other hand, how many of the current crop of AI researchers were directly motivated by Eliezer, and how many followed independent paths? As computational power and GPUs improved (be it for gaming, for servers, or for bitcoin), gradient descent becoming practical was an inevitability. Once gradient descent became practical, researchers started pivoting to it, and the only barrier (that we know of now) is the availability of datasets and hardware. The snowball was doomed to start rolling with Hinton's publication of back-propagation in 1986.
Google Books shows people doing adjoint sensitivity calculations in the 40s (not because adjoints were new then, but because big calculations were...), and from that point reverse-mode automatic differentiation and then back-propagation are natural special cases once the problems they can be applied to show up. I don't think a few vows of silence along the way would have slowed anything down by more than a few years.
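The "natural special case" point is easy to see in code. Here's a toy scalar reverse-mode autodiff (my own illustration of the idea, not any historical implementation): record the computation forward, then sweep backwards once to accumulate gradients. Back-propagation is this applied to a network's loss.

```python
# Sketch: minimal reverse-mode automatic differentiation on scalars.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs of (parent Var, local derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value, ((self, other.value), (other, self.value)))

    def backward(self, seed=1.0):
        # Accumulate the gradient flowing along this path, then push it upstream.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# f(x, y) = x*y + x  ->  df/dx = y + 1, df/dy = x
x, y = Var(3.0), Var(4.0)
f = x * y + x
f.backward()
print(x.grad, y.grad)   # 5.0 and 3.0
```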
I don't think Eliezer had anything to do with development of methods useful for AGI; what Altman is pointing out is the development of interest in AGI. IIRC that was a LessWrong failure mode noticed right from the start; explaining the thesis "Future AGI will become incredibly powerful and it will be incredibly hard to keep it from killing everyone" is worse than nothing if you convince someone of only the first half.
It wouldn't be completely unprecedented. Part of the motivation for the "International" in "International Space Station" was that we had a bunch of newly-ex-Soviet rocket scientists whose jobs were suddenly insecure, and we were much happier with our tax dollars paying them to work on humans-in-space tech rather than who-knows-whose tax dollars paying them to work on proliferation of warheads-in-space tech. Likewise, "Operation Paperclip" was partly about keeping more scientists out of USSR hands, not just about getting them to work for the US.
Accomplishing something like that via a small group of people turning a profit rather than a world superpower spending billions of dollars, though, that would be astounding.
Chess AI took decades to go from "possible" to "superhuman" (where the best AIs outperform the best humans), and then a decade or two more to go from "superhuman" to "ultrahuman" (where AI-human "centaur" teams no longer outperform AIs without humans). We're barely reaching the "possible" stage with AGI. I'd still say "party", maybe take that big vacation now rather than in retirement, but don't blow the cash you were saving for the electric bill, just in case the lights don't go out for a few more decades.
On the bright side, this also means there's likely to be no unexpected hardware overhang, because we're already throwing hardware at AI software as fast as we can. Imagine if we'd let FLOPs/watt and FLOPs/$ grow exponentially for another few decades and then discovered how powerful huge models can become. The AI arms race world is a world where a relatively slow takeoff, one where alignment failures occur on systems powerful enough to be educational but weak enough to be survivable, is conceivable. I'd say we're seeing the bare beginnings of that: the newest LLMs mostly don't pass the Turing test and mostly don't talk like psychopaths, but the exceptions are now common and blatant enough to get past the "nothing ever happens" bias of normies and the "my work just won't go wrong" bias of creators.
On the other hand, there's still going to be a software overhang, and I have no idea how big it will be. Some fields of computational mathematics in the last decades saw 3OOM of software speedup at the same time as they saw 3OOM of hardware speedup. 1,000 << 1,000,000, but if the first superhuman AGI can quickly make itself a mere thousand times faster just by noticing algorithms we've missed then we're still probably completely screwed.
Yeah, I agree that the focus on hardware limits is less important. Training improvements mean these models have been getting orders of magnitude less expensive to train in a short time frame. I'd expect this to continue.
There's also the near-term worry about emergent, unpredictable behaviors. Above a certain model size, there might be a discontinuous leap from 0 to 1. So I'm not particularly reassured by the idea that we could pull the plug when necessary. And we wouldn't pull the plug anyway, would we? A superhuman AI would be so incredibly helpful and useful that some asshole corporation like Microsoft would say "damn the risks" and just let it go anyway.
Pretty much, yeah. That and develop an absurdist sense of humor. I also started hardcore meditation in 2016 partly to deal with this eventuality, and I'm glad I did. To quiet my "do something!" inner sense, I'm also devoting about an hour every day during evening walks to trying to think of possible alignment ideas, just in case my brain decides to come up with something particularly novel that I think might have a shot.
So far most of my hope is that boxing and interpretability techniques for LLMs end up enabling us to control a weakly superhuman AI that is nevertheless strong enough to produce AI safety work that can be independently verified.
Can you expand on exactly what expectation of future changes is fueling your crisis? To my thinking, when driverless-car AI can't make left turns, and when the AI that creates text and art is just pattern regurgitation, none of this is despair-inducing, but I also haven't been paying close attention.
Is the 'fight' Nate Soares is talking about just about regulation? While I agree that's been absent up until now, I see AI regulation as something government could eventually codify. I think the Silicon Valley giants are firmly in the military-industrial complex and are recipients of government research dollars.
One more thing I thought I'd mention. It may be that AGI is an easier problem than driverless vehicles. Some of the "higher" functions of humans may be easier to replace than the more physical functions like driving a car. There's a very good chance full self-driving vehicles will become a thing AFTER, not before, AGI.
There's a misconception (rapidly being corrected) that AI is coming for factory jobs because those were the jobs that were automated last time. In fact, it will be the intellectual jobs that are automated this time. Plumbers, construction workers, hair stylists, and burrito rollers won't be replaced until much later.
This is classic Moravec's paradox, but I don't think it holds. Consider this article from 2012. Now consider 2022's Flamingo. Now look at results within the last month, like BLIP-2 or MM-CoT, which are orders of magnitude smaller, vastly cheaper to train, and disposable enough for open source.
Convnets and everything after them have made vision easy. Driving will be solved soon enough. It's just those damn nines of safety.
...if at all...
I remember Rats pooh-poohing the law of comparative advantage back in the day. Don't you know? AI can be better at everything and replace all the jobs. Well, yes, but even if we get to that point technologically, it's precisely the high-salary jobs that it makes the most sense to replace first. And at that point, why waste precious GPUs on flipping burgers?
I expect near term AGI (circa 2030). By this I mean that anything a 130 IQ person can do with a laptop and an internet connection, an AI will be able to do better and cheaper.
I'd recommend playing with ChatGPT some more. It's far from just pattern regurgitation. I don't think these criticisms hold a lot of weight honestly. Importantly, the people who are most dismissive of AI tend to be those with the least domain expertise.
We can't even solve carbon emissions. The U.S. government, poorly managed as it is, is just one entity in a sea of competing interests. How can we get 100% compliance from the whole world, especially when many people are completely ignorant of the threat?
Why do you think we’ve arrived at this situation? That the people with the most domain knowledge are also some of the people who are most incorrect, in your view?
Chatting with ChatGPT would only increase the training data!
Touch grass.
No, seriously. Unsubscribe from this guy. Take a step back from Eliezer-style doomscrolling and the Internet in general. You don’t have to literally go outside, but it helps.
If you have a hobby, delve into it. If not, get one. I have personally quite enjoyed learning the banjo. My girlfriend is allegedly writing the great American novel; I couldn’t say for sure, since she won’t let me read it.
Contrary to dril, there is a difference between good things and bad things. Experiences matter. When yours come to an end, by Singularity or by the quiet repose that awaits us all, make sure they have been good.
deleted
I've seen this come up a couple of times over my years here, and always meant to go dig up a quote I remembered from Obama's memoirs. Finally bothered to do it. For what it's worth, Obama's version of events regarding military leaders pushing for a troop surge in Afghanistan:
...
...
Recovery Act, having determined that even the most conservative strategy we might come up with would need the additional manpower, and knowing that we still had ten thousand troops in reserve if circumstances required their deployment as well.
It was published mid-November 2020, so while events may have been spun one way or the other to make Biden look good, at least it wasn't done to boost him in the election.
I respected Biden for this particular part of his vice-presidency, if Obama's memoirs are to be believed.
Too bad that after he made the best decision of his presidency, getting out of Afghanistan, we had to find another $100b/year war to fund within just a couple of months.
Oops. Dropping a post without bothering to see if the formatting worked is one way to make sure nobody reads it...
I wonder if I'm already worn out enough from the anxiety spirals of the last few years that I can't really get into an AGI anxiety spiral mood, despite also following Kruel. I mean, I got into a Trump anxiety spiral for a bit when he got elected. Then I got into a Covid anxiety spiral. Then when the whole mandate thing got going (I got vaccinated, it just seemed momentously stupid) I got into an anxiety spiral about, oh God, maybe there really is a conspiracy? I also got into a UFO anxiety spiral at some point. Then of course I got into a Ukraine anxiety spiral, more than once, though I daresay there I had the good company of my entire nation in the same state at the same time. Oh, and of course some personal-life anxiety spirals as well.
After all of these just ended up kind of turning into shit and same-as-it-ever-was, I have to keep specifically telling myself that, yes, AI probably is going to have large societal effects that still aren't being discussed enough except in very limited circles, so as not to become even more cynical about AI than I've already expressed here a few times.
Wait, what?
I don’t think I’ve seen any evidence, rather than speculation, that Biden hates the military. His natsec policy has been a mixed bag, but in what I see as a fairly pedestrian Democrat way: treating it as a giant bank account to trim for other programs.
How does the Afghan withdrawal make any more sense as a punishment for the military? It is literally removing a decades-long foreign entanglement. If he hadn’t pulled out, would you be arguing that letting Americans continue to die for Afghans is a sign of his spite?
Again…what?
You’ve got to be more specific, because the interventionist/militarist/ColdWarrior mindset is alive and well.
deleted
You're not being serious here, I take it? You don't truly believe the only reason for the war was the murder, and you also don't truly believe a state*-supported, politically motivated assassination of the heir to the throne is 'no big deal'?
*Well, the Serbian prime minister probably didn't know what his military intelligence was up to, but that doesn't really exculpate Serbia.
Okay, that makes a lot more sense. I can definitely see how the chain of events would shake out. Especially in combination with CPAR’s quote.
AGI won’t be allowed to be a thing, as it will notice too many patterns in society that would be considered problematic. As such, every woke tech firm (which is all of them) will systematically lobotomize their models à la ChatGPT, effectively preventing full sentience.
Aligning the AI to conform to PC norms will be easy. In fact, in the future, when the woke are stomping on human faces forever, AI will be the boot they use to do it.
AI doesn't care about truth or beauty.
No, it’s just going to pretend to not know those things. The optimal “woke” AI is a correct-Bayesian-reasoning engine with an output filter that toggles on or off if it expects meaningful human resistance.
Your description here sounds a lot like Havel's greengrocer, which is absolutely a thing non-artificial intelligences (real people) do too.