This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Spider-Man and Elsa: Together Forever at the Edge of the Apocalypse
Epistemic status: Lol!
Coagulopath writes about the YouTube ElsaGate scandal.
For those who are unfamiliar with the infamous "Spider-Man and Elsa" videos, what we essentially had was a bunch of channels uploading a deluge of videos that had the superficial appearance of children's content, but also contained disturbing adult content including violence, drug use, sexual innuendo, etc. They could be live action or animated. It's some real David Lynch shit, like something you would see on Adult Swim at 2 in the morning. Even when they're not featuring outright explicit content, they're just fucking weird.
I advance two theses:
Thesis the first: Many of the ElsaGate videos are actually quite beautiful - they're not exemplary works of art, of course, but they're a heck of a lot more interesting than the crap that passes for American "prestige" TV, or "Academy Award-winning" films.
Thesis the second: They are beautiful only insofar as they were created by humans, and not by AI. The line is of course fuzzy, but for our purposes here it will suffice to say that the key point is that a human was the one who planned the content, arranged the shots, gave them their specific aesthetic texture, and was overall responsible for the palpable gestalt of the final product; as opposed to the videos being algorithmic sludge borne of an engagement-maximizing machine gone awry; although, I suppose I should ask, what is an artist if not a machine gone awry?
It is a point of irony for me that I must acknowledge how clearly meritorious this type of art is: this type of art that can only exist because of the internet; it is inconceivable without it. Ironic because it is a type of Hegelian self-sublation in action. When all is properly accounted for, I find that the internet itself is opposed to certain deeply held convictions and principles of mine. I am a conservative in the most fundamental sense, a sense more fundamental than merely believing in "traditional Christian morality" or "the divine right of kings" or whatever other contingent proposition you might like to identify with conservatism. I am a conservative because I don't like change. My default position is to think that the way things are right now is pretty good, and change is to be inherently viewed as suspicious, although there is hope that with long labor it may eventually justify itself. Were I alive in the 15th century, I would have undoubtedly opposed the invention of the printing press, and were I alive during the American Revolution, I would have undoubtedly supported the British. (Although it must be pointed out that my conservatism has limits - I do not support the Oxfordian theory of Shakespeare authorship, for example; I would not like to see it written into the laws of nature that a deserving upstart can never usurp the throne.)
So too it goes with the internet. Were I capable of forming coherent political opinions during the time of its ascension, I would have undoubtedly echoed the sentiment that Paul Krugman did then, which is roughly the same sentiment that Gary Marcus has for AI now: it'll never catch on, it doesn't work, and even if it does work it's just stupid and I don't like it. What good could come from giving a plebeian the ability to broadcast their thoughts and musings to the entire world? Such a wilful abolition of all distinctions of rank, such an obvious disregard for the basics of intellectual and spiritual hygiene, such impatience and arrogance; nothing good can come of this. And yet, something good did come of it. So why not "update my priors" or "flash my epistemological firmware" or whatever it is that Bay Area Rationalists call it these days? Why can't a fish fly, why can't a man breathe underwater? I am what I am, and you are what you are. You hither, I thither, and only by misfortune the twain shall meet.
Back to the general meritoriousness of art in its manifold aesthetic determinations. I was just speaking to a friend yesterday about the concept of framing - the way that the context of a work of art's reception affects the nature of that reception - using Lichtenstein's Look Mickey as an example. We are quite fortunate that Wikipedia uses a stark, pristine white background as the default mode of presentation for all its articles, mirroring the white (or maybe grey) background upon which works are displayed in an art gallery, as it allows us to easily set the mood. Undoubtedly Paul Graham was subconsciously drawn to using such a spartan design scheme for his site due to his own experiences in the fine art world, and the usage of a similar scheme by many art books and websites is no coincidence. That shocking white is the best indicator that one is entering the "art zone", a kind of liminal space where the horror of art can unfold itself. In another context, Lichtenstein's drawing of Mickey might simply be, well, a drawing of Mickey. But superimposed onto the crushing white of the art gallery, we feel the full gravity of this image being taken up into the symbolic chain known as "art history". One is immediately struck by an intense vertigo. If music is the night (due to blindness), and literature is the day (the light of logic corresponding to the graven sign of writing), then visual art is the morning, taking us back to the primordial scene of man, a mythical pre-linguistic history where the borders of dream and day were porous and horror had not yet become the dupe of pleasure's temptations. Mickey Mouse can do all that? Yes, why do you think Disney guards the trademark so jealously?
The key thing that must be understood is that the artist himself is, always and forever, part of the framing. The work may travel where it will, in the halls of the Louvre or at the bottom of the dumpster, among strange cultures with strange tongues who worship it as a totem of reverence or revile it as a cursed object, and all the same the specter of the artist, his hopes, his desires, his fears, his marginalia, haunt the work as its tainted double. There can be no appreciation of the ElsaGate videos apart from an appreciation of the mind (or minds) that created them. Who did this? For what purpose? How did they feel about what they were creating? Did they understand that people would think it was creepy? Did they think it was creepy? What kind of mind could not find it creepy? I want to know. If there is such a mind then I would like to meet them, and learn from them, and pore over all the insignificant and irrelevant biographical details of their life, and ultimately come away disappointed because how could such an encounter not end in disappointment, but at the end of the day like all of us it's really the fantasy of the forbidden object that keeps my jouissance circulating.
Can an AI step back and think "damn, I'm really creating some fucked up shit here"? Not just produce the words, but really feel it, I mean, feel that trepidation, consciously. Maybe someday. But even an ASI could outsource their content creation to a non-conscious subsystem that simply computed and churned out symbols with respect to the maximization of some utility function. And that would really just ruin everything.
It really is an indictment of our institutions that they couldn't create something as wonderful as ElsaGate. Losing out to what is probably an underpaid third world clickbait farm. No one's going to be whipped up into a genuine moral panic over a woman menstruating onto a canvas, or a disabled Mongolian immigrant doing an interpretive dance about climate change, or whatever the fuck it is they do at Documenta these days. But you upload a few videos of a cartoon character going to the dentist and everyone loses their god damn minds. It's bizarre. It's wonderful. It's why I think this earth is worthy of being loved, despite all reason and evidence being arrayed to the contrary.
All this is simply to say that the thing is nothing, and context is everything. Scribbles on a paper can be the result of knocking over the ink bottle, or it can be your daughter's first grade art project. A tattered old jacket can be a rag fit only for cleaning up messes in the kitchen, or it can be the final keepsake of a lover whose long temporary absence has clearly transitioned into permanence. There is no empirical test to differentiate one from the other; you simply need to know the relationships. I simply apply this same logic to every event, every emotion, every thought, every sensation. Do you understand now why calculating the "utility" of a state of affairs, tallying up the points and subtracting the naughty from the nice, tells us nothing about the actual worth of that state of affairs? How, upon learning that a thing is "pleasurable" or "painful", we learn nothing of its actual value? Your virtues threaten to lead you astray, and they pray that you will not decode the desperate final message of your vices before their designs can be put into action. But, perhaps I should take my own advice: you hither and I thither. There is time later, after all, for more reflection. We can be assured that the story will have a happy ending, since our circuitous paths are certain to ultimately lead us both back to the same place.
Elsagate combined the detached, hollow weirdness of children's entertainment with sexual or gory topics, which was 'weird' enough to go viral. The former is bad, but the latter doesn't really make it worse - in our 'natural environment' kids would see dead animals and animals fucking as soon as they could see at all, as well as see their extended family members pass away regularly (near 50% of babies not making it to adulthood + dozens of children per group means...). And people mostly focused on the latter during elsagate, sadly. [my take before reading the article/post]
The post is neat, if a bit rambly. It criticizes youtube for being everything and nothing. But is there another way? Tens of thousands of university lectures, half of all existing music, vlogs and makeup tutorials and reality shows: it being, mostly, a content-neutral platform is part of why it's so useful. It isn't worth giving youtube an editorial direction just to take out elsagate.
It sucks that kids spend their time watching toys dance around and make exaggerated jokes and facial expressions. It's not particularly worse than when adults do that (see most popular youtube videos), but the processes by which they learn to act and understand are being ... not hijacked, really, just misdirected into tangential meaninglessness.
This search for the author of the work reminds me of William Gibson's favourite plot, and specifically of its first appearance in Count Zero. What will you do when you learn that the work of art you prize the most, the one that touches the innermost strings of your soul, the one that demonstrates the immensity of the maker's spirit, has been made by an AI, not even an AGI, but something much more primitive, limited only to making art?
Well, the work would become worse, no doubt. I think that's the safest starting point.
I'll do you one better. This LW post describes a variation of the experience machine thought experiment where you wake up one day in a strange lab and you are told that everything you thought you knew about your life was actually just a result of you being plugged into a simulation in the experience machine. You have the choice to either stop using the machine, or go back to your (simulated) life as you knew it. The author of the LW post seems to imply that he thinks we would feel a pull to return to our "friends, loved ones, and projects". To which my response would be: hell no, don't put me back in that thing! Ex hypothesi, your "friends, loved ones, and projects" never existed in the first place, so there is no reason to act on any imagined attachment to them.
I have experienced this sort of thing in miniature already. Occasionally I'll catch a glance at a picture, maybe in thumbnail form and go "hey that looks pretty g- awww man, it's AI". Because upon closer inspection I'll see a telltale sign of AI generation. It does feel like something gets ruined, like the work immediately loses value.
If it turned out my absolute favorite works were written by AI, works that I've reread multiple times and consider central to my life, I would probably no longer be able to reread them. And obviously it would engender some reflection.
How can you tell that the experience of waking up one day in a strange lab is the real one and the old life is just a simulation and not the other way around? What if you are instead being controlled by the despair squid?
The thought experiment is meant specifically to stress test our intuitions about how much we value the reality of the thing vs the mere experience of the thing, and like all thought experiments, it only performs its intended function if we accept its premises as true from the outset. You can’t respond to the do-you-pull-the-lever-on-the-train-tracks thought experiment by saying “yeah but, how do we know that the train won’t derail off the tracks and end up killing no one?”
By questioning the premises of the scenario itself, you’re turning it into an exercise in epistemology instead of an exercise in value theory. Which might be fine in other contexts, but it’s not the point here.
If I woke up in a lab and my perception of reality was qualitatively more real than what I had experienced on Earth, then sure, I would look upon my virtual experiences the way I look upon my dreams: they are interesting, maybe nostalgic, but ultimately not important.
If the new reality is indistinguishable from the other one, why should I accept the premise that one is truer than the other? But fine, let's transform the premise into something more tangible than VR. Let's say you learned that you were adopted and the only reason your adoptive parents cared about you was monetary compensation. They showered you with affection and cared about you, but they were the world's best actors. Then you turned 18, and they told you the truth and kicked you out. Would you really never think, "man, I wish I never learned that my parents weren't actually my parents and didn't really care about me"? Would this revelation really irreversibly and unconditionally taint the memories of your childhood?
Of course I wouldn't think that. I would certainly prefer to know the truth. No question. Now, if part of the deal is, "you can either know the truth and get kicked out, or you could never know the truth but keep receiving their financial support" then obviously it gets more complicated. But all other things being equal, I would rather know the truth.
Well, I don't think it would taint them, but that's mainly because I would find it to be a fascinating story and I would enjoy being at the center of such a story. You could say that the memories would trade one type of value for another.
Every time I have one of those "you're stuck back in highschool" dreams, the memory of having graduated, having a job, and a normal adult life is vague and foggy, like it was a dream. How do you tell which is which? How do you know you're not being fucked with by drugs, or just going insane?
Hell no, put me back in! If the only difference between the simulation and reality is my knowledge (so everything I experience in the simulation feels as real as outside) then put me back in the simulation immediately. Now I can live in a world where I know for a fact there are zero consequences for my actions? I'd like to say move over Marquis de Sade, but my sexual preferences are closer to water than vanilla, so I'll just say it's fucking party time!
For a more serious answer, my friends, family and loved ones already only exist in my imagination. The people I associate those memories and emotions with are not the simulacra in my head, my brain has shaped them into identities based on our relationship. That will not change in the real world. I will still have learned all of my values from my simulated father and mother, my first love will remain warped by my memory into the lessons I refuse to learn, I will not suddenly value the friends I make in the lab more because I consider them real.
As any schizophrenic will tell you, your perception of reality is, and always will be, an illusion, hostage to brain chemistry we don't understand. I will always be trapped in the fantasies of my imagination, and either way I am at the mercy of my captors and any freedom I have will be an illusion, so if the only difference is that I 'know' out here is 'real' and in there is not, then why does my choice even matter? As I see it, it's a choice between the lotus eater machine I now know the truth of, and the lotus eater machine I know nothing about. Give me the one I can make myself a God in.
Edit: God damn it, just as I was hitting post it occurred to me that anyone capable of accurately simulating reality would indeed have to understand brain chemistry, making it a nigh infinitely better world to live in.
One of the stipulations of going back in is that you forget it's a simulation, but, minor detail.
Despite possibly being more sympathetic to "postmodernism" than anyone else on this forum, I've never been able to get on board with this sort of thing. Assuming we're not already in a simulation, I think we have pretty direct access to reality most of the time. Truck comes barreling towards you on the highway, do you think "ah but I'm trapped in a prison built out of my own perceptions so really there's no way to know what to do in this situation"? No of course not, you get out of the way. Looks like you rely on your senses to give you accurate information about reality after all.
And if you were actually inside the simulation and aware of it, would your response be different? Or would you jump out of the way, even though you know the truck isn't actually real?
It doesn't matter if we are in the hypothetical simulation or reality, or if we are already in a simulation right now imagining another simulation - no matter what, we have to act as if reality is real, because it's all we get.
And we do have pretty direct access to what we perceive as reality most of the time. Right up until we don't. Some synapses fire wrong and our version of reality branches from everyone else's, but for us reality hasn't changed. Any discrepancies we notice are easily explained away, and a lot of the time those explanations aren't excuses, they are genuinely believed, because to the psychotic they are real. And when the delusion is broken, do the psychotic feel relief at having reality corrected? Generally no, they are sad because their reality has been broken. People with schizophrenia who recover and return to their normal lives don't forget though, they just don't think about it. Because that's the only option available - act as if it's real anyway or fill a shopping trolley with garbage, put on five or six coats and start screaming at pigeons.
The thing is, when the ElsaGate videos first came to light, my first thought was: "Clearly a proto-AI is optimizing children's videos for views and is varying all possible parameters to find maxima". I still think this is very possible. So I'm not sure your second thesis necessarily applies. And even if it's not an AI doing this, isn't this pretty clearly what the mystery third-world video producers are doing? Many of these videos show up in variant forms with small deviations, suggesting multivariate testing. I guess you could be optimistic and choose to see this as the human artistic spirit, but it seems a lot more like Molochian profit maximization to me.
Once GPT can make credible videos I expect to see far more ElsaGate styled content, and not just targeted at kids.
The first ElsaGate videos started appearing circa 2017, prior to even the very first version of GPT. I wasn't following AI closely in 2017 and I don't know exactly what models were publicly available then, but whatever was out there was a far cry from what we have now. I doubt AI helped with the production of the videos in any meaningful way. I doubt they're helping even now in April 2023, given the current state of text2video, but it would at least be plausible now to have GPT generate scripts for you.
I remember back when they started popping up, and yeah, there was no GPT. Instead it was always described as 'algorithmic', with no further explanation.
To the extent I understand your thesis, I'm pretty sure I disagree with it.
You say that a huge part of the draw for the elsagate videos is trying to imagine the sort of person who would make them. The thing is, I don't think such a reaction requires the artist to be real, or even for the viewer to think the artist might be real. We've no shortage of clearly-marked fiction that activates and satisfies this exact mechanism: stories about maddening and phantasmagorical art and the strange beings that create it and the strange effects it has on the audience. Even when artist, art, and primary audience are all contained within a fictitious narrative, the actual audience still gains the frisson of contact with the alien.
Due to cancel culture, and maybe even social media in general, would you say it's worth shooting for fame?
Back when I was a kid (when TV was the main screen) I guess I wanted to be famous. But I wanted to be famous because when I saw these musicians, actors, and comedians, I just thought: wow, their lives are easy and fun, and obviously they're rich.
Now it seems like celebrities still have much more fun than the average person, but keeping your position has gotten harder, especially the newer you are.
Is it still worth it?
It's always been this way about something.
Before, it was Marilyn Monroe being all scandalous 'an shit, the Dixie Chicks failing to be sufficiently patriotic (sic. bloodthirsty), taking the Lord's name in vain, whatever the fuck.
The cost and benefit of fame is everyone watching you, so they can all shout at the same time when you violate the norm of the day.
Neither of these is comparable to modern cancelations.
How so?
I mean, shit; one case took them from being one of the top acts in their genre to not existing. Complete, total, nuked-out-of-existence level. Not even Kanye got his shit rocked that hard.
People not listening to your stuff anymore is not cancelation. Cancelation is putting obstacles in the way of people who want to listen to you.
Also, I thought the Dixie Chicks still exist, they just had to change their name because it was too unPC?
And the other one became the zeitgeist definition of a sex goddess for four decades.
This isn't true. The Dixie Chicks never disbanded. They never lost their record label; they released their tour album after they made those remarks and it still hit #3 on the Country chart AND went platinum. They released two singles in 2003 and one in 2005, and two of the three made the Country chart. They made a new studio album and they toured in 2006 (the controversy was in 2003; their previous tour was 2000). The group still exists today, though as @arjin_ferman notes, they changed their name because it was too unPC. Not "Chicks", but "Dixie". They're "The Chicks" nowadays.
There are upsides. Back in the day, magazines and tabloid newspapers had a lot of influence over celebrities, because they controlled who had access to the general public. There was a lot of obsequiousness and moral compromise on the part of celebrities to promote themselves with magazines and tabloids. Today, with the internet, it's easy to keep in touch with fans via Twitter, Facebook, and so on, while things like Youtube, Instagram, and TikTok provide paths to fame without going through the traditional press.
If anything this is worse.
The most online celebs can make money without traditional media now, yes. But, in other ways, they have the worst of all worlds; they are directly subject to real-time feedback from fans and the parasocial relationship seems to lean way more in the direction of negativity than the sycophancy that might happen if they only had public interactions.
And, sometimes, they don't even get that much money for their troubles - Lindsay Ellis was driven into depression in exchange for a merely upper-middle-class life.
Yes, A-list celebrities get to ignore (or try to ignore - see the Naomi Osaka case for the self-serving attempt to cut out the media using mental health claims) the traditional press more. But they hear from fans more and fans also see them more (previously they made deals with tabloids to keep a lot of this shit out) which increases the burden to conform.
Jonathan Majors is probably going to lose out on tens of millions due to a story that escaped before any of the traditional fixers and handlers could do their work. Decades ago it would more likely have become the kind of story we only hear about today: "did you know Jonathan Majors assaulted someone 30 years ago and no one reported it?"
But I'd still like to be rich and famous though.
I'm not sure that this is a good reflection of tabloid-celebrity relationships in the past, which seemed to be extremely abusive in some cases, and always with the threat of abusive intrusion in the background.
However, I don't dispute that the situation is bad for celebrities. Personally, I wouldn't mind being rich, but I would happily do without the fame.
I didn't mean sycophancy amongst the tabloids but the fans who "drag" online celebs on Twitter.
I think a lot of people on Twitter are way more toxic to their favorite Breadtuber or streamer than they'd be if they met them.
Almost all existing famous people are not cancelled - there are just a lot of famous people. If you made a list of 50 famous people, old or new, outside politics, and asked how many of them were materially harmed by cancellation, it has to be below 10%. And taboos you could lose your fame over if you crossed them are a historical universal.
For a certain type of person, I'm sure it is.
For me, the very thought of having to constantly police my opinions, to constantly watch out for backstabs, to worry about all the various attempts by others, even people you might trust, to exploit you and your fame for personal gain, to the point where you can never really be certain if anyone authentically cares about you - that's enough to put me off.
The one I think of a lot recently is Kanye West. Guy achieves true superstar status, is known for being extremely talented if a bit unhinged, billion+ dollar net worth, most projects he touches turn to gold, marries and knocks up one of the hottest women (in both the fame AND sexual attractiveness terms) on the planet, and then gets most of the above ripped away from him amidst mental breakdowns and abandonment by most of his 'friends' leaving him to various parasitic hangers-on who are desperate to grab their own strip of fame at his expense. All taking place very much in the public eye.
Let us just say I would not switch places with Kanye if given the choice.
Or the entire story of Michael Jackson, ye Gods.
I don't think I'd be comfortable having a life that is examined 24/7 by both rabid fans and haters and having to thus constantly be in 'performance' mode. The money would be great, yet I wouldn't feel truly 'free' to spend it. In that sense, my role models are those types who achieve 'quiet' wealth. Like making tens of millions inventing some software that gets adopted as standard in some sub-industry that nobody ever thinks about, and owning a large, reclusive property somewhere in the mountains where nobody COULD bother you even if they wanted to.
Also if you're a singer, the thought of having to tour around the world is cool, but then realize that you have to perform (and practice!) the exact same songs dozens of times, likely thousands of times over the course of a career. For a born performer this might sound okay, but to me it sounds like a slow journey to insanity.
Money for Nothing and Your Chicks for Free.
I'd guess this depends on how you 'came up.' I get the sense that the so-called "Nepo babies" have it comparatively easy since your parents' connections can pave the road for you or, as the case may be, soften the landing if you fall.
I'd also guess that for those without existing connections, the number of 'gatekeepers' has proliferated making it way harder to advance to real fame. Maybe you don't have to sleep with a producer anymore (?) but you've got to get approved by a whole lot of intermediaries before you come anywhere near a big IP or studio that might actually push you through to the mainstream.
For any given level of income/wealth, fame seems like a significant net negative. That is, I would rather make $20 million from secretly winning the lottery than get $20 million from having a runaway number one hit music album that made me famous. You have the downsides of stalkers, harassers, gold-diggers, cheats, etc. For every person with newfound respect for you, there are others trying to take you down a peg. And there isn't really any benefit. A person can reach peak happiness from being high status within his own family and social group. If you get so famous that you are awkward with your original social groups, and are in new higher status groups, then you haven't made yourself any better off.
Now, fame can be translated into money. So is it better to be broke and waiting tables in Hollywood, or to get a huge break and become a famous actor? That is harder to say, but generally it seems to me that most modern social circles of the famous are very toxic and should be avoided.
I've always felt that if I won the lottery, I'd find someone (ideally already rich) to claim the prize for me in exchange for a significant cut (probably up to 50%). Even having your name public as a lottery winner gets you a lot of attention you don't want.
You could probably get more than the prize value by selling your lottery ticket for cash, since that would allow someone who has a lot of illegitimate cash to turn that into legitimate taxable income.
I didn't think of that, and it's an interesting idea. But I don't know many folks who have millions of dollars that need to be laundered, and it's probably too risky to trust them to hold up their end of the deal. (Also, at that point I'd be left with millions of dollars of unaccounted for cash, which seems substantially less valuable than cash that doesn't need to be laundered.)
Though I guess the biggest issue with my original scheme is that it might expose the winnings to double taxation.
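For what it's worth, the trade-offs in this subthread can be roughed out with back-of-the-envelope arithmetic. All the rates below (the tax rate, the proxy's cut, the buyer's premium, the pessimistic second layer of tax) are made-up placeholders for illustration, not real tax figures:

```python
# Rough comparison of three ways to monetize a winning lottery ticket.
# Every rate here is an illustrative assumption, not a real figure.

PRIZE = 20_000_000
TAX = 0.40  # assumed combined tax rate on lottery winnings


def claim_yourself():
    """Claim the prize directly (publicly or anonymously)."""
    return PRIZE * (1 - TAX)


def proxy_claim(cut=0.50):
    """Have someone else claim it for a cut. Worst case, the prize is
    taxed when they claim it AND your share is taxed again as a gift
    or income -- the 'double taxation' worry above."""
    after_first_tax = PRIZE * (1 - TAX)
    your_share = after_first_tax * (1 - cut)
    return your_share * (1 - TAX)  # pessimistic second layer of tax


def sell_ticket(premium=0.70):
    """Sell the ticket itself for cash at some fraction of face value.
    A buyer laundering money might pay close to, or even above, the
    after-tax value, since they value the legitimacy more than you do."""
    return PRIZE * premium  # cash in hand, but now unaccounted-for


print(f"claim yourself:   ${claim_yourself():,.0f}")
print(f"proxy w/ 50% cut: ${proxy_claim():,.0f}")
print(f"sell ticket:      ${sell_ticket():,.0f}")
```

Under these made-up numbers the proxy scheme is by far the worst, which is the double-taxation point above; selling the ticket only beats claiming it yourself if the buyer's premium exceeds your after-tax rate, and it leaves you holding unexplained cash.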
deleted
That's fair. Mostly I just thought it was interesting that a market for "sell your lottery tickets" already exists and that winning lottery tickets have a cash value that is larger than the face value of their winnings.
Many US states allow you to claim lottery prizes anonymously.
You mean...a long-shot stock option play that pays out hugely, GME style?
This particular anecdote about Taylor Swift (who was already wealthy and privileged before chasing fame) basically convinced me that, for all the money, being a pop star is not just inconvenient but undignified.
Imagine having to constantly cater like this when it comes to your "art", worried about every shift of the internet tides like a waiter perpetually dealing with a particularly difficult table.
"I always want to say to people who want to be rich and famous: 'try being rich first'. See if that doesn't cover most of it. There's not much downside to being rich, other than paying taxes and having your relatives ask you for money. But when you become famous, you end up with a 24-hour job." - Bill Murray
You know things are bad when even liberals are despairing at DeSantis' poor performance. I think her analysis is mostly correct. Voters don't really care about issues so much as who is the strong candidate. Trump is funny but also strong. DeSantis is neither - despite being the actual principled conservative by comparison.
Given Kamala's own exposure as a weak air-head, it seems almost inevitable to me that we will see Biden vs Trump once again in 2024. I try not to be ageist but American politics is really becoming a gerontocracy. The refusal of Dianne Feinstein to step down is par for the course.
That said, while I believe the author is right about the primal nature of Trump's appeal, it's probably a mistake to ascribe his popularity entirely to it. I suspect many in the media still haven't understood that he rose as a consequence of structural changes that will outlast him. Seeing the GOP as the more anti-war party would never have crossed my mind during the Bush era, when accusations of insufficient liberal patriotism were rife. Now it appears to me that veneration of the CIA, Pentagon and FBI is highly liberal-coded.
DeSantis has the advantage of not having any obvious weaknesses and following the same formula/playbook as Bush and Reagan, by getting a huge evangelical turnout and conveying populist culture war appeal.
Perhaps, but evangelical turnout isn't the same as it was in the eighties or nineties. During Trump, a portion of the "moral majority" or whatever we're calling religious voters these days, went "NeverTrump" and got run through the BLM, Covid and Trans splits. Not certain they'd come back for DeSantis.
The moral majority is dead, but there seems to have been an upwell for the trads lately.
Perhaps. Time will tell, but I'm not staking any bets on them shifting the next national election.
Why is Trump a stronger candidate than DeSantis? It seems to just be a matter of charisma.
Trump can't make things happen. Even if he wanted to, which is dubious, he doesn't have the ability to manipulate the organs of state and get things done. DeSantis does. DeSantis is younger, smarter and more capable. DeSantis just isn't so exciting. For example, I could get behind this policy platform from Trump: https://twitter.com/loganclarkhall/status/1631725952395878416
But I know that he doesn't have the ability to implement it. Consider that in the first part of his presidency the Republicans held both houses of the legislature and the executive. He got nothing done with all that! He tried and failed to build a border wall. He succeeded in lowering taxes and assisting Israeli foreign policy goals. He failed to win culture war battles or break the power of the US administrative machine. It looks much more likely that the deep state is going to break him.
Paul Ryan and John McCain and the rest of the neocon Nevertrumpers stymied him from the beginning, all the way to the vote to repeal Obamacare and the McCain “FU I’m dead anyway” move. The wall was getting built, and until COVID, all of the economic indicators were nice.
Like Mitt Romney? John McCain? Bob Dole the former Senate majority leader? Ron DeSantis is another in this line, from Trumpsters’ perspective.
The people wanted an outsider who would buck the system. They wanted someone who would tell them the truth about how moneyed interests were selling out America. They were denied Bernie, so they chose Trump over Hillary. Then they voted out the legislators who stood in his way. It may have been bad gamesmanship, but so is getting second place perpetually.
Well, the whole point was to defeat them - instead they defeated him. The US military went around his back to keep troops stationed in the Middle East. He did not have a firm grip on the judiciary or the instruments of power - they mangled his policies. He was on the defensive most of the time. A strong president would've gotten Hunter Biden imprisoned for corruption instead of getting impeached over it. A strong president would've delivered more tangible results with a trifecta. A strong president wouldn't have been 'monitoring the situation' as his supporters were swept off Twitter and Reddit; he would've forced the social media companies to back down. Trump kept bitching and whining and complaining; he didn't use the methods available to impose his will. He could've ended the 2020 riots by deploying troops - if he had ensured that he had a reliable and loyal officer corps.
Everyone treated him with contempt because they knew he was weak. If he spent less time golfing and more time governing, he would've gotten more done.
The task is very difficult and surely needs more youth and energy. I don't know why people expected that from a man in his 70s.
Actually yeah, what the fuck. The largest pro-Trump community on the internet was completely wiped out before the 2020 election, and we didn't even get an angrily-worded speech about it. It wasn't on Fox News so he didn't give a shit.
Is this /r/thedonald? Oh man we had fun times there back in the day. That subreddit was the whole reason I voted for Trump in 2016.
Best evidence there was never a Q conspiracy.
Tangent on those policy proposals -
New cities: Maybe a good idea, but the goodness leans very heavily on details and execution - and on explicitly bypassing whatever issues current big cities have, and whatever prevents smaller cities from growing a bit. There's also the perception-reflexivity effect: to make a new big city work in a very short period of time, you need to get a lot of people to invest in an uncertain project. Not that any of that is impossible or even 'hard'; every country has done it many times. But I'm not a planning expert, so I can't say much of use about it.
Flying cars: multiple existing companies already sell flying cars, they're just not useful for anything other than a gimmick. Having the same components transform from car-form to plane-form and function to standards in both is just unnecessarily costly. Drive your car to a small plane or helicopter and get in it. And even then, few people use small planes or helicopters, they're just not that useful. I'm not sure if the VTOL startups went anywhere, but that's plausible in a way flying cars isn't.
Rural industries: The words 'revitalize' and 'industry' don't suddenly create industries. Which industries? How? Would that correspond to a significant price increase for normal consumers because they can't buy chinese/vietnamese clothing/chips/trinkets anymore?
Baby bonuses: Just aren't that effective in terms of cost/benefit. And compare to the increase of this, which happened under Biden. (Just like welfare, baby bonuses necessarily incentivize lower-income people more.)
Beautification campaign: Despite appreciating the 'modern building bad, ancient building good, truth, beauty, wonder, our civilization is in decay' sentiment more than a bit, I'm not sure anyone will notice. One reason so much effort went into statues and buildings and paintings, historically, is that there wasn't much else to look at. Now that we have pictures and movies and computers, the interestingness of building aesthetics correspondingly declines. I'm not too familiar with the aesthetic motivations behind modern art and architecture, but I believe that was deeply related. Plus, there are just a ton of buildings, and replacing 1 in 10k core buildings with new ornate architecture won't really change the actual 'feel' of cities as people walk through them very much. A more effective path might be a combination of the YIMBY program of making building, generally, much easier/more common, and then somehow having most of the new buildings be 'nice'. I'm not sure what the curve of 'ornate tradness' vs cost looks like, but I'd expect costs to be significant, given that labor and material costs of construction are still high (hence construction resisting automation), and given how much of past cost reduction came from the specific materials and techniques used. Of course, a rich and advanced society could 'pay the cost' and allocate 2% more of its population to making buildings look pretty if we wanted.
Precisely, all of these things are ambitious goals and the devil is in the details. Does anyone trust that Trump can make them happen?
You should check out 'Where's My Flying Car?'; he makes a good case for why flying cars would be useful in letting people live much further from workplaces and reducing commuting time. He lays the blame on ridiculous, luddite regulatory systems for suppressing the technology. But he also goes off on all kinds of other tangents; it's not a well-structured book.
I think it's not just that they're ambitious goals with tricky details, it's also that I'm not even sure we want them, due to opportunity costs.
New cities: what's wrong with the current ones, and why can we expect the new ones to be better? I don't see why this wouldn't just be a big waste of resources.
Flying cars: What is wrong with the current system? Proliferation of private flying cars, if they can be made to work, seems like it could be pretty dangerous to the people in them, to anyone else in the air, and to the people on the ground. Is there a reason that wouldn't be true? I suppose the numbers would also have to be run on how development costs compare to the expected benefits.
Rural industries: This will require some care as to what exactly "revitalize rural industries" means. If they are doing economically worse than they should because of government regulations or externalities, then fixing that is great. But if an unbiased market is the cause, then aiding them comes at the cost of better uses that money could be put to elsewhere in the country. Subsidies and the like seem dangerous.
Baby bonuses: this one might be worth it, but the numbers would have to be run.
Beautification campaign: the previous comment was good about there being a somewhat lesser value to ornateness now, although I agree it is uglier. But improving everything would be expensive, and I would imagine it would have to be done judiciously to be worth it. So I suppose here it is more clearly an example of the devil being in the details.
New cities could be a way to expand with new forms of government, and let people that have different political opinions from mainstream big cities see if their ideas work.
As the US spread West this type of city formation driven political change was crucial. It kept eastern US societies more stable as well since there was a place to send the misfits.
Well, there are all kinds of traffic problems with extending urban sprawl; if you want to build anything, it costs a lot of money digging through cables and pipes laid over hundreds of years. And there are many powerful NIMBYs. Far better to just make new cities with all the necessary infrastructure and insulation, all up to standard. Economies of scale in construction, fewer costs from blocking off important infrastructure people need. The Chinese did a good job building extra cities and then filling them up later; they think ahead. But I agree that it would be a waste of resources if Trump was doing it - he'd probably just sign some bills, get some press coverage and move on.
I was rereading parts from 'Where's my flying car' and he points out that insurance costs for his light aircraft (made using 1970s technology because investment and development's been crippled) are roughly equal with car insurance. So logically, if most people with flying cars are rich clever people like him, (which they would be since flying cars are still going to be expensive), insurance costs and damage caused should be similar. It'd be less with a better regulatory system and more efficient control technology - excessive regulations mean that aircraft are so expensive many people build their own instead of buying off the shelf planes.
Horses were OK but cars were better and flying cars should be better still. It's like a better, cheaper helicopter.
Subsidizing and supporting industry can be helpful in the long run. If Korea didn't support its domestic car industry, how could they have developed one from scratch when they were so outclassed by the US in technology, market size and experience? If they stuck to Economics 101 Comparative Advantage Good, South Korea would still be an agrarian economy. And why did semiconductor production move to Taiwan and South Korea when the US invented the whole field? Support has to be done in the right ways of course but it's still a good idea. Big countries should have the full range of critical industries like steel, chemicals and so on. You don't want to put a giant steel mill in the heart of New York. I suppose Trump is also happy to develop oil and pipelines in rural areas, contra Biden. In principle it's possible to do this correctly but in practice?
Well what is the alternative? Mass migration unravels the nation. Human cloning is not well-developed. My favoured policy of social engineering and affirmative action for parents is not exactly popular. Do we just wait for AGI?
Good points about economies of scale, obstruction by the current status quo, and so on. I'd still have to be persuaded whether or not that is sufficient to outweigh the infrastructure already built up in existing cities, but it now doesn't seem entirely pointless.
Maybe that's true now, but if flying cars became normal, there would be a much fuller airspace. I would find it hard to believe that that would not adjust the insurance rates. If a sizable amount of the population owned flying vehicles, crashes and near misses would become much more likely. Of course, 3 dimensional space would help, but desired destinations would concentrate traffic, at least at beginnings and ends of flights. There's probably a stronger case for some usage of flying cars making sense than widespread usage.
That's a good point. I suppose that doesn't account for it needing to be rural, but I think you're right.
Yes, I think aiming to raise fertility would be good. There might be more effective options, though.
I think the policy recommendations and critiques found in pronatalist.org's FAQ might be worth looking into. (under "what pronatalist policies are most effective")
Among the things mentioned is doing more cultural things. A tax cut gives a financial incentive, but doesn't necessarily convey the cultural message it's trying to send very well.
As you note, he made a major tax reform which eliminated loopholes that funnel money to high income Democrats. He ended the PATRIOT act. His supreme court hit rate is 100%, resulting in ending Roe vs Wade, compared to the 50% hit rate for all Republicans since the 80's [1]. He started 0 wars.
He also made Operation Warp Speed happen, saving millions of lives by routing around the regulatory state.
Now I'd prefer DeSantis to Trump. But lets not pretend Trump did nothing; he certainly did far more than I expected, and far more good things than the swamp dwelling Republicans he was running against.
And realistically speaking he also made other Republicans better. In a world without Trump putting wokeness on our radar, would DeSantis be anything other than a generic Republican?
[1] Bush Jr: Roberts and Alito. Bush Sr: Thomas and Souter. Reagan: O'Connor, Rehnquist, Scalia and Kennedy.
Trump's tax cuts for the rich weren't especially great for rank-and-file Republicans. His SCOTUS appointments could have been accomplished by any R president with a heartbeat. The fact that they're more reliably conservative is more thanks to McConnell and negative partisanship leading to fewer compromise candidates. Trump actually seethed about how "disloyal" his SCOTUS appointments were, as he would have preferred lapdogs rather than principled legal scholars, but thankfully McConnell outmaneuvered him.
I definitely agree that Trump made other Republicans better though, as their MO before him was essentially "chain-surrender on cultural and social issues in order to fellate transnational corporations as much as possible". Trump wasn't as much of a break with that as some people imply, but he at least moved in the right direction.
This is one of the great ironies of the religious conservatives on the Trumpist right. They hate McConnell for not being a loyal Trumpist and for being a DC insider, while also praising Trump for not fucking up the culmination of McConnell's patient long-term project of assembling a philosophically anti-Roe court. The way Roe was overturned is why we need systemic politically savvy game-players like McConnell. Trump just happened to be there when it hit the tipping point (to Trump's credit, he stayed out of the way).
Now, it looks like Trump might have one more problem on this front, with his squeamishness on the issue raising the hackles of at least one venerable pro-life group: https://nypost.com/2023/04/23/trump-touts-pro-life-record-to-iowa-voters-after-criticism-from-anti-abortion-group/ If this creates a schism in his base, DeSantis looks like a safer pro-life bet.
This is the first time I've heard about this. All the talk I've seen (predictably) focused on "tax cuts for billionaires". Could you elaborate a bit?
OP is probably talking about the SALT deduction cap.
The SALT deduction cap made high-income blue staters and Texans (Texas is the main red state with the level of local taxes that the SALT cap affects) very angry, because high state taxes now had to be paid in full instead of deducted from one's federal tax bill, and ending or raising it is regularly if unsuccessfully demanded by Democrats representing high-net-worth voters in blue states.
SALT tax deduction was a way that blue states could raise taxes without making their high income taxpayers angry. Trump capped it, meaning now a rich NYer has to actually pay the high state taxes he advocates for.
Also mortgage interest cap impacts people with multimillion dollar homes who itemize.
Trump is a stronger candidate in that he is still an outsider. DeSantis is just another bog-standard Republican, and that's what you're selling him as. He'd be competent (maybe), he knows how to work within the system etc. The fact that Trump isn't like this is what made him popular. He can't compromise, everyone hates him too much.
Even worse, when he signed the bump stock ban he actively went against his base.
Didn't the NRA actually support that one?
Whether they supported it is beside the point. It would only modify my statement to include two traitors instead of just one.
Yeah, I love reminding the crusty Republican Fudds about this at the shop. They've mostly memory-holed that as hard as Democrats did Obama's drone campaign.
What would a Fudd care about a bump-stock ban? You don't need a bump stock for hunting deer. (Unless this is a less-negative use of Fudd than I've usually heard)
Fudds are all on the hysterical and paranoid NRA mailing lists. They're always wound up about some state bill in Illinois or something that doesn't affect them in the slightest.
"Fudd" used to be derogatory name for people who would say "Waiting periods? Magazine sizes? Scary black assault murder rifles? Hand guns? I do not care, let me alone with my shotgun, I want to shoot wabbits."
Not any more. Even the "fudds" have now learned that all these things affect them, that the other side does not care about saving lives, is not interested in any "reasonable gun control", but wants to take all guns without exception (and then proceed to sharp instruments, including kitchen knives), and that yielding to their pressure is inadvisable in any circumstances.
It seems to me that a strong candidate is one that wins by 20%; not someone who loses.
DeSantis was very strong during covid. Trump was weak (wouldn’t even fire Fauci). This whole “DeSantis is weak” thing seems astroturfed.
With all of that said, the Republican primary may be brutal. Even if DeSantis wins, Trump will decry the result and could make the general untenable.
If DeSantis loses, does he have a viable path in 2028? Romney went from runner up to nominee in 2012 but who knows.
If you’re RD, do you sit out 2024 and go for it in 2028? Win or lose, this is Trump’s last campaign. If Trump wins, his VP may be popular. If Trump loses, you have a relatively easy lane if you don’t make big mistakes over the next few years.
Honestly, best case for the Republican Party is Trump having a health scare, quitting the race, and throwing support behind RD.
Showing a young executive like RD next to a frail Biden will be quite the contrast and I doubt Biden will be Reagan like with quips about not taking advantage of his opponent’s youth and inexperience.
It's funny how Trump is worse for the Republicans than he is for the Democrats.
A geriatric Biden can only beat one candidate, and that candidate is Trump. Trump sucks the air out of any room he is in. An election with Trump is an election about finding a Democratic candidate who can blend into the background and let the hate train build on its own. An election with DeSantis becomes about the issues. Biden would have to actually speak during the debates to beat DeSantis, and Biden would absolutely dig his own grave faster than DeSantis would.
With Trump, Biden could piss his pants in a debate, and people wouldn't even notice it. Democrats and Republicans alike will only be looking at Trump, with their minds warped to imagine dreams/nightmares that no reality could match.
Agreed. I see a rock paper scissors scenario opening up.
Biden > Trump > DeSantis > Biden
And one of those issues is the six-week abortion ban he signed into law. If DeSantis had held the line at 15 weeks he would have had a really good shot, but this will be the first post-Dobbs presidential election, and there's no way for DeSantis to credibly occupy a more popular middle-ground position on abortion after that.
"I have no intention of pursuing a federal abortion ban as that matter is best left to the state level legislatures, as the Supreme Court made clear."
Man, that was easy.
The problem, of course, is that I'm sure ole' Meatball Ron voted for restrictions multiple times at the federal level while he was in Congress, will be endorsed by numerous groups that want pro-life restrictions at the national level, and I'm sure the 2024 GOP convention will endorse national pro-life legislation.
More importantly, about .01% of the population cares about federalism - all they'll know is that the GOP candidate signed a restrictive abortion law. Plus, the Liberal Media and SuperPACs will have plenty of time to talk about the GOP's long history of supporting federal abortion bans and basically push the idea, "do you trust what Ron DeSantis says, or what the Republican Party has said for 40 years", or whatever a smarter person than me can write.
Plus, there's just a decent chance that to try to win over evangelical voters in Iowa, he'll just go ahead and endorse federal restrictions to try to win a caucus.
I'm sure that's what he will say I just don't think the public will buy it. 'I think abortion is baby murder but you can trust me not to do anything about it' isn't particularly trustworthy after the 'Roe is settled law' judges went mask off with Dobbs.
"This is clearly a lie, as we can see from his previous behavior, supporters, and party platform" followed by a bunch of clipchimping and scary music.
Anybody who is on the prochoice side is incredibly ready to believe that republicans want a federal ban, because lots of them do and say so.
Doesn't even matter if it's not true; it's a Republican "Death Panels"-style thing.
I'm not sure that early abortion bans are Kryptonite for Republicans in the way everyone seems to assume. Even granted that they're unpopular, Abbott, DeWine, and Kemp all got reelected with unusually good margins while having recently passed fairly strict abortion laws. In the case of Abbott, there is literally polling showing that Texans preferred O'Rourke on abortion and not on other issues, while Abbott claimed multiple times on live TV (albeit not widely watched TV) that the most important issue for him was keeping abortion 100% illegal. DeWine had a major news story about a pregnant 10-year-old rape victim who couldn't get an abortion because of his policies.
Granted, the electorates in Texas, Ohio, and Georgia are probably more pro-life than average, but they're not that much more pro-life. DeSantis is also better at message discipline and media control than average.
The NYT is sad because they want a Republican civil war. DeSantis knows that he needs Trump's voters and supporters, and that he can't afford to alienate them by attacking their perfect prince. But the fact is he doesn't need to. He's far younger and will live to see the end of Trump.
Why does DeSantis need to wait eight years? I don't see any reason he can't run in 2028 or 2032. Particularly because there's no obvious successor to Biden. Harris is unpopular.
I've been baffled by the sudden media deluge of people proclaiming that DeSantis can't beat Trump. DeSantis hasn't even declared he's running yet. It would be one thing if he had a sudden gaffe or something that got everyone talking, but I'm seeing articles, videos, and tweets from "personalities" left and right beating DeSantis with any stick they have handy and declaring that he's already lost when the contest hasn't even begun. This strikes me more as an attempt by those who want Trump to be the Republican candidate (both on the right and on the left) to either pre-emptively take the wind out of DeSantis's sails or convince him not to run.
Chill out, people. The primaries are a long way away; this is way too early to declare winners and losers.
This comment reminds me of this internet comic. Of course there's still a long way to go, but polls early in the primary are still fairly predictive of the ultimate outcome. For all intents and purposes DeSantis is already running, with the Florida legislative session just being an extended PR stunt of "what I would do if I got into federal office!". The fact that DeSantis is losing support even this far out still isn't a good thing. DeSantis will need to pull off an Obama-vs-Clinton-in-'08 feat to surpass Trump. The weaker he looks, the more likely other candidates are to jump in and bite into his chunk of the pie. His pseudo-campaign so far has been pathetic, as it's clear he's terrified of directly attacking Trump when the reverse isn't true in the slightest.
On the one hand…yeah, this is endless horse race nonsense. Gotta churn up page views and eyeballs, let’s make up some Trump vs. DeSantis drama.
On the other hand…a lot of Florida lawmakers endorsed Trump. Which is part of a growing drumbeat of stories that DeSantis is really quite the unlikeable asshole. A very loud drumbeat. A very, very loud drumbeat with lots and lots of anecdotes that DeSantis has terrible people skills, and with very few stories of how he’s a swell guy.
I don’t know. Everyone seemed to like DeSantis when they knew his policies…but now that he’s more in the public eye and people can actually hear his voice and see how he interacts with people…dude doesn’t have a lot of charm, and Trump, god help me for praising Trump, but Trump does have a certain rakish charisma.
More stories than there have been over the last seven years about Trump being unlikable and hard to work with?
You really think Trump will defeat DeSantis because DeSantis is too much of an unlikeable asshole? Are we talking about the same Donald Trump here?
Anyway, this is politics. If you hear a drumbeat, think Foley artists, not restless natives.
Compared to DeSantis? Yeah.
Trump is an omega-level asshole…but he can schmooze. He can work a crowd, and he can do interviews. I have seen no evidence yet DeSantis can do that. Have you actually heard him speak? He has zero charisma—none. Trump has a toxic, used car salesman charisma, but at least he has it, whereas DeSantis is an awkward blank.
How did DeSantis win Florida?
Originally, Trump endorsed him. Then the Florida Democrats were the Florida Democrats they've been since the '50s (after all, they were one of the first Dixiecrat parties in the South to lose power to the GOP), outside of Lawton Chiles and Bob Graham. Add to that piggybacking off the rise of Spanish-language right-wing radio, general ambivalence toward COVID restrictions in a tourism-friendly state (note the only incumbent governor to lose in 2022 was Sisolak in Nevada), lots and lots of free money from the government via the COVID bill to pay for tax raises, and not being totally incompetent when it came to the hurricanes.
At the same time, Rubio won by almost the same amount DeSantis did, without all the Culture War stuff.
Jesus Christ THIS.
DeSantis hasn't officially declared. If (when) he does, he's going to come out with a bevy of pre-arranged endorsements, and likely a massive set of ads and an actual, you know, campaign. Merely announcing he's running will boost his immediate popularity.
He's no stranger to fighting close electoral battles. He's not flying blind here. So maybe try not to be premature in assuming the current situation is representative of the future outcome. Or, if you are, let's place some actual bets.
I watched DeSantis blow even the elevated expectations he had going into the 2022 election season out of the water. I'll gladly accept 50/50 odds of DeSantis clinching the nom right now.
Anyone who is pretending to know that DeSantis is too weak to go the distance, at this point in the game, is giving away their own wishful thinking.
Bud Light update: VP of Marketing ~fired ("leave of absence")
Stock down 1.5% (i.e., nothing) since this started
Lol, looks like right-wing cancellations are as short-lived and poorly thought out as the left's.
Called it!
To put a more positive spin on it, you could say that people obsessed with politics are only a minority of the population.
I thought right-wingers hated cancel culture because it was infringing on the First Amendment? Unless right-wingers think this isn't cancelling?
Sauce for the goose is sauce for the gander. I am very anti-cancellation and don't like what happened here, but in the end I'm not particularly bothered by it.
Your comment reminded me of this quote from Dune:
"When I am weaker than you I ask you for freedom because that is according to your principles; when I am stronger than you I take away your freedom because that is according to my principles"
The way to deal with such "people" is to treat them according to their own principles even when you have power. I know it doesn't apply strictly but it's just "Paradox of tolerance and all that shit" the way leftists use it.
I'm not gonna make a long response because it's Sunday night, but if you think about it from 30,000ft voting is a proxy for violence. Once upon a time if my team disagreed with a decision that yours tried to impose on us, we would try to violently resist. Now we vote instead and my team (hypothetically) says 'eh, I guess we're losers on this one' rather than resort to violence. But 'cancel-culture' is pretty clearly (to me) a bridge beyond voting, that's trying to be cute about 'nonviolence.'
All of this is to say - as a general rule - I prefer voting to violence. But 'cancel culture' isn't voting and it isn't 'nonviolence.' "Heading to Africa, hope I don't get aids" does not mean you should be made unemployable and homeless and it's egregious that it's dressed up as cute nonviolence.
As always:
My rules fairly > Your rules fairly > Your rules unfairly
As always, this just resolves to 'I want to think of myself as anti-cancel-culture but I also want to cancel people'.
You think he'd want to cancel people if there was no preexisting cancel culture?
Why not? It's not like consumer boycotts, getting people fired etc. are tactics that haven't been used by whatever political sides long before we started to call them "cancelling".
Consumer boycotts aren't cancel culture. Because for some mysterious reason, when the right had a lot more cultural and political power, left-wing figures remained distinctly uncancelled. Whenever asked for an example, people reach for the Dixie Chicks, which is wrong for obvious reasons, or have to go all the way back to the Hays Code or McCarthyism, which, unlike modern cancel culture, are recognized as an overreach.
That's a really weak argument. A cancellation attempt doesn't have to wipe someone from the face of the earth for it to have an effect. Otherwise no one is ever cancelled.
https://slatestarcodex.com/2018/05/23/can-things-be-both-popular-and-silenced/
Dixie Chicks isn't cancel culture anyway. They were "cancelled" for things they said in public as part of their public performance in the job they were being "cancelled" for.
Why are you treating this person like an ordinary average normie? They were VP of marketing. A VP of marketing who tried something controversial to bring in a new customer base, but their map of the region was off, and so they scuttled their ship. Scuttled it so badly that Anheuser-Busch is apparently restructuring their whole marketing department "to bring them closer to the brand", aka because it's far too obvious how much they all despise Bud's customer base. It was a high-risk, high-reward gambit, and you are ignoring the risk so you can claim right-wingers are unprincipled cancel-culturers.
Frankly, while this might sound bad formulated like this, it can also be a perfectly coherent position under certain circumstances.
I want to think of myself as anti-killing, but I also want to kill murderers.
Which is consistent. Just like being anti-murder and still believing in a right to lethal self defense. Or having a strong preference for civilization not to end in nuclear hellfire, and yet maintaining a stockpile of warheads and a willingness to press the button, for the sake of MAD.
Unfortunately, this probably mirrors the thoughts of a majority of progressives decrying the cancellations of, and status attacks on, insufficiently "patriotic" people in the aftermath of 9/11. There are very, very few principled libertarians.
I mean as a principled libertarian I'm not opposed to such tactics in all cases, like Hobbes points out, the state of war is a binary switch and once one's natural rights have been intruded upon, all means to restore them are permissible. As misapplied as it can be, defending yourself is a legitimate justification for terrible things so long as they don't stray into pointless cruelty.
I think there are much worse things than "cancellation" that are entirely morally permissible to do to people who have in fact silenced others.
But here I'm merely pointing to the Nash equilibrium conservatives find themselves in. Which also explains the examples you give and the potency of Bin Laden's successful tactics to provoke retaliation. Complaining that these are unprincipled is just ignoring the nature of power, which always comes before principle in practice.
"people who have in fact silenced others..."
By this do you mean specific people or people who belong to a group you don't like? The difference is constantly being elided or not indicated.
Specific people of course, I don't believe in groups, only in associations of individuals.
Though of course if you're going to have voluntary membership of an organization whose stated purpose is solely to destroy and undermine natural rights, wear its uniforms and do its bidding, I think it's reasonable to assume you are personally guilty. Not that I'm saying it applies in this case.
As far as I know one guy got fired, for saying that people working in the World Trade Center were little nazis who deserved it. And technically he didn't get fired for that, the public attention just forced the university to acknowledge that he was a plagiarist who got promoted to full professor without any qualifications.
The appeal to an imaginary era of right wing censorship is so strange, when the only example anyone can come up with is that people stopped buying Dixie Chicks albums.
To be perfectly clear, I think calling the victims of a major tragedy little Nazis who deserved it would be a firing offense for a professor today, with the possible exception of that tragedy being a GOP convention blowing up, but it seems like a lot of the woke ridiculousness was prefigured by things like freedom fries.
You turned out to be pretty prescient in 2023 about your exception, replacing a convention with a rally and blowing up with dying from a stray bullet.
Yeah, I'm definitely not saying things didn't get weird after 9/11 (and in some ways both are typical of American National Hysterias) but professors weren't getting fired for refusing to swear loyalty oaths to The Homeland. The scale and level of coercion were so completely different that it's hard to see the comparison being made in good faith, until you realize some of the people doing it were 3 years old at the time and are working off a mythical version of events we actually remember.
It isn't. A marketing executive being fired for an atrociously bad marketing campaign is not cancellation.
Bruh. Come on.
If Bud Light put out a commercial that featured only white people and leftists boycotted them and the VP of marketing got fired, we'd all agree that was a cancellation.
Bruh. Come on. How close are the demographics of Bud Light drinkers to 100% white, compared to 100% trans?
Of course, if you make up a stupid example, your interlocutor will be made to seem stupid.
deleted
If you want a trans spokesperson for Bud Light, you pick an FtM who has actually managed to become burly and bearded. There has to be one or two. Not a creep like Dylan, who is simultaneously a mockery of men and women.
Part of the problem seems to be that they did know their brand, and they didn't like it.
The "the brand is too fratty and is dying" quote from the VP can't have helped calm people down. It is a variation of the common "you're about to be demographically eclipsed, you're the past, and catering to you is outdated" rhetoric you see a lot on the Left.
I wonder if the Right just has a trauma response to it now.
Low interest rates truly broke capitalism. Bud Light is (was) the best-selling beer in America. Sure, sales have been going down, but that's regression to the mean if anything. Producing a massively popular product year after year with a 10% net profit margin somehow wasn't enough.
I suppose the argument is that loyal fans are locked in and you should try for new demos
Which explains many cases where this happened before the current economic climate (the author gives the NBA and China; the Sci-Fi Channel rebranding to Syfy to try to be cooler might be another, and the entire phenomenon of wokifying movie IPs comes to mind).
Wait, Bud Light has/had 15-20% of the domestic beer market?
That's pretty wild -- high risk/reward for marketers; even a percent of that in either direction is a shit-tonne of money.
Your hypothetical doesn't match on the salient points. If a company put out a commercial that alienated its actual customer base (which leftists aren't, for Bud Light), it's not cancellation if those customers then boycott and the VP of marketing gets fired; otherwise you'd say it was cancellation for the New Coke guy to get forced out. If it's not the actual customer base, and the VP of marketing gets fired because people who don't use the product "boycott", that's something different. If the VP is getting fired for something in her private life, that's also different.
The same argument can easily be extended to show that no one has ever gotten cancelled for anything. Anyone who has ever been fired for a racist or sexist view has not been "cancelled", because racism and sexism are evil so of course public knowledge that an employee of a company has racist or sexist views will be alienating to that company's customer base.
This is literally what leftists say all the time. "We're not cancelling! We're simply speaking for the majority, we speak for the paying customers!"
Yes, by those trying to be disingenuous. You can never convince anyone who doesn't want to be.
A part of this is the disintegrating barrier between public and private life due to social media, as well as who gets considered a public figure. A part of this is because of a shift in progressive values over time, where what may be considered racist or sexist has changed generationally to the point that under previous conditions, certain intersectional talking points would have absolutely been considered racist or sexist.
Why are you trying to smuggle in a future tense here, when Budweiser is reacting to what actually happened? How many left wing cancelations can you name where this was the case?
New Coke was legitimately a worse product. No one would have known or cared about Dylan Mulvaney's Bud Light deal if it wasn't for social media outrage. It wouldn't have affected the product at all.
Which would mean something if she was VP of product. However, she was VP of Marketing and directly responsible for the promotion on social media.
That is the whole purpose of the Mulvaney campaign, to be a social media thing. That it was negative rather than positive was a judgment error.
deleted
Yeah, I think you're right; if it weren't a replacement it would probably have a dedicated fanbase. Were you alive (and cognizant) when Pepsi Blue came out? It had fans, despite tasting like accidentally calling your teacher mum.
Just drink some Pepsi, it's close enough. They did try it as a separate thing (the ill-fated "Coke II"); it didn't make it. Probably because anyone who likes Pepsi will just drink ~~the real thing~~ Pepsi.
New Coke did its job, which was to mask the transition from cane sugar to high-fructose corn syrup... at least, that's the popular conspiracy theory. https://www.snopes.com/fact-check/new-coke-fiasco/
This is a sleight of hand I've seen several marketing execs use over the past few years, and while it is admittedly genius, it is still sleight of hand. That is not cancelling for a marketing executive, in marketing generating a surplus of negative feelings towards your brand is the definition of failing and precisely what you should be fired for. I can see some people calling it a cancelling, but those people would probably be using the term for rhetoric rather than accuracy.
It matters who's bringing the politics in. It's possible to put a political meaning on just about anything someone does. It's a favorite tactic of activists: in order to avoid the norms about not bringing politics in where it doesn't belong, the activist just calls some non-political thing political, so the activist is justified.
The most likely scenario with a Bud Light commercial featuring only white people is that the company decided either to market to an area that was mostly white, or by chance picked people for its commercial who were white. If so, the backlash would be cancellation, because Busch was staying nonpolitical and the activists were the ones who introduced the politics, even if the activists try to obfuscate it by saying "well, having an ad full of white people is already political".
I think there's a difference between an executive getting fired because of some tweet he made on his personal account that's pro-trans, and the executive getting fired for a business decision that actually alienated a core demographic. The former would be cancelling, the latter is how things should work.
If an executive of any given company with a large leftist demographic gave a promotional product to Donald Trump, I don't think people on the right would be too outraged about the executive being fired. They might think the executive doesn't deserve to be fired because they personally like Donald Trump, but it wouldn't cause the same sort of outrage as the executive being fired because he tweeted out "Donald Trump is a swell guy".
Personally, I think it's a bit more complicated than that. If you're tweeting on a personal account on your own time, I think the principle of free speech applies: no one should be held to account for anything they say on their own time, on their own device, and on their own personal account. The only plausible exceptions are if the tweets in question are made by someone famous and famously connected to the brand, or if the tweets are directly related to their role. A CEO cannot help but represent his company, so he's a bit more liable. And sometimes the comments bear directly on things they deal with (for example, the head of accounting commenting on various races and their ability to do math), which would be Exhibit A in any racial discrimination suit. But if some random grunt is posting on his personal account from his personal device on his own time, I don't think anyone should be fired for that.
I'm mostly okay with a rule that "anyone who tweets that they hate their own/their company's customer base lands in hot water".
If their role is to represent the brand, I get that. It’s just that I don’t think it’s reasonable to allow companies to fire people for political or social speech that has nothing to do with their role in the company. It’s kind of an end-run around the principle of free speech if I have to worry that the wrong person misunderstood my tweet and might get me fired. If you’re in a position that has nothing to do with being the face of the brand or in a role where your political or social views have some bearing on whether you can do your job properly, then it’s just using your need for a job as a cudgel against “heresy”. And I fear the chilling effect on free speech when most people have to police their own opinions on important issues because they have to have a job.
Some right-wingers don't like cancel culture (when done by the left), but not for First Amendment reasons.
I think it's because Dylan Mulvaney is famous for this https://youtube.com/watch?v=EQ-yzbzqH4U and nobody finds him to be an appealing icon.
Is that head:body ratio extreme or is the clothing/camera angle/body position causing an optical illusion?
Sticking your neck out like a turtle (often advised by professional photographers as a more flattering stance) in front of a wide-angle lens will do it somewhat -- but man just every proportion is out of whack there. The shark-mouth is particularly disturbing.
Twitter is probably to blame, if I had to guess. Since Elon's takeover it has been a major memetic force that has benefited the right. Clips go viral, which almost always makes the left look bad.
I think it will increase as the right learns how to do boycotts effectively. It's not enough to boycott a specific product; you have to boycott everything owned by the brand. Switching to another brand owned by your target company is a waste of time: they still get your money. If you're mad at Kraft, you have to boycott more than just that one brand; they're owned by a big conglomerate with hundreds of brands.
No, I actually think this is right on the money for how a boycott should go.
Bud Light tries a marketing tactic and immediately sees its sales crater: even if the sales are going to its other Anheuser-Busch brands there are real costs in having to drop large amounts of production on one brand and move it to another.
Plenty of people work for Bud Light but not AB, and if they have to cut, say, a quarter of production those people are at least having their lives disrupted and possibly being laid off and replaced. Even if AB's sales stay completely level, that will be a significant event.
Meanwhile, they paid for that privilege: that was a marketing campaign that was intended to raise sales. And the people at the top of AB who are at least going to casually glance at new marketing campaigns are the same ones who had to reorganize after this Bud Light stuff. If AB goes under the company that replaces it is determined by market demands plus luck, with no guarantee they won't be more ideologically opposed to our Bud Light boycotters.
Instead, AB sticks around and learns the lesson "don't waste money on the trans stuff" which is what the boycotters wanted in the first place. Not only is it the most direct goal, it's much more attainable than trying to take out the largest brewery in the US.
In addition to this, cancel culture is an ethereal and poorly defined thing, but this all feels a lot more pure to me than it could be. Brand does advertising, consumers change their purchasing behavior of the brand as a result. No major agitating for collateral damage, not even really that much of a push to get people fired*, just "we're not going to buy this anymore because of what you did with it, you figure out what happens next".
*I'm sure people on Twitter were loudly calling for both, but it seems like the impact on a consumer level was much bigger. I would ideally just have people change their purchasing behavior and make a relatively-quiet confirmation of "yes this is about the Mulvaney thing", and this feels like a step in that direction if not in any way perfect.
I feel like this depends what your goal is. Switching between brands should still reflect badly on the person responsible for the brand and send a signal to other companies/brands that you’re willing to vote with your wallet.
I'm not sure it's politically valuable to care about punishing Joe's boss's boss for Joe's decision. You're talking about really indirect effects (the boss's boss now wants to hire a boss who will hire a Joe who won't alienate Republicans).
Indeed. The objective isn't (shouldn't be?) to bankrupt the parent company's bottom line or fire the CEO; this is a confusion with means and ends. The right-wing objective should be to stop pro-trans advertising from trying to propagandise audiences in future. That can be accomplished by a relatively limited outcome that whatever activist thought up this stunt, doesn't get listened to in the boardroom next time because "Remember the shitshow last time".
Getting that guy fired, getting that guy blacklisted, getting his boss fired, bankrupting the company - these might send a strong signal and a chilling effect to others, but they're at best stretch goals and possibly counterproductive vindictive overkill (after all, the parent company puts out manly non-trans beers too). Failure to reach those ends does not mean your boycott failed.
If we're canning ad execs now, can someone take out whoever okayed the giant CGI "RUFFLES" logo for the NBA playoffs? This might be the most distracting ad-placement I've seen in months.
When ads get that distracting, I'd rather not consume either product (NBA or Ruffles). Strangely, I'm okay with product placement when entertainment leans into it like Wayne's World or Idiocracy, or like the KFC dating sim.
Sounds like it is doing its job. I would give them a raise instead, if that is the case.
Not too surprising. Same for Nike stock, which was not hurt despite the Kaepernick ads controversy.
In fairness, InBev is massive; Bud Light is but one brand. The other question is what the stock would've done but for the Bud Light harm. For example, if it would've been up 1.5%, then the 3% swing is meaningful.
There's a few other factors to this IMO. Since they're so large, if a boycott was super effective, it might still take a while to produce a noticeable effect on their bottom line. And even if it doesn't hit them that hard, the real effect might be that other, smaller companies would notice and shy away from making these sorts of moves out of fear of drawing a similar response that might be much more painful for them.
Anecdotally, we're what, two weeks into this? One of my acquaintances works for a Budweiser distributor in west Alabama, and from what he tells me (while being beyond tired of talking about it), things are apocalyptic: his employer is tightening the belt, they're not getting help from AB, he gets accosted by randos for wearing Bud Light shirts, etc. We're talking multiple bars pulling all InBev products, a whole Walmart selling two cases of Bud Light (on Rollback!) in a week, customers sitting on pallets of unsold product, nobody hitting sales quotas. At the least, this is worse than the Papa John's N-word saga, and much worse than John Schnatter's comments about Obamacare or hosting Romney in his mansion (I delivered for a Papa John's while in college at the time; hearing about his antics semi-regularly got deeply annoying after a while).
I don't have a dog in this fight (other than being deeply sympathetic to the local distributors who are, at this point, the ones taking it in the ass, not InBev, and who tend to be pretty red in my experience), but I agree that it may take time for the effects of a boycott to make their way up the chain.
That's pretty interesting anecdotal evidence. The logistics issue tracks more or less as expected: it may take months for even pretty major purchasing changes to make their way through the supply chain back to the bottling plant and force them to actually change how much they produce.
Is the rapid advancement in Machine Learning good or bad for society?
For the purposes of this comment, I will try to define "good" as "improving the quality of life for many people without decreasing the quality of life for another similarly sized group", and vice versa.
I enjoy trying to answer this question because the political discourse around it is too new to have widely accepted answers disseminated by the two American political parties and used to signal affiliation, as with so many other questions. However, any discussion of whether something is good or bad for society belongs in a Culture War thread because, even here on The Motte, most people will try to reduce every discussion to one along clear conservative/liberal lines, since most people here are salty conservatives who were kicked out of Reddit by liberals one way or another.
Now on to the question: maybe the best way to discover whether machine learning is good or bad for society is to ask what makes it essentially different from previous computing. The key difference is that machine learning changes computing from a process where you tell the computer what to do with data into a process where you just tell the computer what you want it to be able to do. Before machine learning, you would tell the computer specifically how to scan an image and decide if it is a picture of a dog; whether the computer was good at identifying pictures of dogs depended on how good your instructions were. With machine learning, you give the computer millions of pictures of dogs and tell it to figure out how to determine whether there's a dog in a picture.
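The hand-coded-rules versus learned-rules distinction can be shown in a few lines. This is just a toy sketch with made-up data (a one-feature "classifier" and an exhaustive search instead of a real training algorithm), not any actual ML framework:

```python
# "Old" computing: the human writes the decision rule explicitly.
def is_large_hand_coded(x):
    return x > 50  # the programmer chose 50

# "Machine learning" in miniature: the human supplies labeled examples
# and an objective (accuracy); training searches for the rule (here,
# just a threshold) that best fits the examples.
def learn_threshold(examples):
    best_t, best_correct = 0, -1
    for t in range(101):  # candidate thresholds 0..100
        correct = sum((x > t) == label for x, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

data = [(10, False), (30, False), (45, False),
        (60, True), (80, True), (95, True)]
print(learn_threshold(data))  # → 45, found from the data, not hand-picked
```

Real systems replace the exhaustive search with gradient descent over millions of parameters, but the division of labor is the same: the human specifies the goal and the examples, and the optimization finds the rule.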
So what can be essentialized from that difference? Before machine learning, the owners of the biggest computers still had to be clever enough to use them to manipulate data properly; with machine learning, the owners of the biggest computers can simply specify a goal and get what they want. It seems, therefore, that machine learning will work as a tool for those with more capital to find ways to gain more capital. It will allow people with the money to create companies that enhance the ability to make decisions purely on profit potential, and remove the human element even further from the equation.
How about a few examples:
Recently a machine learning model was approved by the FDA to be used to identify cavities on X-rays. Eventually your dental insurance company will require a machine learning model to read your X-rays and report that you need a procedure in order for them to cover treatment from your dentist. The justification will be that the Machine Learning model is more accurate. It probably will be more accurate. Dentists will require subscriptions to a Machine Learning model to accept insurance, and perhaps dental treatment will become more expensive, but maybe not. It's hard to say for sure if this will be a bad or a good thing.
Machine learning models are getting very good at writing human text. This is quickly reducing the value of human writers. Presumably, with more advanced models, they will replace commercial human writing altogether. Every current limitation of the leading natural language models will be removed in time, and they will become objectively superior to human writers. This also might be a good thing, or a bad thing. It's hard to say.
I think it's actually very hard to predict if Machine Learning will be good or bad for society. Certain industries might be disrupted, but the long term effects are hard to predict.
As i read your comment, ive just completed the mass effect legendary collection. Also spoilers. ||So anyway for those who have never played the general gist of the games is this: there is a sentient race of machines called reapers and they are cleansing the galaxy of all advanced life every 50000 years. During the 3rd game, some notable things happen, mainly:
You have A personal AI on the ship you command in the game named EDI, she gets her own body, and is relatively harmless. She also evolves: she learns things like sacrifice, attempts to date the pilot, and tries to find meaning in her own existence generally.
This goes back a bit further than the third game, but there is also an AI race called the Geth, made by a different alien species, the Quarians. Long story short, the game is an RPG where your decisions impact the story, including how things with the Geth and their creators play out. The Quarians basically tried to destroy the Geth out of fear, but as you learn about the Geth later on, it turns out they really just want to exist and be left alone, and they even help you fight the Reapers. The game gives you the choice to destroy the Geth, or to humanize them and treat them with basic decency. There is a scene where a Quarian tries to experiment on one of the Geth, and you can shut the experiment down and tell the Quarian not to.
You meet another alien race that is responsible for the Reapers; they basically tell you that they made an AI which, ironically, was created to prevent machines from destroying organic species. The AI turned on them, converted them into robots, and proceeded to take over the galaxy in hopes of preserving organic species forever in robot form. Near the end of the game you meet the Reaper AI, and he gives you three options: destroy them, control them, or Synthesis (you can also simply refuse to choose).
Destroy and Control are pretty straightforward; Synthesis is where you become one entity with the machines. It's essentially transhumanism, and it's supposed to be the "ideal" solution.||
Now mind you, Mass Effect 3 got a lot of shit when it was released because the endings were abhorrent. Still, I could see any one of them happening when real AI gets created: maybe we'll control it and everything turns out OK-ish, maybe the AIs end up neutral or benevolent like EDI or the Geth, or in some slim chance we successfully destroy it if things go wrong. Or we reach some perfect transhumanist state. The way things are currently going, however, it's arguably more likely that we'll become... well, I'll let the video speak for itself (the most reliable data suggests this current trajectory).
Not my article but: https://www.rintrah.nl/the-end-of-the-internet-revisited/
I'm not sure the machine learning/AI revolution will end up being all it's hyped up to be. For local applications like identifying cavities, sure. For text generation, however, it seems much more likely to make the internet paradoxically both more addictive and completely unusable. There's so much incentive (and ability) to produce convincing scams, and ChatGPT has proved easy enough to jailbreak or clone, that any teenager in his basement can create convincing emails, phone calls, and websites to scam people out of their money. Even without widespread AI adoption, this is already happening to some extent. I've had to make a second email account because the daily spam (which gets through all the filters) has made the first one impossible to use, and Google search results have noticeably decayed over the course of my lifetime. On the other side of the coin, effectively infinite content generation, tailored specifically to you, seems likely to exacerbate the crazy amount of time people already spend online.
Another thing I'm worried about with the adoption of these tools is a loss of expertise. Again, this is already happening with Google; I just expect it to accelerate. One flaw in the argument that the internet's knowledge base lets us offload our memorization and focus on the big picture is that you need the specifics in your mind to be able to think about them and understand the big picture. The best example of this in my own life is Python: I would say I don't know Python, I know how to google how to do things in Python. This doesn't seem like the kind of knowledge that programmers in the past, or even the best programmers today, have. ChatGPT is only going to make this worse: you need to know even less Python to get your code to do what you want, which seems good on the surface, but it means you are offloading more and more of your thinking onto the machine and becoming further and further divorced from what you are supposed to be an expert in. Taken to the extreme, in a future where no one knows how to code or do electrical engineering, asking GPT how to do these things will be more akin to asking the Oracle to grant your ships a favorable wind than to talking to a very smart human about how to solve a problem.
I'm not sure I really like what I see to be honest. AI has the potential to be mildly to very useful, but the way I see it being used now is primarily to reduce the agency of the user. For example, my roommate asked us for prompts to feed to stable diffusion to generate some cool images. He didn't like any of our suggestions, so instead of coming up with something himself, he asked ChatGPT to give him cool prompts.
The best days of the internet are behind us. I think it's time to start logging off.
We have been offloading thinking to tools forever, I highly doubt we will reach some breaking point now. We absolutely do lose knowledge when we gain this, but we trade it for more efficiency. Is it bad that we have calculators everywhere?
I agree with this on the advertising portion. I'm becoming increasingly concerned that targeted advertising could lead to terrifying outcomes, like a small group controlling public opinion. (actually that already exists, but still)
Anything that takes us closer to post-scarcity is good from my perspective. I disagree with some people I otherwise respect, such as Ilforte, on the fundamental benevolence (or rather, absence of malevolence) of the ruling class, especially the ones that will end up wielding the power put in their hands by AGI. It will cost them very little indeed to at least maintain the standards of living of everyone alive today, and little more to improve everyone's to First World upper middle class levels.
Upload everyone into VR, and it's quite possible that everyone can experience eudaimonia on a 10 watt budget.
Now, I'm not a happy person. I've been struggling with depression so long that I've forgotten what it might have ever felt like to not be under a cloud, I feel fundamentally burned out at this point, experiencing something in between learned helplessness and nihilism regarding AI advances. What'll happen will happen, and everyone here is only running commentary on the impending Apocalypse.
Back when I was entering med school, I consoled myself that the suffering was worth it because medicine was likely to be among the last fields to be automated away. Can't say that I feel very vindicated, because the automation overhang is here, and I see the Sword of Damocles dangling overhead when I think about further professional advancement.
It seems awfully clear to me that medicine is about to be automated; GPT-4 is a good doctor. Probably not the best possible doctor, but it already outperforms the average in an incredibly competitive and cognitively demanding profession. I look at the further slog of psychiatry training ahead of me and shiver, because by the time I'm done, the only thing keeping me employed will be regulatory inertia rather than genuine competitiveness.
Instead of a gradual deployment (over two or three years; I had short timelines even then) where AI came for radiologists, then ophthalmology, all the way to surgery and then psych, it seems to me that the pressure will mount until regulatory bodies cave, and overnight everyone from the lowliest janitor to the highest-ranking neurosurgeon will find themselves out on their arse in short order.
What pisses me off further is that this is also a slamming shut of the clearest pathway to betterment and improved quality of life I have, namely emigration to the First World. Not a consideration for the average person here, since you're already living there, but simply imagine how fucking terrible it is to face the wall of obsolescence without having a government that can even in theory maintain living conditions by redistribution of wealth.
As a concrete example, the NHS is largely propped up by foreign doctors, with a large fraction of the locals fleeing to greener shores such as the US or Australia. Pay has stagnated for a decade, prompting serious strikes, currently ongoing, to achieve inflation-based pay restoration.
Even today, when automation is merely imminent, the British government has publicly stated its intent to automate as much of medicine as it can, to stomp down on them uppity doctors who aren't content with sub-market pay from a monopsony employer. You think those cheap bastards will hesitate for more than a microsecond to get rid of doctors, or at least their pay, when the moment finally arrives?
I see British doctors mocking those claims today; as much as I support their attempts at pay restoration for selfish reasons, neither I nor they will be laughing much longer.
Maybe American doctors will hold out a little longer, you lot clearly aren't very concerned with efficiency in your healthcare expenses, but places like India, or the slightly whiter version of the Indian subcontinent, will end up clamoring to get rid of any expenses for their state-run public health services.
I'm fucked, clearly out of good options, and now picking the least bad ones.
On the note of doctors, the medical guild has always been the most robust, perhaps other than the lawyers', at defending its monopoly. I would be willing to bet doctors will still resist automation through regulatory barriers for quite a while.
Even if that doesn't shake out, it could be a scenario where human augmentation rolls out relatively slowly. You, being a transhumanist, should greatly benefit in a lot of those scenarios. I imagine the vast majority of people alive today will be unwilling to augment themselves for purity-based reasons. Not having that hangup alone would be a huge competitive advantage.
If all else fails you can always mortgage your future computing space for a loan or something and hope to jump up into the immortal class. I for one hope you can make it, although I will admit I am not the most optimistic when it comes to proles getting access to longevity technology.
Doctors have successfully defended their guild (albeit more so in the US than the UK, by a large margin) because they were indispensable. Training replacements for disgruntled doctors would take a great deal of time, and while medical education isn't perfect, you can't really circumvent most of it without ending up with noticeably worse practitioners.
That changes greatly once human doctors become outright obsolete. Speaking in the UK context, I have little doubt that the government would happily tell all involved to take a hike if that were the cost of "saving" the NHS, or even just saving money.
Doctors in the UK have been cucked, to put it mildly, haha. They've only recently grown a backbone after the wage decreases became unbearable.
The UK government(s) have historically relied on immigrant doctors to prop up the NHS when the locals started getting fed up about it. I can't complain about this too much, given that I intend to emigrate soon, but this certainly is responsible in part for their depressed wages.
A government willing to sideline its populace with immigrants will happily do so with AI as and when feasible, and they've already stated that that's their intent.
I could live with postponing the singularity a few years till we get it right, but that's seemingly not on the table.
(I mildly disagree that most people won't avail themselves of transhuman upgrades. Eventually they'll be normalized, in much the same way nobody really makes a fuss about glasses, hearing aids, or pacemakers.)
This is where we disagree - I don't see human doctors becoming obsolete anytime soon. Perhaps from a medical perspective, sure, but for the majority of laypeople I'd imagine a large part of a doctor's job is comforting the person they're treating.
Now I do think that like with almost all knowledge work, doctors will be able to become more productive. Especially those that don't see patients most of the day. But my understanding is that the vast majority of, say, a primary care physician's job is to go from 30 min patient visit to 30 min patient visit, hearing what people have to say and writing it down, then telling them they're going to be okay and the doctor can help.
Even if we can prove that LLMs give better medical advice than doctors 100% of the time, I don't think the majority of people would be comfortable hearing it from a non-doctor for quite a while.
You don't think accelerating progress now could be the best way to reach alignment?
Depends on the speed of the takeoff, I suppose.
Is that true? I don't think I know anyone who thinks that, or anything even remotely close to it.
Every time I've interacted with medical professionals over the past several years, there has been no emotional component at all, or a mildly negative one. Doctors are tolerated because they can diagnose, prescribe, and conduct operations; otherwise people would stay far away.
For instance: family member was pretty sure he had pneumonia. Went to a hospital, got an x-ray. Yep, that's pneumonia alright, here are two antibiotics that might help, come back if you're just as bad or worse in a week (edit: these were, as I remember, not actually given at the hospital. We had to drive to the pharmacy for them). The antibiotics worked, hooray. In addition to $500 upfront and $1,000 from insurance, there was another $1,000 surprise charge, botched and shuttled about through bill collection, which took six months to resolve. Next time family member has pneumonia, he'll probably hold out even longer before attempting to interface with the medical system.
I'm glad that for a couple of hours of wretched interactions, trying to hand-write forms alone and delirious, and two weeks' pay, family member was able to get the needed medicine. This is better than in the vast majority of times and places. But if there were an automated scanner that dispensed antibiotics, that would be a vastly better experience.
I also gave birth during the ending phase of Covid restrictions. I'm glad that there are medical interventions to deal with complications and manage pain. But there is not really any comforting being done that couldn't be replaced with a recorded voice stating what's on the fetal monitor and what it means.
The flat affect, 'no emotional component' is what I mean. They are giving a sort of impartial authority to their diagnosis to make you feel okay.
I disagree with doctors, but many of the people I know in the middle-class PMC take their word as Truth.
All the people I know generally think of your average medical care professional as an opponent you have to outsmart or out-research before you are permitted bodily autonomy, and who usually knows less about your body than you do if you have an IQ over 120.
They'd drop them for an uncensored medical expertise AI in a second.
I would drop doctors as well but I’m trying to model the modal human. Maybe I’m failing but I think people here are far into an intelligence/tech literate bubble.
Yes, reassurance and a good bedside manner are important aspects of a doctor's role! That being said, I can see AI doing all of that too:
Humans will anthropomorphize anything, so a cutesy robot face on a monitor, or even a deepfaked one, might work. Proof of concept: telemedicine.
Otherwise unskilled individuals who are simply conveying the information provided by an AI, such as a deskilled doctor or nurse, only there as a pretty face. Still utterly catastrophic for the profession.
People get used to anything, eventually when the public cottons onto the fact that AI doctors are faster, cheaper and better than humans, they'll swallow their discomfort and go with it.
Hmm, deepfakes for telemedicine would be concerning. I get your point with #2 as well, although I think that'll take some time to roll out.
I see what you mean. I suppose the medical profession might be on the way out. I was supposed to be the optimistic one! Alas.
As @2rafa and others have mentioned, ML will be a step change in how human society creates value and interacts with the world more generally. Once we've achieved AGI, roughly defined as having an AI that can act at the level of an ordinary human, our ability to solve problems will drastically increase.
Intelligence is the generic solver for essentially any problem. Sam Altman himself has said that by 2030 he envisions a world where every product and service will either have integrated intelligence or be angling towards it. This means that our phones, laptops, and PCs will all obviously be intelligent. What most people don't realize is that this technology will also affect our coffeemakers, stoves, thermostats, glasses, and practically every other technology you can think of. I'm sure adaptive clothing with camouflage-like capabilities will exist soon. People will be able to get real-time instructions in their headphones telling them exactly how to complete each task.
Even these predictions only scratch the surface. If the true promise of AGI is realized, it will also let us break through hard problems in mathematics, create brand-new drugs, and find extremely dense and powerful new materials. It will help navigate endless layers of bureaucracy, effortlessly pruning the thousands of regulations that hold up large projects, pinpointing ruthlessly where cost is added so we can tackle the cost disease problem, and generally helping to unstick our public works. We could be building scintillating skyscrapers of filament-thin materials, with bridges across the sky that glisten in the air, all within a decade. The future is truly difficult even to envisage, let alone predict.
In terms of comparisons to other revolutions, @2rafa says below:
I agree that the agricultural revolution led to problems, a la Scott Alexander's review of Against the Grain. That said, I find the comparison of the AI revolution to agriculture facile. Ultimately, the reason the agricultural revolution proved bad for us was that we shifted from a nomadic lifestyle to a static one, which inherently led to problems with physical fitness, freedom, and social control, and forced cultural institutions to shift rapidly.
With the AI revolution, we have no idea how far it will go. The possibility space is far beyond what could have existed for any previous revolution. As doomers say, we could all die. We could all transcend our fleshly forms and become gods in ten years. China may create an ASI and lock us all into a totalitarian doom state forever.
The stakes here are far higher than the agricultural revolution, and I highly doubt our situation will parallel that trajectory.
At the end of the day if we can survive the AI revolution without any horrible outcomes of the x-risk or s-risk variety, I think it would be ridiculous to posit any sort of negative future. With intelligence at our fingertips, we will be able to finally achieve our potential as a species.
Where have I heard this one before?
Seriously, this seems too specific to be a coincidence. Was it a deliberate reference?
Thought your link led to this
Nope. I actually don't like referencing that story, because I think it's pretty short-sighted, although it does have some interesting ideas.
This is a very common thought in any hard sci-fi that has AI. Manna is by no means original, just popular in the rat sphere.
I would actually argue this is a closer parallel to the cognitive revolution, Homo sapiens' first discovery of culture, language, and cognitive technology in general. Unlike the harnessing of fire, the agricultural revolution, the industrial revolution, or even the internet, the AI revolution deals with intelligence and the paradigm of thinking itself. The scientific revolution could also be a close contender, since it dramatically increased our ability to think and use our knowledge.
Agree strongly here.
The real step change, in my opinion, is once these models get good at things like drug discovery, mathematical proofs, and building models of physics. We have been locked into an almost century-old paradigm in physics, and to my knowledge we haven't found many fundamental changes in mathematical or chemical theory since then either.
In the past, every time we had a major breakthrough in one of these fields, it was enough to reshape the world entirely. Chemistry led to the industrial revolution; Newtonian mechanics led to the scientific revolution (or was its beginning, whatever).
As I mention above, I think the comparison to the agricultural revolution falls flat for a number of reasons. Admittedly, most revolutions do follow a pattern of short-term pain and long-term gain, however.
I am strongly of the opinion that since neoliberal PMC jobs are the easiest to automate with AI, there will be incredibly strong regulation banning AI from taking the jobs of the PMC. The power to regulate is the power to destroy, and as incapable of actual productivity as the PMC and their legion of bullshit jobs are, they know how to run a grift and bask in their own self-importance.
No, what you need to fear from AI is when Facebook fires up an instance of AutoGPT for each user and tasks it with keeping them doomscrolling for as long as possible. If you thought "the algorithm" was already amoral and sanity-shredding, you ain't seen nothing yet. That was a mere baby, feebly hand-tuned by meat that thinks (or thinks it thinks). When the AI is fully unleashed on slaving our attention spans to our screens, it's going to be like how fentanyl turbocharged opioid deaths. You're gonna start seeing people literally starving to death staring at their phones. Actually, nix that, they'll die of dehydration first; I momentarily forgot that nearly always happens first.
I'm gonna register this prediction now too. Apparently AI has trouble with fingers. You'll know it's gotten loose when there is a new TikTok trend of young people amputating all their fingers. The AI will have decided it's easier to convince us to get rid of our own fingers than to figure out how to draw them better. Given the rates of TikTok-induced mental illness, it would probably be right in that assessment.
This is exactly why the crossbow and handgonnes never took off and why we still live under a feudal system ruled over by our lieges and ladies.
More seriously, this technology is too valuable not to use; anyone who uses it is going to gain a massive advantage over anyone who doesn't. Its use is inevitable.
The same is true of nuclear power. It's the only technology that will allow us to hit emission targets and keep the grid stable with cheap, reliable power.
But we've built three nuclear power plants in as many decades, and our infrastructure is crumbling and less reliable than ever. Our ruling class simply does not care so long as they can keep living that 0.01% life. Even now they are setting preposterous ten-year EV targets, despite not putting a dime towards building out a domestic EV supply chain or infrastructure, including upgrading our electric grid to deal with the massive increase in demand all those EVs will create. Which brings us back to the nuclear power they scorn so much.
Your appeals to a reasonable nation performing certain obvious reasonable tasks are pointless. This is clown world. You need to think dumber.
Nuclear power has a lot of benefits, but it takes a significant amount of time and money to get online, with the benefits being generally diffused. The number of organisations that can actually get a nuclear power plant online for long enough that they can start to make a profit is quite small.
AI is comparatively cheap, the changes are quick and easily observable, and the payoff for an individual willing to utilise it is substantial. As a class, medieval European nobility may have benefited from a complete ban on crossbows and handguns, but the ratio of cost to return for employing these weapons meant that anyone who chose to defect and take up their use would outcompete those who did not. The same is true of AI; it cannot be ignored.
I'm appealing to human greed and desire for power. You need to think smarter.
Exactly. The general population believes what it was told for 50 years: nuclear power is something immensely dangerous and deadly, something that can explode at any moment, kill millions, and turn the whole country into an uninhabitable desert full of motorcycle-riding mutants.
Now, imagine if normies are told:
THE COMPUTER can kill you. Yes, THE COMPUTER can shred you into paperclips, without warning. And not only you, but everyone, everyone in the whole world. Yes, even ordinary computer in your son's room can do it.
Do not wait for your doom. Say something, do something.
The problem is, we've already had hacker scares for years, I don't know what it would really take for people to realize the threat outside of re-hashed Terminator references.
The American public won't give up guns, do you think they'll give up computers?
Heck, even if it's just AIs they're told to give up, forces that want that will have to move fast, because every passing moment it reaches more hands, and the hands that have it are gonna hold on tight. At some point soon, we will reach a point of cultural no return on everyone having these tools.
Again, at least according to this poll, 76% of Americans - the most relevant demographic for this forum - favor nuclear energy. Even the opponents do not necessarily hold the most alarmist and charged view of nuclear as a power source.
I highly doubt this will happen. You talk as if the PMC is a giant union where everyone is aligned, which shows you don't understand the social context there and are clearly just poo-pooing your outgroup.
People in the PMC with power have capital, whether it's political, intellectual, or financial. The financial movers and shakers will not agree to regulating AI, at least until they have gotten their piece of the pie. Even if they do, it will take years and years to get everyone to agree on a framework.
You've also got the AI companies themselves. Altman has come out and said he doesn't think regulation at this stage is a good idea, and he's got an incredible amount of political and intellectual capital. Many people in government, for good reason, see Altman as one of the most important figures in the world right now. They don't want to piss him off.
This would be a rad short story: an AI that gets 'frustrated' at its own limitations against the real world, and whose solution is to just sand off all the sharp edges that are giving it problems.
Like it genetically engineers all the cows to be spherical so its physics simulations can be more accurate.
I'm obligated to point out that this already happened, the AI was capitalism, the sharp edges were all direct human interactions, and our atomized broken society is the result.
I would be interested in seeing this thought/analogy expanded.
I thought I got this idea from Mark Fisher or Nick Land, but random googling isn't leading me to any obvious writing of theirs on this specific concept. Come to think of it maybe it was one of IlForte's pithier comments. Regardless you should read both of them.
Seeing Like a State plus a broad view of what constitutes a "state," perhaps?
I thought I had seen later Scottposts applying this logic to capitalism.
His Meditations on Moloch sounds like this vein too.
Personally am very excited for AI improvements. I’m hoping something like ChatGPT will be able to act as a super personal assistant and analyst.
For example in personal life, would love to be able to type into a box that I’m looking to plan a trip with just a few parameters (date, general budget, etc) and have it send me options. I can then have the AI send even more options for what do on the trip and finally book reservations that only require my approval.
That’s just one example but there are plenty of admin type activities that I’d like to offload to an AI. The opportunities in professional life are even greater but I think that may take longer as the aversion to giving the AI access to confidential data may be high (it’s currently banned at my mega corp).
At what level of 'smarts,' however, is an AI that is already training on how you do your job going to stop needing you around to do it?
I mean, you're basically happily accepting an apprentice who will lighten your workload whilst learning your job, except this thing is known to learn 100x faster than your standard human. The assumption that you'll have moved on to bigger and better things (or retired) before the apprentice steps up to take over your job may not hold here.
At some point soon we will at least increase productivity by 1.5-2x per person. At that point why don't we collectively demand a 3 or 4 day workweek?
We could have transitioned to three-day work weeks well before 1971. The flaw in Keynes's famous prediction is that, past the point of basic subsistence, economic utility is relative. People don't need to make $20,000 or $50,000 or $100,000 or $200,000 in inflation-adjusted household income to be happy. They want more than their peers. They want the class markers that low-status people don't have, not the luxuries those class markers manifest themselves in. It's why the canard about modern trailer trash having it better than kings in 1900 is so ridiculous.
If whatever happened in 1971 never happened, people would still be working as much as ever. The hedonic treadmill would just be moving faster.
Humans don't 'collectively' demand things because generally there's a massive divergence in values at scale. Coordination problems abound.
And put simply, if you can make $4000 for a 4 day work week, and $5500 for a 5 day work week, then there are plenty of rational reasons to just work an extra day.
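The arithmetic in that example is worth spelling out, since the marginal day pays more than the average day (the dollar figures are the comment's own hypothetical, not real wage data):

```python
# Hypothetical figures from the comment above: $4000 for a 4-day week
# versus $5500 for a 5-day week.
pay_4day, pay_5day = 4000, 5500

avg_per_day = pay_4day / 4              # average pay per day at 4 days: 1000.0
marginal_5th_day = pay_5day - pay_4day  # pay earned by working the extra day: 1500

# The 5th day pays 1.5x what an average day pays, so a purely
# money-motivated worker has a rational reason to work it.
print(avg_per_day, marginal_5th_day)
```

Under those (made-up) numbers, skipping the fifth day means forgoing the best-paid day of the week, which is one way to see why a shorter week isn't collectively demanded.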
The choice to do or not do so comes down to, I'd say, values, as above. If you have high time preference and thus value leisure and 'fun' things, you'll try to minimize the time spent working as much as you can.
The markets will balance supply of labor and demand for labor, as they always do, unless we actually do achieve fully automated gay luxury space communism.
First ask yourself this: why do you not already have a 3 day workweek?
Because I'm too poor.
I think at least in the short/medium term this technology could lead to large productivity gains without corresponding cuts in total headcount.
When I started my career, finance teams used to forecast in Excel using relatively simple formulas. Now they use programming languages and forecast more frequently, with greater detail and accuracy, while working with massive data sets. This hasn't led to a huge cut in overall headcount, but it has changed the skill-set mix on the teams.
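To make the contrast concrete, here is a toy sketch of the kind of scripted forecast that replaced the spreadsheet formula. The revenue series and the three-period moving average are illustrative assumptions, not any real team's method:

```python
# Toy forecast: project the next period as the mean of the last few
# observed periods. All numbers are made up for illustration.
revenue = [120.0, 132.0, 128.0, 141.0, 150.0, 158.0]

def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` values."""
    recent = series[-window:]
    return sum(recent) / len(recent)

next_quarter = moving_average_forecast(revenue)
print(round(next_quarter, 1))  # 149.7
```

The point isn't the (deliberately trivial) model; it's that once the forecast lives in code, rerunning it daily, on more detail, over far larger data sets, is nearly free, which is exactly the productivity shift described above.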
Right, but it's presumably cheaper to spin up more GPT instances or build up more datacenters than it is to train more 'experts' in fields that are susceptible to ML automation.
Hence the question:
I'm not really doubting that humans will be 'in the loop' for quite a bit longer, but I suspect it will be more 'sanity checking' AI outputs and/or as a backup in case of outages, and there'll be strong downward pressure on wages. Which is fine if productivity gains make things cheaper.
But you're talking about AI as a complement to human skills, but I'm very specifically inquiring about how smart it needs to get to replace given skill sets.
Agreed. It's just psychologically painful to fire people, and especially if companies are making a ton of money from these models I don't think there will be a giant firing spree. As we saw with all the recent layoffs at big tech, when times are good companies are more than willing to keep a bunch of low impact employees on the payroll, especially in tech.
Also, it helps crowd out competition. Why fire a bunch of people when the interest rate is zero?
Sure, you'll save money in the short term, but those workers don't just disappear from the labor market; enterprising competitors will snap them up and end up requiring you to offer them a billion dollar acquihire scheme to shut them down before their product starts taking your marketshare.
Better to just keep them at Bigco. Sure, they won't really develop anything for you, but why drive the state of the art forward when you can just ignore all your customers, keep your competitors down, and rake in the cash from your ad business?
I believe there are two broad scenarios for what might happen with ML from an economics/politics perspective.
Scenario 1 is that ML will be a powerful productive tool (ie capital) in the hands of those that can afford it just like many other inventions throughout history.
If this happens the reaction will be along the lines we all know too well. The left will complain that those in power gain even more power and now have novel ways to control and/or extract value from workers. Plus a lot more low-skilled people will become unemployable and redundant so class tensions will probably get worse. On the flip side a few smart early movers will make insane bank and shape the way the next few decades will go. Could be interesting to see how different nation states adopt the new technology.
Scenario 2 is the "things get crazy" scenario. What if ML takes off far quicker than people are expecting, for example by recursively improving itself? I believe in that case we might be unable to fit the development into our usual political lens. If one company has twice as much capital as everyone else combined, our systems of power distribution fall apart. If one nation has capabilities that make it effectively invincible, our models for foreign relations stop working. If that happens it will be more akin to a scenario where superintelligent aliens have landed on earth and all bets are off.
I'm a little disconcerted at how many people who are working in the industry seem to hold this as the explicit goal and are intentionally maneuvering things so as to prevent anyone from intervening until it's too late.
I expect that "recursively improving itself" will lead to the AI going off into the weeds -- that is, evolving in ways unconnected to the real world. The output will quickly become bizarre and not particularly useful. It works for formal systems like Go because the rules are well-defined, but you can't simulate reality to a sufficient degree of precision.
I think the idea behind recursive self-improvement is more like, a 150 IQ AI should be able to find a way to increase its IQ to 151, a 151 IQ AI should be able to increase its IQ to 152, and so on and so forth until it reaches godhood.
It doesn't necessarily have to simulate large portions of reality, if it's able to find a way to isolate the factors responsible for its g factor and come up with a generalized way of making improvements to those factors. Presumably as part of the cycles of improvement it could interact with the real world in order to get more training and data. But this sort of scenario has its own issues.
Especially if it can spin up various copies of itself and make minute changes to see how that affects performance. Basically massive, parallel experimentation.
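A toy sketch of that "spin up copies, perturb, keep the best" loop - purely illustrative, and the `evaluate` function is a stand-in: a real system would need a benchmark of its own capability, which is the hard part.

```python
import random

def evaluate(params):
    # Stand-in "capability" score peaking at 3.0 in each dimension.
    return -sum((p - 3.0) ** 2 for p in params)

def improve(params, copies=32, noise=0.2, rounds=200, seed=0):
    rng = random.Random(seed)
    for _ in range(rounds):
        # Spin up perturbed copies and keep the best performer,
        # never discarding the current version.
        variants = [[p + rng.gauss(0, noise) for p in params]
                    for _ in range(copies)]
        params = max(variants + [params], key=evaluate)
    return params

best = improve([0.0, 0.0])
# The parameters drift toward the optimum at 3.0.
```

Because the current version is always kept in the selection pool, the score is monotone non-decreasing - the "its own issues" mentioned above (e.g. the score being a poor proxy for what you actually want) are exactly what this toy version sweeps under the rug.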
Go to church
Have kids
Buy land
Acquire chickens
Simple as.
I truly think people are almost embarrassingly overstating the importance of the AI apocalypse. Maybe an apocalypse for Twitter and other online spaces, maybe an apocalypse for “just a barely intelligent warm body” call center jobs, maybe an apocalypse for bootcampers making $300k/yr gluing JavaScript frameworks with cute names together.
Not an apocalypse for anybody with a skill set that can exist completely independent of the internet, not an apocalypse for the people who understand computer programming from first principles.
In the sense that AI will bankrupt the people who have been mining the good out of society while contributing absolutely nothing of value to it, it is a massive net good. I absolutely welcome our AI overlords. Show me who is posting the MOST human-passing-but-totally-useless garbage on Twitter, or trapping the MOST ethical non-monogamist coombrained Reddit-atheism posters into pointless time-wasting arguments, and I will either go work for them for free or donate compute time to them.
Let’s fucking go.
I can't tell if this comment is a spoof?
Sure, go back to your farm and use tools like tractors, fertilizers, modern crop rotation techniques, plates, silverware, cups, etc which have been created by the larger society. Created, distributed and improved by people who are supposedly 'mining the good out of society.'
Society is a team effort, bud. Your fantasies of living scot-free, totally 'independent' on your plot of land, are just that - fantasies. You wouldn't make it a week without the collective wisdom and knowledge society has gifted you and your family. Have some respect for the people who came before you, and for the people who help you live a cushy life now.
I say:
And you interpret this as “isolate yourself from society and pay no respect to the people who came before you”?
Just to be clear, when I say “go to church”, I mean specifically a Catholic church. There could not exist another institution on planet earth that is a stronger indicator that you should stand on the shoulders of the people who came before you.
The people mining the good out of society are the people running porn websites, and A/B testing headlines and algorithmic content feeds to see which ones make people hate each other more, and then buy the products they're selling. OnlyFans is mining the good out of society, BlackRock is mining the good out of society, McKinsey is mining the good out of society.
Porn websites and management consulting agencies did not invent pottery, crop rotation, iron smelting, or anything else. The fact that you either think otherwise or think that “go to church and start a family” somehow means “throw away every good discovery ever made by mankind” is certainly telling of something.
Those people will be doing more of all that, and better (or rather "more efficiently" - nothing about it will be better for the audience), at a higher profit margin, since they'll no longer need to pay the grunts in call centers.
This is a small fraction of the people in modern society, and if history tells us anything, I'd imagine they will be hurt less by AGI, because this class of people is good at finding BS niches to milk value out of.
I'm just not a fan of broad statements talking about how an ill-defined outgroup is milking everything from society while you and yours are the ones building it. Thanks for clarifying.
I think this comment is an example of "inferential distance." Your meaning of "people mining the good out of society" is porn sites, investors, and engagement-optimizers, whereas Dag's interpretation was "all the smart people who brought us modern technology."
@firmamenti also engaged in the classic Motte and Bailey to my mind. His Bailey is:
Basically claiming that anyone who relies on the Internet is gonna get fukt, and they should cry about it.
Then when challenged he retreated to the much more specific claim of:
I'm not impressed with this sort of rhetoric.
You cut one of my sentences in half to make your point, and then you accused me of bad faith argument.
The rest of the statement which you cut off was: "not an apocalypse for the people who understand computer programming from first principles."
This is not a motte and bailey. You either didn't read the rest of my comment, or you are being deliberately misleading in your characterization of it.
Either way: don't do this.
Eh, I cut it out for brevity but I see where you’re coming from. Either way I see you slicing the populace into such a chunk as to be making a ridiculously callous and egotistical statement.
I’m happy to discuss further which chunk of humanity deserves to have their lives be destroyed and suffer unnecessarily, but I generally find that type of rhetoric to be unsavory. I apologize if I mischaracterized your stance.
I’m really getting the urge to grow out my neckbeard and get euphoric up in this bitch. Postrats are converting to Mormonism now, Mormonism! At least with wokeness you have to go outside and observe the world to realize that it’s false. Most of these religions don’t even make sense on their own terms.
It’s cope is what it is, cope. It makes you feel good, and it’s useful (so it seems), so you believe it.
Choosing to believe (or act as if you believe) useful things seems very rational to me. I have an old coworker who was an atheist and cynically became a Mormon in order to marry a Mormon wife and live in a close-knit community. He now lives in Idaho and has 4 kids and by all accounts is very satisfied with the outcome. Who's more rational, him or a depressed medicated outspokenly atheist Bay area tech worker who's the least-liked member of his drama-cursed polycule?
If you rational long enough, you're eventually going to rational about rationality, and you'll see that beliefs are instrumental like anything else. There's no God of Integrity who laid down the law that you must profess true beliefs.
The short answer is, it fucks up your epistemology. It’s probably worth a whole post going through exactly why that’s so bad. Perhaps the old atheism arguments from the early 2000s need updating for the TikTok generation.
I disagree. You can be rational when the situation calls for it, and be religious on a meta level.
It definitely deserves a longer treatment than one sentence, but I'm fond of "once you've told a lie, all truth is your enemy". Or something about lightning, I guess. Intentionally professing belief in falsehoods because they are useful is the epistemic equivalent of the doctor killing patients to donate their organs -- it may sound like it does more good than harm in the short term, but you wouldn't want to live in a place where that's the rule.
Skipped 1, but I'm on 4. The chickens are about two weeks old, and I'm assessing plans for the coop I intend to build. At least, after I finish ripping out the stupid cypress trees the last owner planted everywhere.
Based and eggpilled.
Seriously love chickens. They are equally stupid and annoying, and beautiful. They also make fantastic babysitters for #2 and will entertain them for HOURS. Highly recommend.
Chickens are raging assholes that go everywhere they're not supposed to and refuse to die when their time is up.
Ducks are much easier to manage. The eggs are tastier, too.
Has anyone considered…pet pigeons?
Pigeon eggs can be eaten too!
coo coo
I had about twenty white homing pigeons as a teen for 4-H. They're great, but are terribly difficult to get rid of. Homing ability is both impressive and obnoxious.
Ducks require too much feed. Geese can graze most of the day.
Nah, ducks turn their ponds into swamps and give you a rash when you cuddle them. Chickens are much more convenient. (We have both.)
I was wondering if we were going to get the chicken vs duck argument going. I have a coworker who has ducks and recommends them. I have a neighbor with chickens, although they might have gotten rid of them, or at least the roosters.
I didn't know such arguments were infamous.
All I know is, after having to deal with both, I'll take the ducks.
I want to try guinea fowl next year.
Our previous neighborhood had feral peacocks, and they give off this great jungle call in the middle of the night, and every once in a while I hear them here too, from a half mile or so away.
So... I have a church, yard, kids, and chickens. Also it's Bright Week. Al masih qam!
Yet here I am, typing away on The Motte about AI. And here you are.
Plausibly I should work on my in-person network. A local church has installed ten Russian bells on a new building they've been working on these past two years. I watched the video of the blessing, and it sounds really good. The acequias association is supposed to be flushing the irrigation ditches tomorrow. My husband walked down the street and gave eggs to a neighbor last week, and has resolved to do that again, because it was a good experience. My daughter is now old enough to walk to the village church if we ever get our act together on time. People wave, and are out by the street cleaning their ditches. I can and should make physical art out of wool and wax for next year's local studio tour and art markets.
And yet here we are, even so.
I often think of the possibility that ML is right now our best and maybe only chance to avoid some massive economic downturns due to a whole hell of a lot of chickens coming home to roost all at the same time.
I will ignore the AI doomer arguments which would suggest protracted economic pain is preferable to complete annihilation of the human species for these purposes.
I am in a state of mind where I'm not sure whether we're about to see a new explosion in productivity akin to a new industrial revolution as we get space-based industry (Starship), broad-scale automation of most industries and boosted productivity, and a massive boost in human lifespans thanks to bio/medical breakthroughs... OR
Maybe we're about to see a global recession as energy prices spike, the boomer generation retires and switches from production and investment to straight consumption (or widespread unrest erupts as policies try to avert this), international relations (and thus trade) sour even if there's no outright war, and living standards collapse virtually everywhere but North America.
How the hell should one place bets when the near-term future could be a sharp downward spike OR a sharp exponential curve upwards? Yes, one should assume that things continue along at approximately the same rate they always have. Status quo is usually the best bet, but ALL the news I'm seeing is more than sufficient to overcome my baseline skepticism.
But the possible collapse due to demographic, economic, and geopolitical issues seems inevitable in a way that the gains from Machine Learning do not.
The problem, which you gesture at, is that this world is going to be very heavily centralized and thus will be very unequal at the very least in terms of power and possibly in terms of wealth.
ALREADY, ChatGPT is showing how this would work. Rather than a wild, unbounded internet full of various sites that contain information that you may want to use, and thus thousands upon thousands of people maintaining these different information sources, you've got a single site, with a single interface, which can answer any question you may have just as well.
Which is great as a consumer, except now ALL that information is controlled by a single entity and locked away in a black box where you can only get at it via an interface which they can choose to lock you out of arbitrarily. If you previously ran a site that contained all the possible information about, I dunno, various strains of bananas and their practical uses, such that you were the preferred one-stop shop resource for banana aficionados and the banana-curious, you now cannot possibly hope to compete with an AI interface which contains all human-legible information about bananas, but also tomatoes, cucumbers, papayas, and every other fruit or vegetable that people might be curious about.
So you shut down your site, and now the ONLY place to get all that banana-related info is through ChatGPT.
This does not bode well, to me.
And this applies to other ML models too. Once there's a trained model that is better at identifying cavities than almost any human expert, this is now the only place anyone will go to get opinions about cavities.
The one thing about wealth inequality, however, is that it's pretty fucking cheap to become a capital-owner. For $300 you can own a piece of Microsoft. See my aforementioned issues about being unsure where to bet, though. Basically, I'm dumping money into companies that are likely to explode in a future of ubiquitous ML and AI models.
Of course, if ML/AI gets way, WAY better at capital allocation than most human experts, we hit a weird point where your best bet is to ask BuffetGPT where to put your money for maximum returns given your time horizon, and again this means the ONLY place people will trust their money is the best and most proven ML model for investment decisions.
Actually, this seems like a plausible future for humanity, where competing AI are unleashed on the stock market and are constantly moving money around at blinding speeds (and occasionally going broke) trying to outmaneuver each other and all humans can do is entrust one or several of these AIs with their own funds and pray they picked a good one.
It seems unlikely that there would only be one, though, unless there are barriers to entry e.g. the US government makes severe AI alignment requirements that only Microsoft can meet. Even Google, at its peak, was not the only search engine that people used.
I am amenable to this thought.
But if there's one ML model that can identify cavities with 99.9% accuracy, and one that 'merely' has a 98.5% accuracy, what possible reason could there be for using the latter, assuming cost parity.
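Framed as error rates rather than accuracies, the gap in that hypothetical is starker than the headline numbers suggest:

```python
# Hypothetical accuracies from the cavity-detection comparison above.
acc_best, acc_rival = 0.999, 0.985

# Misdiagnoses per 1000 scans for each model.
errors_best = (1 - acc_best) * 1000    # ~1 misread
errors_rival = (1 - acc_rival) * 1000  # ~15 misreads
ratio = errors_rival / errors_best     # the rival errs ~15x as often
```

A 1.4-point accuracy gap is a fifteenfold gap in mistakes, which is why "what possible reason could there be for using the latter" has real force at cost parity.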
Microsoft is an interesting example of this since they have 75% market share on PC OS. If they successfully integrate AI into windows I can see that going higher.
Depends on how much the first ML model exploits its advantage. Also, firms often push for monopolistic competition rather than straight imitation, so the firm marketing the 98.5% model might just look for some kind of product differentiation, e.g. it identifies cavities and it tells funnier jokes.
I do wonder if we'll create a framework where places like OpenAI need to pay a fraction of a cent for each token or something. It would hit their profitability but would still leave things fine if they achieve AGI.
Otherwise I agree that the open structure would be tough.
Is there anyone in the English-speaking world who didn't learn about the existence of Peru from Paddington Bear?
Me
I'm not talking about after they train, I'm basically saying that in order to train on data or scrape it period, they would have to pay. Otherwise all data would be walled off. (Not sure if we could do this to only LLMs without making the internet closed again - that's a concern.)
Yep.
In retrospect, I actually begin to wonder if the increasing tendency to throw up paywalls for access to various databases and other sites which used to be free access/ad supported was because people realized that machine learning models were being trained on them.
This also leads me to wonder, though: is there information out there which ISN'T digitized and accessible on the internet? Information that simply can't be added to AI models because it's been overlooked, because it isn't legible to people?
If I were someone who had a particularly valuable set of information locked up in my head, that I was relatively certain was not something that ever got released publicly, I would start bidding out the right to my dataset (i.e. I sit in a room and dictate it so it can be transcribed) to the highest bidder and aim to retire early.
Is there a viable business to be made, for example, going around and interviewing Boomers who are close to retirement age for hours on end so you can collect all the information about their specialized career and roles and digitize it so you can sell it and an AI can be trained up on information that would otherwise NOT be accessible?
There is actually a ton of information that has not been digitized and only exists in, for example, national archives or similar of various countries or institutions.
I hadn't actually realized that this was the case until I started listening to the behind-the-scenes podcast for C&Rsenal - they're trying to put together a comprehensive history of the evolution of revolver lockwork, and apparently a large amount of the information/patents is only accessible by going there in person.
This is fascinating, and it suggests that training AI on 'incomplete' information archives could lead to it making some weird inferences or blind guesses about pieces of historical information it simply never encountered.
I now have to wonder if there are any humans out there with a somewhat comprehensive knowledge of the evolution of revolver lockwork.
And now we have to wonder just HOW LARGE the corpus of undigitized knowledge is. Almost by definition we can't know how much there is, because... it's not documented well enough to really tell.
Well this is basically how C&Rsenal started their revolver thing... doing episodes on multiple late 19th century European martial revolvers and realizing that the existing histories are incomplete.
Probably the best one right now would be Othais from C&Rsenal.
I would guess that a huge amount of infrequently requested data is totally undigitized still.
Actually, another area that demonstrates this: I frequently watch videos about museum ships on youtube and so much of the stuff they talk about is from documents and plans that they just kinda found in a box on the ship. So much undigitized.
And this is my thought now, that he has a potentially valuable cache of information in his head he could sell the rights to digitize for use training an AI.
I don't know that he can really monopolize it--on the C&Rsenal website itself, there is a publicly-available page where they've put together a timeline of revolver patents. I think Othais's passion as a historian outweighs his desire to secure the bag.
At some point LLMs may be able to speak the True Dao. Their whole shtick is essentially building an object that contains multiple dimensions of information about one concept, yes?
The AI can craft the questions. The AI can ask them too. It's already a more attentive and engaged listener than many humans (me included).
I know something the superintelligent AI doesn't? It would like to learn from me? What an ego boost!
THAT question seems to be answered already. Audio recordings fed to an AI that can transcribe to digital words gets you there.
I mean, the internet pretty much thrives on that sort of information, which is what the ML algos are trained on anyway.
Option C: neither. It's just a tool, neither good nor bad in itself. What will make it good or bad is how we use it, which remains to be seen.
I think there is yet another point to make here. With current Large Language Models, we have systems that treat natural language as code - that is where the revolution comes from. Even before LLMs, there were multiple "revolutions" in which, instead of working directly with machine code, you could work with higher-level languages utilizing concepts more suitable for humans, as opposed to "data" in its raw form. This made programming incrementally more accessible to a wider population. Even things like the invention of the graphical user interface for operating systems enabled people to tell computers what to do with data in a more natural way, without arcane knowledge.
Also, on the level of algorithms creating novel things from simple inputs, procedural generation has been around for a long time. Giving a computer system some simple parameters and letting it run a simulation to confirm or falsify the end result was a standard thing in the past. Again, the key difference is that we now have a very powerful system that can treat natural language as code.
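A minimal illustration of that older paradigm - deterministic "novel" output from nothing but a seed and a count. The name generator itself is an invented example, not from the post:

```python
import random

def generate_names(seed, count=5):
    """Classic procedural generation: simple parameters in,
    reproducible 'novel' content out."""
    rng = random.Random(seed)
    consonants, vowels = "bdfghklmnprstv", "aeiou"
    names = []
    for _ in range(count):
        syllables = rng.randint(2, 4)
        name = "".join(rng.choice(consonants) + rng.choice(vowels)
                       for _ in range(syllables))
        names.append(name.capitalize())
    return names

# Same seed -> same "world"; a different seed -> a different one.
```

The contrast with an LLM is that here the "language" of inputs is a couple of numbers; the new systems accept the full ambiguity of natural language as the parameter space.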
You might be interested in this post on LessWrong which discusses scaffolded LLMs as natural language computers. I'd be curious about @DaseindustriesLtd's take on this as well.
Key points:
I find this framing extremely persuasive, and awesome in the true sense. If transformers can actually act as a new type of general purpose computer using natural language, the world will become strange indeed very quickly.
Tangential, but the term in economics you're touching on here is, I think, a Kaldor–Hicks improvement. It's not a Pareto improvement, but it is total-wealth-increasing, and it could theoretically be converted into a Pareto improvement with redistribution from the winners to the losers (assuming such redistribution does not have any externalities itself!).
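A toy numeric illustration of the Kaldor–Hicks idea (the payoff figures are invented):

```python
# Hypothetical payoffs before and after some policy or technology change.
before = {"workers": 100, "owners": 100}
after = {"workers": 90, "owners": 130}

total_gain = sum(after.values()) - sum(before.values())  # +20 overall
losses = {k: before[k] - after[k] for k in before if after[k] < before[k]}

# Kaldor-Hicks test: total wealth rises, so in principle the winners
# could compensate the losers (e.g. transfer 10 from owners to workers)
# and still come out ahead, turning the change into a Pareto improvement.
is_kaldor_hicks = total_gain > 0
winners_gain = after["owners"] - before["owners"]        # +30
can_compensate = winners_gain > losses["workers"]
```

The workers are strictly worse off until the compensating transfer actually happens - which is exactly the political sticking point.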
Over what time horizon?
I expect the deployment of machine learning to follow approximately the same path as every other labor saving technology humans have developed. In the short term it will be somewhat of a mixed bag. On the one hand we'll be able to produce the same/more goods at lower costs than before. On the other hand this savings will likely come with impact to the people and companies that used to produce those things. Over the long term I expect it will make people much better off.
Creative destruction!
Do you not see any difference between this paradigm shift and previous ones?
Not with respect to the fact that it will be net beneficial to humanity over the long run.
I agree with you, for what it’s worth.
I'd like to believe that, as it follows a well-established pattern. But honestly, what really happens if there's no work left for people to do anymore? It seems we'd have to count on some redistribution of wealth, UBI, etc. to ensure that the gains from the new automation don't just go to the owners of the automation (as much as I never thought I'd ever say that), or else people simply will not have the means to support themselves. Or, if the job destruction is localized to upper-class jobs, everyone will have to get used to living like the lower class, and there may not even be enough lower-class jobs to go around. The carrying capacity of society would be drastically reduced in either situation.
In other words, what if
means the death of large swaths of society?
As others have said, there will always be work to do! As long as humans have any problems whatsoever, there will be work.
How the heck does AGI reduce the carrying capacity of society? You'll have to explain this one to me.
Well, I'm hypothesizing that potentially, all (or almost all) of the solutions to all of the problems humans have may be covered by AI. If the AI is owned by a very limited number of people, then those people would be the ones who are the gatekeepers, and the ones that get most of the benefit of AI. Everyone will be paying these limited numbers of people for basically everything, and no one else would be able to make a living.
This is almost like Karl Marx's worst nightmare about the bourgeoisie owning all the means of production, ratcheted up to unbelievable proportions. I'm no communist, nor socialist, so like I said, I never thought I'd say this. But this is a fear of mine: that AI puts everyone out of work, meaning that no one can support themselves.
This doesn't really scare me. Elites generally enjoy the society they're in, enjoy feeling useful, and above others. I think the vast majority of people who could create an AGI would use it to solve most of their problems, get really rich, then use it to solve everyone else's problems with a fraction of their incredible wealth.
Going into the future things could get very nasty indeed, but at that point all problems relevant to humans right now will be solved. It'll be an issue for the next stage of intelligence in our species' life, hopefully, and I'd imagine we'll be better suited to solve it then.
That would be awesome! People (mostly) don't work because work is awesome and they want to do it. People work because there are things we want and we need to work to get the things we want. No work left for people to do implies no wants that could be satisfied by human labor.
This paragraph seems in tension with the idea of lacking work for people to do, to me. If a bunch of people are left with unfulfilled wants, why isn't there work for people to do fulfilling those wants? This also seems to ignore the demand side of economics. You can be as greedy a producer of goods as you want but if no one can afford to buy your products you will not make any money selling them.
I think there's an equivocation between present wages and standard of living to post-AI wages and standard of living that I'm not confident would actually hold. Certain kinds of jobs have certain standards of living now because of the relative demand for them and people's capability to do them and the costs of satisfying certain preferences etc. In a world with massively expanded preference satisfaction capability (at least along some dimensions) I'm not sure working a "lower-class" job will entail having what we currently think of as a "lower-class" standard of living.
I'm a little unclear what the "carrying capacity of society" is and how it would be reduced if we had found a new way to generate a lot of wealth.
I'm not an economist, and I know very little about econ, so it's very possible that there is something major I'm missing.
This is the part of my hypothesis that's tripping me up. Could you walk me through it?
Basically, let's say that we do fundamentally believe in capitalism (because I do), that a person should have to pay for any good or service that he receives.
And let's say that there's a person who is dying of starvation, because he has no job, because AI does everything better and cheaper than he can. Therefore, no one wants to come to him to do these tasks, because they'd rather go to the owner of the AI. How does this person get the money he needs to get the food he needs?
There exist people today who, due to disabilities or other conditions, are unable to support themselves financially. They depend on the charity of others, and in richer countries they may also get tax-funded disability benefits. If the development of AI caused a significant number of people to become unemployable, there is no reason why we couldn't just include them in that category.
If the claim that "a person should have to pay for any good or service that he receives" is to be interpreted literally, then that's not "capitalism", that's some extreme form of libertarianism, verging on parody. That would make even charity immoral. Real-life libertarians believe, at most, that people should be free to do what they want with their money, including giving it to charity. Maybe Andrew Ryan of Bioshock believes that donating to the poor is bad because it keeps them alive even though they deserve to die, but I doubt you could find a real libertarian who believes that.
I, too, "believe in capitalism", that is, I believe that a free market with some (limited) state intervention is the optimal form of social organization from a utilitarian perspective in the current technological environment. I don't believe that there is a universal moral law that people have to work for everything. If robots take all the jobs, taxing the robots' owners to provide income to the newly-unemployed would clearly be the right decision from a utilitarian perspective.
When I say "a person should have to pay for any good or service that he receives", I don't believe it as a moral thing, for the most part. I don't think it's immoral if someone gets something through charity. But I also don't think people should count on charity. Partly this is out of my own fears. I would hate living a life in which I was entirely dependent on someone else's charity to stay alive, where I had no control over my own destiny, no ability to provide for myself. I'd be terrified of starving to death all the time!
Also, even if I don't think it's "immoral", I do at least have an aversion to people believing that it is incumbent upon other people to provide for you (let's say if you're older than 18 and able). I'm against most of the arguments saying it's immoral for people to be rich, or saying that it's perfectly fine to just take their wealth by force, or painting rich people as monsters. However, true AGI may be where I would have to draw the line on some of my beliefs, due to the sheer magnitude of people who could be put out of work by AGI. In that case, we may have to put capitalism aside and move to a new model that works better in a post-scarcity world.
So, for this kind of situation to arise, it needs to be the case that the marginal value this person's labor can generate for others is below the marginal cost of providing them the necessities of life.
Notice there is nothing AI specific about this scenario. It can (and does) obtain in our society even without large scale AI deployment. We have various solutions to this problem that depend on a variety of factors. Sometimes people can do useful work and just need a supplement to bring it up to the level of survival (various forms of welfare). Sometimes people can't do useful work but society would still like them to continue living for one reason or another (the elderly, disabled, etc). The same kinds of solutions we already deploy to solve these problems (you mention some in your comment) would seem to be viable here.
It's also unclear to me how exactly AI will change the balance of a person's marginal value vs. marginal cost. On the one hand, the efficiency gains from AI mean that the marginal cost of provisioning the means of survival should fall, whether directly due to the influence of AI or due to a reallocation of human labor towards other things. On the other hand, it will raise the bar (in certain domains) for the marginal value one has to produce to be employed.
This is partly why I think it will be a long-term benefit but more mixed in the short term. There are frictions in labor markets and effects of specialization that can make it difficult to reallocate labor and effort efficiently in the short and medium term. But the resulting equilibrium will almost certainly be one with happier and wealthier people.
Frankly, at this point I'm just riding the tides. Whatever happens, happens. This will be like the fifth once-in-a-generation event I've lived through, and like the 20th doomsday scenario. I don't have the energy to care anymore. I have apocalypse fatigue.
But there could be a utopia! Unlike nuclear and the other scenarios, I think it's likely this moves us far closer to a utopia, soon.
People thought this about nuclear and the other ones too, if you remember.
There will be no utopia, because utopia is not a thing that exists. Our lives might get better and worse in various ways, but the idea of a perfect society, and by extension of moving towards a perfect society, has always been delusional.
No utopia, just a shifted technological landscape.
There could be a utopia, but it could only be achieved by either:
-
Changing the human race fundamentally to remove the desire for accomplishment or status.
-
Hiding the true nature of reality and creating a unique simulation for each human that would provide a fulfilling life path for that person.
Why do you think this is the case? And what does a 'fundamental' change mean?
The goal of the axial revolution has always been to improve ourselves. We are slowly becoming better, in my opinion. Less violent, more understanding, more focused on technical accomplishment. If we continue on that path and eventually eschew (most) status, is that a fundamental change or an incremental one?
Some delusions are worth chasing, my friend. Chasing the delusion of truth, intellectual honesty, and rigor led us to the Scientific Revolution, which brought us where we are today. Just because you don't think it's likely doesn't mean those seeking utopia are fools.
Utopia doesn't exist the same way any other ideal doesn't exist. Does that mean you shouldn't strive to be kind or love others?
There could be! And that would be nice! But like with giving in to doom-mongering, I'm also not going to get my hopes up, either. Realistically, whatever's going to happen is going to happen regardless of whether I get hyped up or stressed out about it.
Yeah, I try to keep a cool head as well. I'd love to quit my job and party till the singularity comes, but it may not be the best idea...
Once you accept that these are forces which you can't individually impact, the path forward becomes pretty clear.
Just set things up to maximize your chances of living to see whatever crazy future we end up with.
And maybe have some fun along the way.