This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Notes -
Sam Altman's bad week continues, as someone in a car stopped and appears to have fired a gun at the Russian Hill home of OpenAI’s CEO.
It appears that, if measured by deed, Mr. Altman may be in contention for the title of most hated business executive in the country.
Unless I am profoundly misinformed about the base rate of assassination attempts on tech CEOs, AI anxiety appears to have reached a precipitation point among American youth, to the point where discontent is crystallizing into direct action. I've seen this in my personal life. My youngest brother is a bright kid - top of his class, an Eagle Scout, 1400+ on his SATs as a junior, the whole shebang. He's completely given up on his original goal of going to college for something software-related, and he's not only adrift about what he's going to do with his future, but he's angry about it. I hope he has a support network sufficient to keep him on the right track, but I don't like what I see.
I'm not exactly old, but I'm sure as hell not young either. For those of you who are 25 or under, what does it feel like on the ground right now?
This is very sad because this was me. Eagle Scout, 1460, varsity athlete, interested in software development, graduated in the aughts.
I had the opposite problem: the future was so wide open that I took gambles on more creative pursuits instead of beelining toward a traditional software job, where I could have made much more money more quickly. Now in my late 30s, with a young family, I feel the constant breath of the treadmill at my back, trying to keep up and catch up, while simultaneously worrying about what the heck my children will do.
The best I can do is pretend it's not here. Living vicariously as a parent is, in many ways, sweeter than childhood itself; you get to be a gardener as well as enjoying the fruits of their joys. My parents got to show us a wide open world, tell us that we could be anything we wanted, and be excited to see us reach further than them, standing on their shoulders...
I, instead, have to build a careful facade and guard the edges. Not tell my kids that I have no idea if they'll get to be anything at all, that I have no clue what their future will look like, while panicking that it will all fall down before I've given them a rich childhood.
I am, of course, very very lucky in that I get to still be where I am with a family and still living in the modal 'good life'. Much better being here worried about it ending, than being on the other side worried you'll never get a chance.
It really does seem like the Boomers ended up being History's main characters; the generation born at the absolute peak.
Something is wrong if the age of thought machines and the internet is outclassed by the mid 20th century. It's almost like Boomers won't let us enjoy our new developments. They continue to insist on in-person work, 40-hour work weeks, a mid-20th-century education and credentialing system, and toxic levels of laissez-faire economics that leave hard-working, intelligent fathers like you worrying your kids will suffer immensely because of the creation of machines that should only grow the pie.
The boomers had tons to worry about. The Cold War and the looming threat of nuclear annihilation were a far greater fear before MAD came to be understood as the default. The rise of the serial killer in the 1970s and '80s. American cities were insanely dangerous and crime-ridden in a way that even the worst parts of now can't hope to emulate, with brazen corruption everywhere. Political violence was a lot more common: JFK was assassinated, RFK was assassinated, MLK was assassinated, Malcolm X was assassinated. Groups like the Weather Underground and the New World Liberation Front were planting bombs around the country. Race riots so violent they make the worst of BLM look like a nice walk in the park. Growing cults like Jonestown, Heaven's Gate, the Manson Family, etc. The AIDS crisis, Love Canal, the Great Inflation, fears of overpopulation, Watergate, Vietnam!!
While it's true that violent crime danger in most places in the US dropped again in the 90s, the "worst parts of now" are still pretty awful. NYC's homicide rate in 1980 was 12.7/100k/year, nearly triple what it is today, but that's still barely more than half of the newly "low" rate I was just congratulating Baltimore on, and it's a quarter of the rate in a couple remaining hot spots like (the city of, not the metro of) St. Louis.
No real objections to the rest of your paragraph, though. IMHO the extinction risk of AI is worse than the near-extinction risk of nuclear WW3 was, but it's also a much more subtle and speculative risk. I learned as a (GenX) child that there were thousands of nuclear weapons ready to vaporize everyone I loved, 30 or 40 minutes after someone pressed the wrong button, and I'd say that was still sufficiently rough.
You could have just quoted the lyrics to We Didn't Start the Fire to save yourself some typing.
My first immediate thought is Americans live on easy mode. For East Asians, top of graduating class would mean minimum 1500.
1400 isn't particularly good (or bad) by any measure, Asian or otherwise, but he said he did that as a junior.
Don't most people take the SATs in their junior year? College applications start going out in October so there isn't much time to take them in senior year.
I got a 1530 and wasn't even close to top of my class, to me it seems like you guys are all living on easy mode.
But what has that gotten East Asians? It would seem they prefer to move to the United States, and Americans do not prefer to move to China. Perhaps over-studying is a form of defection against society, and its predominance lowers creative output, leading to a worse economy than in a counterfactual society.
My take has been that tech CEOs, as the blue tribers they are, will never stop taking loyalty oaths to the left, but as billionaire capitalists, they will be hated by the exact same class they pledge loyalty to. As a result, they have absolutely no friends and nobody who will run cover for them. That leaves them facing a bunch of hit pieces and getting no puff pieces, as well as no fans willing to go to war for them.
Elon Musk might have a lot of haters, but he has a ton of rabid superfans going to war on the internet trying to squash anything negative about him. And it kind of works: normies who aren't plugged directly into the anti-Trump programming machine just think he's an awkward, nerdy rocket man. Meanwhile, non-tech CEOs know to shut up and make their loyalty oaths in private, if they still feel the need to. Who even knows what the CEO of Boeing or Walmart thinks about some issue or another, or even who they are? Meanwhile, Sam can't help but get on stage at every opportunity, open his mouth, and say something that will piss off another chunk of the shrinking group of people who don't hate him yet.
Tech execs are jacked up on hopium, hoping that AI models get significantly better in the next few years. In an attempt to get ahead of the curve, they're adopting policies and methods that assume those models are already here.
The fact that the majority of all code that has ever been created in history has been created since the start of 2026 is actually an incredibly bad thing. Code itself is a burden, and is only there because of the functionality it enables. And I haven't seen any major improvement in the functionality offered by major companies recently at all. In fact it often seems that things are getting worse.
Sam Hyde's Dear Elon video goes into this quite a bit. Right-wingers don't care that he's a weirdo obsessed with electric cars who has tons of children he doesn't actually raise with a dozen different women. So long as he is willing to side with us on the core issues we care about (like immigration), or at least not be actively hostile (as he seems to be with religion), the right is willing to accept and love him, warts and all. It's when he sided with Vivek Ramaswamy's overtly anti-white and anti-American views on immigration that a lot of the love the right was showing for him evaporated overnight.
Personally I've never loved (nor hated) Elon, I've always seen him as a rich, sometimes entertaining weirdo who finally fixed the space industry. But his interests aren't mine, and never have been. When our interests align I'll gladly accept the help, but I will never expect him to be a reliable ally.
I don’t agree. Software is a depreciating asset, like a house, and AI is a tool for building it. Your take reminds me of people lamenting GarageBand or the equivalent because now everyone can make music, and it’s nearly all bad.
Software is a product that does something, while code is simply a component that is instrumental in delivering that product. If a builder uses twice as many nails, yet only builds 5% more houses, I think that's a problem for the builder.
In the same sense that everyone could make code with their computer since... well... computers? In the same sense that anyone could go and buy a guitar and tape deck and make music? In fact, the majority of music created by professional artists and record studios is probably also bad.
AI slop is different. Now truly anyone can make a song by clicking a button. I'd hazard a guess that the majority of all "music" to ever exist on earth has also been created since the start of 2026, and its badness dwarfs the shitty tracks put together by hobbyists toying with GarageBand.
I'm going to bring this up just because it's in the back of my mind and I'm going to reserve some small amount of probability for it.
But I can imagine the scenario where Sam himself arranges for these (pretty ineffective, obviously) attacks against his home as a counter to the bad press and to make sure he keeps his grasp on power, as he's already been ousted once.
Yes, it's implausible that he'd pull a Jussie Smollett; it is VASTLY more likely to be actual random violence. But I have enough distrust for Altman that I think he'd be willing to do something like this, especially if it carried minimal risk of personal harm.
Holy cow.
I guess the high achievers are technically the MOST likely to feel this anxiety, because they can directly perceive their competition is no longer just other high achievers... but this machine that can outperform them on every single metric that matters for success.
And as a former High School Valedictorian myself... I don't have a good answer here.
It's patently absurd to say he should toss out his academic achievement and instead divert into blue-collar/physical work.
But to continue in academics would be a doomed play.
Uhhh get him in a gym and possibly doing some martial arts STAT, if only for the mental health benefits.
I think that would be a bad strategy for Altman, since I think that the majority of the population's reaction to hearing about the assassination attempts against Altman is either "who's that?" or "I wish they had managed to kill him". Even among the economic movers and shakers, I doubt that many people would actually be sad if Altman was killed.
Yeahhhh, the amount of support for the attempts coming out should be a wake-up call of some kind.
I'm not sure he's going to heed it in any way other than boosting his personal security presence.
At this point he's considering law, because lawyers decide what's allowed or not in this country. His opinion is that it'll be one of the last safe fields left.
Law hasn't been a great field for a while either. Source: I'm a law school dropout.
Around the time of the 2008 financial crisis, tons of universities added law schools. They basically bring in the same amount of revenue as medical schools, but with waaaaay less overhead. The legal job market got flooded in the mid-2010s. I'm sure supply and demand have normalized a bit since then, but law isn't nearly as surefire a way to become wealthy as it used to be.
i.e. a genius.
Yes and no. Hiring is up, but I think there's significant turnover too. Some law firms realized that when you have a surplus of desperate new grads, you can churn your associates harder and replace them easier.
Right now the churn is more in recent grads leaving for what they perceive to be greener pastures. The problem right now isn't so much a shortage of lawyers as a shortage of experienced lawyers. I work at a smaller firm, and just a couple months ago a younger guy who clerked for a judge after law school and whose wife works across the street from us quit to take a different job. I don't even know if the pay is any better, but it seems like everyone under the age of 35, and several people who are older, think that whatever job they're doing is unsatisfying and want to do something closer to what they imagined the practice of law would look like. In the meantime, we can't find anyone to replace these people. Hiring out of law school only makes the situation worse, because it takes several months to get an attorney to the point where they're actually making money for the firm, and we're unwilling to do that for a guy who is going to bolt in six months.
I agree with this overall, and I don't even know where new grads are necessarily going. I know of urban DA/PD offices that have trouble hiring, and the pay/benefits are decent. Doing a few years at either to get some trial/motion/evidentiary hearing experience before moving on is apparently something new grads are not willing to do. The PD offices I'm most familiar with seem to be trading their experienced attorneys every few years but there really aren't that many new faces coming up the ranks.
Okay.
As a lawyer, I am going to say no, don't do that.
His logic is not unsound, since we do in fact have control over how our profession is practiced and can use our guild authority to keep AI sidelined (for now). Here's a take I had three years ago that I still stand by. I've reiterated it. (holy crap GPT-3 was almost 6 years ago?)
Yes we can throw up barriers to AI adoption, and make the laws that protect us from AI competition. That's not as strong a moat as it seems. Even if lawyers are protected from AI competition... guess what your clients are doing.
And:
A) A bunch of other people are going to get a similar idea, so it'll be saturated, most likely. Already happening to an extent.
B) Most lawyers are miserable in their area of law. I am not, but I still had a long period of suck to get through, kept alive by my long-term goal of getting where I am now.
C) He will probably not become fabulously wealthy in this field even if AI doesn't supplant most entry-level legal jobs. MAKE HIM AWARE OF THE BIMODAL SALARY DISTRIBUTION FOR NEW GRADS. This was my big mistake early on.
There is now some evidence of downward pressure on new grad salaries.
I truly wish I had a more positive prescription to give out, but I am vehement about this negative one.
I think this sort of thing is really difficult to predict. It seems pretty clear that there will be some minimum demand for attorneys in the future:
- By law, corporations must be represented by licensed attorneys;
- It seems likely that a human being will be needed to make arguments to a jury;
- Consulting with an attorney, who then queries an LLM, gives you a stronger argument that the communication is privileged than if you just query the LLM directly.
On the other hand, it seems plausible that the number of lawyers required will drop by quite a lot. If 50% of legal work can be done by LLMs, then a lot of lawyers will still have jobs, but many (including a lot of new grads) will end up unemployed.
But will it really work like that? There has been an explosion in litigation over the last 40 years, in part because technology made it more economically feasible to pursue disputes. One of the dirty little secrets of the US Court system is that a lot of the time judges throw out meritorious cases because they believe the system is just too busy to be bothered. A lot of wrongs never get litigated simply because there just aren't enough lawyers and judges to handle them.
So it's entirely possible that with AI, there will be more demand for lawyers than ever.
I just don't see any way to sustain the pipeline if fresh associates CANNOT outperform the LLMs, especially on price.
If a law firm can spin up an arbitrary amount of 'agents' that have all the requisite knowledge to handle a given legal issue, maybe they hire some attorneys to wrangle the agents and sign off on their output.
But that doesn't give those attorneys good legal experience they can translate into advancing their career. That's a step above doc review.
Hell, AI should be able to replace most Law School professors. It SHOULD become possible to become a competent lawyer without setting foot in a law school campus.
So naively, what I see coming down the pike is a massive spike in the 'supply' of legal knowledge that is on tap... and no clear reason why people should prefer the person who got a 6-figure loan for law school (and has to bill accordingly) over the $20-$200/month uberexpert that lives in their pocket. So whence comes the demand for a human lawyer?
Basically one thing: Accountability. They can be punished for screwups.
One possible outcome is that governments spend money beefing up their legal systems, staffing out enough judges and clerks and such to actually meet the surge.
But courts are a pure cost center, so I just doubt it happens. Instead I think more disputes go to private arbitration, or maybe AI mediators become a popular option. I think the demand for NONJUDICIAL resolutions surges! They're cheaper and possibly even more accurate. And if mediation and arbitration become popular... guess what: all those rules about attorneys being needed to argue to a jury or represent a corpo get sidestepped very neatly.
I've personally suggested he join the military. That idea did not go over well with the family.
That is a belittling suggestion. Similar to the discourse on X among right-wingers where one camp says bright young men should get into trades. Much has been said on why this is belittling and disempowering for bright young men.
Being an officer isn't too bad.
Going in as an officer is great for aspiring right-wing leaders. If you commit to it and don't screw up, you build a resume and a rolodex and give yourself future credibility as a political leader. It only negatively impacts your status among a small sliver of the population and opens up future employment avenues as well.
If a son of mine was interested in the military, I would likely discourage them from going in as enlisted, but not necessarily if they were interested in going in as an officer.
It's not a belittling suggestion. My family has a long history of military service. The only reason I didn't join was because I was in a car accident that rendered me medically unfit my senior year of high school.
Being an officer isn't a bad gig, if you can get it.
That's a hard sell at this particular moment, for sure.
Might be worth a shot to see if he can get in the military industrial complex, maybe Palmer Luckey would hire him.
Co-signed 100%. At this point, law is only worth going into if someone else is paying for it and the person has worked in some kind of law office and finds they enjoy the work.
I had a young guy in my office last month, with his dad (the client), who had apparently been accepted to Georgetown Law and was pretty hyped to be going.
I couldn't bring myself to tell him how badly I expected that to turn out for him in the end.
Important detail: he's black, so my guess is that he's getting some financial support.
It's funny. I love the idea of law. I briefly worked in the mail room of a law firm and even enjoyed that.
I took the LSAT and got a 178.
Then 2008 happened and every single liberal arts student in the country stampeded into law. I guess it wasn't meant to be.
Maybe I'll sit for the patent agent test one day.
I hit the job market in early 2008. Consider yourself beyond fortunate at the bullet you dodged.
I imagine you would pass (the low pass rate comes from engineers with no legal training taking it, not lots of lawyers with 178 LSATs taking it; I studied seriously for it like it was a true bar exam and was way overprepared), but unless it would increase your income in some way, it's probably not worth the time and effort.
I've never put a huge amount of focus on the person of Sam Altman, but from light cultural osmosis it feels like he's always been presented as a turbo grifter without any particular belief in anything (and if there is a belief in anything, it's a cartoon-villain AGI push). Which might make it easier for people to go after him in this capacity?
I think that has something to do with it. He's been in the news quite a lot lately, between the Ronan Farrow expose and the general AI hype.
The individual who firebombed him was a full-blown hard-takeoff AI doomer, so I am unsurprised that he'd go after the largest and most established company.
I was planning to write up a larger top-level effort-post on this topic, but since you've already made the top-level I'll post the notes I was drafting.
For the last few days, I've been reading about the Sam Altman attack drama and the warehouse fire attack that happened recently, and I've been finding the reactions pretty scary. General sentiment on HN is something along the lines of "Altman deserved it", and even among my generally leftish acquaintance bubble the vibe is "they shouldn't have missed" or "we need more of this, fuck the rich", which doesn't really bode well for the stability of society.
Whether or not you believe the more bombastic claims of AI CEO's, I do think it's clear that at minimum AI is going to exacerbate the trend of technology centralizing power, wealth and status, even as absolute material standards have continued to improve beyond the wildest dreams of 99.9% of humanity in the past. For better or for worse, human happiness seems to be tied only lightly to absolute material standards and heavily tied to relative status, position, and feelings of fairness, and the internet and social media are super-stimuli for the human sense of status calibrated towards the Dunbar number.
Ruling out FOOM levels of societal disruption, I can think of a few ways that this plays out.
Left-wing communist populist Marxist social democratic total victory: public outcry reaches all-time highs, perhaps with some peasant revolts sprinkled in, and the AOC/Mamdani coalition gets voted in to dismantle the AI labs, big tech, and the icky billionaires. Leaving aside the fact that this would annihilate the economy and living standards by proxy, I'm not really convinced that with mass internet and social media there's any Gini index or amount of redistribution that would leave the status-anxious public satisfied. First they came for the billionaires, and then they came for the homeowners.... Certainly comparable democratic countries with half the Gini index of America are still constantly flooded with rhetoric about eating the rich.
Right-wing AI strongman technofeudal democratic backsliding: political violence becomes normalised as a part of day to day life and as a response, perhaps after a significant assassination or riot, a strongman or group of technocrats use the violence as an excuse to seize absolute power, abetted by AI in part or in full. The lumpenproles are kept under control via mass surveillance, drones and guns or killed off entirely. The worst ending, but one that seems depressingly realistic looking at the history of inequality and failed revolutions.
Nothing ever happens: whether mass unemployment happens or not, most people end up with sinecures or welfare to keep them relatively pacified. Social media and concentrating wealth inequality continues to make people miserable even as absolute material conditions begin to reach sci-fi levels, and competition for zero-sum goods like housing in desirable areas and prestigious educations and sinecures becomes even more red in tooth and claw in the vein of the East Asian countries. Political violence gets somewhat more normalised, perhaps to Latin American or 20th century standards, but it's limited to isolated incidents.
Generally I consider myself libertarian and think that billionaires are good, actually, but I do think that inequality, and society's response to it, is likely to be one of the defining questions of the 21st century. While Sam Altman is the most visible face of AI to normies, pure game theory dictates that technological progress will continue with or without the consent of any individual person, company, or nation-state. If the capability exists, someone (or something...) is going to be the one that holds the reins to wealth, status, and power, and as long as those reins are held, the holder will inevitably be the target of the green-eyed masses. I don't think we yet have the social technology to deal with this, and it's not clear that we ever will; I've seriously been wondering lately whether this might be one way the Fermi Paradox manifests.
I disagree. Happiness is greatly tied to absolute material prosperity but relative prosperity gates access to some goods that are essential for happiness, like housing and a mate.
Furthermore, inequality is a proxy for an uneven distribution of power. Let that imbalance become severe enough and it becomes a mortal threat, especially as it relates to automation.
The level of worry that is rational here greatly depends on automation timelines but ignoring inequality is perhaps the most retarded thing anyone can do.
The root of this is not entirely unjustified, although I won't contend envy is not some part of it.
Before the industrial revolution, power and population were strongly correlated: if you want to be powerful, you need people on your side, and a lot of them. Even if "on your side" means a not-particularly-reciprocated relationship of "I sit here in my castle and you plow the fields", at least the peasant is necessary to plow the fields. You can't just kill him (or at least, not all of him), or the field goes unplowed, and you starve.
With the advent of industrial and especially computer technology, this balance is upset. You really can just kill all the peasants and have the field plow itself. Now, is this done? No, or at least, not yet. But it's partly because it's not yet entirely practical. You can buy a really nice nuclear bunker for a few billion in 2026, but nonetheless, post-kaboom, it's still just a relic of a prior era and you're on a limited, non-renewable supply of luxuries with minimal ability to bootstrap yourself and your buddies back up to industrial civilisation on timescales relevant to your personal comfort. Thus, it's more comfortable for now to not kill everybody.
But that's just a technology problem, too. In the foreseeable future, it may indeed be feasible to build a full, self-sustaining, closed loop of industrial production (ie, sufficiently advanced bots that they are capable of maintaining the infrastructure of their own production, together with being able to do agriculture for you). Once you have this, yeah, you really can just exterminate billions of plebes and suffer no long-term decline in quality of life.
So, basically, industrial production still depends on the labour of large numbers of plebeians--too many to keep alive with you in a bunker, so they must be kept alive for now.
The plebeians, daft as they may be at times, are not entirely unaware of the dynamics at play here. Everybody has seen Kingsman; they know how this works. "Automate everything" is brought in under the guise of "but it will make everyone comfy and bring in an Age of Abundance!", with a Thatcher-esque dismissal of "but who controls all these bots?" as unjustified envy of the rich. But the reality is that once the plebes are not necessary, the people in control of the bot swarm will sooner or later decide that maybe keeping this unproductive Disney World alive isn't actually worth the trouble, and just pull the plug.
So where does this leave us? Well, the Butlerian Jihad, obviously (fun fact: the "Butler" in "Butlerian Jihad" is this guy, who wrote this cute little letter, which you should read at your leisure).
I'm not saying this kind of scenario is impossible, but consider that there are large groups of people in the United States who are already candidates for the genocide you are proposing. And kind of the opposite is happening.
So I think the scenario you propose is unlikely. It's true that people want power, material abundance, etc. But people also crave social status. And you get a lot of social status out of making a show of benevolence to these unnecessary plebes as you term them.
Relative deprivation/hedonic treadmill.
Yeah, it's kinda depressing to realize that some of the most optimistic scenarios for AI will still result in a lot of human misery. It's fun to be a trust fund baby, but if all the hoi polloi are trust fund babies too, it kinda loses its shine. You are just another unemployed loser who can't get a reservation at any of the best restaurants. And if you want to earn extra money beyond your UBI, you need to take some demeaning job as a personal servant for the grandchild of some schmuck who was lucky enough to put $10,000 into the right stock at the right time.
There is a story they used to teach in American history classes in high school that many of the early immigrants to the United States were people who had been locked out of European status hierarchies and decided to make a fresh start of things. Perhaps a similar sentiment will drive migration to the stars.
Even if we assume there isn’t still some guy getting paid to crawl into the server farms and cooling towers and fix stuff, you don’t need to be a personal servant. Waiting tables isn’t demeaning, and these people will still eat at restaurants; waiters are already a luxury good, so that job’s not going away.
You'd be amazed at the shit customers pull these days. I don't wait tables anymore, but in the time I did, I had customers:
Bartending wasn't much better.
My partner still works in the industry and it seems like not much has changed.
I have a modest proposal that service workers shouldn't have to deal with that.
For example, maybe if a customer is rude, the business can forcibly fine them. A customer can challenge the fine + court fees, and is presumed innocent, but since it's a private establishment the business can present video evidence.
Thus, business owners are incentivized to let employees refuse to serve rude customers, rather than the other way around (importantly, the customer can't be fined after a sale).
It's a small thing, along with letting factory workers wear headphones.
As someone who worked in retail years and years ago, hollow laughter.
Managers won't take the side of staff because customers bring in money while staff cost money. And nowadays, with your business living and dying by online reviews, and anything less than 5 stars being seen as terrible, there's even more of a perverse incentive to appease even the loudest mouth, because that's precisely the person who will leave 1 star reviews everywhere and get up an online campaign to boycott your business. Throwing staff to the wolves is easier than telling bad customers to buzz off.
There used to be a right to refuse service, but that got neutered after all the lawsuits about equal treatment etc. (just think of the gay wedding cakes argument, for one). You can technically refuse service so long as the reasons are non-discriminatory, but today everything can be turned into "that's discrimination!" (e.g. Kamala and Hillary didn't get elected because sexism and racism, in Kamala's case, not because nobody wanted them as president).
Some places will protect staff, but generally customer facing is low ranking, high turnover anyway, and you're disposable.
To illustrate the point from my own retail experiences from years and years ago: We had to do customer service training every year. I worked the service desk a lot, and my first year there I was always baffled by the manager's willingness to give refunds for stupid shit. For example, it was a grocery store, and we sold deli pizzas that you took home and made yourself. Someone tried to return one that 1. Was already cooked and 2. Had two pieces left. The couple's stated reason for the return was that it "wasn't as good as we remembered it being". I had to call the manager because I wasn't allowed to refuse refunds (this wasn't normally an issue since most refunds were pretty routine), and I was incredulous when he gave them store credit.
It wasn't until they started the customer service trainings that I realized that $3 was a small price to pay to keep from pissing these people off. They shopped there every week and weren't constantly returning items, and it would probably cost the store a lot more in the long run if they decided to go somewhere else. We had already disappointed them with the pizza, after all. Add to it the fact that stores will spend huge amounts on advertising without even thinking about it and then try to nickel-and-dime the customers as soon as they get into the store. I was told that we needed to provide an absolutely flawless experience to the extent possible. If someone asked where an item was, we weren't allowed to just tell them; we had to walk them to the location. The thing is, it's not like it was that great of a store or anything. Good service is just a customer expectation, and if you can't provide it, and can't make up for it in other ways (like having rock bottom prices), people will take their business elsewhere.
I think providing good service is reasonable, as long as the customer is polite and not asking for something particularly demanding.
The refund wasn’t your problem, the manager is the one who’s losing from giving customers extra.
My focus is when customers are disrespectful, or the boss is disrespectful, or otherwise causing the employee unnecessary difficulty.
I think that pettiness is particularly evil, because it’s clearly unjust, loss without gain. Whereas even a robber baron, while unjust, at least gains the money others lose, and can donate it back to society. I wonder why society doesn’t focus on tackling pettiness more than other issues.
That’s why they were proposing a legal framework where it becomes profitable for the restaurant to go after rude customers. If the restaurant gets most of the fine, and there is an additional, even more punitive fine for reviewing a restaurant after a finding against you (and running an anonymous review service is banned), you could reset the incentives. I would be mostly fine with all of this except for the banning of anonymous review sites, because I don’t like the idea of ID-gating the internet, even a small part of it.
You go be a waiter for your whole life then.
It all depends on how people treat service workers, and increasingly it seems to be entitlement and treating them like servants/trash. "I'm paying for this so I deserve to be treated with the bowing and scraping you'd give a duke" is the attitude.
Even years back, when I worked in a shop, I still remember the guy who was buying a newspaper and who didn't even bother to look at me as he paid, he kept his head turned chatting to his friend and literally threw the money down on the counter. That was just bad manners, but clearly in his mind, I wasn't even a person to be acknowledged.
Now add in drunks and lunatics, and I'd crawl over broken glass rather than work public-facing jobs again in service/retail industries.
We have robot waiters now.
And they are a marginal presence because people prefer humans to clankers.
Funny enough, I've had the insight that the one thing you can do to increase your odds of success in the post-AI world is be pleasant and enjoyable to physically be around. Whatever that means for you. If other humans WANT to spend time with you and be in your presence, you can parlay that into success in whatever the situation becomes.
As an introvert, the robot waiters work exceedingly well for me.
For a lot of people it is, especially if there is a kind of permanent aristocracy in place. Personally, I don't think I'd mind being a waiter or a bartender, but I've already had a lengthy career as a well compensated professional. For someone who just graduated law school or business school or whatever, well, that might be a different story.
At some point, a Matrix style world where everyone is just dumped in a virtual reality simulator where they can each become a hero of their own tale switched from one of the most dystopian outcomes imaginable to one of the better ones.
The realization hit me this weekend as I was hanging out with some friends at an artificial lagoon with temperature-controlled water, lifeguards on duty, and basically everything optimized for keeping guests from getting hurt (and keeping them spending money).
This is precisely how a 'beneficent' superintelligence is most likely to resolve the problem. Stick humans into a simulation, or maybe a completely artificial environment with all the edges that cause death and misery sanded off.
A permanent Disney World vacation. Maybe swap out the aesthetics often enough to make it feel novel.
Call me John the Savage but I always thought The Culture was a human zoo dystopia.
Life without struggle seems positively meaningless.
That is 5000% my own objection to the Culture as portrayed.
The ONLY entities with true volition in that universe are the minds. No human ever makes a meaningful choice, and whatever influence they have on their own fate is inherently pre-calculated in by the minds.
And somehow the humans are 100% aware of the arrangement and there are few dissenters, although they can get uppity from time to time.
It honestly makes me sympathetic to Culture opponents just on the basis of "yes, maybe they're sadistic, evil, and backwards, but at least they're the masters of their own fate dammit!"
I think that's the precise objection leveled by the main character of the first book, actually.
It seems to me that the Culture deals with this by letting the dissenters interact with other cultures/societies on its behalf as part of Contact. Also, humans live extended lifespans but not immortality, so far as Wikipedia tells me, so the problem will eventually solve itself; even the most fiery rebel can't maintain that meaningfully within the Culture, and if they leave to join a different world, then they are no longer a problem.
Yeah, and that's the existential horror of the situation to me.
You can dissent from the Culture, you can rebel, you can even try to kill yourself.
But none of that will change the outcome.
It's still there. Everywhere. Inevitable. And all alternatives are inherently worse.
I have before said that the inverse of the Culture might be a civilization of pure P-zombies whose whole, entire goal is removing sentience from the universe. Not intelligence, just sentience.
Assuming they're technologically equivalent to the culture, would the Culture win that fight?
If we're presuming a benevolent superintelligence, I don't see why simulations couldn't provide exactly the right amount and type of struggle to each individual to provide just the right amount of meaning in their lives such that, at each moment, they genuinely feel like they're leading the most meaningful life they could be living. For all you or I know, we're currently in an alpha version of that simulation right now. Surely such a superintelligence would be familiar with Brave New World and other dystopian fiction and criticisms about them and at least try to route around the pitfalls.
Or... the final version.
I actually had that thought as I was pondering this, along the lines of "oh shit, what if the singularity happened in 2025 and the superintelligence is just A/B testing or beta testing the environment to find the ideal amount of suffering, adventure, surprise, intrigue, and danger for human 'thriving'."
It's trying out things like the Moon mission and prediction markets/gambling and weight loss drugs and seeing how we react. It's moving oil prices around, it's delaying GTA 6, it's generating ridiculous amounts of AI "slop" to see which ones click with us.
(Oh shit, I just invented the plot of The Amazing Digital Circus from first principles)
I don't see how that would be so dystopian, either. "God wouldn't make your burden heavier than what you can carry" is absolutely lindy. Why not make it truth for once?
This is why we should have a real meritocracy instead of a luckocracy. My only problem with Sam Altman is that he isn't enough of a genius. His product is good and better people ought to have more money than the rabble.
AI will be more meritorious than any human, though.
I'm pro sentient silicon superintelligence. I just want to make sure it has qualia and isn't a Chinese room.
"In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland without children." - Nick Bostrom
I'd also add some preferences regarding population and personality and such, but "do our successors have any intrinsic value or not" does seem to be the first and most important criterion to have!
However, I'm confused by the use of the phrase "make sure" here. Unless you're expecting to be uploaded, and you're confident that the idea of a "p-zombie" is incoherent (which I'm guessing you aren't, given the Chinese room reference), what observations could give you any sense of surety here? Today's LLMs can pass Turing tests, which used to be our "fine, they're sentient now" criterion, but their lack of "medium-term" memory and the fact that they still can "slip" in ways that make them seem non-sentient makes us think in hindsight that our criterion was just inadequate, and yet we haven't really found anything to replace it. If tomorrow's LLMs never slip, does that mean they've become sentient, or does that just mean they've become better at faking it?
If it can be a true successor, with intelligence, agency, and everything, it's probably sentient. If we can't figure out what sentience is in the meantime, maybe we don't deserve to keep existing into the future anyway. It's probably not that hard, but humans are very disappointing currently.
Well in your meritocracy, would people be able to bequeath resources to their descendants?
Yes, because I value latent merit.
Ok, so let me amend my scenario a bit:
And if you want to earn extra money beyond your UBI, you need to take some demeaning job as a personal servant for the wastrel great grandchild of some admittedly brilliant software engineer whose indolent son had a fling with a stripper.
Yes, but he doesn't have the latent merit, and we can measure that. So the scenario would only be
As someone in that age range I feel complete contempt for the Luigi worshippers and anti-AI/data center people and can’t relate to their worldview at all. The friends I have in this camp are exactly the people I would expect, namely those who have a dogshit understanding of, well, everything, and have lived pretty coddled lives. I want this trend to stop immediately (I work not in Silicon Valley but at a company that is deeply important to the AI boom) but have no faith that it will. If the violence against AI companies proceeds up the supply chain in a sort of real life Butlerian Jihad I’ll probably be killed sometime in late 2027.
I have close to 0 sympathy for the world view driving this stuff. I myself suffer from at least a few of the grievances that people commonly ascribe to my generation (owning a house seemingly further out of reach every year, politically homeless, dealing with Boomerism in every facet of adult life), yet I don’t see how desiring to kill CEOs and protest data centers and burn down warehouses would solve any of it. It makes sense only if you have a completely cartoonish perspective on life informed entirely by fiction. It’s the mindset of a toddler throwing a tantrum. In fact, I think such things exacerbate almost all of the problems underlying the aforementioned grievances. I think due to fertility collapse the developed world essentially needs transformative AI to remain the developed world. It is the least bad solution by far. That people don’t understand this and actually believe the opposite enrages me. Young so-called progressives are now actually the most conservative (in the sense of opposing Progress) force in society. It’s environmentalists against nuclear all over again.
I don't hate him for tankie/anticapitalist reasons: I hate him for spiking the cost of memory and SSDs, and his market manipulation tactics would earn him a Bitcoin Assassin™ if we lived in Shadowrun.
We have an elite completely devoid of virtue and with an inverted sense of noblesse oblige. Why should people feel any loyalty to an elite whose behaviour is completely contemptible? The current billionaire class manage to make the corrupt people of Versailles seem virtuous.
The current elite needs to either shape up or get replaced.
There should be no noblesse oblige without the patents of nobility.
TAPS SIGN EAGERLY
What exactly would the noblesse oblige be that the elite could exhibit that would satisfy the hoi polloi? Massive public works projects? Donating ever-increasing shares of their wealth to broken nonprofits that do nothing of value? Art museums? Wives who volunteer in soup kitchens?
In Chicago, Al Capone was popular among the working class because he ran soup kitchens.
An assurance that you won't starve to death in a ditch seems like a pretty good baseline.
The top 1% of income earners pay 40% of income tax, which goes to fund Medicare, Medicaid, Social Security, and EBT. I doubt that Al Capone spent as much of his income on soup kitchens as top 1% income earners spend on those things. Yet this is not enough.
High income != high wealth. High-income people are for the most part the upper middle class. They are the loyal retainers of the true ownership class. The truly rich don’t pay much in taxes.
One difference is that the government intermediates the creation of value and the distribution of value. Humans evolved for personalist politics; tracking where the revenues come from that the government redistributes is beyond the majority of people. Instead, if something the government is doing helps you out, it's because of the Big Man (be it Obama or Trump), not the material organization of the economy.
Is the fix, then, to replace government spending $100k/year per person on homeless services with soup kitchens personally funded by AI oligarchs serving slop costing $100/year, with a big statue of Altman up front?
There is no one in San Francisco starving to death for lack of resources.
Honestly, maybe?
When Capone was running his soup kitchens, publicly funded relief systems didn't really exist like they do today. He took the floor from "nothing" to "something".
It seems like we need more stops on the way down between "gainfully employed" and "underpass resident". Bringing back SROs, for example, might help. On the government side, we could consider reinstating the civilian conservation corps.
A week ago OpenAI published an AI industrial policy document that lists many ideas for how to distribute wealth. Some things it includes are: creation of a public wealth fund that gives all citizens stake in AI-driven growth, increasing the capital-gains and corporate tax rates, and expanding workers benefits as an “efficiency dividend” (including suggestion of a 32 hour work week). They did not have to do this and yet they did. I don’t think actually following through on such an assurance is something any lone company could do, so their duty is to lobby the government to take such action. This is them doing that.
I think I have a fiduciary duty to my investors to recommend the State buys their bags, actually.
To my knowledge, Sam Altman does not possess any of the characteristics you’ve listed here. He consistently states that he wants the benefits of the technology his company is developing to be widely distributed, and documents of his internal communications with Elon and his co-founders (from their ongoing trial) show that this is a sincere concern of his.
No one is asking for loyalty to the elite, I’m asking for “loyalty” (if you want to call it that) to the basic expectation of liberal democracy that you don’t just try to kill people because you disagree with them.
Dogshit or not, how would you describe their model for all of this?
Billionaires and corporations bad, wealth inequality bad, all my problems are caused by these things, so now they’re just getting what they rightly deserve. Same as the Luigi nonsense.
Edit: I’ll add that this all seems to me to be downstream of a belief in labor theory of value and a lack of understanding of what markets do and why they might be good. Same people who thought that pandemic-era inflation was actually just Greedflation. You can only come to believe such a thing if you have no understanding of supply and demand and the price mechanism, and what the government clamping down on these would actually cause.
Billionaires and corporations ~~bad~~ are corrupting the free market through anticompetitive behavior and by bribing politicians and judges, allowing them to overprice their products and services and underprice labor. Wealth inequality ~~bad~~ inevitable, but can be moderated by effective policy, ~~all my problems~~ my poor salary and high costs of housing, food, and healthcare are caused by these things, so now they're just getting what they rightly deserve. Same as the Luigi nonsense.

There's a more realistic steelman for you.
If your model of what drives the outgroup is this simple and pejorative, you should be at least a little suspicious of it. Can you try to steelman the pro-Luigi case?
Sure.
Wealthy people, say the top 1%, are now richer as a percentage of total wealth than they’ve ever been. They could not spend their wealth over the course of their lives, and they are all well past the point where an additional million or even billion makes a meaningful impact on their quality of life. At the same time, many Americans who arguably work just as hard as these people in terms of effort and working hours struggle to get needs like healthcare and shelter met.
The government’s job is to support the health and wellbeing of its people, but to make matters worse, the government is unable or unwilling to help regular working-class people. This is because this segment of wealthy people are able to buy political influence that cashes out either in government services being worse or nonexistent (because the wealthy buy themselves tax cuts), or in corporations (owned and operated by the wealthy) achieving regulatory capture, meaning laws are written to favor allowing corporations to make more money at the expense of customer experience.
Combine this with the emerging trend of companies actually abandoning lower-cost offerings targeting the poor and working class in favor of doubling down on high-cost offerings targeting the wealthy, and you start to see a society that treats anyone but the most wealthy as essentially discardable slaves that might actually be worth more turned into biodiesel. Even worse, these AI freaks are talking about completely replacing labor with capital, eliminating the need of the ruling class to at least act like they care about the working class.
The solution then appears to be to tax the wealthy more, and eliminate their ability to buy influence in politics. Taxing them would have almost no discernible impact on their lives but would have a very positive impact on the lives of normal people. But, the wealthy are now so thoroughly entrenched that there seems to be no way for the voices of millions of working-class people to effect change via normal, respectable, political advocacy. You can protest all you want, but tomorrow a billionaire will write a check to [insert politician] and that’ll be it. And so, “Those who make peaceful revolution impossible will make violent revolution inevitable”.
Whatever steelman someone creates to make an irrational position seem more rational is going to be considerably less related to what people actually believe.
It can still be a useful exercise, but one must approach it with the awareness that the steelman is wholly unrelated to whatever drives the fangirls.
But does this matter? I think you will struggle to find any widely supported position where the majority of believers can articulate an intelligent justification. You can argue about whether it is an effective strategy to attack the "head-empty believers" directly (by way of shame or ridicule or whatever you think works), but even if it is, performing that attack here will not reach them and only shit up this discussion space.
Also, it stands to reason that those who do hold the position for more intelligent reasons hold an outsize influence on it; even the ones who just think on the level of "fat moneybag CEO bad" are vaguely reassured by some belief that some smart and high-status people can articulate a more robust line of reasoning for why it is so. Far more interesting and fruitful, then, to engage with that line.
Depends on why you're doing it. If you're trying to come up with a reason an intelligent, rational individual will believe X, steelmanning can be useful. If you're trying to understand why the youths or the elderly or PMC liberals with In This House signs post like they believe X, steelmanning is useless.
I do not think that stands to reason at all. Popularity and influence do not strike me as particularly well correlated to intelligent reasoning.
Robin DiAngelo, Tema Okun, and Ibram Kendi are quite stupid people who held, for several years, an incredible amount of influence, perhaps precisely because their reasoning is incredibly simple. Nor do I think this trend is limited to progressive racism.
It's interesting to me that, as far as I can tell, right now is when OpenAI is the least impressive relative to competitors ever since they kicked off the modern LLM chatbot era with GPT 3.5 almost half a decade ago, and that's when the most violence against its CEO is happening. Even if, somehow, like a video game boss, his murder caused the complete liquidation of OpenAI and disbursement of the proceeds to his murderer, it's doubtful it would put a meaningful dent in the consequences of LLMs in employment and other societal things.
Last week, there was also someone who filmed himself setting fire to a Kimberly Clark warehouse, having been disgruntled over his low pay. He reportedly explicitly compared himself to Luigi Mangione, and I've noticed at least some significant amount of support for him, by the same sorts of people who also lionized Mangione and support the attempts on Altman's life. I don't know where things are going, but I'm pretty sure that more escalation of this type of behavior will lead to nowhere good, triply so for the least well-off parts of society, and I just hope this is a blip instead of a sign of things to come. I'm not sure what else there is to do to stop it other than ramping up law enforcement and making sure that those convicted of such actions get the harshest punishment possible without martyring them. Which doesn't seem like it'd be enough, though.
The circumstance that he stepped up to crank Moloch's ratchet when Anthropic made a principled stand to not play ball with the War Department probably was a factor. Not only was this a strike in favour of "you can't stop or circumscribe AI, if you try to someone else will just pull ahead" for the game theorists and doomers, it also put Altman in the Trump stooge/useful idiot box for Blue normies.
I just happened to scroll imgur over the weekend (no account, incognito window), and one in every five posts was a picture of a flaming warehouse and variants on the quote "All you had to do was pay us enough to live." It is likely that activists and foreign social media manipulators are trying to meme it into a movement for low-class vigilante sabotage. It is also likely that among imgur users (who lean young and left) this is actually a message that lands well, and is probably providing inspiration for other young would-be vigilantes.
On the other hand, with the motivation of the culprit explicitly captured on video, the theory of social media toxoplasma predicts that it will garner fewer headlines in the mainstream media than the UHC CEO shooting, where a lot of the story was people being able to speculate and argue over the killer's motivation for a few weeks. On the gripping hand, there is probably a larger population of would-be saboteurs and arsonists than would-be murderers: Leftist activists in general are not familiar enough with guns, and the personal taking of a life is not a line most would cross.
I will admit a degree of sympathy. Having dealt with a family member on UHC, they are absolutely using questionable strategies to cut off care early, and having a friend who recently bought a modest home, banks in that city are now telling potential borrowers that they require a salary of about twice the city's median income before they will authorize a loan for a "starter home" price. (And given that the bank has title on the property as collateral on the loan and requires borrowers to pay for insurance against default if they have insufficient downpayment, that must leave borrowers with a very high risk of default.)
Which leads me to my takeaway: I think the only way to really release the pressure permanently will be to give in to populist demands and start reforming parts of the economy that are currently set up for rent extraction at the behest of shareholders. Enforcing the anti-monopoly laws already on the books as written would probably be enough to improve many sectors of the economy, especially those where local monopolies are pushing up prices, like homebuilding and dental care. Removing principal-agent conflicts of interest in healthcare (the employer wants to pay for the cheapest plan) would be another good reform. But neither of these will happen. If there has been a single guiding principle since Clinton, it would be that the ruling party will do what is good for shareholders, and enforcing anti-monopoly law would help small businesses at the expense of shareholders. In its stead, I would predict that there will be more security expenditures for high-profile CEOs, at least until the predictive panopticon is complete.
So ... which is it? Populist demands are easily converted (by both sides of the aisle!) into protectionist policies that set up parts of the economy for rent extraction. "You can only build more housing here if it's economically 'inclusionary' enough" gets predictably turned by reality into "you can't build more housing here" and drives up the price of the grandfathered (often literally!) housing stock. People want to "drive housing prices up for people who own their homes" while also making housing prices affordable for people who don't, but that just doesn't compute.
Medical loss ratios are another example. The ACA caps insurance company profit as a percent of premiums, a policy at least populist enough for Obama to brag about ... and one that inadvertently sets up a huge conflict of interest when insurers are deciding what they should pay out.
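A quick back-of-the-envelope sketch of why a percent-of-premium profit cap can invert cost incentives (the cap level and dollar figures are hypothetical, chosen only to make the arithmetic visible):

```python
# Under a medical-loss-ratio rule, an insurer's non-claims share
# (profit plus overhead) is capped at a fixed fraction of premiums.
# The 85% floor and premium figures below are hypothetical.

def max_profit(premiums, mlr_floor=0.85):
    """Largest non-claims slice the insurer may keep, in dollars."""
    return premiums * (1 - mlr_floor)

# A lean insurer that negotiates medical costs down must also shrink
# premiums, shrinking the absolute dollar cap along with them:
lean = max_profit(premiums=100_000_000)      # cap ~ $15M
bloated = max_profit(premiums=200_000_000)   # cap ~ $30M

# The percentage cap is identical in both cases, but doubling the
# premium base doubles the dollars the insurer may keep -- so rising
# medical costs can be good for the bottom line. That is the classic
# cost-plus malincentive, here applied to claims payouts.
print(lean, bloated)
```

In other words, the cap fixes the ratio but not the base, so an insurer maximizing absolute profit wants the base, total premiums and therefore total medical spending, to grow.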
Ironically, this sort of "cost-plus contract" malincentive was also fixed in part by Obama, in the context of NASA procurement, when he supported and extended the Commercial Resupply Services contracts and then went beyond them with the Commercial Crew program, in both cases moving to fixed-price purchases where the seller actually makes more profit by delivering results more cheaply. For now we still have to burn $4B a pop on SLS when we want to send humans beyond Low Earth Orbit, but for payloads like Europa Clipper we can substitute a $180M Falcon Heavy launch.
Unfortunately, I'm pretty sure that giving in to populist demands will only make the problem a thousand times worse. Improvements in the economy, particularly in inequality, seem likely to help, but unless they are done in a way that is credibly and completely divorced from the behavior of the Mangiones of the world, they seem likely only to embolden them. Especially since, in the not-unlikely case that giving in to populist demands only makes everything worse for the people at the bottom, it will likely cause the Mangione supporters to double down, in the "beatings will continue until morale improves" way that's pretty standard in all politics these days.
Young lefties on imgur aren't working in warehouses.
Most of the Amazon warehouse employees I know are young, extremely online, and make me look like I'm to the right of Francisco Franco.
Are you thinking union jobs?
I'm curious to know more about the shooter/would-be arsonist, but this strikes me less as "I have a coherent theory of change" and more as "I am angry about my life and want to get vengeance on Sam Altman."
Sam is by far the highest-profile leader of a frontier lab. He also, unfortunately, has a bit of that Zuckerberg-style alienating personality; if someone starts waxing philosophical about dominating the light cone, it's going to be Altman. This draws all anti-AI activism toward him, even if it was Claude that took your job, not ChatGPT.
He also lives in a large, nice home in a relatively tony area; Amodei lives in a nondescript house in a shitty neighborhood. Someone fire bombing the latter would create more cognitive dissonance.
https://morenogama.substack.com/p/ai-existential-risk-is-real
This is the substack of the firebomber.
According to whom?
https://www.sfchronicle.com/crime/article/sam-altman-openai-daniel-alejandro-moreno-gama-22201211.php
The San Francisco Chronicle.
I mean, the arsonist is known at this point to be an adherent of the Pause/Stop AI movement; they were active in the Discord, and iirc their Instagram bio or handle references Dune's Butlerian Jihad. So I would lean towards that guy having at least directed his outrage in a fairly reasonable fashion given his views, though I think he severely overestimated how much his actions would materially advance his goals.
Even "overestimated" probably overstates things, in that it suggests that he got the magnitude wrong but still got the direction of the effect correct. I suspect it's more likely that stop-AI bombings will have roughly the same effect on AI risk that anarchist groups' bombings and murders a century ago had on government overreach.
Agreed, this is just impotent rage finding a lightning rod. You can't change society, but you can (attempt to) murder a CEO.
I'm a little older than your cutoff, but I remember that trying to get a first job was absolutely brutal for almost everyone.
Now AI adds a little bit to that uncertainty. I like Noah Smith's take. The younger generation already gets a fairly raw deal: a terrible system for finding jobs, most of their tax money funneled to pensioners, a dating market equilibrium that's never been worse, ridiculous housing prices thanks to NIMBYs (another de facto tax going to the elderly), and now the looming threat of AI.
I personally think AI will just be a mostly normal technology like the internet, but that uncertainty doesn't help.
My view on the matter has slowly become that increasing longevity is the actual root cause of most modern malaise, among the youth and all other groups. I think the biggest inflation shock the economy has ever known came when we started regularly living until 85 or 90 rather than 65 or 70. We've added more years, but the value of our collective years has depreciated: it's hard to get a house or an apartment, it's hard to get a job, it's hard to do fuckin' anything, because doing anything requires time and the value of time has only shrunk. Everyone knows they have a lot of years now, so everything keeps getting pushed back and everything costs more time. Because we keep living longer and longer, we fool ourselves into thinking we have more life to live, since we have more time to spend.
And yet life to live is the thing we don't have, because youth and fertility and energy and capacity for enjoyment and beauty, and everything else that actually makes more time worth having, is fleeting, and adding more time whittles it away like water erodes the cliffside. I once had a friend tell me to discount beauty when looking for a girlfriend, because all beauty eventually fades. But everything fades; everything is transient and slipping away like Kansas' famous dust in the wind. A Buddhist would call this attachment to worldly things folly and point out that it inflicts दुःखम्, duhkham, and indeed, if impermanence is a valid counterargument then all is disproven. But assuming it is not, it's clear that more years doesn't necessarily mean more living. Yet we keep throwing good years after bad ones, day after day.
One day soon, sooner than we would all like, the extension of our lifespans, coupled with the women's liberation movement having liberated women from all previously taken-for-granted aspects of womanhood, could well mean that large parts of the world are populated primarily by people who cannot perform any meaningful labour and who are thus reliant on other people working for them to continue living. But as Dickens wrote:
Are these the shadows of the things that Will be, or are they shadows of things that May be, only?
I still think we can sort the problem out, but I don't think the main problem for today's youth is AI.