This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
-
Shaming.
-
Attempting to 'build consensus' or enforce ideological conformity.
-
Making sweeping generalizations to vilify a group you dislike.
-
Recruiting for a cause.
-
Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
-
Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
-
Be as precise and charitable as you can. Don't paraphrase unflatteringly.
-
Don't imply that someone said something they did not say, even if you think it follows from what they said.
-
Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I'm thinking about the culture war around AI, specifically the whole UBI debate. If AI truly does take over a lot of human work, there are a lot of people on one side savagely agitating for a UBI, saying we'll be post-work. The other side of course says no, that's not how it works, and besides, we aren't even close to being able to afford that. The left (generally) takes the former position, while the right generally takes the latter.
What I'm surprised by is why nobody has so far mentioned what, to me, seems the obvious compromise - we just shorten the work week! As our forefathers did in forcing a 5 day, 8 hour work week, why don't we continue down that path? Go down to a 4 day work week, and/or shorten standard working hours to 6 per day?
If AI truly will obviate the need for a lot of work, how is this not a more rational solution than trying to magically conjure a UBI out of money we don't have? How come this idea has barely even entered the discourse? I have been talking and thinking about AI unemployment for years and have never once heard someone argue for this compromise.
Because it's significantly less effective for knowledge workers. As an analogy, consider Amdahl's Law for parallel computing: the amount by which you can parallelize a task is limited by the non-parallelizable component. Except it's even worse for team projects, where the non-parallelizable component is the meetings and coordination between teams, and that is a function of the number of people added to the project. The more people working on a project, the more overhead you have in coordinating their work, and the lower the marginal value of each additional IC. Often one talented guy working twelve hour days can outperform a team of 5-10 people, just because he has a complete mental model of the state of the project and can just do things without spending hours in discussions and consultations.
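As a toy illustration of that overhead argument (every number here is a made-up assumption for illustration, not data from any real project), you can bolt a Brooks-style communication term onto Amdahl's Law and watch team throughput peak and then fall:

```python
# Toy model: Amdahl's Law plus coordination overhead for team projects.
# All parameter values are illustrative assumptions, not measurements.

def amdahl_speedup(n, parallel_fraction):
    """Classic Amdahl's Law: speedup from n workers when only a
    fraction of the task can be split among them."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n)

def team_speedup(n, parallel_fraction, coordination_cost=0.02):
    """Same model, but each added person also adds pairwise
    communication overhead (n*(n-1)/2 channels, Brooks-style)."""
    overhead = coordination_cost * n * (n - 1) / 2
    return 1.0 / (1.0 - parallel_fraction + parallel_fraction / n + overhead)

# With 90% of the work parallelizable, pure Amdahl keeps improving,
# but the coordination-laden team peaks and then gets *worse* -
# a large enough team is slower than one person working alone.
for n in (1, 2, 5, 10, 20):
    print(n, round(amdahl_speedup(n, 0.9), 2), round(team_speedup(n, 0.9), 2))
```

Under these assumed parameters the team model peaks around five people and drops below 1x (slower than a solo worker) by ten - which is the "one talented guy beats a team" intuition in arithmetic form.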
Who's "we"?
If it's the government, then how? Currently, they can set incentives like full-time benefits at X hours per week and required overtime pay for >Y hours (X=30, Y=40 currently, IIRC), but they aren't anywhere close to banning work (outside of a few edge cases like long-haul trucking).
If it's the companies, then why? With a 30 hour workweek and a 120 hour weekly workload, they'd have to pay four sets of benefits instead of three, rent four workspaces, run training four times, have single-path tasks take 33% longer, and hold meetings with four people instead of three. If they're early adopters, then they'd also attract people looking for reduced time commitments compared to the standard, which is horrible negative selection.
If it's the employees, then who are they? Most people I know look for overtime, not temporary layoffs or unpaid time off. That suggests that their optimal work week is above 40 hours given their financial needs and time commitments. Heck, some people take multiple part-time jobs (which sounds horrible) because they want to work more hours than one job can provide.
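The employer's-side arithmetic in the benefits point above can be sketched with a toy staffing-cost model (the hourly wage and the per-head fixed cost for benefits, workspace, and training are invented for illustration):

```python
import math

# Toy model: cost of covering a fixed weekly workload when per-head
# costs (benefits, workspace, training) don't shrink with hours.
# Wage and fixed-cost figures are arbitrary assumptions.

def weekly_cost(workload_hours, hours_per_employee, wage=30, fixed_per_head=400):
    """Total weekly cost: enough heads to cover the workload, each
    paying hourly wages plus a fixed per-head overhead."""
    heads = math.ceil(workload_hours / hours_per_employee)
    return heads * (wage * hours_per_employee + fixed_per_head)

# Same 120-hour workload: three 40-hour employees vs four 30-hour ones.
# The wage bill is identical; the extra head adds pure fixed cost.
print(weekly_cost(120, 40))  # three heads
print(weekly_cost(120, 30))  # four heads - strictly more expensive
```

Under these assumptions the 30-hour arrangement costs more for the same output, before even counting the coordination and single-path-task penalties mentioned above.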
I'm not sure there's much reason for a UBI in a post AGI jobless world to begin with, you only need money currently to exchange with other people. Jobs exist to do things for other people so they'll do things for you.
Any jobless world then is one where either:
-
No one is alive.
-
People's desires that are capable of being met by a job are already met, so they don't work to do things for others, and those others don't work for them.
Of course there's a possibility that Person 1 has an automated life with no desires left unfulfilled and Person 2 has tons of desires, but that would only mean Person 2 has a job then, working to fulfill their own desires! And if there's lots of people who don't have their desires fulfilled, they can do what humans do now and participate with each other in trade of labor and resources.
Anything problematic regarding jobs is more likely to happen in the interim, where people get laid off and displaced in batches of suffering before they've achieved status of having their needs met without requiring others.
Regardless the greater problem here would be resource distribution. An AGI and automation might be able to do everything better and quicker than a human to the point there's no need for anyone to work ever, but eventually resources will run out. Maybe it'll be so super smart it even figures out how to prevent that, but the real issue seems to be
Group 1: fully automated life.
Group 2: they literally can't work a job because all the resources are guarded by Group 1's super robots, and they die.
Even with all labor automated, there could still be scarcity due to lack of resources.
Fairly valuing and distributing the different kinds of resources could be done via some kind of currency issued by and accepted by the AI overlords.
I think you also see changes in how quality is perceived: it's easy enough to put printed posters on your walls and sit in injection-molded chairs, but many (probably not all) who possess that sort of slop, to use a term coined by AI skeptics, will wish they had hand-crafted wood chairs and original paintings.
In some industries, ObamaCare already did this -- the mandate for insurance kicks in at 30 hours.
But in other industries this is counterproductive. Those have overhead: training, management and communication (synchronization) that doesn't allow work to just scale. In a competitive market, those firms would always rather pay fewer employees proportionally more -- and those firms would outcompete those paying more employees proportionally less.
So you cannot "just shorten the work week" -- at least not for a large sector of the economy.
I liked the suggestion elsewhere in this thread that "white collar" work is precisely the work where man-months don't interchange: 40 different rides with 40 Uber drivers is pretty much linear scaling, but 40 software engineers working 1 hour a week each won't get anything done where one working solo full-time might.
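That contrast between linearly scaling work and serial-core work can be caricatured in a few lines (the ramp-up figure is an arbitrary assumption standing in for onboarding and coordination cost):

```python
# Toy contrast: tasks that scale linearly with headcount vs. tasks
# dominated by per-person fixed overhead. Numbers are illustrative.

def rides_completed(drivers, hours_each):
    """Uber-style work: no shared state, so output is linear in
    total hours (assuming one ride per driver-hour)."""
    return drivers * hours_each

def productive_hours(engineers, hours_each, ramp_up_hours=8):
    """Software-style work: each engineer pays a fixed ramp-up /
    coordination cost before contributing anything at all."""
    per_person = max(0, hours_each - ramp_up_hours)
    return engineers * per_person

print(rides_completed(40, 1))     # 40 rides: linear scaling works fine
print(productive_hours(40, 1))    # 0: forty 1-hour engineers ship nothing
print(productive_hours(1, 40))    # 32: one full-timer clears the overhead once
```

Same 40 person-hours in every case; only the work where hours are interchangeable survives being split forty ways.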
Notably, medicine has adopted grueling shift lengths because shift changeovers are bad for patient outcomes.
It’s like when the Nazis designated certain sections of their populations to be “useless eaters” and then gave them a lifetime stipend out of the government’s pocket so they could continue uselessly eating. Or when corporations run the numbers and decide that ten percent of their employees aren’t making the company enough money so they decide the keep paying them anyway. Or when Pol Pot decided Cambodia didn’t need scientists or intellectuals so he gave them all a monthly check to stay out of everybody else’s way. Or how during periods of food insecurity, Inuit tribes would give most elderly and infirm members double portions of seal meat to make sure they don’t lose too much weight.
The civil wars will continue until the maximum wage returns to zero.
The problem with "well, it'd finally be a communist society" is that communist societies only work if the proletariat has a [distributed] monopoly on violence (and often they don't, or they lose it, which is why real communism has never been tried). From 1750 to 1900 (and even today, to a point), this was the hand-held firearm, but as soon as that decisively changes you can expect industrial-scale, 20th-century-style mass murder campaigns to make their triumphant return.
Of course, drones may just as easily not form another shot heard 'round the world, but killbots require highly advanced manufacturing and materials which are extremely capital-intensive. Which is why the average citizen, and in particular the average man, has seen his socio-economic standard of living decline over the last 50 years (hence why he is beholden to endless bureaucracy, the heckler's veto, environmentalism, etc. that didn't exist back when he was needed).
Can anything fix that? Well, maybe you can ask the AI how to build your own personal nuclear deterrent in a cave with a box of scraps (in which case things get very interesting; there may be a time period where haplocide is available at one's fingertips if the technology develops in certain ways)- again, it's not a sure thing that everyone just starts killing each other... just a likelihood.
Or when the richest cities in the richest country that ever was saw people shitting and smoking fent on the streets and spent billions on them to make sure they had a clean place to smoke their fent.
Yeah, the people potentially losing their jobs due to AI aren't as sympathetic as that bunch.
Techbros really have had horrendous PR as a class and it's not helped that people associate the average techbro with the people at the top of the techbro pyramid like Zuck and Musk etc. and so are happy with seeing them suffer, even though their suffering is often done to benefit people like Zuck and Musk etc. (less headcount, more automation).
What is this actually supposed to do? If you want to work 4 days a week, 6 hours a day, you already can.
Well, the real problem is that there isn't a finite amount of work to be done. The AI taking over a lot of human work because they can do the work of a bajillion people doesn't mean there's no longer work for humans to do.
For a while, yes, AI will only take over some jobs and parts of jobs and free up humans for other productive work. But sooner or later, it will mean exactly that there is no work left for humans to do. Sure there might be a handful of niches where hunter-gatherers outperform agricultural societies, and where a horse is preferable to a motorcar, and where calligraphy beats printers and digital displays, but they won't be necessary for the perpetuation of those who actually create value and those who actually call the shots. Sooner or later, humans will be useless eaters, and they will be optimized away.
And this in turn comes down to whether AI have fundamental limits of their own, which is a matter of some contention not worth typing too much on here.
I thought the whole point of the thought exercise above was AI being capable of doing everything humans can do to the point where humans doing work becomes optional.
The OP is raising a question of policy. That policy question rests on a premise, but the premise itself can be faulty.
You actually cannot in most of the white collar world; it's extremely inflexible. Also, it's supposed to increase human flourishing and give us more time to spend on things we want to do! Ideally help people grow.
Imagine this attitude back when work was 7 days a week, 12 hour days. Work is a necessity, ideally we live as well or perhaps work on projects more aligned to our souls when we have more free time.
I do agree that there's always more work to do. I think our modern economy doesn't value the type of work left to be done very well, namely spiritual / emotional / community work.
Working in the white collar world is a choice, primarily done for money. If you don't care about the money, you can already go to a different sector with less rigid hours. If you do care about the money, it's not clear how a four day work week will make as much as a five day work week absent fiat government transfers, such as UBI.
This is an evergreen argument that has always been made regardless of the tech level. Why was it not compelling enough before, aside from the need/desire for more money?
Note the lack of limiting factor here. What [necessity] makes four days a week of drudgery any more reasonable than seven days, beyond current attitudes? Why should it not be viewed as soul-crushing and the [necessity] of work be pared back to 3 days of work a week?
And rightly so. People terribly interested in how other people organize their spiritual / emotional / community affairs tend to be petty tyrants about how others should value such things if they themselves are not so preoccupied.
If you want to work for money you can also work 6 days a week instead of 5 and get more money, and yet very few people, even those who enter the white collar world for money, do this. If there's a societal shift, working Fridays is going to end up looking as quaint to Westerners as working Saturdays does to them right now (there are plenty of parts of the world where working Saturdays is normalized). We keep it at 4 days to start with because we need to take baby steps; it's a small move of the Schilling fence, and once it's normalised, and if productivity has gone up so much that we can shift over to a 3 day week as a society, then we'll do that, then down to 2 days and so on if general societal productivity allows it.
I assume this was meant to be some combination of Schelling point and Chesterton’s fence; otherwise I’m not sure what the pre-Euro currency of Austria has to do with fences.
Schilling fences are a recognised term going back to the Great Scott himself: https://www.lesswrong.com/posts/Kbm6QnJv9dgWsPHQP/schelling-fences-on-slippery-slopes
See "Schelling Fences on Slippery Slopes" by Scott Alexander.
The white collar world is exactly that place that's dominated by frictions that scale with the number of employees.
People don't value it -- or if they did, they would pay for it.
No reason I'm working 45 hour weeks at a big box retail environment - an honestly great and thriving business. But the inertia against the management team, much less the associates, working 30 hours over four days a week is absolute.
Due to the usual cost disease considerations, if the rest of society moved over to 30 hour weeks you'd see big pay bumps for 45 hour weeks in retail environments compared to the current situation.
It would only be a temporary solution, one that works for a world with AI that is as good as or slightly better than what we have now. It doesn't work in a world where AI does everything better and cheaper than humans, because our three days of effort are as valueless as our five days. No one really knows when we'll end up there, but I haven't heard a convincing argument for it being impossible, and it might be years rather than lifetimes away.
I also don't think UBI can be anything other than a temporary solution. Money has no value of its own; it represents created value. In a world in which humans don't create any value, why would the robots or robot-owning class want to exchange the real things they made for 'money' that's given out freely to everyone? What can they do with it? They can't buy anything that the humans produce because humans don't produce anything. They can only exchange it for things of robot-created value, which they already have.
To be honest the existence and shape of much of this discourse continues to baffle me. There's a discourse around AI causing unemployment, even though AI thus far has not caused any unemployment, and there isn't an obvious mechanism for it doing so. Isn't the evidence so far that incorporating AI into a workplace increases workload, rather than decreases it? It's always possible that this changes, but I'd at least like to see the argument that it will, rather than it just being assumed.
The pattern seems to play out time and time again - Scott's last post about China made me want to scream something. Where is the reason to think that AI is so militarily and economically significant at all? What if this is all nonsense? Isn't this all based on a vision of AI technology that has no justification in reality?
Maybe there's an AI 101 argument out there somewhere that everybody else has read and which passed me by entirely, but right now I continue to be incredibly confused by this discourse. We made systems that can generate text and images, but which are consistently pretty crap at both. Given time I can imagine them becoming somewhat less crap, but where do they pivot or transform into the sorts of devices that could cause massive technological unemployment, or change a war between great powers?
This just isn't true. Big companies are sacking people because of AI. Chegg, Salesforce, IBM, BT Group, Morgan Stanley... More are freezing hiring for juniors. Why are so many artists complaining about AI if it's not costing them anything?
Modern warfare runs on software. The logistics chains, communications, intelligence-gathering and analysis, sensors communicating with each other to guide missiles over 1000s of kilometres, electronic warfare... all of it relies on an extremely complex base of computer code that nobody really understands that well.
AI improves that. If your drones can't be jammed because they're autonomous and can find targets on their own, that's a critical military advantage. If your radar software gets optimized by some black-box AI to counter whatever arcane modification the enemy made to their jamming software, that's a major military advantage. Optimization of complex systems in unintuitive domains is a strong suit of AI. See AI-designed computer chips, Google has been doing that for a while. Modern AI systems are also useful for controlling high energy plasma in fusion reactor chambers, predicting the weather (obvious military and economic significance) and countless other complex domains. Cyberwarfare is another obvious domain where AI is relevant: spear-phishing, reconnaissance, actual infiltrations...
If you can quickly process huge amounts of satellite, infrared, aerial, sensor data to provide firing coordinates to your forces, that's a major military advantage. Not to mention fast translation of signals intelligence... There just aren't enough analysts to cope with all the data that militaries can scrape up.
Facebook is making billions and billions from its AI-optimized advertising, as are other big tech companies. Consumer-end text and images are just the tip of the iceberg.
It's not just 'producing crap text'. The text is valuable and useful. Domain-specific programs are valuable and useful. General text-generation (which is capable of doing advanced cyber tasks like writing kernels or performing cyberattacks) is valuable and useful. I can tell it's valuable and useful because people are paying billions for it!
Nvidia products are killing people at the front in Ukraine right now. Hell, an AI found me these links.
https://www.longwarjournal.org/archives/2025/06/ukrainian-intelligence-details-russias-new-v2u-autonomous-loitering-munition.php
https://isis-online.org/isis-reports/russian-lancet-3-kamikaze-drone-filled-with-foreign-parts
In conclusion, it's obvious and straightforward that AI is hugely important. That's why the great powers are racing to develop it, why the US is anxious about China getting AI chips, why the megacorps are investing hundreds of billions in it. The worldview of the AI-believer is simple and makes sense 'powerful technology - big investment - widespread use' whereas the AI-doubter is mired in weirdness 'mostly useless technology - big tech just throwing money down the drain for some inexplicable reason - no widespread use once you ignore most of the use'.
Let me ask a practical question. That's a lot of if statements you made there.
Has AI actually done any of those things? The specific examples you give of things that already exist are mostly speculative - all I can find about AI-designed computer chips, for instance, are hype stories in pop science magazines, rather than anything credible, and even they include the note that most of the AI designs did not work.
In general I am skeptical of the argument that goes, "I can tell it's valuable and useful because people are paying billions for it!" In a sense that proves that it's 'valuable', insofar as you can define value in terms of what people are willing to pay for, but none of that proves that it's useful. People are willing to pay vast amounts of money for obviously worthless things on a regular basis - NFTs are one infamous example.
I can concede a handful of highly technical niche applications - protein folding, plasma confinement, etc. - though even there I'm a little cautious. (I don't understand those technical fields, but in fields that I do understand, where AI is being hailed as a major breakthrough, the breakthroughs once analysed turn out to be, at best, heavily overrated.) But the AI-believer position, in cases like this, is that AI is literally going to make labour obsolete, or that AI is going to become superintelligent, achieve god-like power, and either usher us all to utopia or to utter destruction. And that's a position so far in excess of any reasonable estimation of what this technology does that I have to raise my eyebrows. Or yell at a blog post on the internet, I suppose.
It's obvious if you assume the models will improve up to, and then past human level intelligence.
At that point every job that can be done from behind a computer becomes trivial to automate. The remaining jobs become trivial once AI control of robots improves as well.
Now we're not there yet, and maybe we won't ever get there, but it's pretty hard to be confident one way or the other.
It continues to frustrate me that nobody seems to have found (or be seriously looking into, as far as I can tell) theoretical bounds on "intelligence", and some philosophers in these parts seem to assume that something "smart enough" can derive a complete physics, the universe, and divine the state of everything in it given nothing more than the text of the ten hundred most relevant books, which feels very ontologically lazy.
Although I'd be interested in reading anyone looking at this mathematically, presumably needing a very heavy dose of information and complexity theory. Links are appreciated.
Does a set of all sets contain itself?
Yeah, but now you're into the territory of religions, specifically those that suggests a deity actively maintains the (finite?) state of the universe in this way.
I think this includes a number of questionable assumptions built into the idea of 'human level intelligence'. The models we have now are very good at doing some things that humans struggle with, but are also completely incapable of some things that are trivial for humans. There isn't a unified 'intelligence' where we are at a specific level, and machines are approaching. Rather, human intelligence is a highly-correlated cluster of aptitudes; aptitudes which do not necessarily correlate in machines. It seems at least plausible to me that existing AI models continue to get better at the sorts of things they are currently good at without ever becoming the kind of thing we would recognise as intelligent.
Now on one level that doesn't matter - I'm just suggesting that AI might keep improving without ever becoming AGI. But AI doesn't need to become AGI to cause technological unemployment, or to give some nation or other a major military advantage, or whatever else it is we're worried about. I'd still like to know what mechanism we're predicting for that unemployment, or military advantage, or whatever else, because it is not immediately obvious how a language model produces any of those things.
AI and techno-futurism in general are the dominant religion of our times. They aren't seen as a religion because the worshipers ignore metaphysics and fundamental axioms. So yes, it's based on a religious vision a lot of the time.
Metaphysics and fundamental axioms preclude human level AI? Can you elaborate?
Depends on what you mean by 'human level AI'. I believe they preclude machine consciousness, but I don't know it well enough to explain it. I'd recommend the book The Experience of God: Being, Consciousness, Bliss if you want an intro to classical theistic metaphysics.
By "human level" I mean an AI that can perform tasks at the same quality as a human, such that you could replace a human employee with an AI employee.
I actually think consciousness might not be required for this level of AI. It's like how chess AIs are superhuman at chess without being conscious (probably).
I'm also not a theist. Any arguments about AI that rely on theism are not going to be convincing to me.
I think what he's saying is that techno-futurism is not perceived as a religion because techno-futurists do not make metaphysical or fundamental claims.
Personally I think this is mainly a semantic difference. It's not clear to me that there's a difference between "X is not perceived as a religion because X does not do these things typical of religion" and "X is not a religion". Isn't religion defined, at least extensionally, by the things typical of religion?
I don't think the concept of religion helps very much here. Better to just say that AI hype is a form of collective irrationality or delusive behaviour, if that's what he means.
Rational or not, companies are radically reducing full time employees (FTEs) in their long term plans (LTPs). This occurred over the past 2 years, but I’m actually seeing the hiring budgets impacted now. This is everywhere and part of the bad job market right now.
This is a direct response to AI. We can debate on whether or not that’s a smart reaction, but it’s most certainly happening.
Got any specific examples? Would this be something they announce in their annual earnings reports or something else?
For example, if you / Trump / Xi take ChatGPT5000 and type in
You can turn a text generation chatbot into a do-things AI by just asking it what should be done next and then following its advice… in theory. In practice that seems not to work well, and it’s not clear why.
Because it's just picking statistically likely responses based on its training data, so it can't really suggest anything radically different (or more insightful or creative) than the human-generated information it was trained on.
You’re correct but they can’t do bog standard everyday things like running a store either.
I think there are more fundamental issues related to
a: chaining multiple stochastic processes causes randomness to build up in the system producing wacky results (even with a supervisor agent since that is also a stochastic process)
b: a lot of the things that we do are ‘learn with your body’ tasks that aren’t adequately expressed with words.
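Point (a) can be illustrated with back-of-the-envelope arithmetic (the 95% per-step reliability is an assumed figure, chosen only to show how fast independent failures compound):

```python
# Sketch of how chained stochastic steps compound error: if each
# agent step independently succeeds with some probability, the whole
# chain succeeds only if every step does. Per-step reliability is
# an illustrative assumption, not a measured property of any model.

def chain_success(p_step, n_steps):
    """Probability that a chain of n independent steps all succeed."""
    return p_step ** n_steps

# Even a 95%-reliable step collapses over a long task chain.
for n in (1, 10, 50, 100):
    print(n, round(chain_success(0.95, n), 3))
```

Under this independence assumption, a step that succeeds 95% of the time gives barely 60% reliability over ten steps and well under 1% over a hundred - and a supervisor agent is itself just another stochastic step in the chain.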
Yes, but the average human being can't do that either.
Doesn't that just further underscore ChickenOverlord's argument? The position he(?) is arguing against is that AI will somehow get better than any human at this, and CO is pointing out that as currently implemented, AI isn't really analyzing anything except language and so is unlikely to outperform the human-generated data it's trained on. Seems to me you're just giving a further reason to think the bar of what it can do is rather low.
ThisIsSin's argument is that the bar of what AI needs to do is low - not what it can do.
The idea being that even if the AI can't surpass the best humans, it can learn from them in order to be better than the rest.
I've argued since at least 2015 that the US government should invest, on the behalf of its citizens, in AI and automation companies. In the event that such automation pans out, each citizen reaps the benefits through his capital stake. This is inherently solvent (there is no promise of continued UBI payments). It would only "pay out" if automation was in fact successful. And it would help unify US citizens, who would feel pride of ownership in their country rather than a beggar for handouts.
Unfortunately, it looks like the time to do this would have been 2015. Genesis notwithstanding, OpenAI, Google, and Anthropic are at too late a stage to need or desire government investment on behalf of US citizens.
You don't want a government that can do this. There is no incentive for politicians to do a good job with the investments, and every incentive for them to channel these investments to favored constituencies. You would get stakes in Solyndra, not Nvidia.
What about modern politics gives you the impression that something as simple as objective truth can cut through partisan affect?
Yes, look at housing. The government made a large investment in banking and housing finance a decade ago that has paid off, but the citizens are very, very unlikely to see tangible benefits from that investment.
Would've been a great idea, but yes far too smart and forward thinking for the U.S. Government to actually implement... sigh.
Four-day weeks already exist; they're a thing in the trades and come with their own set of tradeoffs, either as part of a four/three schedule or as a Mon-Thurs/Tues-Fri schedule. They seem to be slowly getting more popular.
Interesting given that the trades are the least impacted by AI. I suppose I'm thinking white collar work should be implementing some of this.
In general, you see four/three schedules at 24/7 facilities, to spread the undesirable hours around, and four-day weeks when no one trusts management to send them home at a reasonable hour.
AI taking over some human work doesn't make it practical for all humans to work less. It makes it so some humans are useless, while the others need to do as much work as they ever did (or even more!).
Don't think this is true at all. We can always find uses for humans, even if it's just serving others. Big disagree.
Serving others is no longer considered an acceptable use of humans. Further, the humans most likely to be laid off by AI are not well-suited for it. During the dot-com bust there were plenty of software engineers waiting tables in Silicon Valley, and it wasn't pretty.
This is obviously the correct solution. AI is going to reduce the need for human labour by increasing productivity; rather than transferring the fruits of this productivity to the owners of capital, it's much better to transfer it to labour instead by mandating a three (or even two) day work week as standard at the same pay as before, thereby not only creating a lot of jobs to counteract the job loss from AI but also giving people more of their own free time.
I've long been a proponent of a mandated long-term average (over, say, 6 months) 40-hour work week for investment banks etc. Sure, they can make you work a 100-hour week when a deal is close, but to make up for that they need to give you a week and a half off to rest and recover. If the IB wants to preserve its man-hours it can simply hire a lot more people; it's not like there's a shortage of capable people who want to go into that area, or that they don't have the money to do this.
The reason this doesn't happen is simply that the people at the top want to maximize their "PnL per partner", an argument I've come to see as more and more bullshit over the years (if you're happy with a yearly $2 million PnL per partner, you shouldn't be any more or less happy if the people at $RIVAL_BANK are making $0.5 million or $5 million in PnL per partner; anything else is just PnL envy and should be beaten out of you by the government).
Investment bankers work long hours because they want to. Well, not the individual analyst, but it's not a demand issue. The client (i.e. the CFO, possibly a few less lazy directors, and a few corporate development or treasury guys, depending on what it is, who actually halfheartedly read (skim) the pitchbooks) doesn't care whether that pitchbook is 20 slides or 300 slides. The modelling is bullshit anyway, because it's designed to produce a specific output the client wants, and again, everybody knows it. Everybody also knows that all the banks are interchangeable, that for any big normal M&A or E/DCM deal every major player is capable of achieving exactly the same result, and that in the end they will pick Goldman because nobody got fired for hiring them, or Citi because the CFO plays golf with the vertical head, or JPM because the CEO and the respective vice chairman went to boarding school together, or whatever it is.
The reality is that investment banking is, and has long been (probably since the late 1980s and the arrival of spreadsheets and digital data providers), hugely overstaffed. Analysts and associates shouldn't even exist; they have a role in equity research and on the buy side, and maybe as job titles in sales and trading, but in actual advisory it's a fake job. In 1975 the analyst was the guy who physically walked seven floors down to the corporate library, spent three hours with the archivist finding the 1962 annual report for Philip Morris so he could underline some figures and bring them upstairs, and then spent four days building, on paper spreadsheets, the most basic valuation model, vastly simpler and with more assumptions than whatever FactSet has already pre-generated today. The job is fake.
But everyone knows that clients have money and that you can't bill $100m on a mega M&A deal if even the client knows you literally have a 5-man team on the job (that privilege is reserved for the Robey Warshaws of the world), so it serves the industry to let juniors into the game in exchange for creating a hierarchy of fake-work make-work where cascading levels of VPs, associates and analysts invent pointless tasks to do.
It also gives you a chance to recruit the best connected from your analysts and mentor them to replace one of the people who matters as they age out of the game.
Even reading this makes me go "yuck" at the whole business model of these places. Prop shops etc. manage to produce more value per employee than investment banks do, while only working them 40-50 hours a week (notable exceptions excepted). All that talent which could be put to good use elsewhere to benefit humanity gets wasted on IB make-work.
I know places like Jane Street etc. are expanding out into more traditional type banking and trying to eat the lunch of these dinosaurs billing $100m on something that can be done faster and better by smarter people running a leaner operation but providing a more complete service for under $10 million (while the employees still work something resembling a 9-5).
Similarly, in the legal world, I know there are now barristers who, with their junior, bill around £700-£800 an hour but, as a one-two team coupled with a very hands-off instructing solicitor, produce more robust documents with a faster turnaround than the overcharging magic circle firms. Yet the MC firms still get a ton of business from clueless corporates, charging more to produce worse results, just because clients want to communicate with people who have "Clifford Chance" on their letterhead rather than "4 Stone Buildings", even though your average junior at 4SB is higher human capital than a partner at Clifford Chance.
Jane Street, Point72, Citadel etc. are massively scaling up their hiring. It's like back in the dot-com era or early 2000s, when people were saying Google would become one of the biggest companies on earth with only 5,000 employees, since you don't need that many people to maintain a search engine. Now they have almost 200,000 employees, mostly unnecessary, because everyone wants their fiefdom, and when you start making money everybody wants a cut. As the big funds and quant shops expand into traditional banking territory they will hire bankers (as they already are), and those bankers will bring headcount, because that is what humans do when there's money. Jane Street is growing headcount at something like 50% a year; revenue is growing faster, but my suspicion is that much of that isn't due to the headcount.
I take a rather dim view of barristering as a profession. Many barristers are great people, but it’s always seemed like a job for the Oxbridge debate types who don’t like to work very hard and who get paid insane hourly rates to re-enact the Oxford Union in court, RP accent and all. Top solicitor partners make more but they really do seem to work much more too. Maybe I just don’t understand it as a foreigner, but my barrister friends barely seem to work and get paid big time hourly or daily (not sure which it is) to regurgitate the same arguments, cadence and so on for new clients. Plus it’s the clearest example of an AMA-type employment cartel in the UK, because they deliberately restrict training places to a trickle so that fees remain extremely high. The UK decided that lazy people with high verbal IQ and the right accent who read literature or languages at Oxbridge ‘deserve’ a comfortable £120k a year and so they have this process of the conversion course and bar school and then the pupillage bottleneck to give them the job.
Investment banking isn’t really an 80-100 hour a week job. No job is; there is research showing productivity drops off a cliff after 50-55 hours anyway. Investment banking is more like one of those jobs where the boundary between personal time and work time is dictated by the role (like the army, or working on a ship), rather than a regular 40 or 50 hour a week professional office job.
For example, your investment banker friend says he works 9am to 1am every day. OK. Firstly, he’s not in at 9. You could walk through any bulge bracket investment banking floor at 9.05am and not even 20% of juniors would be in. The usual start time is maybe 9.45am, often 10 if it’s been a late night in the office. So your friend comes in at 10am. He drops his stuff off, then makes a coffee, checks his emails, all the rest of it. There’s usually no last-minute work to do unless it’s the literal day of a pitch, because the MD (who works from home 2 days a week, sees clients 2 days a week, and comes in from 8-4pm the other day) started reviewing changes to the deck at 8am over his breakfast in Surrey anyway. The junior reads the news, halfheartedly sends a few emails to the lawyers / client / whatever, attends an internal meeting and does some ‘research’ (a YouTube video and ChatGPT) for a couple of long-shot RFPs that the global vertical head wants to say the bank pitched for.
Then it’s lunch, quick trip to Farmer J, then a coffee, then a walk around outside, then he picks up some dry cleaning. Catches up with a colleague from another team over another long coffee. Then comes back to the office and sends a few more emails, MD comes back with a few small adjustments, maybe some light modelling, pull a few news articles to lazily include in a daily sector market recap summary he will send to some clients that they never read alongside the similar email from every bank and brokerage and research provider and newspaper. Then it’s 4pm. Your friend goes to the gym for a relaxed 90 minutes. Comes back at 6pm after a shower. The day really begins as the MD / director comes back with comments. VPs start getting more demanding. He orders dinner at 7 for 8. He eats, has another coffee, then gets down to real work while “chill beats to work/study to” plays in the background. He works through to 1am then goes home.
If you compare it to your archetypal hardworking 45-hour-a-week PMC, the banker still has time to get his dry cleaning, have a long lunch if he wants to, go to the gym for a long time every day (there are others who sit in the lobby or a cafe and read, take a nap, go for a swim, go shopping, etc), do any “life admin” he needs to (he can go home briefly at 2pm to let in the plumber if he wants, as long as he’s back later), and he doesn’t need to make dinner or clean up because it’s paid for every day. He actually has a lot of leisure time; my first staffer would play video games for something like three hours a day in the afternoon and nobody cared. He doesn’t even necessarily get less sleep than the average worker (the company car has him home on empty night roads by 1.30, he showers and sleeps, wakes up at 9, so he can easily get 7 hours of sleep). He just needs to technically show his face in the office by 10 and then do his actual work between 7pm and 1am, so the MD can review it over breakfast.
A network of N people working on a problem requires at least order log N overhead to synchronize their efforts and receive instructions/feedback. So unless someone accepts working 75% time for 50% pay, they are gonna naturally scale to working more.
Fortunately, log(N) grows really slowly compared to N. Doubling N only requires adding a constant amount of extra overhead regardless of how big your company is, which can easily be handled by big employers.
The true extra cost of doubling N is the doubling of the total salaries you'll have to pay out, not the O(log(N)) extra overhead. If AI increases productivity to the point where the former is viable, then the increased cost of the latter will easily be covered by a few extra months of productivity gains. Your argument is at best that this transition might have to be delayed for a few months to account for overhead costs, not that it's infeasible.
You're already incurring an O(N) increase in costs from the extra headcount by paying people the same while working them half as long; the O(log(N)) increase in overhead is a minor triviality compared to that.
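To make the scaling comparison concrete, here's a minimal sketch in Python. The cost model and unit prices are made up for illustration (they're not from the thread): salaries scale linearly with headcount N, while the synchronization overhead from the parent comment scales like log2(N).

```python
import math

def total_cost(n, salary=1.0, overhead_unit=1.0):
    # Hypothetical model: O(N) salary bill plus O(log N) coordination overhead.
    return n * salary + overhead_unit * math.log2(n)

# Doubling headcount roughly doubles the salary term, but the overhead
# term only grows by a constant (one extra log2 unit per doubling).
for n in (64, 128, 256):
    print(n, total_cost(n))
```

With these toy numbers, going from 64 to 128 people adds 64 units of salary but only 1 unit of overhead, which is the whole point: the log term never catches up to the linear one.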
High praise coming from mister Count! Yes to me it seems obviously far more elegant than the frankly idiotic arguments for UBI that often get bandied about by otherwise very smart people.
Interesting, so you think that this idea is unpopular because it would basically increase the amount of internal competition in corporate hierarchies?
Nah, I don't think the people in charge of decisions like this think far enough ahead to consider the increased internal competition etc.; rather, their thought process is a lot more base: they want to win the status competition with their current peers, and the way they do this is by having higher PnL per partner (PnL envy, like I said), and if they have to treat their workers as badly as possible to eke out those last few percentage points then they'll absolutely do that for their own egos.
A story I was told by someone who witnessed this event first hand (and whom I have reasons to trust): apparently one year Ken Griffin (the Citadel guy) got visibly super angry at his senior team and demanded changes because Millennium (run by rival Izzy Englander) had managed to make more money than Citadel that year, even though it had been a very good year for Citadel compared to its average performance. People like that don't belong anywhere near the reins of power in a society that has its head screwed on correctly.