This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

In a way, AI is harder on nerds than it is on anyone else.
It is interesting to see, now that AI is ingrained into the personal and professional lives of vast numbers of ‘normal’ people, how mundanely it slots into the daily existence of the average person. I don’t mean that critically; I mean that the average person (especially globally, but probably also in the rich world) probably already believed there were ‘computers’ that were ‘smarter than them’. ChatGPT isn’t so different from, say, Jarvis in Iron Man (or countless other AIs in fiction), and the median 90-100 IQ person may even have believed in 2007 that technology like that actually existed “for rich people”, or at least that it didn’t seem much more advanced than what they had.
Most people do not seek or find intellectual satisfaction in their work. Intellectual achievement is not central to their identity. This is true even for many people with decent-IQ white collar jobs. They may be concerned (like many of us) with things like technological unemployment, but the fact that an AI might do everything intellectually that they can faster and better doesn’t cause them much consternation. A tool that builds their website from a prompt is a tool, like a microwave or a computer. To a lot of users of LLMs, the lines between human and AI aren’t really blurring together so much as irrelevant; the things most people seek from others, like physical intimacy, family and children, good food and mirth, are not intellectual.
This is much more emotionally healthy than the nerd’s response. A version of the Princeton story is now increasingly common on ‘intellectual’ forums and in spaces online as more and more intelligent people realize the social and cultural implications of mass automation that go beyond the coming economic challenge. Someone whose identity is built around being a member of their local community, a religious organization, a small sports team, their spouse and children, a small group of friends with whom they go drinking a couple of times a month, a calendar of festivals and birthdays, will fare much better than someone who has spent a lifetime cultivating an identity built around an intellect that is no longer useful to anyone, least of all themselves.
I was thinking recently that I’m proud of what I’ve done in my short career, but that smart-ish people in their mid/late twenties to perhaps mid/late forties are in the worst position with regard to the impact of AI on our personal identities. Those much older than us have lived and experienced full careers at a time when their work was useful and important, when they had value. Those much younger will either never work or, if they’re say 20 or 22 now, work for only a handful of years before AI can do all intellectual labor - and they have in any case already had three years of LLMs for their own career funeral planning. But those of us in this age range, baited into completing the long, painful, tiresome and often menial slog that characterizes the first decade of a white collar career, face the double humiliation of never getting further than that and of having wasted so much of our lives preparing for a future that isn’t going to happen.
I am probably an arrogant bastard, but after AI I feel like a superhero in an origin story who has just discovered his superpower. My appetite for knowledge and understanding is voracious. I have many side projects on which I am making progress. I am just waiting for some properly uncensored local models to dabble in chemistry and biology.
Do I feel threatened? I don't know. I know there are turbulent times ahead. I know that being a codemonkey is not a viable future. But I see huge potential in the technology and I want to be part of it.
I think that AI hurts not the smart people, but Taleb's IYI class - the guys and gals for whom credentialism was important.
I feel roughly the same. I think that AI will destroy a bunch of jobs that were the intellectual equivalent of menial labor, but create an equal or greater number of creative jobs. If you're writing formulaic grant proposals or building websites with React then AI is coming for your job, but that's not a bad thing. An LLM can replace a web designer, but only a full-blown strong AI can replace the UX designer whose job it is to tell the LLM what website to make.
LLMs won't replace the actual nerds. They'll replace the 0.5X programmers, the offshore assets, the email-senders, the box-tickers, and the bureaucrats. On a fundamental level there will still need to be someone to tell the AI what to do.
This is only true if AI plateaus. If it gets even a couple dozen IQ points smarter, those creative jobs are gone, too. And I don't see any indication of AI plateauing.
I feel like this is bad for mental health/fertility unless we have off-ramps for people. I don’t think there will be enough status seats for the highly intelligent, which means more artificial status hierarchies (like woke). This basically comes down to everyone needing to be a playable character. Is that a good thing for society?
Even if someone is highly agentic, I feel like people need breaks in life where they can just live and not be building. It gets really hard to have a family if you always need to be in a risk seat and can never step back into a support seat doing boring white collar work.
There seems to be a similarity between AI and woke thinking in the workplace.
Right now in many businesses the expectations have flipped, so that rather than being ashamed of using AI, you have to either use AI, pretend to your superiors that you use it, or keep quiet about the subject and hope it goes away. If you say out loud that you don't use it, you are a drag, a buzzkill and a dinosaur (maybe a young dinosaur, as I don't think intensity of AI use or AI boosterism corresponds with age).
Many people are even under pressure to use AI in cases where no one even pretends it is adding anything, so long as it gives them bragging rights to tell their bosses 'we used AI for this'.
It is/was pretty similar with woke thinking. There was a pressure to believe, pretend or keep quiet.
Both AI and things labelled woke can often get good results though.
This is something I've been thinking about lately, and was actually thinking of doing a WW thread because it's depressing me. I do not believe that LLMs can adequately program, but ultimately it won't matter what I think. What will matter is what the industry at large thinks, and there's a decent chance that they will believe (rightly or wrongly) that everyone needs to use LLMs to be an effective engineer (and that's if they don't replace engineers entirely with LLMs). If that happens, then I'll just have to suck it up and use the bag of words, because I have bills to pay like anyone else.
But the thing that sucks is, I like doing my job. I get a great deal of joy from programming. It's an exhilarating exercise in solving interesting problems and watching the solutions take shape. But using an LLM isn't that. It is basically delegating tasks to another person and then reviewing the work to make sure it's acceptable. But if I were happy doing that, I would've become a manager ages ago. I am an engineer because I like doing the work, not handing it off to someone else to do.
Like I said, I'll do what I have to do. I'm not going to kill myself or go homeless or something rather than suck it up and tolerate the LLM. But at that point my career will go from "one of the biggest sources of joy in my life" to "something I hate every second of", and that really, really sucks. Of course I won't be the first person to work a job he hates to get by, but it's one hell of an adjustment to have to swallow. Right now it hasn't come to pass yet, but it's a possibility, and I'm not sure how I will be able to adjust if it does come to pass.
I'd say it's actually harder on artists than on everyone else (assuming you aren't counting artists as a subset of nerds). 90% is not 100%. At least for programmers, reviewing code and structuring the solution were always part of the job; people who were fond of codegolfing CRUD in Rust (look how much more elegant I can make this by using a right fold) are going to suffer, but only a little bit.
I imagine the same is true for physicists - maybe not, but the fact that they are willingly implementing it motu proprio suggests it is.
Maybe in a few years things will change, AI will be able to do everything fully autonomously, and we'll all end up at the bottom of the totem pole (or "the balls on the dick" as some will say). But so far that's not the case and, to be honest, the last big improvement to text generation models I've seen happened in early 2024.
Meanwhile I see artists collectively having a full-blown psychotic break about AI, hence indie game dev awards banning any and all uses of AI, etc. I think this is because it changes their job substantially, on top of eliminating most of those jobs, and also because it came completely out of left field: nobody expected that one of the main things AI would be good at would be art, quite the opposite - people expected art to be impossible for AI because it doesn't have imagination or soul or whatever. In fact, the problem with AI is actually that it has too much imagination. And revealed preference strikes here too: you don't see many artists talking about how they are integrating AI into their workflow.
It's quite revealing to compare the criticisms of AI from programmers versus from artists. From programmers the complaint is "I've tried AI and it sucks at doing X. Why are you trying to force me to use it for X?", while from artists it's "AI is bad because it steals from artists / has no soul / lacks creativity / other vague complaint. Nobody should be allowed to use AI."
Most art was already commodified, and it was commodity artists, not creative artists, who got the most brutal axe.
Essentially, contrary to your point about AI having imagination, creativity is the primary skill it lacks. It's basically a machine for producing median outcomes based on its training data, which is about as far away from creativity as you can get.
But for most artists, their jobs were based on providing quotidian, derivative artworks for enterprises that were soulless to begin with. To the extent that creativity was involved in their finished products, it was at a higher level than their own input, i.e. a director or something commissioning preset quotidian assets as a component in their own 'vision', the vision being the creative part of the whole deal.
However, I do believe creative artists will be threatened too. It's a little complicated to get into, but I think creative art depends not just on lone individuals or a consumer market, but on a social and cultural basis of popular enthusiasm and involvement in a given artform. I'm talking about dilettantes, critics, aficionados here. It's a social and cultural pursuit as much as it's an individual or commercial one, and I think that AI will contribute to the withering away of these sorts of underpinnings the same way corporate dominance and other ongoing trends previously have.
So for the artistic field, I envision complete and total commoditized slop produced by machines, once the human spirit has finally been crushed.
If your market consists of 99 derivative rip-offs and one legitimately interesting and fresh idea, the fresh idea will take half the market and the 99 rip-offs will fight over the other half. If there are 999,999 derivative rip-offs, then they'll have to split their half a lot more ways but they still won't be able to push in on the fresh idea's cut.
Art is a winner-takes-all industry. The JK Rowlings and Terry Pratchetts of the world have many thousands of times as many sales as Joe Average churning out derivative slop that's merely so-so. The addition of more slop won't change the core dynamic. Fundamentally, anyone trying to get the audience to accept a lower quality product isn't pitting themselves against the ingenuity of the artist, but the ingenuity of the audience. Trying to hide information from a crowd that has you outnumbered thousands-to-one is not easy.
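As a toy calculation (the 50/50 split above is an illustrative assumption, not data), here is a sketch of how the per-work share collapses for the derivative works while the fresh idea's cut stays fixed:

```python
# Toy model of the winner-takes-all claim: one fresh idea keeps a fixed
# share of the market and the derivative works split the remainder.
def per_ripoff_share(n_ripoffs: int, fresh_share: float = 0.5) -> float:
    """Share left for each derivative work, under the assumed split."""
    return (1.0 - fresh_share) / n_ripoffs

print(f"{per_ripoff_share(99):.4%}")       # ~0.5051% each among 99 rip-offs
print(f"{per_ripoff_share(999_999):.6%}")  # ~0.000050% each among a million
```

Adding more slop starves the slop, not the fresh idea.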
Okay, I like J. K. Rowling, I think she was underrated back in the day by Serious Literary People, but I still feel like bringing her up torpedoes your case about more creative artists going further.
Because creative artists got the axe a very long time ago. I expect the modal net earnings for a creative artist are already quite negative.
How much work has there ever been for creative artists? I would bet that a solid 95% of art over the last 1000 years has been one of:
There used to be a lot of jobs for people like: local music hall player, freelance graphic designer, craftsman stoneworker, small town paper writer, etc. Admittedly most of those dried up long ago, though.
Eh. This is like claiming people who enjoyed traveling and being perceived as "worldly" would have been devastated by the internet allowing anyone to chat with strangers from 1000 miles away with minimal friction. Was that a thing? Plausibly maybe, but I don't recall much to that effect.
As someone who bases a decent chunk of his identity around being intelligent, I'm not too worried. It turns out that a lot of it was implicitly graded "relative to other humans". I'm not too worried that calculators can do mental math better than me, for instance. And smart people will be able to leverage AI much more effectively than dumb ones. We can already see that in stuff like education: AI is easily one of the best tools for learning that has ever been invented, but it's also one of the best tools for avoiding learning as well. Naturally curious people will gravitate to the former on important topics, while less curious people will gravitate to the latter and suffer long-term costs from it.
It's highly unlikely that the value from human intelligence is going to 0 any time soon. If anything, AI could plausibly increase it rather than decrease it.
Eh? I'm very confident that's wrong. Normies might not appreciate the impact of ChatGPT and co to the same degree, but I strongly doubt that they literally believed that there was human-level AI in 2021. AGI was science fiction for damn good reason, it didn't exist, and very, very few people expected we'd see it or even precursors in the 2020s. Jarvis was scifi, and nobody believed that something like Siri was in the same weight-class.
To shift focus back to your main thesis: the normie you describe is accustomed and acclimatized to being average. Bitter experience has proven to them that they're never going to be an "intellectual" and that their cognitive and physical labor is commoditized. It's unlikely that being the smartest person in the room (or in spitting distance) is an experience they're familiar with. Hence they have less to lose from a non-human competitor who dominates them in that department.
On the other hand, the average Mottizen is used to being smart, and to working in a role where it's not easy to just grab a random person off the street to replace them. That breeds a certain degree of discomfort at the prospect. I've made my peace, and I'm going to do what I can to escape the (potential) permanent underclass. It would be nice to have a full, accomplished career with original contributions to my professional field or the random topics I care about, but I'll take a post-scarcity utopia if I can get it.
Presumably via investments?
I've been... lazy in that regard. Far too much money in my account that's not accruing interest. But yes, that's a factor. I also earn an OOM more than I did back in India, which definitely helps. If I were less lazy, I'd have put most of my money in the S&P 500 by now, but I've already put myself in a much better place than if I'd been complacent about things.
I don't expect that this will necessarily make me rich in relative terms, I'm starting too low, too late. But I want enough of a safety net to survive in comfort for the (potential) period of unemployment when AI eats my profession whole, before we implement solutions such as UBI. Not starving, not dying in a riot, all of that is important to me.
I had a somewhat related idea to this. It relates to ways that middle class professionals could be screwed. I haven't really hammered it out fully, but here's the gist of it. Basically, the value of automating labor is that it frees up human resources for other tasks. Rather than having one hundred artisans hand-tooling goods, you have one machine operated by one engineer producing the same goods, and ninety-nine people who can perform tasks in other areas of the economy.
But with AI, there will be an extinction of an entire class of meaningful work: that which is done by the middle class. There aren't adjacent fields for them to move into once displaced, as those will also be taken by AI. Their only options will be to move up or down, into different classes of the economy, and for the vast, vast majority of them, it will be a downwards spiral.
The area below the middle class economy is called the gig economy. So the value of AI is that there will be a wealth of gig workers, and thus fast food can be delivered more cheaply than ever before.
That is the one benefit of AI we are certain about.
There is a hypothetical scenario, a longstanding dream of science fiction, where with infinite labor afforded by AI there will be infinite opulence. However, some points that contest that are 1) there is only so much demand for consumables and market goods and services, so that economic demand begins to be overshadowed by status concerns and non-economic spheres of life in terms of desired things, 2) many of the inputs that go into supplying those goods and services are finite (i.e. resources) and so their creation can't be infinite, 3) political ramifications suggest reduced power and thus leverage for the displaced, and so their economic needs could easily be ignored by those who retain power.
All in all, there looks to be dark times ahead.
We already have, in effect, a trial run of post-scarcity civilisations. Not complete or total, obviously. But western society is long past needing to worry about food and water.
I think men will play games and have fun in that kind of sci-fi world. They'll find new and interesting things to pursue. They'll go sailing or rock climbing.
Women will play the status games, become depressed and create social problems via whatever the next social media is. Unless AI can turn this behaviour more productive at least.
Men's contests often don't look like rock climbing or sailing; they look like war.
But I don't think we'll get the sci-fi world. Scarcity will be with us always, even if someone has to create it (by violently taking control of, or destroying, the means of production) - though I don't in fact think that will be necessary.
Emotional health isn't what it's about. You've got people who work with physical things, people who do intellectual work, and people who play monkey dominance games at a high level. The latter are almost always indisputably on top, but that hasn't been entirely true in recent years; there's been significant status overlap between the intellectual workers and the monkey dominance people. AI threatens to throw the intellectual workers all the way down to the bottom -- not even so high as the privileged slave levels they had in ancient Athens, but all the way down to utter uselessness, like drug-addicted alcoholic bums but not as sympathetic. The monkey dominance people are of course overjoyed at this; putting these interlopers in their place has been a nagging goal for a long while now. Nerds are more threatened by AI than normies because AI is vastly more of a threat to them.
Even if the AI bulls were right (they're not), most of the remaining 10% of research work can still be done by grad students, postdocs, and other humans. We shouldn't expect to see any decrease in staffing at labs, but instead a huge increase in productivity.
Something I'm curious about is how AI has been incorporated into people's professional workflows. My company has been implementing AI in various places, but thanks to the level of human supervision and human-centric communication needed in my work process, I'm not convinced significant human replacement is coming for quite a while.
AI benefits:
Meeting transcriptions. Online meetings between two parties require each interaction to be recorded. Representatives' case notes are spotty at best, and the AI generally makes significantly more complete and timely transcriptions, freeing up a lot of time and being more accurate than most human-written notes. Adoption of this tool has been spotty, but the people who are using it are seeing significant benefit. Job replacement impact: 0, as no rep has someone to specifically write case notes. Benefit: significant.
Internal document searches. AI searches are generally better than our internal search engine at locating company documentation and resources. It is still hit or miss, but luckily the AI search provides the links it is citing, so I can go through the links to locate the specific policy or document I'm looking for. It's not consistent, but generally I use it before I use our internal search engine. Job replacement impact: 0. Benefit: medium.
Email drafts. Great for rapid iteration of emails. They still need to be edited and reviewed, but they're very helpful if I'm having trouble finding the correct wording and I need to get something out quickly. Some people use it a lot; I use it only when I don't have a clear structure in mind. Job replacement impact: 0. Benefit: limited to significant, depending on user preference.
AI weaknesses:
In the various professional careers I've held, I still don't see a significant AI impact in terms of replacing workers or reducing the intellectual motivation of young professionals. I still hold to the idea that AI is, and will remain, unable to innovate, because it cannot push against the zeitgeist of the data it is trained on. If AI had been around in the time of the Wright Brothers, would it have thought human flight was possible?
The main problem with AI is people trying to use it to do their thinking for them, when it is most effective at automating monotonous tasks and increasing productivity. Maybe it's because I don't use AI in my life the same way many adopters have, but I don't see any significant impact on my day to day even though it is becoming more advanced.
I'm probably in the 99.99th percentile for doctors (or anyone else) when it comes to the use of AI in the workplace. I estimate I could automate 90% of my work (leaving aside the patient facing stuff and things that currently require hands and a voice) if I could.
The main thing holding me back? NHS IT, data protection laws and EMR software that still has Windows XP design language. This means I'm bottlenecked by inputting relevant information into an AI model (manually trawling the EMR, copying and pasting information, taking screenshots of particularly intransigent apps) and also by transferring the output into the digital record.
The AIs are damn good at medicine/psychiatry. Outside my own domain, I have a great deal of (justified) confidence in their capabilities. I've often come to take their side when they disagree with my bosses, though the two are usually in agreement. I've used them to help me figure out case presentations ("what would a particularly cranky senior ask me about this specific case?" - and guess what they actually asked), to get a quick run-down on journal publications, to figure out stats, to sanity-check my work, to help decide an optimal dose of a drug, etc. There's very little they can't do now.
That's the actual thinky stuff. A lot of my time is eaten up by emails, collating and transcribing notes and information, and current SOTA models can do these in a heartbeat.
To an extent, this is an artifact of resident doctors often being the ward donkey, but I'm confident that senior clinicians have plenty to gain or automate away. The main reason they don't is the fact that they're set in their ways. If you've prescribed every drug under the sun, you don't need to pop open the BNF as often as a relative novice like me would - that means far less exploration of what AI can do for you. Yet they've got an enormous amount of paperwork and regulatory bullshit to handle, and I promise it can be done in a heartbeat.
Hell, in the one hospital where I get to call the shots (my dad's, back in India), I managed to cut down enormous amounts of work for the doctors, senior or junior. Discharges and summaries that would take half a day or more get done in ten minutes, and senior doctors have been blown away by the efficiency and quality gains.
Most doctors are at least aware of ChatGPT, even if the majority use whatever is free and easy. I'm still way ahead of the curve in application, but eventually the human in the loop will be vestigial. It's great fun till they can legally prescribe, at which point, RIP human doctors.
Charting is not supposed to be the majority of the job and is more or less a recent invention (in the US at least).
I find OpenEvidence and other similar tools to be relatively unhelpful, especially since I generally have to cross reference.
I don't know how my coworkers are using it, but I've been having great results with it replacing "Google an Excel function and hope somebody else had the same problem and got it solved".
There may be a bit too much romanticization of "salt-of-the-earth normies" going on here. Last I checked, the social atomization trend (friendship- and sex-recession) is just happening across the board, while many (most?) career-intelligentsia derive satisfaction both from their work and from those other things. It's not that one is a substitute for the other.
It seems that you acknowledge this ("This is true even for many people with decent-IQ white collar jobs"), but then you posit "someone who has spent a lifetime cultivating an identity built around an intellect that is no longer useful to anyone, least of all themselves". Who are these people, exactly?
Internet nerds like us, who based their lives around forums, intellectualism, and (in my case) literature. The new AI world of dopamine cattle harnessed by the tech fiends suggests the total obsolescence of any sort of life that isn't fully grounded in the concrete, or else enslaved for the purpose of dopamine-slop control. Admittedly, some people here have lives which go beyond the abstract.
I find this take so hard to understand. I like talking about things, learning about things, thinking about things. The existence of vastly more minds (mind-like objects, I'm using shorthand here) with whom I can do that is great! GPT or other AIs don't mind me asking endless questions about beginner-level stuff, or helping with technical things, or working through ideas.
Granted, these AI are mostly junior partners at the moment, or at least 'experienced friend who doesn't mind helping if asked but won't do stuff of their own initiative' and perhaps I'd feel differently if I really did just become an appendage, but at the moment things are great.
Personally I don't find AIs as fun to talk to as any human. To me, they're like an interactive encyclopedia. It is fun to read and learn about stuff, but they can't stand in for the human element, either on the individual level or the level of an entire society or group (like the motte). Ultimately I find them in some sense desirable in terms of their first order effects (helping with research, etc.), but it's their second and third order effects I'm worried about, where I think, as I explain elsewhere, they will kill off large parts of human culture, remap the class system, and generally work towards all the ongoing, negative trends that already seem apparent. In a sense they are a continuation of capitalism and its logic.
Does that include people with Down's Syndrome? Outright and obvious diseases aside, I can think of plenty of people who are so unpleasant/pointless to talk to that I'd speak to an AI any day instead. And even within AI, there are models that I'd prefer over the alternatives.
Spinners and weavers from 250 years ago. Imagine that your (and your ancestors') whole identity is your skill and craft, and then some nerd invents a contraption that makes yarn and cloth faster, cheaper and better, and you end up in the gutter.
Easy to understand why they were angry, and also easy to understand why their anger achieved nothing at all.
There's an episode of Lark Rise to Candleford where an elderly lady's bobbin lace is no longer needed by the local dressmakers, due to the new machine lace. Also a bit of other industrial commentary in other episodes, but that one always hits me the hardest.
Redditors. The irony is that their intellect was never that impressive anyway.
But seriously, there really is an entire cohort of people who were in the top five percent of their high school and college because they could sit still and gulp down boring bullshit, and who think they are somehow intellectually superior to the plebs they disdain. Usually they're not the really smart people actually making big strides in science and tech.
You do realize how unconvincing it is to cite the top 5% of students as not really being all that useful? Do those people have any purpose in their existence in your eyes? Regardless of any unwarranted sense of self-worth, if they're doomed, then what hope is there for anyone?
I remember the sheer glee they had about factory workers, coal miners and truck drivers being driven out of business by automation and illegal immigration. Fuck ‘em. I hope they enjoy their brave new world.
When I worked at Google, about 5% of applicants got through the phone screen. A lot of them weren't all that useful.
Welcome to the black pill.