This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

In a way, AI is harder on nerds than it is on anyone else.
It is interesting to see, now that it is ingrained into the personal and professional lives of vast numbers of ‘normal’ people, how mundanely it slots into the daily existence of the average person. I don’t mean that critically, I mean that the average person (especially globally but probably also in the rich world) probably already believed there were ‘computers’ who were ‘smarter than them’. ChatGPT isn’t so different from, say, Jarvis in Iron Man (or countless other AIs in fiction), and the median 90-100IQ person may even have believed in 2007 that technology like that actually existed “for rich people” or at least didn’t seem much more advanced than what they had.
Most people do not seek or find intellectual satisfaction in their work. Intellectual achievement is not central to their identity. This is true even for many people with decent-IQ white collar jobs. They may be concerned (like many of us) with things like technological unemployment, but the fact that an AI might do everything intellectually that they can faster and better doesn’t cause them much consternation. A tool that builds their website from a prompt is a tool, like a microwave or a computer. To a lot of users of LLMs, the lines between human and AI aren’t really blurring together so much as irrelevant; the things most people seek from others, like physical intimacy, family and children, good food and mirth, are not intellectual.
This is much more emotionally healthy than the nerd’s response. A version of the Princeton story is now increasingly common on ‘intellectual’ forums and in spaces online as more and more intelligent people realize the social and cultural implications of mass automation that go beyond the coming economic challenge. Someone whose identity is built around being a member of their local community, a religious organization, a small sports team, their spouse and children, a small group of friends with whom they go drinking a couple of times a month, a calendar of festivals and birthdays, will fare much better than someone who has spent a lifetime cultivating an identity built around an intellect that is no longer useful to anyone, least of all themselves.
I was thinking recently that I’m proud of what I’ve done in my short career, but that smart-ish people in their mid/late twenties to perhaps mid/late forties are in the worst position with regards to the impact of AI on our personal identities. Those much older than us have lived and experienced full careers at a time when their work was useful and important, when they had value. Those much younger will either never work or, if they’re say 20 or 22 now, work for only a handful of years before AI can do all intellectual labor - and have in any case already had three years of LLMs for their own career funeral planning. But in this age range, baited to complete the long, painful, tiresome and often menial slog that characterizes the first decade of a white collar career, we have the double humiliation of never getting further than that and of having wasted so much of our lives preparing for this future that isn’t going to happen.
I find this wildly implausible.
Have you done research? It's not even about the tedium of grant-writing or whatever, people are doing a tremendous amount of routine work in data analysis and literature search, and scientific standards for programming are very low, "Ph.D code" is a meme. Even if AI doesn't contribute to the process of "discovery" per se (such as reasoning about hypotheses and planning experiments – though it definitely can plan experiments at this stage), it can trivially take over 90% of pure cognitive work-hours.
I recommend trying out some of the hot new models, with high reasoning settings. Opus 4.6, ChatGPT 5.2, Gemini 3.0 pro and so on, or even DeepSeek-Speciale. They're starting to make progress on really hard research-level physical reasoning tasks even without human guidance, and in a structured environment they are a great help to researchers.
P.S. Just an example of people's opinions, one of hundreds.
The "up to" might be doing a lot of the work here. Some (many) weeks, those people spend all their days just writing grant proposals, writing/editing research papers, peer-reviewing other papers, preparing teaching, and answering emails. On those tasks, you could have an LLM do 90% of the writing. Still going to involve lots of prompting, rejecting output and prompting again. You can also have it do 90% of your literature surveys. It is better at search than the old tools are, after all.
The question is if this "up to 90%" is actually what "drives scientific discovery". Because when a grad student shows up with interesting measurement results, the LLM will do 0% of the thinking of what that means, what the updated hypothesis is, what direction the research is going to go in, and what potential papers this might result in and what other measurements are now necessary to test the current hypotheses.
Same goes for peer-review. The LLM can write the boilerplate "this is garbage unfit for this kind of journal". The decision that the paper sounds fishy and the data looks unconvincing is not coming from the LLM.
Probably well over 90% of the effort put into research is already useless. It hasn't gone unobserved that we keep putting more and more resources into research with fewer and fewer results (compared to the early-to-mid 20th century, for example). Many excuses are offered for this such as "the low hanging fruit is taken", but perhaps they're all false and the current methods of doing research are just extremely wasteful.
It's certainly both. If your grant success rate in 2025 is 16%, you have to write roughly six proposals for every one that gets funded. Does that additional labor lead to progress? No, on the contrary.
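The arithmetic behind that multiplier, as a quick sketch (the 16% figure is from the comment; the 50% comparison rate is purely hypothetical, and independent submissions are a simplification):

```python
# Expected number of proposals written per funded grant, assuming each
# submission succeeds independently with the given probability.
def proposals_per_grant(success_rate: float) -> float:
    return 1.0 / success_rate

print(round(proposals_per_grant(0.16), 2))  # 6.25
print(round(proposals_per_grant(0.50), 2))  # 2.0
```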
But on the other hand it is absolutely true that the low hanging fruit are gone. Look at the first Nobel in physics: X-rays. Even in the late 19th century, a single motivated human could just go and make a cathode tube from scratch. Glass blowing, vacuum pumps, high voltage source, some simple metal work, silver bromide coated plates. It's far from trivial, but really, you could do it entirely on your own, and fast. Stuff like that is mostly gone now. You need hundreds of thousands of dollars just for the experimental equipment - because for 100 years, legions of people have tried doing frontier work with little money, and they still do inside no-name university labs all over the world. The frontier now needs hundreds of hours of work from an army of expert technicians across a dozen specialized companies just to do the first test setup. And someone needs to pay for that.
Or do you have another explanation? The reward for cheap and effective science would be enormous. If it were possible, somebody somewhere would be doing it, right?
FWIW a similar thought occurred to me 25 years ago -- that I might be among the last generations of people to experience the highs and lows of starting and building a business.
That being said, I think it's worth noting that feelings/predictions along the lines of "my generation is screwed" or "the current generation is screwed" tend to be very common. My generation was told there would be no way we could afford to buy a house; that social security would run out before we could collect any; that the American economy would decline and we'd all have to learn Japanese; that the smart thing to do was go to grad school and become a college professor; etc.
So based on experience, I will predict that your generation will be worse off in some ways; better off in some ways; and that there will be a lot of surprises.
Probably I am an arrogant bastard, but after AI I feel like a superhero in an origin story who has just discovered his superpower. My appetite for knowledge and understanding is voracious. I have many side projects on which I am making progress. I'm just waiting for some properly uncensored local models to dabble in chemistry and biology.
Do I feel threatened? I don't know. I know there are turbulent times ahead. I know that being a codemonkey is not a future option. But I see huge potential in the technology and I want to be part of it.
I think that AI hurts not the smart people, but Taleb's IYI class: the guys and gals for whom credentialism was important.
Agreed. AI is amazing if you ask me. AI can write code extremely well right now but its "research taste" still leaves a lot to be desired. I should have a few more years of a secure job in me at least, and by then hopefully I should have enough money that I don't need to work and can focus on what I genuinely want to do myself, for which AI is a massive force multiplier. I'm not one of those people who competes with others based on the size of my bank account, to me money is what I use to ensure I can have and keep my time for myself doing what I want.
You are comparing yourself to AI at its present capabilities (or the capabilities it has that have already diffused to your interest and skills). Give it some time.
I do manage to be cautiously optimistic, though, at least for my individual future. I have no illusions that I'll be able to provide any economic value in 5 years, and I'm fine with that. And I'm excited for many of the same reasons you are: knowledge is so much easier to find and learn than three years ago, and I'll have decades to learn things about the world that no human knows today. The only question is how to protect myself from futures where we evolve into a two class society of the high and the low.
I just don't think there is any way this lasts. It's like the guys who found that human + computer edged out computer alone in chess for a period. Eventually the meat just isn't going to be adding anything, and I doubt that moment is very far off.
I feel roughly the same. I think that AI will destroy a bunch of jobs that were the intellectual equivalent of menial labor, but create an equal or greater number of creative jobs. If you're writing formulaic grant proposals or building websites with React then AI is coming for your job, but that's not a bad thing. An LLM can replace a web designer, but only a full-blown strong AI can replace the UX designer whose job it is to tell the LLM what website to make.
LLMs won't replace the actual nerds. It'll replace the 0.5X programmers, the offshore assets, the email-senders, the box-tickers, and the bureaucrats. On a fundamental level there will still need to be someone to tell the AI what to do.
This is only true if AI plateaus. If it gets even a couple dozen IQ points smarter, those creative jobs are gone, too. And I don't see any indication of AI plateauing.
All current generation AIs rely on someone telling them what to do. ChatGPT will do what you ask it to do and no more. Telling people what to do is surprisingly hard, and telling AIs what to do has most of the same challenges plus a bigger communication barrier.
For safety and legal reasons I would be really surprised if someone made a completely autonomous robot whose job it was to give orders to the other robots. That seems like tempting fate. On some level, bossing around a flock of robots is going to be a job until we develop trustworthy strong AI. The AI we currently have is neither strong nor trustworthy.
If you think no one is going to do this, you have not been paying attention.
A year or two ago, the telling had to be very specific, and even then it wasn't a guarantee of useful output. "Write a function that takes these inputs, performs this logic, applies this transformation, returns this output."
Now it’s “make me an iOS app that does X” and the AI enters planning mode and many iterations and hours later can give you a working app, though likely won’t get everything the way you want on the first draft.
Sure, but that's true no matter what "AIs" stand for; Artificial Intelligences, Actually Indians, Average Interns...
I don't think that should be surprising at all. Look at Moltbook - yes, it's low stakes, and yes, it's LLMs, which aren't remotely intelligent. But it's still clear that the people running it think it's cute to have the bots talk amongst themselves and everything. I think it's very likely that even if it were an actual AI and not a bag of words masquerading as one, there would be people who see nothing wrong with doing the same sort of thing.
Also look at people running Claude in their shells with the ability to change stuff on the system. That is very obviously a terrible idea, as LLMs have no understanding of what they are doing at all. And yet, people think "yeah it's fine to let this thing touch my computer with potentially destructive commands" (and then they are shocked when the LLM deletes stuff because it's not actually intelligent). Again, if we had an actual AI then I don't see a reason to expect people would hold back from letting it touch things.
It's not clear whether or not we can develop true AI based on where the research is now. But what seems clear, to me at least, is that if we ever do develop a real AI there will be humans which are only too happy to recklessly hook it up to stuff.
I feel like this is bad for mental health/fertility unless we have off-ramps for people. I don't think there will be enough status seats for the highly intelligent, which means more artificial status hierarchies (like woke). This basically comes down to: everyone needs to be a playable character. Is that a good thing for society?
Even if someone is highly agentic, I feel like people need breaks in life where they can just live and not be building. It gets really hard to have a family if you always need to be in a risk seat and can never step back into a support seat doing boring white-collar work.
There seems to be a similarity between AI and woke thinking in the workplace.
Right now in many businesses the expectations have flipped, so that rather than being ashamed of using AI, one has to either use it, pretend to your superiors that you use it, or keep quiet about the subject and hope it goes away. If you say out loud that you don't use it, you are a drag, a buzzkill and a dinosaur (maybe a young dinosaur, as I don't think intensity of AI use or AI boosterism corresponds with age).
Many people are even under pressure to use AI in cases where no one even pretends it is adding anything, so long as it gives them bragging rights to tell their bosses 'we used AI for this'.
It is/was pretty similar with woke thinking. There was a pressure to believe, pretend or keep quiet.
Both AI and things labelled woke can often get good results though.
There's certainly a lot of nonsensical pressure to use AI from executives, which all seem to drink at the same information trough that has decreed "AI is the hip new thing". I've written about my experiences with that here. That's a fad and will probably go away within a year or two.
I'd still recommend playing around with AI and finding where it can add value. I'm doing roughly 30-40% less work in my software engineering role because of it, with the savings being redirected into building more robust systems, as well as many hours into Factorio.
Serious take: This makes you a better developer.
Heh, I'd certainly like to think so. I wish my current job had infrastructure that was as elegant and well-functioning as my train-based city blocks.
The "oh shit" moment for me in Factorio as it relates to software engineering is that there is a fundamental game meta-mechanic where you have to rebuild large parts of your base. You can't not. The way the tech tree progresses, old smelters and belts can't provide the throughput you need as you advance. Your options are to either abandon them and make new ones or refactor the old sections. And you kind of have to do the latter, because there are all of these geographic dependencies flowing into your older base (without trains and other stuff, you can't make central raw materials depots; you have to build belts straight from a coal field to a smelter).
That's literally codebase refactoring / major update / integration with new capability.
Late in the game, with robots, you can get really modular and independent. That's when you're hitting google scale and things like kubernetes actually become super valuable rather than engineer theater.
Tinfoil hat: Factorio was going to be the thing that turned every kid born after 2010 into a dev. Then AI happened and now everyone is a dev and I can build and re-build my base every 20 minutes.
I enjoyed reading this, especially the part about your CTO. I am 4 years into my career (31 is still young, right?), and I have met my organization's CTO exactly once; he came across as a complete moron, but he had a helper/fixer/(handler?) who actually had the real technical competency of the sort I would expect from someone with a CTO job title.
I'm glad you enjoyed reading it! And yes, it's mildly frustrating in a lot of ways to have a CTO that doesn't know much about tech. I wish ours had a handler that knew what they were talking about.
He has probably just been out of the game a long time and has specialized in non-technical things like "how to manage managers/directors," "political tactics to protect the engineering department's budget," "communicating the value of technical projects to the head of accounting who does not care about tech at all," plus all the mundane process, paperwork, and ego soothing one must do to keep things running smoothly. He probably hasn't written serious code in years and may not have more than a high-level understanding of what his department's tech stacks are and how its products work, but that doesn't make him a moron.
Alternatively, he may not do any of the above and might just be a smooth-talking glad-hander. Your department might be a dumpster fire and he's just very adept at shifting blame or sweeping the fires under the rug. That would suck, but it also means he is far from a moron.
That's a fair point; if you can become the CTO of a medium-sized organization, you must be very smart (or at least very crafty!). It would have been more accurate to say that I was unimpressed with the level of technical knowledge he displayed.
This is something I've been thinking about lately, and was actually thinking of doing a WW thread because it's depressing me. I do not believe that LLMs can adequately program, but ultimately it won't matter what I think. What will matter is what the industry at large thinks, and there's a decent chance that they will believe (rightly or wrongly) that everyone needs to use LLMs to be an effective engineer (and that's if they don't replace engineers entirely with LLMs). If that happens, then I'll just have to suck it up and use the bag of words, because I have bills to pay like anyone else.
But the thing which sucks is, I like doing my job. I get a great deal of joy from programming. It's an exhilarating exercise in solving interesting problems and watching them take shape. But using an LLM isn't that. It is basically delegating tasks to another person and then reviewing the work to make sure it's acceptable. But if I was happy doing that, I would've become a manager ages ago. I am an engineer because I like doing the work, not handing it off to someone else to do.
Like I said, I'll do what I have to do. I'm not going to kill myself or go homeless or something rather than suck it up and tolerate the LLM. But at that point my career will go from "one of the biggest sources of joy in my life" to "something I hate every second of", and that really, really sucks. Of course I won't be the first person to work a job he hates to get by, but it's one hell of an adjustment to have to swallow. Right now it hasn't come to pass yet, but it's a possibility, and I'm not sure how I will be able to adjust if it does come to pass.
I don't think this claim has much shelf life left <-- this is a link.
It (AI) built a C compiler.
I am more than happy to list the caveats. But still, it wrote a freaking C compiler! This is a programming task that is out of reach for 90% of engineers over their entire careers. It did it for about $20k (plus the cost of labor to write the context) in two weeks.
I wrote an article last week about how even I think that LLMs aren't ready to refactor big legacy code bases. The recent brouhaha over Clawdbot / Moltbook also show how dangerous these things can be in the hands of people who don't know / don't care about basic security management.
But the generalized claim that "AI can't write code" is just this side of "embarrassingly false." When you then consider the pace of progress, it really seems like cope. Two years ago, LLMs would make basic syntax errors in print statements. Today, they can write fully functional programs - albeit probably bloated and often with unacceptable security - with ~30 minutes of prompting and system-design blueprint work.
You're welcome to your opinion, but I neither share it nor am I interested in debating the topic. Literally everywhere else on the Internet is happy to have that debate, I'm content to not participate.
Do you actually like writing code qua writing? Is there fun to be found in adding every last semi-colon? I'm assuming you still use a linter and aren't manually typing out every variable, etc.
Or is the stuff you actually enjoy the planning, architecture, or problem solving? Because you can easily still take that on and pass even more of the boring repetitive tasks over to AI. Just write everything in a shorthand style pseudocode and pass that to the LLMs to fill in all the little bits.
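A toy illustration of that workflow (the spec and function name here are hypothetical, not anyone's actual process): the shorthand spec goes in as comments, and the model fills in the mechanical body.

```python
# Shorthand spec you might hand to an LLM:
#   dedupe(records): keep the first occurrence of each email, preserve order
# A body of the kind the model would fill in:
def dedupe(records: list[dict]) -> list[dict]:
    seen = set()
    out = []
    for rec in records:
        if rec["email"] not in seen:
            seen.add(rec["email"])
            out.append(rec)
    return out

print(dedupe([{"email": "a@x"}, {"email": "b@x"}, {"email": "a@x"}]))
# [{'email': 'a@x'}, {'email': 'b@x'}]
```

The human contribution is the two comment lines; everything below them is the repetitive part being delegated.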
Yes, I do.
Well that will teach me to make assumptions
They're getting awfully close, and the gap is small enough that the sheer quantity of work the LLMs can put out will outcompete whatever greater quality a human can achieve. At least for most applications. If you're programming nuclear reactor safety systems, autonomous combat drones, or high-value finance, the human will be in the loop for a while longer, but for anything more pedestrian the building blocks are already in place for replacing programmers entirely. It may take another year or two to assemble them into convenient packages, but it's clearly on the horizon.
If you really don't see LLMs adding any value, then you can just lie about using them quite easily. I think they're very useful, but I can still see they've become a huge management fad, and I doubt they'll stay like that for more than a year or two. You can just say you're using AI if they don't check, or send it off on goose chases with filler prompts whose results you don't actually use if they do.
I just don't understand this mindset at all. There's a certain elegance in the craft for sure, but the value of the end-product is what's always been truly impressive to me. It's like for an architect/builder: Seeing them swing the hammer can be cool, but it's the house that they build that's worth admiration in my eyes. LLMs have thus been a thing of beauty for me since they can get there so much faster, and more robustly. It feels like I have a cheat code to just snap my fingers and pop buildings into existence.
Yeah perhaps. I'm not sure if I would want to lie but it's not entirely out of the realm of possibility either.
I think it's legitimately hard to cross that gap of being wired differently. My reaction is actually pretty similar to yours (but in the opposite direction, of course): it's hard for me to understand why someone would care about the end product instead of the process of making it. I have never found some kind of intrinsic value in the stuff I work on, so the perspective of "I can create things faster and that's the part I enjoy" is wildly different from mine. I can understand it in a detached intellectual way, but I can't truly get it. Just different personalities, I know, but I sympathize with how foreign other perspectives can seem from one's own.
I guess a lot of it comes down to what exactly you're building too. I doubt I'd get as much pleasure from the end product if I was designing something like medical device firmware that hardly anyone would ever interact with, and they mainly only get upset if it stops working. In my job I'm designing data reporting tools which let me see a big difference compared to the old version, and I have end-users telling me "oh wow, this is a lot better than what we had before". In my free time I design little video games and ad hoc apps that let me automate things I had previously done manually, both of which have intrinsic appeal.
Not necessarily related, but DAMN I am so jealous of my programmer friends who report this. Having an exhilarating job that stimulates the intellect, that you genuinely enjoy, that gets you high social status, AND a ton of money? Good Lord... how is it even possible?
I can barely imagine having a consistent full time job as an adult that I enjoy and find stimulating, let alone all of the other goodies that seem to come associated with many of my programmer type friends. Seriously I have multiple friends making 4x my income with 3x my job satisfaction. It just seems incredibly, brutally unfair that the world is like this. Alas.
High social status? I'm basically embarrassed to tell anyone I work in tech because either they also work in tech (in which case they will probably talk my ear off about some inane office politics) or they don't (in which case they probably hate me).
Fascinating. We must run in very different circles.
Yes. Not as high as a doctor, but absolutely. People - men and women - absolutely treat me differently (better) after learning where I work.
Money and status are too linked for it to not give status. There's some importance to it too - we (often) work on systems people know and care about. We're famously difficult to tell what to do professionally, which is itself a form of power/status.
I gotta move to your neighborhood.
Pharma sales reps would probably beg to differ.
Or it causes people to suspect the job is in some way "fake".
Well... being a programmer doesn't get you high social status, except from other nerds. But yeah I agree that it's unjust in a cosmic sense that programmers get to do something they love which also pays very well. I look at someone like my sister, who works crappy factory jobs and gets paid 1/3 of what I do if that, and it seems to me that in a just world she would be getting paid what I do and vice versa. But that second part might come true at least... I guess we will see how it shakes out.
As I told @sarker, we must be in very different circles. Programming is extremely high status in all the circles I've been in, even at my church!
At least you are aware of your relative blessings here my man. It's tough out there. I wish I enjoyed coding, I really do. Have tried it multiple times and, yeah just not my bag. But hey for what it's worth you're a real one.
Don't be too jealous, if AI meets its promise they'll all be... well, they won't be saying "would you like fries with that" because the AI will do that too. But they might be delivering the fries until the robotics catches up.
Well, I'm a junior marketing guy so, we'll see who gets hit first. I'm thinking of transitioning out of marketing entirely, though.
Remember, it's a mathematical result that all your friends are probably more popular than you. If it makes you feel better, I was really good at programming, but it wasn't intellectually stimulating, I didn't enjoy it, I have bottom-of-the-barrel social status, and ... ok, I do have a ton of money. 1 out of 4 ain't bad.
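The result being referenced is the friendship paradox: averaged over friendships rather than over people, your friends have more friends than you do, because popular people show up in more friend lists. A quick sketch on a random graph (the graph size and degree range are arbitrary):

```python
import random

random.seed(0)

# Build a random friendship graph: 1000 people, each befriends a few others.
n = 1000
friends = {i: set() for i in range(n)}
for i in range(n):
    for j in random.sample(range(n), random.randint(1, 10)):
        if i != j:
            friends[i].add(j)
            friends[j].add(i)

# Average friend count per person...
avg_degree = sum(len(f) for f in friends.values()) / n
# ...versus the average friend count of people's *friends*
# (each person weighted by how many friend lists they appear in).
total_degree = sum(len(f) for f in friends.values())
avg_friend_degree = sum(len(friends[j]) for f in friends.values() for j in f) / total_degree

print(avg_degree < avg_friend_degree)  # True: your friends have more friends than you
```

The inequality is strict for any graph where friend counts vary at all, which is essentially every real social network.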
Yeah that's fair. I also have put a ton of effort into building charisma and social capital as well, so it stands to reason that I have a high powered friend / social network. It's actually a good thing a lot of the time, I feel very blessed. But every now and then I do get quite jealous!
I'd say it's actually harder on artists than on everyone else (assuming you aren't counting artists as a subset of nerds). 90% is not 100%. At least for programmers, reviewing code and structuring the solution were always part of the job; people who were fond of codegolfing CRUD in Rust (look how much more elegant I can make this by using a right fold) are going to suffer, but only a little bit.
I imagine the same is true for physicists. Maybe not, but the fact that they are willingly implementing it motu proprio suggests it is.
Maybe in a few years things will change, AI will be able to do everything fully autonomously, and we'll all end up at the bottom of the totem pole (or "the balls on the dick" as some will say). But so far that's not the case and, to be honest, the last big improvement to text generation models I've seen happened in early 2024.
Meanwhile, I see artists collectively having a full-blown psychotic break about AI, hence indie game dev awards banning any and all uses of AI, etc. I think this is because it changes their job substantially, on top of slashing most of them, and also because it came completely out of left field. Nobody expected that one of the main things AI would be good at would be art; quite the opposite, people expected art to be impossible for AI because it doesn't have imagination or soul or whatever. In fact, the problem with AI is actually that it has too much imagination. And revealed preference strikes here too: you don't see many artists talking about how they are integrating AI into their workflow.
Artists were already in a fairly bad place, due to centuries of encroachment and ease of replication making it a tournament profession.
A normie-compatible take is something like Brandon Sanderson's recent speech, We are the Art (https://youtube.com/watch?v=mb3uK-_QkOo). We've had enough content to consume, or decorate with, or read, for quite some time, and most people who aren't Brandon Sanderson can't make a living off of it anyway. But people still want to be makers, not just consumers, and it's annoying because... probably because our communities are broken, so it's not trivially easy to just give a friend a handmade thing and have them put it up and appreciate it, or read it, or write a letter back about it.
People have been appreciating the new Opus quite a lot at least, and the others to various extents.
GPT-4. I found reasoning models to be a big flop. Not zero improvement but fairly small.
Have you not seen an LLM go from planning mode to a functioning whole product?
It's quite revealing comparing the criticisms of AI from programmers vs from artists. From programmers the complaint is "I've tried AI and it sucks at doing X. Why are you trying to force me to use it for X?" when from artists it's "AI is bad because it steals from artists / has no soul / lacks creativity / other vague complaint. Nobody should be allowed to use AI."
Most art was already commodified, and it was commodity artists, not creative artists who got the most brutal axe.
Essentially, contrary to your point about AI having imagination, creativity is the primary skill it lacks. It's basically a machine for producing median outcomes based on its training data, which is about as far away from creativity as you can get.
But for most artists, their jobs were based on providing quotidian, derivative artworks for enterprises that were soulless to begin with. To the extent that creativity was involved in their finished products, it was at a higher level than their own input, i.e. a director or something commissioning preset quotidian assets as a component in their own 'vision', the vision being the creative part of the whole deal.
However, I do believe creative artists will be threatened too. It's a little complicated to get into, but I think creative art depends not just on lone individuals or a consumer market, but on a social and cultural basis of popular enthusiasm and involvement in a given artform. I'm talking about dilettantes, critics, aficionados here. It's a social and cultural pursuit as much as it's an individual or commercial one, and I think that AI will contribute to the withering away of these sorts of underpinnings the same way corporate dominance and other ongoing trends previously have.
So for the artistic field, I envision complete and total commoditized slop produced by machines, once the human spirit has finally been crushed.
If your market consists of 99 derivative rip-offs and one legitimately interesting and fresh idea, the fresh idea will take half the market and the 99 rip-offs will fight over the other half. If there are 999,999 derivative rip-offs, then they'll have to split their half a lot more ways but they still won't be able to push in on the fresh idea's cut.
Art is a winner-takes-all industry. The JK Rowlings and Terry Pratchetts of the world have many thousands of times as many sales as Joe Average churning out derivative slop that's merely so-so. The addition of more slop won't change the core dynamic. Fundamentally, anyone trying to get the audience to accept a lower quality product isn't pitting themselves against the ingenuity of the artist, but the ingenuity of the audience. Trying to hide information from a crowd that has you outnumbered thousands-to-one is not easy.
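Under the stipulated split above (the one fresh idea keeps half the market no matter how much slop competes for the rest), the per-rip-off share collapses as slop multiplies. A quick illustration of the arithmetic, with the 50% figure taken as the comment's assumption rather than a measured fact:

```python
# If the one fresh idea holds 50% of the market (the premise above),
# the derivative works only ever divide the remaining 50% among themselves.
fresh_share = 0.5
for rivals in (99, 999_999):
    per_rival = (1 - fresh_share) / rivals
    print(f"{rivals:>7,} rip-offs -> {per_rival:.8f} of the market each")
```

Each of 99 rip-offs gets about half a percent; each of a million gets about 0.00005%. The flood of extra slop impoverishes the slop-producers, not the fresh idea.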
If the market collapses then you create demand for someone to create a new market with less crap.
Those 'markets' would require nothing more than a blog or Youtube account and a well-trusted reviewer with a following.
Okay, I like J. K. Rowling, I think she was underrated back in the day by Serious Literary People, but I still feel like bringing her up torpedoes your case about more creative artists going further.
If you make more money from your book series than anyone ever has before then you must be doing something right.
She wrote the Bible for Liberal kids, effectively.
Sure, her writing's mostly pretty average (...or abominable, if you count the Fantastic Beasts movies), but don't underrate the skill of being able to wrap up a highly-anticipated series with an epic and decently satisfying conclusion. Sure would be nice if a certain "highly-skilled" fantasy author with the middle initials R. R. .... uh, whose first name isn't John ... could manage that.
Because creative artists got the axe a very long time ago. I expect the modal net earnings for a creative artist are already quite negative.
How much work has there ever been for creative artists? I would bet that a solid 95% of art over the last 1000 years has been one of:
There used to be a lot of jobs for people like: local music hall player, freelance graphic designer, craftsman stoneworker, small-town paper writer, etc. Admittedly most of those dried up long ago, though.
I was putting that in Nybbler's 'commodity' category. My point was that 'creative', 'raw', 'self-expression' Art with a capital A has always been very rare.
Eh. This is like claiming people who enjoyed traveling and being perceived as "worldly" would have been devastated by the internet allowing anyone to chat with strangers from 1000 miles away with minimal friction. Was that a thing? Plausibly maybe, but I don't recall much to that effect.
As someone who bases a decent chunk of his identity around being intelligent, I'm not too worried. It turns out that a lot of it was implicitly graded "relative to other humans". I'm not too worried that calculators can do mental math better than me, for instance. And smart people will be able to leverage AI much more effectively than dumb ones. We can already see that in education: AI is easily one of the best tools for learning that has ever been invented, but it's also one of the best tools for avoiding learning. Naturally curious people will gravitate to the former on important topics, while less curious people will gravitate to the latter and suffer long-term costs from it.
It's highly unlikely that the value from human intelligence is going to 0 any time soon. If anything, AI could plausibly increase it rather than decrease it.
Eh? I'm very confident that's wrong. Normies might not appreciate the impact of ChatGPT and co to the same degree, but I strongly doubt that they literally believed that there was human-level AI in 2021. AGI was science fiction for damn good reason, it didn't exist, and very, very few people expected we'd see it or even precursors in the 2020s. Jarvis was scifi, and nobody believed that something like Siri was in the same weight-class.
To shift focus back to your main thesis: the normie you describe is accustomed and acclimatized to being average. Bitter experience has proven to them that they're never going to be an "intellectual" and that their cognitive and physical labor is commoditized. It's unlikely that being the smartest person in the room (or in spitting distance) is an experience they're familiar with. Hence they have less to lose from a non-human competitor who dominates them in that department.
On the other hand, their average Mottizen is used to being smart, and working in a role where it's not easy to just grab a random person off the street to replace them. That breeds a certain degree of discomfort at the prospect. I've made my peace, and I'm going to do what I can to escape the (potential) permanent underclass. It would be nice to have a full, accomplished career with original contributions to my professional field or the random topics I care about, but I'll take a post-scarcity utopia if I can get it.
You have to remember that plenty of normal people (not bottom of the barrel) believe things like “the government has the cure to all cancer but doesn’t release it so that drug companies make money”, which is rather more farfetched than GPT4 existing in 2007.
I'll eat my hat if they were anywhere a majority. I'm far more inclined to believe that polling would show something very close to Lizardman's constant.
Study: As of 2013, a whopping 37 percent of USAians think that the federal government is colluding with pharmaceutical companies to hide natural cures for cancer from the public.
Presumably via investments?
I've been... lazy in that regard. Far too much money sitting in my account not accruing interest. But yes, that's a factor. I also earn an OOM more than I did back in India, which definitely helps. If I were less lazy, I'd have put most of my money in the S&P 500 by now, but I've already put myself in a much better place than if I'd been complacent about things.
I don't expect that this will necessarily make me rich in relative terms, I'm starting too low, too late. But I want enough of a safety net to survive in comfort for the (potential) period of unemployment when AI eats my profession whole, before we implement solutions such as UBI. Not starving, not dying in a riot, all of that is important to me.
I had a somewhat related idea to this. It relates to ways that middle-class professionals could be screwed. I haven't really hammered it out fully, but here's the gist of it. Basically, the value of automating labor is that it frees up human resources for other tasks. Rather than one hundred artisans hand-tooling goods, you have one machine operated by one engineer producing the same goods, and then ninety-nine people who can perform tasks in other areas of the economy.
But with AI, there will be the extinction of an entire class of meaningful work: that which is done by the middle class. There aren't adjacent fields for them to move into once displaced, as those will also be taken by AI. Their only options will be to move up or down into different classes of the economy, and for the vast, vast majority of them, it will be a downward spiral.
The area below the middle class economy is called the gig economy. So the value of AI is that there will be a wealth of gig workers, and thus fast food can be delivered more cheaply than ever before.
That is the one benefit of AI we are certain about.
There is a hypothetical scenario, a longstanding dream of science fiction, where with infinite labor afforded by AI there will be infinite opulence. However, some points that contest that are 1) there is only so much demand for consumables and market goods and services, so that economic demand begins to be overshadowed by status concerns and non-economic spheres of life in terms of desired things, 2) many of the inputs that go into supplying those goods and services are finite (i.e. resources) and so their creation can't be infinite, 3) political ramifications suggest reduced power and thus leverage for the displaced, and so their economic needs could easily be ignored by those who retain power.
All in all, there looks to be dark times ahead.
How broad would you consider 'gig economy' to be? Because as I understand it, gig economy means signing onto an app to become a service provider for some kind of on-site labor. It may be handyman, driving, tutoring - but it is built for "one man shows".
As far as I can tell, winning at the gig economy means securing enough contracted work to regularly delegate labor to someone else. Then (poof) you're a business. A business that's a bunch of gigs underneath a suit, but a business nonetheless. Is this business status still in the gig economy? Or has it transmogrified into something else?
My understanding of the gig economy is that it's a progressive step in the disenfranchisement of workers, caused by their weakening bargaining position as demand for unskilled or semi-skilled labor continues to fall. I don't have an in-depth understanding of it, but it seems to me that many of them occupy precarious positions, accept low wages, lack many of the benefits workers in the past enjoyed, such as union representation and health care plans, and have to take on the burden of supplying their own equipment (cars for Uber drivers, for instance). I think app-based employment has essentially undercut the collective bargaining position of workers and empowered the huge, centralized corporations which control them.
We already have, in effect, a trial run of post-scarcity civilisations. Not complete or total, obviously. But Western society is long past needing to worry about food and water.
I think men will play games and have fun in that kind of sci fi world. They'll find new and interesting things to pursue. They'll go sailing or rock climbing.
Women will play the status games, become depressed and create social problems via whatever the next social media is. Unless AI can turn this behaviour more productive at least.
Men's contests often don't look like rock climbing or sailing; they look like war.
But I don't think we'll get the sci-fi world. Scarcity will be with us always, even if someone has to create it (by violently taking control of or destroying the means of production), though I don't in fact think that will be necessary.
Emotional health isn't what it's about. You've got people who work with physical things, people who do intellectual work, and people who play monkey dominance games at a high level. The latter are almost always indisputably on top, but that hasn't been entirely true in recent years; there's been significant status overlap between the intellectual workers and the monkey dominance people. AI threatens to throw the intellectual workers all the way down to the bottom -- not even so high as the privileged slave levels they had in ancient Athens, but all the way down to utter uselessness, like drug addicted alcoholic bums but not as sympathetic. The monkey dominance people are of course overjoyed at this, putting these interlopers in their place has been a nagging goal for a long while now. Nerds are more threatened by AI than normies because AI is vastly more of a threat to them.
In some hypothetical future level of development, sure. But in its current or slightly improved form of LLMs, AI is mostly a threat to midwit pencil pushers and not to any genuine expert or intellect, whether a skilled artist or an engineer.
It's funny, but some of the greatest authors of all time realized this way before computers, let alone AI. Bukowski was notorious for more or less believing that to be a true artist, you had to be a sort of social vagabond; any attempt at real genteelness would pollute the pure art. Spengler wrote The Decline of the West in poverty, and Eric Hoffer lived in a one-room apartment near Chinatown, San Francisco, for decades.
If we hit vertical takeoff and the knee-jerk response is a kind of poverty-level AGI, we could maybe get a renaissance of true art and philosophy. That, or sex robots.
Even if the AI bulls were right (they're not), most of the remaining 10% of research work can still be done by grad students, postdocs, and other humans. We shouldn't expect to see any decrease in staffing at labs, but instead a huge increase in productivity.
Something I'm curious about is how AI has been integrated into people's professional workflows. My company has been implementing AI in various places, but given the level of human supervision and human-centric communication needed in my work process, I'm not convinced significant human replacement is coming for quite a while.
AI benefits:
Meeting transcriptions. Online meetings between two parties require each interaction to be recorded. Representative case notes are spotty at best; the AI generally produces significantly more complete and timely transcriptions, freeing up a lot of time and being more accurate than most human-written notes. Adoption of this tool has been spotty, but the people who are using it are seeing significant benefit. Job replacement impact: 0, as no rep has someone specifically writing their case notes. Benefit: significant.
Internal document searches. AI searches are generally better than our internal search engine at locating company documentation and resources. It is still hit or miss, but luckily the AI search provides the links it is citing, so I can go through them to locate the specific policy or document I'm looking for. It's not consistent, but I generally use it before our internal search engine. Job replacement impact: 0. Benefit: medium.
Email drafts: Great for rapid iteration of emails. They still need to be edited and reviewed, but they're very helpful when I'm having trouble finding the correct wording and need to get something out quickly. Some people use it a lot; I use it only when I don't have a clear structure in mind. Job replacement impact: 0. Benefit: limited to significant, depending on user preference.
AI weaknesses:
In the various professional careers I've held, I still don't see AI having a significant impact in terms of replacing workers or reducing the intellectual motivation of young professionals. I still hold to the idea that AI is, and will be, unable to innovate, because it doesn't and cannot push against the zeitgeist of the data it is trained on. If AI had been around in the time of the Wright Brothers, would it have thought human flight was possible?
The main problem with AI is people trying to use it to do their thinking for them, when it is most effective at automating monotonous tasks and increasing productivity. Maybe it's because I don't use AI in my life the same way many adopters have, but I don't see any significant impact on my day to day, even though it is becoming more advanced.
I'm probably in the 99.99th percentile for doctors (or anyone else) when it comes to the use of AI in the workplace. I estimate I could automate 90% of my work (leaving aside the patient facing stuff and things that currently require hands and a voice) if I could.
The main thing holding me back? NHS IT, data protection laws, and EMR software that still has Windows XP design language. This means I'm bottlenecked by inputting relevant information into an AI model (manually trawling the EMR, copying and pasting information, taking screenshots of particularly intransigent apps) and then transferring the output into the digital record.
The AIs are damn good at medicine/psychiatry. Outside my own domain, I have a great deal of (justified) confidence in their capabilities. I've often come to take their side when they disagree with my bosses, though the two are usually in agreement. I've used them to help me figure out case presentations ("what would a particularly cranky senior ask me about this specific case?" and guess what they actually asked?), giving me a quick run-down on journal publications, helping me figure out stats, sanity checking my work, helping decide an optimal dose of a drug etc. There's very little they can't do now.
That's the actual thinky stuff. A lot of my time is eaten up by emails, collating and transcribing notes and information, and current SOTA models can do these in a heartbeat.
To an extent, this is an artifact of resident doctors often being the ward donkey, but I'm confident that senior clinicians have plenty to gain or automate away. The main reason they don't is the fact that they're set in their ways. If you've prescribed every drug under the sun, you don't need to pop open the BNF as often as a relative novice like me would - that means far less exploration of what AI can do for you. Yet they've got an enormous amount of paperwork and regulatory bullshit to handle, and I promise it can be done in a heartbeat.
Hell, in the one hospital where I get to call the shots (my dad's, back in India), I managed to cut down enormous amounts of work for the doctors, senior or junior. Discharges and summaries that would take half a day or more get done in ten minutes, and senior doctors have been blown away by the efficiency and quality gains.
Most doctors are at least aware of ChatGPT, even if the majority use whatever is free and easy. I'm still way ahead of the curve in application, but eventually the human in the loop will be vestigial. It's great fun till they can legally prescribe, at which point, RIP human doctors.
Charting is not supposed to be the majority of the job and is more or less a recent invention (in the US at least).
I find OpenEvidence and other similar tools to be relatively unhelpful, especially since I generally have to cross reference.
I don't know how my coworkers are using it, but I've been having great results replacing "google an Excel function and hope somebody else had the same problem and got it solved".
This is a gateway drug into AI.
Start getting it to write your macros for anything you do more than 3 times.
It's really, really good.
There may be a bit too much romanticization of "salt-of-the-earth normies" going on here. Last I checked, the social atomization trend (friendship- and sex-recession) is just happening across the board, while many (most?) career-intelligentsia derive satisfaction both from their work and from those other things. It's not that one is a substitute for the other.
It seems that you acknowledge this ("This is true even for many people with decent-IQ white collar jobs"), but then you posit "someone who has spent a lifetime cultivating an identity built around an intellect that is no longer useful to anyone, least of all themselves". Who are these people, exactly?
Internet nerds like us who based their lives around forums, intellectualism, and, in my case, literature, etc. The new AI world of dopamine cattle harnessed by the tech fiends suggests total obsolescence of any sort of life that isn't fully grounded in the concrete or else enslaved for the purpose of dopamine-slop control. Admittedly, some people here have lives which go beyond the abstract.
I find this take so hard to understand. I like talking about things, learning about things, thinking about things. The existence of vastly more minds (mind-like objects, I'm using shorthand here) with whom I can do that is great! GPT or other AIs don't mind me asking endless questions about beginner-level stuff, or helping with technical things, or working through ideas.
Granted, these AI are mostly junior partners at the moment, or at least 'experienced friend who doesn't mind helping if asked but won't do stuff of their own initiative' and perhaps I'd feel differently if I really did just become an appendage, but at the moment things are great.
Personally I don't find AIs as fun to talk to as any human. To me, they're like an interactive encyclopedia. It is fun to read and learn about stuff, but they can't stand in for the human element, either on the individual level or the level of an entire society or group (like the motte). Ultimately I find them in some sense desirable in terms of their first order effects (helping with research, etc.), but it's their second and third order effects I'm worried about, where I think, as I explain elsewhere, they will kill off large parts of human culture, remap the class system, and generally work towards all the ongoing, negative trends that already seem apparent. In a sense they are a continuation of capitalism and its logic.
Does that include people with Down's Syndrome? Outright and obvious diseases aside, I can think of plenty of people who are so unpleasant/pointless to talk to that I'd speak to an AI any day instead. And even within AI, there are models that I'd prefer over the alternatives.
There was a man I spoke to wandering up and down my alley the other day. He was small by nature and shrunken even further by age. He had a walking aid and was somehow managing his stroll even though the entire alley was covered in ice.
Anyway, after he called to me across my backyard, I was engaged by him in a lengthy conversation in which he asked me about the species of pine growing in my yard and told me about how he used to raise dogs for a living when he was younger. He told me that his former best friend at one time killed his favorite pet dog by throwing it down a flight of stairs.
Originally I took him for a homeless person and he seemed a bit off owing to his advanced age, but I still found him an interesting enough person to meet and speak to, and nothing about the experience could have been replicated by AI.
People with Down's are often quite charming to speak to, infectiously optimistic and with surprisingly complex inner lives. I would find looking after a family member with the condition difficult, but not because of the conversation.
Spinners and weavers from 250 years ago. Imagine that your (and your ancestors') whole identity is your skill and craft, and then some nerd invents a contraption that makes yarn and cloth faster, cheaper, and better, and you end up in the gutter.
Easy to understand why they were angry, and also easy to understand why their anger achieved nothing at all.
There's an episode of Lark Rise to Candleford where an elderly lady's bobbin lace is no longer needed by the local dressmakers, due to the new machine lace. Also a bit of other industrial commentary in other episodes, but that one always hits me the hardest.
Redditors. The irony is that their intellect was never that impressive anyway.
But seriously, there really is an entire cohort of people who were in the top five percent of their high school and college because they could sit still and gulp down boring bullshit, and who think they are somehow intellectually superior to the plebs they disdain. Usually it's not actually them who are the really smart people making big strides in science and tech.
You do realize how unconvincing it is to cite the top 5% of students as not really being all that useful? Do those people have any purpose in their existence in your eyes? Regardless of any unwarranted sense of self-worth, if they're doomed, then what hope is there for anyone?
I remember the sheer glee they had about factory workers, coal miners and truck drivers being driven out of business by automation and illegal immigration. Fuck ‘em. I hope they enjoy their brave new world.
There was limited glee (which is not to say there was none, before the screenshots come out). Mostly the worst one could say is that they didn't care much, but they're hardly the only people guilty of not caring when bad things happen to other people.
When I worked at Google, about 5% of applicants got through the phone screen. A lot of them weren't all that useful.
Welcome to the black pill.