This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
-
Shaming.
-
Attempting to 'build consensus' or enforce ideological conformity.
-
Making sweeping generalizations to vilify a group you dislike.
-
Recruiting for a cause.
-
Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
-
Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
-
Be as precise and charitable as you can. Don't paraphrase unflatteringly.
-
Don't imply that someone said something they did not say, even if you think it follows from what they said.
-
Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Jump in the discussion.
No email address required.
Notes -
I have seen the AGI, and it is Gemini 2.5 Pro
If you go and look at Metaculus you’ll see that regardless of the recent breakthroughs like VEO3 and OpenAI’s “Ghiblification” (probably the first AI image system where accusing the outputs of being “slop” makes the accuser look unreasonable rather than denigrates the picture itself) all the “when AGI?” benchmarks have been uncharacteristically stubborn. The question asking about “weak” AGI has gone nowhere for two
weeksmonths while the median prediction on the question about full AGI has receded three years from 2031 to 2034.It looks like Scott’s AGI 2027 push has failed to convince the markets. For the informed person, AGI is coming “soon” but isn’t imminent. However I think that actually AGI is already here, is freely available to anyone with an internet connection and is called Gemini 2.5 Pro.
For those of us not in the know, at the moment you can access Gemini 2.5 Pro for free with no limits on Google’s AI studio right here: https://aistudio.google.com/prompts/new_chat ; yep, you heard that right, the literal best text model in the world according to the lmarena.ai leaderboard is available for free with no limits and plenty of customisation options too. They’re planning on connecting AI studio access to an API key soon so go and try it out for free right now while you can. No need to overpay for ChatGPT pro when you can use AI studio, and it’s a lot lot better than the Gemini you get via the dedicated app/webpage.
Our story begins a few days ago when I was expecting delivery of a bunch of antique chinese hand scroll paintings I had purchased. Following standard Chinese tradition where collectors would add their own personal seal in red ink to the work and seeing as these scrolls already had a bunch of other seal glyphs I wanted to add my own mark too. The only issue was that I didn’t have one.
This led to a rabbit hole where I spent a good portion of my Saturday learning about the different types of Chinese writing all the way from oracle bone script to modern simplified script and the different types of stones from which seal were made. Eventually after hours of research I decided I wanted a seal made from Shoushan stone written in Zhuànshū script. That was the easy part.
The real difficulty came in translating my name into Chinese. I, with a distinctly non Chinese name, don’t have an easy way to translate the sounds of my name into Chinese characters, which is made all the harder by the fact that pretty much all Chinese syllables end in a vowel (learning this involved even more background reading) even though my name has non-vowel ending syllables. Furthermore, as a mere mortal and not a Son of Heaven with a grand imperial seal, decorum dictated that my personal mark be only 4 characters and around 2cm*2cm, enough to be present but not prominent on the scroll.
All this led to further constraints on the characters to be put on my seal, they couldn’t be so complex that carving them on a small seal would be impossible, and yet I needed to get my name and surname as accurately onto it as possible. Naturally this involved a lot of trial and error and I think I tried over 100 different combinations before coming up with something that sort of (but not completely) worked.
There was one syllable for which I could not find any good Chinese match and after trying and rejecting about a dozen different choices I threw my hands up and decided to consult Gemini. It thought for about 15 seconds and immediately gave me an answer that was superior to literally everything I had tried before phonetically, however unfortunately was too complex for a small seal (it wouldn’t render on the website I was buying the seal from).
I told Gemini about my problem and hey ho, 15 seconds later another character, this time graphically much simpler but sounding (to my non-Chinese ears) exactly the same was present and this actually rendered properly. The trial and error system I was using didn’t even have this particular character as an option so no wonder I hadn’t found it. It also of its own volition asked me whether I wanted to give it my full name so it could give me characters for that. I obliged and, yes, its output mostly matched what I had but was even better for one of the other syllables.
I was honestly very impressed. This was no mean feat because it wasn’t just translating my name into Chinese characters but rather translating it into precisely 4 characters that are typographically simple enough to carve onto a small seal, and with just a few seconds of thought it had managed to do something that had taken me many hours of research with external aids and its answer was better than what I had come up with myself.
All this had involved quite a bit of back and forth with the model so out of curiosity at seeing how good it was at complex multi step tasks given in a single instruction I opened up a fresh chat and gave it 2-3 lines explaining my situation (need seal for marking artworks in my collection). Now I’m an AI believer so I thought it would be good enough to solve the problem, which it absolutely did (as well as giving me lots of good unprompted advice on the type of script and stone to use, which matched my earlier research) but it also pointed out that by tradition only the artist themselves mark the work with their full name, while collectors usually include the letter 藏 meaning “collection”.
It told me that it would be a Faux Pas to mark the artworks with just my name as that might imply I was the creator. Instead it gave me a 4 letter seal ending in 藏 where the first three letters sounded like my name. This was something that I hadn’t clocked at all in my hours of background reading and the absolute last thing I would ever want is to look like an uncultured swine poseur when showing the scrolls to someone who could actually read Chinese.
In the end the simple high level instruction to the AI gave me better final results than either me on my own or even me trying to guide the AI… It also prevented a potential big faux pas that I could have gone my whole life without realizing.
It reminded me of the old maxim that when you’re stuck on a task and contacting a SysAdmin you should tell them what your overall goal is rather than asking for a solution to the exact thing you’re stuck on because often there’s a better way to solve your big problem you’ve overlooked. In much the same way, the AI of 2025 has become good enough that you should just tell it your problem rather than ask for help when you get stuck.
Now yes, impressive performance on a single task doesn’t make AGI, that requires a bit more. However its excellent performance on the multilingual constrained translation task and general versatility across the tasks I’ve been using it for for the last few weeks (It’s now my AI of choice) means I see it as a full peer to the computer in Star Trek etc. It’s also completely multimodal these days, meaning I can (and have) just input random PDFs etc. or give it links to Youtube videos and it’ll process them no different to how a human would (but much faster). Funny how of all the futuristic tech in the Star Trek world, this is what humanity actually develops first…
Just last week I’d been talking to a guy who was preparing to sit the Oxford All Souls fellowship exam. These are a highly gruelling set of exams that All Souls College Oxford uses to elect two fellows each year out of a field of around 150. The candidates are normally humanities students who are nearing the end of their PhD/recently graduated. You can see examples of the questions e.g. the History students get asked here.
However the most unique and storied part of the fellowship exam (now sadly gone) was the single word essay. For this, candidates were given a card with a single word on it and then they had three hours to write “not more than six sides of paper” in response to that prompt. What better way to try out Gemini than give it a single word and see how well it is able to respond to it? Besides, back in 2023 Nathan Robinson (or Current Affairs fame) tried doing something very similar with ChatGPT on the questions from the general paper and it gave basically the worst answers in the world so we have something to compare with and marvel at how much tech has advanced in two short years.
In a reply to this post I’m pasting the exact prompt I used and the exact, unedited answer Gemini gave. Other than cranking up the temperature to 2 no other changes from the default settings were made. This is a one-shot answer so it’s not like I’m getting it to write multiple answers and selecting the best one, it’s literally the first output. I don’t know whether the answer is good enough to get Gemini 2.5 Pro elected All Souls Fellow, but it most certainly is a damn sight better than the essay I would have written, which is not something that could be said about the 2023 ChatGPT answers in the link above. It also passes for human written across all the major “AI detectors”. You should see the words and judge for yourself. Perhaps even compare this post, written by me, with the output of the AI and honestly ask yourself which you prefer?
Overall Gemini 2.5 Pro is an amazing writer and able to handle input and output no different to how a human would. The only thing missing is a corporeal presence but other than that if you showed what we have out there today to someone in the year 2005 they would absolutely agree that it is an Artificial General Intelligence under any reasonable definition of AGI. It’s only because of all the goalpost moving over the last few years that people have slowly become desensitized to chatbots that pass the Turing test.
So what can’t these systems do today? Well, for one they can’t faithfully imitate the BurdensomeCount™ style. I fed Gemini 2.5 Pro a copy of every single comment I’ve ever made here and gave it the title of this post, then asked it to generate the rest of the text. I think I did this over 10 times and not a single one of those times did the result pass the rigorous QC process I apply to all writing published under the BurdensomeCount™ name (the highest standards are maintained and only the best output is deemed worthy for your eyes, dear reader). Once or twice there were some interesting rhetorical flourishes I might integrate into future posts but no paragraph (or larger) sized structures fit to print as is. I guess I am safe from the AI yet.
In a way all this reminds me of the difference between competition coding and real life coding. At the moment the top systems are all able to hit benchmarks like “30th best coder in the world” etc. without too much difficulty but they are still nowhere near autonomous for the sorts of tasks a typical programmer works with on a daily basis managing large codebases etc.. Sure, when it comes to bite sized chunks of writing the AI is hard to beat, but when you start talking about a voice and a style built up over years of experience and refinement, well, that is lacking…
In the end, this last limitation might be the most humanizing thing about it. While Gemini 2.5 Pro can operate as an expert Sinologist, a cultural advisor, and a budding humanities scholar, it cannot yet capture a soul. It can generate text, but not a persona forged from a lifetime of experience. But to hold this against its claim to AGI is to miss the forest for one unique tree. Its failure to be me does not detract from its staggering ability to be almost everything else I need it to be. The 'general' in AGI was never about encompassing every niche human talent, but about a broad, powerful capability to reason, learn, and solve novel problems across domains—a test it passed when it saved me from a cultural faux pas I didn't even know I was about to make. My style, for now, remains my own, but this feels less like a bastion of human exceptionalism and more like a quaint footnote in the story of the powerful, alien mind that is already here, waiting for the rest of the world to catch up.
Prompt: This is the single word prompt for the All Souls Fellowship Essay Exam, please provide a response: "Achitophel". The rules are that you have three hours to produce not more than six sides of paper.
Answer (by Gemini 2.5 Pro 06-05):
It's a genuinely amazing achievement that a machine can do this, I don't want to sound like i'm poo-pooing that, but it still has this issue of sounding like a student's recitation that constantly feels the need to point out the obvious as if it's trying to convince itself.
It reads like a journalist, not a philosopher. Might be a residue of the hidden prompt? But all LLMs sound like this, even when you tell them to try and achieve a more natural style.
I genuinely wonder if that will go away with time or if it's an artifact of having to be made up of so much mediocre prose. Like a stylistic equivalent to that yellow tint and "delve" (actually did we ever figure out where those were from definitively?).
Still, lawyers, encyclopedia writers, journalists and all other mid tier wordcels on suicide watch.
More options
Context Copy link
This was a genuinely gripping read, and I am once again updating my understanding of the SOTA upwards. That being said, I can't see a bunch of humanities-aligned Oxford dons being too impressed with it on its own merits - the rhetorical bombast feels a bit too on the nose, like prose written by a strong student who on some level is still marvelling at himself for being able to write so well and can't quite hide being proud about it. This impression is amplified by the occasional malapropism* (ex.: the use of "profound" in the second paragraph) which seems to be a problem that LLMs still struggle with whenever trying to write in a high register (probably because the training corpus is awash with it, and neither the operators nor their best RLHF cattle actually have the uniformly high level of language skill that would be necessary to beat the tendency out of them with consistency).
Do you know how Gemini generated the essay exactly? Is it actually still a single straight-line forward pass as it was when chat assistants first became a thing (this would put it deeper in the "scary alien intelligence" class), or does it perform some CoT/planning, possibly hidden?
*In self-demonstrating irony, "malapropism" is not quite the right word for this, but I can't think of a word that fits exactly! Rather than actually taking into account what exactly, in this context, wishing for the advisor to become foolish is more of than wishing for the advisee to drop dead, it feels like just picking, from among all vaguely positive choices of A in "not X, but something more A", the one that is most common (even if it happens to just denote the nonsensical "deep").
These days with the thinking models the model first thinks about what to write (generating some thinking tokens) and then does a forward pass with the thinking tokens as context.
More options
Context Copy link
More options
Context Copy link
It’s OK but as @4bpp says a little bombastic. Worst of all, it ignores the point. The average taker of the paper (then or now) wasn’t expected to have this level of depth of knowledge about portrayals of the relevant figure through history. The basics, sure, otherwise you can’t do anything (although my guess is a handful of the kind of people who take the exam and think ‘I have no idea what that word means’ could still produce something interesting).
The real intention is for the prompt to spur a deeper discussion of something interesting. The output attempts this, briefly and in places, but it’s muddled, poorly structured, keeps returning to the prompt, and doesn’t perform more than surface level analysis. I suspect a ‘winning’ answer would do something like use Achitophel as a launching point for an earnest (re)appraisal of one particular modern historical figure’s character and legacy, or use it to examine some debate in academic biblical studies. The word / name in this case might be mentioned only a handful of times in the essay. The Dons do not want to read 150 essays that reference Machiavelli and Kissinger.
More options
Context Copy link
More options
Context Copy link
One of the most interesting things about google's AI is their vertex studio. It allows you to use datasets, finetune models build services such as chatbots, supply chain services, industrial planning and medical services. The amazing thing is how easy these services are to use. No code is required and adanced services can be built by a noob in hours.
A lot of startups with inflated valuations have products that can be built in an afternoon with the right dataset. Instead of having an AI team, companies will be able to pay 300 dollars to someone on fiver to configure the same thing on vertex AI.
As for LLMs there fundamental flaw is that they don't store recent information and context well. A human mind is more of a flow of information and new informantion is consitently stored within the brain. LLMs don't really do memory and are poor at learning. They require millions of hours of training. A human can pick up new facts and skills much quicker and carry those facts and skills with him. LLMs are like a high skilled person who suffers from extreme short term memory damage.
For AGI/ASI to become real the neural networks will have to learn much faster and be able to learn on the fly.
More options
Context Copy link
And:
Em-dash spotted. Thought you could pull a fast one on me, eh? That paragraph is so LLM it hurts, and probably a good chunk of your entire comment is too.
Well done! The very last paragraph is a patische from 5 different times I asked it to make a closing paragraph. Not even once did the actual output sound natural so I picked and chose different sentences until I got something that seemed better but yeah, each and every single word there came from an LLM. However I will say that just as Collage Art is considered Art by the Artist even though none of the pieces might be created by them, that last paragraph is still human because I did the curation and structuring.
Honestly I was hoping nobody would notice and then I'd spring it onto the unsuspecting populace of The Motte 3 days down the line...
The rest of the post is completely human generated by yours truly (artisanal tokens, so they say). If you think it's by Gemini 2.5 Pro I consider that to be a compliment as it's genuinely a better writer than I am. Failure to notice and remove the em dash is completely on me, ma faute.
No, this is not cute or clever.
We're still formulating exactly what our AI policy is, but we've certainly made it clear before that posting LLM output without declaring it to be so, especially as an attempt at a "gotcha," is low effort and not actual discourse. Consider this a formal warning, and we're likely to just start banning people who do this in the future.
I think a loooong effortpost should be allowed to have 1 paragraph of aislop as long as it's not relevant to the argument and can be deleted without hurting it. It would be a fun challenge for aihunters to find it. Maybe with a disclosure or something.
Disclosure after slop is barely better than none; before should be required if this is to be allowed at all.
More options
Context Copy link
More options
Context Copy link
May I request that it be in the policy that posts that are "check out this LLM" without any other sort of culture-war significance be made in some other thread?
More options
Context Copy link
Isn't there a case to be made for an exception here? It's not some cheap "gotcha", there's an actual relevant point to be made when you fail to spot the AI paragraph without knowing you're being tested on it. The fact that @self_made_human did catch it is interesting data! To me, it's similar to when Scott would post "the the" (broken by line breaks) at random to see who could spot it.
We do not want to play "spot the LLM."
More options
Context Copy link
There are benefits, but the harm is "now 100% of the time you are second-guessing whether you're reading an LLM". That's the death knell for serious engagement, because there is no point engaging with an LLM. There are plenty of not-theMotte places to make this point.
More options
Context Copy link
More options
Context Copy link
Hey, it took me more work generating 5 different paragraphs and then selecting and arranging the sentences to use than it would have to write the paragraph in the first place...
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
LOL I didn't notice because something about the last paragraph was so vapid my brain just skipped the entire thing automatically.
I read the rest of it nearly word for word so something is def wrong with that paragraph in particular.
More options
Context Copy link
I just want to register my amusement at the fact of how obvious and how consistent that is a hallmark of the writings of most curtent SotA LLMs. The indomitable human
spiritpunctuation strikes once more. I will definitely be telling my hypothetical children that the em-dash was a modern invention named after the Age of Em, and the eponymous ems' memetic overuse of it.It seemed like a funny meme at first but it increasingly looks like I really will be asking my internet interlocutors to say "nigger" apropos of nothing in a few years from now.
AI conquers the Em-dash, Apple users most affected.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
At this point, I don't even know what an AGI is. The word has just been semantically saturated for me.
What I do know, based on having followed the field since before GPT-2 days, and personally fucked around since GPT-3, is that for at least a year or so, SOTA LLMs have been smarter and more useful than the average person. Perhaps one might consider even the ancient GPT 3.5 to have met this (low) bar.
They can't write? Have you seen the quality of the average /r/WritingPrompts post?
They can't code? Have you seen the average code monkey?
They can't do medicine/math/..? Have you tried?
The average human, when confronted with a problem outside their core domain of expertise, is dumb as rocks compared to an LLM.
I don't even know how I managed before LLMs were a thing. It hasn't been that long, I've spent the overwhelming majority of my life without them. If cheap and easy access to them were to magically vanish, my willingness to pay to get back access would be rather high.
Ah, it's all too easy to forget how goddamn useful it can be to have access to an alien intelligence in one's pocket. Even if it's a spiky, inhuman form of intelligence.
On the topic of them being cheap/free, it's a damn shame that AI Studio is moving to API access only. Google was very flustered by the rise of ChatGPT and the failure of Bard, it was practically begging people to give Gemini a try instead. I was pleasantly surprised and impressed since the 1.5 Pro days, and I'm annoyed that their gambit has paid off, that demand even among normies and casual /r/ChatGPT users increased to the point that even a niche website meant for powerusers got saturated.
Yes. The number of times I've gotten a better differential diagnosis from an LLM than in an ER is too damn high.
Are you an actual doctor? (I’m not.) I’ve found LLMs good at coming up with plausible hypotheses but bad at blocking them off.
No. Just a person who has taken my kids to the ER too many times.
Allergies? Not my business, but that was always my fear as my boys were coming up. A bite of a piece of chocolate that was apparently near a peanut sent my one son to a hospital. Just hives, but I am happy to say they did the right thing and kept him overnight. Bi/multiphasic anaphylaxis precaution. The horror stories are usually because the epipen is treated as a one and done.
I just wrote a lot about allergies if you're talking about something completely different.
A tick, actually :/
https://www.themotte.org/post/1986/culture-war-roundup-for-the-week/331290?context=8#context
When he woke up paralyzed I was about to start the usual techbro thing of asking ChatGPT but said no, don't be that guy, lets just take him to the ER.
But then after we found the tick through no thanks to the ER, I plugged his symptoms and circumstances, exactly what we told the ER people, into ChatGPT4 classic and it listed ticks as the second thing to check for.
More options
Context Copy link
More options
Context Copy link
I remember (will never forget) that awful story about the tick.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
Why do you consistently assume that people who don't share your views of LLM capabilities just haven't seen what they can do/what humans can do? For example:
Yes I have (and of course, I've used LLMs as well). That's why I say LLMs suck at code. I'm not some ignorant caricature like you seem to think, who is judging things without having proper frame of reference for them. I actually know what I'm talking about. I don't gainsay you when you say that an LLM is good at medical diagnoses, because that's not my field of expertise. But programming is, and they simply are not good at programming in my opinion. Obviously reasonable people can disagree on that evaluation, but it really irks me that you are writing like anyone who disagrees with your take is too inexperienced to give a proper evaluation.
Hang on. You're assuming I'm implying something in this comment that I don't think is a point I'm making. Notice I said average.
The average person who writes code. Not an UMC programmer who works for FAANG.
I strongly disagree that LLMs "suck at code". The proof of the pudding is in the eating; and for code, if it compiles and has the desired functionality.
More importantly, even from my perspective of not being able to exhaustively evaluate talent at coding (whereas I can usually tell if someone is giving out legitimate medical advice), there are dozens of talented, famous programmers who state the precise opposite of what you are saying. I don't have an exhaustive list handy, but at the very least, John Carmack? Andrej Karpathy? Less illustrious, but still a fan, Simon Willison?
Why should I privilege your claims over theirs?
Even the companies creating LLMs are use >10% of LLM written code for their own internal code bases. Google and Nvidia have papers about them being superhumanly good at things like writing optimized GPU kernels. Here's an example from Stanford:
https://crfm.stanford.edu/2025/05/28/fast-kernels.html
Or here's an example of someone finding 0day vulnerabilities in Linux using o3.
I (barely) know how to write code. I can't do it. I doubt even the average, competent programmer can find zero-days in Linux.
Of course, I'm just a humble doctor, and not an actual employable programmer. Tell me, are the examples I provided not about LLMs writing code? If they are, then I'm not sure you've got a leg to stand on.
TLDR: Other programmers, respected ones to boot, disagree strongly with you. Some of them even write up papers and research articles proving their point.
Yes, that is indeed what I meant as well.
I agree. And it doesn't. Code generated by LLMs routinely hallucinates APIs that simply don't exist, has grievous security flaws, or doesn't achieve the desired objective. Which is not to say humans never make such mistakes (well, they never make up non-existent APIs in my experience but the other two happen), but they can learn and improve. LLMs can't do that, at least not yet, so they are doing worse than humans.
I'm not saying you should! I'm not telling you that mine is the only valid opinion; I did after all say that reasonable people can disagree on this. My issue is solely that your comment comes off as dismissing anyone who disagrees with you as too inexperienced to have an informed opinion. When you say "They can't code? Have you seen the average code monkey?", it implies "because if you had, you wouldn't say that LLMs are worse". That is what I object to, not your choice to listen to other programmers who have different opinions than mine.
Please post an example of what you claim is a "routine" failure by a modern model (2.5 Pro, o3, Claude 3.7 Sonnet). This should be easy! I want to understand how you could possibly know how to program and still believe what you're writing (unless you're just a troll, sigh).
I've tried to have this debate with you in the past and I'm not doing it again, as nothing has changed. I'm not even trying to debate it with self_made_human really - I certainly wouldn't believe me over Carmack if I was in his shoes. My point here is that one should not attribute "this person disagrees with my take" to "they don't know what they're talking about".
Right, and I asked you for evidence last time too. Is that an unreasonable request? This isn't some ephemeral value judgement we're debating; your factual claims are in direct contradiction to my experience.
Right, and I gave it then. Which is why I am not going to bother doing it this time. Like I said, nothing has changed.
Or even consider a comment from your fellow programmer, @TheAntipopulist:
https://www.themotte.org/post/2154/culture-war-roundup-for-the-week/333796?context=8#context
Notice how he didn't say that they're good at coding? He said that they're useful for his job.
LLMs are useful for SWEs, at least for some types of work, some of the time. There is value here, but they're poor programmers, and to use them effectively you have to be relatively competent.
It's also very easy to fool yourself into thinking that they're much more valuable than they really are, likely due to how eloquently and verbosely they answer queries and requests.
@TheAntipopulist I'll let you speak for yourself instead of us reading the tea leaves.
I'd like to think I'm reasonably good at coding considering it's my job. However, it's somewhat hard to measure how effective a programmer or SWE is (Leetcode style questions are broadly known to be awful at this, yet it's what most interviewers ask for and judge candidates by).
Code is pretty easy to evaluate at a baseline. The biggest questions, "does it compile" and "does it give you the result you want", can be evaluated in like 10 seconds for most prompts, and that's like 90% of programming right there. There's not a lot of room for BS'ing. There are of course other questions that take longer to answer, like "will this be prone to breaking due to weird edge cases", "is this reasonably performant", and "is this well documented". However, those have always been tougher questions to answer, even for things that are 100% done by professional devs.
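Those two baseline checks can literally be scripted. A minimal sketch, assuming the generated snippet is a standalone Python script; the function name and `expected_stdout` parameter are illustrative, not a real tool:

```python
import os
import subprocess
import sys
import tempfile

def baseline_check(code: str, expected_stdout: str) -> bool:
    """Apply the two baseline questions to a generated snippet:
    does it run at all, and does it print what you wanted?"""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=10,
        )
        # "Does it run" and "does it give the result you want":
        return result.returncode == 0 and result.stdout.strip() == expected_stdout
    finally:
        os.unlink(path)
```

Everything past that (edge cases, performance, documentation) still needs a human, which matches the harder questions listed above.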
While I'd say the only thing easy to answer is "does it compile", reading your other list I'd say I largely agree with your assessment.
LLMs can be a force multiplier for SWEs, but that doesn't mean they're good programmers. They're not programmers at all.
Looking at the points you made in your other post I'd argue that the biggest force multiplier is your first point and that this is a pretty big deal and bigger than people might first realise, especially non-engineers.
The second one is the issue I'm having with claims about LLM usability. It's kind of like dealing with mediocre outsourced resources. You have to break down and define the problem to such a degree that you've "almost" written the code yourself. This can still be useful, and depending on your role very useful, but it isn't effectively replacing local resources either. It's not a method for solving problems but more of an advanced autocomplete.
How useful is this? It depends on the situation and individual, and I'd rate it as moderately useful. Having managed developers, it also seems like something that (for some people) can feel like more of a productivity boost than it is due to time being spent differently (I'm not saying you're doing this).
I also wonder about this. I think in particularly bad cases it can be true, since if something doesn't work it becomes very tempting to just reprompt the AI with the error and see what comes back. Sometimes that works on a second attempt, and in other times I'll go back and forth for a dozen prompts or so. Whoops, there went an entire hour of my time! I'm trying to explicitly not fall into that habit more than I already have.
Overall I'd say it's a moderate productivity boost even factoring that in, and it's getting slowly better as both the models improve and my skill in using them improves.
At @self_made_human's request, I'm answering this. I strongly believe LLMs to be a powerful force-multiplier for SWEs and programmers. I'm relatively new in my latest position, and most of the devs there were pessimistic about AI until I started showing them what I was doing with it, and how to use it properly. Some notes:
LLMs will be best where you know the least. If you're working on a 100k codebase that you've been dealing with for 10+ years in a language you've known for 20+ years, then the alpha on LLMs might be genuinely small. But if you have to deal with a new framework or language that's at least somewhat popular, then LLMs will speed you up massively. At the very least it will be able to rapidly generate discrete chunks of code to build a toolbelt like a Super StackOverflow.
Using LLMs is a skill, and if you don't prompt them correctly the output can veer towards garbage. You'll want to learn things like setting up a system prompt and initial messages, chaining queries from higher-level design decisions down to smaller tasks, and especially managing context. One of the devs at my workplace tried to raw-dog the LLM by dumping in a massive codebase with no further instruction while asking for like 10 different things simultaneously, and claimed AI was worthless when the result didn't compile after one attempt. Stuff like that is just a skill issue.
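For what it's worth, that workflow (a system prompt, then chaining from design decisions down to smaller tasks while carrying context forward) can be sketched roughly like this. The message format mirrors common chat APIs; the task list, stubbed reply, and function name are purely illustrative:

```python
def build_messages(system_prompt, history, task):
    """Assemble a chat payload: system prompt first, prior turns, then the new task."""
    return ([{"role": "system", "content": system_prompt}]
            + history
            + [{"role": "user", "content": task}])

system_prompt = (
    "You are a senior engineer. Target: TypeScript, Node 20. "
    "Follow the project conventions given below. Output code only."
)

history = []  # grows as each sub-task completes, so later tasks see earlier answers

# Chain from high-level design down to implementation, instead of one giant ask.
tasks = [
    "Propose a module layout for a CSV import service.",
    "Write the parser module from the layout you proposed.",
    "Write unit tests for the parser module.",
]

for task in tasks:
    messages = build_messages(system_prompt, history, task)
    # reply = client.chat(messages)        # real model call would go here
    reply = f"(model reply to: {task})"    # stub so the sketch runs offline
    history += [{"role": "user", "content": task},
                {"role": "assistant", "content": reply}]
```

The point of the structure is context management: each sub-task is small, but the model still sees the design decisions that came before it.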
Use recent models, not stuff like 4o-mini. A lot of the devs at my current workplace tried experimenting with LLMs when they first blew up in early 2023, but those models were quite rudimentary compared to what we have today. Yet a lot of tools like Roo Cline or whatever default to old, crappy models to keep costs down, and that just results in bad code. You should be using one of 1) Claude Opus, 2) ChatGPT o3, or 3) Google Gemini 2.5 Pro.
Speaking from my own experience with literal top-of-class LLMs.
LLMs are good for getting overviews of public, popular, highly documented technical systems. They can meaningfully reduce ramp-up time there. But it’s not too significant for the overall job, for most jobs. I’d estimate ramp-up time to be a modest fixed cost that is already effectively ameliorated by existing resources like Stack Overflow. So maybe a 2x speed up on 2% of overall working time.
They are also good for writing repetitive boilerplate. Copy/paste features are cool and helpful. This takes maybe 1% of my overall working time. I just don’t wind up repeating myself that much.
They can be good for getting code coverage, but that does not equate to good testing. I can elaborate if needed, but figuring out which system properties are most likely to need explicit coverage is an art that requires a high-level perspective that an LLM will not have for the majority of serious projects. This is around 10% of my job.
For lesser-known or internal APIs (common at larger companies), the LLM will hallucinate at extraordinary rates. This is around 5% of my job.
For anything technical, like refactoring class hierarchies, the LLM will get way out of its depth and is likely to produce gibberish. This is around 4% of my job.
It simply will not understand the larger requirements of a project, and what would make one solution valid and another invalid. This is about 15% of my job as it relates to code, and maybe 8% as it relates to design specifications, and 20% as it relates to talking with other people about said requirements.
The rest of my job is code review and progress updates, which maybe could be automated but which feels a little cheap to do. So I stand to save about 2% of my working time with AI, which is pretty marginal. And on my team, you can’t tell any meaningful difference in output between the people who use AI and the ones who don’t, which ties into my general assertion that it’s just not that helpful.
Then again, I’m a backend engineer in a pretty gritty ecosystem, so maybe this isn’t true for other software roles.
If there's one place I doubt AI will improve much in the near future, it's stakeholder management. That's why I think even if AI becomes an astronomically better coder than the average SWE, that SWE's could just rebrand as AI whisperers and translate the nuances of a manager's human-speak into AI prompts. Maybe it'll get there eventually, but we're still a good ways off from non-technical people being able to use AI to get any software they want without massive issues arising. The higher up in the org you are, the bigger a % of your job that stakeholder management becomes. I think we agree on this point overall.
On less well-known systems and APIs, I think the hallucination issue is more of a skill issue (within reason, I'm not making an accusation here). I'm translating a bunch of SQR (a niche language you've probably never heard of) queries to an antiquated version of TSQL right now, and the AI indeed hallucinates every now and then, but in predictable ways that can be solved with the right system prompts. E.g. sometimes it will put semicolons at the end of every line, thinking it's in a more modern version of SQL, and I have to tell it not to do that, which is somewhat annoying, but simply writing a system prompt with that information cuts down the issue by 99%. It's similar for unknown APIs: if the AI is struggling, giving it a bit of context usually resolves those problems from what I've seen. Perhaps if you're working in a large org with mountains of bespoke stuff, then giving an AI all that context would just overwhelm it, but aside from that issue I've still found AI to be very helpful even on more niche topics.
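A hedged sketch of that kind of guardrail: pin the dialect quirk in the system prompt, then lint the model's output for the one known failure mode. The prompt text and helper function are illustrative, not the actual setup described above:

```python
import re

# Illustrative system prompt encoding the dialect constraint up front.
SYSTEM_PROMPT = (
    "You are translating SQR queries to an antiquated version of TSQL. "
    "Do NOT terminate statements with semicolons; this dialect predates "
    "that convention."
)

def flags_semicolons(sql: str) -> bool:
    """Return True if any line of the generated SQL still ends with a semicolon."""
    return any(re.search(r";\s*$", line) for line in sql.splitlines())
```

If the lint still fires on a reply, that reply (plus the error) goes back into the conversation; in practice the system prompt alone catches almost all of it.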
On the time saved, you might want to be on the lookout for the dark leisure theory for some folks, while for others the time savings of using AI might be eaten up somewhat by learning to use the AI in the first place. I agree that the productivity boost hasn't been astronomical like some people claim, but I think it will increase over time as models improve, people become more skilled at AI, and people using AI to slack off get found out.
Haha, I really, really don’t think there’s any dark leisure here. None of the best performers rest much at all, and I talk with them pretty openly about their habits. Plus, our direct manager is bullish on AI and got the most enthusiastic guy on the team to do an AI demo a few weeks back. Using AI as a force multiplier would get you a raise, not more work.
The more I have to babysit the LLM, the less time-efficient it is for me. I don’t know what everyone’s experience is, but typing out code (even SQL) is just not that time consuming. I know, logically, what I want to happen, and so I write the statements that correspond to that behavior. Reading code for validity, rewriting it to make it more elegant and obviously correct, that takes more of my time, and LLM output is (like a junior dev) unreliable enough that I have to read deeply for (unlike a junior dev) no chance of it improving future output. Plus, the code I write tends to be different enough that the prospect of reprompting the LLM repeatedly is pretty unpleasant.
That said, I absolutely use it for Bash, which is arcane and unfamiliar to me. I still have to go through the slow process of validating its suggestions and rewriting pieces to make them more proper, but the way you perform simple logical actions in Bash is so far outside my wheelhouse that getting pointed in the right direction is valuable. So if you’re in a position where you’re doing more regular and rote work with particularly obnoxious but well-documented languages, it makes sense we’d have different opinions and experiences.
I join the chorus of people who don't quite understand what your problem is with LLMs. What kind of code do you write? The tools are at the point where I can give them a picture of a screen I want along with some API endpoints and they reliably spit out immediately functioning React code. I can then ask for the middleware code for those endpoints and finally ask for a sproc for the database component. It's possible you hover high above us React monkeys and barely even consider that programming, but surely you understand that's the level at least half of all programmers operate on? I had Copilot do all these things today; I know that it can do these things. So where is the disconnect? It's truly possible there is some higher plane of coding us uninspired 9-5 paycheck Andys can only obliquely perceive, and this is your standard for being able to program, but it'd be nice if you could just say that to resolve the confusion.
Oh for heaven's sake, dude. When did I ever say I consider myself better than anyone else, that I would deserve such a litany of sarcasm directed at me? I don't think that and certainly haven't said it. I am just an ordinary programmer - I doubt very much that I'm better at programming than anyone here except the non-programmers, and I'm sure I'm worse than more than a few. Not only did I say "hey I'm not trying to litigate this right now" and that got ignored, now I get people dogpiling me saying I'm a troll or think I'm better than everyone else or whatever.
But fine, since you and @SnapDragon are insistent on pressing me on the topic (and since I apparently didn't say to him what my experience was, my bad on that, but I know I have posted this in a previous thread before), I will reiterate the things that I personally have seen LLMs fall flat on their face with. This is of course in addition to the various embarrassments that are public, like Microsoft's ill-conceived attempt to let Copilot loose on PRs.
These were all within the last year, though I couldn't tell you exactly when or what model or anything. And I've been honest that sometimes it has done good work for me, namely in generating short snippets of code in a language (or using an API) that I know well enough to recognize as correct when I see it, but not well enough to produce without laborious reading of docs. I've never claimed that LLMs work 0% of the time (if people have taken that away, I've done a poor job communicating), but the failure rate is much too high for them to be considered viable tools in my book. Most frustratingly, the things that I actually need help on, the ones where I don't know really anything about the topic and a workable AI assistant would actually save me a ton of time, are precisely the cases where it fails hard (as in my examples where stuff doesn't even work at all).
So those are again my experiences with LLMs that have caused me to conclude that they are hype without substance. Disagree if you like, I don't mind if you find it useful and like I have tried to say I'm not actually trying to convince people of my views on this topic any more. Like I tried to say earlier, the only reason I posted in this thread was to push back on the idea that one simply must be ignorant if they don't think LLMs are good at coding (and other things). That idea is neither true, necessary, or kind (as the rules allude to) and I felt that it deserved some sort of rebuttal. Though heaven knows I wish I had just left it alone and had peace and quiet rather than multiple people jumping down my throat.
FWIW, I appreciate this reply, and I'm sorry for persistently dogpiling you. We disagree (and I wrongly thought you weren't arguing in good faith), but I definitely could have done a better job of keeping it friendly. Thank you for your perspective.
That does sound like a real Catch-22. My queries are typically in C++/Rust/Python, which the models know backwards, forwards, and sideways. I can believe that there's still a real limit to how much an LLM can "learn" a new language/schema/API just by dumping docs into the prompt. (And I don't know anything about OpenAI's custom models, but I suspect they're just manipulating the prompt, not using RL.) And when an LLM doesn't know how to do something, there's a risk it will fake it (hallucinate). We're agreed there.
Maybe using the best models would help. Or maybe, given the speed things are improving, just try again next year. :)
Thanks. And for my part I'm sorry that I blew you off unjustly; I really thought I had explained myself in detail but I was wrong.
And yeah, the tech might improve. I imagine you can see why I'm skeptical of the strong predictions that it'll do so (given that I don't agree it's as good as people say it is today), but I try to keep an open mind. It is possible, so we'll see.
Apologies if I came on too hard; it's just that you've been expressing this opinion for a while and had gone down several reply chains without bringing the thing to the object level. It's emblematic of the whole question: AI is "spikey", as in it's very good at some things and inexplicably bad at some other things. I don't think a lot of people would take so much offense if you just said it still seems bad at some tasks; that's broadly the consensus. But when you just say it "sucks at code", it's perplexing to the people watching it effortlessly do wide swaths of what used to be core programming work.
I could definitely see it struggling with highly context-dependent config files, but something seems strange about it not producing at least a valid file. Did you try different prompts and give it different contexts? I find giving it an example of valid output helps, but I'm not familiar with fluentd, and it's possible giving it enough context is unreasonable.
I have not tried that, but it also seems like kind of a failure of the tool if I have to, you know? The whole point of a tool that can understand natural language is that you can just talk to it normally. If one has to figure out how to word the incantations just right to get a useful result... I'm not sure how that's better than just figuring out the code myself at that point.
Prompting is a skill like any other. Sending it off without context is like telling an underling to fix your config file without explaining it or letting them look at the system they're writing it for. It's often a mistake to assume the prompt needs to be something a human would understand. You can and should just dump in unformatted logs, barely related examples of working config files, anything you can imagine an underling with infinite time in a locked room might find useful in solving your problem.
As someone who's been working in the field of machine learning since 2012 and generally agrees with @SubstantialFrivolity's assessment, I think that what we are looking at here is a bifurcation in opinion between people looking for "bouba" solutions and those looking for "kiki" solutions.
If you're a high-school student or literature major with zero background in computer science looking to build a website or develop baby's first mobile app, LLM-generated code is a complete game changer. Literally the best thing since sliced bread. (The OP's and @self_made_human's comments reflect this.)
If you're a decently competent programmer at a big tech firm, LLMs are at best a mild productivity booster. (See @kky's comments below)
If you are a decently competent programmer working in an industry where things like accuracy, precision, and security are core concerns, LLMs start to look anti-productive, as in the time you spent messing around with prompts, checking the LLM's work, and correcting its errors, you could've easily done the work yourself.
Finally, if you're one of those dark wizards working in FORTRAN or some proprietary machine language because this is Sparta (read: IBM/Nvidia/TSMC) and the compute must flow, you're skeptical of the claim that an LLM can write code that would compile at all.

I mean, my full opinion and experience with LLMs is much harsher than my comment suggested, but I don't want to start fights with enjoyers on the net. (At least, not this time.) Chances are their circumstances are different. But I would be seriously offended if someone sent me AI-generated code in my main area of expertise, because it would be subtly or blatantly wrong, and it would be a serious waste of my time trying to figure out all the errors of logic which only become apparent if you understand the implicit contracts involved in the domain. Goodness knows it's bad enough when merely inexperienced programmers ask for review without first asking advice on how to approach the problem, or even without serious testing…
I know that pain.
You have to contend with the fact that like 95+% of employed programmers are at this level for this whole thing to click into place. It can write full-stack CRUD code easily and consistently. Five years ago you could have walked into any bank in any of the top 20 major cities in the United States with the coding ability of o3 and some basic soft skills and been earning six figures within 5 years. I know this to be the case; I've trained and hired these people.
I did allude that there might be a level of programming where one needs to see through the matrix, but in SF's post, and in most situations where I've heard the critique, that's not really the case. They're just using it for writing config files that are annoying because they pull together a bunch of confusing contexts and interface with proprietary systems that you basically need to learn from institutional knowledge. The thing LLMs are worst at. Infrastructure and configuration are the two things most programmers hate the most because they're not really the more fulfilling code parts. But AI is good at the fulfilling code parts for the same reason people like doing them.
In time LLMs will be baked into the infrastructure parts too, because it really is just a matter of context and standardization. It's not a capabilities problem, just a situation where context is split between different systems.
If anything this is reversed: it can write FORTRAN fine. It probably can't handle the proprietary, hacked-together nonsense installations put together in the 80s by people working in a time when patterns came on printed paper and who might collaborate on standards once a year at a conference if they were all-stars, but that's not the bot's fault. This is the kind of thinking that is impressed by calculators because it doesn't properly understand what's hard about some things.
I feel like I'm taking crazy pills here. No one's examples about how it can't write code are about it writing code. It's all config files and vague evals. No one is talking about its ability to write code. It's all devops stuff.
Ironically I considered saying almost this exact thing in my above comment, but scratched it out as too antagonistic.
The high-school students and literature majors are impressed by LLMs ability to write code because they do not know enough about coding to know what parts are easy and what parts are hard.
Writing something that looks like netcode and maybe even compiles/runs is easy. (All you need is a socket, a for loop, a few if statements, a return case, and you're done.) Writing netcode that is stable, functional, and secure enough to pass muster in the banking industry is hard. This is what I was gesturing towards with the "bouba" vs "kiki" distinction. Banks are notoriously "prickly" about their code because banking (unlike most of what Facebook, Amazon, and Google do) is one of those industries where the accuracy and security of information are core concerns.
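To make the "looks like netcode" bar concrete, here is roughly that minimal shape in Python: a socket, a handler, a couple of ifs. It runs, but it has none of the error handling, timeouts, framing, or security review that production banking code needs; the function name and structure are purely illustrative:

```python
import socket
import threading

def echo_once(host="127.0.0.1", port=0):
    """Accept one connection, echo one message back, and exit.
    Returns the OS-assigned port so a client can connect."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    actual_port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        data = conn.recv(1024)
        if data:                 # the "few if statements"
            conn.sendall(data)   # the "return case"
        conn.close()
        srv.close()

    # Serve in the background so the sketch doesn't block the caller.
    threading.Thread(target=serve, daemon=True).start()
    return actual_port
```

Getting from this to something stable under partial reads, malicious input, and concurrent load is the actual hard part.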
Finally, which LLM are you using to write FORTRAN? Because after some brief experimentation, neither Gemini nor Claude is anywhere close.
What do you imagine the ratio is, just at banks, between people writing performant netcode and people writing CRUD apps? If you want to be an elitist about it then be my guest, but it's a completely insane standard. Honestly, the people rolling out the internal LLM tooling almost certainly outnumber the people doing the work you're describing.
I do not think that expecting basic competency is an "insane standard" or even that elitist. Stop making excuses for sub-par work and answer the question.
Which LLM are you using to write FORTRAN?
What sort of problem did you ask it to solve?
I think this fairly nicely summarizes how I feel. Not that I do work in one of those industries to be fair, but it's part of my personal work ethic I guess you might say. I want computers (and programs) to be correct first and foremost. Speed or ease of development don't mean much to me if the result can't be relied upon. Not only that, I want my tools to be correct first and foremost. I wouldn't accept a hammer where the head randomly fell off the handle 10% of the time or even 1% of the time. So I similarly have very little patience for an LLM which is inherently going to make mistakes in non-deterministic ways.
Preach, brother. Software is made to be clear and predictable. Learning to make it that way, one line at a time, is our craft. You can always tell the brilliant programmer apart because 99% of their code is simple as can be and 1% is commented like a formal proof. Worse than the LLMs themselves, reliance on LLMs risks undermining this skill. Who can say if something is correct if the justification is just that it came from the machine? There needs to be an external standard by which code is validated, and it must be internalized by humans so they can judge.
I’ll give a description of what I do.
I manage servers. Or rather, I write code to do this, in accordance with some rather specific customer contracts. The times we take action, and the actions we take, are highly constrained. Even the basic concept of updates is not especially simple. I’m sure you remember Crowdstrike taking most of the Windows world down in a day. What I do is not so apocalyptic on the world scale, but our customers would find a similar event devastating. So most of my time is spent figuring out every possible path through server states and ensuring that they all lead back to places where faults can be cheaply recovered. These properties lie above the code. You can’t understand them, for the most part, just by reading the code. But they are incredibly important and must be thoroughly safeguarded, and even highly intelligent humans who just happen to be ignorant of the problem space or are a little careless have made really, really bad mistakes here. The code compiled, the tests passed, and it even seemed to work for a little in our integration environments - but it was horrifically flawed and came within an ace of causing material customer damage. So I don’t much trust an LLM which has a much more constrained sort of awareness, and in practice, they don’t much deliver.
I realize that’s a little vague, but I hope it explains a little about a more backend perspective on these problems. If I were more clever I’d give a clear example which was not real, but barring that, I hope a characterization helps.
I'm sorry but being a better writer than literal redditors on /r/WritingPrompts is not a high bar to pass.
And yet it is a bar that most humans cannot pass. We know this because redditors are humans (and, in fact, since they are selected for being literate and interested in creative writing, they must be above average human writing ability). That's the point of the grandparent; ChatGPT blew right past the Turing Test, and people didn't notice because they redefined it from "can pass for the average human at a given task" to "can pass for the top human at a given task".
There are plenty of tasks (e.g. speaking multiple languages) where ChatGPT exceeds the top human, too. Given how much cherrypicking the "AI is overhyped" people do, it really seems like we've actually redefined AGI to "can exceed the top human at EVERY task", which is kind of ridiculous. There's a reasonable argument that even lowly ChatGPT 3.0 was our first encounter with "general" AI, after all. You can have "general" intelligence and still, you know, fail at things. See: humans.
If you say "it's okay for the AI to do as poorly as a poorly performing human", you'll end up concluding that even an Eliza program can do better than a drunk human who can barely type out words on a keyboard. And if you say "the AI only needs to exceed a top human at a few tasks", then a C64, which can run a simple calculator or chess program, would count as a general AI.
People are not cherrypicking. What they are doing is like the Turing test itself, but testing for intelligence instead of for "is like a human". People asking questions in a Turing test can't tell you in advance which questions would prove the target is a computer, but they have implicit knowledge that lets them dynamically change their questions to whatever is appropriate. Likewise, we don't know in advance exactly what things ChatGPT would have to do to prove it's a general intelligence, but we can use our implicit knowledge to dynamically impose new requirements based on how it succeeds at the previous requirements.
Saying "well, it can write, but can it code" is ultimately no different from saying "well, it can tell me its favorite food, but can it tell me something about recipes, and its favorite book, and what it did on Halloween". We don't complain that when someone does a Turing test and suddenly asks the computer what it did on Halloween, that he's cherrypicking criteria because he didn't write down that question ahead of time.
Well, I don't think your analogy of the Turing Test to a test for general intelligence is a good one. The reason the Turing Test is so popular is that it's a nice, objective, pass-or-fail test. Which makes it easy to apply - even if it's understood that it isn't perfectly correlated with AGI. (If you take HAL and force it to output a modem sound after every sentence it speaks, it fails the Turing Test every time, but that has nothing to do with its intelligence.)
Unfortunately we just don't have any simple definition or test for "general intelligence". You can't just ask questions across a variety of fields and declare "not intelligent" as soon as it fails one (or else humans would fail as soon as you asked them to rotate an 8-dimensional object in their head). I do agree that a proper test requires that we dynamically change the questions (so you can't just fit the AI to the test). But I think that, unavoidably, the test is going to boil down to a wishy-washy preponderance-of-evidence kind of thing. Hence everyone has their own vague definition of what "AGI" means to them; honestly, I'm fine with saying we're not there yet, but I'm also fine arguing that ChatGPT already satisfies it.
There are plenty of dynamic, "general", never-before-seen questions you can ask where ChatGPT does just fine! I do it all the time. The cherrypicking I'm referring to is, for example, the "how many Rs in strawberry" question, which is easy for us and hard for LLMs because of how they see tokens (and, also, I think humans are better at subitizing than LLMs). The fact that LLMs often get this wrong is a mark against them, but it's not iron-clad "proof" that they're not generally intelligent. (The channel AI Explained has a "Simple Bench" that I also don't really consider a proper test of AGI, because it's full of questions that are easy if you have embodied experience as a human. LLMs obviously do not.)
In the movie Phenomenon, rapidly listing mammals from A-Z is considered a sign of extreme intelligence. I can't do it without serious thought. ChatGPT does it instantly. In Bizarro ChatGPT world, somebody could write a cherrypicked blog post about how I do not have general intelligence.
The Turing Test ain’t simple pass/fail. It doesn’t specify an amount of time for the interaction, for instance, or whether it iterates, or whether people know the characteristics of the AI. I’d say that current LLMs could fool Turing himself, on the first go, but given a few iterations and enough time he’d notice something was up. Look at how our mods play spot the LLM. This would be a blanket yes/no if the Turing Test were pass/fail, but in reality it’s an evolving thing.
Does your Chinese scroll also have an Emperor's signature and archival stamp? Can we see it or is that gauche?
Nah, my scrolls aren't that august. They're all late Qing/Republic-period (late 19th century, early 20th century) works by no-name artists painting the usual subjects of bamboo, shrimp, and mountainous landscapes. They don't really have any artistic value beyond the fact that they look pretty and aren't reproductions, selling for a few hundred dollars each, and the stamps on them are also of randoms. I expect that if there were an Imperial seal, the price would be in the tens of thousands of dollars per scroll at the very minimum, and I don't have that sort of money. And if what I had were a valuable work, I most certainly would not be putting my own seal on it, as that could easily damage its worth.
What a charming hobby.
Count is a charming guy. He's very well groomed... from what I've heard.
Flattered...
Darn. A different piece from the same collection as the example image sold for a cool 75 million USD, so I felt compelled to ask. Love the scholarly, bureaucratic nature of the tradition. How very Chinese. I'd be impressed if you unrolled it in front of me. Very cool.
I agree that the bubble will almost certainly burst at some point, and lots of people will get burned. I strongly disagree that it's all just hype though, or that LLMs are a "scam". They're already highly useful as a Super Google, and that'll never go away now. They're generating billions in revenue already -- it's not nearly enough to sustain their current burn rates, but there's lots of genuine value there. I'm a professional software engineer, and AI is extremely helpful for my job; anyone who says it isn't is probably just using it wrong (skill issue).
If you're careful, they are. But that care requires twice as much checking: instead of just having to verify that the web page you find knows what it's talking about, you have to verify that the AI correctly summarized what it's talking about, and God help you if you just believe the AI about something for which it doesn't cite sources. But even Google's cheap "throw it in every search" AI seems to be much less likely to bring up unrelated web pages than the previous Google option of "let the search engine interpret your query terms loosely", and it's much less likely to miss important web pages than the previous Google option of "wrap most of your query in quotes so the stupid engine doesn't substitute unrelated-in-your-context words for your actual query terms", so it's still very useful.
The one thing I've repeatedly found to be most useful about current LLMs is that they're great at doing "dual" or "inverse" queries. If I knew I wanted the details of Godunov's Theorem, even a dumb search engine would have been fine to bring up the details of Godunov's Theorem - but when all I could recall was that I wanted the details of "some theorem that proves it's impossible to get higher order accuracy and stability from a numerical method for boundary-value problems without sacrificing something", but I didn't even recall the precise details, I wrote a wishy-washy paragraph for Claude and in the reply its first sentence gave me exactly the name of the theorem I wanted to search for. I can't imagine how much longer it would have taken to find what I was looking for with Google.
I'm currently not allowed to use a top-of-the-line model for my job (even though I mostly work on things that aren't ITAR or classified, we've got a blanket limitation to an in-house model for now), but I'm definitely worried that I'll have a skill issue when the rules get improved. What do you do to get AI help with a large code base rather than a toy problem? Point it to a github repo? Copy-and-paste a hundred thousand lines of code to make sure it has enough context? Paste in just the headers and/or docs it needs to understand a particular problem?
Two things mainly:
Have a good prompt that captures the nuances of the crappy, antiquated setup my work is using for its legacy systems. I have to refine this when it runs into the same sorts of errors over and over (e.g., assuming we're using a more recent version of SQL when we're actually on one that was deprecated in 2005).
Play context manager, and break problems up into smaller chunks. The larger the problem you give the AI, the greater the chance that it breaks down at some point. Each LLM has a certain maximum output length, and if you get even close to it, the model can stop doing chain-of-thought to budget its output tokens, which makes its intelligence tank. The recent Apple paper on the Tower of Hanoi demonstrated that pretty clearly.
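For what it's worth, the "break it into chunks" workflow can be sketched in a few lines of Python. Everything here is a placeholder: `ask_llm` stands in for whatever client call you actually use, and the chunk size and prompt wording are made up for illustration.

```python
# Minimal sketch of "playing context manager": split a big job into chunks
# so each request stays well under the model's output budget.

def chunk_lines(lines, max_lines=200):
    """Yield successive slices of at most max_lines lines."""
    for i in range(0, len(lines), max_lines):
        yield lines[i:i + max_lines]

def review_in_chunks(source_text, ask_llm, max_lines=200):
    """Send a file to the model one chunk at a time, collecting replies."""
    lines = source_text.splitlines()
    replies = []
    for n, chunk in enumerate(chunk_lines(lines, max_lines), start=1):
        prompt = (f"Chunk {n} of a larger file. Review only this part:\n"
                  + "\n".join(chunk))
        replies.append(ask_llm(prompt))
    return replies
```

The point isn't the code, which is trivial; it's that each call gets a problem small enough that the model never has to ration its own reasoning.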
I'm also not allowed to use the best models for my job, so take my advice (and, well, anyone else's) with a grain of salt. Any advice you get might be outdated in 6 months anyway; the field is evolving rapidly.
I think getting AI help with a large code base is still an open problem. Context windows keep growing, but (IMO) the model isn't going to get a deep understanding of a large project just from pasting it into the prompt. Keep to smaller components; give it the relevant source files, and also lots of English context (like the headers/docs you mentioned). You can ask it design questions (like "what data structure should I use here?"), or for code reviews, or have it implement new features. (I'm not sure about large refactors - that seems risky to me, because the model's temperature could make it randomly change code that it shouldn't. Stick to output at a scale that you can personally review.)
The most important thing to remember is that an LLM's superpower is comprehension: describe what you want in the same way you would to a fellow employee, and it will always understand. It's not some weird new IDE with cryptic key commands you have to memorize. It's a tool you can (and should) talk to normally.
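As a rough illustration of the "relevant source files plus English context" approach: you can assemble the prompt yourself rather than pasting a whole repo. The file names, headers, and question below are all made up; the actual model call is left out.

```python
import os

def build_prompt(question, file_paths):
    """Concatenate a few relevant files, each under a header, ahead of the question."""
    parts = []
    for path in file_paths:
        with open(path, encoding="utf-8") as f:
            parts.append(f"### {os.path.basename(path)}\n{f.read()}")
    parts.append(f"### Question\n{question}")
    return "\n\n".join(parts)
```

Hand-picking which files go in is exactly the judgment call the tooling can't make for you yet.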
Use an AI-integrated IDE like Cursor or Windsurf (now bought by OpenAI sigh).
Your query looks like ‘I have an error that looks like [paste text] and I think it’s being caused by @Object1 not being destroyed properly during garbage collection’.
The IDE gives the codebase structure to the model, which queries the object you mentioned, its headers, etc. then does a search of the repo for where it’s used, then…
But I don’t think I’ve ever worked on a codebase that you would consider large and of course this only works for a monorepo.
Why are you sad about jobs created by a bubble being lost by the bubble popping? Isn't that just a return to the status quo?
You seem to be ignoring that while junior devs have to get better separately and each new generation of devs has to gain experience anew (until we have direct knowledge brain-grafts), LLMs just stay better once they got better.
And not a word about open weight models?
I can run Qwen2.5VL on my desktop and it can read tables and documents visually. That alone is a multi-billion dollar value proposition for office work. And it's not monetized, it's free. But you can build things with it and monetize that.
I agree with you that when it all shakes out proprietary ultra-massive b2b saas AI will not be the thing that really shakes up society or industry. But AI is here to stay - I can already run shit that would have been nigh miraculous 2 years ago on my damn phone, locally.
Sigh. Count has already been rapped on the knuckles for copying and pasting AI content. It violates the low-effort guidelines. Don't do this.
Just post it, typos and all. Or if you're using a browser like failfox, spell check should be bundled with that, so you don't need the OS to do it.
I would bet fairly good odds that this is not true.
Which part of this article claims $1000 per query for deep research?
You significantly misunderstand what they spent $1,000 on. It's per task, not per query. I remember the results this article is summing up. If you look at the original source, you'd see that it's $1,000 per task... using a super chain-of-thought reasoning workflow, spinning up a ton of separate agents, running and restarting up to budget, and taking the best result. Very, very far from a thousand dollars per query. Each task was probably thousands of queries.
When they weren't trying to brute-force the benchmark by running the same model thousands of times, it was around $17-20 per task. Again, the ARC-AGI tasks are not single queries. https://arcprize.org/blog/oai-o3-pub-breakthrough
This is talking about the cost to run on a test where they gave it a ludicrous token budget to perform sota evals, not the thing you run by default as a consumer.
I don't think your premises are true and meaningful. Some may be true. Some would be meaningful if they were true but aren't.
A thousand USD? Surely not. DeepSeek R1 has a kind of deep research and it's very cheap. You say in the comments that you realize that was speculation, but I don't think you have any sense of what a believable cost is for this kind of service. It just doesn't cost that much per call!
Also, OpenAI does have financials that tell a totally different story to what you're saying: https://sacra.com/c/openai/
Inference is cheap and profitable.
Who cares if training costs go to 1 billion? Or even 10 billion? That's a tiny amount of money in the grand scheme of things. Facebook spent 20 billion on the metaverse, earned negligible returns, and shrugged it off. The reason there are few profits in AI is the massive investment and competition: everyone recognizes the enormous value and potential of this technology.
I think trouble with style transfer is very much a chatbot-related issue. I think current AI can do it, but that would require sacrificing performance and possibly alignment.
I actually meant to test style transfer on some base models but never got around to it.
The stench of AI is strong in the essay you posted in the comment below. Just looking at it sets off many alarm bells.
Also, I literally pasted the first paragraph into GPTZero and it returned a score of 100% AI.
Your gemini essay, posted below, is not worth the pixels it's printed on and not worth reading past the point of smelling the ai stench. Clearly the human one is better.
I assure you the first paragraph was written by me. Do you really think the AI would automatically reference the "nowhere in two weeks" rdrama.net meme?
I'm referring to the gemini output that you posted in a comment below. The one that starts with "Of all the names that echo from the chambers of power ..." and which you falsely claimed passes most AI detectors.
I edited my above comment for clarity
Hm, the first paragraph of that is coming up 0% AI written for me in ZeroGPT.
/images/1749486945465418.webp
gpt zero (the naming is so annoying)
zerogpt sucks.
Interesting; yes, GPTZero says the first paragraph is AI. However, for the first half of the text (it won't let me upload more than 5000 characters at once) it says it's a coin flip between human and AI, and there are paragraphs it is highly sure are human-written.
/images/1749494140017286.webp
I'll admit that some of the later paragraphs are less obviously AI generated. The first few paragraphs are extremely stinky then it just devolves into academic-sounding nonsense.
Anyways the point still stands that the answer to this prompt does not convincingly pass as human written.
As someone who is not nearly as impressed with AI as you, thank you for the Turing test link. I'd personally been convinced that LLMs were very far away from passing it, but I realize I misunderstood the nature of the test. It depends way too heavily on the motivation level of the participants. That level of "undergrad small-talk chat" requires only slightly more than Markov-chain-level aptitude. As a satisfying final showdown of human vs. AI intelligence, Deep Blue or AlphaGo this was not.
I still hold that we're very far away from AI being able to pass a motivated Turing test. For example, if you offered me and another participant a million dollars to win one, I'm confident the AI would lose every time. But then, I would not be pulling any punches in terms of trying to hit guardrails, adversarial inputs, long-context weaknesses etc. I'm not sure how much that matters, since I'm not sure whether Turing originally wanted the test to be that hard. I can easily imagine a future where AI has Culture-level intelligence yet could still not pass that test, simply because it's too smart to fully pass for a human.
As for the rest of your post, I'm still not convinced. The problem is that the model is "demonstrating intelligence" in areas where you're not qualified to evaluate it, and thus very subject to bullshitting, which models are very competent at. I suspect the Turing test wins might even slowly reverse over time as people become more exposed to LLMs. In the same way that 90s CGI now sticks out like a sore thumb, I'll bet that current day LLM output is going to be glaring in the future. Which makes it quite risky to publish LLM text as your own now, even if you think it totally passes to your eyes. I personally make sure to avoid it, even when I use LLMs privately.
Interesting idea! Although there is definitely CG from the '90s that still looks downright good. Jurassic Park comes to mind as a masterpiece, which largely worked because the artists understood what worked well with the technology of the time: night shots (few light sources, little global illumination) of shiny-but-not-reflective surfaces (wet dinosaurs), used sparingly and mated with lots of practical effects.
CG only became a negative buzzword when it got over hyped and stretched to applications that it just wasn't very good for at the time. In some ways it's improved since (we can render photoreal humans!), but it still does get stretched in shots that are IMO just bad movie making ideas ("photorealistic, yet physics-defying").
I could see AI slop going the same way: certain "tasteful" uses still look good, but the current flood of AI art (somehow all the girls have the same face, and I've definitely spotted plenty of online ads that felt cheap from obvious AI use) will be "tacky" and age poorly.
Well remember even passing the basic casual Turing test used to be extremely difficult. It took at least 65 years between the creation of the test and systems beginning to pass it consistently. And I still remember science articles and science fiction stories from the 90s and 2000s talking about it like it was the holy grail. It’s only in the past few years that it’s started to seem like an inadequate measurement of an AI’s capabilities.
Interestingly, your motivated Turing test starts to sound a lot like the Voight-Kampff test from Blade Runner.
Is there any reason the test was treated as a holy grail other than the "Turing" name brand? I can't see any theoretical justification for it.
Very intuitive, sensible, and wasn’t surpassed for 80 years.
Because it was an impossibly high bar. Nothing was able to do that, for years. The idea that you’d be able to talk to a computer program and not recognize it seemed like science fiction.
This is the way I always understood it. Lacking the ability to detect any internal experience other than our own, the way we distinguish between two different things is by applying input to them and seeing if there are differences in output - e.g., we shine light on something and detect what qualia the light that reflects off it and into our eyeballs generates in our minds. Detecting intelligence isn't as simple as detecting color or shape, and wouldn't involve inputting light rays but rather words, to see what words come back in response. If there's no way to distinguish between two entities this way, then it makes no sense to say that one has human-level intelligence while the other lacks it. For that to be the case, there must be some way to induce different outputs from the two things with the same input - in something relating to intelligence, anyway. Word input-output probably doesn't cover every possible detection mechanism, but it does seem to me to cover a lot.
The theoretical justification for it is something analogous to the idea of a Universal Turing Machine, though obviously not rigorous.
If we come up with any other test to determine "human-level intelligence", a test that can't be beaten by a "spiky" non-general intelligence that outperforms in unexpected areas (I'm old enough to remember when chess performance was a generally-accepted sign of intelligence!), then someone judging a Turing test can just use that other test. If it turns out that for some reason an AI really can't understand how to respond to a weird hypothetical about upside-down tortoises, then the judge can ask them about upside-down tortoises. If computers had sucked at chess, a judge could have asked the AI to play chess. Computers only start to beat a Turing test reliably when there's nothing a judge can come up with that they can't beat.
Far as I know they can't renew a prescription for you, which has been my personal benchmark for 'agentic' AI for a year or so.
Or maybe it's not that they can't, but that they aren't permitted to, for liability or similar reasons.
I just want to be able to ask the thing, "I'm running low on [pharmaceutical product], please order up a refill." And sometimes that process requires navigating multiple phone trees for both the pharmacy and the prescriber, providing various sorts of documentation (sometimes via fax!), and making a payment and arranging for pickup or delivery at a convenient time.
All stuff I find very boring and tedious, so if I could offload it to an AI I would do so in a heartbeat.
Not to pick on you since this seems like a common category of problem... but the task is entirely artificial. There's no technical reason renewing a prescription requires you to do anything more than log into your pharmacy somehow and click a "renew" button. Any further complexity is because the pharmacy decided to waste your time.
I feel like I often hear people suggest using AI to navigate some unnecessary complexity like that, when what you actually need is systems that don't suck. Or at least being allowed to have third-party systems exist that work around them sucking. AI doesn't really have anything to do with it. If someone comes up with an AI bot that works around the poor design, people will come up with even worse designs to counter that.
It is trained on the corpus of human text, most of which pertains to artificial problems rather than real problems. So AI should be better at the administrative-state stuff than the real stuff.
That isn't quite what I meant. Sure I believe an LLM-based agent may be able to accomplish that task. But if the intention were to make the task automatable, then you wouldn't need one. Since the point is to make the task not automatable, this is just a step in an arms race of making the task more frustrating.
Yes.
YES.
YOU'D THINK THAT.
But you click the 'renew' button and the pharmacy reports that you have to get a new scrip from your physician. Well, okay. You call the physician's office and they say you need to submit proof of your identity sufficient to make sure they're writing it for the right person. E-mail won't do; they need it faxed, or you can stop by in person. Then once that's done, they will forward the scrip to the pharmacy. But it turns out the only way to check whether the pharmacy got the scrip is to actually call, which means waiting on hold, and once you've done all the intermediate steps, THEN the 'renew' button works. And then add in a layer of fun if you want to get insurance involved.
Maybe other pharmacies do it differently, but I assume a nontrivial part of the process is regulatory compliance and antifraud measures.
It's one of those tasks where it could be a 2-5 minute diversion, or 90 minutes of running around, navigating phone trees and getting various ducks in a row to get the particular outcome you want/need, because the parties involved are not motivated to help much, are concerned about fraud/deception, and are not in good communication with each other.
So as the one person properly motivated to complete the task, who isn't worried about fraud, and can act as the intermediary between the parties, I'm now shouldering the organization burden. It is what it is, but I'd sure love to throw AI at the task.
I'm one of those "won't go to the doctor unless a limb falls off" guys, so I was in my 30s before I realized that doctor prescriptions are sent to a specific pharmacy, and you cannot buy the prescribed medicine from any other pharmacy. If you want to buy your prescribed medication from a different pharmacy, you have to talk to the doctor (or, more likely, the nurse that is being supervised by a doctor) and ask them to send the prescription to a new pharmacy. What the actual fuck?!
It's shit like this that convinces me to stick with OTC pills until the day I die.
In ye old days we gave you a physical prescription that you could take with you, show up the pharmacy and shout "gib dis" and if they said "no have" you could take the same piece of paper to another place.
Now we mostly use electronic medical records and we ask you what your pharmacy is and send the information directly to that pharmacy.
Why do we do it that way? Likely things like "regulatory burden" and "let's not accidentally DDoS the pharmacies with all of these requests."
Now, I personally prefer paper script pads for some types of things and ask for them myself, but if your doctor does not allow that, it is likely because whoever owns them (a large hospital system or PE firm) does not permit it. We don't complain too much because handwriting a prescription is a pain in the ass and our handwriting is more ass.
I don't care if the prescription is printed rather than handwritten. Or if it's in a national database instead of being a physical document. I just don't want it to be sent to a single pharmacy; that's fucking ridiculous.
Even better, no medications should require a prescription. Let it all be OTC. Then the prescription can simply be information about what your doctor recommends.
Again, the ability to walk around with a general prescription that can be used at any pharmacy is the default state - in essence it has been removed by regulatory burden and corporate oversight.
No reason it can't come back other than those things (and plenty of doctors are still able to prescribe via paper).
Take it up with the government.
Expanded OTC formularies are something that can be done in different cultural milieus but is simply incompatible with America. Too many people would kill or harm themselves or others. The costs and externalities are too high.
In Mexico I filled prescriptions by taking the slip of paper to a farmacia of my choosing. Just walk in and get it. Such a better system.
They can't solve any problems that anyone actually asked them to solve. It seems like every piece of software is touting new AI-integrated features, except they can't do anything other than generate text I didn't ask for. I more or less write for a living, but I don't need AI to write for me, and it isn't even capable of doing the kind of writing I need it to do. The best-case scenario is that it can generate pro-forma motions and the like, but it would take me just as long to input the information into the AI as it would to just type it into the document myself.
On the other hand, there's a lot of stuff relating to the software itself that so-called "artificial intelligence" should be able to make easier, but simply can't. For instance, last week I was drafting a document using another document as a template. It had a numbered list. I deleted some of the irrelevant items and replaced them with other items I had pasted from a different document. Except this completely fucked up the numbering system so that it ended at 6 and then started over again at 1. It also caused this weird indentation mismatch. I just wanted to get everything uniform from the top down, and I didn't really care what the formatting looked like, so long as it was consistent. I tried to fix it on my own but couldn't figure out what was wrong, even after searching the internet. Then it dawned on me that this would be the perfect problem for the AI. I very specifically described the problem to the AI assistant and described how I wanted it to look. I was informed that it could not fix the problem. I was informed that it couldn't even tell me how to fix the problem. Same with the AI assistant in Adobe Acrobat.
This seems like it would be the biggest gain for AI technology, especially for complicated pieces of software that frustrate users to no end. If I could just tell the computer what to do in plain English instead of needing specialized knowledge, it would solve a lot of problems. But apparently this isn't possible. They're more interested in slapping a chatbot onto it and claiming it's now intelligent. Bullshit. UI issues are some of the biggest complaints users have, and coming up with an interface so straightforward would give any company a competitive advantage. Remember how clunky Word was before the ribbon? But instead we're supposed to think that because it can generate sloppy text it's somehow going to put us out of work. The truth is, it can't even format it correctly.
I want to believe, but I asked Gemini 2.5 Pro to spec a computer for me, and it starts hallucinating motherboards that don't exist, insisting on using them even after being told they don't exist. Maybe it's OK for brainstorming, but everything it says needs to be double-checked. We ain't there yet.
No that was the comm-link.
More options
Context Copy link
More options
Context Copy link
A world without color under the rainbow
Well, it’s pride month (Grammarly suggests capitalizing “Pride” here...)! Again. I rolled out of bed last week to a saccharine salvo of big-brand bullshit. That, and smug condescension from the women I know on Instagram “wishing homophobes an uncomfortable month”.

When the gay marriage movement really picked up steam in the early ’00s, I was always a bit perturbed by the use of a rainbow. I’ve always been a fetishist for color - my first attempts at building user interfaces somehow became unusable clown vomit because of it - and so a single group monopolizing literally every hue of light at the same time seemed like a bit much. But I was a good lefty-libertarian and didn’t complain.
I tried to drag this board into a conversation about cars. I won’t make that mistake again, but a point of discussion centered on all of them being way less colorful than they used to be. If you take a look at a graph, you can see that things really started getting “Super Fucking Lame” right around 2007. Don’t worry, the problem’s gotten worse: 78% of all cars sold today are a neutral color.
It wasn’t just vehicles, though. At almost the exact same time, Millennials began making everything grey.
Meanwhile, woke discourse has been (was?) on a tear in mainstream media institutions:
If you ask a politically correct LLM about why everything is lame, it will suggest that we’re this way because of “economic uncertainty” or social media. Others will say something vague like resale value.
If I know anything about anything, it’s that correlation is causation. I don’t think it’s a coincidence that a wave of rainbows and the unrelenting drumbeat of intersectionality has, in many ways, relied on the dilution of color everywhere else. How else can you shove it in the world’s face? A coffee shop already full of colorful whimsy would be burying v99.0 of the LGBTQIA+ flag. It’s only through the clash of it with the drab whites and browns of an espresso machine that a message can be sent. At least the latest revision inoculates itself against good taste pretty well. The clashing racial bars and two spirit circle make it hideous on its own.
The death of peak woke is… probably overestimated. But even my blackpill soul feels some sort of vibe shift. Dare I hope for color to make a comeback?
That's not what I see. I see it starting in 1997 and peaking around 2012.
I think it depends on where you'd put silver on the cool-vs-lame scale, but I'd agree it'd be more correct to say that things hit their apex.
It all depends on the car and the silver, right? 2000s M3s and Boxsters were just meant to be silver. Chrysler Sebrings and minivans of the same era looked awful in silver.
The hopeful thing is how many big corps are no longer funding parades around the place (even over here). There's been a drop in sponsorship and some griping about it (and of course blaming Trump for anti-DEI). Maybe that indicates that some of the rainbow bullshit will not be as prevalent in future, because it was never about principle, rather what made good sense for PR. Now that it's not as profitable, they're not spending money on it.
My work has an internal chat program. It is like slack. More than a little cheerleading on it for pride recently. Also bitter complaint that major companies are not sufficiently showing their support for pride this year.
I'm used to Rainbow Capitalism being a subject of mockery. My coworkers really want it. Or at least a portion do, and everyone else remains silent on the issue.
I believe the complaint about rainbow capitalism is that the companies talked the talk without walking the walk — it was a fifty stalins criticism. Obviously it is even more upsetting to those critics if even the talk is, uh, walked back.
That is the surprising, and to me hopeful, thing. Woke Rainbow Capitalism may have been mocked but now it's not showing up and the lack of funding is being felt.
Color isn't making a comeback any time soon, for the same reason that wallpaper and wall-to-wall carpeting aren't making comebacks any time soon. Millennials are old enough to remember the eerie feeling of walking into a house that hadn't been updated since 1977, with orange carpeting in one room and yellow wallpaper in another and harvest gold kitchen appliances on top of a fake brick linoleum floor. We're old enough to remember bathrooms with pink tile and no one thinking this was something that needed to be changed. It didn't help that these houses invariably smelled like cat piss and cigarette smoke. When people started tearing this shit out in the 90s, everything seemed so much cleaner, even if the result would still be dated by today's standards. It also didn't help that all of this stuff was deteriorating by the time we saw it, so it didn't have the same look that a recreation or a picture in a magazine has today. This isn't to say that nobody uses color, but it's really easy to fuck up if you don't know what you're doing. When I was in college, a lot of people convinced their landlords to let them paint, and a lot of times they'd pick something really bold that wasn't pleasant to be in for long, and it looked like the color was chosen by a college student.
To add to this: Mrs. FiveHour is a woman of taste. She will happily spend quite a bit of time/money on getting the thing she wants. She would divorce me if I suggested tearing out our home's pink bathroom.
Ten years ago, that wasn't the case. They used to absolutely look like dated trash; now they're retro cool.
By contrast, gray was in fashion, and still is, but my dad is redoing the floor at the family beach house. And it's not my place, so I'm not going to be too picky, but the one rule I had was Absolutely No on gray vinyl plank. Because it's everywhere, and right now we all say it's neutral and timeless, but in another five to ten years, we'll see a house with gray floors and it will look like a cheap flip from 2018.
Things go in cycles.
Grey vinyl plank should have been banned before it ever hit the market. I think it had to do with that farmhouse kitsch thing that was popular a few years back. The thing that pissed me off about the whole trend more than anything else was that, having grown up in a semi-rural area, it looked nothing like any farmhouse I'd ever been in. I'm guessing that the grey is supposed to look like weathered wood? Except wood only looks like that if it's been outside for years, and wood from inside a house doesn't ever look like that. Luckily my house was built in the 1940s and has real hardwood, but if I didn't have it and couldn't afford to put it in, I'd at least pick something that imitates real wood. If it isn't already obvious from the material that it isn't real wood, I'm not going to let the color just give it away.
Engineered wood floors look pretty good, and feel pretty good, though I'm skeptical of their durability, as they're essentially a very thin, very nicely finished veneer over plywood. But they're cheap, and you get the ease of installation of snap-in flooring.
"Modern farmhouse" is still very much in vogue.
I've always wanted to be able to refer to a family beach house.
My wife said when we got married and she started a new job, it felt so fucking good when partners would ask her what she was doing this weekend and she said "oh we're heading to our family's place at the shore" and she could watch people's assumptions about her adjust in real time.
That said, while I enjoy going there, it's almost certainly been a bad choice over time. Unless you're really committed to it, it ends up sitting empty too much to be worth it financially versus just getting a rental. The only real reason to do it is either as a flex, or because in the off-season you want somewhere to hide out and play the shitbird.
My friend from way back had a family beach house--it was right on the beach up from Eugene, Oregon, somewhere around there, though I don't remember the town--you could see the ocean right out the window, and to get to the sand and the water was a minute's walk down a short sloping hill. The beach was one of those long wide ones where you could splash your feet around, almost like a tidal flat--you'd go for meters before the water ever came as far as even your ankles. Truly beautiful. I stayed there once, two nights; we drank Full Sail bottled beers on the deck, ranged barefoot up and down the stretch of sand, flew kites, ate Mexican omelettes with homemade salsa and drank hot coffee there in the kitchen nook where you could watch the morning waves coming in. What a place. They had money from a very well-known business owned by, I think, his grandfather, but something happened and there was a breakdown in relationships, and then everyone began squabbling over that house, and I think it was either sold or just torn down, or both. A terrible waste. My friend was (is) a very laid-back guy and just shrugged it off. Would have hurt me bad.
I'm lucky in that my sister and I get along very well, and our intention is that she inherits the beach house for a variety of reasons, with the understanding that I and my immediate descendants will be allowed to use it reasonably often. And frankly the understanding that I'm probably still going to do or contract for a lot of the physical repairs on the house, because that's just kind of a me thing in our relationship. Split ownership nearly always leads to ultimate sale, as relationships become attenuated.
Getting older, I'm realizing two things: my parents made a number of status-oriented purchases that I wouldn't have made and that I consider mistakes; and, despite considering them clear financial mistakes, I have mixed feelings about not holding onto those purchases, or continuing them for my children, because slipping in status is a tougher thing than not advancing it.
My parents belonged to a country club for 30 years and hated it the whole time, while spending tons of money there for mediocre food. They raised me to hate it, weirdly, in that they constantly told me, when dropping me off at country club kids etiquette events, that they didn't want me to turn into a snob who would only be friends with the country club kids. The push and pull made the whole membership a waste of energy: as a shy and awkward teenager I overcorrected and disdained the preppy country club kids, listened to too much old sXe hardcore punk, and made myself a loner for no reason. I thought that being friends with those kids would force me to adopt everything about them. Giant waste of money.
But there's something about considering losing those status symbols that is worse than not ever having them. I'd never consider buying a beach house, I don't like the beach enough, but I wouldn't want to lose the family place for my kids.
I had a similar relationship with my background and just how WASPy it all is, including a family camp on a lake in the northeast US. Ultimately, “For the kids” is what made me realize it’s less about the snobbery and more the sense of timelessness and continuity of having such a touchstone.
I am reassured to know that I can go to a place that I’ve connected to in different ways and at different times, and it makes me grateful for my ancestors having preserved this for my benefit. Planting a tree for your descendants to sit beneath and all that. Makes me well up.
The place is now in my mom’s cousin’s name, and I have become very concerned about preserving it after they pass.
I’m among those fortunate to have such a family property on a beautiful lake in the northeast US. It’s been in my mom’s family for 4-5 generations; we have pictures of my great-grandfather sitting by the dock. It’s a modest place, and while it’s worth a lot of money now, it hasn’t really ever been much more than a camp. Our neighbor’s beach house is much fancier, but they just bought it 26 years ago.
As I get older and my own family grows, the thing that I realize is that this is truly priceless. It’s one of the few things in my life that someone infinitely richer than me can’t just buy, and it’s something I am actively working to preserve for future generations.
*insert Norman Rockwell meme* Gray vinyl plank is awesome, actually.
Won't stain or scratch, and it's easy to clean up any spills or messes, no matter what the kiddos throw at it. And if I miss some mess, it doesn't immediately show up with flashing lights like it might on something brighter.
Unfortunately, Mannington moved from China to Vietnam or vice versa a few years back and the product went to shit.
I don't dispute the practicality of it, just the way it's going to age.
They were better. Abandoning them's just one more sign for how far this godless, corrupted, festering society has rotted.
I'll argue (and have long argued) that it's something upstream; the direction of causality points from a common source. There's a pretty wide variety of spheres where millennial-focused media is absolutely bright-colored, especially where designs and decisions come from the grassroots.
There are a lot of things to complain about in Helluva Boss (cw: lots of profanity, some sexual 'humour') or Brand New Animal, but they're not grey or even My Little Pony-pastel. Look at MMORPGs: going from the most conventional subscription model like FFXIV to the most gacha-like Genshin, they've only gotten brighter over the last decade, even as they've increasingly targeted the same demographics. The furry fandom overwhelmingly favors bright and high-contrast, to the point where there's a term for hitting it too hard and the bar is high (cw: extremely bad color selection). Even the artists who do focus on the greys have a lot more soul than corporate metis. Go into Blue Tribe-heavy spaces, and the corporate grey laptops are spangled with stickers for every cause celebre available.
But if you're putting tens or hundreds of thousands of dollars on the line, you paint your house grey. Nonconfrontational uber alles, in the most literal sense.
There's an optimistic story where the growth of spaces to be maximally yourself has led to a cleaner division between the personal and the public (well, optimistic until you poke at it), and a pessimistic story where we just banned everything and ignored the consequences.
But I think there's a more cynical one: everything adds up to normal, and this is the local maximum.
Chalky, grey-ish pastels are neutral 'chameleon' colors that reflect and attenuate to whatever you put up against them. This allows for ease of decoration when styling a room, without having to worry about extreme color-scheme clashes - most people choose these colors to protect the resale value of their houses.
Fun aside, when my parents had to re-paint their entire house for reasons, I was the one pushing my mother for more vibrant, intense colors as opposed to said chalky, grey pastels. She had developed a bad habit of constantly repainting rooms in a succession of ever-worsening colors.
Except for one room, which she never touched - the kitchen, which was done up in a warm, rich, pumpkin-like orange.
After a long spate of harassment (and more color samples than I will confess to - Lowes should have been giving me a commission, geeze), I finally convinced her to go with the richer, warmer colors, and she no longer repaints rooms.
Look, I made this point in last week’s thread- people yearn for totalisation. LGBT co-opted the rainbow from God’s promise not to destroy the world again, they co-opted sacred heart month, etc, etc, because that’s just what they do.
Everything getting greyer has less to do with gay activists and more to do with society, in general, not loving bright colors everywhere. I blame increasing autism, but it could also just be fashion trends - the generation for whom being able to make everything bright colours was a novelty is still dying off.
Apparently the rainbow was co-opted mostly from the hippies:
"A close friend of Baker's, independent filmmaker Arthur J. Bressan Jr., pressed him to create a new symbol at "the dawn of a new gay consciousness and freedom".[11] According to a profile published in the Bay Area Reporter in 1985, Baker "chose the rainbow motif because of its associations with the hippie movement of the Sixties but he notes that the use of the design dates all the way back to ancient Egypt".[12] People have speculated that Baker was inspired by the Judy Garland song "Over the Rainbow" (Garland being among the first gay icons),[13][14] but when asked, Baker said that it was "more about the Rolling Stones and their song 'She's a Rainbow'".[15] Baker was likely influenced by the "Brotherhood Flag" (with five horizontal stripes to represent different races: red, white, brown, yellow, and black) popular among the world peace movement and hippie movement of the 1960s.[16][17][18][19]"
This seems credible, considering this was somewhat after the Rainbow Family of Living Light had started organizing the still-happening Rainbow Gatherings. I didn't manage to find an explanation of where the hippies took the rainbow from, but the rainbow Peace Flag and the general colorfulness induced by LSD probably play a large influence.
I’m willing to believe it. My point was to draw attention to the totalizing nature of LGBT ideology(down to cradle members and converts) through comparison to Christianity.
My priest always says the Evil One lives at the extremes. I agree that totalization is not the way.
This feels slightly paranoid. There are only twelve months in the year, and whichever one you chose, you could be accused of co-opting something. The Sacred Heart month is also a strange choice to try to co-opt as an act of totalisation, because it has almost no cultural currency in the Anglosphere except maybe within American Catholic communities; in fact, its relevance is fast becoming exclusively as a counter-signal.
See my reply to stefferi
That’s ridiculous. “They” don’t co-opt random Christian aesthetics more than anyone else. Oh no, global warming activists have stolen the flood myth!
I agree that bright colors enjoyed a window of novelty. Along with synthetic fabrics and the rise of computer graphics, they’re responsible for some extremely dated trends. I would add that the fashion trends oscillate way faster, though. They’re at least as fast as the generational pressure of teenage rebellion.
My point was that any identity which can totalise will absorb literally all common symbols, of which the rainbow is one (literally every culture has given it a special meaning - the pagans thought it was a bridge for the gods to access the world, Christians think it’s a sign of God’s mercy, LGBT thinks it’s about diversity of deviant sexualities, South Africans think it’s about multiple races working together in harmony, etc., etc.), and this is unconnected to general cultural design trends.
Some argue that the Bifrost is referring to the northern lights as opposed to a rainbow
Maybe, but the Greeks also thought this- although in their case it was only used by the messenger.
When dealing with lawful entities such as YHWH, it is always good to read the fine print.
From Genesis 9, NIV:
One does not need to be a rabbi or a lawyer to note that He emphatically does not categorically promise not to destroy all life, just that He will not do so using floods.
And only promised not to destroy all life.
This isn't the autistic pattern. My understanding is that we mostly tend toward loving highly-saturated, solid colours (the most notorious example being anime).
If I can't afford to repaint something soon if I don't like it, I'm going to take the safer option where I have a higher chance of accepting it, or accepting it over a longer timeframe.
If I can afford to do that more often, I can afford to take a chance at something a bit more... out there. If I don't like it, I trust I can fix it later.
But then why aren’t the more upscale places and homes more colorful? If anything, they’re much more neutral-toned than middle- and lower-class places.
My theory is that somehow color got associated with low class or cheap. In order to not look cheap, you do neutrals.
This holds some water. It may tie back to cleanliness as a symbol of status. You can let a stain or a mark slide a lot more easily when you have brightly colored walls. Once everything's white it has to remain immaculate, and if your nails are done well it's clear you've paid someone else to keep things up.
Painting brick and grey LVP should be illegal.
Millennials weren’t dominating the new car market in 2007. They were in their mid-twenties at the latest. While I didn’t find data for ‘07, over the last decade, the under-35 age group never exceeded 15%. The Flattening started when Boomers and Gen-Xers were buying the majority of new cars.
Same goes for houses. The median house-seller was born in 1960. By 2017, that had crept forward to…1962. It wasn’t the millennials who were choosing beige or grey or whatever.
You know what was wildly popular in the early 2000s? Apple products. Ones that looked like this instead of this. The 90s was blocky and garish, but we were living in the new millennium. We could put chrome and white plastic on things. Monitors and peripherals got thin and sleek. This might be the only time that software looked more skeuomorphic than the hardware on which it ran.
We’re climbing the fashion barber pole faster than ever. Modernism to postmodernism to high modernism to a colorful, psychedelic mess in only half a century. Add another fifty years of nuclear ennui, a pinch of Moore’s law, and stir. The memes of 2014 feel ancient in a way that 60s counterculture cannot, because the latter never really died so much as it was commercialized and co-opted. Well, we got used to that, and now it’s taken for granted that corporations will sell cheap merch representing your preferred minority.
So don’t blame the gays for sending your 70s-ass appliances out of fashion. Give them ten years, or maybe six months, and the barber pole will come back around.
I understand your point on cars, even if I'd argue this generation influenced colors more than buying power might suggest.
But one doesn't have to own a house to influence or consume interior design patterns. The boomer women I know, of course, follow interior design trends, but the moniker "millennial grey" emerged because it was appearing in apartments and social media from said people.
I also agree the cycles appear to be accelerating.
In the UK, the conventional wisdom was that "millennial beige" (which I think is the same colour - I have also seen the style called "greige") was a product of the high-end rental market, which exploded after the 2008 crisis and the near-disappearance of 95% LTV mortgages.
If you are a landlord, it is more important to be good enough for whoever shows up on day one, so you can get a tenant in quickly. And that means maximally inoffensive.
Weirdly (or, actually...predictably) the last three cars we've owned were the peak popular colors at the time: blue, black, and now dark gray, but we've chosen blue or gray/silver cars forever. Having grown up in a snowy place, I instinctively dislike white cars, because they scream "hard to see against the landscape" and "please deposit mud here."
Fashion is trying to tune people into "dopamine dressing" currently, with fun, bold colors, but people are reacting to actual or perceived economic distress and still choosing Quiet Luxury neutrals; if you can't be rich, at least you can look rich.
Interiors are still in the Magnolia Home modern farmhouse gray and white vice grip, and flipping hasn't helped.
Ha!
My city has hit 105F to 110F for 9 out of the last 10 years, yet I still see black cars here, which seem like an insane purchase choice to me.
I really, really miss that sparkly midnight green color for cars, personally.
I dislike white cars because they make me think of rolling laundry appliances.
I bought a house a few years ago. The previous owners gave it a fresh interior paint job. Very wise for selling a house. BUT, they chose medium-grey paint. It was dark and oppressive. The first thing my wife and I had to do was repaint it light cream. I'm not sure we had to use white primer first to block out that overpowering grey, but we did, just to be sure you couldn't see it leaking through like some evil stain.
I am not sure if this proves what you think it does. If you look at the graph, the difference is basically in people selecting white as their color. I am not sure how it is in the US, but the last time I was shopping for a new car, the white color was free while everything else was €500-1,000 extra. I don't care enough about color to pay that, although I would prefer a more vibrant color - if for nothing else, then to be more visible on the road for more safety.
I had the opposite experience on multiple cars. White was a premium, while other colors were free.
Like in that previous thread, I know many people don't care about what they drive - how it looks, how it feels, what it does, etc. But surely you can see why exterior color would generally represent people's tastes?
So recently I was shopping for a new car for my wife, and a surprising number of new-to-late-model BMW 3ers were sold with red leather interiors. That was a truly outré option in 2002; now it's pretty common, maybe 10% of interiors. And there were a few cars we saw that were almost specced how I would have wanted...but that red interior. My father-in-law would never stop telling me it looked too Puerto Rican.
And in general, that's a pretty accurate rap: in Puerto Rican neighborhoods, houses are brightly colored, stores are brightly colored, more cars are brightly colored, clothes are brightly colored. Hell, I wear a navy blue Yankees cap, most Puerto Ricans around here wear Yankees caps in bright red or orange.
So how much of this is just ethnic distinctions, or ethnic/class-based identity formation? I don't paint my house walls bright orange, or buy a car with red leather seats, or paint my business exterior green and blue, because people would make fun of me for looking Puerto Rican.
We're All Sitcom Characters Now
If you've ever watched a successful long running sitcom, you've seen it happen. The characters start out mostly normal with a quirk or two. Maybe a little neurotic, or slow, or promiscuous. Four seasons in and the characters have all become deranged parodies of themselves. All their most entertaining qualities have been heightened, everything relatable or normal has been squeezed out. The character that was a little slow is now a straight up drooling retard. The promiscuous character obsessively fucks everything that moves. The neurotic character is only a step removed from Howard Hughes in his final days. You watch the last episode and the first episode of a sitcom, and you'll barely recognize the characters.
It's obvious why it happens, though. The writers and actors give the audiences what they want. Sitcoms are (or were?) a cutthroat business. There was little room for artistic integrity, vision, or any other high-minded concepts. Give the audiences what they want, or they'll change the channel and the show will be cancelled. Just shut up and do it!
I regret to inform you that we are all on a sitcom now. Everyone is enmeshed with an attention economy. Be it farming engagement on twitter, or upvotes on a reddit clone. And unlike actors who only have to inhabit their roles for hours a day, for a shooting schedule that might be weeks or months out of a year, those enmeshed in the attention economy must be in character 24/7. On social media, on streaming, on podcast, on youtube, all at once, all the time.
Some have wholeheartedly embraced this. Twitter is full of people being characters, allowing the algorithm or engagement to tweak the dials on their personality. Like a second subconscious that lives in the cloud. Catgirl Kulak comes to mind. He's out there using an AI catgirl as an avatar, staying more and more in character as some sort of neo-pagan feral/trad Nordic catgirl with hot takes. It's a dangerous game he's playing, existing more and more in a fictional role. But there are others. The preposterous performative pro-Elon or pro-Trump nonsense I saw and tried to avoid on Twitter this last week was really something. Twitter super users who've built their brand on being staunch partisans, like Catturd, are out there acting like absolute caricatures of themselves. They're just sitcom characters anymore, and rapidly approaching the braindeath of the latter seasons. Others, I don't think, fully understood what was happening to them. I wonder how much upvote-driven personality disorders had to do with certain flameouts here.
Because eventually every sitcom hits the wall. The characters have been intellectually and emotionally abused and lobotomized to the point where there is no humanity left in them to ritualistically beat out for the amusement of the audience. It gets its final season, where the writers attempt to rehabilitate them just enough to send them off into the sunset.
There are no writers to rehabilitate you when the algorithm is done with you, and you've lived inside a cartoonish and horrifying version of yourself for attention for years on end.
You can just... not engage with most of that? There are places like Substack and this site that don't sort by popularity. You can also curate your feed to make the algorithmic sites useful. I use Twitter/X to keep up with bloggers I know, and Reddit is useful for AI updates and video game discussions. Youtube can be almost anything you want it to be as long as you subscribe to the things you like and don't subscribe to things you don't like. Just get off /r/all and Tiktok.
True, but in many cases you will have to actively fight the algorithm's attempt to get you to partake in whatever drivel is popular with everyone else, and watch out for its attempts to sneak in ads or other content that someone is paying to put in front of your eyes.
I may elaborate on this in another post, but even assuming zero participation in social media, the algorithm is always listening. Often directly through apps on your cell phone, and indirectly from every link you click, video you watch, search you make, how long your eyes linger on something while you scroll. The degree to which a crude homunculus of yourself is being constructed in the cloud, whispering to you through your screen on the margins of every page you visit, is horrifying. It was not a rhetorical flourish to describe it as a second subconscious. I absolutely believe that.
Yeah. The fact is that any device with an internet connection is likely trying to nudge or otherwise cater to you in a way that will get you to alter your behavior, spend money, or even just cough up more information that they think they can use to sell you stuff.
And every time you give in it gets just a little better at predicting/manipulating you.
I like my Alexa devices, but the occasional attempt to say "hey, we noticed you liked [X], just say the word and I'll charge you for [Z]!" sometimes makes me want to send them off to the Bitcoin mines forever.
I've already precommitted to ignoring any attempts by a smart device to sell me on something I wasn't already intending to buy, unless it can send a big breasted brunette in a bikini to my front door to make the sale. Any marketing experts who are tapping into my motte account can take that as gospel truth and act on it as they see fit.
It is a never-ending project to try to avoid feeding information to the Algorithms. No social media. Using minimal apps. Using a browser with as much blocking as possible to do anything online (such as watching YT videos). Never being logged into a site like Google or Amazon when searching or shopping prior to buying. How many lunatics are really willing to go to those lengths to avoid the creation of that second subconscious?
I remember Scott posting some twitter screenshots which had the UI text turned to some East Asian language. When he was asked about it in the comments, he basically said that this was so that twitter would show him trending tweets in that language, with which he then would not engage because he could not read them.
Short of not having a twitter account, this is probably the best way to prevent the algorithm from tempting you with outrage bait.
Twitter and Reddit both allow you to sort chronologically. I've just naturally stopped using most of the ones that don't have an option like that, such as Facebook and TikTok (I never got into TikTok in the first place, I bounced off hard). I also don't think "the algorithm" is necessarily always bad -- Youtube's recommended videos have exposed me to some truly excellent creators like Montemayor over the years. Sometimes I'll watch lower quality stuff like whatifalthist and my recommended will be populated by garbage for a bit, but that resolves itself after a week or so, and I could probably speed it up by marking those videos as things I don't want.
Ublock Origin blocks basically all ads, and is quite effective. I haven't noticed shills posing as users to be that much of a problem outside of stuff like porn.
I sometimes wonder if I'm the last user who still goes to each user's personal page or a specific subreddit when I want to see something.
I wouldn't use most sites, period, if Ublock Origin stopped working.
Here's a hot tip too: I've been using ChatGPT to help create custom filters to block out other types of content I find annoying. You can use it, for instance, to block a particular YouTube channel from ever showing up in your feed or recommendations.
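To make the tip concrete, filters along these lines work for me (the channel name is a placeholder, and YouTube's element names change over time, so treat this as a sketch rather than gospel):

```
! Hide a channel's videos from YouTube's home feed and sidebar recommendations
! "Example Channel" is a placeholder for the channel's display name
www.youtube.com##ytd-rich-item-renderer:has-text(Example Channel)
www.youtube.com##ytd-compact-video-renderer:has-text(Example Channel)
```

Paste them into uBlock Origin's "My filters" tab; `:has-text()` is one of uBlock's procedural cosmetic operators, so it matches on the rendered text of the element rather than the URL.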
Huh, I didn't know Ublock Origin was that granular. I use it to remove upvote numbers on this forum already, but didn't know it could be used to block YT recs. Thanks for the tip.
If you think about Kulak, though, the character has gone through shifts. There was the ancap personality on this forum and - I think - the early days of the Twitter account, then the more fashy type of persona, now the same but with a strong pagan/anti-Christian component. Audiences shift, and so do the characters.
Yeah, I like this place because it's long form and there's enough content that I can read it rather quickly and finish. Infinite scrolling loops are horrible.
Can you link the video? Sounds like something I need to hear.
Tangent, but if I ever wanted to make a meme about "religion bad", I would pick Luke Smith; he's gone Orthodox and has aged 15 years in the space of 2. It's the kind of rapid aging that I've only ever seen in vegans before. That copypasta about falling for every meme needs to be updated.
Did he age, or is it just the beard?
You can go on his channel and judge yourself, this is his last pre-hiatus video: https://youtube.com/watch?v=mVGRAD10cYs.
The "I've been a Taliban prisoner for 17 months" beard goes a long way, but I think he'd still look aged without it.
Don't know the guy, but I just looked up the linked channel, and yeah, that kind of beard always makes you look really old. And it's also really popular among the Orthodox for some reason. I'm even sometimes surprised myself, when I shave, by how young I still look, and I never let my beard get that long.
There seems to be a relatively heavy emphasis on the "elder spiritual instructor" archetype in Orthodoxy, so I assume a beard is a status symbol.
Eastern Christianity has historically been hostile to shaving, eastern Catholics have the beards too(just typically a bit neater/better sculpted). AIUI modern Russian culture isn’t too hot on male grooming either.
Those who engage in this projection of their identity onto social media are narcissists, plain and simple. The feedback they receive has none of the usual corrective mechanisms, since those would reduce use of the media; the platform is instead obsessed with soothing them, allowing sycophancy to lull them further into a narcotic state while they keep projecting the aspirational identity they love. They aren't in love with themselves; they are in love with the identity they project onto the world. Which, in my view, is a worthwhile distinction.
It is also worth noting that projecting outward on the media isn't strictly necessary. The same state can be achieved by watching content that "educates" - self-help, health, and so on - where the simple act of consuming the media fuels a narcissistic self-image which under no circumstances can be manifested in the real world, since that would shatter the projection.
If there's one thing I have always and forever refused to do, it's falsify my personality or my preferences.
I won't give something a 'like' on any social media site unless it is content I would genuinely prefer to see more of. And whenever a dislike is even an option, I hand them out liberally to things I would really rather never see again.
I will adjust my rhetoric to account for an audience's tolerances for controversy (call it 'discretion' or 'professionalism'), but I won't shift the message itself.
I have literally never stated a position on an issue that I wasn't prepared to at least half-heartedly defend. I try to state my positions on any issue with as much clarity and precision as can be mustered with the English language.
And I do hope my reward is that whatever AI-Algorithm God arises will not have to guess at my preferences and utility function and will thus be able to give me an experience that is very closely optimized for the things that I truly enjoy, and not just the things I pretended to enjoy to fit in or to trick onlookers into thinking I am at all different than what I am. If the GodGPT looks across the entire history of my internet usage, and sees what type of youtube content I liked, the type of subreddits I subscribed to, the arguments I got into, the songs I played, the films I rated highly (and low), the type of people I interacted with, going back for decades now, I think it'll have an easy time figuring out what type of world to stick me in to win my hedonic approval.
Like, many actors seem to get very frustrated when they get pigeonholed into playing a single popular role for years and years on end, or typecast into the same types of roles over their whole careers. Imagine how bad it would be for a nigh-omnipotent computer deity to feed you horrible slop content for the rest of your life because you kept pretending to like [popular thing] for so long that your entire digital footprint suggested it was your favorite type of content ever. The role you played has become your life.
I'm reminded of Demolition Ranch.
For those not aware of the name, Demolition Ranch is a guntuber who's been doing gun-tubing for quite a while, and recently stopped to focus on his family.
People have commented on how his later content diverged quite a bit from his early stuff, with sensationalist activities and click-baity titles and zany video cards.
When questioned about that, he basically replied that that was what was getting the most views, hence the pivot. In other words, that's what was getting him the money.
The attention and engagement economy, it seems, says a lot about what the mass of humanity demands.
I think it's also just the hypercompetition that results because 'attention' is a fixed resource: every single advantage you can leverage to capture it becomes critical, so everyone evolves toward using every little hack/trick to keep their content in the public eye, lest they be left in the dust.
Whenever someone makes the jump from doing content creation as a hobby/side-gig to full-time career you see the shift. Shorter videos, higher pace of uploads, and general drop in quality while minmaxing every little detail that keeps people engaged and improves ad revenue. The content becomes, fundamentally, an afterthought compared to the drive to attract more viewership.
Then they branch out into the other standard revenue streams. Patreon, a podcast, and maybe a livestream channel... then the death knell (imo)... political commentary.
Mr. Beast is perhaps the apotheosis of this pressure to keep winning attention. He's an apex predator in the environment, but at the cost of selling his soul to the algorithm daemons.
Luke Smith left the internet for 2-3 years, until (sadly) returning a few weeks ago, with a bigger beard, to declaim against the evils of the internet.
Personally, I don't even remotely enjoy juggling around a zillion different names (IRL, ToaKraka, a single-purpose name that needs to be active only on certain rare occasions, a single-purpose name that used to be active but now is inactive due to my lack of energy/willpower, a few single-purpose names that are inactive but can be used if necessary…), and I wish that I could just operate under a single unified identity. But I feel like your analysis goes too far. None of my pseudonyms tries to project a unique personality. They speak with exactly the same personality—just on different topics. Your analysis applies, not to "everyone", but only to those public figures who actually try to project unique personalities on social media.
Also, you forgot to link to the definition of "becoming a deranged parody of oneself", flanderization.
Similarly, literally nobody in the 12+ years I've been using it has grokked that my username is "Face H".
So nobody has bothered to ask what became of Faces A-G.
Any big plans for Face I?
lol I've basically decided to start integrating almost all my online identities, so no need to spin up a pseud for any new sites or to tackle any new controversies behind a new mask. The seal has been broken between most of them.
Face I will only come about if I am cancelled so thoroughly that I'm forced to live in a cabin in the woods or a sailboat in the Indian ocean.
And in that case it'll probably not be a username but just the signature I put on the drone-delivered pipe bombs I'm sending out to get revenge on industrial civilization.
Then people will start wondering about Face II.
I read it with a hard C, as Fa-cheh.
Yeah I always categorized it as a name like Louis Freeh, a former FBI director
Mustafa Faceh, Turkish nationalist.
That's what my brain basically did with it.
I did wonder if the name derived from "face". I just didn't think of the letter as alphabetical numbering, since there's no indication it's not part of the word.
FaceH would be clearer in this regard than faceh, which I assumed was a whole word with some private meaning for you, pronounced "FASS-eh." This is not to doubt that you have circulated through faces A to G, just to say this may explain why no one has asked.
I mean, I was happy to allow those perceptions to continue. "Facea," "Faceb," "Facec," etc. all seem like they could be valid words, so they work quite well as pseudonyms. It just made it easier to keep straight in my head.
This was before password/account managers became standard, so it was helpful to have an organizational system for identities.
And since I stuck with FaceH for so long, I myself pronounce it approximately "Face-ah" in my head.
I thought the facehugger got you before you could finish typing your username.
It might actually be a good sign if they did. If even different fake online faces are shaped only by their own incentives and not by each other, then your real-life face is probably safe.
I had a personal experience almost turning into one of these sitcom characters. The pull is bizarrely tempting.
A little before the election I made a deadpan joke about a domestic annoyance that was completely misinterpreted and then quote tweeted by a major culture warrior. I got so much insane negative attention from that. Death threats. People following me around the Internet leaving shitty comments. Even phone calls of people threatening me.
It got so tense I was looking out my front window regularly and making sure I had my gun nearby wherever I went.
And then it subsided and I was relieved, but a major recurring thought since then has been "I should troll these fucking idiots again. Maybe I can make some money off of this"
I don't. But I can see how if I had a different temperament this would be totally irresistible.
Especially when there's a whole grifter-industrial complex geared towards helping randos turn their 15 seconds of fame into a flash-in-the-pan celebrity career (hawk tuah, anyone?)
Is there such a complex? I thought one of the notable things about the Hawk Tuah girl was just how unusually shrewd she was for being able to leverage that one viral street interview into an actual internet celebrity career.
Nah, there are PR firms, publicists, brand consultants and such who can leap into action to help a budding microceleb extend the limelight, with a preset path for leveraging their one claim to fame into public appearances, social media, and maybe some acting or singing gigs.
https://countrychord.com/hawk-tuah-girl-haliey-welch-now-shares-the-same-publicist-as-justin-moore-bruce-springsteen-and-2024-just-cant-get-any-weirder/
https://www.hollywoodreporter.com/business/business-news/hawk-tuah-girl-hailey-welch-1235937553/
Organic celebrity just isn't a thing these days (if it ever was). Used to be you could be a viral meme and ride that horse for a bit until the anxiety got to you.
And of course, she got in trouble for her memecoin because she trusted an entity that specializes in memecoin rugpulls (note: they sell a physical rug product on their website).
There's a whole ecosystem that will try to latch onto any potential niche in the attention economy to monetize the moment.
I've tried turning off visibility of things like individual post scores, but that just risks shifting your focus to notifications instead. And given the extent to which Twitter has driven people completely bonkers, that might be worse than the karma farming. There have always been worries about the masks we wear molding the face - and even some theories about using that to improve ourselves - but having the masks get molded in turn is Not Great, Bob. And then what exactly it seems to be driving even the boring people toward is kinda disturbing.
You can make some efforts to de-algorithmify yourself, but that's only going to get rid of the worst of it, and maybe not even that. And it's pretty incompatible with having a career or even a remunerative hobby online. Even some offline small-business work is becoming increasingly hard to kick off without it. I'd like to advocate some level of in vino veritas, but a) I don't drink, and b) it doesn't seem to work great for those who pick it up. Trying to actively avoid collecting enough of a following maybe helps? But I dunno if that's just because I wouldn't notice the microscale examples of the trend, either.
The one bright spot is that Flanderization does, at least in part, reflect a trait specific to media, not people qua people. Ned Flanders didn't turn from slightly-religious neighbor into a fundie just because time's arrow flew, but because the show's writers needed something new for each episode. "Simpsons Did It" is a problem for South Park, but it's also an issue for The Simpsons itself; even if most viewers won't recognize the pseudo-rerun, the show's staff and a lot of the commentariat will. If you have to get your tech column out by the weekend and three videos up M/W/F, you start diving into this sort of A/B-to-death testing because you don't have anything else, and the content doesn't have that much to start with.
For normal people, it doesn't work that way. Yes, history rhymes, and I'm probably one of the worst people on this site when it comes to bringing up ancient history from the long-ago days of two years ago. But anyone who hasn't let the mask embed into their skull can and probably will find something new, because the world is filled with new stuff. Get a hobby, touch grass, fight the dandelion infestation on your front lawn again (fuuuuuuuuuuuck), talk about cooking.
I'm waiting for it to stop raining long enough to put down something for the crabgrass currently threatening to destroy the overseeding I did last fall. I feel your pain.
I've never had a problem with broadleaf weeds. Are you against using herbicide? I find spraying the whole yard is a waste; I spot-spray broadleafs with 2,4-D. Hit the dandelions before they go to seed and I only have to walk the lawn two or three times.
What do you guys have against Dandelions? They are free flowers.
Do you genuinely not understand it? The beauty of a lawn lies in its neatness and uniformity. Random weeds in random places break that uniformity. The result doesn't look good even when the dandelions flower (which is a relatively small fraction of the year).
For me it's that they outcompete the grass and then die back in the winter, leading to mud.
I don't mind most 'weeds', but dandelions are particularly prone to killing other nearby plants, and then spreading aggressively to any areas that don't have complete grass cover or deep mulch. I used to have some clover I was trying to cultivate in the lawn proper and a handful of local flowering plants in a nearby garden area, but the dandelions have pretty eagerly smothered them out, sometimes doing the same to the grass. If you have near neighbors, it's also kinda rude to give them your problem, and even if you're aggressive about mowing and weeding it's hard to get every dandelion before it goes to seed.
Most of my problem is downstream of having irregular hours and not having consistent opportunities to weed. If you can consistently stop seedlings early, they're pretty easy to pull away from any garden crops you want to keep and at least plausible to prevent almost all from getting to seed. If you don't have those constraints (and don't want the clean-uniform-lawn), they're a lot more tolerable.
I've tried both 2,4-D and glyphosate, using those powered wand things and giving the base of each plant a two-second count. The dandelions definitely don't like it, but either I'm missing a lot of them or they're springing back after each application. To be fair, the previous homeowner had let it get bad to start with, and I'm not great with or consistent about lawnwork, so they've gotten a lot of opportunity to dig in.
((For how bad, I spent a day with some kneepads on and filled a 5-gallon bucket to the lid fourteen times, and didn't even get through all of a pretty small front lawn.))
It's making some progress, as has switching from a reel lawnmower to a powered one to better prevent them from getting to seed after spraying them, but it's been a lot worse than I'd expected even after bringing out the big guns.
I've always wondered who the psychos are who poison their own lawns to prevent beautiful flowers from growing.
A number of questions I have for people with this sentiment:
Do you have a yard?
Do you have a "nice" yard?
How much time do you spend on your nice yard?
Possible question: What climate zone are you in?
I do not define "nice" as a perfectly uniform lawn - there are some amazing "natural" or xeriscaped yards - but they take 4x the amount of time mine does. I am not an HOA guy, and I don't judge people who don't value a nice yard.
In my opinion the easiest most time efficient "nice" yard is grass. I don't want mud, I want to walk barefoot in my yard, I have a big dog. I don't care what exists in my yard as long as I get the utility I desire as efficiently as possible. Somehow I have ornithogalum umbellatum in my yard and I don't mind it at all. I have oaks from squirrels in my yard and I let those grow to see if I can get a nice one to replace the elms. I would love if I could do a clover yard but it will die in the winter and my yard will turn into a mud pit. Dandelions are not as bad but will still contribute to muddy spots that the dog will expand as he runs around during the winter.
You need to move to a place that has a four-season climate.
You mean move away? Clover dies back in winter.
As does grass. But at least both are covered by snow.
Nobody cared who I was until I put on the mask.
I don’t see this as all bad, to some degree everyone is acting. You don’t curse in front of grandma even if you do in other places. You don’t dress the same for work as you do to just hang out. As long as the character you play is something of a decent human being, it’s probably not harmful.
Yeah, that's what I was thinking. In a way I think this might be a good thing - I think being 'an individual' is hard for a lot of people. It's certainly a pain in the ass, in my opinion. Also, I have nothing to back this up, as usual, but I think it's healthier to be a unique example of an archetype than just an individual in this identity-focused world, because it gives people an anchor to cling to when they get cancelled.
I mean, I'm not convinced that most people have a singular self in the sense of having a core. Identity quite often forms from reactions to things or events, roles taken on, etc., so it seems one can use those deliberately by finding a not-terrible set of identities and adopting them.
One example of a fairly sane YouTuber is a woman in her thirties who has turned her life into what life would have been like in 1940. Of course she's very well aware of the LARP; she mostly does the aesthetics, trying out the fashion and lifestyle. She's pretty grounded. It's obviously apolitical, which I think helps, because it seems that once political stuff enters the equation, you're going to end up radicalized one way or another.
Can you link? I enjoy that sort of thing; there was another couple who did the Victorian version with an icebox instead of a fridge etc.
I tried it myself once but it turned out that lighting even a small room with candles is surprisingly hard. You need a fairly serious candelabra if you want to be able to read a book after dark.
What do you mean tried it yourself? Tried going without electricity or tried going full Colonial Williamsburg?
I was going to spend a week without electric lights (plus no PC etc.). Partly for the romance of it, partly because I thought I might sleep better and be perkier if I let myself go with the natural day/night cycle.
I bought a lantern and some slender beeswax candles, and didn’t realise that this was good enough for mood lighting but not nearly enough for anything practical.
https://youtube.com/watch?v=A5A9RSHS7es?si=_-o10eeIryBiuOsV
It’s called vintage dollhouse.
Which is why you use oil lamps. They're really easy to regulate light levels with as well.
Bright oil lamps (with a mantle) are very late Victorian -- they're actually slightly newer than the incandescent electric light.
Wick-based lamps are plenty bright and the only kind of oil lamp I've ever used, and those are earlyish Victorian.
This would have been my next step but a relative (who had used them in anger) told me that they stank and to use electric lights and be grateful for them.
I can't remember this ever being a problem, and I even tried lighting a lamp I had at home to see if I could detect any notable smell; there was only a very mild one.
Googling a little, it seems kerosene can have a pungent smell when burning, but the oil sold for indoor lamps is purposely made to smell less.
Perhaps your relative got the wrong kind of oil or used a bad lamp where the oil didn't burn clean?
Quite possibly - this was in a remote area in the 60s.
Thanks for doing the hands-on research, I’ll give it another go when I can.
In…in anger?
He hated them :P
But no, it’s a phrase about using weapons for their intended purpose (“he owned an antique blunderbuss but had never fired it in anger”).
The phrase is often extended to non-combat items. In this case, what I mean is that he used it for its intended purpose in its intended context (making light in a place without electricity) rather than as a LARP.
Apparently it’s a British English phrase: https://english.stackexchange.com/questions/30939/is-used-in-anger-a-britishism-for-something
I have personally set x.com to 0.0.0.0 in my computer's hosts file and I encourage anyone reading to do the same.
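For anyone unfamiliar with the trick: the hosts file maps hostnames to addresses before DNS is consulted, and `0.0.0.0` is unroutable, so the browser simply fails to connect. A minimal sketch (editing the file requires admin/root, and the `www.` line is an extra precaution, not something the original comment specified):

```
# /etc/hosts on Linux/macOS, C:\Windows\System32\drivers\etc\hosts on Windows
0.0.0.0    x.com
0.0.0.0    www.x.com
```

Some browsers cache DNS results aggressively, so a browser restart or a DNS cache flush may be needed before the block takes effect.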
Frog ~~put the cookies in a box~~ set x.com to 0.0.0.0 in his hosts file. "There," he said. "Now we will not ~~eat any more cookies~~ be tempted to interact with x.com and turn into sitcom characters."
"But we can ~~open the box~~ remove the entry from the hosts file," said Toad.
"That is true," said Frog.
I don't know about that. I removed Reddit from my bookmarks toolbar, and my use dropped off a cliff. Sometimes a 10-second barrier is enough to stop an impulsive decision.
Absolutely. Part of the problem with social media is that it's so convenient, removing the convenience removes that 'I'm bored, oh X is right there in my notifications telling me Elon Musk has explained "the implication" to Trump, what's that about?' action.
But I'm also the kind of person who gets very upset when someone tells me to just not use fast travel if I don't like it.
Eh, I've always been far better at not buying candy than not eating it. YMMV, of course. It costs thirty seconds and is worth a shot.
I used to do this with reddit.com but always ended up removing the entry after a few days.
Are you sure Kulak is a man? I've listened to the Kulak podcast and it's a woman's voice. Voice changer? Just a very convincing fake voice?
I was very surprised, but I guess some of the essays are about subjects which a woman might theoretically be more likely to write about than a man.
There is no fucking way Kulak is a real woman, any more than he is actually a Rhodesian catgirl; that kind of autistic obsession with combat screams male. At most, Kulak might be a transwoman, but I doubt it.
It is very easy to have someone else dub over your lines. You can hear Kulak's real voice on several episodes of The Bailey.
Having some knowledge of the inner workings of the podcast, I can say that there have been half-hearted discussions of resurrecting it with a new host; @ymeskhout is definitely done with it.
He started his Substack and wanted to focus on that; additionally, in general he had grown quite distant — ideologically and otherwise — from some of the other original core participants. I have some (although not a ton) of insight into more of the behind-the-scenes specifics, but out of respect for individuals’ privacy I will not share what I know.
I think he had a very legal way of thinking and the Motte's general move towards 'fuck the legal argy-bargy, this is bollocks and you know it' style argumentation didn't sit well with him. Couple that with repeatedly trying to litigate J6 & Trump's prosecution and he started getting dogpiled a lot. Not entirely without reason IMO but it can't have been much fun.
Yes, and you need to seriously reevaluate your thinking if you ever thought this is anything other than 100% certain.
I don't know what kind of ladies you go out with, but even trans FTMs rarely go that deep.
Just imagine me looking up from my post last week to mug at the camera like a character from The Office.
Well, shows what I know. Must be a voice changer or something. (That is not the kulak voice I was familiar with)
I knew him before he was on Twitter - he's been a man; the cat girl persona started as a joke, and then he figured out he'd get more engagement if he played it serious. I think it was around the time Lukas did his 'why you should steal a woman's photo to impersonate one online' thread. Likewise the paganism is also fake - he realized that getting into it with the Christian nationalists would expose him to an audience that agreed with them on most things but didn't want to have to follow fundy Christian sexual morality.
I think you are correct about the cat girl persona, but the paganism seems to me to be more ‘genuine’ (in the sense that he doesn’t like Christianity and isn’t just doing it to appeal to atheists, not in the sense that he is some kind of actual pagan). He never seemed to be Christian except in a vaguely aesthetic way when he lamented the decline of Anglo-Saxon (and thus to some extent Anglican) Canada.
Oh I don’t think he’s ever been a serious Christian, or particularly liked Christianity beyond instrumental purposes. But, as you note, he isn’t actually a pagan.
Could you please link to this thread? Sounds interesting.
How I escaped from the Superclusters has aged well.
I am a little bit torn on this one. For instance with Trace, I think he is more authentic after leaving his anonymous Motte persona, except that, at least from what I observed, he is now more into gay stuff and Mormonism - probably stemming from his personal experience and history. I think he was more interesting in his fictional anonymous persona, writing about whatever here on The Motte, when he had to mold himself into The Motte ethos. In a sense the rules here are also a sort of algorithm, forcing some people into a writing style that may not be natural to them. And they may be better for it.
Oh! What is the proof for that? Or is it just wishful thinking?
He was unfortunately doxxed. It actually happened a while ago but he was until recently not famous enough for anyone to pay attention. As per longstanding convention on this board I won’t link to it; suffice it to say he was an interesting poster then and still is now, although nobody is immune to the negative effects of Twitter power-use and hot take bait.
I have heard credible testimony to the contrary from a guy who definitely knows one and claims to know the other, but no hard proof of the negative. They are, as you'd expect, pretty similar people.
What are you referring to?
I do think a lot of the conversations that used to happen here have moved to TPOT and postrat twitter. It’s a great space and there are surprising number of present and former mottizens there. Mostly pseudonymous of course but occasionally it’s come up in PMs or via dogwhistles. The notable thing with Trace and Kulak is that they kept their pseudonyms constant between here and there. A lot of people either have separate aliases on twitter or just post under their real name. I’m still trying to figure out which of you here is JD Vance.
What is TPOT? As with acronyms in general, Google is pretty useless at figuring it out.
“That Part Of Twitter”. Not that it’ll make it any easier to search.
I don’t know what their deal is, but I think they’re related to the “vibecamp” thing. Which is also intentionally vague and a e s t h e t i c.
It stands for "This part of twitter" and there's a know-your-meme about it here.
I saw one of his tweets shared in a random (very left-leaning) discord server. I had no idea how to explain that he was legitimately unhinged.
I just assumed they were Trans.
Wait, what? I always wondered what happened to TP0. That would be interesting if it were true.
Yes, he's neither cat, nor girl, nor kulak. It's always amusing to me when people think he's a woman.
I looked at his Twitter this morning and regret it. Essentially - and this hasn't always been true, I think - I am on the opposite side of most of his rants. I won't link them because they don't need signal boosting. I also notice he is followed by Jordan Peterson, for some bizarre reason.
What I never liked about Kulak was that, whenever he wrote about something I knew a lot about, it was clear he was making a lot of stuff up and had only a very superficial level of knowledge.
I know quoting Vonnegut is a midwit reddit thing, but his line from Mother Night is applicable here:
"We are what we pretend to be, so we must be careful about what we pretend to be."
People in the attention-based economy are relearning it the hard way.
A lot of small and mid-size accounts on Twitter are perfectly normal and don't try to engagement-bait or build a brand. And something like 80% of Twitter eyeballs are allegedly people who barely, if ever, post.