
Culture War Roundup for the week of April 6, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Since nobody seems to be bringing it up, I will:

"Tuesday will be Power Plant Day, and Bridge Day, all wrapped up in one, in Iran. There will be nothing like it!!! Open the Fuckin’ Strait, you crazy bastards, or you’ll be living in Hell - JUST WATCH! Praise be to Allah. President DONALD J. TRUMP"

It really is Poetry.

Over time, I've lost faith in religion. I no longer believe in deontology. I doubt objectivism. I don't think consequentialism produces meaningful outcomes. I find modernism passé. The rationalists seem kinda irrational. I've done the calculations: utilitarianism doesn't math out.

I think I'll have to RETVRN to tradition: I think Plato might have had it. Maybe Aesthetics as Virtue was the true path all along.

It seems that the aesthetics someone chooses to project and their aesthetic sense (taste? values?) are better predictors of what they will do and who they really are than anything else. It seems that half of my political values boil down to aesthetics in any case: I find Trump-Hegseth-Vance-DeSantis et al. to be disgusting and contemptible; I have more respect for Rubio, but the last Republican I could really get down with was McCain, purely off of his aesthetics, even if choosing someone as gauche as Palin disqualified him from my vote (Romney was too Mormon for me to handle, I'm sad to say).

Likewise with the D's: their candidates have been universally superior to the Republicans' these past 8 years because they would rather be eaten by wild dogs than put "Tuesday will be Power Plant Day, and Bridge Day, all wrapped up in one, in Iran. There will be nothing like it!!! Open the Fuckin’ Strait, you crazy bastards, or you’ll be living in Hell - JUST WATCH! Praise be to Allah. President DONALD J. TRUMP" up in lights and then line up behind it, but I have the most good vibes off of Bernie, Buttigieg, and Mamdani; also probably for purely aesthetic reasons.

I think this might actually be rational: just by observing the aesthetics an individual chooses to portray, you can make a judgment as to how they intend to act in a way that is much harder to fake than "saying shit". Kamala was a social climber totally absent of virtue, and campaigned like it. Bernie is a crusty old marcher, and acts like it. Buttigieg is a bloodless technocrat, and looks like it. Trump is a nouveau riche, venal, tasteless guy, and governs like it.

All this to say: I think I'm just going to be unapologetically ruled by my aesthetic sense from now on, and say that we can allow some grace. Maybe Dubya had a stutter; you can get an aphorism wrong and it's fine. It's ok. That being the case, if any politician in the future sits down and types out something as fucking sauceless and cringe and gross as "Tuesday will be Power Plant Day, and Bridge Day, all wrapped up in one, in Iran. There will be nothing like it!!! Open the Fuckin’ Strait, you crazy bastards, or you’ll be living in Hell - JUST WATCH! Praise be to Allah. President DONALD J. TRUMP" and thinks "This is great, fucking SEND IT", they should probably go back to screaming at the cocaine ghosts in an alleyway and stop blighting our eyes with their garbage.

Just so we're clear on the timeline here:

Saturday March 21, 2026, 7:44 PM (ultimatum expiring 7:44 PM Monday March 23, 2026) -

If Iran doesn’t FULLY OPEN, WITHOUT THREAT, the Strait of Hormuz, within 48 HOURS from this exact point in time, the United States of America will hit and obliterate their various POWER PLANTS, STARTING WITH THE BIGGEST ONE FIRST! Thank you for your attention to this matter. President DONALD J. TRUMP

Monday March 23, 2026, 7:23 AM (ultimatum extended to Saturday March 28) -

I AM PLEASED TO REPORT THAT THE UNITED STATES OF AMERICA, AND THE COUNTRY OF IRAN, HAVE HAD, OVER THE LAST TWO DAYS, VERY GOOD AND PRODUCTIVE CONVERSATIONS REGARDING A COMPLETE AND TOTAL RESOLUTION OF OUR HOSTILITIES IN THE MIDDLE EAST. BASED ON THE TENOR AND TONE OF THESE IN DEPTH, DETAILED, AND CONSTRUCTIVE CONVERSATIONS, WHICH WILL CONTINUE THROUGHOUT THE WEEK, I HAVE INSTRUCTED THE DEPARTMENT OF WAR TO POSTPONE ANY AND ALL MILITARY STRIKES AGAINST IRANIAN POWER PLANTS AND ENERGY INFRASTRUCTURE FOR A FIVE DAY PERIOD, SUBJECT TO THE SUCCESS OF THE ONGOING MEETINGS AND DISCUSSIONS. THANK YOU FOR YOUR ATTENTION TO THIS MATTER! PRESIDENT DONALD J. TRUMP

Thursday March 26, 2026, 4:11 PM (ultimatum extended to Monday April 6, 8:00 PM) -

As per Iranian Government request, please let this statement serve to represent that I am pausing the period of Energy Plant destruction by 10 Days to Monday, April 6, 2026, at 8 P.M., Eastern Time. Talks are ongoing and, despite erroneous statements to the contrary by the Fake News Media, and others, they are going very well. Thank you for your attention to this matter! President DONALD J. TRUMP

Sunday April 5, 2026, 12:38 PM (ultimatum extended to Tuesday April 7, 8:00 PM) -

Tuesday, 8:00 P.M. Eastern Time!

All this to say: I think I'm just going to be unapologetically ruled by my aesthetic sense from now on, and say that we can allow some grace

It's like coming full circle. The pre-politically interested child who makes superficial comments about a politician's appearance had it right all along. "He blinks too much!" or "he's fat like Santa" was all the political philosophy you needed, it turns out.

Let's hope he delivers on this, though I am doubtful. But yeah - he is going insane. Which is much more entertaining than Biden's senility.

One possibility: Trump lost his edge after being banned from Twitter. He used to be legitimately great at writing funny tweets, even if you didn't agree with him. But Twitter is an ecosystem, and a skill; Truth Social just isn't the same (I don't think I've ever seen anyone share a post from there that wasn't Trump's). He's basically just talking to himself there, so his tweeting skills are getting rusty.

His live standup insult comedy act is still in top form, though. Did you see his meeting with the Japanese PM and journalists? Hilarious. He told an extremely crass joke with no hesitation or shame, off the cuff, and made it work.

Yes, you do understand this goes both ways? You understand democrats come off as Halloween villains to much of the country?

Trump is an aesthetician. He governs on a platform of, essentially, ‘I’m the tsar and I’m gonna look like it’. Yeah, aesthetics. And he baits democrats into the vanguard party damn-fool aesthetics.

Yes, you do understand this goes both ways? You understand democrats come off as Halloween villains to much of the country?

I'd go with actual living demons over Halloween villains, but to each their own.

In light of the over the top evil of the opposition, I'm fine with my chosen champion acting like a Crusader King.

but the last Republican I could really get down with was Mccain, purely off of his aesthetics

The guy who chanted "Bomb Iran" to the tune of "Barbara Ann"?

Buttigieg is a bloodless technocrat, and looks like it.

And allegedly purchased a child via surrogacy. Probably not the kind of aesthetics I'd support, but I suppose that's the problem with trying to judge someone by aesthetics. I don't care for Trump's, but I'd be hard pressed to name a politician from either party who has aesthetics that make me think I should support them.

I never knew it was any different. I was singing that song in my head a couple weeks ago.

In Pete’s case, the aesthetics have already gone south. Aretaics couldn’t be all that salvific if it produced the same mediocre outcomes. It’s the case with all moral systems: they fail as people ‘depart’ from their values, unless the content itself is the object of your critique (e.g. Nietzsche).

The guy who chanted "Bomb Iran" to the tune of "Barbara Ann"?

To be fair, just about everyone of a certain age has to be tempted to do that when the subject of Iran comes up. If the regime falls, the new regime would be well-advised to ask that the country be called "Persia" again, just to break that association.

It is a quintessentially tasteless tweet:

  • Posting a message about your enemy living in hell on Easter, the joyful day celebrating Christ rescuing sinners living in hell

  • Posting it on TruthSocial, which I imagine is only populated by evangelicals who care a lot about the holiday

  • Threatening to destroy civilian infrastructure, which, again, is on Easter morning, and presenting it in the language of an Easter basket

  • Concluding with Praise be to Allah

  • Posting no other Easter message the rest of the day

  • Coming off as desperate, not at all in control

My running hypothesis is that the rescue operation went poorly and handicapped his judgment.

Isn’t the rescue operation being claimed as a victory? What's your theory?

It seems more parsimonious to assume the negotiations are going poorly. That also strikes me as more in character for Trump (he sees himself as a big negotiator, and probably doesn't really care about the lost C-130s).

But we did get a couple of birds stuck over there and had to blow them up, which I imagine is frustrating (particularly for the people who were really hoping we could avoid anything that remotely resembled Eagle Claw this time, lol).

Worth noting that the failure of Operation Eagle Claw wasn't the lost equipment (losing the helicopters was already priced in), but the failure to rescue the hostages.

Here it appears at least possible that they contemplated the possibility that they'd have to ditch the planes.

Praise be to Allah

I think this should be read as a threat. Which it is.

I kind of have the sense that Trump is actually going insane, or at least that his emotional control over himself is slipping. It's not that he is bombing Iran - that isn't very different from normal US foreign policy. And it's not that he is being bombastic - he has always been bombastic. But his pronouncements lately have had a very deranged and openly sadistic frothing-at-the-mouth quality that is noticeably different from his usual posting style.

I don't think that he is just talking like this for strategic purposes. His base likes the bombast but would probably prefer a kind of bombast that seemed more composed and less emotional. They like the idea of "Trump the strong man", not "Trump the ranting lunatic". As for Iran, after having experienced assassinations and bombings for weeks, there is no reason why they would not believe a threat that was worded more calmly. If anything, I think a calm-worded threat would probably seem more plausible to them. I can't think of any way in which frothing at the mouth would help manipulate the stock market any more than a calmer tone would, either.

I think this communication strategy makes sense in the context of the Middle East and Iran in particular. The region is pretty well known for its bombast. The videos of political rhetoric I’ve seen from that region sound pretty bombastic as they chant for the deaths of their enemies. There are videos of toddlers chanting for the death of Assad, feel-good news stories about a kid healing from the death of his father by playing video games (in which he pretends the enemies he’s killing are Jews). You can’t convince those people you’re serious if you’re not over-the-top bombastic and ready to kill them and destroy their country. This isn’t Sweden, and you can’t talk to an Iranian Shia Muslim like he’s a Swedish Lutheran.

As for Iran, there is no reason why they would not believe a threat that was worded more calmly. If anything, I think a calm-worded threat would probably seem more plausible to them.

Honestly, the 4d chess argument I can come up with for this is that Trump is actively trying to make sure the war does not come to a diplomatic conclusion, and as such is utilizing a mix of insults and obvious bluffs to convince the Iranians to stay in it.

Related to my conspiracy theory that this entire adventure is designed to let some air out of the stock market bubble, on the theory that the AI investment process needs to continue in order to achieve AGI, but that a catastrophic sudden bubble pop would torpedo the whole industry, so they needed to do something to bring the stock market down slightly before the bubble popped.

Honestly, the 4d chess argument I can come up with for this is that Trump is actively trying to make sure the war does not come to a diplomatic conclusion, and as such is utilizing a mix of insults and obvious bluffs to convince the Iranians to stay in it.

Agreed, but I don't think it's really 4d chess; it's not a really sophisticated strategy. He doesn't want them to make an offer that sounds reasonable.

Alright, AI bros, follow-up from last week. I was able to secure access to Claude Opus 4.6 at my job, and I gave it the same prompt that I had given to Sonnet. This time it completely overlooked the authentication part of the HTTP client library in what it generated. In a follow-up I asked it to extract out the common logic for the authentication portions specifically. It didn't do that; instead it generated a class with two helper methods.

The first helper method was just a thin wrapper around System.Text.Json for deserializing the response. There's an optional flag to pass in for when case-insensitive deserialization is needed, and nothing else.

The second helper method was something for actually making the HTTP calls. The strangest part of this one is that it has two delegates as parameters: one for deserializing successful responses, the other for handling (but not deserializing) error responses. It didn't do anything to split out handling of the two different ways to authenticate at all.

The issues with what was generated (for both the API client as a whole, and for the authentication part of the code specifically) are numerous; here are a small handful that I identified:

  1. It assumes that an HTTP 200 code is the only successful response code, even though some endpoints return 202, 207, and more.

  2. It assumes that all endpoints return plaintext or JSON content, even though several return binary data, CSV data, etc.

  3. It didn't do null checking in several places. I assume it was mostly trained on C# code that either didn't do null checks correctly, and/or on code that doesn't use the nullable reference type feature that was added in C# 8 (back in 2019). Either way, the null checks are missing or wrong regardless of whether nullable reference types are enabled. It also always checks for null with == or !=. This works 99% of the time, but best practice is to use "is null" and "is not null" for the rare cases where the equality operator is overloaded. Once again, I assume this is because most of the training data uses == and !=.

  4. It doesn't handle URL query parameters (or path parameters); it assumes everything is going to use a JSON body for the request.

  5. It uses the wrong logging templates for several of the logging calls. For example, the logs for an error response use the template for logging the requests that are sent. Even more troubling, it removed all the logic for stripping user secrets out of these logs.
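For concreteness, here's a minimal sketch (my own illustration, not the generated code; all names are invented) of what handling issues 1, 3, and 4 correctly might look like in C#:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;

public static class ApiHelper
{
    // Issue 1: accept a caller-supplied set of success codes instead of assuming 200.
    private static readonly HashSet<int> DefaultSuccessCodes = new() { 200, 202, 207 };

    public static bool IsSuccess(HttpResponseMessage response, ISet<int>? successCodes = null)
    {
        // Issue 3: "is null" is safe even when == is overloaded on the type.
        if (response is null) throw new ArgumentNullException(nameof(response));
        return (successCodes ?? DefaultSuccessCodes).Contains((int)response.StatusCode);
    }

    // Issue 4: build query strings explicitly rather than assuming a JSON body
    // carries everything.
    public static string BuildUrl(string basePath, IDictionary<string, string>? query)
    {
        if (query is null || query.Count == 0)
            return basePath;

        var pairs = query.Select(kv =>
            $"{Uri.EscapeDataString(kv.Key)}={Uri.EscapeDataString(kv.Value)}");
        return $"{basePath}?{string.Join("&", pairs)}";
    }
}
```

None of this is exotic, which is part of what makes the model's omissions so strange.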

There are quite a few more issues, but overall my experience with Opus was even worse than my experience with Sonnet, if anything. AI bros still in shambles. I definitely have zero fears that AI will replace me, though I'm still definitely fearful that retarded C-suite execs will think it can replace me.

My post from last week about using Claude Sonnet: https://www.themotte.org/post/3654/culture-war-roundup-for-the-week/426666?context=8#context

Edit: Just saw a very relevant post over on Orange Reddit about this very topic: https://news.ycombinator.com/item?id=47660925

If you're willing to iterate one more time can you try giving it this series of prompts?

  1. @agent "Please create a standard set of agents, skills and prompt files for this project. I want specifically for there to be an orchestrator that I can give a complex query to that will walk through a planning stage, asking me relevant questions, create a plan md file and then manage subagents to execute on that plan. Two agent definitions that I definitely want are a security specialist that will audit changes for best practice and a reviewer agents that will audit to make sure updates do not break previous functionality. The orchestrator should know to invoke these agents for these tasks"

  2. @orchestrator "Please scan through and document this project using standard claude.md files to aid agents in navigating and understanding the project. Update agent definitions with relevant information."

  3. @orchestrator "[Insert your prompt]"

You can tinker to make this much better but doing this should greatly improve your results alone.

I'll try that later this week when I get a chance. Maybe next time I'm stuck in an awful meeting for two hours.

I'm not a programmer, so what you just said is all Greek to me, but I'll take your word for it that what you described represents a significant departure from the expectations that the AI-horny would lead one to have concerning the capabilities of the product. But they can always respond that these are solvable problems, and that with the technology in a constant state of flux we can expect things to keep improving in the coming years, since until very recently even that level of functionality wasn't possible. My concerns with AI go beyond that, though, to problems that don't seem to be solvable in the short term and that have only gotten worse in recent years. These are more business-related than technology-related (though the limitations of the technology do factor in), and threaten the entire viability of AI as an industry.

I use Photoshop quite a bit. During the pandemic, though, my graphics card crapped out, and since they were in short supply, I replaced it with an old one from 2014 I had lying around. Since I don't play games or anything this was a perfectly acceptable solution, except that at some point newer versions of Photoshop started offloading some of the workload to the graphics card, for which mine was hilariously out of date. While the newer versions technically worked, there was a certain wonkiness that prevented me from adopting them full-time, and I continued using an install of Photoshop 2018, which was more than adequate for my purposes. In the meantime, I noticed that a newer version I had installed had incorporated "neural filters" aka AI into the program, which of course it did, and I fooled around with this a bit. Some functions were fun, if limited, while others, like upscaling and automatic scratch removal, didn't seem to do anything useful. But whatever. A few weeks ago I finally got a new graphics card after the old one gave up the ghost, and I looked into Photoshop 2026 to see what had changed since 2025. The answer was that the updates were basically all AI-driven, and not in a good way.

Adobe has been a convenient punching bag for the enshittification trend as of late, and the purpose of this post isn't to pile on, but to illustrate how it's representative of a greater rot in the software business and how AI only seems to accelerate that rot. Like previous iterations, some of these AI features are impressive, and some are stupid, but all of them cost extra. The way it works is that you get a certain number of credits depending on your subscription (and as a long-time customer of the Photoshop-only plan I get a generous number of credits), and each time you use one of these features it costs a certain number of credits. And if you run out you can't just buy more; you have to upgrade your subscription, and I already get the most credits you can with an out-of-the-box subscription that doesn't involve going through their sales department. To make matters worse, determining how many credits a given action will cost isn't based on a set rate but depends on 900 different factors, and is so complicated that the software can't even tell you how much an action will cost before it's run. And as a final blow, they don't even provide a way of telling you how many credits you have remaining; you eventually just get a message that you've run out.

The latter problem is obviously part of Adobe's slimy sales tactics, where they want users to be unable to plan ahead so that they unexpectedly run out of credits in the middle of a time-sensitive project and are forced to upgrade, so I can chalk that up to normal corporate bullshit. The former problem is due to the fact that there is simply no way of predicting how much compute an AI system is going to use until it's already used it. The real kicker is that, due to the inherently unpredictable nature of generative AI, you don't even know if the command is going to achieve the desired result, or how many attempts and tweaks it will take, and it may take multiple, expensive generations just to get something usable. The result is that the function is inherently self-defeating. There are lots of Photoshop functions that may require tweaking or not work at all, but they're integral parts of the software and cost the user nothing but time if they don't get things right on the first try, and the individual user will get more proficient with experience. The AI features are simply a black box that requires you to throw an unknown amount of money at it and hope it does what you want. I, as a user, am thus disincentivized to bother learning how to use these features because my access to them is liable to be cut off at any moment, whereas my existing workflow works fine as it is.

This is basically the problem with the whole "AI as a service" model these companies all seem to be banking on. If the response to Photoshop 2026 is any indication, customers want cost predictability and function predictability. If Microsoft Word cut you off after 1 million words per month it would seem less like you were buying software and more like a free trial. It would be even worse if the number of words you were allowed to type depended on font, font size, formatting, etc., and you didn't know how many credits each action would cost and were liable to be cut off while in the middle of writing something important. Luckily, I can use Word to my heart's content without it costing Microsoft any extra, so they have no reason to impose such a restriction. With generative AI, on the other hand, every action costs the company money, whether it benefits the customer or not, and the company can't predict in advance how much money that's going to be. So there's no way an AI company can realistically charge based on use without pissing off their customer base, who will cancel after getting that first $75,000 bill in the mail that no, they aren't paying.

Charging a flat monthly fee for unlimited usage doesn't solve this problem so much as stick the provider with the bill instead of the customer, so most of the AI services have resorted to a deceptive hybrid model where it looks like you're getting unlimited usage but has asterisks stating that it's subject to a cap, which caps are never explicitly defined. Some charge a monthly fee for access to a certain number of credits, which don't roll over at the end of the month. I'd find a lot to criticize about these models, which wouldn't fly in any normal business sales situation and would be relegated to the scummy end of the consumer pool in any other context, except that they still manage to lose money for the big players. Third-party agent developers may be profitable, but it's only because they're already buying their compute at a discount.

The only conclusion I can draw from all this is that software as a service, while loathed by customers, isn't really beneficial to companies either, other than as a cheap way of temporarily boosting numbers. And that's indicative of a deeper problem in the tech industry as a whole, a problem of their own making. From the 1980s through the 2000s, the computer industry grew exponentially. In the 1970s computers were things that large corporations and government agencies used to manage large databases. In the 1980s they became productivity tools that every employee had on his desk. By the mid-90s, home adoption had started in earnest, and by the end of the decade practically everyone had one. In ten years the internet went from being a hyped curiosity to an essential utility. The technology was also changing quickly, and the improvements were massive. In 1994, a typical home PC had a 486 processor clocked at 66 MHz, 8 MB of RAM, and a 500 MB hard drive. It would run Windows 3.1, which would be replaced a year later with Windows 95, a huge upgrade. Five years later that computer would be hopelessly obsolete; in 1999 a comparable build would have a 450 MHz Pentium II, 128 MB of RAM, and a 13 GB hard drive. It would run Windows 98, which would be replaced 2 years later with Windows XP, an even bigger upgrade that eliminated the finickiness of DOS once and for all.

By 2010 CPUs would be clocked in the gigahertz and run multiple cores, RAM would be measured in gigabytes, and external hard drives of more than 1 TB would be affordable. Windows 7 was released the year prior to great acclaim. To put all that in perspective, I'm currently writing this on a Lenovo Thinkpad from 2024 that has the same amount of RAM as the currently-available model, which has the same amount of RAM as my home PC build from 2019. Or 2018; I can't remember the year I last did a major upgrade, but I haven't done any since before the pandemic, aside from the aforementioned graphics card. I haven't needed to upgrade it either, as there hasn't been any decline in performance in the tasks I actually use it for. And even that upgrade didn't appreciably improve performance over the 2014 gear I was running before. Windows 7 was the last Windows release that was universally loved; every one since then has been met with varying degrees of derision. There had been flops before: Vista was too far ahead of its time to be usable, and ME was a half-assed stopgap that never should have been released. The only mistake in this vein since then was 8, which completely misread the future of computing. Every new Windows since then has been an unexciting incremental upgrade that would probably have worked just as well as a security patch for 7.

I don't want to overstate my case here and suggest that computers haven't improved in the last 15 years; I'm sure my 2014 build would be woefully inadequate by today's standards. The point is that the advances aren't coming as fast as they did in years previous, and when they do come the improvements are more subtle. It feels like 2010 was the year that computer technology reached a mature phase where all adults, even your grandparents, knew how to use it, and good technology was as cheap as it was going to get. This wasn't clear at the time, but in a few years it was apparent that things had stagnated. In the early 2010s I listened to TWIT semi-regularly, and it didn't seem like there was much to get excited about. The two big things that the industry was pushing as the next frontier at the time were wearables and IoT devices. The former flopped spectacularly. The latter had better market penetration, though some of the implementations were ridiculous, and the whole concept has since become a metaphor for how technology has gone too far, trading simplicity and security for dubious functionality. As hardware stagnated, software quickly followed suit. Improvements in software follow improvements in hardware, and with hardware capability virtually unlimited, there was nowhere left to go. Sure, there would always be new features, support for new devices, and better security, but the game-changing upgrades seemed like a thing of the past.

So take a program like Photoshop that was first released in 1990 and had improved leaps and bounds by the time CS6 was released in 2012. A lot of users contend that this was peak Photoshop and that everything since then has been unnecessary bloat. I am not one of those people; the current software is significantly better. But CS6 was also the last version to be sold as a standalone product. Adobe had good reasons for doing this at the time—Photoshop was an incredibly expensive professional grade product that also had broad-based appeal. This meant that it was particularly susceptible to piracy, and lost more money to piracy than more modestly-priced products. They had tried to combat this in the past by releasing less expensive consumer-grade versions like Elements, but these never really took off, as consumers felt like they were missing something (most notably, Elements did not provide access to curves, which every photography book agreed was an essential tool). The decision to go subscription would give consumers access to an always-up-to-date full version of the product for less than it would cost to upgrade every other release.

The crowd who insists that CS6 is better is dwindling now, but even in its heyday it was mostly composed of people who had never actually paid for Photoshop and were mad that it was more difficult to pirate. But when Creative Cloud was first released in 2013, much of the criticism came from professionals and actual customers who were concerned about the new model. Sure, it was cheap now, but what was stopping them from jacking up the price in the future? Creative professionals aren't exactly the most highly paid. In the past one could upgrade whenever he could afford to and, if necessary, stick with a legacy version until things improved. But making one's continued access to software they needed for their job dependent on paying a ransom that they might not be able to afford was a different story. The reaction may have been better if CC offered a significant upgrade over CS6, but rather than wait a few years and offer a significantly improved version, CC came out earlier than one would expect and didn't offer much of an upgrade. Accordingly, the new subscription model was the only noteworthy thing about it. To Adobe's credit, the subscription price didn't change at all for over a decade, but in hindsight, there weren't any game-changing upgrades, only incremental improvements. If the company had simply relied on customers paying full price to upgrade whenever they felt it was worth it, they may have been waiting a long time.

As SaaS has matured from those early days, it has become less about preventing piracy and more about anxiety that newer products won't differentiate themselves enough from the old to justify an upgrade. Better instead to lock in that revenue stream with a subscription that's impossible to cancel short of telling the bank to stop paying. Unfortunately, as a business move it's a one-time thing: the number goes up as all the old customers switch to subscriptions, but once they're aboard, the line flattens out again. In normal industries, this isn't a problem. In the computer industry, where 30 years of exponential growth had been not only welcomed but expected, the situation was unacceptable. Since there was nowhere left to go technologically, the industry had to resort to cheap gimmicks to keep the numbers up. SaaS was one. The aforementioned IoT was another; nothing better than announcing huge deals with appliance manufacturers who will be integrating your products. The problem with gimmicks like this is that, while they can increase revenue, they have a shelf life. A deal with Whirlpool to make a smart fridge may make both of your numbers go up, but once there's a computer in every fridge sold, exponential growth is no longer possible. By the 2020s, the tech industry was running out of gimmicks. I think the reason Apple became top dog during this period is that they were the only tech company that didn't seem to be peddling bullshit. I had a friend who was in and out of tech startups during this period (I even interviewed at one of his companies), and every idea was based on a free service that was really just scaffolding for advertising or data harvesting. A company like Apple that still sold products and services they expected customers to pay for was an outlier indeed.

So AI came to save the day. I'm not denying that the technology is impressive and potentially useful, but it is just about the biggest gimmick one could imagine. Simply being impressive and useful puts it in about the same league as, well, Photoshop, which, even in its first iteration, was a revolution to anyone who had ever worked in a darkroom. Unlike Photoshop, though, AI promises to solve not one particular problem but all of the problems, including ones that haven't been identified yet. This latter point is particularly salient, because exponential growth in the tech sector was never based on the present, but on the future. If the tech industry of the 2010s looked like it was in danger of stagnating and becoming a normal industry, in the 2020s the sky was the limit. It was now worth it for capital to invest all of the money in AI companies, because if they were successful, then money wouldn't matter anyway.

And if they weren't successful? Well, they never considered that possibility, because the line only moves in one direction. The equation is pretty simple: if AI companies are successful, then your support was worth it and will be repaid. If they aren't successful, then you need to give them more money. But what happens when the money isn't there? How good Photoshop's AI features are is ultimately secondary to how much they cost. Someone has to pay for them, be it the customer or Adobe. Some companies may be willing to subsidize AI, but if Adobe were willing to give product away for free, they'd do better dumping CC and charging $500 for CS7, and we know that ain't going to happen. Instead, they've raised subscription prices by 50% in an attempt to get customers to pay for the privilege of having access to functionality they have to pay extra for if they actually want to use it. I doubt it's a coincidence that the first substantial price hike in the history of CC coincides with the introduction of the expensive AI upgrades. I doubt Adobe will suffer much for it, because their business (like Apple's) is actually sound and their products indispensable, but it's indicative of the perversion at the center of the tech world. Eventually, somebody is going to expect to get paid, and the party will be over. And as I write this, I don't see any scenario where the money is going to be there.

It's interesting that you mention C# and null checks.

I also work in C# here and there, as well as a language that is relatively verbose, garbage collected, class based, statically typed, single dispatch, and object oriented, with single implementation inheritance and multiple interface inheritance. Like you, I'm seeing unimpressive results that do not justify the spend necessary for agentic coding.

Every time I've mentioned it here, I'm told the following:

  1. I'm using the wrong model. It does not matter what model I'm using - I'm using the wrong one. If it's not the absolute latest model as of three days ago, I'm speaking in bad faith because I'm using an outdated model (and I should ignore the fact that people were saying the same damned thing about the last version that they're now denigrating). If I am using the latest model, I should be using a different model from a different vendor. At this point I've tried Gemini 3.1 Pro/Thinking/Flash, Opus 4.5/4.6, and GPT 5.4. I'm running out of frontier models.
  2. Next, I'll be told I'm not using plan mode. I can read the manuals. I assure you that I am using plan mode. The fact that the agents frequently do not follow their own plan is apparently a moral failing on my part.
  3. Next, I'll be told I'm writing a bad spec and providing bad prompts. I'm an experienced developer. I'm a published author. I have an English minor from college. I worked as a technical writer for a while. If I can't write a solid prompt, I have to wonder who the ideal candidate is - especially when these things are supposedly so frighteningly powerful that the vendors claim to be half-afraid to release them.
  4. After that, I'll get barraged with vague claims about how the tech is so rapidly improving that my personal tribulations don't matter. Depending on the person, they'll either refer to radiology as a benchmark (ignoring the fact that the models return results even without a film) or say something about how the models are only improving, and inference is only getting cheaper.

Nobody seems to want to offer the sane take, which seems to be that there can be real efficiency gains for small, well-specified projects, provided you are already an expert in the domain and are willing to spend a considerable amount of time beating it into submission whenever it so much as coughs.

If you're working on a small (or perhaps exquisitely modularized) codebase, and it's chock full of documentation written in a way that the LLM can comfortably consume it without getting confused, and it's using only the happy path architecture and library set for your language, and it's in one of the "favored" languages (like python), and you have a robust set of preexisting end to end tests that can help keep the LLM on the rails, then this technology is probably pretty great.

Outside of FAANG and a few startups, however, I'm not sure how often that's the case. Legacy code is real. Enterprise customers can have upgrade cycles that are measured in years. Backwards compatibility is worth more than features. Regulatory compliance issues might end up as a court summons instead of a JIRA ticket. That's not a world that does well with disposable code. Unless startups can outcompete every established player in every industry with those characteristics, I'm not sure how that changes. I can't rule out that such a future might happen, but given the moats around those industries, it'll be a tough row to hoe.

In our internal pilots, AI-generated PRs from frontier models make it through our test suite on the first try about 15% of the time. Another 30% never pass at all because they spiral out into schizophrenic fantasy lands, trying to call libraries that don't exist or attempting to rewrite a two million line codebase in "modern python". Of the ones that do make it through, about three quarters of them end up failing code review, even as we update and refine our agent instructions. At this point, dependabot has a better track record, and it doesn't even have Dario Amodei crying at night about how terrifyingly capable it is.

It pisses me off. The technology clearly has some uses, but fuck me if it doesn't feel like it's been wildly oversold. We still use it internally, but the mania is starting to die down. Management thinks it's the best thing ever because it can automatically spam LinkedIn for them. Development uses it as a more accessible StackOverflow. But we've given up on agentic coding for the time being. We'll probably look at it again in six months, assuming nothing bizarre happens between now and then.

I don't really know how to answer your posts because you seem to live in a different universe than me when it comes to AI efficacy. It's like someone checkmating "grass is green" bros by saying they checked and their lawn is brown.

Perhaps there are some unstated assumptions that lead to our differing views on it. Have you read this article about a guy accomplishing a highly nontrivial project with significant AI assistance? It matches my experience pretty well, from the pitfalls you can fall into to the genuinely new possibilities it opens up.

I don't think anyone is saying it doesn't have its use cases. It's a problem of expectations, on the technical as well as the business side.

I don't understand the people who don't understand how big a difference AI is making to software development. Speaking of software development, this video is great and captures my frustration with a lot of the software developers who I have to work with, and yeah, for people like that I'd understand completely why AI is bad: all it does is 10x the amount of slop they produce, which others then have to review.

this video is great and captures my frustration with a lot of the software developers who I have to work with

Sadly the parody developer in that video would be more competent than most of my Indian coworkers, so I wouldn't be surprised if AI could replace them. But instead it will be one of the slightly more competent Indians generating mountains of barely functional AI slop (instead of the small hills of slop my coworkers currently generate).

That's one of the things that has caused the org I work for to re-think their internal AI push. The blast radius of bad developers is no longer limited by their own incompetence.

These tools don't generate 1-shot perfection - you need to create a feedback loop that will iterate until it reaches the goal. That can be either test coverage or using tool calling to hit a live service with a test API key or something. Even just prompting it to use a linter or a compiler to catch syntax errors makes a huge difference. Claude would fix most of the issues you flagged in a few loops of trying to test the library, failing and getting an error message, adding the error to its context, editing the code, and repeating. Then at the end once you have something that works, instruct it to write some regression tests, clean up the code, and make sure everything still works as intended.
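A minimal sketch of that loop in Python, with a hypothetical `propose_fix` standing in for the model call and `run_checks` standing in for whatever validation you have (linter, compiler, tests); both names are made up for illustration:

```python
# Sketch of the feedback loop described above. `propose_fix` is a
# hypothetical stand-in for a model call; `run_checks` runs your
# validation and returns (ok, error_output).

def fix_loop(code, propose_fix, run_checks, max_rounds=5):
    """Feed check failures back into the model's context until checks pass."""
    for _ in range(max_rounds):
        ok, errors = run_checks(code)
        if ok:
            return code
        code = propose_fix(code, errors)  # error message goes into context
    return None  # gave up; escalate to a human

# Toy demo: the "model" repairs a syntax error after seeing it.
def run_checks(src):
    try:
        compile(src, "<agent>", "exec")
        return True, ""
    except SyntaxError as exc:
        return False, str(exc)

def propose_fix(src, errors):
    return src + ")"  # pretend the model closed the missing paren

fixed = fix_loop("print('hi'", propose_fix, run_checks)
```

The point is just the shape: check, feed the failure back, retry, and cap the number of rounds so a confused model can't spin forever.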

You're doing the equivalent of handing an intern a sheet of paper, telling them to write down their program based on a vague problem description, and then calling them an idiot when it doesn't work on the first try.

I have no idea why Claude Code is working so badly for you. I work at a FAANG-level company, and a huge amount of our code is written by Claude. Garry Tan is in AI psychosis, but Claude Code is easily the biggest productivity unlock in CS since I started my career.

A few recommendations:

  • What thinking mode are you using? Use at least high or max.
  • For the purpose of this test, give it all permissions and link it to an MCP server like context7
    • This allows it to independently read documentation on your local machine and from remote sources
  • Basic, but update the app. This lapse happened to a very smart coworker of mine.
  • Use plan mode. It allows the model to build an intuition for the problem before it goes off on its own
  • If you want specific behaviors, then ask for that. Something like:
    • State and scrutinize your assumptions explicitly
    • Consider and invalidate counterfactuals.
    • Utilize coding patterns that have already been established in the repo.
    • Ideally, ask it to go write readme.md files for core utility dirs in your repo, so it doesn't cold start
  • Pair it with a type checker / linter and add it as a post-model hook
    • In python land, ruff & based-pyright are the tools of choice.
    • I have used pre-defined open source linting rules, which allow the model to implement best-practice behaviors (e.g. opinionated null checks) without human intervention.
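To make the post-model hook concrete: in Claude Code this kind of thing lives in `.claude/settings.json`. The exact schema varies by tool and version, so treat this as a sketch rather than a reference:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "ruff check --fix . && basedpyright" }
        ]
      }
    ]
  }
}
```

The idea is that every file the model edits gets linted and type-checked immediately, so failures land back in its context instead of in your code review.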

I've noticed that the quality of the codebase plays a huge role in the model's ability to write effective code.

For ex:

It assumes that all endpoints return plaintext or JSON content, even though several return binary data, CSV data, etc.

Ideally, all endpoints will already be typed. The model should not have to guess the request-response types.
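For instance (a hypothetical Python-flavored sketch, since I don't know your actual stack; `CsvExport` and `export_report` are invented names): declare the payload shape up front so the model reads a contract instead of guessing one.

```python
# Hypothetical sketch: an endpoint with an explicit response type, so a
# model reading the code knows it returns CSV bytes, not JSON.
from dataclasses import dataclass

@dataclass
class CsvExport:
    filename: str
    body: bytes  # binary CSV payload, not JSON

def export_report(report_id: int) -> CsvExport:
    # Build a tiny CSV body; a real endpoint would query a database here.
    rows = f"id,total\n{report_id},42\n"
    return CsvExport(filename=f"report-{report_id}.csv", body=rows.encode())

export = export_report(7)
```

With the type in place, neither an agent nor a new hire can mistake this endpoint for one that returns JSON.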


Unless there is a specific regression in Claude Code, I don't know why Claude failed at your task. It should have worked.

Also, if you're looking for a model that prioritizes meticulousness, I'd use Codex. Codex has a tendency to autistically cover all of your bases, which benefits the sort of problem you're working with (again, use it in high or xhigh mode).

and a huge amount of our code is written by Claude. Garry Tan is in AI psychosis, but Claude Code is easily the biggest productivity unlock in CS since I started my career.

That's weird because in my experience, Codex 5.4 is way better than the most recent Sonnet. Haven't tried Opus though.

I have no idea why Claude Code is working so badly for you.

I'm not @ChickenOverlord, but I'm also seeing unimpressive results. Maybe we can get to the bottom of it.

I've tried Claude (via Claude Code), Gemini (via Gemini CLI), and GPT (via codex).

In all of them, I've used their equivalent of Claude.md/Agents.md to lay ground rules of how we expect the agent to behave. Multiple people have taken multiple shots at this.

We always use plan mode first.

Our documentation is markdown in the same repository, so that should be useful and accessible.

We're using Java, which is strongly typed, and all our endpoints are annotated with additional OpenAPI annotations that should provide even more metadata.

We're using a pretty basic bitch tech stack, but it's not spring boot. All three models regularly fight us on that fact.

We have four levels of validation, each with its own entry point in the build scripts. These are described in a readme.md in the root of the project. The first is a linter. The second is unit tests and code coverage. The third is a single end to end test. The fourth is all end to end tests. We have instructed the models to use these validation targets to check their work.

Despite all this, we see common failure modes across all models we've tested.

  1. Bad assumptions about the tech stack. No, we do not use spring boot.
  2. A tendency to add more code, rather than fix code.
  3. An urge to "fix" "bad tests" that exist for very specific reasons. These specific reasons are usually covered with inline developer documentation as well.
  4. Confusion about what capabilities our version of Java has available. Yeah, the pattern matching preview was cool. Stop trying to turn it on with experimental feature flags.
  5. Writing tests that don't actually test the thing it's changing.

I'm sure there are more, but these immediately come to mind. There are four of us trying to make these things work, and we all keep running into the same problems again and again. It's not just me - even people with dramatically different writing styles and thought processes are seeing the same thing. I feel like I'm taking crazy pills, because a lot of people I know in real life are experiencing the same pain, but on the Internet it seems like I'm a huge outlier.

What's the disconnect here?

What's the disconnect here?

It works a lot better if you bend to the AI and use a stack it likes. Why this specific Java stack?

Legacy concerns. The amount of custom code that has built up over the last 15 years is too big of a shift to deal with right now. It's on the backlog, but not anywhere near the top priority.

I'm sure there are more, but these immediately come to mind. There are four of us trying to make these things work, and we all keep running into the same problems again and again. It's not just me - even people with dramatically different writing styles and thought processes are seeing the same thing. I feel like I'm taking crazy pills, because a lot of people I know in real life are experiencing the same pain, but on the Internet it seems like I'm a huge outlier.

My most competent co-worker, a Russian guy who got his start writing assembly back in the '80s, was the person most enthusiastic about and interested in AI that I knew. He was always trying out the latest models from OpenAI, Google, and Anthropic. He was also running his own LLMs and diffusion models locally. He even dropped $4-5k on a DGX Spark late last year. And even he seems to be getting disillusioned and losing interest in AI; he doesn't seem to think it's going to achieve anything remotely close to the promises and hype. Though I will note that the push from our upper management to use AI hasn't pleased him much either, especially since the project we've been working on for the past year (modernizing a giant mess created by our Indian coworkers, who weren't using package management at all and were literally emailing around zip files full of DLLs for years; once every 2 or 3 months I got pulled into 4-hour calls to fix dependency conflicts in prod) was very much not aided by AI, but management insisted we find a way to use AI on the project regardless.

This AI bro vs (idk what to call the opposition) schism on this site is very funny

I feel like both sides are talking past each other in many ways, and also have no interest in bridging the epistemic gaps.

About me

I'm firmly in the "AI bro" camp, I guess. I do not code, nor do I know how to code aside from simple programming 101 type stuff, which is all I need(ed) to make VBA scripts work in Excel. I will never copy/paste another line of Stack Overflow VBA to jank together a macro again, and that makes me very happy.

Adoption is slow, but it's gradually happening at my employer $MULTI_NATIONAL_FINANCE_CO. It is very clear to me that I will see (and already have seen) large productivity gains, especially as agent scaffolds are made for things other than coding.

LLMs are both extremely powerful and very jagged. I think a huge amount of their "jaggedness" is due to their nature as LLMs, and they are very unlikely to get to ASI/some versions of AGI*. My best guess is they'll be as disruptive as the ~computer (i.e. the information age) was from 19XX-now, perhaps slightly smaller given "AI impact on human civilization" is kind of a subset of "computer impact on human civilization".

*Notwithstanding some kind of paradigm change in algorithm/AI approach. Which is always possible, but we're pretty clearly on the LLM-tech tree path for the next bit.

Vague Predictions

I am sure many white collar jobs will disappear entirely, many will be insulated for any number of reasons (ranging from genuine limits to retarded bureaucracy and everything in between) and will remain unchanged for a while, and some, like mine, will keep their core identity but day to day tasks will shift a lot and who knows what happens to employment (too many factors to guess per job).

Coding

It is clearly revolutionizing coding. This cannot be denied. GitHub commits are now going parabolic, so people are "building things". Much of which is slop. I am one of those people, I now have a small but growing fleet of personal tools. I'm sure they are coded awfully, I've never looked and wouldn't understand if I did. I don't care, they work for me.

There are much more accomplished coders on twitter, etc, who are also reporting massive changes to their lives. Many of them are incentivized to say such things and over exaggerate, but I doubt it's a massive coordinated lie or mass delusion. So there is truth there.

The more sensible ones will even agree that AI code is on average mediocre to bad, and AI can't do high precision high quality specialized code like a cracked human can. AI will even take your amazing high precision high quality specialized code and slop it if you're not careful. Many of them, like Karpathy, have just given up and accepted the slop as a price of doing business. Because they're accomplishing what they want with the code too. It works.

It's assumed that AI performance will improve massively from where it is today. It has so far, and it's a pretty safe assumption right now. It's rumored that the new Claude model beat expectations on performance vs scaling laws. AI model hype is always a large % bullshit, but we'll find out the real capabilities soon, and no matter what, they will be better than they are now.

I don't think LLMs are going to bring us the ASI digital god of Sam's wet dreams/nightmares. I think they are going to profoundly change our service economies regardless.

Your situation

I don't know your codebase or the thing you're getting it to do. I don't know anything about HTTP.

I seriously doubt you're trying to set the AI up for success at all. I can't code and I'm probably using more AI coding best practices than you are, and all my git commits are titled "lol".

It's also very possible that it's not worth the time to set up AI "properly" to fix this. There's a very real possibility it's much faster, if more tedious, to just do it yourself. But this is one task. N=1. There are things AI can do for you today, that's a guarantee.

The bubble

The usual retort to "skill issue" is "well, if I have to set it up and use best practices, then AI is a bubble". I think that's a strawman, because I am not stuck in a reflexive yes/no binary where if you like AI you can't also think it's a bubble. It could be a bubble; I don't know (or care). It's incredibly easy for an asset to be over-financed, and you never know if you've done enough capex until you've done too much (at any scale). What I care about is the AI tools I can access, which are excellent and also flawed.

Maybe AI needs to be that good out of the box to justify the trillions in capex. It probably does. But does that matter here? Neither you nor I control capex spend or can predict how long the scaling laws will hold for.

I don't care if AI is a bubble - we'll all find out and predictions of this scale/magnitude are essentially worthless. If you have alpha and guess right, all power to you, but the bubble conversational branch strikes me as a fool's errand. And it's irrelevant to "can LLMs do things for you?".

Closing thoughts

We have LLMs here right now that are massively changing basically any digital task you point them at. It's not easy, and it doesn't work everywhere, but it's insane when it does.

It's cognitively exhausting. It's a new way of thinking, plus every time new models/tools come out you change many things you were previously doing. So many assumptions and bottlenecks change. It's genuinely not always easy or obvious how to implement it. We are learning this in real time as a culture.

It's so exciting, and I hope to soon quit my job at $MULTI_NATIONAL_FINANCE_CO to capture more of the value of my labor, which is about to increase a lot (probably lmao, could also go to 0).

If you want to refuse or deny the power of these tools you can. You can set about finding examples of them sucking to point and laugh. But you're letting your bias blind you, and leaving a lot of value on the table. You can tell your computer to do stuff and it can now, it's awesome.

Also noting that in your HN link the inventor of Claude Code is asking ppl for feedback/providing explanations live as I type this.

I've never looked and wouldn't understand if I did. I don't care, they work for me.

This might be a huge part of the divide between doubters and believers.

The code coming back might be ugly, buggy, insecure, and probably completely impossible to scale.

But if it works, how much does the 'average' user care?

Yet for those who care about the quality of the code or product, it might grate when they look and see the inelegance of the solutions and the lack of foresight.

Apply this to the AI art debate, too. Sure a trained eye will notice deficiencies and shortfalls. But the average user notices that they can produce a logo or a cute cartoon portrait in 15 seconds for pennies.

Me, I'm now basically using the LLMs to do final review on any work I don't feel 100% competent on, since its attention to detail is now impeccable and of course it never gets tired or complains.

Sometimes it hits some nitpicks I genuinely find stupid, because in actual practice it's an irrelevant detail for the actual outcome of the matter. But it catches things, so it almost feels like it'd be malpractice not to use the tool.

Anyway, it's broken through to normies. AI agents are going to be huge among small businesses; I see people who are otherwise technologically inept with Grok AND ChatGPT on their phones' lock screens. They are already relying on this tech to a degree that might startle you. The genie ain't going back in the bottle.

Get psychologically (and financially) prepared to adapt, that's the only advice that I can truly offer right now.

It's so exciting, and I hope to soon quit my job at $MULTI_NATIONAL_FINANCE_CO to capture more of the value of my labor, which is about to increase a lot (probably lmao, could also go to 0).

Love this uncertainty. On the one hand, I could 10x my productivity and cut my rates by half and still be making crazy money for myself. Seriously, the number of basic and intermediate tasks that GPT can do for me is freeing up time to engage with the higher leverage tasks that I enjoy and get paid the most for.

But if it gets just a little better then my role as an expert intermediary becomes redundant. I myself become a wrapper for the LLM, I'm just giving the stamp of approval to outputs that are already 99% perfect, and getting paid to eat the blame if something does go wrong 1% of the time. And competition with other humans in this role will drive my marginal profit down to pennies.

I hate this uncertainty.

I hate this uncertainty.

I've always been an anxious person, worried for the future, etc. I've basically given up with AI, the world has gotten so ridiculous it's just funny.

I have no control, everything is going to change. Everything has changed a lot already in my lifetime. I'm just gonna ride it out, I had my friends over for a BBQ last night. Trying to do more of that this year.

hell yeah brother.

The thing about singularity-like situations is that reliable prediction becomes impossible. Although technically I don't have to predict with real accuracy, just better than 90+% of the population. Beat the masses and you'll do alright, provided we aren't all killed. You can fret about this, or you can let go and focus on the tiny parcel of territory in the vastness of probability-space that you have any influence over.

In my most primal moments, I sometimes think I should literally just locate the most physically enticing female I can attract (and compromise on everything else because what else matters if AGI hits?), liquidate most of my assets except like $100k kept in the S&P, and shack up in my house to have gratuitous amounts of sex, get all my groceries delivered, and just fuck around with AI art generators and see if I can make a bit of money off them before whatever comes next washes over us.

But man, it turns out somebody still has to do the hard work of keeping civilization turning so we can keep the lights on until we can finish the silicon god (or the false idol). Those data centers and nuclear plants won't build themselves. Yet.

I despise people who do that stupid "permanent underclass" posting, specifically to drive anxiety without any actionable outlet.

I had my friends over for a BBQ last night. Trying to do more of that this year.

Strong recommend. I've focused on keeping the friendships I have as strong as possible. Say "yes" to more social invites than you used to. As long as the activities don't kill you before we reach utopia, why spend this exciting time hunched over a desk or lying in bed doomscrolling?

One of my favorite parts of this forum is moments like this, when someone puts my thoughts into words better than I could. I agree with every word.

I have the exact same view on AI art. I have quite low skills in "artistic taste"; it's never a skill I've been good at or sought to develop much (low reward per unit time vs things I like more). But now I get to make funny images and concept art and express ideas in mediums that were previously locked to me. What fun! Yet there are people crying and screaming on the internet because, like, game developers are using AI agents to help them make games faster and better. I'm just excited for the golden age of AI gameslop. Good dev studios are going to be absolutely cooking.

I myself become a wrapper for the LLM, I'm just giving the stamp of approval to outputs that are already 99% perfect, and getting paid to eat the blame if something does go wrong 1% of the time. And competition with other humans in this role will drive my marginal profit down to pennies.

I'm hoping this window of time lasts a while. I'm adjacent to the legal world and they're going to use every institution they wield (many!) to keep themselves in this state for as long as they can.

I mean, there's no way that the legal profession doesn't outlaw AI use in law the moment it becomes a threat to their jobs, right? Lots of law makers are lawyers, and I don't think they are above using the levers of power to make sure their profession can't be replaced.

I'm not sure how they'll catch attorneys who are careful about the end products they're filing.

You might see attorneys staying suspiciously effective despite juggling large caseloads, making surprisingly adept legal arguments in their briefs while their performance at a live hearing is lacklustre.

But yeah it'll be banned from any client or public-facing roles to large extents.

AI use by attorneys will get lots of attention for job market and ethics reasons, but the courts are 100% unprepared for the day when pro se litigants start filing piles of plausible-sounding briefs in their traffic ticket/misdemeanor/family court cases.

They're already doing it in low-stakes civil cases.

Ask me how I know.

The code coming back might be ugly, buggy, insecure, and probably completely impossible to scale.

But if it works, how much does the 'average' user care?

In my experience, the average user starts to care right around the time their credit card number and mother's maiden name end up for sale to the highest bidder.

No one is going to vibe code their own SaaS to replace Salesforce et al.

Salesforce and other huge boys with giant moats will enjoy higher labor efficiency. They may experience serious pain from increased competition and margin pressure, but that's hard to predict.

Mid-cap software will knife fight each other over margins as competitors grow like weeds.

Small-cap/VC/PE idek lol, really excited to watch this space.

I'm super curious to see what happens when a given VC can invest in 5x as many startups per unit of $capital. I assume startups will scale faster. Do VCs stretch themselves thin with more companies in a portfolio? Do funds get bigger or smaller? Are there more or less actual VCs? Is it easier or harder to get a VC fund going?

That last bit is the most interesting part to me.

Right now, my understanding is that VC is extremely hard to get because a handful of AI darlings have sucked all the air out of the room. If they IPO soon, VCs should theoretically have freed up capital to deploy as the OpenAIs/Anthropics of the world start to show a return.

If I believe the argument, then it should result in a much larger number of smaller investments, since labor is ostensibly the biggest cost of software startups and that cost should plummet.

I don't think the normies are THAT far along that they'd trust it with their financial information.

But not too far out, either.

They might trust a Vibe-coded website, though.

As I understand it any website taking customers' financial information will usually use a third party's software rather than roll their own.

If Paypal et al. are vibe coding without regard to security we are in for some pain.

Block is vibe coding now

Yeah, personally, I've never bought into the AI hype at all. Everything I've ever tried to use it for, it promptly shits the bed on, so I just dismiss it as worthless.

But even in an alternate universe where I'm the crazy one and everyone else is sane, there are severe problems with trusting this stuff: first, you're de facto ceding control over your technical infrastructure to a third party (run by exactly the sort of people who say stuff like "idk, they trust me. dumb fucks"). Yes, yes, you're supposed to religiously check the output before committing, not let it execute unsafe commands in a privileged environment, yada yada. I've got a bridge in Brooklyn to sell ya. Second, there is existing precedent for tech services being intentionally made worse to increase usage: for example, Google intentionally made Google search worse by doing things like disabling spell check so that users would have to search multiple times to find the result they were looking for, thus "increasing usage" (yes, this is from an actual court document lol). As OP and plenty of other smart people have noted, there is a trivially obvious incentive and mechanism for this to be done with LLM coding agents. Just make the agent worse so people have to use more tokens!

If they're going to enshittify the AIs, it'll have to happen after one company gets sufficient market dominance that swapping to a different one isn't trivial.

And we're not at that point yet. If anything, the competition is at its fiercest right now. People seem to be willing to drop one product for another if they notice any tiny loss of performance.

I've mostly stuck it out with GPT so far, but I can't see any way they could lock me in hard enough that I wouldn't leave if it was obvious their model was consistently 10% 'stupider' than the alternatives.

I'm surprised (as a mostly non-user for now) at the complaints that engineering performance has degraded over months. Is this rapidly-improving expectations? Honeymoon phase wearing off? AI vendors cranking the screws to reduce costs and looking for pennies? I thought the models were difficult to train and largely static, so I didn't think that sort of scaling was trivially on the table.

Is this rapidly-improving expectations?

Yes

Honeymoon phase wearing off?

Yes, plus it's fun building the first part of slop-software (slopware?). The last 20% of finalization/polish is much less fun. I have gone from "holy shit AI software development is so neat" to "AI software development is neat but I'm getting really sick of making X specific and complicated thing I have no business building work" (I'm so close though).

AI vendors cranking the screws to reduce costs and looking for pennies?

Yes, this is getting worse too.

I thought the models were difficult to train and largely static, so I didn't think that sort of scaling was trivially on the table.

That's why they're all spending billions.

I think it's largely AI vendors cranking the screws.

On the other hand, Anthropic can't even manage two nines of uptime, so it may just be outright incompetence on their part. Being a PhD in machine learning does not make you an expert at SRE, and the models aren't quite there yet either.

Mostly vendors cranking the screws to squeeze more cash out of us. The worst thing is when providers silently update or change the quantization of the model without making it known. Local models don't have this problem, people (I say this as someone who's managed to get Qwen 3.5 397b-a17b running locally on a server I rented).
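For anyone unfamiliar with why silent quantization changes matter: quantization stores model weights in fewer bits to cut serving cost, at the price of small rounding errors everywhere. A toy pure-Python sketch of symmetric int8 quantization (illustrative numbers only, not any vendor's actual pipeline):

```python
import random

def quantize_int8(weights):
    # Symmetric int8: map the largest-magnitude weight to +/-127,
    # then round everything else onto that grid.
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quants, scale):
    # Recover approximate floats; the rounding error is permanent.
    return [q * scale for q in quants]

random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(1024)]  # stand-in weight tensor
quants, scale = quantize_int8(weights)
restored = dequantize(quants, scale)
err = max(abs(w, ) if False else abs(w - r) for w, r in zip(weights, restored))
print(f"max round-trip error: {err:.5f}")
```

Each weight moves by up to half a quantization step, which is harmless per-weight but compounds across billions of weights and many layers; that's why an 8-bit (or 4-bit) serving change can make a model measurably "stupider" while the vendor's model card stays the same.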

Somewhere here is a good reaction meme joke about AI doomers having to cope with AGI dumbing itself down to maximize token usage ($$$), not paperclips.

Not saying it's happening, but it'd be ironic.

I listened to the recent Odd Lots episode with Gina Raimondo (Biden's Secretary of Commerce) and I would echo the sentiment here:

Most Americans when they hear AI, they get afraid, right? The vast, vast majority of Americans, "AI = anxiety", "I am going to lose my job". I get that, you know, people are scared. I think it would be a huge mistake to like retard our AI progress with overregulation. [We] just talked about China. I want to win the AI race, I want America to lead the AI world. And I think when we get to the fifth or sixth inning of this AI revolution, whatever you want to call it, I firmly believe there will be more jobs. I do. I think that there will be new industries, new companies, new products and services. I'm an optimist. [That being said,] I am pretty worried about getting from the first inning to that inning.

I am not dismissive of AI (it made me more productive), but I also believe that software engineers will be around.

European tech, American tech, and regulation

tl;dr: what do you think about 1) European alternatives to American tech, and 2) European and American tech regulations?

Background

  1. Europeans (citizens, businesses, and governments) heavily rely on American tech. Europe has alternatives in most categories (e.g. phone, CDN) but most have less adoption.
  2. The US has regulations. The EU and its nations have different regulations, notably the Digital Services Act and Digital Markets Act. Occasionally a big company gets fined and told to change; they usually appeal, then sometimes still don't pay or change anything. The EU and its nations are widely regarded as having way more, stricter regulations and fines.

Recent events

Online ideas and my opinions

More radical

  • "The EU should ban and block US tech companies": on (pro tech freedom) Hacker News of all sites, which surprised me. Effectively a Great Firewall for the EU. I strongly disagree. More broadly, I believe people should have the freedom to stream propaganda from any nation they want: Russia, China, even Iran. I have no issue with governments directing citizens to their own propaganda and discouraging other sources, even preventing access for people who are so dumb they may actually believe whatever e.g. the Iran regime says. But this leads to the proposal's more significant, practical issue: way too many Europeans use American tech, and they aren't switching despite seemingly having some national pride and US dislike. European governments internally use Office and other American tools. It's near-term infeasible.
  • "Europe should stop protecting US Intellectual Property", from Cory Doctorow: while I'd love to see the end of IP, like I'd love to see the end of labor, this is also near-term infeasible, so I also strongly disagree. If a European nation "just stops" enforcing the DMCA, tech companies can "just stop" operating there, and remember that practically all of Europe still relies on them. Cory Doctorow has lots of interesting arguments, and I really admire and support his crusade against IP and enshittification, but his views are very extreme and some of his ideas go too far.
    • What I think European nations should do in the near term is provide leniency for and encourage companies to not over-enforce IP laws; for example, by supporting companies who get sued for not taking down content from a flawed DMCA claim (DMCA takedowns are heavily abused). Likewise, they should defend companies who are wrongly sued for copyright/patent infringement, and ensure that, however strictly IP is enforced, it's equally strict on small and big companies.
    • I'd still like to see the end of IP, but it must be done reasonably and with an alternative for deserving IP owners (particularly artists who need to make a living, and not platform owners who restrict users' content). For example, LLMs sidestep existing IP: they can scrape any website, build any app from a description, and generate copyrighted characters for personal use. Maybe European (and American) nations can accept AI companies training on copyrighted data in exchange for keeping this.

Less radical

  • "European nations and/or the EU should encourage and fund European alternatives": strongly agree. In general, I want to see more variety and innovation. In particular, I think everyone using locked-down platforms (social media, phones, mail, etc.) is really bad, and the way out is not regulation (though some is important/useful) but competition, so companies are pressured to open their platforms or at least stop degrading them.
    • Notably, I don't actually care whether the alternative platforms are European.
    • Unfortunately, I'm not optimistic that governments will help here. And I myself avoid mainstream social media, but still use an iPhone and Mac because they're better.
    • On Mistral: AI is particularly important, so Europe will be at a big disadvantage if they don't get competitive AI and America restricts its own. Mistral makes local models (as opposed to locked-down cloud ones), so I want them to succeed. However, even with full EU backing, they'd be outcompeted by OpenAI and Anthropic, who can release local models themselves, making all their effort and work seem wasted. Except I don't think it would actually be a waste, like how acquiring weapons isn't a waste when the deterrence from their existence makes them unnecessary.
  • "European nations should relax (tech and general employee) regulations to encourage innovation": agree, there are way too many. But I don't think they should relax them as far as the US. I don't know where to draw the line, and I don't have the motivation or discipline to understand existing regulations (not even getting into how they're applied in practice).

Vaguely, I believe American tech companies should be regulated more, since they seem to be damaging society and have effective monopolies due to network effects. And more importantly I want to see more tech innovation, which I think is hurt by less competition. But I don't exactly know how.

I generally think America and Europe should work together, but here, I think different regulatory frameworks and competing tech services is good.

I happen to have experience in this space, and my tl;dr take is very simple: EU tech regulation is, historically, productive when and only when it focuses on creating a general standard which big tech companies are required to meet, then leaves the technical details up to the companies. DMA-mandated interoperability for messaging apps is a good example. Anything more granular you can trust them to be too out-of-touch and slow-moving to get right, and that's how you end up with cookie popups everywhere.

On your opinion survey:

  • Agree that bans, fines intended to crush companies, etc. on US tech companies are stupid, and the US should treat them as a geopolitically hostile act (a real act, not like shitposting about Greenland). But European commentators and even policymakers these days are not as rational as they used to be; there are a lot of feelings of fear and inferiority that manifest as aggression.
  • What would happen if the US, in return, decided to stop protecting European IP? Oh no, how horrible, guess we gotta escalate further, maybe we can get Japan in somewhere on the escalation ladder? Seriously, not realistic but I agree we can move towards more lenient IP enforcement.
  • On European innovation, there's really nothing stopping French dirigisme from building an AI juggernaut like they built Airbus - except choking regulation, capital draining off into foreign markets and the welfare state, and a decline in high-capacity population. I have no doubt De Gaulle would be building a European hyperscaler right now if he were still in charge. Sadly...
  • EU labour laws are part of the picture but more relevant to the non-tech sectors. General compliance burden and access to capital (partially downstream from regulation) are bigger problems. But like the IP point it's a one-step-at-a-time thing even if the EU could fundamentally change its governing culture in a deregulatory direction.

Cory Doctorow has lots of interesting arguments, and I really admire and support his crusade against IP and enshittification, but his views are very extreme and some of his ideas go too far.

I'd give a different issue: regardless of how good or bad his ideas are, they're clearly unrelated to the actual goals he's claiming to champion. Twitter and YouTube and Discord and almost every company of relevance here are not market leaders due to the strength of their intellectual property; it's trivial to implement one-off examples of their functionality, and building a decent many-to-many implementation is a small business, not a large one. Their strengths come from their scaling capabilities and, to a far greater extent, the absolutely massive network advantages. The division from LibreOffice or GIMP to MSOffice and Photoshop isn't a massive, deep moat of algorithmic design or CPU optimizations, but a shallow one of user interface and user training. Individual people can build cell phones. It's just that only a rounding error's worth of people want that done, want to fund it, or want to use it once manufactured.

It might be more relevant for specialized software (operating systems, CAD work, simulation software), but notably none of these spaces are things Doctorow focuses on. He talks about iOS in the sense of jailbreaking iPhones, a matter where legal constraints have never been the primary limit. He never mentions Linux, and only mentions Microsoft to say they "bricked" the International Criminal Court's Outlook server due to sanctions (real world: cut access to Karim Khan's e-mail account). The ICC's moving to openDesk (also not mentioned, wouldn't have been my first choice)... and having it run by B1 Systems GmbH, a contractor in Germany. A quick google estimates <150 IT staff; having tried OpenDesk, I'd expect <20 full-time staff equivalent for the ICC, mostly tech support.

That is not a moonshot. It's definitely not the moonshot Doctorow's theory would need.

The only place they might be relevant is AI models (hmmm), and then only to the point where there are closed-source, high-capability models that could be cloned and run from EU services. That's not coherent to Doctorow's whole view - "Because even though the AI can't do the 's job, an AI salesman can convince the 's boss to fire them and replace them with an AI that can't do their job", that's the text - but he's not pretending to be coherent so much as tell his readers what he needs to get his goals, so whatever.

((Presumably they only ignore the copyright requests Doctorow dislikes, not artist and writer intellectual property, but to be fair, it's not like anyone without a hundred-million-dollar business can get an inter*state* copyright lawsuit, nevermind an international one.))

How's that supposed to work? Okay, the model leaks, quickly. That I can buy; I've been a proponent of the theory that 'the leak always gets through' even if it hasn't always applied in practice. The EU companies are able to clone the graphics cards or ASICs, probably. Can they make them? The current best fab is 18nm, and while they're planning to build a 2nm-ish plant, the current timeline is 2030 and also kinda a joke. Okay, well, over a long enough timeline the hardware and training costs get amortized, and what matters is the competitive landscape and inference cost. Is EU power going to be cheap? Regulatory compliance? Legal overhead?

What's the business plan, here? Be annoying?

Mistral makes local models (as opposed to locked-down cloud ones), so I want them to succeed. However, even with full EU backing, they'd be outcompeted by OpenAI and Anthropic, who can release local models themselves, making all their effort and work seem wasted

Mistral's been suffering for a while. It had some sizable influence in low-parameter models a year ago - and to an extent, still has: Cydonia is a Mistral-3.1-24B-derived model that's popular for roleplay, even if it introduces a lot of world consistency issues as context scales - but it's ranged from middling to actively bad since.

One complication here is that there are clear spaces that OpenAI and Anthropic are unlikely to want to explore, which would leave a niche for not-quite-frontier models that don't excel at things like coding but do focus well on other career spaces ... but that is likely to be more regulated in the EU, in ways that impact the ability of providers to provide decent models. And that's particularly overt for Mistral: one of the suspected causes for (some of the many) problems in Mistral 4 was the repeated 'safety' failures in Mistral 3 variants. Ideally, they'd be able to avoid regulatory failures without harming core capabilities, but so far the degree to which models seem to suffer from overcorrection correlates pretty heavily with regulatory exposure.

(Caveat: they could have also just found some local minima. Things are moving so fast in these spaces that they could well turn around quick.)

US subpoenas tech companies for private messages of European officials enforcing the DSA: the Trump admin is criticizing European governments for censoring speech, which is...true, and not just "hate speech" but sometimes just criticizing politicians.

Oh, snap. I was sitting on an effortpost on the subject, but never got around to finishing it. Since you're bringing it up, I'll just dump the draft I had stored:


Freedom of expression is a fundamental right in Europe and a shared core value with the United States across the democratic world.

Some of you might scoff at these words if you've been keeping tabs on what's going on in Europe. Some might scoff even harder upon realizing they come from a statement from the European Commission responding to Trump's travel sanctions against Commissioner Thierry Breton, who sent a letter to Elon Musk, threatening him with regulatory retaliation, ahead of his interview with Trump. But even if you were familiar with that situation, when you find out how deep this rabbit hole goes, it might turn out all that scoffing is nowhere near enough.

Recently the House Judiciary Committee released a report on EU laws' impact on American political speech. They subpoenaed the major platforms for documentation on the measures they took to comply with EU regulations, and the results were quite illuminating. One of the responses to the Twitter Files story was that it's a nothingburger. Private companies came up with private terms for using their private platform, and the government was essentially just pushing the "report" button. We've had plenty of conversations about whether that is an accurate portrayal of the situation, but aside from that, it now looks like the core premise of that response is wrong. The platforms' terms of service weren't established of their own accord, but rather under pressure from the European Commission. From the report:

starting in 2015 and 2016, the European Commission began creating various forums in which European regulators could meet directly with technology platforms to discuss how and what content should be moderated. Though ostensibly meant to combat "misinformation" and "hate speech," nonpublic documents produced to the Committee show that for the last ten years, the European Commission has directly pressured platforms to censor lawful, political speech in the European Union and abroad.

The EU Internet Forum (EUIF), founded in 2015 by the European Commission’s Directorate-General for Migration and Home Affairs (DG-Home), was among the first of these initiatives. By 2023, EUIF published a "handbook ... for use by tech companies when moderating" lawful, non-violative speech such as:

  • "Populist rhetoric";
  • "Anti-government/anti-EU" content;
  • "Anti-elite" content;
  • "Political satire";
  • "Anti-migrants and Islamophobic content";
  • "Anti-refugee/immigrant sentiment";
  • "Anti-LGBTIQ . . . content"; and
  • "Meme subculture."

Now, some might say that just because an official government body invited some companies to have a friendly conversation about moderating their platforms, doesn't mean any pressure is actually being put on them, but the problem with that theory is that the companies themselves weren't under that impression. The report contains examples of emails such as this one from Google:

...co-chairs set the agenda under (strong) impetus from the EU Commission; decision is taken by "consensus" -- but consensus can be heavily pressed by the EC, if they disagree where it's going.

or:

The EC is opening the GAI subgroup under the Code of Practice. I assume we want to join (we don't really have a choice), but do we also want to co-chair it?

or one from TikTok about adding rules against "marginalizing speech and behaviour", and various forms of "misinformation":

This update, which was advised by the legal team, is mainly related to compliance with the Digital Services Act

Now, maybe this is just a case of overzealous bureaucrats throwing their weight around to push their private agenda? Despite the letter of support for Breton after Trump's sanctions, the official line was that he was acting without authorization, so maybe that was also the case here? Well, maybe, but said bureaucrats really wanted to make it seem like this is all done with the blessing of the top brass. For example, an email from an EC official to representatives at Microsoft, Google, Facebook, Twitter, and Bytedance signed off with:

Given the urgency, I take the liberty to use this informal channel but I want to assure you that I am addressing you with the agreement of the Vice-President (who is cooperating on this with [redacted] and [redacted]) and the knowledge of the President.

Personally, I think this casts doubt on the claims about Breton as well.


The executive summary of the report isn't a long read, and has receipts for a few other dramas like the Romanian elections.

Now, some might say that just because an official government body invited some companies to have a friendly conversation about moderating their platforms, doesn't mean any pressure is actually being put on them, but the problem with that theory is that the companies themselves weren't under that impression.

One reason tech companies might form that impression is that regulatory bodies seem to be developing a habit of giving off that impression even without exercising formal power. Recently, in eSafety Commissioner v Baumgarten, the Australian eSafety Commissioner was revealed to have been sending "informal requests" to X using X's legal requests portal, and then turning around and claiming to the Administrative Review Tribunal that the decisions were not reviewable because they weren't exercises of the formal powers granted to the Commissioner.

https://www.auspublaw.org/home/2026/3/the-government-is-not-the-same-as-us-esafety-commissioner-v-baumgarten-2026-fcafc-12-gwdak

The Baumgarten case reveals that the Commission has gone beyond its statutory mandate by working to limit online speech that it considers harmful or otherwise problematic, but that falls below the thresholds set in the statute. Ms Baumgarten posted a video on X which was critical of a Melbourne primary school teacher for organising a ‘Queer Club’ for students. The post named the teacher, but did not identify any children. The eSafety Commission received a complaint about the post. The complaint was considered by Samantha Caruana, an official within the eSafety Commission, who had no delegated authority to compel social media services to remove posts. Ms Caruana concluded that the post probably did not amount to ‘cyber-abuse material’ for the purposes of s 7 of the Online Safety Act. Despite her conclusion, Ms Caruana filled in a form on X’s ‘Legal Requests Portal’ asking that the post be taken down. The eSafety Commission’s request referred to s 7 of the Online Safety Act as authority for the request.

Ms Baumgarten sought review of the eSafety Commission’s ‘decision’ to order the removal of her post in the Administrative Appeals Tribunal (which was replaced by the Administrative Review Tribunal (ART) during the course of her case). Section 220 of the Online Safety Act provides for a right to seek merits review of the Commissioner’s decisions to issue removal notices. But the Commission argued that Ms Baumgarten had no right to challenge the decision in the Tribunal, because it had not made a removal decision under s 88. Rather, the Commission argued, it had simply made a request of X that it remove the post. Thus, the Commission argued, that there was no ‘decision’ for the Tribunal to review and it had no jurisdiction.

The Commissioner's argument was rejected by the ART and the appeal rejected by the Federal Court of Australia.

My general impression is that Europe doesn't respect free speech the way we do in the United States. For example, one RooshV was banned from entering England (even for getting on a connecting flight) because he expressed views the leaders of England disagreed with.

I think a strong case can be made that a lot of the online censorship we saw in the late 2010s and early 2020s (e.g. not allowing people to have frank discussions about trans rights; and yes, it was Reddit's censorship of trans-rights discussion that drove The Motte to its own website instead of remaining on Reddit) was partly a result of EU overreach. Indeed, Twitter/X doesn't censor the way most other major social media platforms do, and it was hit with a huge fine from the EU late last year; I feel the EU unfairly targeted Twitter/X because that platform allows people to express views which get people banned on other platforms.

For one, I'm glad this site is here to allow frank discussions. Yeah, it can be right-leaning, but considering a lot of mainstream right-wing views are straight up suppressed and silenced on other platforms, it's no surprise right-wing people flock to the relatively few platforms which allow frank, open discussion.

I'm saying all this as a classical liberal.

Has there ever in history been a government that implemented any speech restrictions that didn't spread to broad criticism of the ruling party?

Arguably Singapore? It's legal to criticize the People's Action Party, despite Singapore not being super pro-free speech in general.

They might not honor it perfectly in the breach, I suppose.

IANAS, but my impression of Singapore was that criticism of the party (ideally constructive criticism) was accepted, but that criticism of prominent individuals faced very harsh and sometimes politicized libel laws. Not bad as they go.

The US? Say what you will about America, the first amendment is amazing. I suppose it depends on what you mean by "the ruling party".

Edit1: There have been certain attempts, like the Alien and Sedition Acts of 1798, but overall the first amendment has been a strong bulwark against government overreach.

I think the first amendment reinforces my point: its text contains no speech restrictions. The narrow exceptions exist only outside it, and even those have been twisted (e.g. prosecuting Communists for "planning to overthrow the government" in Dennis v. United States).

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

I suspect that speech hasn't been prosecuted more in the US because children are taught this first, then exceptions later, so they're generally biased against exceptions.

In 1969, Dennis was de facto overruled by Brandenburg v. Ohio.

Took 18 years, but that's a short time compared to the long history of a country.

I suspect that speech hasn't been prosecuted more in the US because children are taught this first, then exceptions later, so they're generally biased against exceptions.

Yes, makes sense, the freedom is broad, so the exceptions are "the exceptions that prove the rule".

I feel Americans are far too quick to congratulate themselves on the topic of freedoms and rights. Not only has the US government worked to censor in recent years using big tech as a proxy, it has also done so historically, such as with the case of Schenck v. United States, Charles Coughlin, McCarthyism or COINTELPRO and similar.

If the government was the owner of all major communications platforms, then yeah, the first amendment would technically be super relevant. But when American law is willing to leverage the right of a single company owner to censor speech as being equal to the right of millions of people to express themselves on that company's platform, you have a state of affairs that is effectively no different from not having any free speech rights at all. Which is exactly the case for anyone wanting to color outside the lines of American powers that be. Maybe not by putting you in jail, as is the case in Europe. But via indirect means, such as with the examples given earlier, or suddenly not having a bank account, or not being able to freely choose an airline or host a website by any normal means.

I think a secondary part is that what a lot of Americans believe doesn't seem to matter a whole lot. And even if that wasn't the case, American media has had such a stranglehold on the public that it's not as if there was ever going to be a risk of anyone believing anything truly heterodox to begin with. And if that were ever a likely case, the American government can and has stepped in to get ahead of those movements. The sheer mass of the American media and political system has been too great for any popular grass roots movement to budge it until, arguably, 2016 Trump arrived.

But even after Trump, TPTB have learned their lesson, are course correcting and we are now only celebrating 'free speech' in America because a South African bought twitter.

  1. As pointed out by @ChickenOverlord, Americans and their speech are so, so, so much freer than in other countries that sometimes I feel Americans don't get congratulated enough for it
  2. Yes, that's right, the question was about government overreach. Being able to does not mean it has to be easy. And yeah, the difficulty of getting your ideas and thoughts across to others is part of the friction of communication. I'm not sure what is being asked here; are you asking for political belief to be a protected class, so private companies cannot use it as an excuse to offer/not-offer products and services? Either way, if people want their speech heard, nothing prevents them from taking over or recreating what they need.
  3. What Americans believe matters a whole lot. Trump's 2.0 victory is complete vindication of how what the median American thinks matters and led the country to what they want. Feels like every other presidency can be easily characterized as "newcomer with grassroots momentum that trounced the elite favorite".
  4. So the freedom of the people worked. An American, with the means and opportunities to make a change, made a change! He certainly didn't stay in South Africa to do that. He did what he did with Twitter because he had ideological and philosophical values, very American ones if I might add, that drove his actions.

I feel Americans are far too quick to congratulate themselves on the topic of freedoms and rights.

Not at all. Our track record is far from perfect, but we still somehow manage to completely eclipse every other country on earth when it comes to speech rights, in spite of our failures and shortcomings. We can call our politicians idiots without getting arrested [1], and in the rare cases when cops have overreached for that sort of thing the courts have shut it down.

1: https://www.dw.com/en/germany-greens-habeck-presses-charges-over-online-insult/a-70793557

That's a comparison revolving around being the cleanest pig in the sty. If the culmination of the freedom loving spirit of Americans can't reach beyond comparing themselves to the Germans then the point, that Americans are far too quick to congratulate themselves on the topic of freedom and rights, is very much made.

The Germans, the UK, the Canadians, in fact most of Europe…

The four ideas are not mutually exclusive; in theory, the EU can do all four. In fact, there is an existing, successful example of doing all of them: China. It is in the greater interest of both America AND the EU, geopolitically at least, not to get to that point, because individually they won't be able to compete with China if they also have to spare energy competing with each other.