TheAntipopulist
Ah, it's interesting Perplexity actually had an advantage there. This is the only time someone has been able to give me an actual reason anyone would use Perplexity over any other LLM.
I remember getting pretty good citations (actual papers) from LLMs in early 2024, but you had to use the "deep think" mode (or whatever it was called) that took like 15 minutes to run. Now that's unnecessary and, like you say, ChatGPT is a lot more interesting out of the box. You still have to check things to be sure, but 95% of the answers it gives aren't hallucinations.
I presume the AI slop merchants are repackaging that sort of story over and over because it gets a lot of views, and I presume it gets a lot of views because a lot of tech workers can empathize with it. It's a particularly infuriating mix of complete obliviousness + pigheadedness that's ripe for parody.
I'll spend a lot of time understanding what needs to be done, and then 15-30 minutes describing, in detail, what needs to be done, and supplying the necessary context.
Yeah, this is the way to get the best results. But man, 15-30 minutes? Is that per context window, or per major project? If that's per context window then you're doing even more than I do.
And yeah, I find it somewhat sad that software engineering seems to eventually be going the way of blacksmithing, but as a younger dev I'm excited about how much more I can create on my own terms now. I always wanted to create video games as a hobby, and it's so much more viable with AI -- partially due to faster coding, but honestly more due to stuff like Nano Banana Pro.
Surprised to see how dopey people still are. How can someone be a CTO and not know the difference between the models under the hood of Copilot? You'd think a CTO would know better.
Yeah, it's painful to watch what the executives at our company do a lot of the time.
I fired up Claude Code with Opus 4.5 and got it to build a predator-prey species simulation with an inbuilt procedural world generator and nice features like A* search for pathfinding - and it one-shot it, producing in about 5 minutes something which I know took me several weeks to build a decade ago when I was teaching myself some basic programming, and which I think would take most seasoned hobbyists several hours. And it did it in minutes.
I mean, Twitter users have always been saying Opus/Sonnet is sooooooo good since like Sonnet 3.0 back in early 2024. I know the capabilities are advancing steadily, but early 2025 felt like much more of a step change. Sonnet 3.7 almost certainly could have handled his A* search problem, just with a few more reprompts on average than Sonnet/Opus 4.5. And again, do somewhat fewer reprompts, or not having to break the issue up as much, really matter that much?
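For context on the difficulty: grid A* is compact enough that a model has very little room to go wrong. Here's a minimal sketch of the kind of thing being described, in Python (the grid representation and coordinates are hypothetical illustrations, not the commenter's actual code):

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 4-connected boolean grid (True = walkable).
    Manhattan distance is an admissible heuristic for unit step costs."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # (f = g + h, g, cell)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        _, g, cur = heapq.heappop(open_heap)
        if cur == goal:
            # Walk parent pointers back to recover the path.
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]]
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                came_from[nxt] = cur
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return None  # goal unreachable
```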
write a serviceable database server
An LLM could absolutely do 30-40% of this, assuming it's being directed by a senior programmer and has said programmer on hand to do the other 60-70% of it. We might be getting into a semantics issue on this, as I'm assuming that a "senior" programmer would still be doing a decent amount of coding. Perhaps in some orgs that "senior" title means they're only doing architecting and code reviews, in which case that 30-40% might not be accurate.
"what kind of database we actually want to use here and what is more practical given limited resources we have?"
LLMs are also fairly decent at answering design questions like this, assuming they have the right context. I might not always go with an LLM's #1 answer, but the top 3 will usually have at least 2 decent responses.
Last frustrating exercise was when I tried to have it explain to me how to use two certain APIs together, and it wrote plausible code and configs, except it didn't work.
For most of this paragraph, were you 1) using a frontier model on its "thinking" mode (or equivalent), and 2) did you give the LLM enough context for what a correct API call should look like? Not just something like "I'm using version 2 of the API", but actually uploading the documentation of version 2 as context. I want to emphasize that context management is critically important. Beyond that it sounds like you just hit a doom-loop, which still happens sometimes, but there are solutions to that. Usually it's just telling the LLM to break the problem into smaller chunks and to use extra verification (e.g. print statements) so it can tell where the error is actually coming from.
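To make the "extra verification" point concrete, here's a minimal sketch of instrumenting a failing call so the model can see exactly which stage breaks (the endpoint, payload, and wrapper are hypothetical placeholders, not any particular API):

```python
import json
import urllib.request

def call_api(url, payload):
    # [1] Confirm the request body is actually what you think it is.
    body = json.dumps(payload).encode()
    print(f"[1] request body: {body!r}")

    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    # [2] Confirm the call itself succeeds before blaming the parsing.
    with urllib.request.urlopen(req) as resp:
        print(f"[2] HTTP status: {resp.status}")
        raw = resp.read()

    # [3] Confirm the response shape matches what the docs promise.
    print(f"[3] raw response (truncated): {raw[:200]!r}")
    return json.loads(raw)
```

Pasting that numbered output back into the conversation usually snaps the model out of the doom-loop, because it now knows which stage to fix instead of guessing.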
The main issue that I see with AI is that it is exceedingly difficult to maintain a well-structured project over time with AI. I want it to use the same coding standard throughout the project. I don't want magic numbers in my code. I only want each thing to be defined once in the project. Each block of code it generates may be well written, but the code base will spaghettify faster with AI. Unless the context window becomes the size of my codebase, or of a senior dev's knowledge of it, this is inevitable.
Most of this seems like it would be solved (or at least mitigated) by managing context correctly and having a prompt library. I don't know the particulars of your codebase so there's a chance it's crazily spread out, but if you just tell the AI that something has already been defined, then any frontier LLM will be pretty good at respecting that. If there's a formatting style you're particular about, just add it to one of your prompts when you start a new conversation.
It is clear that we can't take the human out of the loop.
I definitely agree with this for at least the near future (<=5 years).
Beyond what I've listed here, I've also used AI for data analytics using R/Python. It's really good at disentangling the ggplot2 package for instance, which is something I had always wanted to make better use of pre-LLM but the syntax was rough. It's good at helping generate code to clean data and do all the other stuff related to data science.
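For a flavor of the routine cleaning work I mean, this is the level of boilerplate LLMs reliably grind out on the first try (the file and column names here are made-up examples):

```python
import pandas as pd

# Typical first-pass cleanup: normalize headers, parse types, dedupe, impute.
df = pd.read_csv("survey.csv")
df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")
df["response_date"] = pd.to_datetime(df["response_date"], errors="coerce")
df = df.drop_duplicates(subset="respondent_id")
df["income"] = pd.to_numeric(df["income"], errors="coerce")
df["income"] = df["income"].fillna(df["income"].median())
```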
I've also used it a bit for some hobbyist game development in C#. I don't know much C# myself, and LLMs are just a massive help when it comes to getting started and learning fast. They also help prevent tech debt that comes from using novice solutions that I'd otherwise be prone to.
At this point it's better to ask "what standalone programming tasks can't LLMs help with", and the answer is very little. They're less of a speedup compared to very experienced developers that have been working on the same huge codebase for 20+ years, but even in that scenario they can grind out boilerplate if you know how to prompt them.
Thanks for the kind words. It's cool that you helped design the ICPC; I remember reading about the 11/12 score earlier and yeah, it's indeed impressive that a machine can do that. I just wish more of that would translate over to the real world. Like Dwarkesh said, AI's abilities on evals and benchmarks are growing at the rate that short-timelines people predict, while AI's abilities in the real world are still growing at the rate that long-timelines people predict.
I feel like we don't really disagree on context. If it needs to be broken up between a "short context" and "long context" to make it work, then yeah, that'd be good. It sorta works like that already behind the scenes with the compression schemes LLM designers have; it just doesn't work super well. I too was hoping some solution would be developed in 2025, and was slightly disappointed when it wasn't. Hopefully we get one in 2026 as you say -- if we don't, it could portend a pessimistic outlook for short timelines, which would be a shame.
AI will take over some things that humans used to have a monopoly on, like companionship, but other sectors will remain the realm of humans for decades. Human doctors, for instance, will still be wanted even if AI is proven to be better in every possible way, since "that's how we've always done it". It will take many years to work through stuff like that.
Oh look, it's another batch of absolutely nothing. Still no evidence of any conspiracies involving Epstein trafficking young girls to other men. Yet every new revelation is treated like it confirms the narrative.
Well, I guess there was one big revelation: Bill Clinton. Not that he actually did anything bad, but that he appears in the photos at all. This lets MAGA do something it's always interested in: give a pass for daddy Trump by saying "whatabout the Left?" Instead of looking at the evidence and deciding this whole Epstein stuff belongs in the political trash bin, MAGA can now continue being conspiratorial about its outgroup. Democrats are "in a panic". The Epstein files are overall "just a Clinton photo album".
It created ample unemployment among industries where the machines were just flat out better than a human could be.
Yes, but those people then just found different jobs, and society became more efficient overall. Losing your job temporarily sucks but creative destruction is part of living in a vibrant society.
The whole premise with AGI is that it can in theory be better at everything that a human could do.
AGI will never be better than humans at simply being human, which will count for a lot to some people and to some fields.
Strong agree that AI will not cause mass unemployment. If the industrial revolution didn't create widespread unemployment while pushing 80%+ of the population from agriculture to manufacturing/services, then it's safe to assume a society will basically always find work of some sort to do, even if it's just silly stuff like zero-sum status games.
Also agree that AI will be "mid" relative to the FOOM doomer and singularity expectations that some have. I'm a bit more bearish on the productivity gains than you are. There will certainly be gains to some extent, but a lot of society's blockers are socially-enforced like housing restrictions, lawyerly reviews, etc. that are political problems that AI won't be able to solve by itself.
From what little I've seen, he's a conspiratorial slop-merchant peddling some combination of common-sense-implied-as-dark-truth, along with obvious nonsense presented in a confident cadence. I can understand why people get sucked in by the common-sense stuff, since seeing it repackaged as a "dark truth" can be fun for some people, but accepting the rest of his arguments reflects poorly on your epistemic hygiene. I'm a bit more familiar with Whatifalthist, and he fits this description to a T.
the narrative that algorithms have a left wing bias and that dissident voices are difficult to find.
You're in a very right wing ecosystem if this is the only narrative you've heard about algorithms. Leftists have been complaining about "radicalization pipelines" for a decade+ now, and it formed one of the key arguments they made for cancel culture.
You must be working with very strange/niche languages then. I've had no trouble getting them to understand SQR and a couple other extremely old languages like FAME.
Another possible outcome is an uptick of antisocial behaviour from those who expect to remain childless, whether by choice or not, as they hear the message that society does not care for them
This is what I'd do, for sure. I'm all in favor of cutting benefits for the olds, but detest the fascist-feminist synthesis that it's all men's fault for not "stepping up" or some nonsense, and that the best remedy is social harassment.
I would have agreed with you last year, but it's getting easier and easier to skip learning the language you're working in, too. It's obviously still useful to have at least a basic understanding, but I feel we're less than 10 years from trusting LLM code output as much as we trust compiler output. Nobody reads compiler output anymore.
I'm a SWE that's never worked with Rust (I've mostly been in R/Python, then SQL/Java/C#). I feel like with the advent of LLMs, the choice of programming language will matter much less in the near future. Before LLMs, learning a new language imposed a lot of costs on doing the basic stuff: having 10+ years of experience in a language means you can bust out features much more quickly than someone who has to constantly go to StackOverflow to figure out boilerplate. I feel like a lot of the debates over languages were really just "please don't make me learn this new crap", with people having their preferred language and then actively searching for reasons to defend it. Now you can just have Claude Code do the boilerplate in any language for you, and focus on testing things instead. I'm converting old SQR code into SQL now, and pre-LLM this would have required me to have at least a basic knowledge of SQR, but that's no longer really the case.
this thread, is my best attempt to provide.
I read through it and I'm still not seeing any good examples. I see two main examples with you claiming they violate the unwritten rules of debate by making a "flat dismissal" and being "uncharitable" in some nebulous way. Once again, this seems like a case of "you just don't like his arguments". I don't either, as I think they're bad arguments, but I'm really not seeing anything objectionable in terms of debate decorum, at least nothing beyond what right-wing posters do on a nearly constant basis without any intervention.
as religion is getting a bit of an upswing
Not a thing: source 1, source 2, source 3. At most you could say that the decline has levelled off by some metrics, but statistics keep showing that the importance of religion in people's lives is slowly but monotonically declining.
I'm not really sure what you mean by "postmodern" here other than as a vague gesture at a blob of liberalism-wokism-rationality etc. Perhaps the Right will come to dominate. Currently, the Right is dominated by conspiracists like Candace Owens, Just Asking Questions connoisseurs like Tucker Carlson, and shitposters like Catturd. As bad as it is right now, I have faith that it will eventually be replaced by something even worse.
A lot of the hate for Jews comes from the following areas:
- Genuine (stupid) neo-Nazis who hate Jews for little reason other than that Hitler hated them 80 years ago. Some people want to keep up the LARP.
- Disgruntlement that the US has acted like an arm of Israeli foreign policy with almost no pushback for peoples' entire lives.
- Jewish domination of culture relative to their population count, and their pushing of leftist propaganda from positions of power. Jews are overrepresented due to their high verbal IQ, and this has given them quite a bit of clout. Dumb rightists have hallucinated a coordinated attempt to destroy America, when the reality is much simpler: smart people are just overwhelmingly liberal no matter where you go. There was also extra incentive for Jews to push for leftism, since they perceived the Right as their main threat for a long time, and many probably thought that an America dedicated to multiculturalism was the best defense against anti-Semitism.
I personally agree that Jews are pretty great overall, and it seems like they've been having a slow-motion awakening on the threats of mass-migration. A good chunk of them are becoming socially conservative, but are leaning towards a more intelligent conservatism rather than the conspiratorial populist rightism. Maybe they'll be the ones to eventually salvage the Republican party, doing the job that the tech-right was supposed to do but utterly failed at.
I mean, the Nazi Bar analogy explains a decent chunk of it at least. But this is the type of Nazi Bar where anti-Nazis are viewed with deep suspicion by most of the patrons, as well as the barkeep.
I don't recall Amadan explaining that to me, but maybe I just forgot or only glanced at his reply at some point. It doesn't really change my point, though the fact he's not banned right now is something I'll keep in mind.
The conversation I linked is a great example of him not being hostile to anyone involved in the conversation, while people like Amadan are using tons of personal attacks.
Darwin was banned for a long time at some point. Is he unbanned now? I thought it was a permaban, but maybe I'm misremembering.
He confidently asserted something as fact, was shown that he was wrong, and then got hostile about it.
I've never seen an example of him getting hostile despite asking people multiple times for examples of his worst posts. I've only seen people getting hostile towards him.
I have stated a couple of times before that this place is not right-wing, it has not ever been.
I'm coming to this post from the AAQCs thread. This is farcically wrong. This site absolutely tilts pretty far to the right. That's not to say it's exclusively right-wing, but the following are all true:
- The Quality Contributions threads are a combination of nonpartisan wonkposts and right-wingers creatively sneering at the left. There is no equivalent of left-wingers creatively sneering at the right, due to a combination of there being fewer left-wingers and any left-leaning effortpost being much less likely to be nominated.
- Upvotes/downvotes skew rightward. They also skew towards longer/higher-quality posts, which some people try to point to as the only effect, but low-quality left-leaning posts will almost always be heavily downvoted, while there are plenty of low-quality right-leaning posts that get highly upvoted.
- Consistently left-leaning posters face much higher moderator scrutiny, and can follow all the rules and still get banned for frivolous violations of rules that plenty of right-leaning accounts break all the time. A great example is Darwin, who was a prolific left-leaning poster. There was plenty of consensus that he was "bad" in some nebulous way, but when I asked repeatedly what was wrong I was only ever given vague runarounds and examples of posts that proved my point, like this one, where I disagree with Darwin's political point, but in terms of debate etiquette and rule-following his detractors were massively worse than he ever was.

Ramaswamy is doing some motte-and-bailey nonsense here, pointing out a few real flaws in American culture, but then using that as a non-sequitur to justify his ridiculous immigration views. The simple fact is that the H1B system is used to undercut American wages. While ostensibly only permitting "foreign experts", companies game the system by treating diploma-mill bachelor's degrees from India as valid, and then paying the holders garbage salaries. An easy solution would be to require anyone hired on an H1B visa to be paid a high relative wage. Basically everyone agrees this would fix the problem, but nobody makes the change because they actually want to use H1Bs as a cynical vehicle for mass migration.