
Read this article on Substack for images and better formatting

This article will cover the following:

  • A summary of where I think AI stands for software developers as of 2025.

  • How much AI has progressed in 2025, based on vibes.

  • A summary of how I’m currently using AI for my work, and an estimate of how much it’s speeding me up in certain tasks.

  • A summary of the top-down attempt by my company's executives to get people to use AI.

  • How much I’ve seen the other developers that I work with use AI of their own volition.

  • What advancements in AI I'd find most helpful going forward.

I’ll keep my explanations non-technical so you don’t have to be a software engineer yourself to understand them.

As 2025 comes to a close, it marks the end of the third full year of the LLM-driven AI hype cycle. Things are still going strong, although there's been an uptick in talk of an impending dotcom-style crash. Basically everyone expects that to happen eventually; it's just a question of when and how hard it will bite. AI is almost certainly overvalued relative to the revenue it can generate in the short term, but it's also clear that a lot of people "predicting" a crash are really just wishcasting for AI to go away like VR, NFTs, the Metaverse, etc. have done. That's simply not going to happen. There will be no putting this genie back in the bottle.

Where AI stands for developers

The past few years of using AI to build software have looked something like this:

2023: This AI thing is nifty, and it makes a decent replacement for StackOverflow for small, specific tasks.

2024: OK, this is getting scarily good. Unless I’m missing something, this seems like it will change coding forever.

2025: Yep, coding has definitely been changed forever.

There’s no doubt in my mind that AI is the way a lot of future coding will be done. That idea mostly solidified for me in 2024, and what I’ve seen in 2025 has only fortified it. The only question is how far it goes, which almost entirely depends on how good the technology eventually becomes, and how quickly. If AI stalls at its current level, then it would only produce a long-run productivity increase for coders of 20-50% or so. If it fulfills the two wishes I discuss towards the end of this article, then it could automate huge swathes of coding jobs and increase productivity by 100%+, though there would still almost certainly be human work in interfacing with and building these systems. I don’t foresee a total replacement of SWEs anytime in the near future without a massive recursive self-improvement loop that comes out of left field.

If you’re a developer, you’re shooting yourself in the foot if you’re not using AI to speed up at least some parts of your job. This isn’t an employment risk now, but it could be in 5-10 years if progress continues at its current pace[1]. My range for “reasonable disagreement” is the following: on the low end it’s something like “AI is useful in some areas, but it’s still woefully deficient for most parts of the job”, and on the high end it’s something like “AI is extremely useful for large parts of the job, although you can’t trust it blindly yet”. Beyond the upper end, I remain pessimistic about “vibe coding” for anything except the most disposable proof-of-concept software; below the lower end, I respect anyone a bit less if they confidently dismiss AI for programming entirely and claim it's just another passing fad. AI sits well within those two extremes.

AI progress in 2025

As for how AI progressed in 2025, results were mixed.

If evaluated from a historical context where technological progress is nearly stalled outside of the world of bits, then 2025 was a bumper year. Indeed, every year since 2023 has seen bigger improvements in AI than in practically any other field.

On the other hand, if we evaluate against the claims of the hypester CEOs, FOOM doomers, and optimistic accelerationists, then 2025 was highly disappointing (where’s my singularity???).

From my own expectations that were somewhere in between those two extremes, 2025 was a slight-to-moderate disappointment. There was a fair bit of genuine progress, but there’s still a lot of work to be done.

There were two major areas of advancement: LLM models got smarter, and the tools to use them got a bit better.

In terms of LLMs getting smarter, most of the tangible advancements came in the first few months of the year, as reasoning paradigms proliferated and Gemini 2.5 Pro, ChatGPT o3, Claude 3.7 Sonnet, and DeepSeek took the world by storm. This happened mostly between January and April[2]. These models were good enough that they could solve basically any self-contained coding challenge within a few prompts, as long as you supplied enough context. They could still make stupid mistakes every now and then, and could get caught in doom loops occasionally, but these weren’t particularly hard to resolve if you knew what to look for and had experience with the tricks that would generally solve them. Comparing the benchmarks of someone using ChatGPT 4o in November 2024 against ChatGPT o3 in April 2025 makes the jump stark (the comparison chart is in the Substack version).

After this, though, there wasn’t much beyond small, iterative improvements. New versions of Claude, Gemini 2.5 getting tuned in May and June before 3.0 released later in the year, ChatGPT getting v5, 5.1, and 5.2 -- none of these had any “oh wow” moments for me, nor did I notice significant improvements over several months of use. I’m playing with Sonnet and Opus 4.5 now, and they’re… maybe a small step up? I’ve noticed they will correctly ask for additional context more often rather than hallucinate the way weaker models tended to, but that might also be explained by me improving my prompts. I keep up with model releases pretty well, and I see their benchmarks continuing to slowly rise, but it doesn’t really feel like the model I’m using in December 2025 is obviously smarter than the one I was using in April 2025. Obviously this is all highly anecdotal, but vibes and personal use are how a lot of people primarily judge AI.

Part of the issue is almost certainly diminishing returns. Would it make that much of a difference if LLMs could code some feature in an average of 4 reprompts instead of 5? It’d probably be useful if you could consistently get that number down to 1 for all coding tasks as you could then trust LLMs like we do for compilers (it’s very rare for a dev to check compiler output any more). But getting AI to that level will take exponentially more work, and I doubt it will be doable at all without unhobbling AI in other ways, e.g. automating context management instead of relying on a human to do that.

The other major area of improvement was in the tools to use these LLMs, and by that I’m talking about AI “agent” tools like ChatGPT Codex and Claude Code. These are glorified wrappers that are nonetheless useful at doing 2 major things:

  1. They edit code in-line. In 2024 most people relied on copy+pasting code from the browser-based versions of LLMs, which caused a number of problems. Either the AI would slowly regenerate the entire program for every single alteration, which took forever and burned a bunch of tokens, or it would regenerate only parts of the code and rely on the user to selectively copy+paste the updated code into the right locations, which caused an endless series of headaches whenever there was a mistake.
  2. They have a limited ability to go out and find the context that they need, assuming it’s relatively close by (i.e. in the same repo).

I don’t want to undersell the usefulness of tools like Claude Code, as those two things are indeed very helpful. However, the reason I called them “glorified wrappers” and put the “agent” descriptor in quotation marks is because they’re still extremely limited, and aren’t what I had in mind when people were talking about “agents” a year or two ago. They cannot access the full suite of tools a typical fullstack engineer needs to do his job. They cannot test a program, visually inspect its output, and try again if things don’t look right. Heck, they don’t even do the basic step of recompiling the program to see if there are any obvious errors with the code that was just added. When people discussed “agents”, I had something in mind that was much more general-purpose than what we have right now.

Both Sam Altman and Jensen Huang stated that 2025 would be the “Year of AI Agents”, but this largely did not happen, at least in terms of general-purpose computer agents. There were a few releases like Operator that generated some buzz earlier in the year, but they were so bad that they were functionally worthless, and I’ve heard very little in the way of progress since then. I think this is a bigger deal than most people realize, as at least mediocre skill in general computer use is almost certainly necessary (though not sufficient) for AI to break free of the rut of short-time-horizon tasks it’s currently stuck in.

AI on the job in 2025

Below is a chart of the tasks I do as a software engineer, how helpful AI is in completing them, and the approximate percent of my working hours spent on each in 2020 vs today. My job over the past year has been modernizing a reporting tool that our Research team uses for statistical releases. The old tool was built in the 90s using SQR and Java Swing, i.e. two fossils duct-taped together; the new version uses SQL and React.

| Task group | AI utility (0-10) | % time in 2020 | % time in 2025 |
| --- | --- | --- | --- |
| Gathering requirements and useful meetings | 0 | 5% | 5% |
| Pointless meetings | 0 | 10% | 10% |
| Translating SQR to SQL (backend) | 7 | 50% | 25% |
| Creating the frontend | 8 | 15% | 10% |
| Creating the Excel templates | 2 | 10% | 15% |
| Dealing with the middle “guts” (service layer, service email, etc.) | 3 | 5% | 15% |
| Dealing with server issues | 4 | 10% | 20% |

Gathering requirements and useful meetings: AI is not helpful here, since I need to know how I should be designing things, and that’s most easily done by being present and asking questions. Perhaps I could use AI to help take notes, but so far that seems like more of a hassle than it’s worth.

Pointless meetings: AI is also not helpful here since these are socially enforced. Every day we have a completely unnecessary 30-45 minute “standup” (we are always sitting) where we go around the room with all the devs explaining what they worked on to the manager, like a mini performance review. These often devolve into debugging sessions if any task takes longer than a day. I think the manager is trying to offer helpful suggestions, but the way he phrases things always makes it sound like thinly veiled accusations that people aren’t working fast enough. He also has no real idea of the nuances of what people are working on, so his suggestions are almost always unhelpful. God I hate these meetings so much.

Translating SQR to SQL: In 2020 this would have been by far the bulk of my work. This is where the logic for the reports is, and AI has been a massive help. AI has been so good that I’ve just never bothered to learn much about SQR -- it’s close enough to SQL that I can get the gist of it, while any of the more complicated bits can be dealt with by the LLM. AI can often one-shot the data portions of the logic, although it has trouble with the more ad-hoc formatting bits. The reason I give this a 7/10 instead of a 10/10 is because my human skills are still needed to give context, define the translation goals, break compound reports into manageable chunks, reprompt when errors occur, and press the “compile” button. I’m more of a coordinator while AI is the programmer, and as a result I spend a lot less time on this part than I would otherwise need to.

Creating the frontend: AI is also quite helpful in wiring up the frontend. We have less ad hoc jank here, which typically means fewer reprompts are necessary, but my human skills are still required to give context, press the “compile” button, etc.

Creating Excel templates: The people we’re creating reports for are very particular about the formatting, which means AI isn’t very helpful. I’ve tried it a few times, but it’s just faster to do things myself to get the exact bolding, column widths, etc., though AI is good at producing conditional formatting formulas. Since AI has sped up the coding parts, I work through those faster and end up spending a greater percentage of my time on the templates.
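For the scriptable parts of this, a library like openpyxl can lock the fussy formatting into a template script instead of redoing it by hand each time. A minimal sketch of the idea, assuming a hypothetical report layout (the file name, ranges, and colors are made up for illustration):

```python
from openpyxl import Workbook
from openpyxl.styles import Font, PatternFill
from openpyxl.formatting.rule import CellIsRule

wb = Workbook()
ws = wb.active

# The exact bolding and column widths the report consumers insist on.
ws["A1"] = "Series"
ws["B1"] = "Change"
ws["A1"].font = Font(bold=True)
ws["B1"].font = Font(bold=True)
ws.column_dimensions["A"].width = 24
ws.column_dimensions["B"].width = 12

# Conditional formatting: highlight negative changes in red.
red_fill = PatternFill(start_color="FFC7CE", end_color="FFC7CE", fill_type="solid")
ws.conditional_formatting.add(
    "B2:B200",
    CellIsRule(operator="lessThan", formula=["0"], fill=red_fill),
)

wb.save("report_template.xlsx")
```

This is exactly the kind of snippet an LLM is good at generating, which matches my experience that AI helps with the formulas even when the overall template work stays manual.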

Dealing with the middle “guts” and Dealing with server issues: The servers and the middleware “guts” are a tangled mess of accumulated cruft solutions that have built up over the years. Before AI I would have tried to avoid touching this stuff as much as possible, but now it’s much easier to understand what’s going on and thus to make improvements. If the servers go down (which is very frequent) I can try to proactively fix the problem myself. I’ve also played around with Jenkins to automate some of the server redeployments, and learned how to pass info through the various layers to tell if something is going wrong. This stuff is too tangled and spread out over different repos for a tool like Claude Code to automate anything, but LLMs can at least tell me what’s going on when I copy+paste code.
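On the Jenkins side, the redeployment automation can be as simple as a script hitting Jenkins’ remote build endpoint so nobody has to click through the UI when a server needs a kick. A minimal sketch (the URL, job name, and credentials are placeholders, not my actual setup):

```python
import requests

# Placeholder values for illustration.
JENKINS_URL = "https://jenkins.example.internal"
JOB_NAME = "redeploy-reporting-app"
USER = "svc_redeploy"
API_TOKEN = "changeme"

def trigger_redeploy() -> None:
    """Queue the Jenkins job that redeploys the app server."""
    resp = requests.post(
        f"{JENKINS_URL}/job/{JOB_NAME}/build",
        auth=(USER, API_TOKEN),  # API-token auth avoids the CSRF crumb dance
    )
    resp.raise_for_status()  # Jenkins returns 201 Created on success
    print("Build queued at:", resp.headers.get("Location"))

if __name__ == "__main__":
    trigger_redeploy()
```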

Overall, I’d estimate my work speed has gone up by about 50%. That’s spread out over 3 main areas:

  • 10% working faster and doing more.
  • 20% spending more time to design more robustly, avoid tech debt, and build automation, mostly in the middle “guts” and server areas.
  • 20% dark leisure -- most of this article was written while at work, for instance.

Management’s push for AI

As the executives at the company I work for have tried to implement AI, they’ve done so in an extremely clumsy way that’s convinced me that most people have no real clue what they’re doing yet. If what I’ve witnessed is even somewhat representative, then the public’s overall understanding of AI is currently at the level of a senile, tech-illiterate grandma who thinks that Google is a real person you ask questions to online rather than a “search engine”. Even something as basic as knowing ChatGPT has a model selector with this thing called “Thinking” mode easily puts you in the top 10% of users, perhaps even the top 1%.

Upper management is plugged into whatever is trendy in the business world, and AI is certainly “in” right now. The CTO (with implicit pushing and approval from the rest of the executive team, especially the CEO) is heavily promoting AI use, but this seems destined to be mostly a farce for the usual reason all bandwagony business trends are -- being able to say the firm is “using AI” is the goal, more so than any extra efficiency it delivers. Oh sure, they’d happily take extra efficiency if it came incidentally, but it’s not the primary focus. This acts as one of the most powerful anti-advertisements for AI among the developers where I work. Fads like low-code tools or frameworks like CoffeeScript come along perennially and almost always disappoint, so it’s no wonder that many devs instinctively recoil when a slimy MBA who clearly doesn’t know much about AI or programming starts blathering on with “let me tell you about the future of software development!” We’ve seen it all before, so now whenever I try to sing the praises of AI I have to overcome that stench. “No really, AI is a powerful tool, just ignore how management is ham-handedly shoving it into everything, including places it obviously doesn’t belong.”

Since I’ve become known as the “AI guy” among the developers, I got put on the CTO’s “AI working group” -- basically a bunch of meetings twice a month where we discuss how AI could be implemented across the organization. I tried to go in with an open mind, but that didn’t last long.

Meeting 1: The CTO gathers the developers and asks for input on what AI tools we could add to the development stack. He mentions examples including Github Copilot, Claude Code, and a few others. I express strong interest in Claude Code as I’ve heard good things about it, and another dev chimes in to back me up. The CTO responds with “interesting perspective… but what have you heard about Copilot?” It becomes clear that the meeting is consultation theater and that his mind is already made up -- he tells us he’s attended several conferences where Copilot was mentioned favorably, and he has a bunch of promotional material to share with us. I ask if we could try both Copilot and Claude Code, but that idea is also shut down. I do some research on my phone and discover that Github Copilot is just a SaaS wrapper with the useful models locked behind “premium requests” that require a paid subscription. The difference is between a model like Sonnet 4.5 on the premium side and something like 4o-mini on the free side; if you’ve played around with those models, you know there’s a vast gulf in capabilities between them. I ask if we can get the business license to use the advanced models, and the CTO asks if the advanced models are really necessary. I cringe internally a bit, but convince him that they are, and he agrees to get the business licenses.

Meeting 2: The CTO gathers 2-3 employees from each department of the company for us to discuss how each of us is using AI in our day-to-day work environment. Well, at least that’s ostensibly what was supposed to happen. In practice it becomes an entire hour of introductions and each of us giving a little blurb on what our overall opinion is on AI. Basically everyone says some variation of “It sure seems interesting and I look forward to hearing how other people use it” without anyone actually saying how they use it beyond the most basic tasks like drafting emails. The only interesting part was when one of the employees said she hopes we can minimize AI use as much as possible due to how much power and water it uses. The CTO visibly recoils a bit on hearing this, but collects himself quickly to try to maintain that atmosphere of “all opinions are valid here”. I cringe a bit internally since the bit about water usage is fake, but I know the perils of turning the workplace into a forum for political debate so I say nothing.

Meeting 3: The CTO gathers the 2-3 employees per department and announces an “Action Plan”: the company will implement a feature that lets us build our own little mini Copilots with specific context baked in. He requests that each of us come up with 2-3 use cases where these could be utilized. These use tiny models that will be worthless for coding, so I mentally check out.

Meeting 4: This meeting involves the details of the “Action Plan” but I strategically take a vacation day to miss it.

How I’m seeing other developers (my coworkers) use AI

Despite the executives’ clumsy attempts with a top-down approach, I’ve seen a decent amount of bottom-up usage of AI among the developers that I work with, though it’s highly stratified.

Older dev #1: This guy is a month or two from retirement. He’s shown approximately zero interest in using AI himself, although he’s been impressed when he’s seen me use it.

Older dev #2: This zany individual has since left the company. He was, for some reason, a big fan of Perplexity -- an AI tool I’ve heard very little positive feedback about from anyone else. Beyond this, he tried using AI to solve a difficult coding challenge (likely one he didn’t want to deal with personally), but 1) it was a tough problem, 2) he was using a weak (free) AI model, and 3) he wasn’t particularly experienced in things like how to break down a problem for AI or how to give it the right amount of context, and it all combined to produce an unsatisfactory result. After this, he mostly dismissed AI in the software world as just another passing fad.

Middle age dev #1: This guy is probably the most skilled dev out of all of us. He’s tried using AI at least a little and had a few positive stories to share about it, but it seems to be something he’ll only use occasionally. When we were debugging an issue together, his first instinct was the old method of Google searching + StackOverflow, like it’s still 2020.

Middle age dev #2: He has shown some interest in using AI, but has little gumption to do it himself. He asked me for advice on using AI on a problem he clearly didn’t want to deal with since it was long and complicated. I told him that AI would probably only give middling results on a task like that, to his disappointment. I’ve seen very little AI use after this.

Middle age dev #3: I’ve seen him use AI a little bit here and there, but mostly just for debugging things, not systematically to generate code.

Middle age dev #4: He’s basically using AI as much as I do. I’ve asked him his opinions on the subject, and they’re a close mirror to my own -- he thinks AI is here to stay, but he’s doubtful of things like vibe coding. He subscribes to the $100 plan of Claude, and I see him using Claude Code almost all the time.

Younger dev #1: I see him using AI a decent amount, although it seems somewhat unsophisticated -- he’s been using the Github Copilot trial that the CTO pushed for. A lot of the time savings seem to be going towards dark leisure.

Manager: He’s tried to use AI out of what seems like pure obligation -- partially from the CTO pushing down from above, and partially from also being plugged into the MBA zeitgeist that’s saying that AI is trendy and implicitly that anyone who isn’t using it will “fall behind”. He offhandedly mentioned how he tries to set aside 30 minutes per day to test out AI, but the features he’s most excited about are extremely basic ones like voice mode and the fact that ChatGPT retains some basic memories between conversations. When he complained about hallucinations in a particular instance, I asked him what model he was using and he reacted with total confusion. Then we went to his screen and he was on the basic version of ChatGPT 5 instead of ChatGPT 5 Thinking, and I had to explain the difference -- so he’s not exactly a power-user.

Wishlist #1 for AI development -- more context!

One of the most critical aspects of programming with AI is managing its context window. LLMs learn pretty effectively within this window, but long conversations rack up your daily allotment or your API costs rather quickly. You want to start new conversations whenever you’re done working on a small or medium sized chunk of your program, to flush out the old context and keep everything chugging along smoothly. When you start a new conversation, you need to supply the initial context all over again. In my case I keep several text files with general guidelines for how the AI should approach the frontend, the backend, various connector pipelines, etc. Then I supply the latest version of the relevant code, which can span multiple files that I have to manually select based on whatever feature I’m adding. Finally, I tell the AI what I want it to build and the general gist of how that fits into the code I just supplied. All of this navigating around and deciding what should be included vs omitted takes time. It’s not a huge amount of time, but it’s not trivial either -- of the time I previously spent manually coding, I’d reckon about 15% now goes to supplying context to the AI, and 85% to working with the code it spits out (integrating it, debugging it, reprompting if it’s wrong, etc.).
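This bundling step is mechanical enough that it’s tempting to script it. A minimal sketch of what I mean, with hypothetical file names standing in for my actual guideline files and repos:

```python
from pathlib import Path

# Hypothetical standing guidelines plus the source files relevant to one task.
GUIDELINES = ["guidelines/backend.md", "guidelines/pipelines.md"]
SOURCES = ["src/reports/monthly.sql", "src/service/ReportRouter.java"]

def build_prompt(task: str) -> str:
    """Concatenate guidelines, relevant code, and the task description into
    one block that can be pasted into a fresh AI conversation."""
    parts = []
    for path in GUIDELINES + SOURCES:
        parts.append(f"===== {path} =====\n{Path(path).read_text(encoding='utf-8')}")
    parts.append(f"===== TASK =====\n{task}")
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(build_prompt("Translate the YTD section of this report from SQR to SQL."))
```

Even a crude script like this only removes the copy+paste drudgery; deciding which files belong in the bundle is still the human judgment call.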

Problems arise if I get this wrong and don’t supply enough context. The AI will hallucinate in random directions, sometimes adding duplicative functions assuming they don’t exist, while other times it tries to rely on features that haven’t been created yet. The outcome of this is never catastrophic -- usually the code just fails to compile -- but it can waste a surprising amount of time if the issue only breaks at one critical line halfway through the generated output. If I’m being lazy and just copy + pasting what the AI is churning out, I’ll get the compiler error and copy + paste that back into the AI expecting it to fix it, which can lead to several rounds of the AI helplessly trying to debug a phantom error of a feature that doesn’t exist yet. Eventually either I’ll read through things more closely or the AI will figure out “hey wait, this function is totally missing” and things will be resolved, but this can take 15 or 30 or even 45 minutes sometimes. If you include the time I spend managing context, along with all the issues I have to resolve with not supplying the right context, then that 15%-85% split I mentioned in the previous paragraph can look more like 35%-65%. In other words, on a particularly bad day I might spend more than 1/3rd of my productive time just wrangling with context rather than actually building anything.

Some of this is just on me as a simple “skill issue”. I should be less inattentive both in supplying context and in checking what the AI is pumping out. But on the other hand, if I exhaustively proofread everything the AI does then that would destroy much of the time savings of using AI in the first place. There’s a balance to be struck, and finding that balance is a skill. If we were in a scenario where LLMs stalled out at their current capabilities as of 2025, I doubt I’d ever completely eliminate making mistakes with managing context for the same reason it’s unrealistic for even experienced developers to never have bugs in their code, but I’m confident I could get it below 20% of my productive work time.

<20% isn’t that much, but I think this still underestimates the time lost to managing context for one final reason: it’s boring, and that makes me want to do other things and/or procrastinate. Managing context mostly just consists of navigating to various files and copy + pasting chunks of code, then writing dry explanations of how this should fit together with what you actually want to build. It’s not difficult by any absolute metric, but it’s fairly easy to forget one important file here or there if you’re not paying attention. Starting new AI conversations becomes a rupture in the flow state, which gives my brain the excuse to think “Break time! I’ll check my emails now for a little bit…” And then that “little bit” sometimes turns into 60+ minutes of lost productivity for normal human reasons.

Obviously this is a willpower issue on my part. But it’s also not particularly reasonable to demand that humans never procrastinate, especially on boring, bespoke tasks. My preferred solution to this issue would just be for someone else to fix the problem for me.

If we had unlimited access to infinite context windows, this entire subset of problems would basically disappear. Instead of needing to flush out the context every so often, we could just have one giant continuous conversation. Instead of strictly rationing context piecemeal for any given request, we could just dump the entire source code into the AI and let it sort itself out.

The fantasy of an infinite context window is fairly analogous to the idea of AI agents that are capable of “continuously learning”. We’re probably several years if not a decade+ away from a robust version of that. However, even moderate increases in usable context window sizes would still be very helpful. A usable context window twice as big would mean refreshing conversations half as often, which means half the time spent supplying context, half the chances to screw up and provide too little, and half the opportunities to procrastinate. An order-of-magnitude increase could mean a large codebase needs only 3-6 ongoing conversations, one for each of its broad areas, rather than 30-60 separate conversations, one for each feature.

It almost feels like a fever dream that GPT-4 launched with a context window of just 8K tokens back in early 2023, with a special extended version that accepted 32K. Today, ChatGPT 5.1 and Sonnet/Opus 4.5 both have 200K+ context windows, while Google Gemini 3.0 advertises 1 million.

But it should be noted that just because they advertise something doesn’t mean it’s necessarily true for practical purposes. I used the term “usable context window” a few times, which depends on 2 main factors:

  • How good the compression algorithms are. From my limited research, it appears that attention cost scales as O(N²) in context length, i.e. increasing context by 10x requires ~100x more compute, which is brutal (see the short sketch after this list). To get around this, model developers use tricks like chunking, retrieval, memory tokens, and recurrence to compress past context into a smaller representation. This allows for larger context sizes, but comes at the cost of making in-context elements “foggy” if the compression is too aggressive. Companies could advertise a context window of 10M or 100M, but if the details from 100K tokens ago are too compressed to be usable then that wouldn’t be worth much.
  • How much access end-users actually get at a reasonable cost. Longer conversations with more stuff in the context window use more resources, which model providers meter with increased API costs and daily allotment usage. I recently had an exceedingly productive conversation with Claude Sonnet on a long and complicated chunk of code that was almost certainly near the limit of the context window, and each message + response took a full 10% of my 6-hour allotment on the $20/month plan. Even if the model’s recall is perfect in a 10M context window, if companies charge an arm and a leg to use anything past 500K then 10M once again doesn’t mean much to the average user.
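For the curious, the quadratic cost in the first bullet falls straight out of vanilla self-attention: every one of the N tokens in the context attends to every other token, so the core computation touches on the order of N² token pairs. A back-of-the-envelope sketch (ignoring the non-attention parts of the model, which scale roughly linearly with N):

```latex
\[
  \text{attention cost} \propto N^{2}
  \qquad\Rightarrow\qquad
  \frac{(10N)^{2}}{N^{2}} = 100
\]
```

That linear-vs-quadratic split is also why the attention term dominates at very long contexts, and why the compression tricks above exist at all.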

It’s almost certain that LLMs will continue to get cheaper over time. There’s a very robust trend towards more cost-efficiency per performance, so that half of the equation should continue to improve rapidly over the next few years.

As for how big context windows will grow, that part is much less clear. I would expect them to keep getting bigger, but not necessarily at a Moore’s Law exponential rate. The O(N²) relationship is the stick-in-the-mud here, and I’d expect lumpy jumps from time to time when there are breakthroughs in compression algorithms, or when companies feel they have sufficient compute to eat the inflated costs.

But there’s also somewhat of a case for pessimism on the rate of improvement, as context feels like it’s getting a lot less attention than it used to. Part of that might be because most casual users won’t need super huge context windows unless they’re doing some crazy long RP session over multiple weeks. Context limits used to be highlighted metrics when new models released, but many model announcements don’t even bother mentioning them any more. This feels strange when some very goofy benchmarks like Humanity’s Last Exam get cited almost every time there’s a 1-2 point improvement. The only benchmark I know of that tests context recall is Fiction Livebench, which is questionably designed and rarely updated -- as of December 2025 the latest results are from September 2025, with many of the most recent models not listed.

The days of 8K token context limits feel similar to when video games in the 80s had to be compressed down to just a few hundred kilobytes (e.g. Final Fantasy 1 was less than 200 KB). In just a few years we’ve entered the equivalent of the megabytes era, which is a lot better but is still quite restrictive in a lot of ways.

Wishlist #2 for AI development -- better general computer use

The other innovation that would substantially improve AI coding at this point would be for the AIs to learn how to use computers like humans can. There was quite a bit of noise around this in late 2024 with Claude and in early 2025 with Operator, but the consensus at the time was that they were both terrible. They were at the GPT-2 era of functionality: they could sometimes do something useful, but relying on them was a fool’s errand.

Since then, we haven’t heard much in the way of progress. Gemini had a version of computer use in October, but it was mostly focused on browsers and still wasn’t very good. General computer use agents are still broadly worthless as of year-end 2025.

Competent general computer use would be extremely helpful for moving between the various layers of software. Most nontrivial modern applications end up being a horrible Frankenstein monstrosity of many different frameworks that each deal with one specific issue. Software developers have euphemistically come to refer to this as a “stack”. For a basic example, a customer who logs into a site sees a frontend written in a language like Javascript, while validating their login credentials and serving their account details is handled by a backend in a language like SQL.

In practice, just having 2 parts like this is VERY conservative. For a more realistic example, I’ll go with what I’m working on at my job. I’m rebuilding an internal app for statistical data that needs to Create, Read, Update, and Delete entries. It’s a bit more complicated than your average CRUD app, but at the end of the day it’s still just a CRUD app -- it’s not exactly rocket science. Yet even for this, I have to deal with the following:

  • Monthly: The frontend.
  • service-layer: An interstitial layer that routes most of the info from the frontend to the backend.
  • azure-sharepoint-online: Another interstitial layer that routes some info from the frontend to the backend.
  • service-reports: This receives calls from the backend and packages them into formatted excel reports.
  • service-email: This emails finished reports to the end-user.
  • The backend to query data goes through TSQL, which we build and debug with an IDE called DBArtisan.
  • The formatting of reports happens through a package called Aspose, which turns TSQL results into an Excel format.
  • The Excel sheets themselves also require a bunch of manual formatting for each report.
  • Jira tickets for documenting work.
  • Bitbucket for source control, plus navigating between 3 different environments: DEV, TEST, and PROD.
  • The internal server that hosts this app runs off of a Linux box and Apache Tomcat.

AI within any single one of these domains is quite good. Claude Code can hammer out a new UI element in the Monthly repo pretty effectively (assuming I provide it adequate context). Similarly, I can go to the browser interface of any frontier AI and generate SQL that I can paste into DBArtisan to do backend calls within a handful of prompts. But navigating between any of these areas remains the domain of humans for now. AI simply doesn’t have the capacity to operate on the entire stack from end-to-end, and so its ability to accomplish most long-horizon programming tasks is gimped before it's even out of the starting gate. Before AI, I reckon I spent 80% of my time writing individual features, and 20% of the time gluing them together. Now I spend about 1/3rd of my time supplying context to the AI, 1/3rd of my time verifying AI output and reprompting, and 1/3rd of my time gluing various AI-built things together.

An AI that could navigate the entire length of an arbitrary software stack is a necessary (though not sufficient) requirement to be a drop-in replacement for a software developer. The most robust solution would be for the AI to just learn how to use computers like a human can. The main benchmark for this is OSWorld. Unfortunately, this is [also quite bad](https://epoch.ai/blog/what-does-osworld-tell-us-about-ais-ability-to-use-computers), like Fiction Livebench is, although at least it gets updated more frequently. Part of the issue is that many of the questions that are ostensibly supposed to be judging computer + keyboard usage can be cheated through the terminal, where AI is stronger. However, this got me thinking: maybe this could be part of the solution?

By that, I mean another potential solution to the problem is to flip it upside down: instead of designing an AI that can use arbitrary computer applications, we could design computer applications that an AI can use more effectively. Claude Code can be seen as an early version of this, as it’s very good at building things through the terminal. I want to use it as much as possible, so when designing future projects I’ll look for backend solutions that play to its strengths while avoiding its weaknesses -- for example, pipelines that avoid calling SQL through janky IDEs like DBArtisan. Also, while breaking the main body of our apps across 5 different repos may have made sense pre-2023, now it just makes context management more difficult than it needs to be. Beyond what I could do as a single software dev, the AI companies and third-party scaffolding companies could also create individual apps or bundled computer environments where AI excels. These types of solutions would be a lot more brittle than a competent general-purpose computer agent, but they’re probably a lot easier, so they could serve a transitional role, like how somewhat-dirty natural gas is helping us move away from ultra-dirty coal to clean renewables.
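As a concrete example of what “playing to Claude Code’s strengths” could look like: a thin command-line query runner would let an agent exercise the database from the terminal instead of going through a GUI IDE like DBArtisan. A minimal sketch using pyodbc (the driver string, server, and table are assumptions, not my actual pipeline):

```python
import sys
import pyodbc

# Placeholder connection details; in practice these would come from config.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=reports-db.example.internal;"
    "DATABASE=Research;"
    "Trusted_Connection=yes;"
)

def run_query(sql: str) -> None:
    """Execute one query and print tab-separated rows to stdout,
    so a terminal-based agent can call this and read the results."""
    with pyodbc.connect(CONN_STR) as conn:
        cursor = conn.cursor()
        cursor.execute(sql)
        for row in cursor.fetchall():
            print("\t".join(str(col) for col in row))

if __name__ == "__main__":
    # Usage: python run_query.py "SELECT TOP 10 * FROM monthly_release"
    run_query(sys.argv[1])
```

The point isn’t this particular script; it’s that once the whole loop (query, inspect output, retry) lives in the terminal, an agent can drive it end-to-end.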

Conclusion

As I said before, I don’t think LLMs delivering slightly more accurate one-shot answers is a good way to further their programming capabilities at this point. That’s what a lot of benchmarks are set up to measure, which is part of the reason I think many benchmarks are kind of superfluous now. If continuing in that direction is all 2026 has in store, I’ll be very disappointed. On the other hand, if there’s even a moderate amount of progress in the two areas I’ve described, then we could see a discontinuity, with LLM programming ability jumping to where it can suddenly do 80% of a senior programmer’s job instead of the current 30-40%. These two areas are pretty similar to what Dwarkesh cited recently, and progress (or the lack thereof) in them is what I’ll be watching most closely as a leading indicator of how well LLMs will scale in the near future.

I can’t wait to see what AI has in store over the next few years. I hope it keeps scaling and we get more periods of rapid progress like early 2025. AI has been far more positive than negative for me, though it will probably take the broader public many years, perhaps even a decade+, to realize this. There are other AI tools I’ve played around with, like Nano Banana Pro, that are extremely impressive on their own terms, but they’re beyond the scope of this article.

In any case, I hope this article has given you some insight into what AI looks like on the frontlines of software development.


A Broken Model of The World

The American visa rejection was delivered with the bureaucratic indifference characteristic of empire in its senescent phase. No California, no Texas, no opportunity to temporarily escape to the land of my dreams and do Rationalist Things. Instead: India. The eternal return. Air conditioning as opposed to indoor heating, and dogs who hadn't yet learned that unconditional love is a dangerous thing. I didn't intend to disabuse them.

But of course, and here's where the reptilian cortex asserts its dominion over whatever higher functions medical school was supposed to cultivate, there were women. Specifically, women who might conceivably miss me, which is to say women whose neural architecture had been sufficiently damaged by prior exposure to my personality that they'd developed something like Stockholm syndrome, except with worse texting habits. I didn't have the time to cultivate new relationships, nor was I prepared to go through the rigmarole of setting up a dating profile to local tastes. Old flames could be fanned out from the embers instead.

Near the top of this list, glowing with the phosphorescent intensity of a bad decision that knows it's bad and has made peace with this knowledge: Her. The Model. You know the one. Hot as hell, but her head is held aloft by a mixture of helium and bad decisions.

I'd dated her very briefly before fleeing to residency, that period of psychiatry training designed to teach you about antidepressants and then teach you more about which ones you've come to need (all of them). She presented, in the phenomenological sense that Heidegger might have recognized had he spent less time with Nazis and more time on dating apps*, as the eternal feminine victim: doe-eyed, helpless, perpetually buffeted by the cruel winds of toxic masculinity, which is to say every man she'd ever met, perhaps excluding me.

She'd been reaching out at semi-random intervals during my Scottish exile, something my brain's tired pattern-recognition systems had correlated with relationship turbulence, usually accompanied by marriage proposals that made me feel simultaneously desirable and like I was being offered a role in a particularly depressing regional theater production. Very ego-syntonic, as we say in the trade, which is professional code for "it made me feel good in ways I'm not too ashamed to admit."

Then: radio silence. Months of it. I'd interpreted this through my characteristically solipsistic lens as evidence that she'd found stability, or at least a nice man in the neighborhood, which turned out to be partially correct in the way that a broken clock is correct twice daily, accurate in its specifics while missing the larger horror entirely.

She had technically just reached out. Just a few days before I was due to fly in. Just a perfunctory "hey" on Insta, which I had genuinely not seen for days because, well, psychiatry doesn't make for very exciting day-in-the-life posts. At least not without trouble with the GMC.

I'd landed back in India and reached out. Nothing. I began contemplating that I was being ghosted, or that I'd outlived my usefulness to her. Maybe she had found a nice Punjabi boy to grow fat with. My daydreams were each more psychologically sophisticated than the last, which is what you do when you've spent too much time learning about defense mechanisms instead of developing functional ones.

The truth was stupider: she didn't check her DMs. She'd always been a bit shite about that. Well, self_made_human, that's the pot calling the kettle black. The solution, obvious in retrospect, required abandoning digital mediation for its older, more aggressive cousin: I called her.

Two rings. Then:

"Oh my god! You're back?"

The voice hit me like a familiar drug: breathless, pitched at a frequency that triggered some deep mammalian subroutine, laced with an enthusiasm that I knew was performed but which worked anyway because evolution has programmed male brains to be very, very stupid about certain audio frequencies. It was the auditory equivalent of those supernormal stimuli ethologists use to make birds try to mate with volleyball-sized eggs.

"I am," I said, attempting to maintain the facade of being a person with boundaries. "I thought you were ignoring me."

"No! Never! I just don't check my phone, I swear." A statement that would have been disqualifying if I were capable of learning from experience. Women and their phones are inseparable at the hip. "I missed you so much. We have to meet. Tonight? Please say tonight. I need to vent."

Reader, I am a man of medicine, of science, someone who has spent years training to make rational decisions based on evidence. I am also a man who hears a pretty woman say she needs him and immediately becomes a golden retriever who's been told there might be treats. I tell myself I'm only going out of a curdling combination of curiosity and boredom, but my tail wags nonetheless.

(The charitable explanation is that I have a genuine drive to be helpful and derive satisfaction from being nice to people. Less charitably, I crave mild amounts of drama in my life, preferably when I'm out of the immediate blast zone. The truth can be found with a Monte Carlo simulation, namely throwing darts at me.)


I arrived at her workplace, a boutique where she moonlights in sales, effectively selling insecurity to women and delusion to their husbands. Local traffic made me late, which meant I missed seeing her in her element, which was probably for the best. Some illusions should be preserved.

She drove. I rode shotgun. She was competent behind the wheel, which I noted with the mixture of surprise and guilt characteristic of men who've internalized certain stereotypes while remaining theoretically opposed to them. The other drivers, less conflicted, shouted helpful commentary about her driving that had nothing to do with driving and everything to do with living in a society that's still working through some issues around women operating heavy machinery.

It's an interesting dichotomy. Male drivers face less verbal abuse, mostly because they're a physical threat. Female drivers bring out the peanut gallery, but they're not really at much risk of having someone lay hands on them in such a public setting. But I digress:

She needed to park. I needed something to do with my hands. I bought her a soft toy from an overpriced Japanese store, that particular species of useless consumer object that somehow carries totemic significance, a material manifestation of affect that short-circuits rational gift-giving in favor of pure aesthetic stimulus. Women are suckers for these, which is a sexist observation that's nonetheless empirically correct, which is why sexism persists: it works.

After an interval calibrated to maximize anxiety without quite tipping into actual worry, she returned. She loved the gift. Then she began talking, and I realized I'd made a terrible mistake, which is to say exactly the mistake I'd intended to make.

The story was long. She'd warned me it would be long. She wasn't lying, which may have been the only thing she wasn't lying about. Or perhaps she's excessively honest with me, I seem to be a safe space, a person she can unload all her cares on without much concern. The lies were for the rest. Regardless, I took my glasses off and buried my face in my hands so many times I lost count, performing exasperation for an audience of one while that audience performed innocence for an audience of me.

The situation had evolved. The roster of suitors had expanded.

There was the Poor Nice Guy (who lives with his parents and won't move out, who I'd previously dissected with the detached interest of an entomologist pinning butterflies to cardboard). There was the Toxic Ex (who cheats), but as far as I could tell, was now out of the picture. And now, there was the Rich Guy. He's new.

The Rich Guy. Precisely as advertised. Distantly related (third cousin, maybe?), far enough to avoid the genetic problems, close enough to carry social weight. He'd proposed marriage multiple times. He sounded, even to my determinedly cynical ear, like a reasonable choice. But she couldn't commit.

The reasons were familiar: he lived with his parents, lived below his means. But also (and here's where it got good) he had dogs, and her OCD couldn't handle them.

I couldn't relate. Shortly after I landed in the country, my puppy destroyed my best shoes and my comfortable slippers, and my response was mild scolding undermined by my complete inability to maintain anger at something with floppy ears. But I'm not the protagonist of this story. She is. Or maybe the dogs are.

She has OCD. She hates the dogs. She claims it's hygiene, but we know the diagnosis: Narcissism cannot tolerate a rival for attention, even if that rival licks its own ass.

I feel like an ass just saying that. I'm not The Last Psychiatrist, even if I'm more cynical than a certain Buddhist-Sufi-Lite Namebrand alternative. Don't listen to me; she does actually have OCD. Sees an actual shrink for it, not that that lady sounds like she's competent.

"He said he'd give them away," she says, pulling back to look at me with those wide, imploring eyes. "He said he'd get rid of them for me."

Pause.

This man is willing to exile two living creatures that love him unconditionally, loyal beasts that rely on him for their survival, just to secure access to her. But he won't move out of his parents' house. He is willing to sacrifice the innocent (the dogs) but unwilling to sacrifice his safety net (Mommy and Daddy). It might also have been filial piety, who knows. I had complained that Poor Guy had a stick up his ass, whereas this gentleman could use such a prosthetic as a spine.

"So let him give them away," I say.

"No," she pouts. "I can't make him do it. Then his parents will hate me. Then he'll resent me."

Then came the bombshells, delivered with the casualness of someone ordering coffee. One, she was still seeing Poor Guy. Two, she wanted me to commit fraud.

She'd convinced herself that the solution was a forged medical document stating she was deathly allergic to dogs. She'd already tried this gambit with Rich Guy, but he'd pointed out (with admirable attention to empirical reality) that she'd played with his dogs before without issue. Now she wanted me, as a doctor, to make it official.

"Write me a note," she says. "Say I have a severe allergy. If it's medical, he has to get rid of them, and it's not my fault. It's doctor's orders."

She wants the result (no dogs) without the cost (guilt). She wants to outsource the moral culpability to me.

I have many moral failings. They are numerous and well-documented. But I enjoy having an unblemished record and no medical board investigations, so I declined, explaining this in terms I hoped were clear even to someone whose relationship to truth was essentially fictional.

She escalated. She offered sex.

"Come on," she says, pressing against me. "I'll make it worth your while."

Let me pause here to note the cosmically insulting nature of this offer. Sex as payment for fraud. Sex as the universal solvent for moral reasoning. Sex offered with the bland confidence of someone who's learned that it usually works, which is the most damning indictment of men as a category that I can conceive.

Been there, done that, I told her. Which was true. Which made me complicit. Which made this whole scene a kind of recursive nightmare where everyone's crimes implicated everyone else's.

She changed tactics: Would I help her decide between Rich Guy and Poor Guy?

Finally, a question I could answer. My reply was nigh instant; the answer was obvious.

"Go for Rich Guy," I said. "He's sensible. It's better to be with someone who loves you, than someone you love (if you can't have both). And I know you. You couldn't adjust to a lower standard of living if your life depended on it."

She blinked. "But won't Poor Guy become rich when he marries me? He could take over what my dad built!"

I sighed the sigh of a man who's realized he's explaining addition to someone who's still working on number permanence. "That's your own money, returned to you. If you marry into wealth, you have twice the money. Use that pretty head. Think."

Her face scrunched up in an adorable display of revelation. She told me she'd never considered this. Twice the money sounded good. Almost twice as good, accounting for diminishing marginal utility. That she was treating marriage as a financial instrument while simultaneously maintaining she wanted true love didn't seem to register. Cognitive dissonance requires cognition.

But wait: Poor Guy worked in her dad's field. Rich Guy was adjacent: leather tanning, not textiles. Who'd run the family business?

I suggested that maybe Rich Guy could learn. She seemed unconvinced. I offered to make a SWOT analysis, because apparently I'd become the kind of person who does strategic planning for other people's romantic clusterfucks.

I reached for my phone and its rarely used stylus. "Let's be logical. Let's do a SWOT analysis."

Strengths. Weaknesses. Opportunities. Threats.

I started drawing the grid. I was outlining why the Rich Guy was the strategic play. Strengths: Money, Devotion. Threats: The Dogs.

"See?" I said. "The Rich Guy is the move. You just have to deal with the dogs."

"I made a list too!" she chirped.

She pulled out her iPhone and shoved the Notes app in my face. (Why do women love Apple's default apps? This is a genuine mystery to me, impenetrable as quantum mechanics.) "Great minds think alike!"

Fools seldom differ.

I looked at her list. It was a chaotic mess of emojis and bullet points. Rich Guy and Poor Guy were neck and neck.

But there, buried in the text, was a note she had clearly forgotten was there:

Still sleeping with [Poor Guy]. [Rich Guy] doesn't know, haven't slept with him yet.

She was showing me the evidence of her own infidelity. She was handing me the smoking gun. And she didn't even realize it. She was scrolling past it, pointing out that the Rich Guy buys nice purses, completely oblivious to the fact that she had documented her own moral bankruptcy.

I looked at her. "Are you fucking insane? What if Rich Guy finds out about Poor Guy??"

She startled. "When did I tell you their names?"

I pointed at her phone. The blush that overtook her face was the color of shame, or possibly arousal, or possibly both, because at this level of dysfunction all emotions blend into an undifferentiated psychic sludge.

I laughed. It was absurd.

"What?" she asked, smiling blankly.

"You're amazing," I said. "You're really something."

I grabbed her hand. I deployed a metaphor about masturbation and bushes that I'm not proud of but which seemed apt.

"So you'll write the note?" she asked. "You're a liberal guy. You understand. You should just marry me."

"Liberal."

She uses that word like a get-out-of-jail-free card. To her, "liberal" doesn't mean "politically left-leaning." It means "permissive." It means "you are too smart to have boundaries." She thinks that because I listen to her stories without vomiting, I approve of them. Maybe I've internalized too much; it's worth reminding myself that in my personal life I can just get up and walk away. I've done that before, with her, when she'd asked me out on a date and then broken down in tears and asked me to drive her to her ex's place.

"How long are you staying in Scotland?" she said. "Why won't you just marry me? Things would be so much simpler!"

Previously, this plea had made me feel significant, wanted, like Captain Save-a-Ho riding in on a white horse. Now I felt something closer to disgust. Not an immense amount of disgust; I've long since abandoned the pretense that I hold all the moral high ground. Mostly the aesthetic disgust of watching someone dig their own grave with manicured nails until those nails chip and bleed, and then mild, incipient rage at the idea that she saw me that way: as a convenient solution to all her problems. The kind, thoughtful doctor who actually listened, who didn't judge too much (to her face, anyway; an anonymous audience is different, or so I say). I was her idea of a BATNA, a man without an ego, willing to tolerate stodgy in-laws, the kind who wouldn't tell his wife to stop dressing like such a slut the moment the marriage pyre went cold.

The safe choice. I resented this; I do have an ego. I do have standards, even if I'm too polite to throw that in someone's face when they presume they meet them.

But disgust and rage are just other forms of engagement, and I was too deep in this to extract myself cleanly.

So I tried reverse psychology.

It was then that I played the card I'd kept up my sleeve for exactly this moment.

I told her I'd come around to marriage. (True.) That I could be convinced to marry her. (Highly Debatable.) She demanded to know when I'd be back permanently.

Two years minimum, I said. Probably more. She deflated immediately. Too long.

So I flipped it: "Come to Scotland," I said.

I said it with the gravitas of a romantic lead in a period drama. Leave this all behind. Come with me. That wasn't a lie, technically. A proposition can't be false. But I said it with the confidence of someone who knew exactly how she'd respond.

I only said it because I knew with 100% certainty she would decline.

It was a zero-risk bet. She has her "career" here, her parents, her tangled web of dysfunction. She wasn't going to Glasgow. But by offering, I get to be the Savior. I get to be the "One That Got Away." I get the credit for the gesture without having to buy the extra plane ticket.

She blanched. Scotland? Doing her own laundry? Cooking? Cleaning? Not cool. She'd grown up wealthy. She told me she couldn't adjust. She didn't seem to be the least bit ashamed of this.

"I can't," she sighed, exactly as predicted. "It's too complicated."

"I know."

I pointed out that I'd grown up similarly and adjusted fine. That First World life wasn't so bad. I explained that even my salary was enough to allow for a decent existence for a young couple. The more I pushed, the more she retreated, exactly as predicted.

Excellent. My model of human nature, or at least her particular neural architecture, remained accurate. I'd convinced her that I wasn't an option by making her convince herself. The lies you tell yourself stick harder than the lies others tell you. So does the truth. Nothing I'd said was a lie, after all. This is why advertising works. This is why democracy fails.

"Will you wait until you're back to marry me?" she asked.

I laughed. "You won't wait two years."

"You're right," she admitted.

Throughout this conversation, she kept flinching and looking out the windows (but she hadn't asked me to remove the arm I had around her, or the other resting on her thigh). I asked why. She said she was worried one of the men might be in the neighborhood. It was midnight. They lived elsewhere. I pointed this out.

"Wait! I can check." She opened WhatsApp. Rich Guy, it turned out, was insecure and demanded she share her live location constantly. Every few minutes, down to the meter. No wonder she'd chosen this café, it was close enough to home to explain, far enough from anywhere else to avoid detection. A prisoner's exercise yard.

To his credit (which is very little), he reciprocated by sharing his own location. The panopticon didn't have a one-way mirror.

She messaged him asking about his plans. His reply was terse but quick. Business meeting, too tired to visit, going to bed. Her paranoia subsided.

Then came the detail that broke me: he'd offered to get rid of the dogs. Kicking out his elderly parents? A step too far.

Where did she find these people? My dog had destroyed my shoes and I'd merely scolded him. This man was willing to dispose of two loyal animals for a woman who felt nothing for him.

Psychiatry teaches phenomenology, empathy, understanding. It never quite conveys that some people are mentally alien. If I had to choose between a woman and my dogs, I know which bitch I'd be showing the door. Both my dogs are male.

More conversation. More coffee. Then beer: she told me they secretly sold it, just hid the menu to maintain a veneer of family-friendliness, which felt like a metaphor for something, but I was too tired to figure out what.

She looked exhausted, grey hairs emerging, yet still gushing about her nephew, the Indo-Italian baby who'd break hearts someday; she exulted when I observed as much. Feminine solidarity is nothing next to evolutionary psychology.

More terrible ideas sprouted like mushrooms after rain. Could I pose as her psychiatrist and talk to Rich Guy? I said I'd talk to him in a personal capacity only, mostly out of morbid curiosity about the kind of men she attracted. Maybe I'm trying to understand myself.

"Why can't I be happy?" she asked.

"Because," I said, with clinical detachment, "you are a dumb bitch."

I didn't say it with anger, even if I felt it. I said it with the flat affect of a clinician delivering a terminal diagnosis. It was cruel. Very uncharacteristically so for me; I still feel bad about it, but she'd pushed me to the breaking point. It was also a diagnosis. She teared up.

"You're mean!" she sobbed. "I'm trying so hard! Why are you calling me names?"

No, she didn't. That would have been easy, giving me the option to stonewall in the face of bluster and crocodile salt-trails. Instead:

She stayed quiet, head lowered, hair cascading down to hide her tears. This made it much harder; she was self-aware enough to know her flaws. I decided to relent and attempt an explanation.

I explained that her misery was entirely self-manufactured, a boutique artisan suffering. "You are crying because you don't like the mirror," I told her. "Look at what you're doing. You have a guy who wants to marry you. He is rich. He loves you. He is willing to give up his dogs for you. And it's not enough."

"It's not perfect!" she wailed.

"That's your problem," I said. "In the search for perfection, you are turning down 'good enough.' You are creating chaos because you are terrified of settling. You cheat on the Rich Guy with the Poor Guy, you cheat on the Poor Guy with the Rich Guy, and you try to cheat on both of them with me. You are miserable because you refuse to make a choice."

She looked at me, mascara running, eyes wide.

"But I just want to be happy," she whispered.

No, she doesn't.

She wants to be admired. Happiness requires compromise. Happiness requires you to live in a house with a mother-in-law or a dog you don't like. Happiness is tolerating unhappiness today in the hopes it'll pay interest tomorrow. Happiness is boring.

She doesn't want boring. She wants the drama. She wants the crisis. She wants to be on a couch begging a doctor to commit fraud so she doesn't have to feel bad about making a man kill his dogs.

I told her the juggling act would end, the plates would smash on her pretty face, and I would not be there to sweep up the shards.

She didn't disagree.

Eventually it was late. I was out of useful things to say. "Go back to the Rich Guy," I said, standing up. "Marry him. Make him give up the dogs. See how that feels."

"You think I should?"

"I think you deserve each other," I said.

She took this as a compliment.

He is a coward who betrays his loyalty to his pets. She is a narcissist who betrays her loyalty to her partners. They are a match made in hell, and they will be perfectly miserable together in a very nice house, once the parents and the dogs die of old age.

She kissed me goodbye, carried off that kawaii rabbit with a spring in her step, turned the corner to her gated compound. I gave in to impulse and bought a cigarette.

I didn't smoke it.

The visa was declined. My winter in California was gone. But as I stepped out into the humid Indian night, I realized I didn't need the Pacific Coast Highway.

Here's what I think: everyone in this story should kill themselves. Except the dogs. I'll include myself if they get a pass.

The dogs are the only innocents. The rest of us are complicit in whatever this is, this performance of intimacy masquerading as intimacy, this simulation of care that exists primarily to confirm our worst suspicions about ourselves and each other. We're all playing roles in a production that should have closed years ago, but we keep showing up because what else are we going to do? Be alone? Be honest?

I get roped into this shit because I'm bored. I relate to the claim that the worst thing a man can be is useless. Perhaps I am minimally complicit, as it goes, but my hands are hardly clean. They probably still smell of her perfume.

Better to keep performing. Better to keep pretending that our patterns aren't patterns, that our compulsions aren't compulsions, that our inevitable trajectories toward mutual destruction aren't already written in every decision we've made since we were old enough to know better.

The dogs, at least, love honestly. They destroy things because they don't know better, not because they're trying to avoid knowing better. There's something almost sacred in that.

As for the rest of us? We're just apes with pretensions and smartphones. Millions of engineers work tirelessly to make those phones capture accurate renditions of reality; millions more work to meet market demand by creating filters that reduce reality to something more palatable, more Insta-worthy. Some of us are stumbling through the dark, convincing ourselves that the lies we tell ourselves are somehow more sophisticated than the lies others tell us.

The standard literary thing to do would be to protest that they're not, that all lies and sins are created equal. I'm not so far gone as to believe that. No, I think I put a reasonable amount of effort into giving her the best advice I could. She never listens, but isn't patient autonomy all the rage?

The head is a hot air balloon.

But remember: the balloon only looks like it's flying. It's really just at the mercy of the wind.

Stop blowing.


*Confession: I haven't read Heidegger, unless a Wikipedia summary counts. I both refuse to read Continental Philosophy on principle and happen to be new to the whole pretentious navel-gazing literary style, so please bear with me while I calibrate the signal.

I decided to post it here because it got long, and also because I use either my real-world identity or a very transparent one on the book review sites. Unfortunately, we are now living in times when people get murdered for saying unpopular things, domestic terrorists openly put bounties on people, and I see a significant part of the industry this book concerns - an industry I belong to - being completely fine with that, as long as "bad people" are getting hurt. I suspect many of them might classify me as "bad people". I don't really expect my stupid book review to be seen by enough crazies for anything to happen, but there's no reason to take the additional risk. Pseudonymous publication is safer. I could also not publish it at all, but since I already bothered to write it, I might as well let others read it.

I am not sure how to describe this book. On one hand, it is a fascinating account of what happened at Facebook from a person who was right in the middle of it (or rather at the top, as global public policy director working directly with the CEO and COO), and it is certainly worth reading if you want to be educated on what was/is going on (it ends in 2017, when the author was fired). On the other hand, the sheer blindness of the author to her own role in the events, and to her own limits, is impressive. Facebook is now trying to retaliate against Sarah Wynn-Williams (that's the author) for violating various NDAs, which she most likely did, but given that their response so far amounts to "we don't do that anymore", one could infer at least some of the juicy stuff is actually true. The trick would be to know which parts. All of it, some of it, a little of it? One of her coworkers says "definitely not all of it". Others agree. Wynn-Williams herself, in an interview with Business Insider, declared that the question of factual accuracy is not the one that matters, and that witnesses contradicting her claims are "a distraction". Which in my book means "some of it" is the best we are getting.

To get it out of the way: the cases of workplace harassment she describes are horrible. I do not know how accurate the descriptions are - we hear only one side, so I cannot assess the veracity of the claims - but if they describe real events even to some degree, it is absolutely unacceptable and should not have happened to anybody. I was a bit put off by the cavalier attitude with which she approached the Kavanaugh affair in the epilogue, treating the fact that somebody could even stand beside Kavanaugh during the hearing as the ultimate sign of moral degradation (surely everybody knew the verdict before hearing any testimony, and the hearing was supposed to be a mere formality?). So I suspect the matter isn't so black and white, and she is not the most reliable narrator. But even with that, what she described is, per se, totally horrible. That's all I have to say about that.

Moving on to the other parts of the book: if we look at what actually happened, the author literally inserted herself as the main person driving and shaping Facebook's international growth and its engagement with top international politicians. If introducing 21st-century information technology into societies that aren't ready for it is dangerous and prone to disasters, she is the person who enabled, engineered and performed the deed - probably because she was sure that with her at the top, it would be all right (spoiler: it wasn't). And it's not some random "caught in the flow" thing - she came to Facebook to do exactly that, and she did it, pretty successfully, given the amount of praise she received from M.Z. and his subordinates. The essence of her complaint is that she did not have enough power to do it exactly as she wanted, and that's why it often turned out wrong. If only she were an all-powerful dictator (or at least, if all the power were given to people who think exactly like her), everything would have been much better. That is her conclusion at the end: the wrong people were censored, the wrong people were not censored, and all that because they didn't listen to her.

The parts where she describes how she stayed at Facebook for so long because she otherwise wouldn't have healthcare are quite hard to believe. First of all, there's COBRA; she is married; and there are ways to buy health insurance without being employed by Facebook. Sure, it's expensive, but I have trouble believing that a person who was at the top of Facebook from such early days, speaking to people like Zuckerberg and Sandberg all the time, didn't have at least some money coming to her. Maybe not fabulously rich like M.Z. himself, but at least solidly middle class? Even if she were hopelessly naive and saintly unbothered by money concerns, could she not find an hour in all those years to talk to a lawyer and a financial advisor who would explain how to navigate such things? In Silicon Valley, where these matters are discussed at every second table in every second coffee shop? And being at the top of FB, literally rubbing shoulders with heads of state and personally engaging their closest teams, with the NZ Embassy and Oxfam on her resume too, she had absolutely no prospect of other employment whatsoever besides Facebook? Utterly unbelievable. What is entirely believable is that the author found it hard to give up all that schmoozing with heads of state and fixing the world for the rest of us, and trade it for some boring office job where you never get a chance to see Xi Jinping, and don't get to laugh about how insignificant the president of Guatemala is.

A complete lack of reflection on the author's own biases permeates the whole book. A lot of the second half is dedicated to the death of democracy in the US, also known to some as the (first) election of Donald J. Trump. Of course, half the country voted for him, but what of that? They were surely a bunch of evil people, or fools misled by evil people, who never would have won any election if not for their dirty tricks. The assumption that a Clinton win was the prescribed, normal course of events is so deeply ingrained that the Clinton campaign is hardly mentioned in the book, maybe once. All the evil tricks the Trump campaign supposedly played with Facebook are described in detail, but how did the Clinton campaign use social media at all? And if they did not - why? What were they doing all that time? Why did none of the right-thinking people at Facebook reach out to them, if by some weird turn of events they forgot about social media, despite the fact that the Obama campaign used social media very actively and was publicly on record bragging about it?

These questions are not even asked, never mind answered, because they would only matter if there were a competition between two equal teams - and the author never admits the thought. There is the normal turn of events - Democrats win, power is in the hands of The Experts (TM), people vote for whoever they are told to and behave how they are told to behave, for their own good - and when it happens, there's nothing to discuss; it is as it always should be. Well, maybe let's talk about how to make it even better. Only departures from the normal course of events - like people voting for the wrong candidate, clearly because they were deceived and are too stupid to realize it - deserve discussion. And to think there actually were evil people - including inside Facebook! - who thought it was a good thing! They actually talked about some policies they might like, something Trump might do that would be good - as if the Coming of The Antichrist were some kind of normal political event! Imagine the gall, the sheer audacity, of not recognizing the suffering of all the right-thinking people and not subjugating your own views to the demands of the moment! How do such people even exist? If one were religious, this would be a good moment for the protagonist to have a crisis of faith - but fortunately there's nothing like that in this universe.

This quality is present throughout the text, in every discussion of every question concerning any policy or decision. The author never argues for a certain outcome, as one would in a debate, never presents any deliberate reasoning or substantiation. To do that would be to recognize that multiple opinions on the matter could exist, and that people with the wrong opinion might need to be convinced by way of logic and reason. That's not how it works, not in this book. There is a normal, obvious, correct and proper opinion or decision, and every normal, proper and decent person already knows it. It does not need to be argued or proven. It does not even need to be pointed out - just as, if you notice a baby around, you don't need to be told "don't eat the baby!"; you already know babies are not to be eaten. So the author just describes her own shock and horror at realizing that the people in front of her are monsters - if they do not follow the proper way. Rarely does it go beyond that, and almost never to an actual argument. Because what's the point of arguing with monsters anyway? How would you convince a person who wants to eat a baby that it is not good, and why are you talking to such a person at all?! This is how this book handles most of the controversies.

What the book describes about Zuckerberg changed my opinion of him a bit. It looks like he really was the autistic startup techie who just wanted the product to grow, and who initially had no interest in wielding the emerging power for anything but improving the service. He seems indeed to have had that libertarian streak many other tech founders had and lost (he lost it too, of course). Wynn-Williams and others successfully convinced him that he had to play with world powers and become a world power himself. That, of course, would change any person. But it looks like most of the problems with freedom of speech at FB originated from the likes of Wynn-Williams (quelle surprise!) rather than from M.Z. himself, at least initially. That said, as a corporation FB exhibited the typical psychopathic approach most major corporations now exhibit: be woke on the outside, do anything to expand and profit on the inside - including making deals with the most horrible individuals and regimes, if it pushes up the numbers - all while proclaiming high-minded ideals. This is the most believable part of the book, because I can observe it from the outside, both at FB and at many other companies. The company as a whole, and its top people in particular, are all colossal hypocrites - that part I totally believe. That, of course, does not exclude the director of global public policy herself.

The author proclaims in multiple places that all the wrongs and evils Facebook did could have been avoided, if only. But the "if only" part is regrettably shallow. The author hints that she knows the right thing to do, that she possesses the recipes for fixing all the modern ills of social media - from teen addiction to the genocide in Myanmar - but she never actually tells us what exactly should have been done, or why she thinks it would have worked. It's not even that her argument is bad - here again, she doesn't see the need to make a proper argument at all; the mere proclamation "you should have done it differently!" is enough. That may be acceptable from a random layperson, but not from somebody who was the top policy maker for Facebook and is actually writing a book about it! If you say it should have been done differently, spend some time properly arguing how and why it would have been better! If you think that would make the book too long, you can drop some episodes, like the one where you were bitten by wasps - I am sure it was a profound experience for you, but I am equally sure the reader could survive without it.

So, is this book worth reading? It was for me. I am by nature and nurture a skeptical reader, and an unreliable narrator is not something I am afraid of if there's substance to chew on. This book has the substance. It would be a good book if it didn't also have the numerous flaws I described above, but such as it is, I end up where I started: I do not know how to describe it, even though I do not regret having read it.