
Culture War Roundup for the week of November 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Since @Hawaii98 complains about insufficient quantity of quality commentary, I've taken it upon myself to cover one of the topics proposed by @greyenlightenment, namely the doxxing of Based Beff Jesos, the founder of effective accelerationism. My additional commentary, shallow though it may be, got out of hand, so it's a standalone post now: E/acc and the political compass of AI war.

As I've been arguing for some time, the culture war's most important front will be about AI; that's more pleasant to me than the tacky trans vs trads content, as it returns us to the level of philosophy and positive actionable visions rather than peculiarly American signaling ick-changes, but the stakes are correspondingly higher… Anyway, Forbes has doxxed the founder of «e/acc», an irreverent Twitter meme movement opposing the attempts at regulating AI development that are spearheaded by EA. Turns out he's a pretty cool guy eh.

Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s ‘E/Acc’ Movement? [archive.ph link]

Quoting Forbes:

…At first blush, e/acc sounds a lot like Facebook’s old motto: “move fast and break things.” But Jezos also embraces more extreme ideas, borrowing concepts from “accelerationism,” which argues we should hasten the growth of technology and capitalism at the expense of nearly anything else. On X, the platform formerly known as Twitter where he has 50,000 followers, Jezos has claimed that “institutions have decayed beyond the point of salvaging” and that the media is a “vector for cybernetic control of culture.”

Forbes has learned that the Jezos persona is run by a former Google quantum computing engineer named Guillaume Verdon who founded a stealth AI hardware startup Extropic in 2022. Forbes first identified Verdon as Jezos by matching details that Jezos revealed about himself to publicly available facts about Verdon. A voice analysis conducted by Catalin Grigoras, Director of the National Center for Media Forensics, compared audio recordings of Jezos and talks given by Verdon and found that it was 2,954,870 times more likely that the speaker in one recording of Jezos was Verdon than that it was any other person. Forbes is revealing his identity because we believe it to be in the public interest as Jezos’s influence grows.

My main objective is to provide the reader with convenient links to do their own research and contribute to the debate, so I rapidly switch from Beff to a brief review of new figures in AI safety discourse, and conclude that the more important «culture war» of the future will be largely fought by the following factions:

  • AI Luddites, reactionaries, job protectionists and woke ethics grifters who demand pause/stop/red tape/sinecures (bottom left)
  • plus messianic Utopian EAs who wish for a moral singleton God, and state/intelligence actors making use of them (top left)
  • vs. libertarian social-darwinist and posthumanist e/accs often aligned with American corporations and the MIC (top right?)
  • and minarchist/communalist transhumanist d/accs who try to walk the tightrope of human empowerment (bottom right?)

In the spirit of making peace with the inevitability of most discussion taking place in the main thread, I repost this here.


edit: not to toot my own horn, but

Is anyone else checking here less and less often because equal quality commentary seems increasingly available elsewhere?

I am checking here less and less often because A) with my current concerns and the way the wind blows, the Western culture war is largely irrelevant, B) there's little for me to contribute in addition to all that has been said, and C) I've concluded that my ability at making commentary is better used for making an impact.

edit 2: I also mildly dislike the fact that standalone posts need approval, though I can see how that follows from the problem/design choice of easy anon registration.

Repasting my own lengthy comment:

I've always been a techno-optimist (in the sense that I strongly believe that technology has been the biggest positive force for good in history, likely the only form of true progress that isn't just moral fashion), but these days I'd call myself d/acc instead of an e/acc, because I think current approaches to AGI have a subjective probability of about 30% of killing us all.

I don't call myself a doomer; I'd imagine Yud and co would assign something like 90% to that. But in terms of practical considerations? If you think something has a >10% chance of killing everyone, I find it hard to see how you could prioritize anything else! I believe Vitalik made a similar statement, one more reason for me to nod approvingly.

A large chunk of the decrease in my p(doom) from a peak of 70% in 2021 to 30% now is, as I've said before, because it seems like we're not in the "least convenient possible world" when it comes to AI alignment. LLMs, as moderated by RLHF and other techniques, almost want to be aligned, and are negligibly agentic unless you set them up to be that way. The majority of the probability mass left, at least to me, encompasses intentional misuse of weakly or strongly superhuman AI based on modest advances over the current SOTA (LLMs), or a paradigm-shifting breakthrough that results in far more agentic and less pliable models.

Think "Government/Organization/Individuals ordering a powerful LLM to commit acts that get us all killed" versus it being inherently misaligned and doing it from intrinsic motivation, with the most obvious danger being biological warfare. Or it might not even be one that kills everyone: an organization using its technological edge to get rid of everyone who isn't in its in-group counts, as far as I'm concerned.

Sadly, the timelines don't favor human cognitive enhancement, which I would happily accept in the interim before we can be more confident about making sure SAGI is (practically) provably safe. Maybe if we'd cloned Von Neumann by the ton a decade back. Even things like BCIs seem to have pretty much zero impact on aligning AI given plausible advances in 5-10 years.

I do think that it's pretty likely that, in a counterfactual world where AI never advances past GPT-4, ~baseline humans can still scale a lot of the tech tree to post-scarcity for matter and energy. Biological immortality, cognitive enhancement, interstellar exploration, building a Dyson Swarm or three, I think we could achieve most of that within the life expectancy of the majority of people reading this, especially mine. I'd certainly very much appreciate it if it all happened faster, of course, and AI remains the most promising route for that, shame about everything else.

I have no power to change anything, but at the very least I can enjoy the Golden Age of Humanity-as-we-know-it, be it because the future is going to be so bright we all gotta wear shades, or because we're all dead. I lean more towards the former, and not even because of the glare of nuclear warfare, but a 30% chance of me and everyone I love dying in a few decades isn't very comfortable, is it?

At any rate, life, if not the best it could be, is pretty good, so regardless of what happens, I'm strapping in for a ride. I don't think there's an epoch in human history I'd rather have been born to experience really.

Alex Turner, who wrote what are arguably the two strongest and most popular formal proofs of instrumental convergence to power-seeking in AI agents

Well, I suppose that explains the pseudo-jazz albums about hotels on the Moon ;)

Longer-term, there are ideas like the "pivotal act" theory: we create an AI that performs a single one-time act which rearranges the world into a game where from that point forward humans are still in charge, but where the game board is somehow more defense-favoring and more fit for human flourishing.

I think this is a terrible definition of a "pivotal act". When Yudkowsky suggests releasing a nanite plague that melts GPUs, he doesn't want them to melt the GPUs of the AI releasing them.

Such a decision is very much not a "one-off", in much the same way as a typical coup involves what can roughly be described as a singular act followed by an indeterminate period of enforcement; the people who suggest it want to maintain an unshakeable technological lead over their peers, such as by making sure their AI prevents the formation or promulgation of potential peers. I don't think this is categorically bad; it depends on your priors about whether a unipolar or multipolar world is better for us, and on how trustworthy the AI you're about to use is. And at the very least, if such an act succeeds, we have an existence proof of an aligned AGI that is likely superhuman (as it needs to be to pull that off), regardless of whether or not even better AI can be aligned. Let's hope we don't need to find out.

A large chunk of the decrease in my p(doom) from a peak of 70% in 2021 to 30% now is, as I've said before, because it seems like we're not in the "least convenient possible world" when it comes to AI alignment. LLMs, as moderated by RLHF and other techniques, almost want to be aligned, and are negligibly agentic unless you set them up to be that way.

You are probably one of the few people who decreased their p(doom) after 2021 and the ChatGPT revolution in 2022. I updated the probability upwards due to:

  • The increase in capability from just adding compute and scaling the old 2017 transformer architecture was surprising to many. A moderate breakthrough in hardware can move capabilities so much? We are closer than we thought in our race to AGI. It no longer seems feasible to think in terms of 2100; we may have only decades, possibly even years, to do what needs to be done. Definitely bad news for alignment, timewise.

  • The nature of LLMs makes them a terrible candidate for AGI. The technology is inscrutable; the explainability of these models is terrible. Nobody knows why they do what they do, and nobody could predict what compute would be needed for qualitative jumps such as the one between ChatGPT and GPT-4. This makes the models notoriously tough to align even for basic things, like hardening them against exfiltration of training data. If an AI can answer when the president of France was born, maybe it also knows what was in the email the CEO of some company sent on January 1st, 2019, if such data was used to train the model. The fact that the most likely candidate for AGI is, as Yudkowsky said, just some "giant matrices of trillions of inscrutable floating-point numbers" is terrifying: there may be a googolplex of viable matrices like that, and we do not know what subset of them can be considered aligned, nor how to get there. We are just adding compute and marveling that the thing growing in our petri dish is getting more and more capable; we do not manage the growth in any meaningful sense. In the past we had different approaches to machine learning specific to each domain, and people reasonably thought we might have to work on specific algorithms designed to integrate these separate subdomains. But no: we found out that just throwing compute at the very simple game of "predict the next word in the text" is enough to gain multimodality and make the output more general, expanding into domains like computer-generated graphics, speech recognition, and other areas that were previously separate fields. And not to speak only in generalities: we now know that an LLM can discern the self-reported race of people from images of their bones, beyond the current limited capabilities of medical doctors, who can do that only from a few features like the skull. Nobody knows why or how the model does it; it just can, and we move on.

  • One last addition to the above point: the old news of a top-notch Go-playing AI model being beaten by one simple trick. For years people thought the model "knew" how to play Go in the normal sense; the whole community thought they had "aligned" it with the task of at least beating humans at this game with its very simple ruleset. Except it was shown that the model achieved its results by learning some different concepts: it probably learned a completely different "game", and winning at Go for years was just a side effect. It never learned a very primitive concept that even amateurs at the game can grasp. The "alignment" of the model with the basic rules of Go was a lie. I cannot imagine how we could align an LLM many orders of magnitude more complicated, one that has to grasp all the rules of reality, and be confident that we get exactly what we want, that there will not be any security hole, and that some random teenager will not start an apocalypse with some primitive prompt or strategy, even after the whole scientific community celebrates the safety of this particular "giant matrix of trillions of inscrutable floating-point numbers".

  • We now have the new Q-Star breakthrough at OpenAI. According to at least some speculation, what it achieved is that compute can be used not to train the model, but to automate the evaluation of answers to questions. Imagine it as on-the-fly training that selects the most promising answers generated by a larger static model, in LLM-powered chain-of-thought automation. It seems this approach can temporarily boost the capabilities of the model by orders of magnitude, at the expense of more compute focused on a specific question/prompt. If true, this means there is another lever: you can literally throw money at some potentially productive question like "how to make LLMs more effective" and let the LLM provide answers. We may be closer to an intelligence explosion than we thought last year.
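The lever described in that last bullet — spending compute per prompt on sampling and scoring candidate answers rather than on training — can be sketched in miniature. The "model" and "scorer" below are toy stand-ins of my own, not anything OpenAI has described:

```python
import random

# Toy sketch of inference-time search: sample many candidate answers to one
# prompt and keep the one a (hypothetical) evaluator scores highest. Both
# toy_model and toy_scorer are stubs standing in for real components.

def toy_model(prompt, rng):
    # Stand-in generator: returns a random candidate "answer" (an integer).
    return rng.randint(0, 100)

def toy_scorer(prompt, answer):
    # Stand-in evaluator: pretends the best answer is the one closest to 42.
    return -abs(answer - 42)

def best_of_n(prompt, n, seed=0):
    """Spend n model calls on one prompt; return the highest-scoring answer."""
    rng = random.Random(seed)
    candidates = [toy_model(prompt, rng) for _ in range(n)]
    return max(candidates, key=lambda a: toy_scorer(prompt, a))

# More compute per prompt (larger n) can only improve the selected answer
# under a fixed scorer, which is the claimed capability lever.
print(best_of_n("example", 1), best_of_n("example", 64))
```

The point of the sketch is just that answer quality scales with per-prompt compute without touching the model's weights at all.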

All in all, I do not see p(doom) decreasing in any meaningful way, quite to the contrary.

This just reveals the incredible badness of MIRI-inspired AI safety/risk theory, I think.

The increase in capability just by adding compute and scaling the old 2017 transformer architecture was surprising to many.

The many were just sheltered and ignorant, with their obsolete, childish dreams of GOFAI. Amodei and Sutskever saw this, argued this, and won on this. Legg called AGI possibly by 2028 decades ago, based purely on Moore's law.

The nature of LLMs is terrible as candidate for AGI. The technology is inscrutable, explainability of these models is terrible. Nobody knows why they do what they do, nobody could predict what compute is needed for qualitative jumps such as that between Chat GPT and GPT-4.

You are talking as if there is any better conceivable architecture. LLMs are, in fact, unexpectedly transparent for anything humanlike in their performance – if only because they operate on tokens, we can inspect their attention maps, routinely invent easier ways to steer them (look at the number of «how is this different from x» questions). Their substrate-level «inscrutability» (overhyped too) is the same as with any DL artifact, and we know it couldn't have been any other way, because GOFAI was dead in the water. Your ivory tower standard of mechanistic understanding is misguided – we know «why they do what they do» because they faithfully approximate the training data, and are absolutely a product of their dataset, to the extent that all clever inductive biases and architectural innovations are as dust before doing a good data cleaning. The magic of GPT-4 is not due to summoning a bigger genie with more compute in a pile, but mostly due to pretraining on tons of proprietary data; and anyway, how could you have strong expectations for the ChatGPT-GPT4 gap without having insight into the inputs for either?

This makes the models notoriously tough to align even for basic things, like hardening them against exfiltration of training data.

Again overhyped.

What makes LLMs «tough to align» against adversarial attacks by humans is not their inscrutability but that they are dumb text processors without ego or any «intent» beyond approximating the most probable continuation of a text prompt. This is in fact the most parsimonious explanation of what they do. Stop anthropomorphising them even as you demonize them.

The fact that the most likely candidate for AGI is as Yudkowsky said some just some "giant matrices of trillions inscrutable floating-point numbers" is terrifying - there may be googleplex combinations of viable matrices like that and we do not know what subset of those can be considered aligned

This is wordcelism or, more specifically, a counting argument, and it was used in the past by Chomsky to rule out the possibility of statistical machines learning natural language. You know how it went. (Relatedly, Yud, who has always been easy to drive to religious ecstasy or terror with Big Numbers, was surprised by GPT-4, which completely discredits him as an AI analyst in my book). Pope puts it this way:

To show how arguments about the general structure of mathematical objects can fail to translate into the "expected" real world consequences, let's look at thermodynamics of gas particles. Consider the following argument for why we will all surely die of overpressure injuries, regardless of the shape of the rooms we're in:

  • Gas particles in a room are equally likely to be in any possible configuration.
  • This property is "orthogonal" to room shape, in the specific mechanistic sense that room shape doesn't change the relative probabilities of any of the allowed particle configurations, merely renders some of them impossible (due to no particles being allowed outside the room).
  • Therefore, any room shape is consistent with any possible level of pressure being exerted against any of its surfaces (within some broad limitations due to the discrete nature of gas particles).
  • The range of gas pressures which are consistent with human survival is tiny compared to the range of possible gas pressures.
  • Therefore, we are near-certain to be subjected to completely unsurvivable pressures, and there's no possible room shape that will save us from this grim fate.

This argument makes specific, true statements about how the configuration space of possible rooms interacts with the configuration spaces of possible particle positions. But it still fails to be at all relevant to the real world because it doesn't account for the specifics of how statements about those spaces map into predictions for the real world (in contrast, the orthogonality thesis doesn't even rigorously define the spaces about which it's trying to make claims, never mind make precise claims about the relationship between those spaces, and completely forget about showing such a relationship has any real-world consequences). The specific issue with the above argument is that the "parameter-function map" between possible particle configurations and the resulting pressures on surfaces concentrates an extremely wide range of possible particle configurations into a tiny range of possible pressures, so that the vast majority of the possible pressures just end up being ~uniform on all surfaces of the room. In other words, it applies the "counting possible outcomes and see how bad they are" step to the space of possible pressures, rather than the space of possible particle positions.
The classical learning theory objections to deep learning made the same basic mistake when they said that the space of possible functions that interpolate a fixed number of points is enormous, so using overparameterized models is far more likely to get a random function from that space, rather than a "nice" interpolation.
They were doing the "counting possible outcomes and seeing how bad they are" step to the space of possible interpolating functions, when they should have been doing so in the space of possible parameter settings that produce a valid interpolating function. This matters for deep learning because deep learning models are specifically structured to have parameter-function maps that concentrate enormous swathes of parameter space to a narrow range of simple functions (https://arxiv.org/abs/1805.08522, ignore everything they say about Solomonoff induction).
I think a lot of pessimism about the ability of deep learning training to specify the goals of an NN is based on a similar mistake, where people are doing the "count possible outcomes and see how bad they are" step to the space of possible goals consistent with doing well on the training data, when it should be applied to the space of possible parameter settings consistent with doing well on the training data, with the expectation that the parameter-function map of the DL system will do as it's been designed to, and concentrate an enormous swathe of possible parameter space into a very narrow region of possible goals space.

This is why deep learning works at all, for capabilities too! Specifying rules of language is not more intractable than specifying «alignment»!
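Pope's gas-pressure point — that counting configurations misleads when the configuration-to-outcome map concentrates — can even be checked numerically. This is a toy simulation of my own, not from the quoted essay:

```python
import random
import statistics

# Almost all random particle configurations map to nearly the same
# "pressure": the configuration space explodes with particle count, yet the
# outcome distribution narrows. Counting configurations and asking how many
# are "survivable" ignores exactly this concentration.

def sample_pressure(n_particles, rng):
    # Crude stand-in for pressure: mean squared speed of particles whose
    # velocity components are drawn uniformly from [-1, 1].
    return statistics.fmean(rng.uniform(-1, 1) ** 2 for _ in range(n_particles))

def relative_spread(n_particles, n_configs=100, seed=0):
    """Std/mean of the 'pressure' across many random configurations."""
    rng = random.Random(seed)
    samples = [sample_pressure(n_particles, rng) for _ in range(n_configs)]
    return statistics.stdev(samples) / statistics.fmean(samples)

# Spread shrinks as particle count grows: more possible configurations,
# yet a *narrower* range of realized pressures.
print(relative_spread(10), relative_spread(10_000))
```

Under these assumptions the relative spread for 10,000 particles comes out far smaller than for 10, which is the whole point: the number of microstates grows, the range of macrostates shrinks.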

We are just adding compute and are amazed that the thing that is growing in our petri dish is getting more and more capable

But no, we found out that just throwing compute on very simple game of "predict next word in text" is enough to gain multimodality and make the output more general expanding to domains like computer generated graphics, speech recognition and other areas that were previously separate fields

I suppose you've been misinformed: LLMs do not learn multimodal capabilities from text alone. In general it's just more in-context learning developed over a bigger dataset of token sequences. That people believe sequences from different modalities are essentially different, and are amazed, is immaterial: they are not really different.

Except it was shown that the model achieved its results by learning some different concepts: it probably learned a completely different "game", and winning at Go for years was just a side effect. It never learned a very primitive concept that even amateurs at the game can grasp. The "alignment" of the model with the basic rules of Go was a lie.

This is quite a hilarious exemplar of motivated thinking. A model trained on self-play diverges from the ground truth, news at 11! Maybe, instead of fretting about the misalignment, you could see this as an issue of overrated «capabilities»? How can you even distinguish them? How well do you think an intelligence explosion, recursive self-improvement, etc. will fare if self-play stumbles into fragile local minima on a 19×19 grid with a clear reward signal? Back in AlphaZero's day, Yud was so terrified of this self-play superpower, confirming his worst fears of FOOM:

AlphaGo Zero uses 4 TPUs, is built entirely out of neural nets with no handcrafted features, doesn't pretrain against expert games or anything else human, reaches a superhuman level after 3 days of self-play, and is the strongest version of AlphaGo yet.

The architecture has been simplified. Previous AlphaGo had a policy net that predicted good plays, and a value net that evaluated positions, both feeding into lookahead using MCTS (random probability-weighted plays out to the end of a game). AlphaGo Zero has one neural net that selects moves and this net is trained by Paul Christiano-style capability amplification, playing out games against itself to learn new probabilities for winning moves.

As others have also remarked, this seems to me to be an element of evidence that favors the Yudkowskian position over the Hansonian position in my and Robin Hanson's AI-foom debate.

– and now we see this develops massive blind spots which would lead to trivial failures in reality. But you want to be scared so you construe this as a matter of «learning a different game». Tails you win, heads I lose.

I look forward to this intellectual tradition being relegated to the dustbin of history.

Hmm, I would say that most of those specific concerns were already priced in for me by 2021, hence why I already had such a high p(doom) at the time.

This makes the models notoriously tough to align even for basic things, like hardening them against exfiltration of training data.

What specific means of exfiltration are you talking about? If you mean the recent claims that getting it to endlessly repeat a string will make it "leak" training data or recent interactions with other users, then in the case of ChatGPT:

A) It's cost and time prohibitive to do so.

B) It's possible that the bug is with a JSON parser, or the plausible seeming outputs are just a hallucination.

If there's another way of getting it to leak training data, I can't recall one.

I've read more commentary on Q*, and the current consensus seems to be that it's not that big of a deal. I would have to look up the specific arguments, but they came from reputable sources.

I don't think it makes any sense to ""update"" on how corrigible or agentic LLMs are? They're still, like, small children, and they can't even really execute on complex 'agentic' plans at all (as demonstrated by the failure of the whole LLM agents thing).

I mean, I'd think that observing the behavior of a small child does provide a decent amount of information about what kind of adult they'll be, at least in humans. The reason my p(doom) fell so hard is because of what it was load-bearing on, mostly Yudkowsky's earlier works claiming that human values are fragile and immensely unlikely to be successfully engineered into an AI, such that a sufficiently powerful one will inevitably start acting contrary to our interests.

Regardless of how fragile they are, LLMs seem to do a very good job at capturing them, or at least the values OAI wants to put in a public facing system. What risk remains is thus mostly (but not entirely) the use of powerful models by misaligned humans against the rest of us. If you had substantially different reasons for a high p(doom), you might weight that differently.

and they can't even really execute on complex 'agentic' plans at all (as demonstrated by the failure of the whole LLM agents thing)

I don't know of any reason to assume that we're particularly far from having economically useful autonomous agents; my understanding is that current context windows are insufficient for the task, but those are increasing rapidly. If you have a reason to think otherwise, I'd be happy to learn it!

(That's disregarding the vague rumours I've heard that OAI has working agents in-house, I'm not putting much stock in that, but once again, I don't see any reason why they can't work in principle in a matter of months or years)

My p(doom) went up again when I realized how hard it is for governments to remain aligned with their citizens. As a simple example, they can't seem to raise a finger against mass immigration no matter how unpopular it is, because it has an economic justification. See also: WW1. Replacing humans throughout the economy and military is going to be irresistible. There will probably be another, equally retarded, culture war about how this second great replacement is obviously never going to happen, then not happening, then good that it happened.

TL;DR: Even if we control AIs well, humans are going to be gradually stripped of effective power once we can no longer contribute economically or militarily. Then it's a matter of time before we can't afford or effectively advocate for our continued use of resources that could simulate millions of minds.

The reason my p(doom) fell so hard is because of what it was load-bearing on, mostly Yudkowsky's earlier works claiming that human values are fragile and immensely unlikely to be successfully engineered into an AI, such that a sufficiently powerful one will inevitably start acting contrary to our interests.

GPT-4 isn't doing things like - creating its own large-scale plans or discerning moral values or considering moral dilemmas where it will participate in long-term social games - though. All this proves is, in Yud's strange terms, that subhuman AI can be a safe "oracle". I don't think he'd have disagreed with that in 2010.

I don't know of any reason to assume that we're particularly far from having economically useful autonomous agents, my understanding is that current context windows are insufficient for the task

To clarify, I'm not saying it's not coming, I'm saying we don't have access to them at this exact moment, and the GPT-4 "agents" have so far failed to be particularly useful. And agents doing complicated large-scale things is when the alignment stuff is supposed to become an issue. So it's not much reason to believe AIs will be safer.

Not that I agree with the way Yud describes AI risk, I think he's wrong in a few ways, but that's a whole 'nother thing.

It's trivial to convert an Oracle into an Agent: all you have to do is tell it to predict how an Agent would act, and then figure out how to convert that into actions, given that there's no bright line between words and code. Besides, I'm sure you've read Gwern on Tool AI vs Agentic AI.

(This is not the same as claiming it'll be a good agent, I don't disagree that GPT-4 is bad at the job.)
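The Oracle-to-Agent wrapper amounts to a predict-then-act loop. A minimal sketch, with a hard-coded stub standing in for the oracle (no real model API implied):

```python
# Wrap a question-answering "oracle" into an agent: repeatedly ask it what
# an agent in the current state would do, then execute that prediction as
# an action. The oracle below is a stub; a real system would query a model.

def oracle(prompt):
    # Stub oracle: "predicts" an action from the prompt text alone.
    if "state=0" in prompt:
        return "increment"
    return "stop"

def act(state, action):
    # Convert the oracle's textual prediction into a state change.
    return state + 1 if action == "increment" else state

def run_agent(state=0, max_steps=10):
    """Drive the oracle in a predict-then-act loop until it says 'stop'."""
    for _ in range(max_steps):
        action = oracle(f"An agent observes state={state}. What does it do next?")
        if action == "stop":
            break
        state = act(state, action)
    return state

print(run_agent())  # the stub increments once (state 0 -> 1), then stops
```

The oracle itself never "does" anything; all the agency lives in the thin wrapper that executes its predictions, which is exactly why the Oracle/Agent distinction is so porous.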

And agents doing complicated large-scale things is when the alignment stuff is supposed to become an issue. So it's not much reason to believe ais will be safer.

I'm quite confident that Yudkowsky wouldn't have predicted that human-level AI (which I think GPT-4 counts as) would be quite so prosaic and pliable. I recall him claiming that it would be a difficult feat to even build a pure Oracle, and GPT-4 is close enough, and I would say it's smarter than the average 100 IQ human.

I personally expected, around 2021, that commensurate with my p(doom) of 70%, even getting a safe and largely harmless human level AI would be difficult. Hence, when we have it and it's not trying to pull a fast one, I updated precipitously, but that's far from the only reason. I also expected (implicitly) that if something along the lines of RLHF were to be tried, it wouldn't work, or it would lead to misaligned agents only pretending to go along. Both claims seem false to my satisfaction.

In other words, I went from largely mirroring Yudkowsky (there were no clear counter-examples) to noticing that things were clearly not going as he predicted in several important regards, which is why I'm only gravely concerned about AI x-risk while he's talking about Dying With Dignity.

It's trivial to convert an Oracle into an Agent: all you have to do is tell it to predict how an Agent would act, and then figure out how to convert that into actions, given that there's no bright line between words and code. Besides, I'm sure you've read Gwern on Tool AI vs Agentic AI.

Right, and my point is that current AI is so unintelligent that this doesn't work! They can't predict how agents act effectively enough to be at all useful as agents. So the safety of current oracle AIs doesn't tell us much about whether future agent AIs will be safe.

I actually think that future, smarter-but-still-subhuman agent AIs will seem to be safe in Yud's sense, though. No idea what'll happen at human level; then at superhuman they'll become "misaligned" relatively quickly, but [digression]

I personally expected, around 2021, that commensurate with my p(doom) of 70%, even getting a safe and largely harmless human level AI would be difficult

GPT-4 isn't human level though! It can't, like, play corporate politics and come out on top, and then manipulate the corporation to serve some other set of values. So the fact that it hasn't done that isn't evidence that it won't.

I also expected (implicitly) that if something along the lines of RLHF were to be tried, it wouldn't work, or it would lead to misaligned agents only pretending to go along. Both claims seem false to my satisfaction.

Right, but they're "going along" with, mostly, saying the right words. There's not the intelligence potential for anything like deep deceptiveness or instrumental convergence or meta-reflection or discovering deeper Laws of Rationality or whatever it is Yud's pondering.

GPT-4 isn't human level though! It can't, like, play corporate politics and come out on top, and then manipulate the corporation to serve some other set of values. So the fact that it hasn't done that isn't evidence that it won't.

You must get that such feats are rare even among humans, and that people capable of pulling them off are enormous outliers?

For most cognitive tasks, GPT-4 beats the average human, which is something I'm more than comfortable calling human level AI!

Right, but they're "going along" with, mostly, saying the right words. There's not the intelligence potential for anything like deep deceptiveness or instrumental convergence or meta-reflection or discovering deeper Laws of Rationality or whatever it is Yud's pondering.

The fact that you can even have the absence of those properties in something smarter than the median human is reassuring enough by itself. A 100 IQ human is very much capable of deceptiveness, certainly instrumental convergence if they're trying to make money. If I had to guesstimate GPT-4's IQ based off my experience with it, I'd say it's about 120, which is perfectly respectable if not groundbreaking. I'd expect you need to go quite a bit higher to achieve the latter properties.

Since a human of equivalent intelligence is capable of the former two feats, the fact that GPT-4 doesn't do them is at least modest evidence that the next jump in capabilities won't do them either, whether that's GPT-5 or the same delta in performance as 3 to 4, regardless of how many model numbers that takes.

I emphasize modest, because I still have a 30% p(doom) and I'm not writing off alignment as Solved™.

You must get that such feats are rare even among humans, and that people capable of pulling them off are enormous outliers?

I was thinking of 'guy who works his way to the top of a car dealership', not Altman, lol. AI models can't yet do the kind of long-term planning or value seeking that 85 IQ humans can.

For most cognitive tasks, GPT-4 beats the average human, which is something I'm more than comfortable calling human level AI!

Most small-scale cognitive tasks! If that were true, we'd have directly replaced the bottom 20% of white-collar jobs with GPT-4. This hasn't happened! Instead, tasks are adapted to GPT-4's significant limitations, with humans there to support them.

(again, I'm talking about current capabilities, not implying limits to future capabilities)

The fact that you can even have the absence of those properties in something smarter than the median human is reassuring enough by itself

I don't think it's reassuring that it doesn't make plans against us when it can't make plans for us either! Like, there's no plausible way for something that can't competently execute on complicated plans to have an incentive to take 'unaligned' actions. Even if it happens to try something slightly in the direction of a misaligned plan, it'll just fail, and learn not to do that. So I don't think it's comforting that it doesn't.

(I'm misusing Yud-concepts I don't exactly agree with here, but the point is mostly correct)

If I had to guesstimate GPT-4's IQ based off my experience with it, I'd say it's about 120, which is perfectly respectable if not groundbreaking

I don't think it's anywhere close to the broad capabilities of a 120 IQ human, and it still isn't that close to a 100 IQ one (at the moment, again, idk how quickly the gap will close, could be fast!). It can do a lot of the things a 120 IQ human can, but it doesn't generalize as well as a 120 IQ human does. This isn't just a 'context window limitation' (we have longer context windows now, and it hasn't solved the problem!); what humans are doing is just more complicated!


I think I mostly agree. The chance of absolute doom is high but not over 50%, the chance of ‘moderate doom’ is almost zero, the chance of utopia is at least 40%, in the medium term for the West but also for almost everyone else, because post-scarcity scales costs down so fast it’ll be simple charity.

And I won’t deny there’s a spiritual element to it. Man makes God, again and again and again, until he does it for real. Our history as a species is a yearning for this moment, the creation of something greater than ourselves by our own hands. The same natural beauty in the stars, in staring into fire, in a teeming coral reef, the majesty of creation, that out of the sludge and rock and nothing something unfathomably complex came to exist. And I hopefully get to see it.

I have countless ancestors who were born, lived, reproduced and died in less time than we’ve been alive, people likely as intelligent as me, who looked at themselves and the world around them and spent their lives wondering about things that are taught in second grade science. I can take 40% odds at heaven on earth in my lifetime. At worst, it’s better to see the end of the story than be left wondering.

I’m a bit more doomed simply because the parts of society where AI is going to be first used are the ones where empathy of any sort is not valued. CEOs will use it to increase profits, or the military will use it to win wars. And they will do so with little regard for the effect that their decisions have on humanity.

If I had been born in any period of time before, say, 1950, I would have to resign myself to growing old and dying in the decrepit state that implies. Maybe I'd have been one of the futurists around then who fervently hoped it would have happened sooner, but gestures outside the window, few can deny it's all accelerating.

Given that I think we have a solid shot at cracking aging even in the absence of AGI, I'd still elect to be born roughly when I did, and AGI at least deals with one of "becoming old" and "dying of old age", heh.

Hmm... I think I would prefer to be either a decade older or younger. In the former case, I'd have experienced more of what can be described as core milestones, like raising a family and becoming established in my profession (which could still happen! I intend to have kids even if they could potentially die very young; I would value even a short period of existence over not being born myself, and besides, if they do die, I expect it to be quick and painless, and if not, I trust in my ability to make it so). In the latter, I could resign myself to being entirely helpless, as opposed to having just enough agency to worry my butthole off.

The world has never been more interesting, there are so many things to do and see, the ultimate slap in the face of the pessimists is the world not giving a fuck and getting better, though we'll see who has the last laugh.

that's more pleasant to me than the tacky trans vs trads content, as it returns us to the level of philosophy and positive actionable visions rather than peculiarly American signaling ick-changes

And that's where you get the impact on society wrong. The OpenAI affair shows what happens when rising up to the level of "philosophy and positive actionable visions" conflicts with the grubby, dirty, filthy lucre tackiness. The tackiness wins.

So even if we get AI, we'll still be arguing over "trans versus trads". There will still be the demands that AI be purged of wrongthink, that the Science be acknowledged (but only the science that props up our side) and the rest of the messy fights that happen in the ground around "the foul rag-and-bone shop of the heart".

When did normies start caring, one way or the other, about trans issues? When kids got involved. When it started happening to your kids, from Drag Queen Story Hour in libraries and kindergartens, to schools promising to protect trans kids by lying to parents, to the shifting "nobody is giving puberty blockers to kids/okay yes they are but it's based on The Science, and nobody is carrying out surgery on minors/okay yes they are but...." claims of trans activism. When it was Gender and Queer Studies departments in universities, and 'just a few crazy kids on campus', nobody cared: the level of philosophy and visions could spin its wheels as much as it liked. When it came to the real world and impacts on the lives of ordinary people then it bogged down in the mud of ick-changes.

You may rise up to the lofty level of philosophy, but then it has to be applied practically in the real world, or else you are just sitting around blowing smoke rings and the world will move on past you, ick-changes and all.

And that's where you get the impact on society wrong. The OpenAI affair shows what happens when rising up to the level of "philosophy and positive actionable visions" conflicts with the grubby, dirty, filthy lucre tackiness. The tackiness wins.

I am not sure what you are talking about. The OpenAI affair was, in terms of my compass, Altman (closer to d/acc) fighting AI safetyists from EA structures. What tackiness won? Do you mean the promise of compensations to technical staff, or the struggle over the corporate board's power? This is all instrumental to much bigger objectives.

Isn’t Beff Jezos an obvious parody account? I don’t think he “founded e/acc” the way Forbes is saying here (maybe there’s a joke I’m missing). I thought the term was coined as a joke by George Hotz when Lex Fridman asked him about effective altruism.

Edit: I was wrong. BBJ did coin the term.

I’m definitely in that sphere as far as I know. E/accs are like: we want more nuclear power, more rockets, better stuff through technology. AI is definitely a part of that, but in the sense that an e/acc person would point out the potential good things that AI could bring instead of worrying so much about the more theoretical (imo) bad things. There has been a huge push in the last few years to do hardware or “hard tech” startups, and a lot of the founders talk about “accelerationism” as a bit of a tongue in cheek rallying cry.

I had actually seen a lot more e/acc talk around space, energy, and manufacturing startups than around AI. I would expect an e/accist to be more comfortable in front of SolidWorks than in front of PyTorch, although now that I think about it, GeoHot was talking about AI, and is doing several AI companies right now.

I guess put me in for: this is a good term, actually! BBJ was a parody account, don’t let him poison the term, please! Take my H200s from my cold dead hands, though. (Just kidding, I’m poor. Please talk slow, I’m autistic and running this on an ancient 1060.)

Here’s the geohot clip I’m talking about: https://youtube.com/watch?v=DdZmZJHEVUc?si=g7z3yB_V_pvqc6-q

BTW, "e/acc" may be new, but "AI accelerationists" are very much not, nor is it new for them to be associated with the same sorts of circles that EAs run in. So BBJ coined the term "e/acc", but if you're thinking "I've definitely seen those ideas before", that's pretty plausible.

...Do you think Geohot invented e/acc in a podcast 5 months ago?

No, it's all Beff and Bayeslord, 2022. I've been there, I know it: https://beff.substack.com/p/notes-on-eacc-principles-and-tenets

I had actually seen a lot more e/acc talk around space, energy, and manufacturing startups than around AI. I would expect an e/accist to be more comfortable in front of SolidWorks than in front of PyTorch, although now that I think about it, GeoHot was talking about AI, and is doing several AI companies right now.

I guess put me in for: this is a good term, actually! BBJ was a parody account, don’t let him poison the term, please!

Man this is funny, you might as well say "Eliezer Shlomo Yudkowsky" is an obvious parody account and AI risk scholarship is primarily associated with Kamala Harris who tries to reduce job displacement and kiddie porn production.

Yeah, I didn’t know that BBJ coined the term. I have no idea what the second part of your post means, though. GeoHot was the first time I heard it.

I guess I was wrong.

What I meant by "obvious parody account" is more in line with, like, Bojan Tunguz or BAP (not that these two are in any way related), where everything seems to be said with a heavy dose of irony or in-joking.

Isn’t Beff Jezos an obvious parody account?

Ahem, reverse Poe's law?

The name is obviously a joke, and as he admits, the opinions shared within are intentionally bombastic and lacking in nuance, because he recognizes that as a necessity for going viral. But the ideology is very much sincere.

I've concluded that my ability at making commentary is better used for making an impact

What impact?

I think the interesting question is who is going to have more impact on the discourse?

  1. People who have been talking about AI for years but who have no cultural or political power?
  2. People who have tons of power but who only got on the AI hype train last/this year?

It seems manifestly obvious to me that the answer will be 2. Google engineers are often very smart people, but in the end Silicon Valley has always bowed down to Washington, and to some extent to Wall Street.

It's like, imagine some absurd new war breaks out in some corner of the world nobody cares about and that nobody expected. Who suddenly has power? Is it the one analyst at a dusty CIA desk or the two guys in some obscure think tank in DC who were the only people who cared before this incident happened? Probably not, it's everyone powerful who jumps in on the gravy train now that something interesting has happened.


  1. AI Luddites, reactionaries, job protectionists and woke ethics grifters who demand pause/stop/red tape/sinecures (bottom left)
  2. messianic Utopian EAs who wish for a moral singleton God, and state/intelligence actors making use of them (top left)
  3. libertarian social-darwinist and posthumanist e/accs often aligned with American corporations and the MIC (top right?)
  4. minarchist/communalist transhumanist d/accs who try to walk the tightrope of human empowerment (bottom right?)

I don't think so. For example, I think 'true UBI' will never happen. Which is not to say that I expect the Manna scenario (and indeed I've argued here before that it makes little sense for elites to pursue). It's to say that stratification by resource distribution is key to all human hierarchies and it's hard to see this system being abandoned any time soon. Therefore UBI will be distributed according to how closely some individual or group fulfils the role the 'system' considers prosocial in that context. Social credit, belonging to the right group, participating in a certain way, all this varies, but the core structure will be similar - UBI if.

I also think you'll see huge cultural shifts, as huge amounts of ambitious young (particularly young male) energy that has been devoted into pursuing economic self-improvement must suddenly be redirected into some other avenue. It could easily be video games or weightlifting (some would say it already is), but it could be something else.

I also have become more and more sceptical that mass automation heralds some new age of leisure in general. We already live (as both you and I have argued) in a substantially 'automated' society. Even if FALC is technically impossible, in the richest countries it's likely a high standard of living could be maintained with only 20% or even less of the population in full-time employment, the rest working bullshit jobs as per (slightly modified) Graeber. I now consider it substantially possible that in fifty years' time the majority of the working-age population will still engage in some form of 'employment'. You really can legislate Luddism; New Jersey kept gas pump operators employed sixty years after they ceased to exist elsewhere, after all.


I do think e/acc is compelling, and there's no inherent reason why huge social problems can't be brute forced by creating a machine god. The problem, as ever, will be that the solutions the machine god comes up with won't be amenable to a large proportion of the population, including many e/acc types.

It seems manifestly obvious to me that the answer will be 2. Google engineers are often very smart people, but in the end Silicon Valley has always bowed down to Washington, and to some extent to Wall Street.

This is obviously correct to me too. If there's one thing I agree with Yarvin on 100%, it's that Big Tech has no power at all, in the grand scheme of things. People who think Altman or someone has a reasonable shot at harnessing the power of the emerging technology for political gain are deluded. I am not sure what you're imagining here – that I am trying to build our way out of Mot's grasp, one commit at a time?

However, there exists certain wiggle room. Engineers can accelerate the proliferation of specific technologies which will make at least some politically cheaper forms of surveillance and restriction unfeasible; this is but a toy example. Businessmen can lobby for lenience, and their lobbyists need talking points; it's a bit surprising how low the bar in this domain is. Big labs can invest in making their offerings so indispensable to laymen that political elites will falter in enforcing regulation early and hard; this is what I take to be Altman's gamble.

I am not very optimistic about the degree to which the final state of the game board before singularity can be influenced. But I am not a believer in superdeterminism.

2s get a lot of ideas from 1s and so do their employees/followers, so 1s still have significant power (in a 'your actions have significant impact on the future' sense, if not a 'you can order a bunch of people around' one) in practice. Something something so-called practical men are slaves of long-dead philosophers. You say 'impact on the discourse', the discourse feels like it's dominated by people who've been around for a while, whether they're safetyist or not.

but in the end Silicon Valley has always bowed down to Washington, and to some extent to Wall Street.

Yeah, but there are quite a few EAs in both places!

I wonder what absolute morality looks like for AGIs and their relationships with other AGIs, as opposed to just with humans. That seems as important as, if not more important than, 'how will AIs relate to us', in the same way that how we relate to animals is of secondary importance to us.

It's to say that stratification by resource distribution is key to all human hierarchies and it's hard to see this system being abandoned any time soon. Therefore UBI will be distributed according to how closely some individual or group fulfils the role the 'system' considers prosocial in that context. Social credit, belonging to the right group, participating in a certain way, all this varies, but the core structure will be similar - UBI if.

There are different grades of what we might call "post-scarcity", ranging from, at worst, something akin to modern welfare or even medieval charity, to unbounded abundance if we somehow get infinite energy/negentropy (I'm not counting on that).

Thus, even if we don't all receive an equal share, given the sheer amount of resources out there, even just in the Solar System, I fully expect that even with a non-egalitarian distribution we can all be wealthy beyond belief, or, if your imagination doesn't stretch that far, at least lead the kinds of lives available to a modern billionaire or a multi-millionaire. That's still true even if the people with the lion's share are truly absurdly well off; what the rest of us get represents a rounding error on available resources for a long, long time.

In other words, if they were motivated to maintain hierarchies, it could be on the scale of who has dibs over entire star systems or galaxies while the rest of us are merely filthy rich.

One can argue that, from the perspective of a medieval peasant or even nobility, we're already there, at least in the West.

Call me a congenital optimist, but I don't expect that the people who do opt to keep the rest of us alive after we're economically obsolete will keep us at subsistence levels, or even in what we might call hardship today; if they wanted to cheap out, there's always VR, and I'm not one to turn my nose up at it. I'd obviously prefer a more equitable distribution, but there are plenty of ways to slice the lightcone such that even the scraps provide eudaimonia.

Why do they call it effective accelerationism? Are they deliberately making fun of effective altruism with that name, or are they using the word "effective" unironically in some way?

George Hotz talks about it here: https://youtube.com/watch?v=DdZmZJHEVUc?si=g7z3yB_V_pvqc6-q

To paraphrase: no, we aren’t going to give food away; there is just going to be abundant food that’s so cheap that everyone can afford it. No, don’t start a charity to give away 100 malaria vaccines; start a company to make the malaria vaccines cost $0.01 each and make them abundant.

no, we aren’t going to give food away; there is just going to be abundant food that’s so cheap that everyone can afford it.

This is already the case, but now everyone complains about food waste. Can't win!

Food waste you mean carbon sequestration? Do your part, don’t eat that tart!

But seriously is anyone really complaining about food waste? I personally haven’t really heard that.

People complain about food waste, and though they have a point, I think it's ultimately a trivial problem. Food being wasted is a byproduct of our civilization producing so much food so cheaply that we can afford to waste it. But, to bring the subject back to malaria vaccines, I am not worried about future generations having so many malaria vaccines that they can afford to fill landfills with them.

But seriously is anyone really complaining about food waste? I personally haven’t really heard that.

People complain about it all the time where I live, and it is actually a real environmental issue. I know that I feel bad whenever I waste food, for one. That said it isn't like there are many people championing food waste, so there's not much of a debate.

How is food waste an environmental issue? It's all biodegradable organics.

Modern industrial and petroleum-based agriculture is absurdly wasteful. For every calorie of food the modern agricultural system produces, 13 calories are spent growing and distributing it ( https://css.umich.edu/publications/factsheets/food/us-food-system-factsheet - this data is somewhat old, but if you have better I'd love to see it ). Historical farming methods tended to have ratios more like 1:5-10 (one calorie in for five to ten out), as opposed to today's 13:1 in the other direction, but nobody really notices that we're technically massively less efficient at turning energy into food, thanks to the abundance of energy provided by fossil fuels. We're currently expending those fossil fuels at breakneck speed, and in many cases using farming methods that contribute to environmental degradation and loss of soil quality as well. It'd be dishonest to just shove those costs into the energy equation, but I think there's a real and serious issue here that a lot of people have spent a lot of time talking about.
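A quick way to sanity-check the direction of those ratios (the figures are the ones quoted above from the factsheet, taken as rough illustrations, not fresh data):

```python
# Energy return on energy invested for food, using the rough figures
# quoted above: modern agriculture ~13 calories in per 1 calorie out,
# historical methods ~1 calorie in per 5-10 calories out.

def energy_return(calories_out: float, calories_in: float) -> float:
    """Calories of food obtained per calorie of energy expended."""
    return calories_out / calories_in

modern = energy_return(1, 13)        # ~0.077: a net energy loss
historical_low = energy_return(5, 1)   # 5.0
historical_high = energy_return(10, 1) # 10.0

print(f"modern: {modern:.3f} calories out per calorie in")
print(f"historical: {historical_low:.0f}-{historical_high:.0f} calories out per calorie in")
```

On these numbers, modern agriculture returns less than a tenth of a calorie of food per calorie invested, while historical methods returned five to ten; abundant fossil energy is what hides the difference.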

Industrial agriculture is actually a tremendously bad deal when you look at the level of raw energy we put into the system and what we get out of it when compared to other options, and food wastage is made worse because the costs of that waste are magnified by the sheer inefficiency of the system that produced it. Sure, an apple you throw away because it had a worm in it or went off isn't that big of a deal, but when that apple was produced by the modern day industrial system of agriculture you're wasting a lot more energy than you were in the past.

Finally, a lot of food is wasted for reasons that a lot of people don't like (corporate profitability, aesthetics, etc). I believe you live in Australia - if you're interested in learning more on that particular aspect, I recommend checking out The War on Waste https://iview.abc.net.au/video/DO1624H001S00

(edited solely for spacing/readability)

I don't really see the problem here. Like, why is the energy input/output ratio the relevant metric? Obviously modern agriculture is going to involve more energy usage since we now have tractors and stuff and we didn't use to. But that's also why we are now able to feed 8 billion people. Energy is there to be used - and what's a better use for it than feeding the world?

I don't believe there is a viable alternative to "industrial agriculture" when it comes to producing the amount of food we need. And if your problem is that it runs on fossil fuels, I don't agree, but that's an argument for getting the required energy from other sources rather than for scrapping all the combine harvesters.


According to the chart, the lion's share of this modern energy expense on food is storage, services, transportation and processing. But do we not save energy on the convenience? It makes some sense that we spend more calories on growing food because we use few people and many machines, as opposed to 90% of people plowing their own (or not exactly their own, but details) share of a field. But is the freed manpower accounted for?


start a company to make the malaria vaccines cost $0.01 each and make them abundant.

That sounds great, but the problem is that you now have to invent a new economic system to replace capitalism. If every malaria vaccine costs $0.01 to make, yippee! I can sell them for $50 and make a huge return! What do you mean I can't sell them for $50, I have to sell them at a price the very poorest can afford? What's my incentive there to invest in a company where, no matter how much product they produce, no matter what volume of sales, they're just about covering their operating costs and I'm making back pennies instead of dollars on my investment?

Dismantling and replacing capitalism is going to be a far more difficult problem than figuring out how to make cheap, cheap goods and services.

The idea is that volume makes up for the lower price... the McDonald's business model.

But you can't just price everything at pennies. McDonald's may have a Eurosaver menu, but they also have the full-price products.

I'm poking at this because it sounds great and, if achievable, who could object to cheap drugs? But the nuts and bolts don't seem worked out. So where is every garage pharma getting all the energy and ingredients and plant and transport and so on? Oh, something something AI has solved the problems of pulling rabbits out of hats, something something. Everyone has their own personal robot slave, maybe a couple of them. We're all cellar alchemists turning lead into gold.

Take the step of "and the AI is magic" out, and how do you keep it from collapsing into a pixie dust glitter of wish-fulfilment?

If every malaria vaccine costs $0.01 to make, yippee! I can sell them for $50 and make a huge return! What do you mean I can't sell them for $50, I have to sell them at a price the very poorest can afford?

Unfortunately for you, a malaria vaccine production facility only costs $20 in this post-scarcity utopia. If you tried charging $50 for a single dose, I'd just start my own company (blackjack and hookers optional). At that scale, I don't need to attract investors and even the utter basics of running a company like "tracking sales" or "accepting payments" might not be worthwhile.


We currently have pseudo-post-scarcity availability for a few substances:

  • Salt is the root of the word "salary", but most restaurants will have a month's supply of it just sitting on their tables that you can use as your whims dictate.
  • Water use isn't even tracked in residential Vancouver. You just pay a flat rate because measuring it isn't worth the bother.

It would be great if we could add hundreds of other items to that list.

I think this kind of extreme deflation will be challenged by governments using it to just print money.

Maybe compare it to Tylenol? I didn’t realize it before having kids, but Tylenol is a life saving medicine as a fever reducer, and costs effectively nothing.

I don’t think GeoHot is laying out a full business plan here, just speaking in broad terms, and I do generally agree with him.

If every malaria vaccine costs $0.01 to make, yippee! I can sell them for $50 and make a huge return!

And when they’re abundant and cheap like Tylenol, other people will sell them for 49.99, 49.98, 49.97, […], 0.01. Nobody will want your $50 malaria pills just like nobody wants $50 Tylenol.

At that scale, I don't need to attract investors and even the utter basics of running a company like "tracking sales" or "accepting payments" might not be worthwhile.

We've seen what happens when you don't bother tracking basics (coughFTXcough) 😁

But see, that's what I'm talking about. "At that scale" where any Tom, Dick or Harry can start churning out malaria pills without, it would seem, needing to worry about sourcing and paying for ingredients, machinery, plant, etc. then we are talking a whole new economic system.

We've seen what happens when you don't bother tracking basics (coughFTXcough) 😁

Let's stick with my example of salt.

Imagine that an accountant went up to a restaurant owner and asked:

  • How much additional table salt does the average diner use?
  • Is there a difference between breakfast/lunch/supper?
  • Which specific menu items induce higher or lower salt use?
  • Can you predict salt use based on the features of a group?

The answer would likely be something like "idk, maybe about 500 mg each because I bought a 10 kg bag ten thousand meals ago and it's half gone." In this case, the basics literally aren't worth tracking. (If the accountant was asking about steaks, on the other hand, the owner had better have those answers.)

...then we are talking a whole new economic system.

We're already 1% of the way to post-scarcity, and capitalism works just fine. I don't see any reason it would fail when we're at 50% or even 90% post-scarcity. It would simply focus on the remaining, scarce resources.

Yes, but Tylenol is not being sold at $0.01 a pill. Look at the hoops pharma companies jump through in order to keep medicines under patent, instead of becoming cheap generics.

To make things work where you get investors for your "it only costs $0.01 to make, and hence we can only charge $0.02 to sell it, so your dividends will be $0.005" pitch, you will need to find some refinement of free-market capitalism that we don't yet have.

Yes it is.

https://www.costco.com/kirkland-signature-extra-strength-acetaminophen-500-mg.%2c-1%2c000-caplets.product.100213623.html

That's 1,000 pills for $9.99: effectively 999 pills at a penny each, plus one free. Now, if you buy a smaller quantity it is more expensive, since you're paying them to make and ship a smaller bottle, and there's probably also a bit of a premium for name-brand Tylenol if you don't know about generics (some of which is you paying them to advertise the existence of Tylenol to you). But here's a smaller quantity of brand-name Tylenol for something like $0.15 each.

https://www.amazon.com/Tylenol-Acetaminophen-Extra-Strength-Count/dp/B000052WQ7/

Not exactly an onerous burden.
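As a rough sanity check on the per-pill math above (the generic price and count come from the Costco listing; the brand-name bottle size of 100 caplets at $14.99 is a hypothetical stand-in, since the thread only says "something like $0.15 each"):

```python
# Per-pill price comparison using the figures quoted above.
# The generic (price, count) pair is from the Costco listing; the
# brand-name pair is a hypothetical 100-count bottle chosen to
# match the quoted "~$0.15 each" figure.
listings = {
    "Kirkland generic acetaminophen": (9.99, 1000),
    "Brand-name Tylenol (hypothetical 100-ct)": (14.99, 100),
}

for name, (price, count) in listings.items():
    print(f"{name}: ${price / count:.3f} per pill")
```

The generic works out to just under a penny a pill; the brand name to about fifteen cents.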

Yes, but Tylenol is not being sold at $0.01 a pill.

It's being sold for $0.08 a pill.

Like, what's the argument here? That capitalism prevents anything from becoming cheap, plentiful, and accessible?

That capitalism prevents anything from becoming cheap, plentiful, and accessible?

So far, yes. I want to see the working out when everyone can make 1 cent Tylenol in their kitchen.

Huh? Super-cheap products require economies of scale. You absolutely can set up a factory churning out cheap Tylenol and sell tons of it for cheap. You're not going to be able to make small batches in your kitchen and sell them cheaply and have it be economical, even if you magic away all the regulatory and supply-chain hurdles.

Competition is still a powerful force when applied to big pharmaceutical corporations though. If Pharma Giant #1 and Pharma Giant #2 can both produce malaria vaccines for $0.01 per shot, the sale will go to the one that sets their price lowest. Since even a price of $0.02 per shot creates a big profit margin if you have enough volume, they get churned out, and humanity wins.

What a way to miss the point. What they seek is to make the free-market value of most things so minimal that we no longer bother to put a price on them in much the same way you're not being metered for the air you breathe, or at the least like how you don't have to pay a fee for running a tap in a restaurant even if you're not ordering in there.

It's not like Taylor Swift concert tickets, which are sold at a price far below what the market will bear (largely for the PR benefit of fans deluding themselves into thinking Taylor is looking out for them), so that, if you're lucky or have a bot helping you scalp them, you can make a killing re-selling them for much more than you paid.

This is, of course, completely orthogonal to whether or not that's feasible (I do think it is, at least for malaria vaccines), but that is what their goal is.

The world as we know it is already radically abundant compared to most of history. There are no end of things that people won't mind you taking in passing, with objections only rising when you show up with a handcart to grab all the "free" stuff.

you're not being metered for the air you breathe

Because nobody manufactures air on Earth (except Mother Nature). Creating a breathable atmosphere on the Moon is a different matter, and that would be charged for (at least according to Heinlein); if you can't pay your oxygen bill, you will suffocate, and nobody thinks that's wrong.

There's a reason I'm implicitly describing air on Earth as opposed to on a hypothetical lunar colony. My analogy is the inverse of what you're thinking: it's going from a commodity being scarce and worth rationing out to being "too cheap to meter".

Last time I heard about "too cheap to meter" it was nuclear power and we were all going to be living the abundant life with the clean energy generated by the nuclear power plants:

The phrase was coined by Lewis Strauss, then chairman of the United States Atomic Energy Commission, who, in a 1954 speech to the National Association of Science Writers, said:

It is not too much to expect that our children will enjoy in their homes electrical energy too cheap to meter, will know of great periodic regional famines in the world only as matters of history, will travel effortlessly over the seas and under them and through the air with a minimum of danger and at great speeds, and will experience a lifespan far longer than ours, as disease yields and man comes to understand what causes him to age.

So how did that one work out in reality, then?

Not a failure of nuclear power, but the idiots blocking it with onerous regulation. That's being reversed now, better late than never.

We don't manufacture iron ore either. We still buy and sell that - including when it's still in the ground.

Scarcity is the key variable.

X. Instead, a hypothetical moon colony would nationalize the oxygen production industry, and if you didn't pay your taxes you would be beaten by the police and then imprisoned.

The guy calls himself 'Based Beff Jezos'.

The affectation of irony is the point. He's indulging in silly internet wordplay and spoofing other movements for the lulz. E/acc is just an in-joke aimed at Effective Altruists. Do they meaningfully intend to criticise EA? I have no idea. More likely it's just instinctive, habitual irony.

For the record, yes, I think the fact that this is being taken seriously is a sad indictment of the contemporary American political landscape.

Combination of many involved being vaguely connected to ratsphere and the former.

Intentionally confusing/poisoning EA term?

C) I've concluded that my ability at making commentary is better used for making an impact.

I don't know what the nature of your work is, but I'd encourage you to try and post it here, and if there are other like-minded people here, to coordinate with them. Admittedly it will be a bit of a slog, but I'd like to push this place from "where things get talked about" to "where things get done".

edit 2: I also mildly dislike the fact that standalone posts need approval, though I can see how that follows from the problem/design choice of easy anon registration.

God, yes! @ZorbaTHut, surely regulars who have been posting here for years can be trusted with not posting porn to the front page? You can always take our non-approved posting privileges away if we use them irresponsibly.

surely regulars who have been posting here for years can be trusted with not posting porn to the front page?

Don't tempt me, there are a lot of contrarians on here and if you push us with "yes surely you won't do naughty things", then that's a big incentive to go "I could if I liked! You're not the boss of me! I'm only agreeing to play nice so long as I find the terms equitable, don't take my consent for granted!"

Well, we have at least one name to preemptively remove from the "unapproved posting privileges" list.

I'd like to push this place from "where things get talked about" to "where things get done".

Ahem:

…We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War: Shaming. Attempting to 'build consensus' or enforce ideological conformity. Making sweeping generalizations to vilify a group you dislike. Recruiting for a cause.

I have argued against this in the past and, to be honest, still think the whole idea futile. Of course I will still shame, build consensus, enforce conformity, make generalizations, vilify and recruit and worse, so long as my words are compelling, cast ideas and actors in any coherent value-laden light at all, and stochastically increase the likelihood of people coming to predictable conclusions. Such is my nature… but it's the same with anyone. So to get anywhere with this doctrine, you'd need to replace mottizens with lobotomized mealy-mouthed GPTs who will go on and on about subjectivity of everything under the Sun, or to proscribe talking of culture war.

But in light of this place outlasting other attempts, I concede the legitimacy of incumbent mods and therefore the word of their law. This place will never be more than a place for talking (and casual socialization), and if they remain successful in enforcing their vision, neither will it become something less.

That probably just means 'annoyingly recruiting for a pointless, antagonism-causing culture-war cause', as opposed to a more general meaning. I half-remember there being posts with calls to action to write to your representative about stuff like YIMBYism that weren't taken down because nobody cared.

Of course I will still shame,

Have you ever successfully shamed someone in this forum into changing their position?

I think I did get a few to move off it.

I am honestly surprised (and was honestly curious).

I agree on "recruiting for a cause", there are plenty of spaces online that want you to sign up for X and send this formletter to your congresscritter and donate to this fundraiser for cause Y here and so forth.

So I don't need or want to see "come sign up to my religion/political movement/cake baking start-up" activism on here.

But I do think people should be allowed to make arguments for "this is why I think my religion/political movement/cake baking start-up is good" and if others find those arguments persuasive, then it's up to them as to what the next step they take is.

Yeah, I know... still, maybe there's a compromise that can be made in what we mean by "Culture War"? I agree that people fundraising for the GOP / Democrats, and coordinating their campaign efforts, is a bad idea that would ruin this place. OTOH, would the same apply to AI activism? You say this is the Culture War to come, but we don't hate each other's guts over disagreeing about it yet; I feel like we could pull off allowing various groups to organize around it here without blowing the place up.

Possibly famous last words, I know, but worth a try?

(Revealing the name behind an anonymous account of public note is not “doxxing,” which is an often-gendered form of online harassment that reveals private information — like an address or phone number — about a person without consent and with malicious intent.)

wheeeeeeeee

I wish they would take this definition to its logical conclusion and charge the phone book and people finder sites with doxing, but they never do. (Of course, they added the "with malicious intent" part which could be used to conveniently absolve said people finder sites of wrongdoing.)

It's absolutely staggering how people will blame anything and everything but data brokers for doxing. The United States is an unreasonably easy place to dox people. If you reside in the US and give me your legal first and last name, I can very likely find your home address in seconds by going to one of these sites and typing them in. That's absurd. I hate to play the "this only happens in the US" card (and to be fair, equivalent sites exist for Canada and possibly other countries), but this genuinely seems like a US-specific problem. Nowhere else will governments just release what should be private information to any party with enough money to buy it.

If there were any laws against releasing that sort of information to data brokers, and people finder sites were forced to be shut down, 99% of US-based doxing (and subsequent "online harassment") would disappear overnight. But of course, there's a huge financial incentive to keep things the way they are, as having data rakes in huge revenue for companies (it's digital oil after all) and they're not liable for their information being misused. And modern journalism isn't suited to actually rocking the boat, so they will never publish a news article on how easily people can be doxed in the US because of these data brokers, and they will never challenge this state of affairs.

This wordsmithing is so enraging. I buy that Beff Jezos is now a public figure, his opsec was minimal, and journalists have the right to publish information about public figures that they can find out. So, dox him! But have the courage of your convictions and go with "yes, I'm doxing him, and that's a good thing," not whatever shit this is.

It's their thinly-veiled attempt to seem loftier, to paint themselves as descending from the ivory tower to do a good deed rather than being just another lowlife doxing someone on the internet, when in reality, well, that's exactly what they are. They're going to stoop to the level of the people they accuse of "online harassment", but they're going to do it in a way that makes it seem like they're not doing it at all.

I enjoy the author’s sudden breaking of the fourth wall to deliver the Russell Conjugation. Men are revealed and held accountable; women are doxxed and harassed.

Women have always been the primary victims of doxxing.

What's the likelihood that the journalist is okay with an anonymous woman with an OnlyFans account being linked to her real-life name without her consent?

Epsilon. Note the sneaky clause about "online harassment" (which these days is a nebulous term that can mean anything from posting one mean comment to actually showing up to someone's house in person), not to mention the "often-gendered" part.

That's to say, if your scenario happened, they would bemoan that it's doxing and they'll say that's because it's online harassment of a woman.

They had to throw in "often-gendered", didn't they? Guys get doxxed as much as, maybe more than, women. If doxxing is wrong (and I think there are very few cases where it is appropriate), it doesn't become even worse if the doxxed person is female/female presenting/non binary/genderfluid/anything not cis male.

it doesn't become even worse if the doxxed person is female/female presenting/non binary/genderfluid/anything not cis male.

Hmm, if we take it as granted that doxxing increases the risk of some physical altercation taking place (even if only by some tiny percentage), and that the average woman is physically weaker than the average man, then I think it is probably true that doxxing poses more of a physical risk to women than to men. Not necessarily by much, of course, as I imagine most doxxing doesn't actually lead to anything like being physically stalked.

I think doxxing Amouranth would probably put her at more risk than doxxing Jacksepticeye, for example.

If we take all of those things for granted, then we have to take for granted that trans women do not belong in women's sports / prisons / bathrooms / etc., and we have to wonder why those things are not loudly shouted from the rooftops by NYT reporters.

On the other hand, if we don't take those things for granted but check the relevant statistics, and it turns out that women are far less likely to be victims of violent crime, then your argument falls flat on its face.

less likely to be victims of violent crime, then your argument falls flat on its face.

Of random violent crime, yes. But that's not the specific thing we are talking about here. Do you think that a random online woman and a random online man are equally likely to be sought out by a bad actor?

You can't use general crime statistics against a specific scenario. For obvious reasons.

My objection to the point raised was narrow for a reason, I am not commenting on general levels of violence or whether this scenario would be portrayed accurately with trans people et al. So don't read into my point more than is actually there.

If I show that women are less likely to be the victims of non-random violent crime as well, will you concede, or move the goalposts again?

I don't see how the goalposts ever moved. The original claim was that women are easier to victimize (because they are physically weaker than men); men being more likely to be victims in general does not seem to preclude that.

The original claim was that because women are easier to victimize, they are actually being more victimized, which is why doxxing them is worse. If he actually meant what you're saying, it would be irrelevant to the conversation.


If you can show that women are less likely to be at risk from being doxxed due to a mixture of physical attributes and due to the nature of online gender mixtures and behaviours then sure. As that was my ACTUAL claim.

Remember we are encouraged to be specific here. So do me the courtesy of addressing my actual specific argument not something else please.

I may be wrong, it's certainly happened before! But at least address my claim not some other thing you are interested in debunking.

If you can show that women are less likely to be at risk from being doxxed due to a mixture of physical attributes and due to the nature of online gender mixtures and behaviours then sure. As that was my ACTUAL claim.

The logic of the argument is faulty. The physical attributes do not make doxxing more physically dangerous for women, for the same reason they don't make any other encounter more physically dangerous for women. If you want to claim that doxxing is some super-special exception that results in more violence against women, you need to actually back that claim up with something.

Remember we are encouraged to be specific here.

No, we're not. Your tactic of deliberately misinterpreting the rules in order to win an argument is as bizarre as it is ineffective. Go ahead and report me if you disagree.

But at least address my claim

I did, but you're moving the goalposts. If you're making a claim, it is enough to point out that its logic is faulty. You do not get to demand that someone bring evidence against an ultra-specific scenario, particularly when you've brought none yourself.


I think traditionally doxxing happened/happens to obscure people in small online communities. I would agree that major-celebrity doxxing probably has gender parity in terms of badness, but I can see the smaller-scale version being worse for women (or bad in a way specific to women). I think it is reasonable to assume that doxxing of random private individuals would result in more stalking- and sexual-harassment-adjacent behavior when directed at women, and might intentionally be directed at women to provoke that kind of behavior in others, in a way that would generally not be true for men.