Culture War Roundup for the week of April 17, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I like the idea of this place, I really do, but why do people write such long posts? It strikes me as quite obnoxious. Don't you get bored of writing and reading so much text?

I sometimes wonder the same thing, although it strikes me as more counterproductive and silly than obnoxious. Maybe it's because 1) they are not good enough writers to express their ideas more succinctly and 2) they enjoy the act of prolonged reading and writing more than I do. Some people also get scared that if they write short top-level comments, the mods will get mad at them.

On the other hand, who cares. I mostly just ignore really long comments because I figure that if you can't write your thoughts more concisely, there's probably not much there, and even if there is, I don't feel like putting in the effort of getting to it. The really verbose people aren't harming anyone and they're not even that common here. Most comments here are fairly short.

As probably one of the top 5% worst perpetrators here in terms of length-to-novelty ratio, I have asked myself why. In my case I think I feel the need to cover multiple examples and asides, even if it's redundant. Probably some anxiety about being misunderstood. Should just trust the audience more, tbh.

I was trying to work on it, and then I got really annoyed by a CW bête noire and fell off the wagon.

I'm not in principle against longposting though, especially of the "effortposts of things I would never have researched myself but are actually interesting" variety.

There seems to be a range of preferences, so I wouldn't worry. I guess I have two or three comment styles myself. 1) reactive to a particular point someone has raised - I should probably give these up, as it's sometimes just nitpicking and a search for connection, 2) pre-canned ideas that I've already thought about at length - I have a few pet topics and I know how to articulate them easily, 3) full rant mode, which is more stream of consciousness trying to tie multiple things together. 1) and 2) tend to be short; 3) can go on, but it's usually a kind of master-thesis type thing that is also informationally dense.

The value of writing isn't measured by the number of words. It's measured by how much you can get your reader to understand.

I get how you feel; it can be tiring sometimes. But I was reminded last week why it's not so bad when someone linked a meta drama thread from Less Wrong in the Friday fun thread. The Motte is almost terse by comparison! And at least most Motters try to write entertainingly; I often find myself sighing at the length of a post when I first see it, but then getting sucked in when I start reading it.

Not really, no. I usually skim the first paragraph, and if I'm not interested I just collapse the thread. If there's valuable stuff deeper in the thread it'll probably show up in the AAQCs at the end of the month anyway.

Interpreting “Low effort” as "short" is the most trivial way to enforce the rule and so it’s the way the mods are the most likely to enforce it. This isn't necessarily saying the mods are lazy -- more like every mod decision gets judged by the court of public opinion and mods are more likely to enforce a rule if the ruling seems uncontroversial.

Write 50 words of low effort and get slapped. Write 150 words and you don’t have to worry about arguing in good faith or being truthful because enforcing those rules requires making a judgement call.

Interpreting “Low effort” as "short" is the most trivial way to enforce the rule and so it’s the way the mods are the most likely to enforce it.

And yet not. Very obviously, a short comment is more likely to be "low effort." Equally obviously, we do not mod every one-liner for being low-effort.

Write 50 words of low effort and get slapped. Write 150 words and you don’t have to worry about arguing in good faith or being truthful because enforcing those rules requires making a judgement call.

This is a frequent charge, and it's been false since before we left reddit. It's the sort of charge made by people who are perpetually angered by Russell Conjugations that go something like "My comment is succinct and factual; your comment is specious and low-effort. My comment is detailed, effortful, and semantically rich; your comment is a verbose wall of text full of midwittery and lies."

We often mod comments that are long and effortful and even get AAQC nominations, because the poster slips bad faith arguments or boo outgroup rants into their manifesto.

We also very often see posts get reported for "lying." Leaving aside the question of whether mods can or should judge the truth value of every post, almost always, "lying" is an accusation made about someone's perception of a contentious issue. We don't mod people for expressing an opinion you believe is false and the other person believes is true, regardless of what we personally think is true. We rarely mod people for saying something we suspect they might not actually believe, and certainly not because you think the other person doesn't actually believe what they are saying.

Sorry, by "truthful" I don't mean "explicitly lying". I mean "omitting key context that a reasonable person would expect you to include if you actually cared about good discussion (and not just about booing the outgroup)".

The commenter I linked to decided that

Recently the US city of New York, decided that BLM protestors that felt victimized by the police preventing from running amok

accurately portrayed

The protesters arrested in the Bronx were surrounded by police officers before an 8 p.m. curfew and prevented from leaving

To me this is a really obvious example of somebody going on a tirade about how bad their outgroup is. My main gripe (as somebody who disagrees with BLM) is that you have a less accurate understanding of what happened after reading his comment.

Could you argue that that comment makes this board a better place?

I think it's pretty clear mods are much more rigorous about enforcing things if there is some obvious flow chart they can appeal to if somebody questions their decision. "Less than 50 words --> low effort" -- who could argue with that?

Another good example is "consensus building" which should mean "don't imply we all agree with you" but instead means "don't use the phrase 'we all know'".

And so we have things like "Given Kamala's own exposure as a weak air-head" just stated matter-of-factly and in passing, when any sane standards would require at least a context link.

The mods have (correctly) decided that allowing things like "We all know that Kamala is an air-head" is damaging to discussion, but when MelodicBerries simply assumes his reader agrees with him and that the claim needs no justification, this is also building consensus. But it's not part of the mod flow chart "we all know" -> "building consensus", so it's completely kosher.

Basically no discussion board on the Internet actually asks its members to "Proactively provide evidence in proportion to how partisan and inflammatory your claim might be" -- the onus is always on whoever disagrees with your claim to hold you to account. TheMotte and /r/slatestarcodex, according to their own rules, should be the exceptions. But I can count on one hand the number of times I've actually seen that rule enforced.

I think that's because the mods are human, and enforcing it requires making difficult-to-defend decisions, and that's scary.

I think it's pretty clear mods are much more rigorous about enforcing things if there is some obvious flow chart they can appeal to if somebody questions their decision. "Less than 50 words --> low effort" -- who could argue with that?

Another good example is "consensus building" which should mean "don't imply we all agree with you" but instead means "don't use the phrase 'we all know'".

Look, you're not wrong that low-hanging fruit is easier to mod than long posts that require us to try parsing what someone is actually saying, about a topic we may not be at all familiar with, which is why we don't try to make judgment calls about how "honestly" someone is presenting the case. If something gets reported, we always look at it, but if it's a wall of text and someone is reporting it as "lying" or "uncharitable" or "boo outgroup," I will read through to see if anything is egregiously in violation of the rules, but I am not handing out Supreme Court judgments here.

That being said, I for one do not use any kind of mental "flow chart," and I do not worry a lot about whether someone might question my decision. (People question our decisions all the time. Some people even demand I "take it up with the other mods." Which, in most cases, I actually do, asking if anyone disagrees with my judgement.)

"We all know" is indeed a red flag that someone is trying to assume a nonexistent consensus, but it's not the only way to get flagged for consensus-building. If your point is that we mod by doing a Ctrl+F on certain phrases, no, not really.

And so we have things like "Given Kamala's own exposure as a weak air-head" just stated matter-of-factly and in passing, when any sane standards would require at least a context link.

What sort of context link would you like to support the assertion that Kamala Harris is an airhead? It's clearly an opinion. It's not a particularly charitable opinion, but people are allowed to say "I think Kamala Harris is an airhead." Arguably, "Given" could be interpreted as "consensus building," but if I were to mod it on that basis, I really would be doing the sort of keyword-based modding you're accusing us of. If you say "Trump is a venal, fascist clown," that's your opinion, and someone who likes Trump would very likely report you for it, but you don't have to post a link to support your opinion. If you say "We all know Trump is a venal, fascist clown" you'd get modded, not for using the magic no-no words "We all know" but because you are trying to imply everyone agrees with you and you are reinforcing a consensus opinion. Was @MelodicBerries doing that about Kamala Harris? Eh. I don't think so, but feel free to ask another mod what they think.

Basically no discussion board on the Internet actually asks its members to "Proactively provide evidence in proportion to how partisan and inflammatory your claim might be" -- the onus is always on whoever disagrees with your claim to hold you to account. TheMotte and /r/slatestarcodex, according to their own rules, should be the exceptions. But I can count on one hand the number of times I've actually seen that rule enforced.

Then you aren't very good at counting, because we enforce that rule all the time (even though almost no one ever thinks that their claim was partisan or inflammatory or required evidence).

Either an insult is materially relevant to the argument, in which case it requires justification (and deserves a mod warning if one isn't given), or it is not relevant (in which case it deserves a mod warning for creating needless heat).

Given Kamala's own exposure as a weak air-head, it seems almost inevitable to me that we will see Biden vs Trump once again in 2024

It's plain here that the point about Kamala that's actually relevant to the argument is that she has no hope of being the Democrat nominee. A context link that is appropriate is a link to a poll or a prediction market.

Instead MelodicBerries goes needlessly out of his way to call her an airhead. If this was relevant to the argument and supported by evidence it would be fine. Instead it's neither.

(This is a good time to mention that it has always bothered me that TheMotte has never explicitly endorsed the "Victorian Sufi Buddha Lite comment policy", but even if @ZorbaTHut doesn't like that, surely "don't insult people for no reason" is a good norm, since the roundup text links to things like IN FAVOR OF NICENESS, COMMUNITY, AND CIVILIZATION and mentions "you should argue to understand, not to win", "Write like everyone is reading and you want them to be included in the discussion", etc.)

If you say "Trump is a venal, fascist clown," that's your opinion, and someone who likes Trump would very likely report you for it, but you don't have to post a link to support your opinion. If you say "We all know Trump is a venal, fascist clown" you'd get modded

Props for consistency (though, to be clear, I'm not arguing the mods are politically biased), but I strongly disagree on your trade-off between light and heat. "Trump is a venal, fascist clown" should not be allowed unless it is (1) required by the point you're trying to make and (2) proactively supported. Having higher standards when people are being insulting seems like required, base-level moderation to me.

We do prefer "don't insult people for no good reason," but public figures are more or less fair game, as long as the post is not just a boo light. "Kamala is an airhead" or "Trump is a fascist clown" are not great comments, no, but we're not going to make a rule against saying mean things about politicians and celebrities.

It's plain here that the point about Kamala that's actually relevant to the argument is that she has no hope of being the Democrat nominee. A context link that is appropriate is a link to a poll or a prediction market.

"I think Kamala is a weak candidate and has no hope of being the Democrat nominee" is clearly an opinion. You are free to challenge it, but if we applied your proposed standard, we'd have to mod anyone who expresses any kind of opinion without providing a link.

Those rules already exist.

Be Kind… To a lesser but non-zero extent, this also applies to third parties. You shouldn't just go and attack people that you think are bad, you should be kind to them, even if you think they're mean, even if you think they're bad.

Or

Be no more antagonistic than is absolutely necessary for your argument.

Or

To have a discussion on some point of disagreement it is necessary that both parties be willing to say what they believe and why, not merely that they disagree with the other party. Sarcasm and mockery make it very easy to express that you disagree with someone without explaining why, or what contrary claim you actually endorse, and you can't grow a discussion from those grounds.

Or

Write like everyone is reading and you want them to be included in the discussion.

Does every statement need a citation? No, but we need some standards to prevent literal for-its-own-sake mockery.

I'm open to sincere suggestions about how to improve moderation. So is @ZorbaTHut. But I do not think what you are asking for is reasonable. I am not going to issue warnings every time someone says something mean about a politician. Our norms have developed over time, and they are always evolving, and if you think they are going in the wrong direction, or are failing to maintain the sort of discourse we want, you can make that case, but so far I find your case unpersuasive. You seem to just want me to mod people who insult Kamala Harris. There is a threshold at which I probably would mod a comment. E.g., if someone said "Kamala Harris is a whore" - that's actually a falsifiable statement that would require some evidence, or "Anyone who votes for Kamala Harris is a weak air-head" - that's a very broad boo-outgroup. But calling Kamala Harris a weak, air-headed candidate who has no hope of winning the nomination? It's not kind, but it's an allowable opinion.

Victorian Sufi Buddha Lite comment policy

I can't remember if we killed this after leaving the SSC subreddit or if I just never copied it over, but the problem we always had on the SSC subreddit was people saying "kind/necessary/true? well, I'm clearly right, and it's necessary that I tell this person he's an asshole, so what's the problem".

And after a few attempts at editing this for "but seriously you can't just be a jerk even if you think you're right", it ended up entirely subsumed into early rulesets without much hint of its previous existence.

I agree with this completely. There seems to be a trend towards very long, very low-information-density posts here, and it's gotten a lot worse.

Something I think that LLMs have taught us is that a very small input can generate a very large output that still contains the same information.

“Trans people are exploiting the historical oppression of gay people as a recruiting tool for their sexual fetish, which I think is unfair” could easily be expanded into 5-6 paragraphs using an LLM.

The first statement would get you threatened with a ban here, whereas the longer LLM'd version (which contains no additional entropy) wouldn't.

I’m reminded of this famous Seneca quote:

"You complain of avarice; but wasting of time is one of its forms. We waste time more recklessly than our most precious possession, and in comparison with it, property has only second rank. People are frugal in guarding their personal property; but as soon as it comes to squandering time they are most wasteful of the one thing in which it is right to be stingy."

I think this trend (and enforcement of it) of creating long, low density posts is a waste of people’s time. We should encourage brevity here, and not look at length as a substitution for quality.

Yes, I see what you mean. I've become attuned lately to the idea of attention as a sacred act, à la Iain McGilchrist. What is it that we are experiencing, what do we want to portray, what is important? This requires more time sitting and being and noticing, and less time writing, though of course intentions fade away and I easily find myself back in reactive social media scrolling and commenting.

Maybe a balance would be wise? Yes, things can just be expanded by an LLM, but if there is a lot to say, more space is often required. If the density is kept high enough, then length is more strongly correlated with value. And length can often filter out low effort comments.

Of course, on the other hand, that does require us to use more to measure quality than just length, and short comments can still be good, as this one hopefully is.

The 'evidence in proportion to how inflammatory your claim is' rule has been de facto replaced with 'amount of text in proportion to how inflammatory your claim is'. It's good that people can't just post uncompressed 'boo outgroup' statements, but the expansion of the statement would theoretically involve lots of evidence that can be discussed and litigated rather than idle speculation.

We should encourage brevity here

Goodness no. Longer posts, please. The moderation guidelines for top level posts in the CW threads are fine the way they are, and if anything they should be tightened up a bit.

I think this trend (and enforcement of it) of creating long, low density posts is a waste of people’s time

I don't think I've ever read a post on TheMotte that I would describe as "long and low density". Pretty much every post here is either quite enjoyable to read, or it's on a topic I'm not interested in to begin with, in which case I just ignore it.

not look at length as a substitution for quality

I don't think anyone here does that.

This post is short, but kind of exemplifies what I'm talking about. Here is the total information contained in your post:

I disagree.

You could even have just replied

disagree

Or even

false

And no information would be lost. Your post contributes nothing to the discussion beyond "I disagree".

And yet I suspect that a one word reply of “false” would get moderator threats. Because you made your post longer than it needs to be it will stand.

Well, no. You're right that just saying "False" would get dinged for being low-effort, but he added quite a bit more than that.

That some people are more concise than others, and some people are better writers than others, is indisputably true. And requiring people to sometimes use more words than technically necessary also serves a purpose, in many cases. E.g., "I think Those People are terrible" is an allowable opinion, but you have to use more words than that so you are providing something more reflective and worthwhile to engage with than just how much you hate Those People.

You clearly do not like people talking about things that are of no interest to you, or using more words than you want to read. And well, you've got a really good and easy solution to that: don't read posts that don't interest you.

Well, no. You're right that just saying "False" would get dinged for being low-effort, but he added quite a bit more than that.

He added quite a few unnecessary words to fluff out the length of his post, which is my point.

You clearly do not like people talking about things that are of no interest to you, or using more words than you want to read. And well, you've got a really good and easy solution to that: don't read posts that don't interest you.

Then what is the purpose of this forum? What is the purpose of moderation at all? Is the idea of community standards interesting? Is the idea of discussing the way people here use words, and the way they could be (and likely are) using LLMs to fluff up their posts, interesting?

"don't read posts that don't interest you"

Clearly this post does interest me. Clearly most things posted in the CWR interest me. You said the exact same thing to me when you got offended/defensive at my criticism of some girl posting ridiculous surveys the other week, suggesting that I'm uninterested in something because I am critical of it.

No, I am quite interested in the way that people signal things to one another. I think that is essentially core to the culture war, and since this entire thread and the raison d'être for this website is discussion of the culture war, I think it's completely reasonable to talk about the ways in which people wage it.

He added quite a few unnecessary words to fluff out the length of his post, which is my point.

His words added quite a bit more meaning and content.

Then what is the purpose of this forum?

To cater to people who are interested in the things we talk about here, even things you are not interested in talking about.

You said the exact same thing to me when you got offended/defensive at my criticism of some girl posting ridiculous surveys the other week,

I was neither offended nor defensive. I pointed out that "Why are you talking about things I don't want to talk about?" is frankly a perverse attitude to take on a discussion forum.

No, I am quite interested in the way that people signal things to one another.

That's fine. Feel free to talk about it. But when you make statements about what people are signaling and what you think is or isn't worthwhile to talk about, your statements may be disagreed with.

Here is the total information contained in your post: “I disagree”

This is plainly, obviously false.

I did state that I disagreed with you, but I also stated why I disagreed with you. Frequently it’s useful to give specific reasons when you disagree with someone because there are multiple possible reasons why you might disagree with someone’s claims, and it’s important for your interlocutor to know what your specific reasons are so the discussion can continue. If they don’t know your reasons, they can’t fashion appropriate counterarguments.

Your claim was that “this trend of creating long, low density posts is a waste of everyone’s time”. There are a few different reasons that I could have for disagreeing with this claim. Hypothetically, I could agree with you that TheMotte has a lot of long, low density posts, but I could simply not think that such posts are a waste of time. I could value them for the aesthetic quality of their prose, for example, despite their low information content. Instead, I gave a different reason for disagreeing with you: my reason is that I don’t think that TheMotte has any significant number of long low density posts at all! I stated this plainly in my post. Therefore, my post has more information content than just “I disagree”. Flatly stating “I disagree” leaves your reasons for disagreeing ambiguous.

Your criticism here reaffirms my suspicions that most people who complain about “verbose, low information posts” simply have poor reading comprehension and are insensitive to the information that’s actually being presented to them.

this is plainly, obviously false

Let's go line by line and see if there is any information in your post that goes beyond "I disagree":

Goodness no. Longer posts, please. The moderation guidelines for top level posts in the CW threads are fine the way they are, and if anything they should be tightened up a bit.

In summary: you disagree. Although "longer posts, please" does come close to going beyond "I disagree", it is in direct response to me saying I want shorter posts. Maybe instead of the total information in your post being "I disagree", it could be "I disagree. I would prefer longer posts."

I don't think I've ever read a post on TheMotte that I would describe as "long and low density". Pretty much every post here is either quite enjoyable to read, or it's on a topic I'm not interested in to begin with, in which case I just ignore it.

You are literally quoting something from my comment here, and then...saying that you disagree with it.

I don't think anyone here does that.

Again you are quoting me and simply saying that you disagree.

In none of this do you link to any examples of why you disagree, nor do you include any new information or ideas other than your disagreement.

If you find reading long-form posts obnoxious, this might not be the place for you.

If you get bored, nothing is keeping you here.

In short, I agree with @arjin_ferman's sentiments, if not the way he expressed them.

No, I'm incredibly bored, and the more high quality interesting text and discussions I have to read, the better.

And it's pretty obvious to me that if you want to write on and explore a topic in depth, then you need to devote a decent number of words to it. There are certain people here who are unnecessarily verbose, but the majority don't abuse the English language, and convey their points as succinctly as possible.

Fair play

You're aware of nonfiction books, or academic journals, right? Finishing one of those takes a few hours, and many academics dedicate their entire lives to poring over and gleaning information from texts. And they're both much more popular and wordier than we are.

"I like the idea of music, but why do people listen to so many different songs? It just seems boring.". "I like the idea of talking to people, but - for more than a few minutes each day? Why bother? People aren't that interesting, my time is valuable."

Text is the same as written speech, and speech can do all sorts of useful things. Exactingly make an argument. Tell an interesting story. Communicate something subtle via examples. Dig into something you're uncertain about. Elaborate precisely why something is true, creating jumping-off points for disagreement. I can just say "trans is bad" and my opponent can say "trans is good" and we can lock antlers and grunt, or I can detail how and why I think that, and then find out where I might be wrong.

And maybe we are too wordy - but your question's clearly not enough. What is boring about our precise flavor of 'so much text'? A lot of things are made of lots of words, what's different about us?

Replying at the tail to these comments, which I've appreciated. Obviously there's nothing wrong with people's preferences, and of course nuanced and complex ideas take time. I mean, I write out some fairly lengthy ones myself; it's just that if most of them are lengthy walls of text, it's quite intimidating. Maybe sometimes less is more, or might it just be a habit of the community, as alluded to?

This reminds me of the quote by Pascal:

My Letters were not wont to come so close one in the neck of another, nor yet to be so large. The short time I have had hath been the cause of both. I had not made this longer then the rest, but that I had not the leisure to make it shorter then it is.

So maybe people here are just too busy to write short comments!

A norm of long posts is an effort filter that cuts off some valuable contributions (e.g., short points can trigger very good posts in response, and a chain of short replies can add up to something very insightful), and I wouldn't mind things moving slightly more in that direction, but it's also a filter that cuts out a large number of low-effort, ill-thought-out jabs. Maybe being flooded with the latter was more of a threat when we were on reddit.

Somewhat contrary to the spirit of the founding of this place, there's now enough agreement between people that you probably don't need as much hedging as you once did before getting to the point, but then again, if we figure out a way to draw in new users it'll be off-putting if we all just assume everyone agrees on certain things in our discussions, so people still argue for and explain things that most people get already.

Every forum discussing politics that isn't wordy gets very unpleasant very fast. In the spirit of this subthread, I'll refrain from theoryposting why that might be.

The Motte is inherently about rigor. To defend an outrageous claim, you need outrageous evidence. So, people try to cover all their bases, and that inherently leads to longer posts.

While true understanding leads to more concise, distilled thoughts, I've found that brevity is oftentimes an excuse by novices to skim over very real logic holes in their understanding. There is a relevant quote that perfectly captures this idea, but I can't find it right now. :\

Better safe than concise.

First, this probably shouldn't be a top-level CW thread post, because it's just not culture war. It should probably have gone in one of the other main threads, because posts like these distract from the purpose of the site.

Second, we are all Pedants who have Things To Say. By saying something, you can be guaranteed that someone will respond as a contrarian, regardless of how closely it sticks to the site's culture (law of averages and all that). Any position that expects to be up to snuff needs to be quality or else it will get thrashed. On top of that, books lack a debate's intensity and verbal debate just sucks; this place genuinely is one of the few great places to engage in culture war topics, hence the verbosity and intensity.

Give unto Caesar that which is Caesar's, and give unto Motte that which is Motte's.

If you think this place is verbose you should see what academia looks like.

Complex topics are hard to treat in small prose. But I can be laconic here:

No.

Ignoring the low-effort rule, we're casual here. "If I had more time I would have written a shorter letter." It takes a lot of unnecessary effort to be very succinct sometimes and most people just can't do it naturally, so I'll forgive some bloviating because it's a post on a discussion board that I'm reading for free.

Don't see why long posts are inherently obnoxious. Short posts can be even more obnoxious. Twitter proves this.

People actually spend some effort making it somewhat fun to read and including all sorts of varied rhetorical flourishes rather than just going for a nonstop wall of text.

You can't even complain that they're too verbose, it just takes a good number of words to fully articulate a position as precisely as needed for real discussion.

I could distill down a point to two words:

"Democrats bad."

And one could respond with three words:

"No, Republicans bad."

and someone could butt in with four:

"Actually, both are bad."

Or each party could put some work in to make their true thoughts and reasoning clear so people aren't talking past one another.

Most of the people here enjoy writing and reading large amounts of text. The community has self-selected for people who are into that sort of thing. So they don’t find it to be obnoxious.

When an issue is sufficiently complex, you need a large amount of text to explore all the nuances. The information is incompressible past a certain point.

No, get gud.

We are the spinoff of the spinoff of Scott Alexander's fan group. And he is the king of effortposting.

Do you go on other forums and ask them questions like that?

  • "So all you guys do over here is talk about films? Don't you ever get bored of that?"

  • "...I mean come on guys, I understand that you can like football, but at least go outside and kick the stupid ball yourself!"

  • "Wait, this is a community to talk about painting Warhammer figures?!"

Pop some Adderall, and try to keep up, Zoomer.

Pop some Adderall, and try to keep up, Zoomer.

I sympathize with your annoyance, but you know better than to drop cheap digs like that.

I accept the warning, but just want to clarify this was meant in jest, rather than in hostility.

Pop some Adderall, and try to keep up, Zoomer.

Try to keep up, man - everyone on TikTok knows that the Adderall hasn't been hitting the same lately.

(this is a joke)

Is the rapid advancement in Machine Learning good or bad for society?

For the purposes of this comment, I will try to define good as "improving the quality of life for many people without decreasing the quality of life for another similarly sized group" and vice versa.

I enjoy trying to answer this question because the political discourse around it is too new to have widely accepted answers, disseminated by the two American political parties and used to signify affiliation, the way many questions do. However, any discussion of whether something is good or bad for society belongs in a Culture War thread because, even here on The Motte, most people will try to reduce every discussion to one along clear conservative/liberal lines, because most people here are salty conservatives who were kicked out of reddit by liberals one way or another.

Now on to the question: maybe the best way to discover whether machine learning is good or bad for society is to ask what makes it essentially different from previous computing. The key difference in machine learning is that it changes computing from a process where you tell the computer what to do with data into a process where you just tell the computer what you want it to be able to do. Before machine learning, you would tell the computer specifically how to scan an image and decide if it is a picture of a dog. Whether the computer was good at identifying pictures of dogs relied on how good your instructions were. With machine learning, you give the computer millions of pictures of dogs and tell it to figure out how to determine if there's a dog in a picture.
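To make that contrast concrete, here is a minimal, purely illustrative sketch (my own toy example, not anything from this thread), using scikit-learn and random two-feature vectors as a stand-in for real dog pictures:

```python
# Hypothetical toy example: hand-written rules vs. a learned model.
# The "images" are just random two-feature vectors, not real pictures.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# The old way: a human specifies the decision logic explicitly.
def rule_based_is_dog(features):
    # e.g. "call it a dog if both hand-picked features exceed a threshold"
    return features[0] > 0.5 and features[1] > 0.5

# The ML way: state the goal, supply labelled examples, let the model fit.
X = rng.random((1000, 2))                             # stand-in "images"
y = ((X[:, 0] > 0.5) & (X[:, 1] > 0.5)).astype(int)   # ground-truth labels

model = LogisticRegression().fit(X, y)                # the computer finds the rule

print("rule-based:", rule_based_is_dog([0.9, 0.8]))           # True
print("learned:   ", int(model.predict([[0.9, 0.8]])[0]))     # almost certainly 1
```

Same task either way; the difference is whether the human or the fitting procedure supplies the decision rule.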

So what can be essentialized from that difference? Well, before machine learning, the owners of the biggest computers still had to be clever enough to use them to manipulate data properly, but with machine learning, the owners of the biggest computers can now simply specify a goal and get what they want. It seems, therefore, that machine learning will work as a tool for those with more capital to find ways to gain more capital. It will allow people with the money to create companies that can enhance the ability to make decisions purely based on profit potential, and remove the human element even more from the equation.

How about a few examples:

Recently a machine learning model was approved by the FDA to be used to identify cavities on X-rays. Eventually your dental insurance company will require a machine learning model to read your X-rays and report that you need a procedure in order for them to cover treatment from your dentist. The justification will be that the Machine Learning model is more accurate. It probably will be more accurate. Dentists will require subscriptions to a Machine Learning model to accept insurance, and perhaps dental treatment will become more expensive, but maybe not. It's hard to say for sure if this will be a bad or a good thing.

Machine learning models are getting very good at writing human text. This is currently reducing the value of human writers at a quick pace. Presumably, with more advanced models, it will replace commercial human writing altogether. Every current limitation of the leading natural language models will be removed in time, and they will become objectively superior to human writers. This also might be a good thing, or a bad thing. It's hard to say.

I think it's actually very hard to predict if Machine Learning will be good or bad for society. Certain industries might be disrupted, but the long term effects are hard to predict.

As I read your comment, I've just completed the Mass Effect Legendary Edition. Also, spoilers. ||So anyway, for those who have never played, the general gist of the games is this: there is a sentient race of machines called Reapers, and they are cleansing the galaxy of all advanced life every 50,000 years. During the third game, some notable things happen, mainly:

You have a personal AI named EDI on the ship you command in the game; she gets her own body and is relatively harmless. She also evolves: she learns things like sacrifice, attempts to date the pilot, and tries to find meaning in her own existence generally.

This goes back a bit farther than game 3, however: there is an AI race called the Geth, made by a different alien species. Long story short, the game is an RPG where your decisions impact the story, and your choices affect how things with the Geth and their creators play out. The Quarians basically tried to destroy the Geth out of fear, but later on, as you learn about the Geth, they really just want to exist and be left alone, and they even help you fight the Reapers. The game gives you the choice to destroy the Geth, or you can humanize them and give them basic human decency. There is a scene in the game where a Quarian tries to experiment on one of the Geth, and you can basically shut it down and tell the Quarian not to.

You meet another alien race in the game that is responsible for the Reapers; they basically tell you that they made an AI that is responsible for the Reapers, ironically created to prevent computers from destroying organic species. The AI turns on them, converts them into robots, and proceeds to take over the galaxy in hopes of preserving organic species forever in robot form. Near the end of the game you meet the Reaper AI and he basically gives you three options: Destroy them, Control them, or Synthesis (you can also just flat out not choose).

Destroy and Control are pretty straightforward; however, Synthesis is where you become one entity with the machines. It's essentially transhumanism. It's supposed to be the "ideal" solution.||

Now mind you, Mass Effect 3 got a lot of shit when it was released because the endings were abhorrent; however, I could see any one of these happening when real AI gets created. Maybe we'll control it and everything turns out OK-ish, the AIs end up being neutral or benevolent like EDI or the Geth, or in a slim chance we end up successfully destroying it if things go wrong. Or we reach some perfect transhumanist state. I think with the way things are currently going, however, it's arguably more likely that we'll become, well, I'll let the video speak for itself (the most reliable data suggests this current trajectory).

Personally, I am very excited for AI improvements. I’m hoping something like ChatGPT will be able to act as a super personal assistant and analyst.

For example, in personal life, I would love to be able to type into a box that I'm looking to plan a trip, with just a few parameters (date, general budget, etc.), and have it send me options. I can then have the AI send even more options for what to do on the trip and finally book reservations that only require my approval.

That’s just one example but there are plenty of admin type activities that I’d like to offload to an AI. The opportunities in professional life are even greater but I think that may take longer as the aversion to giving the AI access to confidential data may be high (it’s currently banned at my mega corp).

I’m hoping something like ChatGPT will be able to act as a super personal assistant and analyst.

At what level of 'smarts,' however, is an AI that is already training on how you do your job going to stop needing you around to do it?

I mean, you're basically happily accepting an apprentice who will lighten your workload whilst learning your job, except this thing is known to learn 100x faster than your standard human. The assumption that you'll have moved on to bigger and better things (or retired) before the apprentice steps up to take over your job may not hold here.

At what level of 'smarts,' however, is an AI that is already training on how you do your job going to stop needing you around to do it?

At some point soon we will at least increase productivity by 1.5-2x per person. At that point why don't we collectively demand a 3 or 4 day workweek?

Humans don't 'collectively' demand things because generally there's a massive divergence in values at scale. Coordination problems abound.

And put simply, if you can make $4000 for a 4 day work week, and $5500 for a 5 day work week, then there are plenty of rational reasons to just work an extra day.

The choice to do or not do so comes down to, I'd say, values, as above. If you have high time preference and thus value leisure and 'fun' things, you'll try to minimize the time spent working as much as you can.

The markets will balance supply of labor and demand for labor, as they always do, unless we actually do achieve fully automated gay luxury space communism.

As usual, WTF Happened in 1971 is a fitting reference. Productivity and compensation stopped correlating in 1971, and we haven't (effectively) collectively demanded a reduced work week yet.

We could have transitioned to three-day work weeks way before 1971. The flaw in Keynes's famous prediction is that, past the point of basic subsistence, economic utility is relative. People don't want to make $20,000 or $50,000 or $100,000 or $200,000 in inflation-adjusted household income to be happy. They want more than their peers. They want to have class markers that low-status people don't, not the luxuries that those class markers manifest themselves in. It's why the canard about modern trailer trash having it better than kings in 1900 is so ridiculous.

If whatever happened in 1971 never happened, people would still be working as much as ever. The hedonic treadmill would just be moving faster.

First ask yourself this: why do you not already have a 3 day workweek?

Because I'm too poor.

I think at least in the short/medium term this technology could lead to large productivity gains without corresponding cuts in total headcount.

When I started my career, finance teams used to forecast in Excel using relatively simple formulas. Now they use coding languages and forecast more frequently, with greater detail and greater accuracy, while working with massive data sets. This hasn't led to a huge cut in overall headcount, but it has changed the skill-set mix on the teams.

Right, but it's presumably cheaper to spin up more GPT instances or build up more datacenters than it is to train more 'experts' in fields that are susceptible to ML automation.

Hence the question:

At what level of 'smarts,' however, is an AI that is already training on how you do your job going to stop needing you around to do it?

I'm not really doubting that humans will be 'in the loop' for quite a bit longer, but I suspect it will be more 'sanity checking' AI outputs and/or as a backup in case of outages, and there'll be strong downward pressure on wages. Which is fine if productivity gains make things cheaper.

You're talking about AI as a complement to human skills, but I'm very specifically inquiring about how smart it needs to get to replace given skill sets.

I think at least in the short/medium term this technology could lead to large productivity gains without corresponding cuts in total headcount.

Agreed. It's just psychologically painful to fire people, and especially if companies are making a ton of money from these models I don't think there will be a giant firing spree. As we saw with all the recent layoffs at big tech, when times are good companies are more than willing to keep a bunch of low impact employees on the payroll, especially in tech.

when times are good companies are more than willing to keep a bunch of low impact employees on the payroll

Also, it helps crowd out competition. Why fire a bunch of people when the interest rate is zero?

Sure, you'll save money in the short term, but those workers don't just disappear from the labor market; enterprising competitors will snap them up and end up requiring you to offer them a billion dollar acquihire scheme to shut them down before their product starts taking your marketshare.

Better to just keep them at Bigco. Sure, they won't really develop anything for you, but why drive the state of the art forward when you can just ignore all your customers, keep your competitors down, and rake in the cash from your ad business?

I believe there are two broad scenarios for what might happen with ML from an economics/politics perspective.

Scenario 1 is that ML will be a powerful productive tool (ie capital) in the hands of those that can afford it just like many other inventions throughout history.

If this happens the reaction will be along the lines we all know too well. The left will complain that those in power gain even more power and now have novel ways to control and/or extract value from workers. Plus a lot more low-skilled people will become unemployable and redundant so class tensions will probably get worse. On the flip side a few smart early movers will make insane bank and shape the way the next few decades will go. Could be interesting to see how different nation states adopt the new technology.

Scenario 2 is the "things get crazy" scenario. What if ML takes off far quicker than people are expecting, for example by recursively improving itself? I believe in that case we might be unable to fit the development into our usual political lens. If one company has twice as much capital as everyone else combined, our systems of power distribution fall apart. If one nation has capabilities that make it effectively invincible, our models for foreign relations stop working. If that happens it will be more akin to a scenario where superintelligent aliens have landed on earth and all bets are off.

What if ML takes off far quicker than people are expecting, for example by recursively improving itself? I believe in that case we might be unable to fit the development into our usual political lens. If one company has twice as much capital as everyone else combined, our systems of power distribution fall apart. If one nation has capabilities that make it effectively invincible, our models for foreign relations stop working.

I'm a little disconcerted at how many people who are working in the industry seem to hold this as the explicit goal and are intentionally maneuvering things so as to prevent anyone from intervening until it's too late.

I expect that "recursively improving itself" will lead to the AI going off into the weeds -- that is, evolving in ways unconnected to the real world. The output will quickly become bizarre and not particularly useful. It works for formal systems like Go because the rules are well-defined, but you can't simulate reality to a sufficient degree of precision.

I think the idea behind recursive self-improvement is more like, a 150 IQ AI should be able to find a way to increase its IQ to 151, a 151 IQ AI should be able to increase its IQ to 152, and so on and so forth until it reaches godhood.

It doesn't necessarily have to simulate large portions of reality, if it's able to find a way to isolate the factors responsible for its g factor and come up with a generalized way of making improvements to those factors. Presumably as part of the cycles of improvement it could interact with the real world in order to get more training and data. But this sort of scenario has its own issues.

Especially if it can spin up various copies of itself and make minute changes to see how that affects performance. Basically massive, parallel experimentation.
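As a purely hypothetical sketch of that "spin up copies, tweak, keep the best" loop, here is a toy hill-climbing search (my own illustration; the parameter vector and the fitness function are made-up stand-ins, not a claim about how any real system works):

```python
# Toy illustration of parallel "mutate many copies, keep the best one".
# fitness() is an arbitrary stand-in for "how capable is this copy?".
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    # Made-up score that peaks when every parameter equals 3.0
    return -np.sum((params - 3.0) ** 2)

params = rng.normal(size=8)                  # the current "self"
for generation in range(200):
    # Spin up many slightly perturbed copies in parallel
    copies = params + rng.normal(scale=0.1, size=(32, 8))
    scores = np.array([fitness(c) for c in copies])
    best = copies[scores.argmax()]
    if fitness(best) > fitness(params):      # keep a change only if it helps
        params = best

print("final fitness:", round(fitness(params), 4))   # climbs toward 0, the optimum
```

The real-world version would be nothing this tidy, but the loop structure (perturb, evaluate, select) is the same one the comment is gesturing at.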

Not my article but: https://www.rintrah.nl/the-end-of-the-internet-revisited/

I'm not sure the machine learning/AI revolution will end up being all it's hyped up to be. For local applications like identifying cavities, sure. For text generation, however, it seems much more likely to make the internet paradoxically much more addictive and completely unusable. There's so much incentive (and ability) to produce convincing scams, and ChatGPT has proved so easy to jailbreak and/or clone, that any teenager in his basement can create convincing emails/phone calls/websites to scam people out of their money. Even without widespread AI adoption, this is already happening to some extent. I've had to make a second email account because the daily spam (that gets through all the filters) has made using it impossible, and Google search results have noticeably decayed throughout the course of my lifetime. On the other side of the coin, effectively infinite content generation, which could be tailored specifically to you, seems likely to exacerbate the crazy amount of time people already spend online.

Another thing I'm worried about with the adoption of these tools is a loss of expertise. Again, this is already happening with Google; I just expect it to accelerate. One of the flaws of the argument that the internet's knowledge base allows us to offload our memorization and focus on the big picture is that you need to have the specifics in your mind to be able to think about them and understand the big picture. The best example of this in my own life is python: I would say I don't know python, I know how to google how to do things in python. This doesn't seem like the kind of knowledge that programmers in the past, or even the best programmers today, have. ChatGPT is only going to make this worse: you need to know even less python to actually get your code to do what you want it to, which seems good on the surface, but increasingly it means that you are offloading more and more of your thinking onto the machine and thus becoming further and further divorced from what you are actually supposed to be an expert in. Taken to the extreme, in a future where no one knows how to code or do electrical engineering, asking GPT how to do these things is going to be more akin to asking the Oracle to grant your ships a favorable wind than to talking to a very smart human about how to solve a problem.

I'm not sure I really like what I see to be honest. AI has the potential to be mildly to very useful, but the way I see it being used now is primarily to reduce the agency of the user. For example, my roommate asked us for prompts to feed to stable diffusion to generate some cool images. He didn't like any of our suggestions, so instead of coming up with something himself, he asked ChatGPT to give him cool prompts.

The best days of the internet are behind us. I think it's time to start logging off.

ChatGPT is only going to make this worse: you need to know even less python to actually get your code to do what you want it to, which seems good on the surface, but increasingly it means that you are offloading more and more of your thinking onto the machine and thus becoming further and further divorced from what you are actually supposed to be an expert in. Taken to the extreme, in a future where no one knows how to code or do electrical engineering, asking GPT how to do these things is going to be more akin to asking the Oracle to grant your ships a favorable wind than to talking to a very smart human about how to solve a problem.

We have been offloading thinking to tools forever, I highly doubt we will reach some breaking point now. We absolutely do lose knowledge when we gain this, but we trade it for more efficiency. Is it bad that we have calculators everywhere?

I'm not sure I really like what I see to be honest. AI has the potential to be mildly to very useful, but the way I see it being used now is primarily to reduce the agency of the user.

I agree with this on the advertising portion. I'm becoming increasingly concerned that targeted advertising could lead to terrifying outcomes, like a small group controlling public opinion. (actually that already exists, but still)

Anything that takes us closer to post-scarcity is good from my perspective. I disagree with some people I otherwise respect, such as Ilforte, on the fundamental benevolence (or rather, absence of malevolence) of the ruling class, especially the ones that will end up wielding the power put in their hands by AGI. It will cost them very little indeed to at least maintain the standards of living of everyone alive today, and little more to improve everyone's to First World upper middle class levels.

Upload everyone into VR, and it's quite possible that everyone can experience eudaimonia on a 10 watt budget.

Now, I'm not a happy person. I've been struggling with depression so long that I've forgotten what it might have ever felt like to not be under a cloud, I feel fundamentally burned out at this point, experiencing something in between learned helplessness and nihilism regarding AI advances. What'll happen will happen, and everyone here is only running commentary on the impending Apocalypse.

Back when I was entering med school, I consoled myself that the suffering was worth it because medicine was likely to be among the last fields to be automated away. Can't say that I feel very vindicated, because the automation overhang is here, and I see the Sword of Damocles dangling overhead when I think about further professional advancement.

It seems awfully clear to me that medicine is about to be automated; GPT-4 is a good doctor. Probably not the best possible doctor, but already outperforming the average in an already incredibly competitive and cognitively demanding profession. I only look at the further slog of psychiatry training ahead of me and shiver, because by the time I'm done, I'll be employed by the grace of regulatory inertia rather than genuine competitiveness.

Instead of a gradual deployment (over like 2 or 3 years, I had short timelines even then) where AI came for Radiology, then Ophthalmology, all the way to Surgery and then Psych, it seems to me that the pressure will mount until regulatory bodies cave, and overnight everyone from the lowliest janitor to the highest ranking neurosurgeon will find themselves out on their arse in short order.

What pisses me off further is that this is also a slamming shut of the clearest pathway to betterment and improved quality of life I have, namely emigration to the First World. Not a consideration for the average person here, since you're already living there, but simply imagine how fucking terrible it is to face the wall of obsolescence without having a government that can even in theory maintain living conditions by redistribution of wealth.

As a concrete example, the NHS is largely propped up by foreign doctors, with a large fraction of the locals fleeing to greener shores such as the US or Australia. Pay has stagnated for a decade, prompting serious strikes, currently ongoing, to achieve inflation-based pay restoration.

Even today, when automation is merely imminent, the British government has publicly stated its intent to automate as much of medicine as it can to stomp down on them uppity doctors who aren't content with sub-market pay from a monopsony employer. You think those cheap bastards will hesitate for more than a microsecond to get rid of doctors, or at least their pay, when the moment finally arrives?

I see British doctors mocking those claims today; as much as I support their attempts at pay restoration for selfish reasons, neither I nor they will be laughing much longer.

Maybe American doctors will hold out a little longer - you lot clearly aren't very concerned with efficiency in your healthcare expenses - but places like India, or the slightly whiter version of the Indian subcontinent, will end up clamoring to get rid of any expense for their state-run public health services.

I'm fucked, clearly out of good options, and now picking the least bad ones.

On the note of doctors, the medical guild has always been the most robust, perhaps other than lawyers, at defending its monopoly. I would be willing to bet doctors still resist automation through regulatory barriers for quite a while.

Even if that doesn't shake out, it could be a scenario where human augmentation rolls out relatively slowly. You, being a transhumanist, should greatly benefit in a lot of those scenarios. I imagine the vast majority of people alive today will be unwilling to augment themselves for purity-based reasons. Not having that hangup alone would be a huge competitive advantage.

If all else fails you can always mortgage your future computing space for a loan or something and hope to jump up into the immortal class. I for one hope you can make it, although I will admit that I am not the most optimistic when it comes to proles getting access to longevity technology.

Doctors have successfully defended their guild (albeit more so in the US than the UK, by a large margin) because they were indispensable. Training replacements for disgruntled doctors would take a great deal of time, and while medical education isn't perfect, you can't really circumvent most of it without ending up with noticeably worse practitioners.

That changes greatly when human doctors become outright obsolete. Speaking in the UK context, I have little doubt that the government would happily tell all involved to take a hike if that were the cost of "saving" the NHS or even just saving money.

Doctors in the UK have been cucked to put it mildly haha. They've only recently grown a backbone after the wage decreases have become unbearable.

The UK government(s) have historically relied on immigrant doctors to prop up the NHS when the locals started getting fed up about it. I can't complain about this too much, given that I intend to emigrate soon, but this certainly is responsible in part for their depressed wages.

A government willing to sideline its populace with immigrants will happily do so with AI as and when feasible, and they've already stated that that's their intent.

I could live with postponing the singularity a few years till we get it right, but that's seemingly not on the table.

(I mildly disagree that most people won't avail of transhuman upgrades. Eventually they'll end up normalized, in much the same way nobody really makes a fuss about glasses, hearing aids or pacemakers.)

That changes greatly when human doctors become outright obsolete

This is where we disagree - I don't see human doctors becoming obsolete anytime soon. Perhaps from a medical perspective, sure, but for the majority of laypeople I'd imagine a large part of a doctor's job is comforting the person they're treating.

Now I do think that like with almost all knowledge work, doctors will be able to become more productive. Especially those that don't see patients most of the day. But my understanding is that the vast majority of, say, a primary care physician's job is to go from 30 min patient visit to 30 min patient visit, hearing what people have to say and writing it down, then telling them they're going to be okay and the doctor can help.

Even if we can prove that LLMs give better medical advice than doctors 100% of the time, I don't think the majority of people would be comfortable hearing it from a non-doctor for quite a while.

I could live with postponing the singularity a few years till we get it right, but that's seemingly not on the table.

You don't think accelerating progress now could be the best way to reach alignment?

I mildly disagree that most people won't avail of transhuman upgrades. Eventually they'll end up normalized, in much the same way nobody really makes a fuss about glasses, hearing aids or pacemakers.

Depends on the speed of the takeoff, I suppose.

but for the majority of laypeople I'd imagine a large part of a doctor's job is comforting the person they're treating.

Is that true? I don't think I know anyone who thinks that, or anything even remotely close to it.

Every time I've interacted with medical professionals over the past several years, there has been no emotional component at all, or mildly negative. Doctors are able to diagnose, prescribe, and conduct operations; otherwise people would stay far away.

For instance: family member was pretty sure he had pneumonia. Went to a hospital, got an x-ray. Yep, that's pneumonia alright, here are two antibiotics that might help, come back if you're just as bad or worse in a week (edit: these were, as I remember, not actually given at the hospital. We had to drive to the pharmacy for them). The antibiotics worked, hooray. In addition to $500 upfront and $1,000 from insurance, there was another $1,000 surprise charge, botched and shuttled about through bill collection, which took six months to resolve. Next time family member has pneumonia, he'll probably hold out even longer before attempting to interface with the medical system.

I'm glad that for a couple of hours of wretched interactions, trying to handwrite forms alone and delirious, and two weeks' pay, family member was able to get needed medicine. This is better than the vast majority of times and places. But if there were an automated scanner that dispensed antibiotics, that would be a vastly better experience.

I also gave birth during the ending phase of Covid restrictions. I'm glad that there are medical interventions to deal with complications and manage pain. But there is not really any comforting being done that couldn't be replaced with a recorded voice stating what's on the fetal monitor and what it means.

there has been no emotional component at all

The flat affect, 'no emotional component' is what I mean. They are giving a sort of impartial authority to their diagnosis to make you feel okay.

I disagree with doctors, but many of the people I know in the middle-class PMC take their word as Truth.

All the people I know generally think of your average medical care professional as an opponent that you have to outsmart or out-research before you're permitted bodily autonomy, and one who usually knows less about your body than you do if you have an IQ over 120.

They'd drop them for an uncensored medical expertise AI in a second.

I would drop doctors as well but I’m trying to model the modal human. Maybe I’m failing but I think people here are far into an intelligence/tech literate bubble.

Yes, reassurance and a good bedside manner are important aspects of a doctor's role! That being said, I can see AI doing all of that too:

  1. Humans will anthropomorphize anything, so a cutesy robot face on a monitor or even a deepfaked one might work. Proof of concept: Telemedicine.

  2. Otherwise unskilled individuals who are simply conveying the information provided by an AI, such as a deskilled doctor or nurse, only there as a pretty face. Still utterly catastrophic for the profession.

  3. People get used to anything, eventually when the public cottons onto the fact that AI doctors are faster, cheaper and better than humans, they'll swallow their discomfort and go with it.

Hmm, deepfakes for telemedicine would be concerning. I get your point with #2 as well, although I think that'll take some time to roll out.

I see what you mean; I suppose the medical profession might be on the way out. I was supposed to be the optimistic one! Alas.

Is the rapid advancement in Machine Learning good or bad for society?

Option C: neither. It's just a tool, neither good nor bad in itself. What will make it good or bad is how we use it, which remains to be seen.

As @2rafa and others have mentioned, ML will be a step change in how human society creates value and interacts with the world more generally. Once we've achieved AGI, roughly defined as having an AI that can act at the level of an ordinary human, our ability to solve problems will drastically increase.

Intelligence is the generic solver for essentially any problem. Sam Altman himself has said that by 2030 he envisions a world where every product and service will either have integrated intelligence, or be angling towards that. This means that our phones, laptops, PCs, will all obviously be intelligent. However, what most people don't realize is that this technology will also affect our coffeemakers, stoves, thermostats, glasses, and practically every other technology you can think of. I'm sure adaptive clothing will exist soon with camouflage-like capabilities. People will be able to get realtime instructions into headphones telling them exactly how to complete each task.

Even these predictions only scratch the surface. If the true promise of AGI comes out it will also let us break through issues in hard mathematics, create brand new drugs, find extremely dense and powerful new materials. It will help navigate endless layers of bureaucracy, effortlessly pruning through the thousands of regulations that hold up large projects, helping us pinpoint ruthlessly where cost is added to solve the cost disease problem, and generally help unstick our public works. We could be building scintillating skyscrapers of filament-thin materials with bridges across the sky that glisten in the air, all in a decade. The future is truly difficult to even envisage, let alone predict.


In terms of comparisons to other revolutions, @2rafa says below:

There is a (relatively persuasive) case to be made that the invention of agriculture led to a decline in the quality of life for the vast majority of human beings that lasted until the late 19th or early 20th century. It took 11,900 years for the neolithic revolution to pay quality of life dividends, in other words. We can only hope that the period of relative decline in quality of life is shorter this time round, or perhaps avoidable altogether.

I agree that the agricultural revolution led to issues, a la Scott Alexander's review of Against the Grain. That being said, I find the comparison of the AI revolution to agriculture facile. Ultimately the reason the agricultural revolution proved bad for us was that we shifted our lifestyles from a nomadic culture to a static one, which inherently creates problems of physical fitness, freedom, and social control, and forces cultural institutions to shift rapidly.

With the AI revolution, we have no idea how far it will go. The possibility space is far beyond what could have existed for any previous revolution. As doomers say, we could all die. We could all transcend our fleshly forms and become gods in ten years. China may create an ASI and lock us all into a totalitarian doom state forever.

The stakes here are far higher than the agricultural revolution, and I highly doubt our situation will parallel that trajectory.

At the end of the day if we can survive the AI revolution without any horrible outcomes of the x-risk or s-risk variety, I think it would be ridiculous to posit any sort of negative future. With intelligence at our fingertips, we will be able to finally achieve our potential as a species.

People will be able to get realtime instructions into headphones telling them exactly how to complete each task.

Where have I heard this one before?

Seriously, this seems too specific to be a coincidence. Was it a deliberate reference?

Thought your link led to this

Nope. I actually don’t like referencing that story because I think it’s pretty short sighted, although does have some interesting ideas.

This is a very common thought in any hard sci-fi that has AI. Manna is by no means original, just popular in the rat sphere.

The key difference with machine learning is that it turns computing from a process where you tell the computer what to do with data into one where you just tell the computer what you want done.

I think there is yet another point to make here. With current Large Language Models, we have systems that treat natural language as code; that is where the revolution comes from. Even before LLMs, there were multiple "revolutions" where, instead of working directly with machine code, you could work with higher-level languages built on concepts more suitable for humans as opposed to "data" in its raw form. This made programming incrementally more accessible to a wider population. Even things like the invention of the graphical user interface enabled people to tell computers what to do with data in a more natural way, without arcane knowledge.

Also, on the level of algorithms creating novel things from simple inputs, procedural generation has been around for a long time. Giving the computer some simple parameters and letting it run the simulation to confirm or falsify the end result was a standard thing in the past. Again, the key difference is that we now have a very powerful system that can treat natural language as code.
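To sketch the shift (purely illustrative; `ask_llm` is a hypothetical stand-in for whatever model API you'd actually call, not any specific library):

```python
def top_customers(orders, n=5):
    """Old paradigm: spell out exactly what to do with the data."""
    totals = {}
    for order in orders:
        totals[order["customer"]] = totals.get(order["customer"], 0) + order["amount"]
    return sorted(totals, key=totals.get, reverse=True)[:n]

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever LLM you use."""
    raise NotImplementedError("wire this up to a real model")

def top_customers_nl(orders, n=5):
    """New paradigm: describe what you want; the prompt is the program."""
    prompt = (
        f"Here are orders as JSON: {orders}\n"
        f"Reply with only a JSON list of the {n} customers with the highest total spend."
    )
    return ask_llm(prompt)
```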

You might be interested in this post on LessWrong which discusses Scaffolded LLMs as natural language computers. I'd be curious about @DaseindustriesLtd's take on this as well.

Key points:

What we have essentially done here is reinvented the von-Neumann architecture and, what is more, we have reinvented the general purpose computer. This convergent evolution is not surprising -- the von-Neumann architecture is a very natural abstraction for designing computers. However, if what we have built is a computer, it is a very special sort of computer. Like a digital computer, it is fully general, but what it operates on is not bits, but text. We have a natural language computer which operates on units of natural language text to produce other, more processed, natural language texts. Like a digital computer, our natural language (NL) computer is theoretically fully general -- the operations of a Turing machine can be written as natural language -- and extremely useful: many systems in the real world, including humans, prefer to operate in natural language. Many tasks cannot be specified easily and precisely in computer code but can be described in a sentence or two of natural language.

The LLM itself is clearly equivalent to the CPU. It is where the fundamental 'computation' in the system occurs. However, unlike the CPU, the units upon which it operates are tokens in the context window, not bits in registers. If the natural type signature of a CPU is bits -> bits, the natural type of the natural language processing unit (NLPU) is strings -> strings.

The RAM is just the context length. GPT4 currently has an 8K context or an 8kbit RAM (theoretically expanding to 32kbit soon). This gets us to the Commodore 64 in digital computer terms, and places us in the early 80s.


The obvious thing to think about when programming a digital computer is the programming language. Can there be programming languages for NL computers? What would they look like? Clearly there can be. We are already beginning to build up the first primitives. Chain of thought. Selection-inference. Self-correction loops. Reflection. These sit at a higher level of abstraction than a single NLOP. We have reached the assembly languages. CoT, SI, reflection, are the mov, leq, and goto, which we know and love from assembly. Perhaps with libraries like langchains and complex prompt templates, we are beginning to build our first compilers, although they are currently extremely primitive.

I find this framing extremely persuasive, and awesome in the true sense. If transformers can actually act as a new type of general purpose computer using natural language, the world will become strange indeed very quickly.
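To make the "assembly language" analogy concrete, here's a minimal sketch of what chain-of-thought and a self-correction loop look like as scaffolding, with `ask_llm` again a hypothetical wrapper around a model API (my illustration, not code from the post):

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a single call to an LLM."""
    raise NotImplementedError("wire this up to a real model")

def chain_of_thought(question: str) -> str:
    """One 'NLOP': elicit reasoning first, then extract a final answer from it."""
    reasoning = ask_llm(f"Think step by step about: {question}")
    return ask_llm(f"Given this reasoning:\n{reasoning}\nState only the final answer.")

def self_correct(question: str, rounds: int = 2) -> str:
    """A crude self-correction loop: draft, critique, revise."""
    answer = chain_of_thought(question)
    for _ in range(rounds):
        critique = ask_llm(f"Question: {question}\nAnswer: {answer}\nList any errors.")
        answer = ask_llm(
            f"Question: {question}\nAnswer: {answer}\nCritique: {critique}\n"
            f"Rewrite the answer fixing those errors."
        )
    return answer
```

The strings shuttled between calls are the "registers"; the context window is the only working memory each call gets.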

For the purposes of this comment, I will try to define good as "improving the quality of life for many people without decreasing the quality of life for another similarly sized group" an vice versa.

Tangential, but the term in economics you're touching on here is a Kaldor–Hicks improvement, I think. It's not a Pareto improvement, but it is total-wealth increasing, and could theoretically be converted into a Pareto improvement with redistribution from the winners to the losers (assuming such redistribution does not have any externalities itself!).
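A toy worked example of the distinction, with numbers invented purely for illustration:

```python
# Kaldor-Hicks: total surplus rises even though one group loses outright.
winners_gain = 100   # value captured by those who benefit from the change
losers_loss = 40     # value lost by those displaced

net_gain = winners_gain - losers_loss    # +60: a Kaldor-Hicks improvement

# A hypothetical transfer converts it into an actual Pareto improvement
# (ignoring any cost of doing the redistribution itself, as noted above).
transfer = 50
winners_after = winners_gain - transfer  # +50, still better off
losers_after = -losers_loss + transfer   # +10, now also better off
print(net_gain, winners_after, losers_after)
```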

The advent of generative AI heralds the single largest change in the structure of human society since the neolithic revolution (ie. the invention of agriculture and the settled society) 12,000 years ago.

I would actually argue this is a closer parallel to the cognitive revolution, Homo sapiens' first discovery of culture, language, and general cognitive technology. The difference from the discovery of fire, the agricultural revolution, the industrial revolution, or even the internet is that the AI revolution deals with intelligence and the paradigm of thinking itself. The Scientific Revolution could also be a close contender, since it dramatically increased our ability to think and use our knowledge.

The only thing that seems certain is that it will radically reshape the life of every single human being on earth in the next 5-50 years.

Agree strongly here.

Currently, we're focused on the application of modern LLMs and other generative models to create media (writing, images, video etc) and to perform knowledge roles that involve a combination of text and data manipulation and basic social interaction (ie. the vast majority of PMC labor sometimes derogatorily referred to as 'email jobs'). But current models are so generalizable, and LLMs already appear to translate so well to robotics that even relatively complex physical labor is only a few years behind the automation of the PMC, especially given rapid improvements in battery technology and small motors, which are some of the other major bottlenecks for robotic labor.

The real step change in my opinion is once these models get good at things like drug discovery, mathematical proofs, and building models of physics. We have essentially been locked into a paradigm almost 100 years old in physics, and haven't found many fundamental changes in mathematical or chemical theory since then either, to my knowledge.

In the past, every time we had a major breakthrough in one of these fields it was enough to reshape the world entirely. Chemistry led to the industrial revolution; Newtonian mechanics led to the scientific revolution (or was its beginning, whatever).

There is a (relatively persuasive) case to be made that the invention of agriculture led to a decline in the quality of life for the vast majority of human beings that lasted until the late 19th or early 20th century. It took 11,900 years for the neolithic revolution to pay quality of life dividends, in other words. We can only hope that the period of relative decline in quality of life is shorter this time round, or perhaps avoidable altogether.

As I mention above, I think the comparison to the agricultural revolution falls flat for a number of reasons. Admittedly, most revolutions do follow a pattern of short-term negative effects and long-term positive outcomes, however.

Is the rapid advancement in Machine Learning good or bad for society?

Over what time horizon?

I expect the deployment of machine learning to follow approximately the same path as every other labor-saving technology humans have developed. In the short term it will be somewhat of a mixed bag. On the one hand we'll be able to produce the same/more goods at lower costs than before. On the other hand, these savings will likely come at a cost to the people and companies that used to produce those things. Over the long term I expect it will make people much better off.

Creative destruction!

Do you not see any difference between this paradigm shift and previous ones?

Not with respect to the fact that it will be net beneficial to humanity over the long run.

I agree with you, for what it’s worth.

I'd like to believe that, as it follows a well-established pattern. But honestly, what really happens if there's no more work left for people to do anymore? It seems that we'd have to really count on some redistribution of wealth, UBI, etc to ensure that the gains of the new automation don't just go to the owners of the automation (as much as I never thought I'd ever say that), or else people simply will not have the means to support themselves. Or if the job destruction is localized to just upper-class jobs, then everyone will have to get used to living like lower-class, and there may not even be enough lower-class jobs to go around. The carrying capacity of society would be drastically reduced in either situation.

In other words, what if

On the other hand, these savings will likely come at a cost to the people and companies that used to produce those things.

means the death of large swaths of society?

I'd like to believe that, as it follows a well-established pattern. But honestly, what really happens if there's no more work left for people to do anymore?

As others have said, there will always be work to do! As long as humans have any problems whatsoever, there will be work.

The carrying capacity of society would be drastically reduced in either situation.

How the heck does AGI reduce the carrying capacity of society? You'll have to explain this one to me.

Well, I'm hypothesizing that potentially, all (or almost all) of the solutions to all of the problems humans have may be covered by AI. If the AI is owned by a very limited number of people, then those people would be the ones who are the gatekeepers, and the ones that get most of the benefit of AI. Everyone will be paying these limited numbers of people for basically everything, and no one else would be able to make a living.

This is almost like Karl Marx's worst nightmare regarding who owns the means of production, ratcheted up to unbelievable proportions. I'm no communist, nor socialist, so like I said, I never thought I'd say this. But this is a fear of mine, that AI puts everyone out of work, meaning that no one can support themselves.

If the AI is owned by a very limited number of people, then those people would be the ones who are the gatekeepers, and the ones that get most of the benefit of AI.

This doesn't really scare me. Elites generally enjoy the society they're in, enjoy feeling useful, and above others. I think the vast majority of people who could create an AGI would use it to solve most of their problems, get really rich, then use it to solve everyone else's problems with a fraction of their incredible wealth.

Going into the future things could get very nasty indeed, but at that point all problems relevant to humans right now will be solved. It'll be an issue for the next stage of intelligence in our species' life, hopefully, and I'd imagine we'll be better suited to solve it then.

But honestly, what really happens if there's no more work left for people to do anymore?

That would be awesome! People (mostly) don't work because work is awesome and they want to do it. People work because there are things we want and we need to work to get the things we want. No work left for people to do implies no wants that could be satisfied by human labor.

It seems that we'd have to really count on some redistribution of wealth, UBI, etc to ensure that the gains of the new automation don't just go to the owners of the automation (as much as I never thought I'd ever say that), or else people simply will not have the means to support themselves.

This paragraph seems in tension with the idea of lacking work for people to do, to me. If a bunch of people are left with unfulfilled wants, why isn't there work for people to do fulfilling those wants? This also seems to ignore the demand side of economics. You can be as greedy a producer of goods as you want but if no one can afford to buy your products you will not make any money selling them.

Or if the job destruction is localized to just upper-class jobs, then everyone will have to get used to living like lower-class, and there may not even be enough lower-class jobs to go around.

I think there's an equivocation between present wages and standards of living and post-AI wages and standards of living that I'm not confident would actually hold. Certain kinds of jobs have certain standards of living now because of the relative demand for them, people's capability to do them, the costs of satisfying certain preferences, etc. In a world with massively expanded preference-satisfaction capability (at least along some dimensions), I'm not sure working a "lower-class" job will entail having what we currently think of as a "lower-class" standard of living.

The carrying capacity of society would be drastically reduced in either situation.

I'm a little unclear what the "carrying capacity of society" is and how it would be reduced if we had found a new way to generate a lot of wealth.

I'm not an economist, and I know very little about econ, so it's very possible that there is something major I'm missing.

If a bunch of people are left with unfulfilled wants, why isn't there work for people to do fulfilling those wants?

This is the part of my hypothesis that's tripping me up. Could you walk me through it?

Basically, let's say that we do fundamentally believe in capitalism (because I do), that a person should have to pay for any good or service that he receives.

And let's say that there's a person who is dying of starvation, because he has no job, because AI does everything better and cheaper than he can. Therefore, no one wants to come to him to do these tasks, because they'd rather go to the owner of the AI. How does this person get the money he needs to get the food he needs?

There exist people today who, due to disabilities or other conditions, are unable to support themselves financially. They depend on the charity of others, and in richer countries they may also get tax-funded disability benefits. If the development of AI caused a significant number of people to become unemployable, there is no reason why we couldn't just include them in that category.

If the claim that "a person should have to pay for any good or service that he receives" is to be interpreted literally, then that's not "capitalism", that's some extreme form of libertarianism, verging on parody. That would make even charity immoral. Real-life libertarians believe, at most, that people should be free to do what they want with their money, including giving it to charity. Maybe Andrew Ryan of Bioshock believes that donating to the poor is bad because it keeps them alive even though they deserve to die, but I doubt you could find a real libertarian who believes that.

I, too, "believe in capitalism", that is, I believe that a free market with some (limited) state intervention is the optimal form of social organization from a utilitarian perspective in the current technological environment. I don't believe that there is a universal moral law that people have to work for everything. If robots take all the jobs, taxing the robots' owners to provide income to the newly-unemployed would clearly be the right decision from a utilitarian perspective.

If the claim that "a person should have to pay for any good or service that he receives" is to be interpreted literally, then that's not "capitalism", that's some extreme form of libertarianism, verging on parody. That would make even charity immoral.

I don't believe that there is a universal moral law that people have to work for everything.

When I say "a person should have to pay for any good or service that he receives", I don't believe it as a moral thing, for the most part. I don't think it's immoral if someone gets something through charity. But I also don't think people should count on charity. Partly this is out of my own fears. I would hate living a life in which I was entirely dependent on someone else's charity to stay alive, where I had no control over my own destiny, no ability to provide for myself. I'd be terrified of starving to death all the time!

Also, even if I don't think it's "immoral", I do at least have an aversion to people believing that it is incumbent upon other people to provide for you (let's say if you're older than 18 and able). I'm against most of the arguments saying it's immoral for people to be rich, or saying that it's perfectly fine to just take their wealth by force, or painting rich people as monsters. However, true AGI may be where I would have to draw the line on some of my beliefs, due to the sheer magnitude of people who could be put out of work by AGI. In that case, we may have to put capitalism aside and move to a new model that works better in a post-scarcity world.

And let's say that there's a person who is dying of starvation, because he has no job, because AI does everything better and cheaper than he can. Therefore, no one wants to come to him to do these tasks, because they'd rather go to the owner of the AI. How does this person get the money he needs to get the food he needs?

So, for this kind of situation to arise, it needs to be the case that the marginal value this person's labor can generate for others is below the marginal cost of providing them the necessities of life.

Notice there is nothing AI specific about this scenario. It can (and does) obtain in our society even without large scale AI deployment. We have various solutions to this problem that depend on a variety of factors. Sometimes people can do useful work and just need a supplement to bring it up to the level of survival (various forms of welfare). Sometimes people can't do useful work but society would still like them to continue living for one reason or another (the elderly, disabled, etc). The same kinds of solutions we already deploy to solve these problems (you mention some in your comment) would seem to be viable here.

It's also unclear to me how exactly AI will change the balance of a person's marginal value vs marginal cost. On the one hand, the efficiency gains from AI mean that the marginal cost of provisioning the means of survival should fall, whether directly due to the influence of AI or due to a reallocation of human labor towards other things. On the other hand, it will raise the bar (in certain domains) for the marginal value one has to produce to be employed.

Partially this is why I think it will be a long term benefit but more mixed in the short term. There are frictions in labor markets and effects of specialization that can mean it is difficult to reallocate labor and effort efficiently in the short and medium term. But the resulting equilibrium will almost certainly be one with happier and wealthier people.
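A toy illustration of that balance, with made-up numbers (just the framing above in code, not a claim about actual wages or prices):

```python
# Made-up numbers to illustrate marginal value vs marginal cost of survival.
labor_value_per_day = 4.0    # what this person's labor is worth to others post-AI
cost_of_necessities = 10.0   # current cost of keeping someone housed and fed

gap = cost_of_necessities - labor_value_per_day  # 6.0: must be covered by transfers

# If AI-driven efficiency cuts the cost of necessities faster than it cuts
# the value of this person's labor, the required transfer shrinks or vanishes:
cost_of_necessities_post_ai = 3.0
gap_post_ai = max(0.0, cost_of_necessities_post_ai - labor_value_per_day)  # 0.0
print(gap, gap_post_ai)
```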

I often think of the possibility that ML is right now our best and maybe only chance to avoid some massive economic downturns due to a whole hell of a lot of chickens coming home to roost all at the same time.

I will ignore the AI doomer arguments which would suggest protracted economic pain is preferable to complete annihilation of the human species for these purposes.

I am in a state of mind where I'm not sure whether we're about to see a new explosion in productivity akin to a new industrial revolution as we get space-based industry (Starship), broad-scale automation of most industries and boosted productivity, and a massive boost in human lifespans thanks to bio/medical breakthroughs... OR

Maybe we're about to see a global recession as energy prices spike, the boomer generation retires and switches from production and investment to straight consumption (or widespread unrest follows the policies meant to avert this), international relations (and thus trade) sour even if there's no outright war, and living standards collapse virtually everywhere but North America.

How the hell should one place bets when the near-term future could be a sharp downward spike OR a sharp exponential curve upwards? Yes, one should assume that things continue along at approximately the same rate they always have. Status quo is usually the best bet, but ALL the news I'm seeing is more than sufficient to overcome my baseline skepticism.

But the possible collapse due to demographic, economic, and geopolitical issues seems inevitable in a way that the gains from Machine Learning do not.


The problem, which you gesture at, is that this world is going to be very heavily centralized and thus will be very unequal at the very least in terms of power and possibly in terms of wealth.

ALREADY, ChatGPT is showing how this would work. Rather than a wild, unbounded internet full of various sites that contain information that you may want to use, and thus thousands upon thousands of people maintaining these different information sources, you've got a single site, with a single interface, which can answer any question you may have just as well.

Which is great as a consumer, except now ALL that information is controlled by a single entity and locked away in a black box where you can only get at it via an interface which they can choose to lock you out of arbitrarily. If you previously ran a site that contained all the possible information about, I dunno, various strains of bananas and their practical uses, such that you were the preferred one-stop shop resource for banana aficionados and the banana-curious, you now cannot possibly hope to compete with an AI interface which contains all human-legible information about bananas, but also tomatoes, cucumbers, papayas, and every other fruit or vegetable that people might be curious about.

So you shut down your site, and now the ONLY place to get all that banana-related info is through ChatGPT.

This does not bode well, to me.

And this applies to other ML models too. Once there's a trained model that is better at identifying cavities than almost any human expert, this is now the only place anyone will go to get opinions about cavities.

The one thing about wealth inequality, however, is that it's pretty fucking cheap to become a capital-owner. For $300 you can own a piece of Microsoft. See my aforementioned issues about being unsure where to bet, though. Basically, I'm dumping money into companies that are likely to explode in a future of ubiquitous ML and AI models.

Of course, if ML/AI gets way, WAY better at capital allocation than most human experts, we hit a weird point where your best bet is to ask BuffetGPT where you should put your money for maximum returns based on your time horizon, and again this means that the ONLY place people will trust their money is with the best and most proven ML model for investment decisions.

Actually, this seems like a plausible future for humanity, where competing AI are unleashed on the stock market and are constantly moving money around at blinding speeds (and occasionally going broke) trying to outmaneuver each other and all humans can do is entrust one or several of these AIs with their own funds and pray they picked a good one.

Once there's a trained model that is better at identifying cavities than almost any human expert, this is now the only place anyone will go to get opinions about cavities.

It seems unlikely that there would only be one, though, unless there are barriers to entry e.g. the US government makes severe AI alignment requirements that only Microsoft can meet. Even Google, at its peak, was not the only search engine that people used.

I am amenable to this thought.

But if there's one ML model that can identify cavities with 99.9% accuracy, and one that 'merely' has 98.5% accuracy, what possible reason could there be for using the latter, assuming cost parity? (At those error rates, the second model makes roughly fifteen times as many mistakes.)

Microsoft is an interesting example of this since they have 75% market share on PC OS. If they successfully integrate AI into windows I can see that going higher.

But if there's one ML model that can identify cavities with 99.9% accuracy, and one that 'merely' has 98.5% accuracy, what possible reason could there be for using the latter, assuming cost parity?

Depends on how much the first ML model exploits its advantage. Also, firms often push for monopolistic competition rather than straight imitation, so the firm marketing the 98.5% model might just look for some kind of product differentiation, e.g. it identifies cavities and it tells funnier jokes.

I do wonder if we'll create a framework where places like OpenAI need to pay a fraction of a cent for each token or something. It would hit their profitability but would still make things fine if they achieve AGI.

Otherwise I agree that the open structure would be tough.

or the existence of Peru

Is there anyone in the English-speaking world who didn't learn about the existence of Peru from Paddington Bear?

Me

I'm not talking about after they train; I'm basically saying that in order to train on data or scrape it, period, they would have to pay. Otherwise all data would be walled off. (Not sure if we could do this to only LLMs without making the internet closed again - that's a concern.)

Yep.

In retrospect, I actually begin to wonder if the increasing tendency to throw up paywalls for access to various databases and other sites which used to be free access/ad supported was because people realized that machine learning models were being trained on them.

This also leads me to wonder, though, is there information out there which ISN'T digitized and accessible on the internet? That simply can't be added to AI models because it's been overlooked because it isn't legible to people?

If I were someone who had a particularly valuable set of information locked up in my head, that I was relatively certain was not something that ever got released publicly, I would start bidding out the right to my dataset (i.e. I sit in a room and dictate it so it can be transcribed) to the highest bidder and aim to retire early.

Is there a viable business to be made, for example, going around and interviewing Boomers who are close to retirement age for hours on end so you can collect all the information about their specialized career and roles and digitize it so you can sell it and an AI can be trained up on information that would otherwise NOT be accessible?

This also leads me to wonder, though, is there information out there which ISN'T digitized and accessible on the internet? That simply can't be added to AI models because it's been overlooked because it isn't legible to people?

There is actually a ton of information that has not been digitized and only exists in, for example, national archives or similar of various countries or institutions.

I hadn't actually realized that this was the case until I started listening to the behind-the-scenes podcast for C&Rsenal - they're trying to put together a comprehensive history of the evolution of revolver lockwork, and apparently a large amount of the information/patents are only accessible by going there in person.

This is fascinating, and it suggests that training AI on 'incomplete' information archives could lead to it making some weird inferences or blind guesses about pieces of historical information it simply never encountered.

I now have to wonder if there are any humans out there with a somewhat comprehensive knowledge of the evolution of revolver lockwork.

And now we have to wonder just HOW LARGE the corpus of undigitized knowledge is, almost by definition we can't know how much there is because... it's not documented well enough to really tell.

This is fascinating, and it suggests that training AI on 'incomplete' information archives could lead to it making some weird inferences or blind guesses about pieces of historical information it simply never encountered.

Well this is basically how C&Rsenal started their revolver thing... doing episodes on multiple late 19th century European martial revolvers and realizing that the existing histories are incomplete.

I now have to wonder if there are any humans out there with a somewhat comprehensive knowledge of the evolution of revolver lockwork.

Probably the best one right now would be Othais from C&Rsenal.

And now we have to wonder just HOW LARGE the corpus of undigitized knowledge is, almost by definition we can't know how much there is because... it's not documented well enough to really tell.

I would guess that a huge amount of infrequently requested data is totally undigitized still.

Actually, another area that demonstrates this: I frequently watch videos about museum ships on youtube and so much of the stuff they talk about is from documents and plans that they just kinda found in a box on the ship. So much undigitized.

Probably the best one right now would be Othais from C&Rsenal.

And this is my thought now, that he has a potentially valuable cache of information in his head he could sell the rights to digitize for use in training an AI.

I don't know that he can really monopolize it--on the C&Rsenal website itself, there is a publicly-available page where they've put together a timeline of revolver patents. I think Othais's passion as a historian outweighs his desire to secure the bag.

To get a bit Lao Tzu, the information that can be collected and digitized isn't the real, valuable information.

At some point LLMs may be able to speak the True Dao. Their whole shtick is essentially building an object that contains multiple dimensions of information about one concept, yes?

There may be a viable but difficult business there anyways; you'd basically be doing the same work as an old folklorist gathering stories as cultures die. How do you craft the questions to know what to ask? How do you compile and digitize it effectively?

The AI can craft the questions. The AI can ask them too. It's already a more attentive and engaged listener than many humans (me included).

I know something the superintelligent AI doesn't? It would like to learn from me? What an ego boost!

How do you compile and digitize it effectively?

THAT question seems to be answered already. Audio recordings fed to an AI that can transcribe them to digital text get you there.
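For the transcription half specifically, the open-source Whisper model already gets you most of the way; a minimal sketch (model size and file name are placeholders):

```python
# pip install openai-whisper
import whisper

model = whisper.load_model("base")             # small model, runs on a laptop
result = model.transcribe("interview_01.mp3")  # placeholder audio file
print(result["text"])                          # plain text, ready to index or feed onward
```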

Plus, a lot of it might just be self-aggrandizing nonsense.

I mean, the internet pretty much thrives on that sort of information, which is what the ML algos are trained on anyway.

I am strongly of the opinion that since neoliberal PMC jobs are the easiest to automate with AI, there will be incredibly strong regulation banning AI from taking the jobs of the PMC. The power to regulate is the power to destroy, and as incapable of actual productivity as the PMC and their legion of bullshit jobs are, they know how to run a grift and bask in their own self-importance.

No, what you need to fear from AI is when Facebook fires up an instance of AutoGPT for each user and tasks it with keeping them doomscrolling for as long as possible. If you thought "the algorithm" was already amoral and sanity-shredding, you ain't seen nothing yet. That was a mere baby, feebly hand-tuned by meat that thinks (or thinks it thinks). When the AI is fully unleashed on slaving our attention spans to our screens, it's going to be like how fentanyl turbocharged opioid deaths. You're gonna start seeing people literally starving to death staring at their phones. Actually, nix that, they'll die of dehydration first. I momentarily forgot that nearly always happens first.

I'm gonna register this prediction now too. Apparently AI has trouble with fingers. You'll know it's gotten loose when there is a new TikTok trend of young people amputating all their fingers. The AI will have decided it's easier to convince us to get rid of our own fingers than to figure out how to draw them better. Given the rates of TikTok-induced mental illness, it would probably be right in that assessment.

I am strongly of the opinion that since neoliberal PMC jobs are the easiest to automate with AI, there will be incredibly strong regulation banning AI from taking the jobs of the PMC. The power to regulate is the power to destroy, and as incapable of actual productivity as the PMC and their legion of bullshit jobs are, they know how to run a grift and bask in their own self-importance.

This is exactly why the crossbow and handgonnes never took off and why we still live under a feudal system ruled over by our lieges and ladies.

More seriously, this technology is too valuable to not use, anyone who does use it is going to gain a massive advantage over anyone that doesn't, its use is inevitable.

More seriously, this technology is too valuable to not use, anyone who does use it is going to gain a massive advantage over anyone that doesn't, its use is inevitable.

The same is true of nuclear power. It's the only technology that will allow us to hit emission targets and keep the grid stable with cheap, reliable power.

But we've built 3 nuclear power plants in as many decades, and our infrastructure is crumbling and less reliable than ever. Our ruling class simply does not care so long as they can keep living that 0.01% life. Even now they are setting preposterous 10-year EV targets, despite not putting a dime towards building out a domestic EV supply chain or infrastructure, including upgrading our electric grid to deal with the massive increase in demand all those EVs will create. Which brings us back to the nuclear power they scorn so much.

Your appeals to a reasonable nation performing certain obvious reasonable tasks are pointless. This is clown world. You need to think dumber.

The same is true of nuclear power. It's the only technology that will allow us to hit emission targets and keep the grid stable with cheap, reliable power.

Exactly. The general population believes what it was told for 50 years - nuclear power is something immensely dangerous and deadly, something that can explode at any moment, kill millions and turn the whole country into an uninhabitable desert full of motorcycle-riding mutants.

Now, imagine if normies are told:

THE COMPUTER can kill you. Yes, THE COMPUTER can shred you into paperclips, without warning. And not only you, but everyone, everyone in the whole world. Yes, even ordinary computer in your son's room can do it.

Do not wait for your doom. Say something, do something.

The problem is, we've already had hacker scares for years; I don't know what it would really take for people to realize the threat, outside of rehashed Terminator references.

The American public won't give up guns, do you think they'll give up computers?

Heck, even if it's just AIs they're told to give up, forces that want to do that will have to move fast, because every passing moment it reaches more hands, and the hands that have it are gonna hold on tight. And at some point soon, we will reach a cultural point of no return on everyone having these tools.

The general population believes what it was told for 50 years - nuclear power is something immensely dangerous and deadly, something that can explode at any moment, kill millions and turn the whole country into an uninhabitable desert full of motorcycle-riding mutants.

Again, at least according to this poll, 76% of Americans - the most relevant demographic for this forum - favor nuclear energy. Even the opponents do not necessarily hold the most alarmist and charged view of nuclear as a power source.

Nuclear power has a lot of benefits, but it takes a significant amount of time and money to get online, with the benefits being generally diffused. The number of organisations that can actually get a nuclear power plant online for long enough that they can start to make a profit is quite small.

AI is comparatively cheap, the changes are quick and easily observable and the pay off for an individual willing to utilise it is substantial. As a class medievial European nobility may have benefited from a complete ban on crossbows and handguns, but the ratio of costs to return of employing these weapons meant that anyone who chose to defect and take up their use would out compete those who did not. The same is true of AI, it cannot be ignored.

Your appeals to a reasonable nation performing certain obvious reasonable tasks are pointless. This is clown world. You need to think dumber.

I'm appealing to human greed and desire for power. You need to think smarter.

I am strongly of the opinion that since neoliberal PMC jobs are the easiest to automate with AI, there will be incredibly strong regulation banning AI from taking the jobs of the PMC. The power to regulate is the power to destroy, and as incapable of actual productivity as the PMC and their legion of bullshit jobs are, they know how to run a grift and bask in their own self-importance.

I highly doubt this will happen. You talk as if the PMC is a giant union where everyone is aligned, which shows you don't understand the social context there and are clearly just pooh-poohing your outgroup.

People in the PMC with power have capital, whether it's political, intellectual, or financial. The financial movers and shakers will not agree to regulating AI, at least until they have gotten their piece of the pie. Even if they do, it will take years and years to get everyone to agree on a framework.

You've also got the AI companies themselves. Altman has come out and said he doesn't think regulation at this stage is a good idea, and he's got an incredible amount of political and intellectual capital. Many people in government, for good reason, see Altman as one of the most important figures in the world right now. They don't want to piss him off.

I'm gonna register this prediction now too. Apparently AI has trouble with fingers. You'll know it's gotten loose when there is a new TikTok trend of young people amputating all their fingers. The AI will have decided it's easier to convince us to get rid of our own fingers than to figure out how to draw them better. Given the rates of TikTok-induced mental illness, it would probably be right in that assessment.

This would be a rad short story. An AI that gets 'frustrated' at its own limitations against the real world and whose solution is to just sand off all the sharp edges that are giving it problems.

Like it genetically engineers all the cows to be spherical so its physics simulations can be more accurate.

An AI that gets 'frustrated' at its own limitations against the real world and whose solution is to just sand off all the sharp edges that are giving it problems.

I'm obligated to point out that this already happened, the AI was capitalism, the sharp edges were all direct human interactions, and our atomized broken society is the result.

I would be interested in seeing this thought/analogy expanded.

I thought I got this idea from Mark Fisher or Nick Land, but random googling isn't leading me to any obvious writing of theirs on this specific concept. Come to think of it maybe it was one of IlForte's pithier comments. Regardless you should read both of them.

Seeing Like a State plus a broad view of what constitutes a "state," perhaps?

I thought I had seen later Scottposts applying this logic to capitalism.

His Meditations on Moloch sounds like this vein too.

Frankly at this point I'm just riding the tides. Whatever happens, happens. This will be like the fifth once-in-a-generation event I've lived through, and like the 20th doomsday scenario. I don't have the energy to care anymore. I have apocalypse fatigue.

But there could be a utopia! Unlike Nuclear and other scenarios, I think it's likely this moves us far closer to a utopia, soon.

There could be! And that would be nice! But like with giving in to doom-mongering, I'm also not going to get my hopes up, either. Realistically, whatever's going to happen is going to happen regardless of whether I get hyped up or stressed out about it.

Yeah I try and keep a cool head as well. I'd love to quit my job and party till the singularity comes but it may not be the best idea...

People thought this about Nuclear and the other ones too if you remember.

There will be no utopia, because utopia is not a thing that exists. Our lives might get better and worse in various ways, but the idea of a perfect society, and by extension of moving towards a perfect society, has always been delusional.

No utopia, just a shifted technological landscape.

There could be a utopia but it could only be achieved by either

  1. Changing the human race fundamentally to remove the desire for accomplishment or status

  2. Hiding the true nature of reality and creating a unique simulation for each human that would provide a fulfilling life path for that person

Why do you think this is the case? And what does a 'fundamental' change mean?

The goal of the axial revolution has always been to improve ourselves. We are slowly becoming better, in my opinion. Less violent, more understanding, more focused on technical accomplishment. If we continue on that path and eventually eschew (most) status, is that a fundamental change or an incremental one?

but the idea of a perfect society, and by extension of moving towards a perfect society, has always been delusional.

Some delusions are worth chasing my friend. Chasing the delusion of truth, intellectual honesty, and rigor led us to the Scientific revolution, which brought us where we are today. Just because you don't think it's likely doesn't mean those seeking utopia are fools.

Utopia doesn't exist the same way any other ideal doesn't exist. Does that mean you shouldn't strive to be kind or love others?

Once you accept that these are forces which you can't individually impact, the path forward becomes pretty clear.

Just set things up to maximize your chances of living to see whatever crazy future we end up with.

And maybe have some fun along the way.

  1. Go to church

  2. Have kids

  3. Buy land

  4. Acquire chickens

Simple as.

I truly think people are almost embarrassingly overstating the importance of the AI apocalypse. Maybe an apocalypse for twitter and other online spaces, maybe an apocalypse for “just a barely intelligent warm body” call center jobs, maybe an apocalypse for bootcampers making $300k/yr gluing JavaScript frameworks with cute names together.

Not an apocalypse for anybody with a skill set that can exist completely independent of the internet, not an apocalypse for the people who understand computer programming from first principles.

In the sense that AI will bankrupt the people who have been mining the good out of society while contributing absolutely nothing of value to it, it is a massive net good. I absolutely welcome our AI overlords. Show me who is posting the MOST human-passing-but-totally-useless garbage on Twitter, or trapping the MOST ethical non-monogamist coombrained Reddit atheism posters into pointless time-wasting arguments, and I will either go work for them for free, or donate compute time to them.

Let’s fucking go.

Not an apocalypse for anybody with a skill set that can exist completely independent of the internet, not an apocalypse for the people who understand computer programming from first principles.

In the sense that AI will bankrupt the people who have been mining the good out of society while contributing absolutely nothing of value to it, it is a massive net good.

I can't tell if this comment is a spoof?

Sure, go back to your farm and use tools like tractors, fertilizers, modern crop rotation techniques, plates, silverware, cups, etc which have been created by the larger society. Created, distributed and improved by people who are supposedly 'mining the good out of society.'

Society is a team effort, bud. Your fantasies of living scot-free and totally 'independent' on your plot of land are just that - fantasies. You wouldn't make it a week without the collective wisdom and knowledge society has gifted you and your family. Have some respect for the people who came before you, and the people who help you live a cushy life now.

I say:

go to church

start a family

And you interpret this as “isolate yourself from society and pay no respect to the people who came before you”?

Just to be clear, when I say “go to church”, I mean specifically a Catholic church. There could not exist another institution on planet Earth that is a stronger indicator that you should stand on the shoulders of the people who came before you.

The people mining the good out of society are people running porn websites, and A/B testing headlines and algorithmic content feeds to see which ones make people hate each other more, and then buy the products that they're selling. OnlyFans is mining the good out of society, BlackRock is mining the good out of society, McKinsey Consulting is mining the good out of society.

Porn websites and management consulting agencies did not invent pottery, crop rotation, iron smelting, or anything else. The fact that you either think otherwise or think that “go to church and start a family” somehow means “throw away every good discovery ever made by mankind” is certainly telling of something.

The people mining the good out of society are people running porn websites, and A/B testing headlines and algorithmic content feeds to see which ones make people hate each other more, and then buy the products that they're selling. OnlyFans is mining the good out of society, BlackRock is mining the good out of society, McKinsey Consulting is mining the good out of society.

Those people will be doing more of all that and better (or rather "more efficiently" - nothing about it will be better for the audience), with higher profit margin since they'll no longer need to pay the grunts in call centers.

I think this comment is an example of "inferential distance." Your meaning of "people mining the good out of society" is porn sites, investors, and engagement-optimizers, whereas Dag's interpretation was "all the smart people who brought us modern technology."

@firmamenti also engaged in the classic Motte and Bailey to my mind. His Bailey is:

Not an apocalypse for anybody with a skill set that can exist completely independent of the internet

Basically claiming that anyone who relies on the Internet is gonna get fukt, and they should cry about it.

Then when challenged he retreated to the much more specific claim of:

people running porn websites, and A/B testing headlines and algorithmic content feeds to see which ones make people hate each other more,

I'm not impressed with this sort of rhetoric.

I'm not impressed with this sort of rhetoric.

You cut one of my sentences in half to make your point, and then you accused me of bad faith argument.

The rest of the statement which you cut off was: "not an apocalypse for the people who understand computer programming from first principles."

This is not a motte and bailey. You either didn't read the rest of my comment, or you are being deliberately misleading in your characterization of it.

Either way: don't do this.

Eh, I cut it out for brevity, but I see where you're coming from. Either way, I see you slicing off such a chunk of the populace that you're making a ridiculously callous and egotistical statement.

I'm happy to discuss further which chunk of humanity deserves to have their lives destroyed and suffer unnecessarily, but I generally find that type of rhetoric unsavory. I apologize if I mischaracterized your stance.

Not an apocalypse for anybody with a skill set that can exist completely independent of the internet

Basically claiming that anyone who relies on the Internet is gonna get fukt, and they should cry about it.

Just pointing out, your interpretation there doesn't quite check out logically. It would only be a motte/bailey when mischaracterized like that.

The people mining the good out of society are people running porn websites, and AB testing headlines and algorithmic content feeds to see which ones make people hate each other more, and then buy the products that they’re selling.

This is a small fraction of people in modern society, and if history tells us anything, I'd imagine they will be hurt less by AGI, because this class of people is good at finding BS niches to milk value out of.

I'm just not a fan of broad statements talking about how an ill-defined outgroup is milking everything from society while you and yours are the ones building it. Thanks for clarifying.

So... I have a church, yard, kids, and chickens. Also it's Bright Week. Al masih qam! (Christ is risen!)

Yet here I am, typing away on The Motte about AI. And here you are.

Plausibly I should work on my in-person network. A local church has installed ten Russian bells on a new building they've been working on these past two years. I watched the video of the blessing, and it sounds really good. The acequias association is supposed to be flushing the irrigation ditches tomorrow. My husband walked down the street and gave eggs to a neighbor last week, and has resolved to do that again, because it was a good experience. My daughter is now old enough to walk to the village church if we ever get our act together on time. People wave, and are out by the street cleaning their ditches. I can and should make physical art out of wool and wax for next year's local studio tour and art markets.

And yet here we are, even so.

Acquire chickens

Skipped 1, but I'm on 4. Chickens are about 2 weeks old, and I'm assessing the plans for the coop I plan to build. At least, after I finish ripping out the stupid cypress trees the last owner planted everywhere.

Based and eggpilled.

Seriously love chickens. They are equally stupid and annoying, and beautiful. They also make fantastic babysitters for #2 and will entertain them for HOURS. Highly recommend.

Chickens are raging assholes that go everywhere they're not supposed to and refuse to die when their time is up.

Ducks are much easier to manage. The eggs are tastier, too.

Has anyone considered…pet pigeons?

Pigeon eggs can be eaten too!

coo coo

I had about twenty white homing pigeons as a teen for 4-H. They're great, but are terribly difficult to get rid of. Homing ability is both impressive and obnoxious.

Ducks require too much feed. Geese can graze most of the day.

Nah, ducks turn their ponds into swamps and give you a rash when you cuddle them. Chickens are much more convenient. (We have both.)

I was wondering if we were going to get the chicken vs duck argument going. I have a coworker who has ducks and recommends them. I have a neighbor with chickens, although they might have gotten rid of them, or at least the roosters.

I didn't know such arguments were infamous.

All I know is, after having to deal with both, I'll take the ducks.

I want to try guinea fowl next year.

Our previous neighborhood had feral peacocks, and they give off this great jungle call in the middle of the night, and every once in a while I hear them here too, from a half mile or so away.

I’m really getting the urge to grow out my neckbeard and get euphoric up in this bitch. Postrats are converting to Mormonism now, Mormonism! At least with wokeness you have to go outside and observe the world to realize that it’s false. Most of these religions don’t even make sense on their own terms.

It’s cope is what it is, cope. It makes you feel good, and it’s useful (so it seems), so you believe it.

Choosing to believe (or act as if you believe) useful things seems very rational to me. I have an old coworker who was an atheist and cynically became a Mormon in order to marry a Mormon wife and live in a close-knit community. He now lives in Idaho and has 4 kids and by all accounts is very satisfied with the outcome. Who's more rational, him or a depressed, medicated, outspokenly atheist Bay Area tech worker who's the least-liked member of his drama-cursed polycule?

If you rational long enough, you're eventually going to rational about rationality, and you'll see that beliefs are instrumental like anything else. There's no God of Integrity who laid down the law that you must profess true beliefs.

The short answer is, it fucks up your epistemology. It’s probably worth a whole post going through exactly why that’s so bad. Perhaps the old atheism arguments from the early 2000s need updating for the TikTok generation.

I disagree. You can be rational when the situation calls for it, and be religious on a meta level.

It definitely deserves a longer treatment than one sentence, but I'm fond of "once you've told a lie all truth is your enemy". Or something about lightning, I guess. Intentionally professing beliefs in falsehoods because they are useful is the epistemic equivalent of the doctor killing their patients to donate their organs -- it may sound like it does more good than harm in the short term, but you wouldn't want to live in a place where that's the rule.

Due to cancel culture, and maybe even social media in general, would you say it's worth shooting for fame?

Back when I was a kid (when TV was the main screen) I guess I wanted to be famous. But I wanted to be famous because when I saw these musicians, actors and comedians I just thought, wow, their lives are easy and fun and obviously they're rich.

Now it seems like celebrities still have much more fun than the average person, but keeping your position has gotten harder, especially the newer you are.

Is it still worth it?

It's always been this way about something.

Before, it was Marilyn Monroe being all scandalous 'n' shit, the Dixie Chicks failing to be sufficiently patriotic (read: bloodthirsty), taking the Lord's name in vain, whatever the fuck.

The cost and benefit of fame is everyone watching you, so they can all shout at the same time when you violate the norm of the day.

Neither of these is comparable to modern cancelations.

How so?

I mean, shit; one case took them from being one of the top acts in their genre to not existing. Complete, total, nuked-out-of-existence level. Not even Kanye got his shit rocked that hard.

And the other one became the zeitgeist definition of a sex goddess for four decades.

People not listening to your stuff anymore is not cancelation. Cancelation is trying to put obstacles in the way of people listening to you.

Also, I thought the Dixie Chicks still exist, they just had to change their name because it was too unPC?

This isn't true. The Dixie Chicks never disbanded. They never lost their record label; they released their tour album after they made those remarks and it still hit #3 on the Country chart AND went platinum. They released two singles in 2003 and one in 2005, and two of the three made the Country chart. They made a new studio album and they toured in 2006 (the controversy was in 2003, their previous tour was 2000). The group still exists today, though as @arjin_ferman notes, they changed their name because it was too unPC. Not "Chicks", but "Dixie". They're "The Chicks" nowadays.

There are upsides. Back in the day, magazines and tabloid newspapers had a lot of influence over celebrities, because they controlled who had access to the general public. There was a lot of obsequiousness and moral compromise on the part of celebrities to promote themselves with magazines and tabloids. Today, with the internet, it's easy to keep in touch with fans via Twitter, Facebook, and so on, while things like Youtube, Instagram, and TikTok provide paths to fame without going through the traditional press.

If anything this is worse.

The most online celebs can make money without traditional media now, yes. But, in other ways, they have the worst of all worlds; they are directly subject to real-time feedback from fans and the parasocial relationship seems to lean way more in the direction of negativity than the sycophancy that might happen if they only had public interactions.

And, sometimes, they don't even get that much money for their troubles - Lindsay Ellis was driven into depression in exchange for an upper-middle-class living.

Yes, A-list celebrities get to ignore (or try to ignore - see the Naomi Osaka case for the self-serving attempt to cut out the media using mental health claims) the traditional press more. But they hear from fans more and fans also see them more (previously they made deals with tabloids to keep a lot of this shit out) which increases the burden to conform.

Jonathan Majors is probably going to lose out on tens of millions due to a story that escaped before any of the traditional fixers and handlers could do their work. Decades ago it would more likely have become one of those stories we only hear about today: "did you know Jonathan Majors assaulted someone 30 years ago and no one reported it?"

But I'd still like to be rich and famous though.

the sycophancy that might happen if they only had public interactions.

I'm not sure that this is a good reflection of tabloid-celebrity relationships in the past, which seemed to be extremely abusive in some cases, and always with the threat of abusive intrusion in the background.

However, I don't dispute that the situation is bad for celebrities. Personally, I wouldn't mind being rich, but I would happily do without the fame.

I didn't mean sycophancy amongst the tabloids but the fans who "drag" online celebs on Twitter.

I think a lot of people on Twitter are way more toxic to their favorite Breadtuber or streamer than they'd be if they met them.

Almost all existing famous people are not cancelled - there are just a lot of famous people. If you made a list of 50 famous people, old or new, outside politics, and asked how many of them were materially harmed by cancellation, it has to be below 10%. And taboos you could lose your fame over by crossing them are a historical universal.

Due to cancel culture and maybe even social media in general would you say its worth shooting for fame?

For a certain type of person, I'm sure it is.

For me, the very thought of having to constantly police my opinions, to constantly watch out for backstabs, to worry about all the various attempts by others, even people you might trust, to exploit you and your fame for personal gain, to the point you can never really be certain if anyone authentically cares about you, makes it an easy no.

The one I think of a lot recently is Kanye West. Guy achieves true superstar status, is known for being extremely talented if a bit unhinged, billion+ dollar net worth, most projects he touches turn to gold, marries and knocks up one of the hottest women (in both the fame AND sexual attractiveness terms) on the planet, and then gets most of the above ripped away from him amidst mental breakdowns and abandonment by most of his 'friends' leaving him to various parasitic hangers-on who are desperate to grab their own strip of fame at his expense. All taking place very much in the public eye.

Let us just say I would not switch places with Kanye if given the choice.

Or the entire story of Michael Jackson, ye Gods.

I don't think I'd be comfortable having a life that is examined 24/7 by both rabid fans and haters, and thus having to constantly be in 'performance' mode. The money would be great, yet I wouldn't feel truly 'free' to spend it. In that sense, my role models are those types who achieve 'quiet' wealth. Like making tens of millions inventing some software that gets adopted as standard in some sub-industry that nobody ever thinks about, and owning a large, reclusive property somewhere in the mountains where nobody COULD bother you even if they wanted to.

Also, if you're a singer, the thought of touring around the world is cool, but then you realize that you have to perform (and practice!) the exact same songs dozens of times per tour, likely thousands of times over the course of a career. For a born performer this might sound okay, but to me it sounds like a slow journey to insanity.

But I wanted to be famous because when I saw these muscians, actors and comedians I just thought wow their lives are easy and fun and obviously they're rich.

Money for Nothing and Your Chicks for Free.

Now it seems like celebrities still have much more fun than the average person but it seems like to keep your position has gotten harder, especially the newer you are.

I'd guess this depends on how you 'came up.' I get the sense that the so-called "Nepo babies" have it comparatively easy since your parents' connections can pave the road for you or, as the case may be, soften the landing if you fall.

I'd also guess that for those without existing connections, the number of 'gatekeepers' has proliferated making it way harder to advance to real fame. Maybe you don't have to sleep with a producer anymore (?) but you've got to get approved by a whole lot of intermediaries before you come anywhere near a big IP or studio that might actually push you through to the mainstream.

For any given level of income/wealth, fame seems like a significant, net negative. That is, I would rather make $20 million from secretly winning the lottery than to get $20 million from having a runaway number one hit music album that made me famous. You have the downsides of stalkers, harassers, gold-diggers, cheats, etc. For every person with newfound respect for you, there are others trying to take you down a peg. And there isn't really any benefit. A person can reach peak happiness from being high status within his own family and social group. If you get so famous that you are awkward with your original social groups, and are in new higher status groups, then you haven't made yourself any better off.

Now, fame can be translated into money. So is it better to be broke and waiting tables in Hollywood, or to get a huge break and become a famous actor? That is harder to say, but generally it seems to me that most modern social circles of the famous are very toxic and should be avoided.

I would rather make $20 million from secretly winning the lottery

You mean...a long-shot stock option play that pays out hugely, GME style?

I've always felt that if I won the lottery, I'd find someone (ideally already rich) to claim the prize for me in exchange for a significant cut (probably up to 50%). Even having your name public as a lottery winner gets you a lot of attention you don't want.

You could probably get more than the prize value by selling your lottery ticket for cash, since that would allow someone who has a lot of illegitimate cash to turn that into legitimate taxable income.

I didn't think of that, and it's an interesting idea. But I don't know many folks who have millions of dollars that need to be laundered, and it's probably too risky to trust them to hold up their end of the deal. (Also, at that point I'd be left with millions of dollars of unaccounted for cash, which seems substantially less valuable than cash that doesn't need to be laundered.)

Though I guess the biggest issue with my original scheme is that it might expose the winnings to double taxation.

deleted

That's fair. Mostly I just thought it was interesting that a market for "sell your lottery tickets" already exists and that winning lottery tickets have a cash value that is larger than the face value of their winnings.

Many US states allow you to claim lottery prizes anonymously.

For any given level of income/wealth, fame seems like a significant, net negative. That is, I would rather make $20 million from secretly winning the lottery than to get $20 million from having a runaway number one hit music album that made me famous

This particular anecdote about Taylor Swift (who was already wealthy and privileged before chasing fame) basically convinced me that, for all the money, being a pop star is not just inconvenient but undignified.

Imagine having to constantly cater like this when it comes to your "art", worrying about every change of the internet tides like a waiter perpetually dealing with a particularly difficult table.

"I always want to say to people who want to be rich and famous: 'try being rich first'. See if that doesn't cover most of it. There's not much downside to being rich, other than paying taxes and having your relatives ask you for money. But when you become famous, you end up with a 24-hour job." - Bill Murray