Culture War Roundup for the week of July 10, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


The greater replacement

I completed SIGNALIS the other day, on account of it being an internationally acclaimed piece of contemporary German art (something I've previously claimed barely exists, to my chagrin); better yet, consciously Old World art, cleansed of the HFCS-heavy «Universal» American culture to the limit of the authors' ability. It was good. Not exactly my glass-in-podstakannik of tea, and sadly compressing the Old World spirit into a thick layer of totalitarian dread covering all aspects of the «Eusan Nation», but compelling.

This isn't a SIGNALIS review.

The core plot device of the game, and the only one relevant to the post, is Replikas – in a nutshell, synthetic-flesh cyborgs driven by uploads of humans deemed particularly well-suited for some jobs; there exist like a dozen models, from the mass-produced laborer Allzweck-Reparatur-Arbeiter «Ara» to the towering BDSM fetish fuel Führungskommando-Leitenheit «Falke». Replikas, often described in home appliance-like terms, aren't superhuman in any interesting sense, but boast «860% higher survivability» in harsh environments (very economical too: can be repaired on the go with an expanding foam gun), predictable well-documented response to stimuli, and are contrasted to legacy Eusians, «Gestalts», whom they're actively replacing in many niches by the time of the game's events, and seem to dominate politically, as befits their greater utility in the glorious struggle against the accursed Empire.

All of this is to say: I think Peter Zeihan might eat crow with his thesis that Demographics is Destiny and a political entity needs a ton of working-age people to be relevant in the foreseeable future (and specifically that China is doomed due to its aging population). The whole demographic discourse as we know it, and the complementary geopolitics angle, will likely be derailed quite rapidly. Not for the first time: we've gone through the population bomb/Limits To Growth delusion, then through HBD naivete and the expectation of growth from nations which never could grow. Now, mired in the comical obstinacy of credentialed prognosticators and the noise of «democratic» dissent, having failed to reckon with these mistakes, we're going through the Humans-Need-Not-Apply-Denial stage.

I thought of this today while watching a video about the GR-1 (General Robotics?) device by the Chinese startup Fourier Intelligence. Fourier is mostly known for its rehab equipment designs – lower-body exoskeletons for people with mobility problems. They've come a long way since 2015 – it so happens that you can keep adding details to a lower-body support system and, well, before you know it… Kinda reminds me of Xiaomi's path from bloated Android ROMs to a general electronics and hardware giant. Anyway, they're but one competitor in a space that is rapidly heating up. There's Tesla Optimus, Boston Dynamics' Atlas (admittedly a hydraulic monstrosity that'd never be economically viable outside of a more realistic Terminator reenactment), and the lesser-known DIGIT, 1X Eve, Xiaomi CyberOne and probably others I've missed. All (except Atlas) have basic powertrain specs comparable to a short human's (and leagues above gimmicky old prototypes like ASIMO), and all rely on the promise of AI to make them more adroit; AI that is plummeting in training costs, even faster than the USG can kneecap the Chinese semiconductor industry. What's unusual about Fourier is that they're still putting this in the medical frame: «caregiver for the elderly, therapy assistant». The same message had been pushed by Everyday Robots, a Google X company (a recent victim of tech cuts).

Technology has delivered us from the Population Explosion Doom. Tech may well deliver us from the Population Implosion Doom too. But… who «us»?

And speaking of Boston Dynamics, there's this thing, the Unitree Go2, shamelessly ripping off MIT's Mini Cheetah (RIP, real Cheetah) and making it sexy. Hardware-wise it's just a very decent quadruped bot, on the smaller side: it can carry 7-8 kg, run at ≤5 m/s, do backflips and so on. There are two interesting things about it: cost ($1600-$5000, to wit, 15-45x cheaper than BD's Spot) and advertised parallel AI training, no doubt inspired by Tesla's fleet-scale data flywheel idea. Well, that and how fucking well it moves already – watch it to the end. It's not vaporware; you can see people using their previous-gen robots, and I regularly notice them in ML materials, even Western stuff like this. (For comparison, here's a Tencent equivalent.)

Here's the deal. I believe this is it, after so many false starts. Robot adoption will accelerate exponentially from now on; the only realistic constraint is investor money remaining mostly tied up in AI/Big Tech, and I do not think even that will be enough to stop it. There have been two main, mutually reinforcing obstacles: software that was laughably inadequate for the task, and extremely expensive actuators, owing to small-scale production and the whole business being tied up in institutional deals (and high-liability crap like power plant inspections). The software side is being improved by AI very quickly. Quadruped movement, even over complex terrain, has been independently solved many times over in the post-COVID era (add this to all the examples above); simulation and big-data approaches like Unitree's will no doubt iron out the remaining kinks. Biped movement is lagging but is starting to move onto the same curve. As this happens, demand for components will surge, and their price will crash; first for quadrupeds, then for androids. There really isn't any legitimate reason why crappy robots must cost more like a Tesla than a MacBook; it's just a matter of economies of scale. The remaining issues (chiefly hands; robot hands still suck) will yield to the usual market magic of finding cheap paths through a multidimensional R&D landscape. Did you know that Facebook has developed and open-sourced superhuman, dirt-cheap tactile sensors? There are oodles of such stuff waiting to click together, the puzzle to resolve itself (I love watching it; I've been watching it ever so slowly move toward this stage all my life, seeking the same feel in toy short-term puzzles). The Unitree Go2 relies on GPT for interpreting commands into motion. Did you know that China has something like 4 projects to replicate GPT-4 running in parallel? But GPT-4 is already scientifically obsolete, soon to be commodified.
This whole stack, this whole paradigm, will keep getting cheaper and cheaper, faster and faster, standards rising, wires-out prototypes giving way to slick, productized consumer goods that are about as useful as their users.

…In conclusion, we might be tempted to think in more detail about the current destinations of working-age Chinese, like the EU, Canada and the US. I can't recall who said this first, probably some guy on Twitter. The point is sound: a nation (or culture) that is willing to replace its population with immigrants when that's economically advantageous – instead of seriously trying to improve its demographics – may prove equally willing to replace immigrants with robots and AI next. Sure, robots have the demerit of not being able to vote for more of themselves. On the flip side, they can remain plentiful even as the stream of immigrants dries up with their mothers becoming barren, and the global population pyramid inverts and stands on a sharp point. And Dementia Villages (which the Developed World may largely turn into) will be easy to coax into voting for the maintenance of their modest palliative creature comforts and pension/UBI. The Glorious Eusian Nation, this future is not; but one not worth living in, it might well be.

If I am right, the Culture War of the near future will be increasingly influenced by this issue.

And Dementia Villages (that the Developed World may largely turn into) will be easy to coax to vote for maintenance of their modest palliative creature comforts and pension/UBI. The Glorious Eusian Nation, this future is not; but one not worth living in, it might well be.

but one not worth living in, it might well be.

On the contrary, a world full of robots and capable AI systems without that much agency, and a declining population, would be very interesting – if you could get organised and start using all that industrial capacity for your own ends. Remember the Chinese billionaire who has his own preschool? Once the boomers are all dead, who's gonna stop you from repopulation efforts involving cloning the people who were very smart and enjoyed life?

…an internationally acclaimed piece of contemporary German art (something I've previously claimed barely exists, to my chagrin)

This depends on your idiosyncratic personal preferences about what counts as art, meaningful, or beautiful. As far as I'm concerned, Germany contains arguably the world's premier center for producing beauty and meaning. I personally think it's crazy to judge a country producing ideas like these as not producing internationally acclaimed contemporary art just on the basis of video games, paintings, or music (and even this example is questionable, since this German produces quite a bit of internationally acclaimed music).

This is the general problem with basing your morality too heavily on aesthetic preferences. These vary too much from person to person, are really hard to reconcile, and therefore produce unresolvable differences in judging which places are successful/which policies work/etc. It just leads to an all-against-all war between 100,000,000 orthogonal value systems.

Hans Zimmer is German, but he is one of the last living (true) German Jews, so not really what Dase is talking about.

I don't think I understand. Can you explain why that's a relevant difference instead of just no-true-Scotsmanning? Whatever the super fine details of the exact specific type of German he is, if you want to follow the tradition of dramatic, operatic German music like Wagner to the modern day, Hans Zimmer is probably the best example.

Unrelated to your post but does Signalis live up to the hype?

Note: if you grew up in East Germany, Signalis is going to be a headfuck. This game is "DDR in space" through and through. It's actually painful how familiar it feels.

Pretty fun though. On the downside, I'd say it feels like it's designed "to imitate RE" more than "to be good" at some points. Some puzzles are very much puzzles for the sake of puzzles. But RE is a good design anyway, so it mostly works, and it's short enough that it doesn't end up overstaying its welcome too badly.

Also, lesbian robots.

If you like OG resident evil and 90's anime aesthetics: Hell yeah! 10/10!

If no: Strong maybe! 7.5/10!

I don't know about the hype, and my experience with survival horrors is very limited. «It's a good game, anon». The worst part is running in circles, dodging reviving zombies, to assemble a trivial puzzle. If you're a skilled gamer you can complete it in like 5 hours (default ending).

People I otherwise respect cock an eyebrow when I point out that GPT-4 being a competent clinician and programmer is a cause for concern, reasoning that since it lacks appendages or a means of communication outside text (multimedia output aside), it can't replace human labor or human hands.

Firstly, it's the SOTA today; anyone wanting to bet that GPT-5 won't be another leap ahead within a couple of years is welcome to take it up with me, I could use the money.

Secondly, advances in both soft- and hard-body robotics continue apace, including hooking them up to LLMs such as PaLM-E, at which point the LLM is also writing some of the code for controlling the robot, giving it goals, and using it to manifest itself in a live environment.

I've certainly expounded before on why demographics are unlikely to matter in the least, since, as you've also pointed out, automation and widespread robotics will make an aging population much easier to bear, and will obviate the need for the large numbers of skilled and unskilled immigrants many nations have come to rely on to bolster their numbers and maintain their QOL. Be it industry or military might, we're unlikely to be competing in either by throwing humans at the problem.

As usual, you pack the citations to back up the intuitions and fears I've developed from years of being an observant bystander, so I can only endorse this wholeheartedly.

From a personal standpoint, all of this is concerning to say the least. I am a skilled immigrant to the UK, and on one hand, Rishi Sunak and co are desperately trying to alleviate Britain's ever-greater irrelevance by jumping on the AI bandwagon first, while on the other hand threatening their uppity doctors that they had better accept their lot in life, because they'll be replaced with cheaper AI and bots if they don't behave. Said British doctors are laughing at the threat; I can only wish I were half as blasé about it. They might not be able to pull it off today, but in a handful of years? 5 years? Yes. Add in the trained monkeys in lab coats, the PAs and NPs, who are already undercutting doctors and who will be able to compete with us on manual dexterity once the easy job of merely thinking is better done in a datacenter on the outskirts of London. Or imagine a deskilled doctor who uses AR glasses and AI cues to do pretty much everything, with any attempt at deviation or display of artistic skill being a strict negative in terms of outcomes. 90% of us becoming useless is almost as bad as 100% of us. Curses that the one place where the British government shows a degree of foresight and wisdom is the one that fucks me over the most. They've already shown their willingness to screw over their own doctors, hence the rampant demand for IMGs like me, who still see even the declining NHS as an upgrade, albeit a less appealing one each day.

To the extent that the immigration pathways I have rely on me bringing in great value as a trained and ready to work doctor, my impending economic obsolescence threatens all available pathways to remaining in a Western country, barring throwing my lot in with outright refugees.

Things like the need for human interaction or touch don't matter when the economic incentives are so gargantuan, especially since we already have a paucity of doctors globally, with new human ones needing to go through a lengthy training and deployment phase. We've already gone from the genteel days when most doctors could take their sweet time gossiping with patients and sipping tea to a far more aggressive, target-oriented approach where most people are given only as much time as strictly needed. The market bore going from half-hour consults to 5-10 minutes with a frazzled GP; it'll go from 5 to 0 even faster. Even the people with a sentimental attachment to us can't keep us going, since they're likely to become economically obsolete themselves.

I can only hope I make enough money to insulate myself from the coming troubles, or at least become a citizen somewhere in the West so they'll put me on UBI. Ah, what I would give to be 5 years older, with enough runway to mess about and take my time. But at least I have alpha in knowing what's coming, and that puts me miles ahead of those who will be introduced to automation-induced unemployment when it comes for their "skilled" job.

You guys have no idea how terrifying it is to see the curtains drawing closed before your eyes, without the modest safety of a government committed to taking care of you when you're nothing but a drag. The Indian government certainly engages in welfare, but it has absolutely no hope of doing so when literally >90% of its citizens end up unemployed, and even the UK will have to cut down on people who've outlived their usefulness. All I can say is that I'm grateful to have gotten this far while I still have a fighting chance. God knows my little brother will likely never get to specialize in anything, if he even manages to practise as a junior for a handful of years.

I wasn't born a Doomer, quite the opposite. I spent most of my life looking forward to the bright future that technology can bring about, where we come to rule the Earth and the stars, and I still think that's more likely than not, it just comes with a significant risk of killing or starving me along the way. Pour one out if I don't make it, if you do, you can afford it.

I'm, if anything, more of a doomer than you are, in that I think the chance of catastrophic changes from AI is much more concerning than job displacement. But I don't quite understand this particular angle of doom.

AI's introduction to economies, assuming it takes the mundane form of doing our jobs better than us, will almost certainly be positive-sum. At the same time as you're losing your competitive advantage, so will the rest of us doing non-physical labor. Any part of your spending that goes to this non-physical labor will also drop precipitously – and that's a lot of your spending. If you are correct and your fate is to become a medical conduit for GPT, paid half as much while servicing four times as many people, it seems like quite an assumption that your cost of living won't also fall by at least half. Perhaps that sounds absurd, but your pay being cut in half should sound at least as absurd.

What an economy looks like where large swathes of thinking are automated is a very difficult question, and any simple "this is world-ending for people who primarily make their income via their cognitive ability" is not dealing with the nuance. I think it's more likely that we'd all move to physical labor and live better than we do now than that we'd all be out on the streets. At a macroeconomic level, I would be willing to wager – if the friction of doing so is not too annoying – that AI will be seen to have had a positive overall impact on the lives of Indians in India in a decade, if its primary impact is on jobs. All bets are off if it triggers some non-economic catastrophe.

I'm a knowledge worker who may see their competitive value halve (and I live in the UK, so am paid not that much to begin with), and my cost of living is almost entirely housing, food and energy.

The things that AI will make cheaper are an insignificant sliver of my budget. But AI will vastly reduce my market value while vastly increasing the minimum productivity I will have to hit in order to retain even that much-reduced market value.

Just because automation is good for the overall economy doesn't mean that the benefits will reach those displaced, or that the period of displacement won't be extremely disruptive or dangerous.

As it stands, it encourages monopoly and scale: you no longer have many of the coordination bottlenecks that prevent any single successful company from swallowing the majority of GDP, especially when it's even a modestly superhuman AGI in charge.

I have more to lose than most Mottizens, because I am not a citizen of a wealthy Western country. India can't afford UBI, nor is it in a good position to ramp up its manufacturing to make good use of it. Countries like America or Australia can probably manage the former, and China the latter. My stay in the West is contingent on me being a value add, and that will almost certainly be weakened. After all, you're not going to be deported because your visa expired and you couldn't get a job anymore.

As for why I focus on automation-induced unemployment instead of AI simply killing all of us: I can do much better when it comes to preparing for the former, and I'm as helpless as anyone else if it's the latter. My marginal effort and concern are far better spent on the worlds where they make a difference.

Is it possible that the gains of automation are taxed to subsidize UBI, or that decreased costs of goods helps soften some of the shock? Certainly. I'm not banking on it though, not before a lot of suffering.

India is already much more volatile than the West, and I have reason to suspect that the risk of things going sharply south, at least for a while, as all the sectors of the economy reliant on producing goods and services for international consumption become obsolete, is unacceptably high.

Firstly, they're the SOTA today, anyone wanting to bet that GPT-5 won't be another leap ahead within a couple years is welcome to take it up with me, I could use the money.

I don't bet. And actually, someone independently posted pointing out that most LW-style bets are irrational from the point of view of the profit motive and are really about signalling.

Also, they said the same thing about self-driving cars. It turns out that the last bit is a lot harder to get right than the first bit.

I find none of the reasons you stated convincing.

Firstly, bets are a tax on bullshit: you either pay up with money or lose social prestige. That's less of an issue on a pseudonymous forum, but no amount of words can convey the force of putting your money where your mouth is.

Secondly, you're not dealing with a malicious genie or a rules-lawyer here; if someone wanted to bet a modest amount, up to say a thousand dollars, I see no reason why I would refuse to pay up. I can only hope my reputation speaks for itself. You shouldn't make a bet with someone if you don't expect them to pay up, after all.

Third, if I wanted to be "purely profit-maximizing", I'd be heading to prediction markets that pay out real money, where that approach scales. Unfortunately, they're often better calibrated than the kind of person I expect to take me up on my offer, but I still intend to dabble in them when I have more money to spare. I enjoy the satisfaction of being proven right in a public forum, with a small amount of money to spice it up.

I'd bet up to around 5000 USD, on terms and metrics for evaluation that could be discussed further, but I would strongly prefer lower-stakes bets of 10-100 USD, where the incentives to defect are smaller.

If your wife and kids are averse to you spending the equivalent of a succulent Chinese meal, then you really ought to be doing better (if anyone thinks I'm being an asshole for no reason, this references a reason in Jiro's own first link). I can understand if larger sums make you leery.

Oh, and on self-driving cars: they're already operating autonomously in California and Arizona. While self-driving cars may or may not be human-level, you can take my word for it that GPT-4 is a good doctor; what reason do I have to lie, as a doctor myself? I ought to have the opposite incentive, barring my general desire to be honest. It's kind of a moot point whether it can happen in the future when we're already there.

I'm not gonna bet either, because I'm pretty agnostic-yet-skeptical of this approach – no strong feelings, open to being surprised (unlike, say, with self-driving cars).

But I'm curious: what would be your criteria for a "great leap forward" in GPT-5? It all seems a bit subjective.

(The main reason to be skeptical is that, AFAIK, there has been no great leap forward in anything other than the size of the model and that of the corpus over the past few GPT iterations – the former is typically subject to diminishing returns at a certain point, and the latter is probably pretty maxed out. Of course, that doesn't mean some clever Dick at OAI won't come up with improvements to the underlying algo (which is why I don't want to bet), but it's far from a given.)

Hmm, the most important one in my eyes is performance on the USMLE: GPT-4 is at the 95th percentile today, and I expect GPT-5 or the best SOTA model to reach the 99th percentile at the least by the end of 2025.

There are plenty of other benchmarks, and I could eyeball them as needed to formulate the bet, but I'm not particularly interested if nobody wants to take up the bet. Those are the closest to objective ways of assessing this as far as I know.

(the main reason to be skeptical is that AFAIK there has been no great leap forward in anything other than the size of the model and that of the corpus over the past few GPT iterations -- the former is typically subject to diminishing returns at a certain point, and the latter is probably pretty maxed out. of course that doesn't say that some clever Dick at OAI won't come up with improvements to the underlying algo (which is why I don't want to bet), but it's far from a given)

  1. Diminishing returns != no returns or negative returns. The scaling laws still hold firm. In fact, the latest scaling laws suggest existing models are undertrained for their size and would benefit from more data.
  2. I've seen figures for the GPT-4 training run of around ~$50 million. That is nowhere near the limit of what FAANG-tier or Tier 2 companies or nations can afford; we can easily go into the tens of billions.
  3. I contest the idea that we're tapped out on text; there are plenty of sources, like proprietary datasets, video transcripts and the like, that come within budget when text tokens become a truly limiting factor. You can trade off compute in multiple ways, often training a model on a fixed dataset while scaling parameters, and while that may not be optimal, even the best modern models can do more with the same number of tokens.
  4. Synthetic datasets are already being tested and may serve as a route to bootstrapping even without more "real" data. Models can learn by self-play or self-debate; the former is already how AlphaGo works, and the latter is brand new but seems promising.
  5. Filtering for good data is also beneficial: LLMs of a given size trained on corpora of the same size, but with one having better data than the other (code, scientific papers), will perform differently, with the better-fed one doing better.
  6. Newer models can be trained on multimodal data, not just text.
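To put a rough shape on point 1: the Chinchilla-style fit models loss as a sum of power laws in parameter count N and training tokens D. A sketch (the constants are the published Hoffmann et al. fits for that particular training setup, and won't transfer exactly to other regimes):

```latex
% Chinchilla-style parametric loss fit
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad E \approx 1.69,\; A \approx 406.4,\; B \approx 410.7,\; \alpha \approx 0.34,\; \beta \approx 0.28
```

Since both exponents are positive, returns diminish but never hit zero; and because the two terms are roughly symmetric, the compute-optimal recipe scales N and D together (on the order of 20 tokens per parameter), which is exactly why earlier models trained data-light for their size look "undertrained".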

Will we run out of ML data? Evidence from projecting dataset size trends

Our projections predict that we will have exhausted the stock of low-quality language data by 2030 to 2050, high-quality language data before 2026, and vision data by 2030 to 2060. This might slow down ML progress.

All of our conclusions rely on the unrealistic assumptions that current trends in ML data usage and production will continue and that there will be no major innovations in data efficiency. Relaxing these and other assumptions would be promising future work.

Even considering only high quality data, we're unlikely to run out before 2025, enough for at least a GPT-3 to GPT-4 delta.

Points 1 and 2 suggest that as long as the marginal return on training is positive, models will only get better. After all, they will then be able to do much higher-value cognitive and physical labor, so instead of just replacing the average doctor or code monkey, they promise to displace even the specialists.

@DasenidustriesLtd will be better positioned to answer all of this, even though I am confident I'm better versed on the topic than the overwhelming majority of Mottizens.

If bets are a tax on bullshit, they are the regressive tax that is put there by special interest groups in the government to benefit themselves.

Your willingness to bet can mean:

  • you have justified confidence in X;
  • you have unjustified confidence in X;
  • you are bad at general risk assessment, or just very foolhardy with your money;
  • you are rich or otherwise value money less than the other guy, and you're using your money to buy status;
  • you aren't very risk-averse. I won't take a bet with a positive expected value that gives me a 90% chance of winning money and a 10% chance of losing it, unless the amounts are unbalanced by far more than 9 to 1.
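The last bullet is just expected utility with diminishing marginal utility of money. A minimal sketch, assuming log utility (the wealth and stake figures are illustrative, not anyone's actual numbers): a 90/10 bet at exactly 9-to-1 stakes is fair in money terms, yet a log-utility bettor declines it, and the payout has to be unbalanced by more than 9 to 1 before the decision flips.

```python
import math

def expected_value(p_win, win_amount, lose_amount):
    """Expected monetary value: p_win chance to gain win_amount, else lose lose_amount."""
    return p_win * win_amount - (1 - p_win) * lose_amount

def log_utility_gain(wealth, p_win, win_amount, lose_amount):
    """Change in expected log-utility (a standard risk-averse utility) from taking the bet."""
    before = math.log(wealth)
    after = p_win * math.log(wealth + win_amount) + (1 - p_win) * math.log(wealth - lose_amount)
    return after - before

# A 90%-likely bet at exactly 9-to-1 stakes: zero expected value in money terms...
print(expected_value(0.9, 10, 90))         # ~0 (fair bet, up to float error)
# ...but a risk-averse bettor with 200 on hand still says no:
print(log_utility_gain(200, 0.9, 10, 90))  # negative
# Sweeten the payout past 9-to-1 and the same bettor accepts:
print(log_utility_gain(200, 0.9, 20, 90))  # positive
```

The asymmetry comes from `math.log` being concave: the 10% branch, which drags wealth from 200 down to 110, costs more utility than the 90% branch gains.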

I'm not going to bet with someone unless at a minimum I'd be willing to lend them money. And I'm not going to lend money to some guy over the Internet.

Once again, I must point out that I'm not endorsing betting with absolutely anyone who asks. At least in rat and rat-adjacent circles, almost anyone with any degree of reputation who makes bets falls into the "you have justified confidence in X", and if it turns out to be unjustified, it's often in hindsight.

Since, barring unresolvable cases, someone must have been wrong for the bet to pay out, calling being wrong the same as unjustified isn't warranted.

Certainly the argument that I don't value the money is trivially false, I'm a Third Worlder. Nor does the risk averse aspect play into it, because I have very strong confidence in my assessment.

I'm not going to bet with someone unless at a minimum I'd be willing to lend them money. And I'm not going to lend money to some guy over the Internet.

You do you. If my reputation doesn't meet your requirements, then so be it. I still think worse of you for turning it down, especially at trivial stakes. After all, unlike simply lending someone money straight away, neither of us will be out anything right now, since I never asked for money to change hands until the bet resolves.

Still, points for a smart decision, because I do expect that if you took it up, you'd lose the money. If I didn't, why would I even offer?

Once again, I must point out that I'm not endorsing betting with absolutely anyone who asks.

There's no bright line between "you" and "anyone on the internet who asks". Which means the best policy is to not do these bets with you.

Secondly, you're not dealing with a malicious genie or rules-lawyer here

In my experience, people often act exactly like these.

You shouldn't make a bet with someone if you don't expect them to pay up after all.

Well, yes, but that's Jiro's point.

You also missed the point about making the challenge being a way to "win" right now while the loss (even if the loser doesn't weasel out of it) ends up in the future when no one cares any more.

In my experience, people often act exactly like these.

I'm not most people. If you don't think I will keep my word, then simply don't bet at all. You don't see me offering this bet on 4chan, nor would I offer it to a throwaway account. If you're a regular here with even a minimal amount to lose, I can at least debate the odds and stakes.

You also missed the point about making the challenge being a way to "win" right now while the loss (even if the loser doesn't weasel out of it) ends up in the future when no one cares any more.

I read that point and disagree with it. It signals that both people have strong conviction and confidence in their beliefs, and I fail to see how that makes either of them losers. I would respect someone with the confidence to stake anything at all beyond words more, and so would many other rats or rat-adjacents, regardless of whether they win or not. I certainly lose a great deal of respect for people who bow out before they even get to that point, no matter what excuses they raise to justify it.

Presuming the Motte still exists when said bet resolves, he would have my permission to point and laugh if I reneged on the deal, as long as he returned the favor.

I'm not most people. If you don't think I will keep my word, then simply don't bet at all.

No, I am not sufficiently confident that you will keep your word.

Of course, politeness norms normally preclude saying that, but you are taking advantage of politeness norms when you use my failure to say that as reason why I shouldn't mind betting. So I have to say it.

Like I said, it's your call. You're evidently willing to pay the small price of losing a portion of my respect, not that I expect you to lose sleep over it.

I certainly am not so full of myself that I can't accept that someone might not want to take up a bet with a pseudonymous stranger. My issue is only that you claimed a general aversion to betting at all, without caveating it with even (excessive) qualifiers like only offering bets to people you'd lend money to. If that counts as "taking advantage of politeness norms" to you, I clearly disagree.

For what it's worth, a person here has already offered me substantial sums with absolutely no strings attached, and I haven't taken it up because my condition isn't so dire that I can't do without it. No, I'm not going to post proof unless said person sees this and approves disclosure. I'm happy that someone values me enough to make the offer.

and didn't bother to caveat it with even (excessive) qualifiers like offering bets to people you'd lend money.

Because most people don't routinely add nitpicky qualifiers to statements like that.

If I tell you that I don't eat brussels sprouts, I may in fact eat brussels sprouts if I was offered $500 to do so, or if there was a gun at my head. The fact that I left that out isn't "didn't bother to caveat it", that's talking normally.

Besides, I did say:

I'm not going to bet with someone unless at a minimum I'd be willing to lend them money.

in a different post. Pointing out that I didn't say it in the exact post you're referring to is an even worse nitpick.

If your wife and kids are averse to you spending the equivalent of a succulent Chinese meal, then you really ought to be doing better. I can understand if larger sums make you leery.

Someone who has spent the past several posts wringing his hands about how obsolete and poor he's gonna be instead of getting his promised future as a rich doctor really ought to knock that shit off. You don't get to mock "poorcel!" when you're begging "anyone know a way I can get into America, I want to be rich and live a good life?"

It's not just a question of affording the money, it's also a question of how you budget and spend, and the wife and kids question is separate from that.

I could afford an arbitrary expense of a few hundred dollars. But if you want me to risk money, you'll have to do better than telling me "if you get that unlucky 10% chance, you'll be out a few hundred dollars, but hey, you can afford it", unless you have a very sophisticated idea of "afford" that is not just "you have $X and no plans to use it for anything".

On the contrary, this seems consistent with the belief that any functioning adult in the US is richer than God.

Since God has a net worth of about zero, I can't disagree.

If not, I hope he's filing his taxes properly, assuming he's not tax exempt on religious grounds.

C'mon man -- He's got the whole world in His hands, just because he struggles with liquidity doesn't make him poor.

I might be poor in 5 years. Well, if not poor, then unemployed, which isn't the same.

I can afford a 100 dollar bet over a timespan of 1 year, and I'm pretty sure anyone else here could too, including justifying it to the missus. Even 1-5k is feasible, especially since I don't have to pay out right now. If my circumstances are so dire that I can't afford that in a mere 2 years, things are so FUBAR that we all have bigger problems to worry about.

You're mistaking it for an accusation of poverty in the first place; it's more of an accusation of mild hypocrisy. I think it's a lame excuse when people regularly make far bigger spends on impulse. I know I have, and I'm already among the poorest Mottizens in absolute terms.

Snarking at someone about "if your family would be impoverished by the price of a meal" is pretty much sounding like an accusation of poverty. I don't bet because I think it's stupid, and I never win anyway, so whether or not I can afford to bet a tenner isn't why I avoid bets. Maybe the other person has a different reason, but "if you don't take up my bet it's because you're poor, boo sucks to be you" isn't a mature argument.

You might want to open the first link he shared, because among the myriad (bad, IMHO) reasons he shared for refusing to take bets, one was that he couldn't justify it to the wife and kids.

If memory serves, he's a lawyer or in an associated field, and he's certainly not going to end up in the doghouse for a sum that small. Even if he isn't one, he's almost certainly wealthier than I am by a country mile.

I already am amongst the poorest Mottizens around, at least in absolute terms, and given that I don't want to be paid out adjusted to purchasing power parity, there is no way this represents a worse deal for him than it does me.

After all, both parties taking bets expect to win, with a net positive expectation after taking the odds into account.

I can't force anyone to take a bet, can I? If he can't afford to bet, that's a completely different scenario from giving reasons why he doesn't bet in general.

I don't bet because I think it's stupid, and I never win anyway, so whether or not I can afford to bet a tenner isn't why I avoid bets

If you keep losing bets, then the smart decision is both to not bet, and temper your expectations on how right you are about things.

This isn't something like horse racing, where you have a middleman taking a cut, meaning that you have to be better than merely being right more often than not to have it be worth your time.
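To illustrate the middleman point (a minimal sketch; the -110 line below is just a common sportsbook convention, used here as an example, not something from the thread): a peer-to-peer even-money bet breaks even at a 50% win rate, while a bookmaker's cut pushes the break-even rate above that.

```python
# Break-even win probability when you risk `stake` to win `payout`:
# you come out ahead only if p_win > stake / (stake + payout).
def break_even_prob(stake, payout):
    return stake / (stake + payout)

# Peer-to-peer even-money bet: being right more than 50% of the time
# is enough for a positive expectation.
peer = break_even_prob(100, 100)  # 0.5

# Typical bookmaker line of -110: risk $110 to win $100, so the
# middleman's cut raises the break-even win rate to about 52.4%.
with_vig = break_even_prob(110, 100)
```

That gap between 50% and ~52.4% is the "better than merely being right more often than not" margin the middleman extracts; a direct bet between two people has no such overhead.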

I'm right on the things I care to bet on more often than not, and since I recently missed an opportunity to make bank on Nvidia (I couldn't convince my dad to invest in time, nor had the money to do so myself), I have no qualms about taking one on.

because among the myriad (bad, IMHO) reasons he shared for refusing to take bets, one was that he couldn't justify it to the wife and kids.

In your opinion, indeed. You sound like you're offended he wouldn't bet with you. My view on this is nobody has to bet on anything and "I don't want to" is sufficient reason. "Oh, you're so pussy-whipped your missus won't let you bet" and "boo-hoo, your family will starve if you bet a small amount of money, loser" are not, as I said, convincing arguments and make you sound like a playground bully.

His reasons are his reasons. You proposed a bet, he refused, there we are. Again, my own view is that "if you really believed the position you hold, you'd bet money on it, and if you don't you're a pussy/coward/poorcel loser" is fucking stupid, like the local tough guy trying to chivvy someone into drinking because "a real man can hold his booze and if you don't want to go pint for pint with me, you're a dum-dum loser!"

Then again, I'm a woman, and these male dick-measuring rituals don't impress me much.


For what it's worth, Brockman and Altman seem to say that GPT-5 will be in an entirely different format, either a purely B2B offer or something direct-to-research institutions, so I am not sure if that bet would be resolvable.

where did you learn that from?

https://youtube.com/watch?v=65zOlQV1qto&t=1854

Looking back at it, I've read too much into his words. I do think it's a possible interpretation, though.

I'm sure they'll provide evidence of its competence publicly, even if they hide the details (at least as with GPT-4, most likely because it's a rather standard Mixture of Experts rather than some kind of earthshaking breakthrough; wait, let me get my "All You Need is Scale" t-shirt ready) or refuse to let us plebs use it. I think the most likely scenario where that's the case is if they think GPT-4 is sufficient for most use cases and that they might make more money through market segmentation, offering the best models to companies that will pay $$$ for them.

And I'm not strictly concerned with just something coming out of OAI; I could easily be persuaded to consider models from other companies, and I posit that at least one of them will be superior to GPT-4 on terms that are easily verifiable.

If the bet doesn't resolve, it doesn't resolve and nobody loses anything.