SnapDragon

1 follower   follows 0 users
joined 2022 October 10 20:44:11 UTC
Verified Email
User ID: 1550


If Sleeping Beauty is asked each time she awakens for a probability distribution over which side the coin landed on, and will be paid on Wednesday an amount of money proportional to the actual answer times the average probability she put on that answer across wakings, she should be a halfer to maximize payout.

I appreciate that you're trying to steelman the halfer position, but that's a really artificial construction. In fact, in this framing, the payout is 1/2 regardless of what she answers (as long as she's consistent). That's what happens when you try to sidestep the obvious way to bet (where even the Wikipedia article admits she should wager 1/3 on heads - and then somehow fails to definitively end the article there).

p.s. you might enjoy the technicolor sleeping beauty problem.

Nice, I think I'd encountered it before (I've unfortunately read a lot of "Ape in the coat"'s voluminous but misguided Sleeping Beauty posts), but I didn't specifically remember that one. Commit to betting only if the room is red. Then of the four equal-weight possibilities (Monday is red/blue) x (heads/tails), you win in red/tails and blue/tails, you lose in red/heads, and you don't bet in blue/heads. Expected payout per experiment is 1/4*(200+200-300) = 25.
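That expected-payout arithmetic is easy to sanity-check with a quick Monte Carlo run (a sketch; the +200/-300 per-bet stakes are taken from the payoffs above, and I'm assuming the standard protocol of one waking on heads, two on tails, with the two rooms in opposite colors):

```python
import random

def technicolor_trial(rng):
    """One run of the technicolor experiment, betting on tails only in red rooms.
    A winning bet pays +200, a losing bet costs -300."""
    tails = rng.random() < 0.5          # fair coin
    monday_red = rng.random() < 0.5     # Monday's room color is random
    # Wakings: Monday always; Tuesday only on tails, in the opposite-color room.
    wakings = [monday_red, not monday_red] if tails else [monday_red]
    payout = 0
    for room_is_red in wakings:
        if room_is_red:                 # strategy: bet only if the room is red
            payout += 200 if tails else -300
    return payout

rng = random.Random(0)
n = 100_000
avg = sum(technicolor_trial(rng) for _ in range(n)) / n
print(avg)  # close to the analytic value of 25 per experiment
```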

He does seem to be wrong about "for reference, in regular Sleeping Beauty problem utility neutral betting odds for once per experiment bet are 1:1", because if you have any source of randomness yourself, you can actually get better odds (by ensuring that you'll "take the bet" more often when you have two chances at it). I see you actually posted a really nice analysis of the problem yourself in the link. It's fun that there's a distinction between an external source of randomness (where the results on Monday/Tuesday are dependent) and an internal source (where the results on Monday/Tuesday must be independent).

I believe, but can’t prove, that a lot of Elden Ring’s difficulty complaints come from DS3 veterans who refuse to do anything but dodge roll and light attack and don’t engage with the new mechanics.

Or, I guess, people like me who were new to FromSoftware games and were trying to bash our way through without looking up online explanations for all the mechanics the game doesn't explain.

Only partially - I genuinely think this is an example of a failure of Wikipedia as a repository of knowledge. And believe me, I'd like nothing more than for rationalists to grok Sleeping Beauty like they (mostly) grok Monty Hall.

Believe me, Tanya does not think she just "missed" the ambiguous phrasing of the problem. What the problem is asking is quite clear - you will not get a different answer from different mathematicians based on their reading of it. The defense that it's "ambiguous" is how people try to retrofit the fact that their bad intuition of "what probability is" - which you've done a pretty good job of describing - somehow gets the wrong answer.

Do you count getting a correct answer twice "more valuable" than getting it once?

Um, yes? The field of probability arose because Pascal was trying to analyze gambling, where you want to be correct more often in an unpredictable situation. If you're in a situation where you will observe heads 1/3 of the time, either you say the probability is 1/3, or you're wrong. If I roll a die and you keep betting 50-50 odds on whether it's a 6, you don't get a pity refund because you were at least correct once, and we shouldn't say that's "less valuable" than the other five times...
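And the "you will observe heads 1/3 of the time" claim isn't something you have to take on faith; a minimal simulation of the standard protocol (one waking on heads, two on tails) shows it directly:

```python
import random

rng = random.Random(0)
heads_wakings = 0
total_wakings = 0
for _ in range(100_000):
    heads = rng.random() < 0.5
    wakings = 1 if heads else 2   # heads: Monday only; tails: Monday and Tuesday
    total_wakings += wakings
    if heads:
        heads_wakings += wakings
print(heads_wakings / total_wakings)  # ~0.333: a third of all wakings follow heads
```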

If she is told that she's going to get cash ONLY if she correctly answers on the last waking, then it doesn't matter what she picks, her odds of a payday are equal.

Nothing in the problem says that only the last waking counts. But yes, if you add something to the problem that was never there, then the answer changes too.

This problem strongly reminds me of the Monty Hall problem, where of course the key insight is that the ordering matters and that eliminating possibilities skews the odds off of 50%.

Actually, the key insight of the Monty Hall problem is that the host knows which door the prize is behind. Ironically, unlike Sleeping Beauty, the usual way the Monty Hall problem is stated is actually ambiguous, because it's usually left implicit that the host could never open the prize door accidentally.

Indeed, in the "ignorant host" case, it's actually analogous to the Sleeping Beauty problem. Out of the 6 equal-probability possibilities (prize location) x (host's choice of door), seeing no prize behind the host's door gives you information that restricts you to four of the possibilities. Switching only wins in two of them, so the odds are indeed 50/50.

Similarly, in the Sleeping Beauty problem, there are 4 equal-probability possibilities (Monday/Tuesday) x (heads/tails), and you waking up gives you information that restricts you to three of them.
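The ignorant-host case is also easy to verify by simulation (a sketch: the host opens a random unchosen door, and we keep only the runs where the prize wasn't accidentally revealed, matching the conditioning above):

```python
import random

rng = random.Random(0)
switch_wins = stay_wins = 0
for _ in range(200_000):
    prize = rng.randrange(3)
    choice = 0                      # without loss of generality, always pick door 0
    host = rng.choice([1, 2])       # ignorant host opens a random other door
    if host == prize:
        continue                    # prize revealed by accident: discard this run
    other = 3 - host                # the remaining unopened door (doors 1+2 sum to 3)
    stay_wins += (choice == prize)
    switch_wins += (other == prize)
kept = stay_wins + switch_wins
print(stay_wins / kept, switch_wins / kept)  # both ~0.5: switching gains nothing
```

(Change the host to always open a non-prize door and the switch fraction climbs back to the familiar 2/3.)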

You may like Dark Souls 2 more than Elden Ring, as IMO it satisfies both the "less mindless mashing of the roll button" and "more rewarding exploration" criteria. (It's my understanding that Dark Souls 2 is a controversial member of the series. But I haven't played any other Soulslike games. Some 4chan users feel that Dark Souls 3 and Elden Ring degenerated into mindless "rollslop" in comparison to Demon's Souls, Dark Souls, and Dark Souls 2.)

Ah, that's good to hear - and "rollslop" is a great word! I have enough FOMO that I probably will try dipping my toes into the genre a few more times, even after a few negative experiences.

Well, yes, this is what I mean when I say that some people don't understand what probability measures. If you pretend "schmrobability" is some weird mystical floaty value that somehow gets permanently attached to events like coin flips, then you get confused as to why the answer, as you can observe by trying forms of the experiment yourself, somehow becomes 1/3. Mathematicians say "ok, please fix your incorrect understanding of probability." Philosophers say "oh, look at this fascinating paradox I've discovered." Yeesh.

With Wikipedia, if I read an article on Abraham Lincoln, I am pretty confident the dates will be correct and the life and political events will be real and sourced. Sure, sometimes there are errors and there are occasional trolls and saboteurs (I once found an article on a species of water snake that said their chief diet was mermaids), and if you are a Confederate apologist you will probably be annoyed at the glazing, but you still won't find anything that would be contradicted by an actual biography.

So, yes, I'm sure most of us are aware that Wikipedia political articles are going to be as misleading as they can get away with, but let me just say that there are some completely non-political articles that are factually wrong, too. If you look up the Sleeping Beauty problem, the article states that there is "ongoing debate", which is ridiculous. For actual mathematicians, there's no debate; the answer is simple. The only reason there's a "debate" is because some people don't quite understand what probability measures. Imagine if the Flat Earth page said that there was "ongoing debate" on the validity of the theory...

And don't even get me started on the Doomsday argument, which is just as badly formed but has a bunch of advocates who are happy to maintain a 20-page article full of philosobabble to make it sound worthy of consideration.

I'm sure there are many other examples from fields where I'm not informed enough to smell the bullshit. Crowdsourcing knowledge has more failure modes than just the well-known political one.

First, it's "metroidvania", not "metrovania".

Hollow Knight and Silksong are both masterpieces, standing at the pinnacle of the metroidvania genre. But their lore (which is really unique and cool, and I like that you can deconstruct and analyze it to death) isn't really why I feel that way. It just comes down to gameplay - the controls are near-perfect and the challenges they throw in your way, particularly the bosses, are amazing and incredibly varied.

I actually kind of resent the Dark Souls comparison. I've barely played a real Dark Souls game, but I actively disliked Elden Ring (despite it, too, having incredible aesthetics and ridiculously deep lore). So many of the bosses felt exactly the same - oh, here's a screen-filling attack, I've memorized how many frames it takes so I can dodge-roll at the right time. Oh whoops, it was his fake-out attack instead, now I'm dead. (I guess I should have allocated my stats differently in their ridiculously-badly-explained leveling system so I could take two hits instead of one.) And I hate that other games considered soulslikes (Salt & Sanctuary, Nine Sols) have latched on to this style, too. You know, you can have a good, challenging game without making it ALL about i-frames!

Notably, there are no i-frames at all in Silksong (I believe the same was true in HK, but it's been a while). You are expected to move all around the screen to actively avoid boss attacks, not weirdly absorb an attack because you rolled with the correct timing. And the bosses are incredibly varied - from huge slow juggernauts, to ranged jerks firing at you while you leap around crumbling platforms, to nimble teleporting fighters who can parry and punish your attacks if you're overeager. For many of them, even on my 20th run back I still had a smile on my face, thinking about how I could do better next time. That's what 10 years of genius-level game craftsmanship can do.

The other thing that HK and Silksong do better than almost any other game is rewarding exploration. In most games, finding a secret wall will give you a small optional upgrade, and you do it because you want the 100% completion mark. In Silksong (even more than HK), finding a well-hidden secret might unlock a key quest item, or a hidden encounter, or even an entire new zone. It's kind of nuts, and it did mean I missed some big things by playing without a guide, but I loved it anyway.

If it weren't for Blue Prince, I think Silksong would easily have been my Game of the Year. (Disclaimer: I have not played Clair Obscur yet.)

You seem to be very invested in your contrarian take, but I'll try to spell this out one more time. Shooting Trump is strong evidence of his opinion on Trump. You don't get to exclude the one huge and highly unusual piece of evidence that we all have and then say the pithy culture-warrior line "there has yet to be produced a single piece of evidence...".

The default boring position is that he hated Trump for political reasons, because Trump is a divisive political figure and he shot Trump.

Now, it's possible the default boring position is wrong, but you need strong evidence if you want to convince non-ideologues like us of this. Searching for Biden campaign stops he could attend does not even distinguish him from fans of Biden. I would struggle to call it "evidence" of anything.

So, to get this straight, your position is that shooting Trump and having Biden in his browser history are roughly equivalent levels of evidence as to whom he wanted dead?

Ok, this has been an extremely frustrating thread. I'm going to take a step back, unilaterally disarm, and try to figure out how we got here.

I believe I misinterpreted @crushedoranges at the start. To paraphrase, what he meant was "ROI is extremely high, which is evidence that this is a buyer's market." Upon rereading, that makes more sense, and is what you and @whatihear have been defending. But (as I stated multiple times) I interpreted it as "ROI is extremely high, which leads to this being a buyer's market." Flipping the implication makes the sign incorrect (and "the market is inefficient" is not a fix).

I could nitpick your phrasing, but I've already wasted enough of everyone's time. I basically agree with your and @whatihear's latest posts. I hope we're all on the same page now.

What did I do to deserve this thread? Yes, Generic Economic Trope #1 is that markets aren't always efficient. That's why I, y'know, explicitly said "efficient" in my post. It's irrelevant to the main point. "ROI being high makes it a buyer's market" is factually incorrect. The effect goes in the exact wrong direction. You don't get to say that "if my product becomes more valuable, I'm less capable of raising prices, because something something INELASTICITY."

Fair enough! It's definitely a hard game.

Silksong does not have this kind of challenge implemented as of now.

Yeah, Hollow Knight also didn't have the Pantheons (boss rush) until the DLC came out. And while the first four (much much much easier) Pantheons contribute to the final 112% completion percentage, the fifth one with that ending does not. It really is intended to be optional content. (There IS an achievement for it, though.)

I haven't finished all Silksong's extra content yet, but I found the bosses to be tough but fair - especially if you count this as a soulslike, which IMHO is a genre rife with badly-designed overtuned bosses. I would cite Nine Sols, another recent metroidvania, as an example of how NOT to do it.

From the wiki: "Behind a breakable wall on your left is the entrance to the Path of Pain - an optional and particularly hard area. Note that beating this area isn't required for the true ending."

Yes, you do have to do some platforming in the White Palace. But you do not even have to find, let alone complete, the insane secret section that's showcased in the linked video.

I only just stumbled across this post, but I feel like this deserves correction. The Path of Pain and Absolute Radiance are both very optional, intentionally brutal content. The canonical good ending only requires you to defeat normal Radiance (after fighting the Hollow Knight), which is a bit tricky but nothing compared to what you linked. It sounds like you checked some guides, misinterpreted them, then gave up prematurely?

Ok, @crushedoranges made an easy-to-understand mistake, but you're specifically trying to be a pedant and correct my correction. You really should have taken a few minutes to think this through first. (And this from an account called "MathWizard"? Really?)

If ROI is high, then more people will want to buy. That's what makes it a seller's market. In an ideal marketplace, prices rise to equilibrium (where perceived value = price). Your example is incoherent - in an efficient market, potatoes being "a great deal for shoppers" is not compatible with "not many people buying potatoes."

To use a pithy example, if we're selling $100 bills for $1, then the ROI for customers is 100, ridiculously high. If the guy next to me has 10 to sell, and I have 10 to sell, and I foolishly decide not to listen to a "MathWizard" and to sell them at $2 instead ... do you think I'm going to have trouble finding buyers?

I for one will be quite happy if political discourse returns to slinging rude memes and videos at each other, rather than rioting and hoping for each other's deaths.

Er, those two sentences contradict one another. If the bribe has incredible ROI, that means the seller (the politician) is in control, and you'd expect to see much larger bribes.

The real explanation is that the competent companies and politicians don't need to bother with illegal bribery. There are subtler ways for companies to "reward" loyal politicians that are near-impossible to outlaw. (For instance, revolving-door sinecures.)

Absolutely. Doing code reviews (even comprehensive line-by-line ones) is a lot less effort for me than writing the code in the first place.

Wow, that is truly amazing. And, hilariously, it's going to be a very powerful datapoint for our constant "are LLMs actually useful?" debates.

Huh? Did you reply to the wrong person? I was complimenting your post.

An excellent takedown of a really dumb article. Good job. I hate it when pundits try to describe and predict our unbelievably complex national/global economy using a couple of pithy ideas ("We're investing too much into AI!" and "Silicon Valley is allied with Trump!"), exaggerated beyond the point of all usefulness. If we overspend on AI, well then darn, we've somewhat misallocated our abundant resources. Maybe we'll optimize better next decade.

No problem, this is something we're all still trying to figure out. I wonder if there'll be a future career path of "prompt engineer", or, more fancifully, "LLM whisperer"...

Your travel analogy is awful - it is often very valuable to solve 80% of a problem. A better analogy would be if your travel agent offered you a brand-new cheap teleportation device that had a range of "only" 80% of the way to Hawaii, but you had to purchase a flight for the last 20%. Which would obviously be great! AVs are the exception here, since you need to actually solve 99% of the driving problem for them to be useful (telepresent drivers "stepping in" can help a bit, but you don't want to depend on them).

Uh, and I don't think $64 per licensed driver in America is going to buy them two Ford F-150s. You might want to check Car and Driver's math. (What is with people being unable to properly divide by the population of the US? Does their common sense break down when dealing with big numbers?) Amusingly, I've never seen GPT4+ make this magnitude of a mistake.

Anyway, we should (and will) be taking the next decade to put smart models absolutely everywhere, even though they sometimes make mistakes. And that's going to be expensive. The major risk of AI investment is definitely not the lack of demand. As OP mentioned, the risk really is the lack of "moat" - if you only have to wait a year for an open-source model to catch up with GPT, why pay OpenAI's premium prices?