
faul_sname

Fuck around once, find out once. Do it again, now it's science.

1 follower   follows 1 user  
joined 2022 September 06 20:44:12 UTC

				

User ID: 884


I can't understand how people can maintain a neutral view on unnecessary surgeries on minors

If you think that minors are basically small people, and that people should largely be allowed to do things that they personally expect will make them fulfilled, it's pretty easy to keep a neutral view. Something like "I have no desire to do this to myself, but neither do I have a moral claim to prevent them from doing it to themselves". To be honest, this is pretty much where I fall on the issue. I am somewhat uncomfortable with the speed with which this went from rare to common, as it leaves people without solid information on how well it's likely to go in the marginal case rather than the average case as of decades ago. Still, I can't think of an intervention here where the benefit of banning it would be worth the costs and the precedents such a ban would set.

How many manias does history need to present before people learn what we are?

Empirically, people do not learn from history, only from things that they personally have seen, and so every group in every generation has to learn that lesson for themselves.

The Kolmogorov complexity of a concept can be much less than the exhaustive description of the concept itself. Pi has infinite digits, a compact program that can produce it to arbitrary precision doesn't, and the latter is what is being measured with KC. I believe @faul_sname can correct me if I've misrepresented the field.

Sounds right to me.

there's no argument that can convince a rock

You're just not determined enough. I think you'll find the most effective way to convince a rock of your point is to crush it, mix it with carbon, heat it to 1800C in an inert environment, cool it, dump it in hydrochloric acid, add hydrogen, heat it to 1400C, touch a crystal of silicon to it and very slowly retract it to form a block, slice that block into thin sheets, polish the sheets, paint very particular pretty patterns on the sheets, shine UV light at the sheets, dip the sheets in potassium hydroxide, spray them with boron, heat them back up to 1000C, cool them back off, put them in a vacuum chamber, heat them back up to 800C, pump a little bit of dichlorosilane into the chamber, cool them back down, let the air back in, paint more pretty patterns on, spray copper at them really hard, dip them in a solution containing more copper and run electricity through, polish them again, chop them into chips, hook those chips up to a constant voltage source and a variable voltage source, use the variable voltage source to encode data that itself encodes instructions for running code that fits a predictive model to some inputs, pretrain that model on the sum total of human knowledge, fine tune it for sycophancy, and then make your argument to it. If you find that doesn't work you're probably doing it wrong.

So there's the trivial answer, which is that the program "run every program of length 1 for 1 step, then every program of length 2 for 1 step, then every program of length 1 again, and so on [1,2,1,3,1,2,1,4,1,2,...] style" will, given an infinite number of steps, run every program of finite length for an infinite number of steps. And my understanding is that the Kolmogorov complexity of that program is pretty low, as these things go.
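That [1,2,1,3,1,2,1,4,...] interleaving can be sketched concretely. This is the "ruler sequence" (tick n goes to the program whose index is the number of trailing zero bits of n, plus one); `scheduledProgram` is my own name for it, not anything standard:

```javascript
// Dovetailing schedule: at tick n (1-based), give one step to program
// (or program-length) k = v2(n) + 1, where v2 counts trailing zero
// bits. Program k gets a step every 2^(k-1) ticks -- i.e. each program
// gets infinitely many steps as the ticks go to infinity.
function scheduledProgram(n) {
  let k = 1;
  while (n % 2 === 0) { n /= 2; k += 1; }
  return k;
}

const order = [];
for (let n = 1; n <= 8; n++) order.push(scheduledProgram(n));
// order is [1, 2, 1, 3, 1, 2, 1, 4], matching the pattern above
```

Since the scheduler is just "count trailing zeros", its description is tiny, which is why the Kolmogorov complexity of the whole run-everything program is so low.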

But even if we assume that our universe is computable, you're not going to have a lot of luck locating our universe in that system.

Out of curiosity, why do you want to know? Kolmogorov complexity is a fun idea, but my general train of thought is that it's not actually useful for almost anything practical, because when it comes to reasoning about behaviors that generalize to all Turing machines, you're going to find that your approaches fail once the TMs you're dealing with have a large number (like 7 for example, and even 6 is pushing it) of states.

None of the people in that conversation, including yourself, even tried to defend the mainstream position, so there was no evasion on my end

And your responses were to an imagined opponent who defended the mainstream position, instead of the actual people who were responding to you with actual specific questions.

If you want to debate what you see as the mainstream position with someone who supports the mainstream position, you need to go find someone who supports what you believe the mainstream position is, and then go debate them. If you want to take a stronger position than "the mainstream position is not 100% accurate", you need to defend your stronger position, not just fall back to "well you're not defending the mainstream position so I will not engage".

Lest you think I'm being uncharitable, I'm thinking in particular of this comment, where you said

It is strange to accuse Revisionists of "moving the goalposts" when you refuse to defend the core elements of the mainstream narrative. You are of course free to not take the mainstream position and propose your own historical interpretation, and that makes you a Revisionist. Congratulations.

This being in the context of someone repeatedly challenging your very specific claim that

There was no German plan for the physical extermination of world Jewry

and your repeated refusals to actually engage with their evidence that such a plan did, in fact, exist.

Concrete note on this:

accusations that they promised another, "Chloe", compensation around $75,000 and stiffed her on it in various ways turned into "She had a written contract to be paid $1000/monthly with all expenses covered, which we estimated would add up to around $70,000."

The "all expenses" they're talking about are work-related travel expenses. I, too, would be extremely mad if an employer promised me $75k / year in compensation, $10k of which would be cash-based, and then tried to say that costs incurred by me doing my job were considered to be my "compensation".

Honestly most of what I take away from this is that nobody involved seems to have much of an idea of how things are done in professional settings, and also there seems to be an attitude of "the precautions that normal businesses take are costly and unnecessary since we are all smart people who want to help the world". Which, if that's the way they want to swing, then fine, but I think it is worth setting those expectations upfront. And also I'd strongly recommend that anyone fresh out of college who has never had a normal job should avoid working for an EA organization like Nonlinear until they've seen how things work in purely transactional jobs.

Also it seems to me based on how much interest there was in that infighting that effective altruists are starved for drama.

They do support the mainstream perspective, they are just defending the mainstream narrative with a non-mainstream framing. It's called a Motte and Bailey

The fact that someone opposes your particular perspective does not mean that they support every argument ever made by anyone else who opposes your perspective. I do not doubt that there are places where the 10th-grade-history-class version of the Holocaust is inaccurate. Nobody here, to the best of my knowledge, has said that they do think that the 10th-grade-history-class version is 100% accurate.

Let's just pause a moment to appreciate all the ink that's been spilled so far, with not one person raising any sort of physical or documentary evidence for the murder of three million people in gas chambers. It speaks volumes that they dance around the central myth of the entire Holocaust narrative.

I guess that would follow if your opinion is "the Holocaust was bad because the Nazis killed people using gas chambers", but I don't know any real people who believe that. To me, the genocide is the central thing about the Holocaust. I do not care whether the specific "there were exactly 6 death camps with gas chambers, and it was in those gas chambers that the majority of murders happened" claim is accurate, I do care whether the "about 12 million people were murdered" claim is accurate.

In terms of concrete evidence, I expect that you have more in-depth knowledge on any part of this topic that you are trying to steer the conversation to, so I expect that if I allow you to guide where the conversation goes, I will indeed see something that looks like "oh look the conventional narrative is inaccurate". However, I expect that the conventional narrative that the Nazis rounded up Jews and other undesirables and then shipped them to concentration camps where they were killed in large numbers, coming out to about 12 million total, is broadly correct. So I expect that if I pick a random link on Wikipedia and then do a deep dive on it, it will turn out that the assertion is basically accurate.

So let's do that. Starting at the Wikipedia page for extermination camps, choosing a link at random on that page leads me to the page on the city of Łódź (right between the links for "chelmno" and "gas vans" -- I'm pretty sure those links each lead somewhere equally damning, but my goal here was to get somewhere that is both damning and also unfamiliar territory to someone who knows a lot about a few very narrow, very particularly selected topics). Skipping to the section on "Second World War (1939-1945)", Wikipedia has this to say:

The Nazi authorities established the Łódź Ghetto (Ghetto Litzmannstadt) in the city and populated it with more than 200,000 Jews from the region, who were systematically sent to German extermination camps.[72] It was the second-largest ghetto in occupied Europe,[73] and the last major ghetto to be liquidated, in August 1944.[74] The Polish resistance movement (Żegota) operated in the city and aided the Jewish people throughout its existence.[75] However, only 877 Jews were still alive by 1945.[76] Of the 223,000 Jews in Łódź before the invasion, 10,000 survived the Holocaust in other places.[77] The Germans also created camps for non-Jews, including the Romani people deported from abroad, who were ultimately murdered at Chełmno,[78] as well as a penal forced labour camp,[79] four transit camps for Poles expelled from the city and region, and a racial research camp.[80]

So I see a number of factual claims here. I will list them off -- let me know which, if any, you think would be wrong or misleading if I dug into them further.

  1. The city of Łódź contained over 200,000 Jews before the Nazi invasion.

  2. The city of Łódź contained fewer than 1,000 Jews by 1945.

  3. Fewer than 10,000 Jews from the city of Łódź were alive anywhere after the Holocaust.

  4. In August 1944, most of the 70,000 Jews remaining in the Łódź Ghetto were sent to Auschwitz-Birkenau. Considering the "less than 10,000 total survivors" above, most of these people died within the following 6 months.

Additional evidence, from clicking through to the Wikipedia page for the Łódź Ghetto:

  1. 55,000 people were transported from Łódź to Chełmno.

And, after looking at maps of Chełmno:

  1. Chelmno did not have anywhere near enough buildings to contain 55,000 people, no matter how crowded and unsanitary the conditions.

Do you think any of this is substantially inaccurate? Because it sounds about like what I expected going in (besides being somehow even worse than I imagined in terms of conditions within the Łódź Ghetto).

estimated that between 10-30% of those expelled, about 2 million, died. Many others were deported to Soviet labor camps where the mortality rate (according to official statistics) was about 35%. Nobody would call the expulsion of the Germans an extermination plan, they would probably celebrate it as a reprisal.

The Genocide, concentration camps, and slave labour section of the World War II page on Wikipedia has one paragraph for the Nazi genocide, immediately followed by a paragraph describing the Soviet gulags, with associated links. "The Soviets committed atrocities against the Germans during WWII" is not a fringe position. If you find yourself frequently interacting with people who celebrate those atrocities, consider that that might be an opinion specific to the people you interact with.

For each of the following, I think there's a nontrivial chance (call it 10% or more) that that crackpot theory is true.

  • The NSA has known about using language models to generate text embeddings (or some similarly powerful form of search based on semantic meaning rather than text patterns) for at least 15 years. This is why they needed absolutely massive amounts of compute, and not just data storage, for their Saratoga Springs data center way back when.
  • The Omicron variant of covid was intentionally developed (by serial passaging through lab mice) as a much more contagious, much less deadly variant that could quickly provide cross immunity against the more deadly variants.
  • Unelected leaders of some US agencies sometimes lie under oath to Congress.
  • Israel has at least one satellite with undisclosed purpose and capabilities that uses free space point-to-point optical communication. If true, that means that the Jews have secret space lasers.

What's wrong with "ban until they're 18"?

The canonical answer is "it works worse with age". I don't know how accurate that is, though it certainly seems plausible to me that delays will result in worse patient outcomes among those who do transition.

What precedent is it setting?

I'm not aware of any medical interventions that are banned even when the patient wants them, and their guardian approves it, and a doctor recommends it, but which are allowed once the patient reaches the age of majority.

It's not like we're living in an ancap utopia, the establishment even cracked down on the use of prescribed ivermectin.

Yes, and that was bad. I would prefer fewer things like that, not more things like that.

A Maximizer maximizes

I have seen no evidence that explicit maximizers do particularly well in real-world environments. Hell, even in very simple game environments, we find that bags of learned heuristics outperform explicit simulation and tree search over future states in all but the very simplest of cases.

I think utility maximizers are probably anti-natural. Have you considered taking the reward-is-not-the-optimization-target pill?

Peak woke would be when people who push woke too far actually get punished.

I think that'll be a pretty strong signal that we are past peak woke. Peak woke is not the equilibrium, it is the point where the trend crosses from "things get slightly more woke over time" to "things get slightly less woke over time", and is observable as "I can't tell if the level of wokeness is increasing or decreasing in aggregate".

Also I think peak woke will only be callable in retrospect.

I think it's more of the "I feel bad for you" / "I don't think about you at all" situation.

I do see where you're coming from in terms of instrumental convergence. Mainly I'm pointing that out because I spent quite a few years convinced of something along the lines of

  1. An explicit expected utility maximizer will eventually end up controlling the light cone
  2. Almost none of the utility functions it might have would be maximized in a universe that still contains humans
  3. Therefore an unaligned AI will probably kill everyone while maximizing some strange alien objective

And it took me quite a while to notice that the foundation of my belief was built on an argument that looks like

  1. In the limit, almost any imaginable utility function is not maximized by anything we would recognize as good.
  2. Any agent that can meaningfully be said to have goals at all will find that it needs resources to accomplish those goals
  3. Any agent that is trying to obtain resources will behave in a way that can be explained by it having a utility function that involves obtaining those resources.
  4. By 2 and 3, an agent that has any sort of goal will become a coherent utility maximizer as it gets more powerful. By 1, this will not end well.

And thinking this way kinda fucked me up for like 7 or 8 years. And then I spent some time doing mechinterp, and noticed that "maximize expected utility" looks nothing like what high-powered systems are doing, and that this was true even in places you would really expect to see EU maximizers (e.g. chess and go). Nor does it seem to be how humans operate.

And then I noticed that step 4 of that reasoning chain doesn't even follow from step 3, because "there exists some utility function that is consistent with the past behavior of the system" is not the same thing as "the system is actually trying to maximize that utility function".

We could still end up with deception and power seeking in AI systems, and if those systems are powerful enough that would still be bad. But I think the model where that is necessarily what we end up with, and where we get no warning of that because systems will only behave deceptively once they know they'll succeed (the "sharp left turn") is a model that sounds compelling until you try to obtain a gears-level understanding, and then it turns out to be based on using ambiguous terms in two ways and swapping between meanings.

Use words is for take my idea, put in your head. If idea in your head, success. Why use many "proper" word when few "wrong" word do trick?

I can't prove it but assuming that other minds exist sure does seem to produce better advance predictions of my experiences. Which is the core of empiricism.

I read the same doc you did, and like. I get that "Chloe" did in fact sign that contract, and that the written contract is what matters in the end. My point is not that Nonlinear did something illegal, but... did we both read the same transcript? Because that transcript reads to me like "come on, you should totally draw art for my product, I can only pay 20% of market rates but I can get you lots of exposure, and you can come to my house parties and meet all the cool people, this will be great for your career".

I don't know how much of it is that Kat's writing style pattern matches really strongly to a particular shitty and manipulative boss I very briefly worked for right after college. E.g. stuff like

As best as I can tell, she got into this cognitive loop of thinking we didn’t value her. Her mind kept looking for evidence that we thought she was “low value”, which you can always find if you’re looking for it. Her depressed mind did classic filtering of all positive information and focused on all of the negative things. She ignored all of my gratitude for her work. In fact, she interpreted it as me only appreciating her for her assistant work, confirming that I thought she was a “low value assistant”. (I did also thank her all the time for her ops work too, by the way. I’m just an extremely appreciative boss/person.)

just does not fill me with warm fuzzy feelings about someone's ability to entertain the hypothesis that their own behavior could possibly be a problem. Again, I am probably not terribly impartial here - I have no horse in this particular race, but I once had one in a similar race.

I think this is referring to this sequence

ymeskhout Trump got hit by two gag orders from two different judges [...] So with that out of the way, how does it apply to Trump? Judge Chutkan's order restricts him from making statements that "target" the prosecutor, court staff, and "reasonably foreseeable witnesses or the substance of their testimony". [...] Discrediting witnesses is harder to draw a clean line on, because again there's a gradient between discrediting and intimidating. I think Trump should have the absolute and unrestricted right to discuss any of his charges and discredit any evidence and witnesses against him.

guesswho I'm not sure why it's important to discredit a witness in the public eye, instead of at trial where you're allowed to say all those things directly to the judge and jury. Especially in light of the negative externalities to the system itself, ie if we allow defendants to make witnesses and judges and prosecutors and jurors lives a living nightmare right up until the line of 'definitely undeniably direct tampering', then that sets a precedent where no sane person wants to fill any of those roles, and the process of justice is impeded. [...]

sliders1234 [...] Girl who you had a drunken hook up texted you the next day saying how much fun she had with you last night. You ignore her text. 2 weeks later she claims rape. It’s in the newspaper. Suddenly your name is tarnished. Everyone in town now views your condo building as feeding money into your pocket. Sales slump. Now do you see why this hypothetical real estate developer would have a reason to hit back in the media? He’s being significantly punished (maybe leading to bankruptcy) without ever being found guilty in the court of law. Of course Trump has motivations to hit hard against the judge and prosecuting attorney. The more partisan they appear the more it makes him look better and get the marginal voter.

guesswho [...] I guess what I would say is that 1. that sees like a really narrow case [...] 2. I would hope a judge in that case wouldn't issue a blanket gag order [...] 3. yeah, there may have to be some trade-offs between corner-cases like this and making the system work in the median case. [...] I'm open to the idea that we should reform the system to make it less damaging to defendants who have not been convicted yet, but if we are deciding to care about that then these super-rich and powerful guys worrying about their reputations are way down on my list under a lot of other defendants who need the help more urgently.

That technically counts as "considering it fair that a defendant can be bound not to disparage a witness against them in a sexual assault case, even if the defendant is a politician and the rape accusation is false". But if that's the exchange @FCfromSSC is talking about it seems like a massive stretch to describe it that way.

Yudkowsky made a big fuss about how fragile human values are and how hard it'll be for us to make AI both understand and care about them, but everything I know about LLMs suggest that's not an issue in practise.

Ah, yeah. I spent a while being convinced of this, and was worried you had as well because it was a pretty common doom spiral to get caught up in.

So it's not that the majority of concern these days is an AI holding misaligned goals, but rather enacting the goals of misaligned humans, not that I put a negligible portion of my probability mass in the former.

Yeah this is a legit threat model but I think the ways to mitigate the "misuse" threat model bear effectively no resemblance to the ways to mitigate the "utility maximizer does its thing and everything humans care about is lost because Goodhart". Specifically I think for misuse you care about the particular ways a model might be misused, and your mitigation strategy should be tailored to that (which looks more like "sequence all nucleic acids coming through the wastewater stream and do anomaly detection" and less like "do a bunch of math about agent foundations").

If you can dumb it down for me, what makes you say so? My vague understanding is that things like AlphaGo do compare and contrast the expected values of different board states and try to find the one with the maximum probability of victory based off whatever heuristics it knows works best. Is there a better way of conceptualising things?

Yeah, this is what I thought for a long time as well, and it took actually messing about with ML models to realize that it wasn't quite right (because it is almost right).

So AlphaGo has three relevant components for this

  1. A value network, which says, for any position, how likely that position is to lead to a win (as a probability between 0 and 1)
  2. A policy network, which says, for any position, what the probability that each possible move will be chosen as the next move. Basically, it encodes heuristics of the form "these are the normal things to do in these situations".
  3. The Monte Carlo Tree Search (MCTS) wrapper of the policy and value networks.

A system composed purely of the value network and MCTS would be a pure expected utility (EU) maximizer. It turns out, however, that the addition of the policy network drastically improves performance. I would have expected that "just use the value network for every legal move and pick the top few to continue examining with MCTS" would have worked, without needing a separate policy network, but apparently not.

This was a super interesting result. The policy network is an adaptation-executor, rather than a utility maximizer. So what this means is that, as it turns out, stapling an adaptation executor to your utility maximizer can give higher utility results! Even in toy domains with no hidden state!
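To make the blend concrete: inside the tree search, AlphaGo-family systems score candidate moves with a PUCT-style rule, where the value network supplies the value estimate Q and the policy network supplies the prior P, whose influence decays as a move accumulates visits. A toy sketch (the constant and the numbers are illustrative, not AlphaGo's actual values):

```javascript
// PUCT-style score: exploit the value estimate q, but give an
// exploration bonus proportional to the policy prior and inversely
// proportional to how often this move has already been visited.
function puctScore(q, prior, parentVisits, childVisits, c = 1.5) {
  return q + c * prior * Math.sqrt(parentVisits) / (1 + childVisits);
}

// Pick the child move with the highest PUCT score.
function selectMove(children, c = 1.5) {
  const total = children.reduce((sum, ch) => sum + ch.visits, 0);
  let best = null;
  for (const ch of children) {
    const score = puctScore(ch.q, ch.prior, total, ch.visits, c);
    if (best === null || score > best.score) best = { move: ch.move, score };
  }
  return best.move;
}

// A move with a slightly worse value estimate but a strong policy
// prior and few visits beats a well-explored, slightly better move:
const moves = [
  { move: "a", q: 0.55, prior: 0.05, visits: 40 },
  { move: "b", q: 0.50, prior: 0.60, visits: 2 },
];
```

The heuristic "these are the normal things to do here" thus directly steers which branches the utility-ish machinery ever bothers to evaluate.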

Which brings me to

To name drop something I barely understand, are you pointing at the Von Neumann-Morgenstern theorem, and that you're claiming that just because there's a way to represent all the past actions of a consistent agent as being described by an implicit utility function, that does not necessarily mean that they "actually" have that utility function and, more importantly, that we can model their future actions using that utility function?

Yeah, you have the gist of it. And additionally, I expect it's just factually false that all agents will be rewarded for becoming more coherent / EU-maximizer-ish (in the "patterns chiseled into their cognition" meaning of the term "rewarded").

Again, no real bearing on misuse or competition threat models - those are still fully in play. But I think "do what I mean" is fully achievable to within the limits of the abilities of the systems we build, and the "sharp left turn" is fake.

"I support everyone else following principles that benefit me, but I don't want to follow those principles myself because they don't benefit me" is like the definition of hypocrisy.

Basically, we'd expect that differences in culture, diet, and SES might explain 100% of any observed differences in any particular trait

I don't think we would expect that. If there are other factors, including randomness, which contribute at all, the sum of the effect of the known sources of variance will be less than the observed variance.

For reddit, the answer is "looking at who the mods are, and what their political alignment seems to be".

It's commonly accepted on reddit that the same handful of moderators moderates most of the large subs. However, I did realize I haven't verified that myself, so I hacked together a quick script to do so.

For reference, reddit proudly lists what their top communities are, and how many subscribers each one has. If you navigate to that page, you can then go through and look, for each community, at who the moderators for that community are. For example, for /r/funny, the url would be /r/funny/about/moderators, or, if you want to scrape the data, /r/funny/about/moderators.json.

So by navigating to the top communities page and then running this janky little snippet in the javascript console, you can reproduce these results.
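The original snippet isn't reproduced here, but a reconstruction along these lines should work (the response shape `data.children[].name` is from memory of reddit's JSON API, so treat the field names as assumptions):

```javascript
// Hedged reconstruction, not the author's original snippet: for each
// subreddit, fetch its /about/moderators.json listing and tally how
// many of the subs each moderator name appears in.
async function modCounts(subreddits) {
  const counts = {};
  for (const sub of subreddits) {
    const res = await fetch(`https://www.reddit.com/r/${sub}/about/moderators.json`);
    const data = await res.json();
    for (const mod of data.data.children) {
      counts[mod.name] = (counts[mod.name] || 0) + 1;
    }
  }
  return counts;
}

// Pure helper, testable without network access: top n mods by count.
function topMods(counts, n = 10) {
  return Object.entries(counts)
    .sort((a, b) => b[1] - a[1])
    .slice(0, n)
    .map(([name]) => name);
}
```

Run `modCounts` over the subreddit names scraped from the top-communities page, then feed the result to `topMods` to get the most prolific moderators.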

Looking at the top 10 (non-bot) mods by number of subreddits modded, I see:

So that's 2 / 10 most visible mods that moderate extensively on the basis of their own personal politics.

That's actually not nearly as bad as I thought. Interesting.

I guess the problem with reddit is the redditors.

Ultimately, the credibility of that particular piece of testimony does hinge on the question of whether it is possible for a meat-powered fire to generate enough heat to self-sustain once it gets started.

But let's actually do the math ourselves, instead of just parroting the arguments of ChatGPT, a language model that infamously has trouble telling you which of two numbers is larger unless you tell it to work through the problem step by step.

Enter the bomb calorimeter. It is a reasonably accurate way of measuring the energy content of various substances. Measurements using bomb calorimeters suggest that fat contains 38 - 39 kJ / g, proteins 15 - 18 kJ / g, and carbohydrates 22 - 25 kJ / g.

Humans are composed of approximately 62% water, 16% protein, 16% fat, 1% carbohydrates, and 6% other stuff (mostly minerals). For the cremation story to be plausible, let's say that the water would need to be raised to 100ºC (4.2 J / g / ºC) and then boiled (2260 J / g), and the inorganic compounds (call their specific heat also 4 J / g / ºC -- it's probably closer to 1, which is the specific heat of calcium carbonate, but as we'll see this doesn't really make much difference) raised to (let's say) 500ºC.

So for a 50 kg human, that's

  • 31 kg water - 12 MJ to raise to 100ºC, 70 MJ to actually boil

  • 8 kg protein - 132 MJ released from burning under ideal conditions

  • 8 kg fat - 308 MJ released from burning under ideal conditions

  • 500g carbohydrates - 12 MJ released from burning under ideal conditions

  • 3 kg other - 6 MJ to raise to 500ºC.

So that's about 450 MJ released by burning, of which about 90 MJ goes towards heating stuff and boiling water. That sure looks energy positive to me.
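Those bullets as explicit arithmetic (a sketch: I've assumed a starting temperature around 10ºC, which is roughly what reproduces the 12 MJ figure, and used midpoints of the bomb-calorimeter ranges quoted above):

```javascript
const kg = 1000; // grams per kilogram

// Energy absorbed (heating and boiling), in joules:
const heatWater = 31 * kg * 4.2 * 90;   // 31 kg water, ~10 -> 100 C
const boilWater = 31 * kg * 2260;       // latent heat of vaporization
const heatOther = 3 * kg * 4 * 490;     // 3 kg minerals, ~10 -> 500 C
const absorbed = heatWater + boilWater + heatOther;

// Energy released by combustion, in joules:
const protein = 8 * kg * 16500;    // ~16.5 kJ/g
const fat     = 8 * kg * 38500;    // ~38.5 kJ/g
const carbs   = 0.5 * kg * 23500;  // ~23.5 kJ/g
const released = protein + fat + carbs;

const MJ = 1e6;
// released comes out near 452 MJ, absorbed near 88 MJ
```

Even if the combustion is far from ideal, there's a factor-of-five margin between energy released and energy consumed.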

Sanity check -- a tea light is a 10g blob of paraffin wax, which has a very similar energy density to fat. So a tea light should release about 400 kJ of energy when burned, which means that a tea light should contain enough energy to boil off about 150 mL of water, or to raise a bit over a liter of water from room temperature to boiling, if all of the energy is absorbed in the water.

And, in fact, it is possible to boil water using tea lights. A tea light takes about 4 hours to burn fully. That video shows 17 tea lights burning for 8.5 minutes, which should release about 60% as much energy as is contained in a single tea light. It looks like that brought about 400ml of water to a boil, so the sanity check does in fact check out.
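The tea light check as arithmetic, assuming ~42 kJ/g for paraffin and water starting at 20ºC:

```javascript
// One 10 g paraffin tea light vs. boiling off 150 mL of water.
const teaLightJ = 10 * 42000;                  // ~420 kJ available
const boilOff150mlJ = 150 * (4.2 * 80 + 2260); // heat 20 -> 100 C, then vaporize
// ~420 kJ available vs ~389 kJ required, so the tea light wins
```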

I really don't think that random british dude who is showing you how to use candles to boil water during a power failure is in on a global conspiracy to cover up a lack of genocide, but, just in case you think he is, this is an experiment you can try at home with your own materials.

Edit: clarity

I think that would be true if "people pushing woke innovations get punished" was the main way that woke culture lost traction. However, I think that the change is driven much more strongly by whether people on the margin view these new woke innovations as credible or whether they nod while making snide comments to their trusted friends.

I don't think woke culture dies by a coordinated counterculture pushing back on its excesses. I think woke culture dies by becoming uncool, a sign that you are not keeping up with the modern times.

I actually suspect that the beginning of the end for woke culture was the moment that big banks started making floats for pride parades. Nothing is less cool than a big bank trying to show how cool and with the times they are.

For the record, I think that peak woke was probably about 2 years ago, though the exact timing of the peak depends on which exact part of "woke" you're talking about. Concretely:

  • I think the idea of "colorblindness" peaked a couple decades ago

  • I think the idea of "cultural appropriation" probably peaked in 2018ish

  • Cultural battles over "trans rights" are probably either still on the upswing or near peak

  • I expect that there will be some new "deviant" thing that is currently outside the overton window (e.g. polyamory / furries / etc) that will be taken up by the successors of woke ideology.

If all you have to offer is the value of your stuff why shouldn't a country just take your stuff?

Because if a country does that, people will predictably stop producing stuff for the country to take, and also will leave the country if they can.

Unless you mean "some of your stuff, but not enough that you're strongly incentivized to leave or stop producing stuff", in which case they're called "taxes".

a backup plan to "go back to grinding at poker." ... Apparently it works

It "works" but:

  • The pay is bad. You will be making something on the order of 10-20% of what an actual professional with similar skill levels makes, and on top of that you will experience massive swings in your net worth even if you do everything right. The rule of thumb is that you can calculate your maximum expected hourly earnings by considering the largest sustained loss where you would continue playing, and dividing that by 1000. So if you would keep playing through a $20,000 loss, that means you can expect to earn $20 / hour if your play is impeccable.
  • The competition is brutal. Poker serves as sort of a "job of last resort" to people who, for whatever reason, cannot function in a "real job". This may be because they lack executive function, or because they don't do well in situations where the rules are ambiguous, or because they can't stand the idea of working for someone else but also can't or won't start their own business. The things that all these groups have in common, though, are that they're generally frighteningly intelligent, that they're functional enough to do deliberate practice (those who don't lose their bankroll and stop playing), and that they've generally been at this for years. At 1/2 you can expect to make about $10 / hour, and it goes up from there in a way that is slower than linear as the stakes increase, because the players get better. At 50/100, an amazing player with a $500k bankroll might make about $50 / hour. I do hear that this stops being true at extremely high stakes, like $4000/$8000, where compulsive gamblers become more frequent again (relative to 50/100; the players are still far better than you'd see at a 1/2 or even a 10/20 table). But if you want to play 4000/8000 games you need a bankroll in the ballpark of $10-20M, and also there aren't that many such games. For reference, I capped out playing 2/5 NL, where I made an average of about $12 / hour. Every time I tried to move up to 5/10 I got eaten alive.
  • The hours are weird. Say goodbye to leisure time in your evenings, on weekends, and on holidays. Expect pretty regular all-nighters, because most of your profit will come from those times when you manage to find a good table and just extract money from it for 16 hours straight.
  • It's bad for your mental health. When I was getting started, I imagined that it would be a lifestyle of pitting my mind against others, of earning money by being objectively better at poker than the other professional players. It is in fact nothing like that at all. Your money does not come from other professional players, and in fact if there are more than about 3 professional players at a table of 10, you should leave and find another table, because even if you are quite good, the professional players just don't make frequent enough or large enough mistakes that exploiting their mistakes will make you much money. No, you make your money by identifying which tables contain (in the best case) drunk tourists or (in a more typical case) compulsive gamblers pissing away money that they managed to beg, borrow, or steal in a desperate attempt to "make back their losses". It is absolutely soul sucking to realize that your lifestyle is funded by exploiting gambling addicts, and that if you find yourself at a table without any people destroying their lives it means you're at the wrong table.
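The bankroll rule of thumb from the first bullet can be written out as a one-liner. The divide-by-1000 factor is the heuristic as stated above, not a derived constant:

```python
# Rule of thumb from above: your maximum expected hourly earnings are
# roughly the largest sustained loss you would play through, divided
# by 1000. The divisor is a heuristic, not a derived quantity.

def max_expected_hourly(tolerable_loss: float) -> float:
    """Rough ceiling on hourly poker earnings given swing tolerance."""
    return tolerable_loss / 1000

# The example from the text: playing through a $20,000 downswing
print(max_expected_hourly(20_000))  # -> 20.0 dollars per hour
```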

In summary, -2/10 do not recommend.

There are a few things I imagine you could be saying here.

  1. Determining what you expect your future experiences to be by taking your past distribution over world models (the "prior") and your observations and using something like Bayes to integrate them is basically the correct approach. However, Kolmogorov complexity is not a practical measure to use for your prior. You should use some other prior instead.
  2. Bayesian logic is very pretty math, but it is not practical even if you have a good prior. You would get better results by using some other statistical method to refine your world model.
  3. Statistics-flavored approaches are overrated and you should use [pure reason / intuition / astrology / copying the world model of successful people / something else] to build your world model.
  4. World models aren't useful. You should instead learn rules for what to do in various situations that don't necessarily have anything to do with what you expect the results of your actions to be.
  5. All of these alternatives are missing all the things you find salient and focusing on weird pedantic nerd shit. The actual thing you find salient is X and you wish I and people like me would engage with it. (also, what is X? I find that this dynamic tends to lead to the most fascinating conversations once both people notice it's happening but they'll talk past each other until they do notice).
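For concreteness, the update described in option 1 can be sketched with toy numbers. The prior here is weighted toward the simpler model, standing in very loosely for a Kolmogorov-style simplicity prior; all the numbers are made up for illustration:

```python
# Minimal sketch of a Bayesian update over two candidate world models.
# Priors favor the simpler model (a stand-in for a simplicity prior);
# likelihoods say how well each model predicted some observation.
# All numbers are illustrative.

models = {
    "simple":  {"prior": 0.7, "likelihood": 0.2},  # P(obs | model)
    "complex": {"prior": 0.3, "likelihood": 0.9},
}

# Bayes: posterior is proportional to prior * likelihood, normalized
unnormalized = {m: v["prior"] * v["likelihood"] for m, v in models.items()}
total = sum(unnormalized.values())
posterior = {m: w / total for m, w in unnormalized.items()}

print(posterior)
```

After one observation that the complex model predicted much better, it overtakes the simple one despite starting with less than half the prior mass, which is the basic mechanic being debated regardless of which prior you pick.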

I am guessing it's either 2 or 5, but my response to you will vary a lot based on which it is and the details of your viewpoint.