
faul_sname

Fuck around once, find out once. Do it again, now it's science.

1 follower   follows 1 user  
joined 2022 September 06 20:44:12 UTC

User ID: 884


a backup plan to "go back to grinding at poker." ... Apparently it works

It "works" but:

  • The pay is bad. You will be making something on the order of 10-20% of what an actual professional with similar skill levels makes, and on top of that you will experience massive swings in your net worth even if you do everything right. The rule of thumb is that you can calculate your maximum expected hourly earnings by considering the largest sustained loss where you would continue playing, and dividing that by 1000. So if you would keep playing through a $20,000 loss, that means you can expect to earn $20 / hour if your play is impeccable. (A quick sketch of this arithmetic follows the list.)
  • The competition is brutal. Poker serves as sort of a "job of last resort" to people who, for whatever reason, cannot function in a "real job". This may be because they lack executive function, or because they don't do well in situations where the rules are ambiguous, or because they can't stand the idea of working for someone else but also can't or won't start their own business. What all these groups have in common, though, is that they're generally frighteningly intelligent, that they're functional enough to do deliberate practice (at least, the ones who don't lose their bankroll and stop playing), and that they've generally been at this for years. At 1/2 you can expect to make about $10 / hour, and it goes up from there in a way that is slower than linear as the stakes increase, because the players get better. At 50/100, an amazing player with a $500k bankroll might make about $50 / hour. I do hear that this stops being true at extremely high stakes, like $4000/$8000, where compulsive gamblers become more frequent again (relative to 50/100, the players are still far better than you'd see at a 1/2 or even a 10/20 table). But if you want to play 4000/8000 games you need a bankroll in the ballpark of $10-20M, and also there aren't that many such games. For reference, I capped out playing 2/5 NL, where I made an average of about $12 / hour. Every time I tried to move up to 5/10 I got eaten alive.
  • The hours are weird. Say goodbye to leisure time in your evenings, weekends, and holidays. Expect pretty regular all-nighters, because most of your profit will come from those times when you manage to find a good table and just extract money from it for 16 hours straight.
  • It's bad for your mental health. When I was getting started, I imagined that it would be a lifestyle of pitting my mind against others, of earning money by being objectively better at poker than the other professional players. It is in fact nothing like that at all. Your money does not come from other professional players, and in fact if there are more than about 3 professional players at a table of 10, you should leave and find another table, because even if you are quite good, the professional players just don't make frequent enough or large enough mistakes that exploiting their mistakes will make you much money. No, you make your money by identifying which tables contain (in the best case) drunk tourists or (in a more typical case) compulsive gamblers pissing away money that they managed to beg, borrow, or steal in a desperate attempt to "make back their losses". It is absolutely soul sucking to realize that your lifestyle is funded by exploiting gambling addicts, and that if you find yourself at a table without any people destroying their lives it means you're at the wrong table.
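
To put the rule of thumb from the first bullet into code (a sketch only; the 1000 divisor is a rough heuristic from my own experience, not a derived constant):

    // Bankroll rule of thumb: hourly EV ≈ (largest loss you'd keep playing through) / 1000
    function expectedHourlyRate(maxTolerableLoss) {
      return maxTolerableLoss / 1000;
    }

    console.log(expectedHourlyRate(20000)); // 20 ($/hour), matching the $20,000 example above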

In summary, -2/10 do not recommend.

For reddit, the answer is "looking at who the mods are, and what their political alignment seems to be".

It's commonly accepted on reddit that the same handful of moderators moderates most of the large subs. However, I realized I hadn't verified that myself, so I hacked together a quick script to do so.

For reference, reddit proudly lists what their top communities are, and how many subscribers each one has. If you navigate to that page, you can then go through and look, for each community, at who the moderators for that community are. For example, for /r/funny, the url would be /r/funny/about/moderators, or, if you want to scrape the data, /r/funny/about/moderators.json.

So by navigating to the top communities page and then running this janky little snippet in the javascript console, you can reproduce these results.
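
Something along these lines should do it (this is a rough sketch rather than the exact snippet I ran; it assumes the open page contains subreddit links of the form /r/name/ and that /r/name/about/moderators.json is reachable from the console without hitting rate limits):

    // Collect subreddit names from the links on the current page
    const subs = [...new Set(
      [...document.querySelectorAll('a[href^="/r/"]')]
        .map(a => a.getAttribute('href').split('/')[2])
        .filter(Boolean)
    )];

    // Count how many of those subreddits each account moderates
    const modCounts = {};
    for (const sub of subs) {
      const resp = await fetch(`/r/${sub}/about/moderators.json`);
      if (!resp.ok) continue; // skip private/banned subs
      for (const mod of (await resp.json()).data.children) {
        modCounts[mod.name] = (modCounts[mod.name] || 0) + 1;
      }
      await new Promise(r => setTimeout(r, 1000)); // don't hammer the API
    }

    // Top 10 accounts by number of subreddits moderated
    console.table(
      Object.entries(modCounts).sort((a, b) => b[1] - a[1]).slice(0, 10)
    );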

Looking at the top 10 (non-bot) mods by number of subreddits modded, I see:

So that's 2 of the 10 most visible mods that moderate extensively on the basis of their own personal politics.

That's actually not nearly as bad as I thought. Interesting.

I guess the problem with reddit is the redditors.

You can totally say what's wrong with this passage. Translating from Hegelian to English, Hegel is saying

Immediate perception is our direct, unreflective perceptions of the world. By contrast, intellectual perception is a higher form of knowledge that involves recognizing the unity and interconnectedness of self-consciousness and the fundamental essence of reality. Through intellectual perception, we can understand that the absolute meaning (content) of something is the same as its absolute structure or appearance (form).

Self-consciousness can be understood in three stages:

1: As a negative relation: Someone who is self-conscious can identify the part of the world that is not themselves as "other," and then define their "self" as everything that is not "other."

2: As a positive relation: Someone who is self-conscious can recognize that they exist in relation to the outside world and understand what that relationship is.

3: As a synthesis of these positive and negative relations, called "intellectual perception": Someone who is self-conscious can see that their thoughts and self-identity are both connected to and separate from the outside world. This synthesis allows them to recognize the unity of content and form, and achieve a deeper understanding of reality.

True intellectual perception goes beyond immediate knowledge derived purely from thoughts and sensory experience. It is a type of absolute knowledge.

A possible critique might look like

  1. Someone who takes a heroic dose of LSD can experience ego death. Such a person experiences a merging of their self-identity with the outside world. This proves that their "absolute knowledge" of their personal identity is contingent on their sensory experiences, and as such is not absolute knowledge.

  2. Also this writing style frankly sucks. Use simple words. Use paragraphs. If you find yourself using pronouns like "it" and "that" to refer to three or more different things in a single sentence, you should replace those pronouns with their referents.

The significant intersection between programmers (and particularly the functional programming side, e.g. Haskell/OCaml/Rust) and trans people (and furries, don't forget the furries) has been noted numerous times. Examples: 1, 2/3, 4, 5/6/7

I personally have, several times, had the experience of "I start learning some math-heavy programming tool, I go through the tutorials but get stuck on a new or undocumented use case, I discover that the place to go to talk to people who know things is the discord server for the project, and I find a significant fraction (think "half") of the helpful and active people in the project are (vocally) trans." I've also observed the same thing in IRL meetups for those kinds of topics.

So yeah, pretty sure you're observing a real phenomenon. I, too, would love to know what's going on here. I've seen the explanation of "autism" floated, but if that were the case I'd expect there to be strong trans representation among railfans (people who really really like trains), and as far as I know that is not the case? I do suspect "a surprising fraction of network engineers are furries" is the same sort of phenomenon.

provide vouchers to homeless people and to require hotels to report vacancies daily and accept vouchers if they have room

New startup idea: uber for staying in hotel rooms, where hotels pay background-checked people to stay in hotel rooms to prevent them from being vacant.

I think it's worse than that. I think that if you have 99 infographics that talk about the values of western culture, and 1 infographic that talks about the values of white culture, the one about white culture will go viral and the rest will be ignored.

If there was a Coalition of Activists, and the Coalition of Activists had decided that the best way to achieve their goals was to sow racial grievances, then it would in principle be possible to convince the leadership of the Coalition of Activists not to do that. If it's "the most divisive stuff goes viral" though, you would have to convince every single activist to refrain from creating divisive stuff. There would be no single person, or small group of people, you could reason into making it stop.

I think we live in the latter world, and "can then be used to justify more activism" is attributing far more agency than actually exists to the structures that cause this sort of stuff to enter the discourse.

We are discussing the claim that the Germans murdered 6 million Jews.

I agree. Let's discuss that claim.

So Mattogno found documentation for over 11,000 prisoners from Lodz, including women and children, which were transferred from the "extermination camp" Auschwitz to the concentration camp Stutthof:

The two brothers Michael Salomonowicz... and Josef traveled with their mother Dora Salomonowicz, born 28 August 1904, number 1652 on the transport list, registered under number 83619 at Stutthof

All three of those Jews survived the war, so it's interesting to note that all three names appear in the Yad Vashem "Central Database of Shoah Victims' Names"!

Yes. The Yad Vashem "Central Database of Shoah Victims' Names" includes both people who survived and people who died. It also includes whether those people survived or died. According to said database

  • Dora Salomonowicz is listed by Yad Vashem as having survived.

  • Michael Salomonowicz is listed by Yad Vashem as having survived.

  • Josef Salomonowicz is listed by Yad Vashem as having survived.

"Three people who were listed in the database (as survivors) actually survived" is not the slam dunk you seem to think it is.

Here is a list of all of the people who were on the same transport (Transport E, Train Da 20 from Praha) - there are 1020 identified names on that list. You can further filter that list by whether they survived (47 people, including your 3 examples) or were murdered (973 people).

Do you think those 973 people who were listed as murdered were fabricated? Or maybe they survived, but were listed as deceased? I personally think that most of the 973 people who are listed as murdered were actually real people, and really did die. Still, there is virtue in actually looking at the world as it is, in making your beliefs pay rent in anticipated experiences, so I chose 5 random numbers between 1 and 973. Those numbers are 258, 817, 811, 273, and 153, and correspond to the 258th, 817th, etc person in that list of 973 people in alphabetical order.

By contrast if you look at one of the survivors from the same transport, you can see that he shows up in several genealogical databases, and has a number of living descendants.

We do not live in the fucking dark ages. Genealogical records exist. Those people who survived went on to live their lives, to marry and have children and eventually die of something else at a later time, and their lives left echoes on the modern world. Since we're talking about something that happened less than a hundred years ago, those echoes are not exactly faint. There are Facebook groups for people whose parents died in the Holocaust, because the end of the Holocaust and the creation of Facebook were separated by less than 60 years.

The exact fate of the 973 people on Transport E, Train Da 20 from Praha to Lodz may be lost to time and the destruction of evidence, but we do know that there was an explicit plan to rid Europe of Jews, we know that a large number of people who survived the ghettoes and camps described the details within, we know that there was a specific effort led by Paul Blobel to destroy evidence of mass murders, and we know that Blobel's defense at the Nuremberg trials in regards to that effort to destroy evidence was "I was following orders and thus did nothing wrong", not "that did not happen".

If you take a group of people into custody, prevent them from leaving for a period of years, use them as forced labor in documented terrible conditions, and then at the end of those few years only a few people from that original group are anywhere to be found, and those few people say you murdered the remaining people, and the remaining people are never heard from again, and you say "yeah, I did it and destroyed the evidence after" - then yes, I think it's fair to conclude that you murdered the remaining people. I think it remains fair to conclude that the missing people were murdered, even if there is doubt about how specifically those murders were performed, or what specifically happened to the bodies.

I believe that

  • If the 4.8 million names from Yad Vashem were largely fabricated, the effort to compile passport and other documents would have been immense, and an immense effort like that would have left marks on the world.

  • If the 4.8 million names from Yad Vashem had been largely duplicates, I would expect to see a lot more duplicate names and birth dates in the search for people on that particular transport.

  • If the 4.8 million names from Yad Vashem had mostly referred to people who survived, I would expect to see genealogical records from those survivors.

You will note that I am making specific, concrete predictions of things I will not see. Thus, if you want to convince me, you could try to show

  • There has been a massive effort to create millions of falsified documents from before the war. Note that this effort would have either been recent or made mistakes that are easily detectable by modern techniques.

  • If you select 10 people at random from the Yad Vashem list, there are a substantial number of records that Yad Vashem claims are different people but in fact share the same names / birth dates / origins (if your claim is that the actual Jewish death toll was 1.4 million rather than ~4.8 million, each real person would account for more than 3 records on average, so your sample of 10 should turn up over 20 duplicate records).

  • If you select 10 people documented as "murdered" at random from the Yad Vashem list, a significant fraction of those actually survived, and documents showing their survival (genealogical records, obituaries, etc) will exist, because we don't live in the dark ages.

Note that the "at random" is doing quite a bit of work in the latter two examples - random samples are vital when operating in an environment where people want you to conclude false things.

Do you have any specific, falsifiable beliefs about the provenance of those 4.8 million names and the fate of the people those names referred to?

I believe that my trans friends should be able to browse the internet without seeing content they deem hateful/disturbing (like harry potter content).

How can I support my trans friends

Teach them that uBlock is not just an adblocker, and that they can use it to hide any youtube videos with specific words or phrases in the title, by adding a rule that looks like

youtube.com###dismissable:has-text(/harry\s*potter|rowling/i)
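
(That rule goes under the "My filters" tab of the uBlock Origin dashboard. The exact element selector may need tweaking if youtube changes its markup, but the :has-text(...) part, which matches a case-insensitive regex against the element's text, should carry over.)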

I think "trans people should be able to browse the internet without seeing content that disturbs them" is a perfectly reasonable opinion. I think the best way to achieve that is to teach them how to filter out the bits of the internet that disturb them on their end, rather than trying to change the internet as a whole so that an unfiltered stream is not disturbing.

[Omicron]

<1%? My vague memory is that there were a lot of variants, and that in general 'virus mutates to spread more and be less harmful' is fairly common, so imo there's not that much reason to believe this.

For a random variant I'd agree. But omicron was really weird in a lot of ways, and I'd actually put this one at more like 30% (and 80% that something weird and mouse-shaped happened).

  1. Omicron was really really far (as measured by mutation distance) from any other sars-cov-2 variant. Like seriously look at this phylogenetic tree (figure 1 in this paper)
  2. The most recent common ancestor of B.1.1.529 (omicron) and B.1.617.2 (delta, the predominant variant at the time) dates back to approximately February 2020. It is not descended from any variant that was common at the time it started spreading.
  3. The omicron variant spike protein exhibited unusually high binding affinity for the mouse cell entry receptor (source)
  4. Demand for humanized mice was absurdly high during the pandemic - researchers were definitely attempting to study coronavirus disease and spread dynamics in mouse models.

The astute reader will object "hey that just sounds like a researcher who couldn't get enough humanized mice decided to induce sars-cov-2 to jump to normal mice, and then study it there. Why do you assume they intentionally induced a jump back to humans rather than accidentally getting sick from their research mice". To which I say "the timing was suspicious, the level of infectiousness was enormously higher in humans which I don’t think I'd expect in the absence of passaging back through humanized mice, and also hey look over there a distraction from my weak arguments".

For each of the following, I think there's a nontrivial chance (call it 10% or more) that that crackpot theory is true.

  • The NSA has known about using language models to generate text embeddings (or some similarly powerful form of search based on semantic meaning rather than text patterns) for at least 15 years. This is why they needed absolutely massive amounts of compute, and not just data storage, for their Saratoga Springs data center way back when.
  • The Omicron variant of covid was intentionally developed (by serial passaging through lab mice) as a much more contagious, much less deadly variant that could quickly provide cross immunity against the more deadly variants.
  • Unelected leaders of some US agencies sometimes lie under oath to Congress.
  • Israel has at least one satellite with undisclosed purpose and capabilities that uses free space point-to-point optical communication. If true, that means that the Jews have secret space lasers.

I did the same with ChatGPT4, but as a more iterative process. The summary I was able to produce as a result of that iterative process is still a kinda disconnected jumble of unsupported ideas, but at least it's possible to see what those ideas are.

My process for disentangling that was

  1. Find the full passage from Hegel -- the quoted bit was the 8th item in a list of 8 items.

  2. Feed that into ChatGPT with the prompt "I am trying to understand the following passage by Hegel. [the 8 bullet points]. Specifically, I am confused by point 8; please replace all pronouns in that passage with their referent, but make no other changes".

  3. Verify that each of the replacements makes sense (in this case, a few of them didn't).

  4. In a new chat, prompt with "I am trying to understand the following passage by Hegel. [the passage with pronoun replacements]. Can you explain what Hegel is referring to when he talks about (absolute form/substance|negative/positive relations to the world|immediate/intellectual perception)" (with one prompt for each).

  5. Using that information, write my own, non-obscurantist summary.

  6. In another new chat, prompt "I am trying to summarize the following passage by Hegel. [the raw passage]. I read that as saying approximately the following: [my summary]. I think my summary is basically correct, but can you confirm that?"

  7. Repeat a number of times until ChatGPT tells me that my summary is good (it turns out ChatGPT has very strong opinions about Hegel if you write like someone who is Wrong On The Internet about Hegel)

The specific intuitions I have about GPT4, which drove this process, are

  1. It is mildly superhuman at the Winograd task. In other words, it's better than me at taking a bunch of pronoun-heavy text and identifying what the pronouns refer to.

  2. The task it spent most of its training budget on was "given a bunch of text, predict the next token". As such, the next token it predicts is generally going to look like it was generated by the same process as the previous tokens. As such, if the previous tokens contain ChatGPT making a mistake or being unable to do a task, it will predict tokens that look like "ChatGPT makes mistakes / is unable to do the task", even if it could do the task with the correct prompt. As such, it is very important not to have "ChatGPT fails to do the task" in your context -- if that starts happening, reroll the failure response; if rerolling a few times fails, start a new chat.

  3. If you fail to do a task but ChatGPT succeeds, that is fine and good. If you flail around in the general direction of the answer you want in the chat, and then ask ChatGPT to help, and it does, and you say "thanks, that was helpful", it will be helpful in the same direction in later messages in the same chat.

  4. The training data can be modeled as "all text ever written". That's not literally true but it's directionally correct. As such, if a bunch has been written about a topic, ChatGPT actually has quite a lot of knowledge about that topic, and the trick is creating a context where a human who knew the thing you wanted to know would have expressed that knowledge. The internet being the internet, that context is frequently "someone is wrong on the internet and I must correct them".

  5. The RLHF step did meaningfully change the distribution of output responses, but as far as I can tell the main effect is that it strongly wants to write in its specific assistant persona when it's writing in its own voice. However, it is perfectly happy to quote or edit stuff that is not in its own voice, as long as it's in a context it recognizes as "these are not the words of ChatGPT".

I have heard that approximately the same is true of Bing Chat, though Bing Chat performs best if you speak Binglish to it (e.g. instead of saying "Thanks, that was helpful. Can you condense that down to a brief summary for me?", say "thanks 😊. now can you write 📝 me a summary? 🙏").

In my experience the typical woke, progressive, or liberal has an attitude of "my opinions are so obviously at least directionally good for humanity, and my political opponents are so obviously vile reactionaries whose opinions are beneath contempt..."

This is my experience of the most vocal woke/progressive/liberal people. My experience of the typical person who puts their pronouns on their slack profile without protest when HR asks, votes for whoever has a (D) next to their name if they bother to vote, and has a vaguely positive affect towards the idea of minorities is that they want to be on the "right side of history" but they don't want to have to think about it or make any decisions.

Which, IMO, is super valid. Getting into twitter flamewars about politics is bad for the world and bad for your mental health, so people who make the decision that instead of doing that they just want to grill are making a good choice.

Jury duty is an example of a service that people are universally compelled to provide. So looking at the working conditions and pay of jurors may also be instructive towards answering this question.

I've actually come to the opposite conclusion, at least in the short term -- I've found that ChatGPT (4, unless otherwise specified) often understands the purpose and high-level practice of my job better than I do, but also can't do some of the simplest (to me) parts of my job. Concrete examples:

  1. I had a nasty architectural problem in the application I work with, where the design was pulling the shape of the code in two incompatible directions. I gave ChatGPT some context on what directions the code was being pulled (think "REST vs RPC", if that means anything to you) and what considerations were causing the codebase to be pulled in those incompatible directions, and ChatGPT was able to suggest some frankly great changes to our process and codebase structure to alleviate that conflict.

  2. I am trying to do a crash course in mechanistic interpretability for ML, and this involves reading a lot of dense, math-heavy research papers. My level of math expertise is "I took math classes in high school". Despite that, I was able to feed ChatGPT a paragraph of a research paper involving esoteric stuff about finite groups and representation theory, and it was able to explain the concepts to me on a level where I could implement, in code, demonstrations of the phenomena the paper was talking about (and understand exactly why my code worked).

  3. When I was trying to understand "what happens if you literally just turn on backpropagation to train a language model on its own output, why doesn't that Just Work for solving the long-term memory problem?" I asked ChatGPT for 5 examples of things that would go wrong and search terms to look up to explain the academic research around each failure mode. And it did. And it was right, and looking up the search terms gave me a much deeper understanding than I had before.

  4. On the flip side, ChatGPT is hilariously incapable of debugging stuff. For instance, when I provide it with code and a stack trace of an error, it suggests doing vaguely plausible actions that occasionally work but it kinda feels like a coincidence when they do. It honestly feels like working with a junior developer who struggles with basic concepts like "how does a for loop work".

  5. I have also found that ChatGPT struggles quite a bit if I feed it a paragraph of an academic paper that makes a specific claim and ask it to come up with potential observations that would provide evidence against that claim. I've tried a bunch of different wordings of this task, without much success. I suspect that the reason for this is the same reason ChatGPT is so bad at debugging.

Specifically where I think ChatGPT falls down is in things that require a specific, gears-level understanding of how things work. What I mean by that is that sometimes, to truly understand a system, you have to be able to understand each individual part of the system, and then you have to understand how those parts fit together to determine the behavior of the system as a whole.

Where it excels is in solving problems where the strategy "look at similar problems in the past, see how those were solved, and suggest an analogous solution" works. It's really fucking amazing at analogies, and it has seen approximately every heuristic everyone has ever used in writing, so this strategy works sometimes even when you don't expect it to.

Still, "debugging stuff that is not working like the theory says it should" is a significant fraction of my job, and I suspect a significant part of many people's jobs. I don't think ChatGPT in its current form will directly replace people in those kinds of roles, and as such I don't think it would work as a drop-in replacement for most jobs. However, people who don't adapt can and probably will be left behind, as it's a pretty strong force multiplier.

Here are five more.

  • the switch from intergenerational to single-generational households: a kid is a much bigger burden on "two people who have full-time jobs" than on "a bunch of people, some of whom might work only part time".

  • reduced mixing of people-who-have-kids with people-who-do-not-have-kids: people mostly do what the people around them do. If childless people mainly interact with other childless people, "having kids" is mostly not "the sort of thing people around them do".

  • salience of the negative aspects of having kids: news stories about kids tend to be either "look at the terrible stuff that happens to kids" or "look at the terrible stuff that happens to parents who let their kids act like kids". Observing the news and going "it's news because it's rare" is not a natural mental motion for people.

  • general doomerism incentives lead people to expect that they will be bringing their children into a world that is bad for children: if you express optimism, you will face criticism if things go badly, but if you express cynicism and pessimism, and the bad outcome you predicted hasn't happened, you can just darkly say "...yet".

  • culture of specialization: in general, the incentives of our culture are to do the things where you have comparative advantage. Most people don't think they have comparative advantage at child rearing.

You could probably collapse this list down to three bullet points of "media" / "culture" / "household size" if you're going for brevity.

Analysis, Context, Hook, Own Opinion.

ACHOO.

An economic zone that cannot or will not pay its soldiers sufficiently well that they are willing to fight because the pay is worth it considering the risk does not deserve the name of "economic zone". It's legitimate for economic zones to exist, and there are benefits of being an economic zone rather than a nation, but if a geopolitical body goes that route they should not expect to reap the benefits of being an economic zone while getting culturally-unified-nation levels of devotion in their armed forces.

I personally quite like the standard of "if you're going to bring up a controversial topic it should be because you personally care about and have a well-thought-out stance on the topic, and you are willing to either defend your stance or change your mind".

For a trivial example, human infant behavior in the first hour after birth seems pretty strongly genetically determined. Pretty much all healthy infants do the same things in the same order (see table 1 here).

See also the field of evolutionary aesthetics:

When young human children from different nations are asked to select which landscape they prefer, from a selection of standardized landscape photographs, there is a strong preference for savannas with trees [...] There is also a preference for landscapes with water, with both open and wooded areas, with trees with branches at a suitable height for climbing and taking foods, with features encouraging exploration such as a path or river curving out of view, with seen or implied game animals, and with some clouds.

That's a very specific type of landscape, and it does in fact seem to be pretty tied to our genetics.

Crystallizing this further, I think particularly in the case of depression / anxiety / ADHD, what happens is that a cultural meme develops that some common facet of the human experience is caused by some specific disease, and that the appropriate way to fix this is to obtain treatment.

Examples:

  • Alice notices that she does not enjoy things that she's "supposed" to enjoy. She's heard that this can be a symptom of depression. She looks up "how to tell if you have depression", and reads that common symptoms include apathy, lack of interest, excessive sleepiness, and insomnia. Now, every time she has trouble falling asleep, she thinks "wow, this depression sucks" and not "I am having trouble falling asleep". She looks up "what to do if you have depression", and sees the usual suggestions about sunlight / therapy / medication. She thinks "well, they were definitely right about my symptoms, so they're probably right about the treatment as well", and gets a therapist and a sunlamp.

  • Bob notices that he's having a lot of trouble focusing on his job as Senior Manipulator of Boring Numbers. He has heard that trouble focusing can be indicative of ADHD. He looks up "symptoms of ADHD", sees fidgeting, absent-mindedness, difficulty focusing, and forgetfulness. Now, the next time he is introduced to a room full of people and has trouble remembering their names, he thinks "wow, ADHD sucks" and not "wow, I'm bad at names". He obtains some amphetamines, which is what you do when you have ADHD.

  • Carol notices that her heart rate is elevated and her muscles are tense before her board meeting. This has happened before the last three board meetings too. She googles "elevated heart rate tense muscles" and sees that, according to WebMD, she either has anxiety or lupus. She knows that WebMD is strangely likely to say that people have lupus, but the description of anxiety is on-point. Additionally, there are some new ones on there, like "difficulty concentrating", which she didn't think were caused by the same thing as the thing where she gets way too nervous before important meetings, but maybe it is after all. She talks to a therapist, and learns that indeed, all of her problems are because she has a disease called "Anxiety", but with the proper therapy schedule and medications, she can probably live some semblance of a normal life.

  • Dan notices that he's been having trouble with his sexual performance. He goes to the friendly neighborhood elder, who informs him that this is a common symptom of being cursed by witches. When you are cursed by witches, lots of bad things can happen, including livestock death, sudden inexplicable vomiting, and impotence, and in extreme cases, your penis sometimes even disappears! The next day, one of Dan's chickens keels over and dies for no apparent reason, and what's worse, he starts violently vomiting after eating the dead chicken. And oddly his penis feels smaller than usual. What was it that elder said he should hang above his door again?

Hypothesis if this is a usefully predictive model of the world: People who read their horoscope on a daily basis are more likely to experience chronic pain than those who don't, even when controlling for all of the obvious confounding factors. I expect that this would be the case because I expect "reads the horoscope daily" to be a reasonably good proxy for both "is searching for an overarching narrative of why things are the way they are" and also "is prone to confirmation bias", and I expect that "you have chronic pain" is one of those things you're more likely to believe if you're searching for an overarching explanation and tend to look for evidence under streetlamps.

Crackpot theory time: It would be possible to significantly reduce the burden of chronic pain by doing something like the following: find a group of people who

  1. Experienced debilitating, chronic pain for some period of time

  2. Changed something plausible about their lives

  3. Immediately after making the change, noticed something that was an obvious consequence of making the change

  4. Now mostly find that, while they do sometimes experience pain, the pain is no longer continuous, is usually telling them something specific, and usually does not interfere with their ability to function

and then loudly broadcast the existence of this group of people at people who have chronic pain. I expect that this intervention would work even if people knew you were doing it, as long as you (correctly, I think) pointed out that your narrative is more plausible than the narrative of "sometime in the recent past, a phenomenon started happening where otherwise-healthy people started experiencing significant pain for no apparent reason, and found themselves unable to live their lives normally due to that pain, and found that, though the pain might sometimes temporarily improve, it always comes back". Because "I do sometimes experience pain, but it's not continuous" and "I sometimes experience a reduction in pain to the point where it's not noticeable, but the pain always comes back" in fact describe exactly the same set of experiences.

The arguments I've seen that OpenAI is an existential threat are from people who think it's an existential threat in the same way that a business whose product was consumer-grade bioweapon development kits would be an existential threat.

If your problem is that a company is selling bioweapon development kits, starting a competing company that also sells bioweapon development kits does not help.

Concrete note on this:

accusations that they promised another, "Chloe", compensation around $75,000 and stiffed her on it in various ways turned into "She had a written contract to be paid $1000/monthly with all expenses covered, which we estimated would add up to around $70,000."

The "all expenses" they're talking about are work-related travel expenses. I, too, would be extremely mad if an employer promised me $75k / year in compensation, $10k of which would be cash-based, and then tried to say that costs incurred by me doing my job were considered to be my "compensation".

Honestly most of what I take away from this is that nobody involved seems to have much of an idea of how things are done in professional settings, and also there seems to be an attitude of "the precautions that normal businesses take are costly and unnecessary since we are all smart people who want to help the world". Which, if that's the way they want to swing, then fine, but I think it is worth setting those expectations upfront. And also I'd strongly recommend that anyone fresh out of college who has never had a normal job should avoid working for an EA organization like nonlinear until they've seen how things work in purely transactional jobs.

Also it seems to me based on how much interest there was in that infighting that effective altruists are starved for drama.

I dunno, my highest voted comment ever had lots of formatting and upwards of 50 links.

And looking at my top comments in general, and also specifically those ones that have been AAQC'd, I see that "strong thesis supported by a bulleted or numbered list" seems to do well.

I do agree that eloquent and passionate rants also do well here, but I think that effortposts with backing links are also well-received when people put in the effort to make them.

Were you thinking of some particular well-formatted and linked post(s) that didn't get the support you expected recently?

I think you're missing

8: Go meta. If someone has an idea, you should try applying that idea to itself. It's fine if it's a stretch; the idea of applying an idea to itself means that you probably read Godel, Escher, Bach, which means that you are Smart and therefore Good. Or at least it proves that you read a summary of it. Or interacted a lot with people who did.

(I say this as someone who does (8) way too much, including arguably right now)

Use words is for take my idea, put in your head. If idea in your head, success. Why use many "proper" word when few "wrong" word do trick?