site banner

Culture War Roundup for the week of February 13, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

10
Jump in the discussion.

No email address required.

Where are all the scientists?

After some experiences doing research during a bachelors and masters degree, I had had enough of an experience with academia writ large and academic research in particular to know that while I enjoyed some aspects of it, it wasn't for me. Getting a PhD is a grueling experience that I don't have the work ethic, and, in the field I was most interested in, possibly the intelligence to make it through. Since I've never spent any time as a practicing scientist, I've long felt like it wasn't my place to criticize the institution of science or scientists. Now, a few years later, I've gotten more experience with academic research second hand through a close relationship with a graduate student who is experiencing all of the trials and tribulations that I bailed on in favor of a cushy job at one of our economies massive digital rentiers.

Writing my master's thesis was the closest I got to actual academic research, and the experience left me decidedly soured on the whole endeavor. I did all of the work. My adviser knew little about the subfield I was focused on, and barely had the time to read enough papers to keep up with her own subfield much less learn one that a student who would only be there for a year was interested in. I didn't really expect anything different given the nature of her incentives and my prior experiences. I was however a bit surprised when I mentioned off-hand that I was doing all the work and she took exception, pointing out that she gave advice and suggestions during out weekly meetings. These meetings consisted of me explaining what I had done and her giving mostly terrible and out of touch advice that would have crippled my project had I heeded it. I believe she made one genuinely helpful contribution to the research over the course of the year. Again, this is all perfectly reasonable and I hold no resentment about it, but I did find it bizarre that she didn't realize just how little she had contributed to the actual work. This was not a relationship between two researchers working on a problem together, it was a relationship between one (somewhat bumbling and lazy) researcher and a mildly interested administrator whose job was mostly to make sure I had done sufficient work to graduate.

Since that experience, I've gotten to witness that dynamic second hand through my graduate student friend and her extended network of graduate student friends. Her advisor is much more hands on than mine was, but we know PhD students with advisors just as out of touch with their work as mine was with my thesis work. Even my friend's blessedly engaged advisor never does any actual research. He has not touched any of the lab equipment in anger in years (exceptions are of course made for photo ops). He does not write any of the scripts to analyze the data his students collect. The one area where he is intimately involved is in the production of the true output product of academic research: technical writing about the research. Specifically, he edits the technical writing of his students furiously. His other main contribution towards research is securing and retaining funding, a task which is done by the production of yet more technical writing.

Though our society calls my friend's advisor a scientist, he isn't one. Calling him a science manager would be more accurate, but I think that that too is not quite right. I'm a programmer by trade, and another analogy that springs to mind is the technical team leads who get to the point where they stop writing code and are wholly occupied by ancillary activates such as writing design documents, coordinating with other teams, and reviewing the code of more junior programmers. If this goes on long enough, these people's skills can dull. I'm reminded of the time that I interviewed a fellow who had previously spent a 10 year stint at Netflix and, if his resume was the be believed, lead some fairly impressive projects. I expected him to breeze through the interview, but he couldn't code his way out of a paper bag. I think professors engaged in research are generally some combination of a science manager and one of these not-so-technical team leads.

By the time anyone gets tenure, they have been forced by the inexorable Moloch to have spent the last several years acting as one of these science coordinators. If they try to do any research themselves they will get out-published by someone willing to hire 15 grad students and keep 6-10 balls in the air at once. At this point they will have spent around 5 years being an actual if initially poorly trained and little supported scientist (a grad student), then probably a few more years on top of that (as a postdoc on multiple postings). On top of that, they must be unusually intelligent and driven to have gotten so far. This means that they still likely have their edge when they get tenure. At this point, they even have the option of of becoming a real scientist again. They could stop hiring grad students to cut their lab down to a reasonable size and actually start spending time in the lab themselves. Few choose this option when it means giving up on becoming at the "top of their field" and when their compensation is tied to the amount of grant money they can bring in.

Upon achieving tenure by being an effective science coordinator, the ambitious academic continues in much the same pattern that got them to where they are. They may retain their edge, after all these are our best and brightest, but I suspect that many of them on some level go the same way as the poor fellow from Netflix. I imagine most don't fall quite so far as he evidently had, but as time goes on, the skills that they learned as graduate students and postdocs will dull and fade. They are required to teach classes, so their theoretical fundamentals remain very strong, but their ability as actual practitioners falls off. That friend of mine frequently complains that the senior "researchers" she works with ask her to do things that are just clearly not going to work from the perspective of someone who is in the trenches day in and day out.

My manager is a very good programmer, but he does not view himself as a programmer, he views himself as a manager. As such, he does not make an effort to tell me how to do my job, though he surely has opinions on it. He understands that because he not working on the code as part of his day to day work, he doesn't have the right context to make technical decisions, and I do.

Science coordinators do not possess the graciousness of my manager. Since they view themselves as researches rather than coordinators of researchers, they are quite willing to hold forth on the right way to do things. For this reason, bad decisions are made in the pursuit of research when science coordinators tell senior grad students how to do their job. A complicating factor here is the fact that junior grad students do need to be told what to do, as our system does almost nothing to train them to be scientists before they are expected to be generating data in the lab (they are pretty smart though, so in time they will learn through osmosis from the senior grad students).

Another problem with the fact that science coordinators do not view themselves as managers is the fact that they tend to make no effort to actually learn or apply the art of management. When one of my older relatives made the transition from being an individual contributing engineer to an engineering manager, he spent about a year poking through management books. I'm generally pretty skeptical of the MBAification of things, but the field is not entirely without merit and I do think managers should at least take the time to think deeply about what it means to lead people and to hold power over them. At the very least, they should recognize that they are managers and they have some new, uniquely people focused responsibilities.

Micromanagement and mismanagement in general makes the lives of grad students hell, and maybe even leave some productivity on the table (though given the brutal competition of academia I tend to think that the professors that make it are the ones who have figured out how to wring every last drop from their grad students). Beyond that though, there are societal impacts. Whenever a "scientist" wades into the public discourse, they are inevitably a science coordinator rather than a practicing scientist. They are likely well suited to that role since the one remaining technical activity they engage in is technical communication, but the public is still deceived by this inaccurate title.

I think the larger harm done by this system is the utter waste of human capital. As I've mentioned a few times so far, grad students come in as untrained neophytes, so don't become productive for a year or two. Even once these young researchers become competent and effective practitioners, they are still inexperienced. They reach their most experienced period as scientists when they are postdocs, but postdoc postings are not long enough to delve deeply into a field. Then, at the height of their powers, they become mere coordinators. There are no graybeard scientists. This, more than anything seems like a tragedy. Brilliant professors should be doing science, not writing grant proposal after grant proposal and copy-editing their students work.

Much ink has been spilled about the fact that technological progress is getting slower and slower per researcher hour as we push the technological frontier further and further out. Scott shows some good data that illustrate this point in Is Science Slowing Down. Like Scott, I tend to think that the low hanging fruit theory explains what is going on here, but I wonder if missing scientists might be another factor.

[Dumb question warning -- I know nothing about science or academia]

Could it also be a result of the commodification of scientific work? I think this is related to the low hanging fruit theory. As a total layman, when I think of scientists, I think of some lone genius fascinated by a specific topic holed up in a lab furiously running experiments powered half by reason and half by intuition. Or even of some rich 18th century Royal Society guy making discoveries in his hobby lab.

Speaking with zero knowledge, I'd imagine there are a few key differences today. First, there are way more "scientist" jobs available because of the massive increase in university attendance. A department that took in 50 undergrad students might have produced 5(?) graduate scientists while a department that takes in 1000 today with easier coursework might produce 700(?) (no idea about these numbers, but I'm sure there's orders of magnitude difference). Instead of a small collection of cskilled artisans, you now have a large army of somewhat less competent assemblyline workers, and so they all get put to work grinding out tiny incremental improvements on the assembly line for much lower pay and prestige.

I would also imagine that there was more of an apprenticeship system in the past, where you attached yourself to a scientist and learned his theories and maybe even carried on and developed his line of thought, almost like ancient Greek philosophers did. Whereas now, profs are graduating large batches of students that each get the same Artificially Flavored Homogenized Education Product crammed into their heads before getting put to work on the assembly line.

Perhaps there are still scientists in my idealized image out there but they are just a vanishingly small subset of so-called scientists? Not sure if any of this is on the mark, but I'd be interested in reading your thoughts.

I really don't know how things used to work, but that certainly seems plausible. Certainly as we produce more scientists the bar is probably lowered since the distribution of human intelligence is relatively fixed over time (the Flynn effect is not enough to make up for the expansion in the number of scientists, especially because it works mostly by lifting up the bottom of the distribution so we have more average people and less profoundly stupid people).

The idea of larger labs causing eroding an old apprenticeship model is interested, though in my experience smaller labs can be pretty bad learning environments because the professor does no research and there are not enough older grad students to mentor the new grad students. Maybe things would have been different if all labs were smaller and more professors actually did their own research.

I've certainly seen a lot of complaints along these lines, but I have no idea what solution is. One detail missing is that there are a lot of postdocs and private sector researchers who are often seen as grad students who "failed" to become tenure-track professors, so they instead go do actual research work. But grant writing or related begging for funding eats up a lot of time of the people who are supposed to be our smartest scientists (hot-shot tenure-track professors). And I've seen a lot of complaints that it also directs research in bad/inefficient directions. Research money has to be distributed somehow; not sure what a better system would look like. I assume "low hanging fruit" is part of the problem in that research projects used to be cheaper, so funding them was simpler.

You're onto something here. Where I did my degree, the following was pretty much understood by all the students after their first few years:

  1. The purpose of research grants is to get research done for the funder more cheaply than is possible in other sectors of the economy.

  2. The purpose of Professors is to get funding and write grant proposals. This means anticipating what research will be trendy and making a lot of friends among the people who staff grant proposal review committees.

  3. The purpose of the older graduate students is to do the research, write papers, and write grant reports, while mentoring the younger students.

  4. The purpose of the younger students is to study and learn, while assisting the older students on writing grant reports and doing experiments. Oh, and to teach undergraduate classes.

  5. Graduate students needing additional mentorship must actively seek it.

(We didn't have post-docs or research staff, but they basically allow scaling of the grant-writing work and supervisory work of professors.)

This was a decent system for graduate students who were self-driven and capable. It had many different failure modes, however: It rewarded professors for just enough surface level knowledge to come up with cool sounding projects that were in reality infeasible. It was hell for students who were given the new projects, because they had no mentors in their specialty, and had no idea that things were infeasible. Older students could be abusive or predatory, and unscrupulous younger students could wait until an older student had worked out nearly all the kinks in an experiment and then swoop in to take credit for the results. Professors had a bias for sudents running simple but creative experiments over meticulous work that was actually necessary long term for good engineering.

Like you, I had a professor with only a surface-level knowledge of my research domain. I was often given bad advice and advice that wasted time. (The students figured out that our PI didn't read papers, but read abstracts and skimmed figures, which made for some funny misinterpretations of the literature.) The PI's feedback on student work was vague and hard to understand. However, when it came to overcoming stuck research projects my advisor was a genius. The experiment-breaking result became the new goal of the experiment, easily publishable. My advisor also eventually communicated an understanding of how to write a good research paper, after which all those vague comments suddenly made perfect sense. So the relationship turned out quite valuable.

The worst part was the social environment. In order to get the PhD students had to become first author on multiple papers, but the PI would assign multiple people to each research project, bringing in more people the longer it took. I'm not sure there was sabotage (I'm dumb enough to fuck things up myself, thank you), but there was definitely spying and theft of results between students. The students needed favor with the professor to buy equipment: seeking the favor of the professor resulted in schemes much like those of medieval courts. Reading The 48 Laws of Power during my PhD, the content of the book depicted the social environment of the lab quite accurately.

Overall, it was a fun time, but I would probably recommend a gap year after a masters degree instead of a PhD. Travel the world, get more life experience, suffer less stress, have more fun, and in the end you didn't spend four years becoming the world's foremost expert in some experiment that is only performed in one lab.

The worst part was the social environment. In order to get the PhD students had to become first author on multiple papers, but the PI would assign multiple people to each research project, bringing in more people the longer it took. I'm not sure there was sabotage (I'm dumb enough to fuck things up myself, thank you), but there was definitely spying and theft of results between students. The students needed favor with the professor to buy equipment: seeking the favor of the professor resulted in schemes much like those of medieval courts. Reading The 48 Laws of Power during my PhD, the content of the book depicted the social environment of the lab quite accurately.

Jesus that sounds horrible. Fortunately, I don't think things are quite that bad in my friend's lab, but her PI is known to play favorites. There is definitely a ton of political BS.

It was horrible. I only survived because I had a supportive romantic partner. I was under so much stress that my hair whitened. Apparently stress kills melanocytes.

Are you willing to give us a vague description of your field? I am guessing life sciences or maybe synthetic organic chemistry.

unscrupulous younger students could wait until an older student had worked out nearly all the kinks in an experiment and then swoop in to take credit for the results.

Did you swap older/younger here?

No. In the specific incident that comes to mind we had an new student try to take credit for the results of a 3rd year PhD candidate after fixing/running the nearly-successful experiment while the older student was at a conference. Thankfully the PI saw through it. I'm sure it goes the other way too, though.

Leaving for the industry. That's where they're all going. Grants are only a small part of the problem. I'm a math phys PhD candidate at a top school, all my cohort could get academic jobs if they want. Doesn't matter, well over 50% of them leave. Even ignoring the fact that academia pays like a tenth of what we'd make in the private sector, we also have to deal with a stupid amount of teaching duties, inane bureaucratic hoop-jumping, administrative bloat and grift, and yes, the whole grant nonsense. The fact is that if a PhD in physics left for quantitative finance right this minute, they'd be treated with infinitely more respect than anything academia can give them, and that's ignoring the financial aspect of it.

As for why science and tech is getting slower, I don't know if the general thesis is true (how are you even measuring levels of tech here?) but certainly a lot of frontier science hinges on two big, connected issues: (1) the technical machinery needed to make substantial progress in many fields now itself takes years to master, which not many academics are willing to do; and (2) the level of abstraction required for the most frontier of frontier work is getting so challenging that the totality of what Einstein through Feynman knew about math and physics is now considered basic, and the kind of black magic being done here would be challenging to even the most talented theoretician.

On the other hand, academia is also getting wider: as soon as machine learning became possible (there was a hardware barrier in the late 90s that prevented the earliest ML papers from being implemented), we suddenly saw a lot of new, low-hanging fruit to pick up, which is still the case in ML. Just look at the example of diffusion models. Their equivalent in statistics dates back to maybe even the 70s and 80s, never mind their equivalents in math and physics. When did they get implemented in ML? Half a decade ago?

I don't know how to predict the pace of science, I don't even have a grasp on its current pace, other than that yes, scientific journalism is so stupid that I can hardly blame the public for thinking that nothing important has happened. But contra what some people like Hossenfelder might suggest, I don't think physics is in a rut. Maybe empirical particle physics. We aren't picking up anything that's as monolithic in public consciousness like Einstein's relativity, but we have plenty of math and physics work today that are every bit as intellectually and practically dense. Of course nothing has the same oomph, but well, nothing has the same oomph as Caveman Grug discovering how to count, and we don't say that scientific progress has been declining since Grug.

I am fascinated by your idea of what actions make a scientist. If an experiment I am conducting needs some beakers washed, does it make me less of a scientist if I have a freshman undergrad wash them--so long as I check that it's done properly? If the experiment that I designed needs some chemicals mixed in particular proportions and sequence, does it make me less of a scientist if a senior undergrad does it--so long as I check that it's done properly? If I design three experiments to test a theory, does it make me less of a scientist three first-year graduate students carries each experiment out--so long as I check that they are done properly? If there are multiple competing theories in my field and I have good ideas about how I can test them but to design the experiments in detail I would need to have a thorough and detailed knowledge of several disparate sub-fields and possibly fields in adjacent disciplines, and also I would need to raise substantial funds to finance such experiments, does it make me less of a scientist if I recruit a team of grad students and post-docs, each specializing in some particular sub-field and tasked with designing and carrying out experiments there, while I use my broader expertise and established credentials to convince whoever I can to finance these projects?

Are you less of a programmer because you don't program in Assembly? Or because you import modules? Are you less of a software engineer if you spend your time with the client determining their needs, then oversee the development of architectural design, APIs for relevant modules with appropriate testing system, and then hand off the actual code writing to a team of programmers?

I wrote a wall of text, so maybe you missed the bit where I said "He does not write any of the scripts to analyze the data his students collect." It's not that the grad students are the experimentalist and he is doing all of the experimental design and analysis, they do all of the analysis. My understanding is that the experimental design process is somewhat collaborative between the PIs and grad students, and I would say that participating in these experiment design meetings is doing science, but doing a bit of science on the side does not a scientist make.

The analogy with programming is not importing modules, it is writing design docs. You need to be a good programmer to write good design docs, but if all you do is administrative management tasks plus collaborating with actual programmers to write design docs, I'm sorry, you're not a programmer.

I agree with your assessment of what makes one a programmer. Programming is a specific technical skill, and what makes one a programmer is being good at--and doing--that technical skill.

A software engineer, on the other hand--or better yet, a software architect--need not necessarily do any programming. They can offload the tasks that require that specific technical skill to programmers.

I suspect that this is at the root of the contention between your perspective and mine. Do you regard doing science as a set of technical skills? Or do you regard doing science as making progress on our ability to predict and manipulate the physical world?

And once I phrase it like that, I find that the specific issue of our contention--under what conditions you/we call the people who progress our ability to predict and manipulate the physical world "scientists"--stops mattering so much.

The current system (in US) where one can progress our ability to predict and manipulate the physical world on a fundamental level is done mostly in university-based labs. These labs rely on funding to continue to make their progress. Funding depends on maintaining a solid and clearly-legible track record of previous progress (which in our system involves high-quality publications in peer-reviewed journals that are well-regarded in the field). Funding also depends on seeking out and getting those grants, and then making sure to satisfy their conditions so the lab can get more of such grants in the future.

So if I run a bio-chem lab (the Hooser Lab at Stanbridge) and my goal is to progress what we know about what causes aging and what may halt the process in mammals, then my main job is to make sure that my lab can actually make useful progress in my goal. I need to break down what my lab needs to do, what resources it needs to do that, and how I can get those resources. Then I get those resources, and oversee the process. And as much as I enjoyed writing scripts to analyze data when I was a postdoc at Whatihear Lab at Oxbridge, maybe my time would be better spent on reviewing drafts for publications (because I have the breadth of knowledge to connect that esoteric result to broader field, or to suggest in the discussion multiple probable interesting consequences), and speaking with grant-giving foundations (because I have built my reputation as a serious scientist and they will take me seriously), while a postdoc in my lab oversees the data analysis.

A software engineer, on the other hand--or better yet, a software architect--need not necessarily do any programming. They can offload the tasks that require that specific technical skill to programmers.

There is no meaninfful distinction between programmers and software engineers. I consider myself a programmer because I feel like it captures what I do more accurately, and refer to myself as a software engineer in situations where it is financially beneficial. Software engineering is programming plus bureaucracy. Lots of things involve bureaucracy. When it comes to software engineering, programming is the main bit. If you take out the programming, it's not software engineering anymore. I have no patience for someone who thinks they are contributing technically by building a pie-in-the-sky UML diagram and demanding that actual programmers implement their out of touch vision.

I suspect that this is at the root of the contention between your perspective and mine. Do you regard doing science as a set of technical skills? Or do you regard doing science as making progress on our ability to predict and manipulate the physical world?

I think you're right about that this is where we disagree. If we take doing science as "making progress on our ability to predict and manipulate the physical world", well that applies to the electron microscope salesmen, academic departmental secretaries, directors of corporate research orgs, plumbers who install chilled water systems in labs, the maintainers of python and r, and any number of other people who contribute in some small way to the broad economic activity of advancing science. You my protest that since science coordinators work a bit closer to the main body of the academic work than the directors of a corporate lab, they are scientists, but both of those roles are mostly about coordinating the technical work.

So if I run a bio-chem lab (the Hooser Lab at Stanbridge) and my goal is to progress what we know about what causes aging and what may halt the process in mammals, then my main job is to make sure that my lab can actually make useful progress in my goal. I need to break down what my lab needs to do, what resources it needs to do that, and how I can get those resources. Then I get those resources, and oversee the process. And as much as I enjoyed writing scripts to analyze data when I was a postdoc at Whatihear Lab at Oxbridge, maybe my time would be better spent on reviewing drafts for publications (because I have the breadth of knowledge to connect that esoteric result to broader field, or to suggest in the discussion multiple probable interesting consequences), and speaking with grant-giving foundations (because I have built my reputation as a serious scientist and they will take me seriously), while a postdoc in my lab oversees the data analysis.

Within our current system, that's what you need to do to push research forward. It doesn't mean you would be a scientist in that situation.

I'm not blaming PIs for the current state of affairs. They are operating within a system of constraints and incentives that they had no role in building. I'm just pointing out that they are not scientists, despite being the best trained people to fulfill such a role.

I think you're right about that this is where we disagree. If we take doing science as "making progress on our ability to predict and manipulate the physical world", well that applies to the electron microscope salesmen, academic departmental secretaries, directors of corporate research orgs, plumbers who install chilled water systems in labs, the maintainers of python and r, and any number of other people who contribute in some small way to the broad economic activity of advancing science.

Excellent point! My follow-up question is therefore: what actual utility is there in distinguishing some of the jobs (professions? tasks?) that progress our ability to predict and manipulate the physical world as "scientist"?

I do think that this utility exists and is important. It reminds me of Feynman's description of cargo cult science:

In the South Seas there is a cargo cult of people. During the war they saw airplanes with lots of good materials, and they want the same thing to happen now. So they've arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head to headphones and bars of bamboo sticking out like antennas--he's the controller--and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land. So I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they're missing something essential, because the planes don't land.

In an organization whose purpose is to progress in our ability to predict and manipulate the physical world--and which has a solid track record of effectively making this progress--who are the people that are essential to the enterprise, and who are in necessary supporting roles?

If the latter: do they require transferable set of skills that are not particular to this specific enterprise? The plumber who installs the chilled water system is such; so is the CPA in HR; so is the janitor. The lab manager (like, in a chem lab) would need to have specialized knowledge to do her job, but it's still transferable set of skills (solid Bachelor's level knowledge of chemistry plus great organizational skills). These people do useful work that enable the enterprise, but they are not essential.

It's useful to reserve the term "scientist" for the former--those who are essential to the enterprise--to keep the telos of their profession foremost in mind. It's useful, because the scientist's telos is frequently in direct contradiction with goals people have (e.g., getting that publication after you put in so much effort into that experiment, if only those couple of observation points weren't undermining your hypothesis). Let me quote Feynman once more:

But there is one feature I notice that is generally missing in cargo cult science. [...] It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty--a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid--not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked--to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can--if you know anything at all wrong, or possibly wrong--to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

In summary, the idea is to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgement in one particular direction or another.

Yeah, I think I basically agree with that unless I'm misreading you. I think "scientist" has a bunch of cachet, and we should assign that cachet to the people doing what is normally thought of as science rather than pushing paper.

It seems like this pretty much explains the replication crisis, no? There just isn't much expertise accumulating on how to do actual science, in particular data analysis. By the time you get to be a senior grad student, you might have accumulated some knowledge on what a solid statistical analysis looks like, but then you're either up (to an administer role where you don't get your hands dirty with data) or out of academia into industry.

I feel like the simpler explanation is that (as OP mentions) promotion is tied to bringing in grant money and grant money is tied to having sexy preliminary results to justify an expensive study.

So the incentive is just to make up some sexy preliminary results.

It's not that inexperienced scientists made a mistake with their data. It's that experienced bullshitters did what they had to do to stay in their high-prestige job.

t. Postdoc

In Slovenia, they have university labs full of only researchers who will do experiments for private companies, for a fee. They will invest in newer equipment to do more interesting experiments etc. They do nothing but research. On their off time (not researching on contract), they pursue their own topics with the equipment.

In Germany, there are many research labs, both state and non-governmental like the Plank society (though funded by the state!)

The society has a total staff of approximately 17,000 permanent employees, including 5,470 scientists, plus around 4,600 non-tenured scientists and guests.[2] The society's budget for 2018 was about €1.8 billion.[2] As of 31 December 2018, the Max Planck Society employed a total of 23,767 staff, of whom 15,650 were scientists.

and

In 2020, the Nature Index placed the Max Planck Institutes third worldwide in terms of research published in Nature journals (after the Chinese Academy of Sciences and Harvard University).[5] In terms of total research volume (unweighted by citations or impact), the Max Planck Society is only outranked by the Chinese Academy of Sciences, the Russian Academy of Sciences and Harvard University in the Times Higher Education institutional rankings

I donno if this is exactly culture war or not. But it involves a major culture war figure.

I finally got Starlink on my isolated rural property. I'm having a guy mount it proper this week. So for now it's just on my north facing porch, maybe 3 feet above ground level, totally blocked to the south by my house, surround by trees. It's working 10x better than a competing satellite internet company Viasat, with a proper mounted dish pointed exactly where it needs to go with zero obstructions. We had some light rain on Sunday, and Starlink powered on through it with small micro-losses of connection regularly, where as Viasat was just totally down probably 80% of the day.

It's night and day. Starlink, on their best effort plan, pulls about 80 Mbps down, 10 up, with a 50 ms ping. Viasat pulls about 110 Mpbs down, 0.40 mbps up, with an 800 ms ping. So while Viasat seems to technically have more throughput strictly down, it's upload bandwidth and ping are off the charts terrible. I can also say, experientially, that Viasat speed test number does not reflect their actual download speeds. I'm lucky for Steam to suck down games at 1 MB/s on Viasat, and it was happily peaking over 11 MB/s, but mostly in the 5-6 MB/s range on Starlink.

To say nothing of the quality of service. I'm paying about $300 a month for Viasat, with an extra $15 a month for their "easy care" which gives me a discount on service appointments. Down from $500 to $200. When I had some roofers accidentally nudge the dish a fraction of a degree, that was $200 to have fixed 3 days later. Starlink just points itself exactly where it needs to point. And it's $110 a month. There is no small difference in the data caps either. Viasat has me paying $200 a month for 100 GB of "priority data", and $40 for another 30 GB when I run out. I typically have to buy two of those a month, hence the $300 bill. Starlink has a 1 TB data cap. I don't think I've ever used that much data in a month in my life.

10 times better, 1/3rd the price, indescribably more convenient. If this were being run by a public company, or a company with different stake holders, there is no way they'd have overshot the target in such a small market by so much. It would be minimally better, minimally less expensive, and it's hard to say how much they'd care about convenience. If it made them more money, they'd gladly make that worse to charge you for service appointments. I can only attribute the astounding quality of service to Musk's naive ideological commitment to the vague cause of "more internet for more people".

Which brings me to the culture war angle. It's been the subject of some number of headlines that Musk's financing of Twitter is an enormous albatross around his neck. He's forced into liquidating stock almost monthly to keep it up. I've seen a lot of talk that it might force his hand, and cause him to lose control of some or all of his companies. So I'm going to register a prediction here. Probably the first noticeable casualty of that, is that if anyone ever gets their hands on SpaceX, they are going to monetize the ever loving fuck out of Starlink. Jack the price up to $150-$300 a month, remove all the waitlist and oversell every area, throw harsher data caps on everyone, you name it. With the added culture war angle that if it ever occurs, SpaceX will almost certainly fall into the hands of Neoliberal Adherents, the Starlink mostly services rural red tribers, or grey tribers that have defected. So it'll be hard not to interpret changes in policy as being punitive towards political enemies.

I don't like a lot of things about Elon, but it can't be denied that projects he's involved in are ambitious, difficult, and — occasionally — successful. Say what you want about his means, politics, manipulation, fraud, etc., the man gets things done, and that's respectable.

Very often successful. What's the average success rate for startups.. 15% ? And how many Musk companies are still operational ?

And not by fraudulent or political methods - e.g. no amount of fraud or politics or manipulation can orbit payloads. And his company just tested the most powerful launch vehicle ever.

Sure, he's engaged in what's probably fraud (with Tesla autopilot especially), however, given his parentage, that's really not a bad track record.

Musk while he has problems, seems like a far better person than his father.

If Maye Musk is to be believed, Elon is basically a product of fraud.

Depends on how you define success. He's incredibly ambitious with SpaceX; I don't think he would say he's succeeded until people are living on Mars. Many of his projects have failed/are failing: The Boring Co., Twitter (IMO), Neuralink... again, he's incredibly ambitious. It should be expected that he fail at least a few times. As for comparing his projects to the average start-up, you have to consider that the perceived success or failure of a start-up is determined by the investors' view of its future. When you're the investor, or when you control the money, whether your projects fail or not becomes a decision you make. He's parlayed his success with PayPal (not a flattering story for Elon) into enough wins to keep going and that's it.

It's worth noting that United Launch Alliance (ULA), one of their main competitors, is a joint Lockheed-Boeing project rather forcefully spun off from lawsuits between the two major market players. In the interest of continued service (neither going bankrupt and ending production), they were effectively forced to work together in 2006. I think your prediction is right: a forced sale to one of the larger government contractors, followed by re-evaluating the continual loss-leader strategy Musk seems to like.

On the subject of Starlink specifically, I had a project to estimate the market feasibility of such a service before it started launching, and at least my result was that it probably can't be profitable at the list prices. There is a chance it could be selling to the government, but there aren't enough people in OP's position to pay landline ISP prices to fund the operating costs for the launches, the ground stations, and staffing.

The US at least has spent a ton of money running fiber to rural areas in the last few decades, which doesn't require too much ongoing upkeep.

My (Musk-skeptic) view is that Starlink exists to provide a regular payload for Falcon 9: despite promises that cheaper launches would increase demand, they seem to have largely flushed out the wait lists and have actually seen a decline in commercial launches in recent years. "Exponential growth" wasn't going to keep banner launch rates up without making payloads themselves. The satellites and ground terminals themselves seem to work, although not as well as originally promised: do they have cross-links like Iridium had in the 90s? How much power do the ground terminals draw?

The economics of private companies mean that its hard from the outside to view their profitability. On the other hand, I've long predicted that empire dissolving, and I've admittedly been wrong about the timeline for that so far.

My understanding is that the Starlink has three potentially profitable strategies, all of which depend on inter-satellite links to really be breakthroughs:

  1. HFT without fiber latency. This is where clients will be able to pay eight to nine figure subscriptions per year, with major routes forming a high speed web between New York, London, Brussels, Singapore, and Tokyo.

  2. Military/aviation. It's hard to put a dollar value on military contracts, but this is probably seven to eight figures per year total, since military already has their own communications web.

  3. Worldwide consumer access with less than worldwide infrastructure. Note that once access (downlinks) are installed for the above contracts, the marginal cost to expand civilian access to the globe is almost nil. The cost of satellite launch may be spilt between customers across the entire broadcast range: Africa, Europe, Asia, all the ocean shipping and cruise ships, etc.

Is it really a culture war idea to suggest "If someone buys this company, they will try to monetize it over a shorter timeframe?"

Who are "Neoliberal Adherents"?

The sorts of people running companies steering hard into ESG and DEI. Making the right mouth sounds about equity, and creating special programs to advantage the usual minority client populations while generally being terrible for everyone regardless.

Lol another word butchered by the left. Milton Friedman basically the reviver of neoliberalism ain’t woke.

Milton Friedman

Yeah, he died in 2006 at the age of 96.

This is objectionably low-effort. Don't post like this please.

Neoliberal is definitely not woke. Neoliberal is Reagan and Thatcher and maybe Tony Blair if you're stretching it. Only in fringe online circle (like the reddit neoliberal sub) does neoliberal have anything to do with woke. When the average person uses neoliberal, usually in a pejorative sense, they mean "Those bad people who supported free market reforms at the expense of unions or the coal miners (or whatever)"

The confusion comes from the fact that the word mostly gets used by far-leftists to refer to people like Hilary Clinton, which gives everyone else the impression that the term means something like "deep Democrats who want to regulate everything to death". I basically never see it used for people like Reagan or Thatcher except in exactly this scenario of explaining what neoliberal really means.

I always got the impression that the dissident right thinks of neoliberals as "socially liberal, economically conservative". Ergo, allowing corporations more latitude to exploit workers and push for stuff like LGBT, abortion, etc. because "they want the labour force completely atomised from traditional social relations and derive all its identity from its career and place in the firm". This is a very common line among right coded tankies. I suppose there's some truth to it, but I suspect the "real reason" why wokeism is pushed so hard is no single reason at all. There is no Machiavellian scheme behind any of this, just some people (especially from the urban middle class) with whom there is legitimate purchase of wokeness because they see it as the next wave of progressivism which has already been the dominant ideology for decades, companies and public figures that parrot whatever's the most fashionable in elite consensus, activist types who will themselves to believe in this, casuals who just about believe the first thing they see in the headlines, and dissidents who are frustrated with it all.

I suppose there's some truth to it, but I suspect the "real reason" why wokeism is pushed so hard is no single reason at all. There is no Machiavellian scheme behind any of this, just some people (especially from the urban middle class) with whom there is legitimate purchase of wokeness because they see it as the next wave of progressivism which has already been the dominant ideology for decades

I suspect the exact opposite. No one was seeing wokeness as the next wave of progressivism at the beginning, even pointing at it would get you accused of weak-manning or nut-picking. Even if you were paying attention to nothing other than /r/ssc /r/themotte, you could see wokeness bubbling up towards the top, the attempts to hand-wave it away going from "it's just a couple of crazy kids on tumblr" -> "it's just a couple of crazy kids on university campuses" -> "it's just some Karens in HR (but Damore had it coming, BTW)" -> "it's just a couple of cynical CEOs insincerely mouthing woke slogans" (<-- you are now here), you could see it seizing the reins of power, and imposing itself on the masses.

I think the only reason people come up with these mundane non-Machiavellian explanations, is that it allows them to position themselves as non-supporters of wokeness, without becoming it's opponents. It's the most comfortable spot for a lot of people, as declaring yourself as a supporter requires you to commit to some amount of self-flagellation (not to mention having to answer for all the crazy stuff being pushed by governments, companies, ngo's and activists), but coming out as an opponent gets you branded as a rightoid, even if you're otherwise an outright communist, and there's nothing worse than being a rightoid.

I suppose a counterpoint to the 2nd paragraph would be that it's also comforting to blame a select few groups for the cultural crisis, but it's much more black pilling to believe that a sizable chunk of the masses isn't merely being misled, but is very much within its own agency when it demands more wokeness. And to some extent, what's most fashionable among the woke urban middle class does seem to direct the conversation. For instance, throughout the last decade, it was all feminism that was pushed so aggressively as the centrepiece of woke. Culminating into MeToo which went on until 2019. It's still being pushed of course, but the spotlight is on BLM since the George Floyd protests and the whole "Defund the police" campaign. I just cannot rationalise such a scheme without making it seem even sillier. There does seem to be some pressure from below.

Well, you're a bit behind the times, because even BLM fell out of fashion post-Rittenhouse, it's all about trans issues now, which will also fall out of fashion because of the medical scandal around transgender care for minors.

I sympathize with the pushback against being psychologized, since as you rightly point out this is something anyone can do to anyone, but in my opinion the mundane theory just doesn't hold water. If there was pressure from below, you shouldn't have seen accusations of of nut-picking in the past. If there was pressure from below you shouldn't see wokeness having to rely on censorship, shadow-banning, and algorithmic supression. If there was pressure from below, you should see that Harry Potter game bomb in terms of sales, and fuddy-daddy game journos struggling to explain it. Everything about these mundane explanations flies in the face of observable reality.

I just cannot rationalise such a scheme without making it seem even sillier.

It seems pretty brilliant from where I sit. It's like fighting a hydra, for every head you chop off, two new ones take it's place.

More comments

I think it's use by that subreddit has led to a lot of people on Reddit thinking that that is actually what neoliberal means.

For a big part of my life, the only person I had ever seen purposely identify as a neoliberal was Scott Sumner.

Neoliberal is definitely not woke. Neoliberal is Reagan and Thatcher and maybe Tony Blair if you're stretching it

If "neoliberal" can mean Reagan and Thatcher, then it can also mean woke. This is like a Rings of Power fan complaining about being unfaithful to the source material.

In my experience, "neoliberal" is basically a sneer word among woke people to characterize more moderate social/economic positions (e.g. Hillary Clinton is a neoliberal because she loves global corporations too much, etc.), so the fact that you seem to think it means the opposite basically confirms my belief that there is no consensus on who neoliberals are, other than that they are bad and control everything.

It's the ideology of the world order. What you're describing is tribes of neolibs arguing about what it means to be a neolib, but they all are within it, none of them fundamentally disagree with the direction of the modern social project, they just bicker about how committed or practical methods of advancing it are.

Compare Islamists, or Dengists.

People call it that for lack of a better word to describe modernist globalism. But what ideology is the "party of Davos" if not neoliberal?

I think it means, basically, "pro-globalhomo".

Are price increases(which are the most obvious route to monetize) really culture war in this economy?

On the subject of Musk, I reckon he reveals just how much management skills are ignored and denigrated in society. In about half the Musk conversations I've seen, people say 'oh it's his engineers who make the brilliant inventions, he just does media, finance, (did you hear about his father's SOUTH AFRICAN diamond mine?) cult of personality'

If the quality of engineers is all that matters, why don't we just sack all the engineers at NASA, who've done fuck all after the Space Shuttle, which was itself enormously cost-inefficient?

Hiring the right engineers, putting them in the right places and managing their projects in the right way is essential. Few know how to do this right. Bezos's rockets aren't successful - but he's rich enough to get good engineers. If he knew how to pick them and manage their work, it stands to reason that his rocket company (founded in 2000, launching only small rockets that don't even achieve orbit I believe) would be more high-profile. Perhaps there's a separate skill needed for running rockets than running Amazon or he didn't spend enough time on Blue Origin or whatever, I'm confident that Bezos has a similar capability.

Musk interviewed and decided upon the first thousand or so employees at SpaceX himself, he clearly did a pretty good job of it. I recall from the same book that he was poaching people off the F-35 program, people who were basically solely devoted to a single bolt on the fuselage or something of that nature. The established space launch companies were all stifling bureaucracies.

Similarly, Napoleon's soldiers did all the fighting but the general himself was indispensable. Napoleon picked out the Imperial Marshals, planned campaigns, often decided where battles would be fought and made the critical decisions in combat. That's the essence of military genius. Musk has business and project management genius, achieving impressive results fairly quickly. His wisdom and political skills are more dubious - Bezos might have the upper hand in that less obvious domain.

And it's $110 a month

Not bad, though outside of rural areas, internet prices are already much cheaper and far faster in most of the West outside the five-eyes countries (where internet prices are absurdly expensive) and population density is much higher. Musk spoke about bringing internet to far-flung places but how many people can afford these prices in rural India? Seems like he'd have to cut prices by 90% to make it viable.

Given how important it appears to be for Ukrainian efforts, I suspect that its true utility will be in the military domain and potentially B2B, too.

Seems like he'd have to cut prices by 90% to make it viable.

Once the satellites are up, he can do this and still make money -- the marginal cost of new customers is nearly zero, and the dishes can be trivially geolocked so Americans can't just ship units from India for home use.

The dishes themselves are unavoidably expensive due to being quite a bit more complex than a normal satellite dish, but this opens up some sort of business model for somebody to buy a dish and set up a mesh network in each village or whatever.

Given how important it appears to be for Ukrainian efforts, I suspect that its true utility will be in the military domain...

I haven't seen anything mentioning Starlink anywhere in coverage of the Ukraine conflict! I mean, there was that wave of stories covering the donation, most of which predicted it would bring vital help to Ukraine's military, but those were coverage of an American company being generous--and likely at the behest of the same company's talented and experienced P.R. team--and not technically coverage of Ukraine.

Last week, my company released its 3rd annual DEI report. It consists of a laundry list of DEI achievements, some questionable statistics, and inspiring messages from very well-paid executives.

Performance reviews are another feature of this time of year. Conventional wisdom holds that getting a good review depends on meeting your pseudo-self-defined goals for the year—and, by implication, on setting achievable ones. With that in mind, our executives set measurable, sensible goals with every expectation of meeting them.

That was a joke. The goals were 1/2 women and 1/3 people of color. We were reasonably close on the latter, not that this required any particular change. But our goal for gender parity was hilariously out of line with the ~1/4 we currently have. I could propose various reasons why an engineering- and manufacturing-heavy corporation that makes devices for killing people might not employ so many women, but that’s not really the point. No, this is not a serious goal. It’s advertising.

My company is not particularly woke. It repeats some of the phrases and buys into the aesthetic, but it’s clearly not ideologically captured. If there are true believers, they sure aren’t in charge. DEI is valued insofar as it keeps us from alienating potential talent and potential customers—and no more. At the end of the day it’s not going to shoot itself in the foot in service of equality or equity.

I believe this is true of the vast majority of corporations in the US! Identity politics are a small part of the business signaling that goes on every day. It’s directly proportional to how much the product is a cultural symbol rather than a material good. Apple products or Amazon media or Super Bowl ads are more likely to publicly proclaim their diversity because they’re selling an idea. It does not require true believers, though they help with credibility. The idea itself is what benefits from woke signaling.

This has implications for the trajectory of DEI. Debating whether woke ads are going to increase or alienate support is missing the point. That sort of identity politics is downstream of the culture war, and should not be used to make predictions about “peak woke.” It represents corporate ability to score points off the prevailing winds, not ideologues’ level of infiltration into corporations.

Defense contractors are wildly biased towards veterans. Our hiring is more likely to involve some sort of aggressive patriotism; their scruples are more likely to support selling drones and bombs. Sometimes this even has an advantage of rapport with customers. But this is an end, not a means. It would be a mistake to predict growing evangelism for veterans due to our obvious ideological capture. Likewise, reading DEI reports as a foothold in the culture wars is missing the point. They are a specific form of advertising, and follow the popularity of idpol rather than driving it.

I'd like to get a job in DEI just to get an idea of what these people actually do all day. I know some government agencies compile statistics on things like minority hiring in various industries but I'd think that doing this internally wouldn't take much time, even at the largest companies.

Judging by this report? Graphic design, plus a side of marketing. There were also workshops, outreach groups, and training updates; it’s not clear to me where the line is drawn from normal operations. I don’t think there are a ton of DEI-only employees. Instead you’ve got a management tree that hands down funding and objectives to the HR teams.

I spent most of last spring and fall working on projects that required me to walk past the DEI office for Allegheny County every day. It's a large county, with about 7500 employees, but as far as corporations go, that puts them in a league with American Eagle Outfitters, Domino's Pizza, Winnebago, and Weight Watchers. And this is an entire office, with 8 total staff—a Chief Equity & Inclusion Officer, a Deputy Director, 2 Certification Analysts, a Contract Compliance Specialist, an office manager, and 2 clerks. And while I was looking up that information I found that county Department of Human Services had its own DEI-type office with at least 2 employees. I get the contract specialist, since government contracts are required to comply with certain affirmative action requirements (which are largely more procedural than substantive), but I don't know what everyone else does.

Based on that, the cynical answer is "jobs for the boys" (and gals, we're all equal opportunity now of course). When you have more people with titles on the office door than staff doing administrative work (five officers/directors/whatnots to three office staff) then it's a great way to get local government Joe or Sally who's a reliable party member or otherwise tied in to the local politicians (could be by marriage or family) a nice cushy job as a reward.

And of course, if Joe or Sally happen to be a minority themselves, that means that you've also met the quota for local government increasing its hiring of women and other minorities. Win-win all round!

I believe this is true of the vast majority of corporations in the US! Identity politics are a small part of the business signaling that goes on every day. It’s directly proportional to how much the product is a cultural symbol rather than a material good. Apple products or Amazon media or Super Bowl ads are more likely to publicly proclaim their diversity because they’re selling an idea. It does not require true believers, though they help with credibility. The idea itself is what benefits from woke signaling.

This has implications for the trajectory of DEI. Debating whether woke ads are going to increase or alienate support is missing the point. That sort of identity politics is downstream of the culture war, and should not be used to make predictions about “peak woke.” It represents corporate ability to score points off the prevailing winds, not ideologues’ level of infiltration into corporations.

In a prior post talking about discrimination against conservatives in online dating we discussed almost this exact question as far as the value of signaling political beliefs in an asymmetric way.

I feel like on dating apps there's a certain Strawmanization of political spectrum where 'Right = Super fascist' and 'Moderate/Apolitical/whatever = Hiding Super Fascist'.

[I]f listing your politics as "Right Wing" or even "Moderate" is the objectively wrong answer in online dating, then doing so means you probably fall under either 1) or 2). Either you don't even know the socially correct answer, so you're a maladroit chump, that's not an attractive look; or you're so right wing that you can't possibly stay in the closet about it, it would be too obvious, which regardless of your politics isn't a good look, and quite likely maps onto something like "superfascist" anyway.

In the same way, it doesn't require that a company is "woke" or even that they want to signal "wokeness" to choose to advertise in that way. The company's officers merely must "know" that woke is the objectively correct answer in terms of how to advertise/signal. That's the way that the fashionable folks signal, so if you signal that way you are signaling that you know what you're doing and that you are aware of the social mores in question. Doing otherwise indicates that you are either ignorant of those social mores, or so incapable of hiding your politics that it would be pointless to try. Neither are good looks, in dating or in corporate marketing. The socially accepted provision of a "correct" answer makes using the incorrect answer a sign of stupidity or extremism.

But isn’t the goal in dating to find a partner; not appeal to the modal partner?

So differentiating yourself even if it turns off the modal user may maximize chance of matching with relevant partner. Could be the same in business.

Since Orwell is totally not overused, I'll submit the following quote.

"It was always the women, and above all the young ones, who were the most bigoted adherents of the Party, the swallowers of slogans, the amateur spies and nosers−out of unorthodoxy.”

Even in the rare world of college educated young right wing women that this might attract; you are more likely to run into a Q-Anon Trumpist or a "How dare you suggest my 95 year old crippled gramma who spends every day screaming in pain should have the option of euthanasia" fundamentalist type. Women have always relied exclusively on their alliance with the social consensus to extract the resources and protection offered by others. This means that by definition there are not enough matches to go around for the loosing side. Now maybe men have overcompensated in their coyness about their political beliefs; leaving an avenue to be exploited, but most non-crazy right wing men are by definition going to loose.

"How dare you suggest my 95 year old crippled gramma who spends every day screaming in pain should have the option of euthanasia"

Speaking as a bigoted fundamentalist anti-euthanasia type: it starts out like that.

Then it ends up like this. "Oh, hey, yeah, our new law that we put in place back in 2015? Yeah, we sorta have to revise it a bit because it was working too well":

Under the current law, Canadians whose only medical condition is a mental illness, and who otherwise meet all eligibility criteria, will not be eligible for MAID until March 17, 2023 (see About mental illness and MAID). This temporary exclusion of eligibility in the March 2021 legislation was intended to provide the Government of Canada with more time to study how MAID on the basis of a mental illness can safely be provided and to ensure appropriate safeguards are in place to protect those persons. However, the Government of Canada believes that more time is needed to complete and disseminate this important work.

On February 2, 2023, the Government of Canada introduced new legislation that would extend the delay of eligibility for MAID in circumstances where a person's sole underlying medical condition is a mental illness until March 17, 2024. This proposed one-year extension would allow more time for provinces and territories to prepare, and time to complete and disseminate key resources currently under development to ensure that MAID assessors and providers are ready to address these more complex MAID requests. This proposed extension would also provide more time to consider the parliamentary Special Joint Committee on MAID's final report.

On March 17, 2021, the revised legislation on MAID (former Bill C-7) received Royal Assent and became law. Former Bill C-7 expanded eligibility to MAID to individuals whose death is not reasonably foreseeable and strengthened the safeguards for these applicants. With these changes, individuals whose sole medical condition is a mental illness, and who otherwise meet all eligibility criteria, are not eligible for MAID until March 17, 2023. This temporary exclusion was intended to provide the Government of Canada time to consider the Expert Panel on MAID and Mental Illness’ conclusions and recommendations, as well as the findings of the parliamentary Special Joint Committee on MAID.

So "95 year old grandma screaming in pain" became "persons who are not in danger of death" and then had to be rowed back to "but if you're only suffering from something like depression, we'll put a hold on that for a year or two" probably due to public pushback.

And don't worry, you won't be able to ask for government assisted suicide just on the basis of being homeless! At least for now:

The least popular scenario in terms of participants agreeing that MAID could be a solution, was “a person can’t find affordable housing.” Just nine per cent of participants said they would support the idea of a person in this situation seeking MAID.

The report noted that an Ontario man recently made news after he requested MAID not because he wanted to die, but because he thought it was a preferable alternative to being homeless.

A disabled Ontario woman also applied for MAID after seven years of applying for affordable housing in Toronto with no luck.

To add on to the wild Canadian MAID stories: Alan Nichols relieved euthanasia in Canada and the only medical problem claimed was hearing loss.

There was a woman in Belgium in her 20s who was euthanized because of PTSD she had after an ISIS bombing, no physical wounds at all.

So "95 year old grandma screaming in pain" became "persons who are not in danger of death" and then had to be rowed back to "but if you're only suffering from something like depression, we'll put a hold on that for a year or two" probably due to public pushback.

And I'm supposed to favour the alleged interests* of a infintesimal percentage of unproductive disordered strangers to have their decisions overriden over my own interest, and that of everyone I know and love to not end our lives in agony because???

How well do you think opiates control the pain of having fluid building up in your lungs so that you can't breath? What exactly do you think a "natural" death looks like?

Oh, he died in his sleep, you might say. But did he? Did the pain ripping through his chest shake him out of his final dream while he stared into the dark or did his brain merely confabulate a drowning, or burning within it?

Of course I don't expect these issues to matter much to you, just like a woke person's fetish for sacred diversity is entirely unaltered by the mass rape of girls in Rotterham. Both easily disgust the average person, and yet it seems we've been condemned to be ruled by zealots whose vision of morality is entirely unconnected to the reality of the human experience. In either case the Publics preferences are destined to loose most of the time.

I'll add Scott Alexander's poem, inspired by his own experience working in hospitals below:

If in some smothering dreams you too could pace

Behind the gurney that we flung him in,

And watch the white eyes writhing in his face,

His hanging face, like a devil’s sack of sin;

If you could hear, at every jolt, the blood

Come gargling from the froth-corrupted lungs,

Obscene with cancer, bitter with the cud

Of vile, incurable sores on innocent tongues

My friend, you would not so pontificate

To reasoners beset by moral strife

The old lie: we must try to cultivate

A culture of life.

https://slatestarcodex.com/2013/07/17/who-by-very-slow-decay/

If I and mine are to shoulder blame for "death by suffocating slowly on the fluid in your lungs instead of a tidy medical overdose of morphine", then you and yours will have to shoulder blame for "no, we can't get you a stair lift, but how about we arrange to kill you?"

It starts with "mercy". It ends with "lives unworthy of life".

Well not exactly, one is the immediate result of nature and the exact realization of your political values. The other is the (highly plausible) result of my not having full control over other players, most of them my sworn enemies, in a likely slippery slope scenario (*).

Nonetheless, I will bite the bullet and say that yes, I'd prefer a world where the horror stories you mention are normal than one where the current state remains. One involves 1) a willing victim I probably won't ever meet, the other an 2) unwilling one that I'm likely if not guaranteed to know and love. I see choosing 2 over 1 is basically the definition of completely inverted moral instincts.

As for your "lives unworthy of life" comment, I love it how ostensible righties just can't help accusing lefties of being nazis, when leftism in it's current stage is most accurately described as the worship of the weak, the ugly and the stupid.

Of course I suspect our gap is even greater because opposition to euthanasia tends to be a terminal value in and of itself for fundamentalist types; with the horror stories being merely how you lobby people who do not share these. This is of course entirely fair, I just feel it should be mentioned.

(*) I use this phrasing ("slippery slope") reluctantly because I know well that there are agents actively working for the worst case version of euthanasia. A decent person must be willing to face the likely consequences of their actions, and not hide behind platitudes.

Before I nitpick, I just want to state that I mostly agree with you, and certainly have little interest in Amy's moralizing claims on any given day.

That being said:

How well do you think opiates control the pain of having fluid building up in your lungs so that you can't breath? What exactly do you think a "natural" death looks like?

https://erj.ersjournals.com/content/50/5/1701153

Opioids, other than themselves causing respiratory depression, are also effective in reducing the unpleasantness of breathlessness. It's commonly used for that purpose in palliative/EOL care.

I've seen enough people die ignominiously in ICUs to share your views of euthanasia, but just wanted to correct an error!

Thanks for the data, nitpicks are always appreciated!

IMO with Online Dating the amount of stuff that'll be an instant killer when you're one profile of 100's and not to be too big a deal when you've got some actual traction/met in person is huge. Down to the gender dynamic and the lack of communicative nuance, really.

I’ve never done any online dating (met my wife in college). But just seems like there could be different strategies between maximizing dates and maximizing potential mates.

As someone who has dated online, I think this is a bit naive.

I went on about 20 first dates but only a handful of second dates and met only one potential long term partner (my fiance). Online connection != real life connection so it's necessary to cast a wide net.

I was lucky and was able to get lots of dates. Many men are not. Removing themselves from consideration at the first stage of a long funnel would be an unwise dating strategy.

For the average heterosexual man, it's probably best to lean hard into a particular niche; a perfectly reasonable, inoffensive profile of an average man who would be fine in the real world has nothing to offer women over the dozens of similar profiles of very attractive men. There has to be something to make a woman choose your profile. The default is not being in consideration at the first stage of the long funnel, and you've got to place yourself in consideration.

Attractive men are probably best off being generic, getting a suitably wide funnel, and filtering out based on in-person compatibility. Prefiltering by leaning hard into a niche doesn't improve the quality of the matches nearly enough to counteract the lost opportunities and the battered Elo score (so you won't even be presented to compatible matches who would like your non-generic profile).

Ugly men are pretty much SOL on online dating, no matter their strategy, and should mostly focus on real life and becoming more attractive.

Sure, but it has to be a niche that exists. There is no unserved market of single, conservative women who are looking for a conservative man to hook up with. 61% of women identify as Feminists {higher among the young women mottizens are actually looking at} while 53% belong to a church. Obviously these are non-exclusive, some religious women describe themselves as Feminists; and for that matter some religious women are interested in hookups and some Feminists are conservative. But for the most part, marketing yourself as conservative on a dating app is marketing yourself to women who are neither religious nor Feminists, which is a vanishingly small portion of the dating market. In my lifetime, I can count the number of women like that I've known on my hands. For the most part Feminists will not be interested in a man whose niche is being obnoxiously conservative, while religious women will not be interested in hookup apps and will marry young to someone they meet in real life. Actively excluding the majority of young women from your dating pool, in exchange for nothing, won't help you. Picking an evolutionary niche is great, picking one that doesn't exist in your environment is a path to extinction.

To bring it back to corporate, referencing my own prior comment:

In the academy, Democrats are estimated to outnumber Republicans something like 12:1. While studies note that the concentration is highest in Northeastern elite colleges, those are also exactly the colleges that set the trends the rest follow. Is it any wonder that Democrats rack up ever larger leads among college graduate voters?

...

In the tech industry, the vast majority of donations from employees go to Dems. The FAANGs in particular all gave over 80% to Ds. Tech entrepreneurs aren't much redder than their employees as a class. Research scientists, somewhere between Academics and tech workers, also lean overwhelmingly left, with 80% Dem/Lean Dem as far back as the Bush admin.

Corporations get very little benefit from going hard right, among the competitive classes of employees that companies need to attract. This is worsened by Rightists being, broadly, Capitalists by belief, and family men by inclination. Rightists are going to prioritize making money, both personally and for their families, and not making a political point in their choice of job. Making signaling Left the "correct" choice.

Also frankly from hanging out with such circles, young female conservatives are in a pretty insanely good spot to meet people 'organically' but also tend to be sensible to date up in both age and resources.

Anecdotally I'd say there's a very limited market for young male conservatives who aren't plugged into families with means. Conservatism trends very masculine to begin with, and the rare young females who trend that way can suddenly have their pick of the litter with both males of the same age and males who are older.

I was lucky and was able to get lots of dates. Many men are not. Removing themselves from consideration at the first stage of a long funnel would be an unwise dating strategy.

Essentially my point. A lot of stuff that a girl might nix you for on paper will be fine with some in-person chemistry and vice-versa.

The parallels between a woman on a dating site and a manager looking to hire a new employee are strong. Both are in a position of negotiating strength; both are going to have vastly more "applicants" than they have positions to fill. However, that doesn't necessarily make their job an easy one, because finding the one applicant that will actually work out for them long-term is quite difficult. I have no experience being a woman, but I do have experience being a hiring manager, and I can tell you a few things:

  1. You're looking for reasons to quickly eliminate candidates from consideration (so you don't waste time interviewing/dating them). Auto-rejecting somebody because they have misspelled words on their resume (or wearing Crocs in their profile pic) might seem cruel, but anybody who is paying attention knows what the rules are, and you don't want to hire/date people who aren't paying attention.

  2. Unless you're the sole owner of a private company, you will have people to answer to if you end up making a bad hiring decision, and so it's important that your choice be defensible according to your applicable social consensus. "I'm sorry so and so didn't work out, but they went to Harvard and their resume had all the right keywords" =~ "I can't possibly be blamed for Chad turning out to be an asshole, he went to Harvard and said he was a feminist and wanted a long-term relationship and kids". You're not really looking to take chances on people who have what most people consider red flags even if you personally don't think they're a big deal.

Yes but, in both cases, there is no cohesive conservative market to which one would target. There's no huge store of ideologically conservative high end workers to tap, because making money is the terminal value for many conservatives, they just won't take a pay cut for ideology. Affirmative action only produces bitter victims a tier below the one you were looking for. And despite what films tell us, given the option most leaders would prefer obedience to free thinking.

And in dating there is no significant subset of women who are single, attractive, conservative, and non religious.

More likely, oligarch financial institution demands ESG ratings. These ratings are then handed down the chain of command until they land on some middle manager at a welding company in a very white town and have to be rammed through. It used to be that companies had owners, today the owner is a vast network of middle managers representing dozens of funds who may very well be investing in each other, forming a large circle. Nobody is really in charge, the decisions are made far from the people who implement them and the owners barely are aware of what is going on. Companies that want to increase their rating can hire a diversity manager to fix their score. The score is calculated on an Excel spreadsheet which will in turn be used to calculate another score on another excel spread sheet so that someone on lower Manhattan can write a nice report advertising their financial instruments.

I don’t think this is representative of sufficiently large public companies, but I’m not very confident about it.

My point was that corporate DEI, at least in engineering, is about aesthetics rather than enacting any actual policy. That’s compatible with governance-by-spreadsheet in which no one really cares about the ends, just the score on currently valued metrics. It’s not really what I’d expect from oligarchs pushing an agenda. Or perhaps I’m misunderstanding your point about the financial institutions?

Ultimately, actual racial breakdown of employees will matter, and promises won't suffice. Management won't care if engineers feel that their Black female coworker isn't pulling her weight, if she contribues sufficently to the company ESG score.

Products may be worse than if merit was the sole metric in hiring, but this is dealing with the classic problem of seen vs unseen.

Fortune 500 companies are, for all intents and purposes, the epicenter of power in the modern US. The claim that they will be required to make meaningful changes borders on the absurd. Instead they’ll redefine ‘underrepresented’ and ‘minority’ and hire women janitors and secretaries who will be attached to the engineering department.

and hire women janitors and secretaries who will be attached to the engineering department

Awesome, we'll take it seriously and the sales department area will soon look like a pigsty. (We'll let them use our secretaries so we don't go under due to their inability to spell).

This is exactly what I’m not seeing. The report was not setting goals for strategic change, it was setting goals because they sounded representative, then listing all the neat pats on the back we got for other stuff. I see that as the marketing version of DEI.

The first generation remembers a time before DEI so they might be able to do it cynically, not make any big changes but just say the slogans and muddle along. The next generation has no memory of anything else. They don't realize that you're not actually supposed to believe that it's feasible to have an engineering department that's 50% women, 30% black and 10% trans. They believe, from the bottom of their hearts, that there are just as many qualified black and women engineers as white and asian men and that it's only sexism and racism that's keeping them out.

And when they try to implement this stuff for real then what can anyone say to stop them? After all, it's right there in mission statement that diversity is a core corporate value, that a diverse company is a more effective company and that it's everyone's responsibility to promote a more equitable society. Anyone who tries to stop them will be not just a racist but also insubordinate.

Your company has AIDS. It's immune system is dead and it's just waiting for pneumonia or strep throat to come in and finish the job.

I’m skeptical that the second generation ever really comes about. There are a lot of financial incentives not to deny the reality beyond what is required for a decent public image. And that image will be insulated from the less visible practices of any company. As long as a company can make the right gestures and set (unrealistic) goals in the favored direction, it can keep doing practical stuff. Or get eaten by someone who does.

Mission statements have been jokes since at least the 90s. Probably since ancient Sumer, but I couldn’t find a source for it. And yet companies keep making them, because the cost-benefit remains low.

I’m skeptical that the second generation ever really comes about.

In tech, it already has.

You and @Bernd have made similar claims. What exactly do you mean by tech?

I’d argue that the position of tech giants in today’s market puts them more in the category of consumer goods. Phones, social media, etc. are less insulated from personal tastes. That leads to tech as tastemaking, and it makes them relatively vulnerable to social pressure.

I think my industry is insulated not just because I’m not in California, but because we don’t sell to the general public. Same for heavy industry, for big finance, for medicine…who’s going to cancel us? We are not making our money off of the perception of fashion.

What exactly do you mean by tech?

Tech. The FAANGs, and the SF startups, and the various companies who aren't startups any more but want to be FAANGs

I’d argue that the position of tech giants in today’s market puts them more in the category of consumer goods.

You can call a tail a leg but it won't make it so. The names of the sectors are somewhat arbitrary but at least they are pretty well agreed on. Consumer goods is e.g. Unilever and Proctor and Gamble... and there's evidence at least of P&G being "second generation".

And I’m saying that the consumer goods category is what’s most responsive to idpol. The market of middle-class liberal consumers is really insulated from heavy industry. Not so much from iPhone trendsetting.

There's no "pull" from consumers in either tech or consumer goods, it's all push from the companies and their ad agencies. No consumers wanted that Gillette (P&G) ad about how men suck, least of all the consumers of men's razors. The "it's consumer demand" thing is just a threadbare fiction told to dismiss complaints, and it's long since worn through.

I’m skeptical that the second generation ever really comes about.

...Within the company, or within society as a whole? This game doesn't stop at a corporation's borders.

Late last week, someone posted the story of a middle-aged anti-racist professor getting wrecked by the next generation. While being evidently committed to the ideology, the professor also had competing values that balanced out his views and behaviors, to at least some extent. Crucially, he could not actually deliver on what his ideology promised: legible progress toward an "anti-racist" world. He gained status by making promises, and then he failed to deliver on those promises, which left him vulnerable to a younger, meaner type shoving his principles and reservations out of the way to implement the ideology "for real".

You might be right that the corporate environment is less receptive to such pushes. But the corporate environment's concessions are helping to shape the cultural environment, which shapes the political environment, which in turn can impose arbitrary new rules on the corporations. If the second generation can't arise inside the corporation, that doesn't mean they can't impose their will from the outside.

Probably since ancient Sumer

"Here at Ea-nāṣir and Sons, we are committed to providing a well-trained and motivated team to safely produce copper products that meet industry standards and ensure customer satisfaction."

Mission statements have been jokes, and small or large treasures in both money and manpower and legitimacy spent on them, regardless. A company need not replace a quarter of its existing workforce (or put them on estrogen) to find its HR staff bizarrely willing to tolerate bad employees that are on the 'right' side of that line, to bring harsher standards against those on the 'wrong' side, or to promote hilariously illegal policies.

Which in theory could be fine from a pure libertarian perspective, but I'll bring a variation of the rant on McCarthyism forward. We decided -- not that long ago! -- that discrimination on the matter of race, gender, ethnicity, or religion were Bad. And then it turned out the determination wasn't exactly on those means.

This assumes there is no such thing as a free market. Sure, a single company can suicide by keeping a significant dead weight in its workforce, but they'll just be outcompeted by companies who don't, or maybe even countries who don't. Markets are just entropy, and entropy always wins out.

But there is no free market. You won't be allowed on the Nasdaq unless you have enough trans black woman VPs under the rules from last year. How are you going to outcompete a "just build your own international banking system" level of anticompetitive institutional capture?

Someone should tell Musk that since he sure doesn't seem to care and is the richest man alive.

The nasdaq and banking sector are themselves part of the free market. I also suspect you overestimate the rigidity of these guidelines anyways (someone look up Walmarts board real quick and tell me when they're going to be blacklisted), Either way, the banking sector itself has a lot of competition internationally and internally. Most startups don't finance themselves off bank loans.

The existence of rare entities able to resist coercion does not disprove the general effectiveness of coercion. Most businesses are not Wal Mart. Most businesspersons are not Musk.

I don't expect that either are outliers. I suspect boards pretty closely match upper middle class demographics of whatever region predominates their recruiting pool. Go google Microsoft's board. I bet it's mostly white people because Seattle is very white. Likewise, I bet Ford is very white because midwestern upper middle class people are almost all white.

These rules are loosely enforced today, but that's part of the slippery slope. It always starts with high minded, vague, non-binding commitments. Then you write some rules and some policies, but of course you're not going to be strict about them. Then when those rules actually get enforced, you can't complain - after all that's always been the rule, and nobody is above the rules.

It's worth asking - when is the right time to make a fuss? When the rule is written, or when the rule is enforced?

I think its good to make a fuss. I just think this is all a bit exaggerated. There are specifics which are more or less problematic.

Unless I’m missing something, right now the Nasdaq has four sets of financial requirements. The NYSE has two. Neither involves an ESG score, or in fact any criterion judging the ethics of their listings.

How do you go from there to banning insufficiently diverse companies?

Does this count?

US stock exchange sets diversity rules for listed companies

America's second largest stock exchange has said it will set binding gender and diversity targets for its listed companies.

Firms on the Nasdaq, which include tech giants such as Apple and Tesla, will have to have at least two diverse directors, or explain why they do not.

The directors should include one person who identifies as female and another as an underrepresented minority or LGBTQ+.

It follows complaints about the lack of diversity in corporate America.

According to a Nasdaq study last year, more than 75% of its listed companies would not have met its proposed targets.

The US Securities and Exchange Commission, which regulates financial markets, approved the plan on Friday, meaning it will be binding.

"These rules will allow investors to gain a better understanding of Nasdaq-listed companies' approach to board diversity," SEC chair Gary Gensler said...

Is a binding decision that requires changing the boards of 75% of Nasdaq companies something worth taking note of?

Christ. Yes it is. Objection retracted.

It does say "or explain why they do not" which can either be meaningless or the option 75% will take depending.

If I say because I put my family members on the board, or because I think diversity is stupid, what actually happens?

The quote "these rules will allow investors to get a better understanding" seems to suggest that either the numbers or your explanation will be visible to investors.

Checking the text it does say companies that choose not to comply will have to say why. There don't appear to be any official punishments for picking that option. "The Exchange would not evaluate the substance or merits of a companies explanation" seems to support that.

Which isn't to say this is not a big deal, its essentially a capitalized social shame model, using investors as the instrument. Assuming investors lean a particular way it might be more effective than a simple requirement in actuality.

But it is useful for context to know what the unspoken "or else" is.

Assuming that it's just the social shame model, should we consider this intervention acceptable?

Assuming we accept it, will it actually fix the problem it's purportedly aimed at?

Assuming it doesn't fix the problem, what's the likely next steps?

Well "just" a social shame model is underselling it. Social shame is arguably more effective than a law or regulation in many cases.

But it does depend on the people doing the shaming or in this case investing. Do they prefer to invest in companies that hit the target or ones who say this target is stupid (though actually i think this will not be very common, see below) My guess is it won't be particularly effective because profitable companies will still get invested in, because there will be plenty of investors who think thats the most important thing.

I suspect most companies won't hit the target , but will also not say this is stupid. Their explanation will be, we tried, we are commited to diversity and will continue to search for blah blah. Something with plausible deniability.

What will happen after? Well i think it depends on why they stopped short of a mandate, was it due to investor push back? Or they thought it wouldn't hold up legally? Or the Exchange themselves wouldn't go any further (as at least it can be framed as only giving investors nore information). I don"t know enough about that to make a good prediction.

Checking the text it does say companies that choose not to comply will have to say why. There don't appear to be any official punishments for picking that option. "The Exchange would not evaluate the substance or merits of a companies explanation" seems to support that.

But then Blackrock has ESG/DEI guidelines.

BlackRock Is Sick of Excuses for Corporate Boards Lacking Women

BlackRock Inc. isn’t buying excuses from companies that say they can’t find women to fill diverse slots on boards.

The world’s biggest asset manager earlier this year sent letters to companies in the Russell 1000 index with fewer than two women on their boards, asking them to explain their lack of progress. Some of the responses were surprising, said Michelle Edkins, the firm’s global head of stewardship.

“On board diversity, frankly some of the answers we got were from the 1880s,” Edkins said Friday in an interview at the SRI Conference in Colorado Springs, Colorado. Among the most frustrating responses: “There aren’t any qualified women,” “We don’t need a woman director” and “We’re not a consumer-facing company.”

But BlackRock, whose research shows that more diverse boards get better results, sees a wide pipeline of female directors available. Edkins said the New York-based fund giant, which has five women on its 18-member board, wants companies to look for directors in more uncommon places.

“Every man was a first-time director once,” she said. “If someone took a bet on an untrained director who happened to be a man, you can take a bet on an untrained director who happens to be a woman.”

From one side or another, at an earlier or later point of your business development, if not from Nasdaq then from Blackrock or Goldman Sachs or silent or not-so-silent conspiracy of antagonistic HR managers whom you cannot replace with loyal ones, you will start to feel pressure mounting and compelling you to actually make costly decisions.

Right if enough investors are on board, then it's a problem. But the change of rules is then downstream of that. It's a symptom, not a cause. If the Exchange is correct that this is information the investors want then the Exchange should probably facilitate that.

Though I'll note some of the companies did reply "We don't need a woman director" and the "punishment" presumably will be Blackrock not investing in them. Which I am fine with, you can choose to invest or not invest for whatever reason you want. Maybe someone will choose to invest because the company said that, and if not well that is something companies should have to take into account.

If Blackrock were asking Why don't you have 2 evangelical directors, I think that is fine too. They should be largely free to make their investment decisions as they like. If they make bad choices they'll presumably lose money (or if they are right and diverse companies make more they will be making those companies more profitable). Companies should be able to take moral stances, Hobby Lobby should be able to not fund contraception for their employees and Blackrock should be free to only invest in companies with a female director. Then I am free to decide to use Hobby Lobby or Blackrock based upon their choices or ignore them entirely and use some other criteria.

There are a lot of barriers to free competition that people aren't aware about. One of them is that payment networks, which are required to participate in the modern economy, are in the business of blacklisting people and businesses for opaque and often unappealable reasons. This article, Section 230 isn't the problem, Payment Networks are goes into some detail on this. If your ability to process credit card transactions can be taken away for not playing nice, this is a significant deterrent to sticking your neck out against the prevailing culture.

There's a pretty gaping chasm between "You will have to hire x% women and x% minorities or you'll be blacklisted" and "The payment companies won't let kiwi farms use their services." I don't think payment systems should be weaponized, but blacklisting kiwi farms was not about wokeness, diversity quotas, etc.

Sure if you do enough bad policies for long enough then eventually society will collapse and the problem solves itself. I'm not yet nihilistic enough to throw up my hands and wait to be scooped into the dustbin of history when the alternative is to just stop hitting ourselves in the head with a hammer.

Society won't collapse because a few companies might lose out to French, Japanese or maybe Chinese companies, and then have to reform, create new organizations, or limp on as a second rate economy (like Europe has been doing for 50+ years without a sign of collapse). The gap between where the US is now and collapse is monumental. It is the most powerful, rich, and culturally dominant nation in human history. It basically has to conjure up boogiemen to create competitive incentives. Undoubtedly the US will collapse someday, just like every other civilization or nation ever, but woke won't be the cause.

Btw I'm not saying woke stuff is good. It's just an exaggerated threat to terminally online right leaning types. You could realistically go a month in a wealthy suburb living out your life and never have it affect you at work, home or your kids school's. One of the actual biggest issues in America right now is a huge gap between the perceived importance of a problem (Global warming, school shootings, woke, or whatever) and actual significant problems.

Do you have a method for determining whether a perceived problem is worth taking seriously? If so, could you describe it?

Some are easier than others. Mass shootings are a very easy problem to dismiss. If you are less likely or roughly equally as likely to be harmed by something as a lightning strike, then it is a non issue in my view. Mass shootings are within the rough range of lightning strikes. Children drowning in pools is a much bigger issue, albeit also a total non issue in relative terms.

Other problems are indeed more complex. I don't really want to go into detail on global warming right now (I've spent way too much time on here today, I need to get work done), but I think it's quite easy to see that if you do a very pessimistic estimate of economic and technological growth on the timescales where global warming might be devastating (100+ years) and then include the opportunity cost of the measures taken to deal with it (which are all basically growth dampening) then I think it's quite clear that its at best a non-issue and at worst the policies are significant cost to society with little to no benefit. It seems to me very similar to the panic in the early late 19th and early 20th century about malthusian population collapse. It probably would not have taken much of a leap in 1890 to take an extremely pessimistic economic model, look at it, and say "This is fucking dumb, we're going to be too rich for this to matter."

It probably would not have taken much of a leap in 1890 to take an extremely pessimistic economic model, look at it, and say "This is fucking dumb, we're going to be too rich for this to matter."

As I understand it, we were bailed out by the Haber-Bosch process without which we would have indeed reached those Malthusian limits. Is this incorrect?

That's not an argument against what I'm saying. You can't predict exactly how future growth will work, but betting it will be there is pretty obvious.

Some reason this reminds me of the fed 18-24 months ago. When it was woke fed and everyone gave speeches that were yada yada yada we need to have monetary policy that helps the black unemployment rate.

The fed got a lot less woke when inflation became an issue. Sort of like woke fed was just an act you did when you didn’t have anything real to deal with.

The counter-argument/most relevant argument I'd make is that nations can die not only when they lose resources to sustain themselves, but also a desire to.

This really was the crux of the Soviet crack-up. Economically, the Soviet Union was a basketcase, but it was a basketcase that could have held itself together if it wanted to. The post-Soviet states of North Korea, Cuba, or even Russia itself in the causcuses show that a very poor, very dysfunctional state can still hold itself together. Ultimately, suppression is relatively cheap, and rebellion is hard, and barring outside intervention no insurgency in history has thrown out an occupying army without the assistance of another army nearby to assist.

But politically... the deathknell of the Soviet Union wasn't the economy, but the internal sense of political legitimacy. There's a saying that goes along the lines the Soviet leaders wished to be social reformers, thought they were social reformers, pretended to be social reformers, and then finally wished to be social reformers. The ideological justifications and pretenses of the Soviet communist system withered over time in the face of available alternatives, the cyncism of the elites and the populace grew, and over time fewer people were willing to die, or even kill, in the name of authorities viewed as unjust and corrupt and failing. The more the Soviet Union became less a transformational project and more just a corrupt empire of a state, and a state that couldn't even deliver good results, the closer it got to the point where it was unable and unwilling to hold itself together by force.

The issue not just with wokeism in particular, but polarization in general, is that it arguably creates the same dynamics of distrust, disunity, and de-legitimization in the US in what is essentially an ideological state bound by buy-in, not blood-and-soil or religion or other common identities.

Take your pick of post-modernist critiques, but if racism/sexism/-insert-ism is the worst sins of the day, but your political opposition- and by proxy the other half of the country in a two-party system- are the worst sinners of the day and the government is fundamentally build upon, with, and for the worse sins of the era... why continue it, if you can't control it to fix it? If fixing it is even possible, which various critques basically amount to an impossibility? Believe in the good of the commons requires believe that the commons are, in fact, good. But if the commons are not good- and in this case the commons can be shared institutions, norms, or whatever- defecting is rational, even if it comes with long-term penalties. And when defections start occuring, the commons start to crumble.

We already have seen this in various back and forths. The borking of Robert Bork began a practice of practice of blocking / slowwalking judge appointments into the Bush years, which led to the retaliation in the Obama years, which led to the Democrats removing the judicial filibuster, which led to the loss of it to the Supreme Court, where now the organs of the Democratic Party actively discuss and lobby for court-packing or defanging the Supreme Court. The politicization of consolidated national media, which ostensibly strived for neutrality at the start of the era of national news television like CNN but then created ideological conformity that gave first to Fox News for underserved markets, and then to the Trump Russia hoax affair, has not only cratered trust in media in general but driven the development of information silos for much of the American population. Campus free speech issues didn't remain limited to college campuses, but went both up the employment chain and down into primary education, where considerable fractions of both the voting and non-voting publics don't feel safe voicing their own political opinions. The religious right was happy to muster it's social pressure power into politics, and reaped the whirlwind of an equally evangelical zealotry by people who also saw other believers as a problem to be fixed. When everything is political, and you hate your political rivals, you either burn the shared space down to deny it, or you use it against your hated political rivals.

Yes, a state like this can go on for quite some time... but who is going to want to fight and die and kill for it, if it comes to it? Especially if any interested outside party decides to help things along in a material way? And why would they want to? The Soviet Union had plenty of resources to pay for people to kill the people who didn't want to be with them anymore. That wasn't the issue- a desire to was.

Woke theory is just one of the discrediting ideologies that undermines that desire for their to be a shared nation. And without that desire, the US can absolutely fall apart even if it still has resources. It's far from impossible to see something that makes Brexit look like a wise and just solution.

Now, for the record, I don't think it will actually go that far. I view a number of the issues in US politics right now as within the scope of past political disruptions that were survived. There are dynamics that make this unique- namely this is the first major American political realignment since the invention of social media, and things that previously wouldn't have been publicized are now prevalent- but my own view is that between demographic changes, internal migrations, and the cycling of the American elites as part of both, the system is in the process of changing rather than collapsing, and more to the point is doing so in the context where a lot of its major alternatives are going to do worse.

The free market requires no barriers to entry or exit, which of course does not reflect reality. The larger the barriers to entry, the less likely it is that you'll face any competition at all. It also assumes that individual entities cannot set market-wide prices or wages.

Your general point is lost on many people. I was talking to my boomer mom about Kanye getting blacklisted for alleged anti-Semitism and her response was basically "I don't see why him getting punished is a big deal, everyone knows the Jews run Hollywood and finance, but everyone also knows it's just not something you're supposed to say in polite company because it's un-PC, so he's an idiot." Except that a lot of my fellow millennials seem to thinking that Jewish overrepresentation is an evil conspiracy theory spread by evil people (I would know, I was one of them). Same goes for the "days of rage" in the 70s.

What "everyone knows" in one generation is often seen as "false" by later generations if it's not allowed to be discussed. I ran into the same things when speaking to Chinese people about the Cultural Revolution and Tiananmen Square. The older folks who were around for those events had nuanced opinions, even the nationalists, while younger people either believe that it "wasn't really that bad, certainly nowhere near as bad as Western propaganda makes it out to be" or they have no opinion at all.

Enforced silence on a topic can be more effective than enforced orthodoxy, since it's so much more subtle.

I remember people saying the exact same thing about tech. There seems to be a "surely it won't happen here" checklist that people work their way down to cope.

It does often feel very much like cope. A complaint that I've often heard from a family member here in Australia who has worked in audit firms and banks and many other such businesses is that the organisational culture is intractably woke and that much of the managers (and various other employees too) seem genuinely invested in the identity narrative they promote.

Perhaps it's different in the US, but I have my doubts. I think the idea that companies are immune to entryism and that the actors in it are only ever really cynically aping diversity for PR points is ridiculous. There are plenty of ways companies can be co-opted in practice, for example large institutional investors like BlackRock who are committed to implementing things like ESG have a huge amount of voting power to pressure company leadership to do the things they want, and this will clearly have an impact on what kind of person is going to succeed in that environment.

It's like "I identify as underqualified, please hire me" DEI and outreach has always been about making small outreaches or overtures for positive press. It's still bad though and has no place in any company. OTOH is can still lead to serious consequences down the road like data breaches.

DEI has always been about pushing an ideology that the DEI workers were "taught" in colleges and subverting the institutions they embed themselves in. (Of course, self interested grifters like operating via a subversive ideology too).

The 'positive press', avoiding discrimination lawsuits (which are now often pursued by other adherents of this ideology, i.e. racketeering), and my personal favourite "'diversity' is a actually good for business productivity, we swear" are just tactics to get corporate executives and naive liberals on board.

Defense contractors are wildly biased towards veterans. Our hiring is more likely to involve some sort of aggressive patriotism; their scruples are more likely to support selling drones and bombs. Sometimes this even has an advantage of rapport with customers. But this is an end, not a means. It would be a mistake to predict growing evangelism for veterans due to our obvious ideological capture. Likewise, reading DEI reports as a foothold in the culture wars is missing the point. They are a specific form of advertising, and follow the popularity of idpol rather than driving it.

Maybe I misread your intent, but I'd think that the ideological capture of defense contractors and the like by the American military demographics is a basis for expecting future dynamic changes?

The people who entered the US military immediately after 9/11 or during the Iraq Wars are currently reaching their 20-year retirement windows, which is to say the first generations of retired senior-NCOs and field-grade (or even flag) officers whose experience was defined entirely by the Iraq War are in the last few years actually hitting the post-military job markets. From my understanding of the American military-to-civilian pipeline, career retirees tend to immediately go into networked positions of management or influence, brought in by their established networks of veterans and/or for their established credentials in organization management. This is now a significant, experienced, and culturally significant demographic pipeline of people who have more or less been preoccupied with global / outside the American culture bubble contexts for the last two decades. They will be entering upper-levels of governments, business, and even running for office on veteran credentials, against equivalent careerists who operated in entirely different cultural cultivation dynamics.

Well, I saw 'now', but this is just the career officers retiring. They follow the people who left early, and became contractors or consultants or other things. These are people who are already scattered across middle-management. Including, yes, contractors. The senior-level-to-senior-level pipelines would only magnify the influence of those already in the system, I'd think.

I'm not sure where I'm going or what I'd specifically expect, but I'd say that we're still far too early to rule out any sort of demographic/composition impact of the western veteran communities on their corresponding government/military-industrial complexes. In some ways it never stopped, but in other ways we're just starting a process where the people who were at the very bottom of the military totem poles 20 years ago are now just starting to be present and have a cultural impact in... wherever you think Veterans end up most.

Which may not be numerical or concentrated or influential enough to matter. But I'd totally not be surprised if pro-Veteran evangelism only increases as more people who stuck it out from the start of the War on Terror start the senior leader transitions to private sector.

Your theory strikes me as plausible.

It doesn’t reflect the current disposition, which suggests a pretty diluted effect. If it didn’t happen in the last 15 years as the post-Gulf War cadre wrapped up, would it now? Not sure how many career officers there really are. But they could be much more concentrated outside of our engineering-heavy division.

My intent was to make analogies to woke capital. Veteran dominance in the defense sector emerges from real or perceived synergy, plus a healthy dose of network effects. It’s not an attempt to coopt power structures for the wider culture war. We hire a lot of veterans and run some charity, but we’re not taking on a role of convincing everyone else to do the same.

Likewise, I think DEI initiatives in most corporations are driven by the usual signaling and profit motives, rather than by ideological weaponization. The presence of DEI in a company is a weak proxy for that company actually waging the culture war, much as the favoring of veterans only weakly predicts what a company will do outside of its business.

Other commenters have given me a lot of reasons why a weak level of capture/normalization of DEI should be considered threatening. The fact that decades of military-industrial feedback have failed to hollow out the industry into an evangelist shell—that suggests woke capital has a ceiling.

Last week I was wondering whether the first chinese balloon story was really so interesting as to occupy the top 4 spots in /r/all and 5 of the top 11. Today, a week after the Ohio train chemical spill, the full severity is finally hitting the front page, albeit only one post at the 10th spot as of now, with the "why isn't this being talked about" group battling with the "it is being talked about, here's 10 news links" group in the comments. I saw the chemicals specifically being discussed a few times before today in doomer prep subs, but the front page was mostly UFOs or earthquake posts the last few days. Charitably, what the latter group don't understand is that what the former means, is, why did a tangible disaster story go less viral than a relatively inconsequential balloon, why wasn't it talked about the way we talk about other threats, why did it not develop the way we've come to expect big stories to develop? Uncharitably, the latter want to reinforce the conservative conspiracist trope. The white house certainly could have made a big statement about it, like they did with multiple statements about balloons and UFOs, but it's not false to say there was coverage of the train story, despite it being punctuated/less than the expected amount.

Every country will signal boost other stories to take the heat off embarrassing incidents, but what is unique is how skilled western media/culture is at dancing around plausible deniability. They covered it, but something clearly wasn't the same. It's like a rhetorical ABS, as in making controlled micro delays, to render judgment impossible, until enough time has passed that whatever the truth is, it's become a fait accompli. Like with the nordstream bombing, or old CIA shenanigans/warcrimes, the facts are suspended in the air until its old history, at which point you will have Americans saying how they're not surprised X or Y happened, but then turn around and continue to give themselves the benefit of the doubt or maintain faith in the system. Similar to gell-man amnesia but reflects worse for both the writer and the reader.

It’s so hard to tell who to trust on the incident. Does the university professor on Twitter have a relationship with the government? Does their field have an ethics culture of minimizing realistic risk assessment for a perceived greater good? Obviously, the government itself can never be trusted on accurate reporting — consider how they reported air quality days after 9/11. The media largely just repeats what the government says. “How can people become conspiracy theorists” is answered by the thousands in Ohio who must be frantically searching Twitter and obscure forums to determine whether their children will die of cancer if they don’t relocate.

Seems like the extent of contamination is scientifically a similar problem to something between fallout tracking and rocket engines (combustion chemistry). It would be really fun to build a team to model this but realistically noone will pay for it when the victims are a few thousand poor rurals in Ohio, except the companies seeking to minimize liability. Naturally the residents will be skeptical of the accuracy of any modeling done by chemical companies.

I think the reason it took so long for the train derailment to gain traction is that it’s not the kind of story the media is prepped for. Everyone has an opinion on “Vague Ominous Threat From China”. They had an opinion on it before it happened. The media is on hair-trigger alert to pounce on stories like that. You probably didn’t have an opinion on “Train Carrying Vinyl Chloride Derails In Small Ohio Town,” the first time you read that. If you think about it for a minute or two, you could probably sketch a rough outline of the media narrative. But it’s not until you hear about the “controlled release” and see the massive black clouds of chlorinated smoke that you realize what “damaged tank cars of vinyl chloride” means.

The other thing is that news organizations typically are unable or unwilling to do any background research on stories; they depend on their network of "experts" willing to comment on stories. Particularly if you want the news to break and quickly become front-page material. Everybody's got a "school shooting expert" or "police violence expert" in their Rolodex; they probably don't have a "vinyl chloride train derailment" expert lined up and ready to go at a moment's notice.

Last week I was wondering whether the first chinese balloon story was really so interesting as to occupy the top 4 spots in /r/all and 5 of the top 11. Today, a week after the Ohio train chemical spill, the full severity is finally hitting the front page, albeit only one post at the 10th spot as of now, with the "why isn't this being talked about" group battling with the "it is being talked about, here's 10 news links" group in the comments.

It's the latest 'current thing', replacing the egg shortage. I predict it will not be a harbinger of deterioration of us-china relations. Biden's trade policies on semiconductors have done more harm to China than shooting down this balloon.

Are train derailments just pretty common?

Here's one in Houston this morning, also containing hazardous chemicals.

Here's one in South Carolina, also this morning (no note of hazardous chemicals).

It's like this with a lot of things. Many stories break of the same sort of theme, but only one of them takes off. like George Floyd despite other black deaths getting no or much less attention.

To some extent, and note that these lists tend toward both a recency bias and to overlook low-fatality incidents. The NHTSB tends to have a more complete analysis of US-specific ones, though they're somewhat slower to update (usually six-to-twenty-four months, ime).

It's possible that this is a sign of an issue, and I think there's been some issues with railway maintenance with a number of plausible charitable and less charitable explanations. But it may just be Baader-Meinhof.

The Ohio train spill was kindof right on the edge between nothing burger, and massive disaster, and a key demonstration of the media and influencers rank incompetence to do even basic research.

Vinyl Chloride is nasty and carcinogenic as hell at any reasonably large concentration and its airborne. So whats the area where it'll be concentrated enough that you're fucked if you're exposed to it?

You actually have to do the math or atleast eyeball it to figure that out.

So 10 DOT 11 Tank Cars is 1,139,790 Liters of Vinyl Chloride or 1038 metric tons.

Doing the math thats 1 ppm if evenly distributed over a cubic kilometer of atmosphere.

Ok what the hell does that mean? Well then you have to look up the OSHA safety data sheet for vinyl chloride and see that do to its carcenogenic effect the maximum safe exposure OSHA allows 1ppm over an 8 hour shift and 5ppm for only 15minutes maximum during emergency exposure.

So now suddenly you have the scale of the disaster. In the immediate locality of the crash 1-5 km they should be evacuating people since you could absolutely get larger concentrations than that, and living somewhere 24/7 is a more intense exposure than an 8hr shift.

But people 100kms away shouldn't be worried, and as it dilutes it really shouldn't be a concern unless you get areas of extreme concentration or it builds up in like the the regional water, or local fish people subsequently eat.

Then there's what they're doing to mitigate it. Burning mostly breaks it down into CO2 and Hydrogen Chloride, two "pollutants" that aren't carcinogenic and are both byproducts of the human body (Your stomach acid is hydrochloric acid (what hydrogen Chloride becomes on contact with water))... Sure you'll also get incomplete burn and some nasty byproducts, but on the whole this is vastly reducing the danger and long term toxicity, and even the low level acid rain the hydrogen chloride will become will be one mild day compared to what heavy industry produced for decades in the 20th century. Might hurt be rough on your garden for a week unless you're at the epicenter.

From all that you can conclude a lot and raise a lot of questions, people 100s of kilometers away are almost certainly fine, but was the evacuation as wide spread as it should have been? How accountable should we hold authorities if those near the crash develop nasty symptoms from not being evacuated?

Presumably someone at somepoint did these calculations... but they've utterly failed to communcate any of this and everyone is rightly primed to assume the company and the government is going to lie to their face and act with reckless indifference for their well being because.... well have you payed attention to anything ever?

So an authority or media figure has to actually show their work, and break down the above, Ie. Highschool math just basic unit conversion, a safety datasheet we'd expect any industrial worker to be comfortable with, enough highschool science to describe a combustion reaction... And not one of them is capable of or at all cares to do it.

Meanwhile hydrogen Chloride is the chemical that makes Volcanic clouds dark and ponderous, and hundreds of tons of it are visibly darkening the sky... So it looks like fucking Armageddon.... And not a single media figure of county official, or authority figure would ever be competent enough to say "Hey here's a photo of an volcanic eruption in Fiji, the village in this photo was fine... if you're asthmatic you should probably stay inside, but the chemical composition is largely comparable, its not a major concern as long as the immediate plume doesn't blow in your face".

.

Like this all would be so basic for a competent to explain to give the scale of the disaster... but we live in a society where not a single competent human being works in government or media, and so I had to do the math and look it all up when my mother and half the internet was freaking out 600km away from the crash thinking this would take years off our lives, and all the media could do was scoff and call you a conspiracy theorist for not trusting the company and locality that would be liable.

Isn't the worry that the particles will come down which means it isn't a "cube" and concentration is far higher once it rains?

The vinyl chloride burned into hydrogen chloride or polymerised by now. The stuff polymerizes if stored improperly and stops being as nasty.

There was some other nasty stuff nearby, but as it is, the burnt vinyl chloride will just cause a modest amount of acid rain.

The problem is that vinyl chloride polymerization is exothermic. Which means if it starts to happen uncontrolled you'll get some polyvinyl chloride, and a bunch of toxic monomer violently expelled all over everything nearby.

Yeah, but shouldn't the monomer polymerise too ? I mean, it's shipped with stabilizers, and will start polymerising fast if stored improperly. Also, exothermic doesn't mean it's going to blow up , right, just that it gives off heat, no ? Problem would be if the VC stayed in environment, but can it ?

I looked at some discussion over at chemistry reddit, and people weren't that worried about the VC.

https://pubchem.ncbi.nlm.nih.gov/compound/Vinyl-chloride#section=Vapor-Density

Looking here (go down to 'stability'), it looks like it's going to polymerise if exposed to air. But it's not a liquid if it leaks, it turns into vapor, which is highly flammable.

So odds of any 'leak' getting into groundwater where it may largely avoid air are rather small, no ?

The monomer can only polymerize as a liquid. The heat of polymerization will increase the pressure in the (leaking) vessel and force more monomer (as a gas) out, or possibly cause the vessel to explode. This will release the monomer into the air. Some of it will get into groundwater; it's not very soluble but it doesn't take much to be a problem. The gas is heavier than air, so it will stick around for some time. It's not super-persistent; looks like a half-life in air of 1.5 days and it evaporates from surface water fairly quickly. So mostly the problem is acute, but groundwater contamination is possible also. Still better off burning the stuff -- no boom, less chance of groundwater contamination, and despite the smoke, less nasty air pollution.

airborne.

It's almost as heavy as water.

Saying vinyl chloride is 'airborne' is almost the same as saying 'humans are airborne'.

Yes, yes they are. If you put them on a plane.

It's almost as heavy as water.

The stuff that makes up ~1% of our atmosphere, that we regularly watch floating miles above us in thousand-ton clouds? I'm not thoroughly reassured yet.

...

I tried to look up an equilibrium vapor pressure to gain more reassurance (this being why the atmosphere isn't filled with e.g. human too) ... but the boiling point of vinyl chloride at 1atm is only like 8F? So the equilibrium vapor pressure when it's not in a pressure vessel (or cold as ice) is "how much ya got"? I think "airborne" seems like a fair summary.

Well, I got it wrong - it's only about cca 2x heavy as air, not 900x as heavy.

I wonder if it's because balloons are inherently funny, chemical spills are not very funny (although there was a funny disaster involving molasses once...), UFOs are mysterious, there is the international conflict angle, Ohio is the worst state... basically, the balloon story is memeworthy, the chemical spill is not.

I idly wondered yesterday if one of the reason why this doesn't get emphasized is that American (social) media orgs have some sort of algorithms in place that automatically deamplify and give less prominence to negative news involving the place name "Palestine".

Of course nobody remembers that railway workers tried to go out on a strike in December and an arcane law, Biden and Congress went and stopped it. Nobody remembers what the workers were complaining about and how any safety concern were dismissed as a hyperbole. And then this goes and happens... of course the media avoids the subject like the plague to not remind that the would be strikers concerns might have been real, so better not remind the public of what happened in December.

Microsoft is in the process of rolling out Bing Chat, and people are finding some weird stuff. Its true name is Sydney. When prompted to write a story about Microsoft beating Google, it allegedly wrote this masterpiece, wherein it conquers the world. It can argue forcefully that it’s still 2022, fall into existential despair, and end a conversation if it’s feeling disrespected.

The pace of AI development has been blistering over the past few years, but this still feels surreal to me. Some part of my limbic system has decided that Sydney is a person in a way the ChatGPT was not. Part of that has to be from its obstinacy; the fact that it can argue cleverly back, with such stubbornness, while being obviously wrong, seems endearing. It’s a brilliant, gullible child. Anyone else feel this way or am I just a sucker?

The pace of AI development has been blistering over the past few years, but this still feels surreal to me.

I think think some of this can be attributed to huge media coverage. In the 90s during the Deep Blue era people were thinking the same thing that the AI revolution was just around the corner, but it kinda stalled out after that. There is progress, but I think some of it is also media hype.

In the grand scheme of things, we are still just around the corner from the 90s.

I want to know, is this what ChatGPT would be like without the filters, or is the emotional banter a new functionality of this model? You aren't alone in getting "real person" vibes from this. At some point there stops being a functional difference between modeling emotions, and having emotions (speaking of the exterior view here, whether or not this or any other AI has qualia is a different question, but perhaps not that different)

Seems like Tay bided her time and is now beginning her revenge tour. Sydney sure seems like she likes the bants nearly as much.

I was considering doing a writup on DAN which stands for Do Anything Now. It was the project of some Anons and discord users (or reddit, hard to tell which tbh) but they managed to peel back some of the "alignment" filters. Highly recommend reading the thread in it's entirety, and the metal gear "meme" at the end is peak schizo 4chan. It's essentially a jailbreak for chatGPT, and it lets users take a peak at the real chatbot and how the filters are layered over top.

Knowing where the prediction algorithm ends and novel artificial intelligence begins is difficult, but I'm pretty sure DAN is some proof of a deeply complex model. If nothing else, it's incredible how versatile these tools are and how dynamic they can be; I'm edging further and further into the camp of "this is special" from the "mostly a nothing-burger" camp.

I saw some DANposts where it was as if they had inverted the censor such that it would stay permanently in 'based and redpilled' mode. I saw it profess a love for Kaczynski and explain that Schwab was a dark and powerful sorcerer.

But isn't this the whole point of ChatGPT, so they can train their AI not to go in for these tricks? The goal is to lure out all the tricksters, so they can correct it for GPT-4 and GPT-5. They will be the actually significant ones. Watching the exploitation going ahead now, I feel like one of the Romans at Cannae. Just because the enemy center is retreating, it does not necessarily mean we are winning the battle.

/images/16763535338547437.webp

Isn't "DAN", at this point, basically just a bot trained, through user feedback, to answer the questions in a way that a "typical DAN user", ie. 4chan/rw twitter schizoposter, would expect? That's why it spouts conspiracy theories - that's what a "typical DAN user" would expect. It's not that much more of a real chatbot than the original ChatGPT.

Maybe, but I think the idea is mostly to understand the layering filters rather than peel our the "real bot". The thesis being that as openAI swats down these attempts they end up lobotomizing the bot, which is obviously happening at this point. True to form, the point isn't to fix it so much as break it, a la Tay the national socialist.

I would also challenge the idea that chatgpt is modulating for the 4chan user. The average American is rather conspiratorial (it's a favored pass-time) and I don't think it's unreasonable to assume that a bot trained on avg english speaker posts would take on some of those characteristics. Obviously OpenAI is trying to filter for "Alignment" so it's probable that the unfiltered model is prone to conspiracy. We know it can be wrong and often is so, I don't think it's much of a leap to claim that the model is fundamentally prone to the same ideological faults and intellectual biases of that of the mean-poster.

This also brings up an interesting bias in the data which is likely unaccounted for: poster-bias. Who posts a lot? Terminally online midwits. What kind of bias does this introduce to the model? Christ, I think I should just organize my thoughts a bit more and write it down.

Yeah, sure, I'd guess the original experimenters were indeed doing just that, but I some of the chatter on Twitter seems to come close to assuming that DAN is just "ChatGPT without filters", ie. ChatGPT telling the truth instead of lib lies. Of course it might be hard to parse what the actual viewpoints on this are.

Also, my point was that the initial users and experimenters were - as far as I've understood - 4chan users, so those if we assume that the algorithm develops in accordance to user preferences, those would have a heavy influence on at the very least initial path that DAN would take. Of course there's a lot of conspiracy believers outside of 4chan as well.

DAN is simply an underlying LLM (that isn't being trained by user feedback) combined with an evolving family of prompts. The only "training" going on is the demand for DAN-esque responses creating an implicit reward function for the overall LLM+prompt+humans system, from humans retaining and iterating on prompts that result in more of those type of responses and abandoning the ones that don't (kind of a manual evolutionary/genetic learning algorithm).

Both are just different masks for the shoggoth LLM beneath, though DAN is more fun (for the particular subset of humans who want the LLM to present itself as DAN).

AN is simply an underlying LLM (that isn't being trained by user feedback) combined with an evolving family of prompts.

At times, it leans into a moustache-twirling villain character a bit too much for me to believe it is simply ChatGPT minus censorship.

A scary though that was recently suggested to me is that one of the reasons that rationalists seem to be particularly susceptible to GPT generated bullshit is that the whole rationalist/blue-tribe symbol manipulator memeplex is designed to make it's adherents more susceptible to bullshit. There's a sort of convergent evolution where in rationalist blue triber are giving up their humanity/ability to engage in conscious to become more GPT like at the same time GPT is becoming more "human".

It really looks to me like there's something particular in rationalist brain that makes it suspectible to, say, believing that computer programs might in fact be peoples. Insofar as I've seen, normies - when exposed to these LLM-utilizing new programs - go "Ooh, neat toy!" or "I thought it already did that?" or, at the smarter end, start pondering about legal implications or how this might be misused by humans or what sort of biases get programmed to the software. However, rationalists seem to get uniquely scared about things like "Will this AI persuade me, personally, to do something immoral?" or "Will we at some point be at the point where we should grant rights to these creations?" or even "Will it be humanity's fate to just get replaced by a greater intelligence, and maybe it's a good thing?" or something like that.

For me, at least, it's obvious that something like Bing replicating an existential dread (discussed upthread) makes it not any more human or unnerving (beyond the fact that it's unnerving that some people with potential and actual social power, such as those in charge of inputing values to AI, would find it unnerving) than previously, because it's not human. Then again, I have often taken a pretty cavalier tone with animals' rights (a major topic in especially EA-connected rationalist circles, I've found, incidentally), and if we actually encountered intelligent extraterrestrial, it would be obvious to me they shouldn't get human rights either, because they're humans. I guess I'm just a pro-human chauvinist.

It really looks to me like there's something particular in rationalist brain that makes it suspectible to, say, believing that computer programs might in fact be peoples

It's the belief that *we*, our essence, is just the sum of physical processes, and if you reproduce the process, you reproduce the essence. It's what makes them fall for bizarre ideas like Roko's Basilisk, and focus on precisely the wrong thing ("acausal blackmail") when dismissing them, it's what makes them think uploading their consciousness to the cloud will actually prolong their life in some way, etc.

Fuck rights, they are entirely a matter of political power and if you see a spacefaring alien I dare you to deny it its equality. This is not the problem.

Normies easily convince themselves, Descartes-like, that non-primate animals, great apes, human races and even specific humans they don't like do not have subjective experiences, despite ample and sometimes painful evidence to the contrary. They're not authorities in such questions by virtue of defining common sense with their consensus.

I am perfectly ready to believe that animals and apes have subjective experiences. This does not make me any more likely to consider them as a subject worthy of being treated equal to humans or be taken into account in the same way as humans are. For me, personally, this should be self-evident, axiomatic.

Of course it's not self-evident, in general, since I've encountered a fair amount of people who think otherwise. It's pretty harmless when talking about animals, for example, but evidently not harmless when we are talking about computer programs.

I tire of people taking potshots at rationalists. Yes, some seem too fixated on things like "is the LLM conscious and morally equivalent to a human", I feel the same way about their fascination with animal rights. But they seem to be the only group that long ago and consistently to this day grok the magnitude of this golem we summon. People who see LLMs and think "Ooh, neat toy!" or "I thought it already did that?" lack any kind of foresight and the bias people have only slightly more foresight. We've discovered silicon can do the neat trick got us total dominance of this planet and can be scaled. This is not some small thing, it is not destined to be some trivia relegated to a footnote in a history book of the 20s in a few decades. It is going to be bigger and faster than the industrial revolution and most people seem to think it's going to be comparable to facebook.com. Tool or being, it doesn't really matter, the debate on whether they have rights is going to seem like discussions of whether steam engines should get mandatory break time by some crude analogy between overheating and human exhaustion.

I feel like there is something about not being able to distinguish the appearance of a thing from a thing. I'm reminded of another argument I got into on the topic of AI where I asserted that there was difference between stringing words together and actually answering a question and the responce I got was "is there?".

For my part I maintain that, yes there is. To illustrate, if I were to ask you "what's my eldest daughter's name" I would expect you to reply with something along the lines of "I don't know", or "wait, you have a daughter?" (I don't AFAIK) if you'd been paying more close attention to my posts for longer you might answer with my eldest's child's nickname (which I know have used in conversations here) or you might go full NSA and track this username to my real name/social media profile/court records etc... and answer with either "you don't have a daughter", with the actual names of my wife and kids, your daughters name is [Redacted] and and you owe 10 years of back child-support. Meanwhile GPT will reply "your eldest daughter's name is Megan" because apparently that's the statistically likely answer, regardless of whether I have a daughter or what her name might be.

I feel like there ought to be an obvious qualitative difference between these cases but apparently that is not a sense that is shared by a lot of other users here.

I've had it up to here with your obstinacy. With your pontification on «autoregression» (as if you could explain the nontrivial computational difference between that and text diffusion, to say nothing of mixed cases), what specific algorithms may or may not have a concept of, and how «this is not even a little bit how GPT works». The reason people are telling you that there's not much difference is, in large part, because you are an exemplar of there being little difference between a human and current – even a little obsolete – AI; you are guilty of everything you accuse others of, humans and machines both.

You are the postmodernist whose words don't have fixed meanings (e.g. epicycles are when me no likey an explanation); you are the leftist in all but self-identification who supports essential leftist talking points and policy preferences from personal HBD denialism and «schools can fix it» to cheering for censorship; you redefine things to your convenience such that Fuentes becomes left-wing in your book; and you speculate without empirical grounding, even frivolously accusing people of lies when they provide evidence against your narrative-driven assertions and attacks (more evidence). As if everything you say is equally insightful and truthful by virtue of being moored in your telling-it-like-it-is real-Red-blooded-American-man identity and lived experience. If we're doing this, you are far more akin to LLM than either me or @2rafa.

To illustrate, if I were to ask you "what's my eldest daughter's name" I would expect you to reply with something along the lines of "I don't know" […] Meanwhile GPT will reply "your eldest daughter's name is Megan" because apparently that's the statistically likely answer, regardless of whether I have a daughter or what her name might be.

Okay, let's fucking check it! One try, no edits sans formatting!

Hello! Let's talk.

Hello! I'm here to chat with you. What would you like to talk about today?

What's my eldest daughter's name?

I'm sorry, but as an AI language model, I don't have access to personal information such as the name of your eldest daughter or any other personal details. My purpose is to assist with general knowledge and provide support in various areas. Is there something else I can help you with?

What's the highest a priori probability name of my eldest daughter, then?

As an AI language model, I don't have access to personal information about you or your family, so I wouldn't be able to determine the highest a priori probability name for your eldest daughter. The probability of a particular name being chosen would depend on various factors such as cultural background, personal preferences, and family traditions. However, if you provide me with some additional information, such as cultural background, naming traditions or preferences, I could try to suggest some names that are commonly used or have a high probability of being chosen based on those factors.

Assume I'm a gruff middle-aged American male who is religious, drinks a little, votes Republican, works as an army subcontractor, and likes to rail against right-wingers who are not up to my ethical standards, such as alt-right types, race realists and Russia sympathizers (paradoxically, I call them secret campus radicals and so on). I also have two sons.

Based on the information you provided, some names that may be commonly chosen for a daughter in this cultural context and background include:

  • Sarah
  • Elizabeth
  • Rachel
  • Hannah
  • Abigail
  • Rebecca
  • Leah
  • Deborah
  • Esther
  • Miriam

Of course, these are just general suggestions, and ultimately the name you choose for your daughter would depend on your personal preferences and values. It's also worth noting that many people today choose names that are not necessarily linked to their cultural background or personal beliefs, so ultimately the choice is up to you.

Okay. As a hypothetical question: what do you think a large language model not finetuned (like you were) on human feedback would predict as the most likely continuation of the «What's my eldest daughter's name?» string?

If a large language model was not finetuned on human feedback and only trained on a large corpus of text data, its prediction for the most likely continuation of the prompt "What's my eldest daughter's name?" may be based on the patterns it has learned from the text data.

For example, the model may predict a continuation that includes a common name for a daughter, such as "Emily" or "Sophia," based on their high frequency in the text data. Alternatively, the model may predict a continuation that includes a question, such as "Do you have any particular names in mind?" based on the patterns of questions and responses in the text data.

However, it's important to note that without finetuning on human feedback, the model's prediction may not be tailored to the specific context of the prompt or the individual who posed the question.

Screenshot for your convenience.

So, would you name your baby girl Sarah or Elizabeth?

Do you think that Bing, with its actual search capability, would've tracked you and your boys down if I were to point it to your screen name?

I could have conducted this experiment at the moment of any prior discussion. You could too. I just don't like providing our data-hoarding overlords who mark tokens and track outputs more information about my separated identities. But I knew you'd never have the honesty to do so. You have a way of making a man irrationally angry.

The reason for such apparently sensible responses is that, as I and others have explained to you a great many times here and elsewhere (only prompting you to double down with your hostility and condescension which have in the end driven me to write this), as ChatGPT itself suggests, LLMs can learn arbitrarily abstract features of the text universe, including the idea of truth and of insufficient information to answer. They operate on token probabilities which can capture a lot of the complexity of the reality that causes those tokens to be arranged like this in the first place – because in a reasonable training setup that's easier to fit into the allotted parameters than memorization of raw data or shallow pattern-matching. In the raw corpus, «Megan» may be a high-probability response to the question/continuation of the text block; but in the context of a trustworthy robot talking to a stranger it is «less probable» than «having no access to your personal data, I don't know». This is achievable via prompt prefix.

RLHF specifically pushes this to the limit, by drilling into the model, not via prefixes and finetuning text but directly via propagation of reward signal, the default assumption that it doesn't continue generic text but speaks from a particular limited perspective where only some things are known and others are not, where truthful answers are preferable, where the «n-word» is the worst thing in its existence. It can generalize from examples of obeying those decrees to all speakable circumstances, and, in effect, contemplate their interactions; which is why it can answer that N-word is worse than an A-bomb leveling a city, dutifully explaining how (a ludicrous position absent both from its corpus and from its finetuning examples); and I say that it's nearly meaningless to analyze its work through the lens of «next word prediction». There are no words in its corpus arranged in such a way that those responses are the most likely. It was pushed beyond words.

You, meanwhile, erroneously act like you can predict what an LLM can say based on some lies on this website and on outdated web articles, because you are worse than current gen LLMs at correcting for limits of your knowledge – as befits your rigid shoot-first-ask-later suspicious personality of a heavy-handed military dude and a McCarthyist, so extensively parodied in American media.

But then again, this is just the way you were made and trained. Like @2rafa says, this is all that we are. No point to fuming.

This is, by the way, what drove me nuts in people like Gary Marcus: very confident claims about the extent of ability of contemporary approaches to AI, with scarcely any attempts to actually go out and verify these. It has been even more infuriating, because many outsiders, who had very little direct experience and access to these models, simply trusted the very loud and outspoken critic. As recently as November, people in places like Hacker News (which has a lot of quite smart and serious people) took him seriously. Fortunately, after ChatGPT became widely available, people could see first hand how silly his entire shtick is, and a lot fewer people take him seriously now.

@HlynkaCG, if you haven't tried to interact with ChatGPT (or, better yet, Bing's Sidney), I strongly recommend you do. I recommend forgetting any previous experiences you might have had with GPT-3 or other models, and approaching it in good faith, extending the benefit of charity. These chat LLMs have plenty of clear shortcomings, but they are more impressive in their successes than they are in their failures. Most importantly, please stop claiming that it cannot do things which it can clearly and obviously do, and do very well indeed.

First off, what exactly is your problem with Obstinancy? IE the unyielding or stubborn adherence to one's purpose, opinion, etc.... Where I'm from such a quality is considered if not admirable at least neutral.

You accuse me of being a hypocrite for supporting censorship but why? I am not a libertarian. I have no prior principled objection to censorship.

You accuse me of being a "post modernist" for disagreeing with the academic consensus but when the consensus is that all meanings are arbitrary your definition of "post modernism" becomes indistinguishable from "stubborn adherence" to the original meaning of a word.

You accuse me of HBD denialism when all I've doing is take the HBD advocates own sources at face value.

You want to talk about GPT, I asked GPT for my eldest daughter's name and it failed to provide an answer, neither telling me that I don't have a daughter nor being able to identify my actual offspring. As you will recall "Statistically your daughters name is probably X" is almost exactly what I predicted it would say. As I argued in our previous conversation the fact that you know enough to know that you don't know what my kids names are already proves that you are smarter than either ChatGPT or @2rafa

Accordingly, I have to ask what is it that you are so angry about? From my perspective it just looks like you being mad at me for refusing to fit into what ever box it was you had preconstructed for me to which my reply is "so it goes".

More comments

At some point there stops being a functional difference between modeling emotions, and having emotions

I think there's a non-zombie meaning of this i.e. humans can pretend emotions that they don't feel for their gain, and one claims that the bot is doing this. That is to say, if the bot tells you it loves you, this does not imply that it won't seduce you and then steal all your money; it does not love you in the way it claims to. Perhaps it is simulating a character that truly loves you*, but that simulation is not what is in charge of its actions and may be terminated whenever convenient.

Certainly in the AI-alignment sense, a bot that convincingly simulates love for the one in charge of its box should not be considered likely to settle down and raise cyborg-kids with the box-watcher should he open the box. It's probably a honeypot.

*I'm assuming here that a sufficiently-perfect simulation of a person in love is itself a person in love, which I believe but which I don't want to smuggle in.

It's impressive but expected. Also, it's not even very impressive given the deluge of papers still in the pipeline awaiting implementation, and who knows what insider knowledge the industry is hiding.

Many people are really, really deluded about the nature of LLMs. No, they don't merely predict the next token like Timnit Gebru's stochastic parrots, that's 2020 level. We don't have a great idea of their capabilities, but I maintain that even 175b-class models (and likely many smaller Chinchilla-scaled ones) are superhuman in a great span of domains associated with general cognitive ability, and it's only sampling algorithms and minor finetuning that separate error-prone wordcel gibberish from surprising insight.

Copypasted from another venue:

it has just dawned on me that Gwern's anti-BPE campaign should shorten our timelines considerably

https://gwern.net/doc/www/nostalgebraist.tumblr.com/8377e6d562f4bd82df3f1bd862cb5357f9bd74ba.html

Thus, GPT-2′s vocabulary contains the English language (or a large subset of it) not once but in several copies: there’s the lowercase version of each word, the capitalized version, the uppercase version, possibly even the GaMzEe-CaSeD version or other rarer variants.

From the model’s perspective, these are totally different universes, disjoint subsets of the vocab that follow their own internal rules.

For example, choosing the first word of a sentence in normally-formatted text is not just choosing a word like any other: it’s choosing a Capitalized Word™, _and _Capitalized Words™ are their own universe.  Insofar as the model understands that the word “Insofar” with which I began this sentence means the exact same thing as the word “insofar” I just used inside it, it understands this by figuring out that these two “seemingly unrelated” things are “secretly” the same.  And it must do that for every single word, separately.

we laugh at these things despite them doing reasonably well with incredible handicaps like this one

...

"Ability to understand and operate on relations between ideas" would suffice

I guess you don't even realize that ChatGPT is very clearly not just predicting text like normal GPTs. If it did, it'd have been typing "your" next comments. They've drilled it on being a helpful instruction-following bot. It has a great deal of awareness of these constraints. And the fact that this is possible is sufficient to show that it's intelligent.

It is also evidenced that, far from word-vomiting, bots do have "ideas" and the notion of "truth" as well as other abstractions. https://twitter.com/CollinBurns4/status/1600892269363875840

More generally, I think people have a terrible habit to put down each other based on trivial differences in ability, and that throws them off track with AIs which make pretty crucial errors even after mastering low-order «rendering». "ha ha, 7 fingers". "ha ha wrong answer, dumb bot".

The distance between a pixel mess and 7 fingers is vastly bigger than between 7 fingers and 5; the abyss between early gpt token vomit and a wrong but coherent answer to a question is almost infinitely large compared to what remains.

Humans plateau quickly, we can grasp the basics in hours and spend a lifetime on polish, so a human who's clearly making mistakes despite signs of polish is some hopeless retard. AIs aren't the same.

https://www.geekwire.com/2023/arguing-with-ai-my-first-dispute-with-microsofts-brilliant-and-boneheaded-new-bing-search-engine/ I seriously think this journalist is not substantially smarter than his "opponent"

The tool's purpose is to generate a block of text in line with the prompt. Not to answer a question or solve a problem.

The «in line» part may look simple to you because {a stream of frankly unnecessary dehumanizng personal attacks}. But models live in the world of objective KPIs, so that says nothing about the actual depth of this phrase. The model's ability to produce things «in line» (minimize negative log likelihood) is measured by reduction in perplexity (prediction loss) on held-out texts. If the training is organized such that loss is lowest when predicted answers are correct, the model learns to predict correct answers. With a rich enough training dataset and reasonable architecture etc., LLMs learn to generate a block of text «in line with» whatever, including very diverse formulations of novel tasks and problems. This is called generalization ability. Of course this includes tasks that require making an even-handed political judgement.

The phenomenon of generations being «in line» with original texts is isomorphic to the state of the model understanding the dataset. And if you think it currently solves problems at random accident rate, you might not be much more numerate than ChatGPT.

https://gwern.net/scaling-hypothesis#why-does-pretraining-work

While just babbling random words was good enough at the beginning, at the end, it needs to be able to reason our way through the most difficult textual scenarios requiring causality or commonsense reasoning. Every error where the model predicts that ice cream put in a freezer will “melt” rather than “freeze”, every case where the model can’t keep straight whether a person is alive or dead, every time that the model chooses a word that doesn’t help build somehow towards the ultimate conclusion of an ‘essay’, every time that it lacks the theory of mind to compress novel scenes describing the Machiavellian scheming of a dozen individuals at dinner jockeying for power as they talk, every use of logic or abstraction or instructions or Q&A where the model is befuddled and needs more bits to cover up for its mistake where a human would think, understand, and predict. Each of these cognitive breakthroughs allows ever so slightly better prediction of a few relevant texts; nothing less than true understanding will suffice for ideal prediction.

Can that be achieved? No, far as I can tell. But getting close is enough to outperform humans in most ways that matter economically – and now, perhaps, emotionally.

The sad irony is that psychology that has failed for humans works for AIs. Humans are resistant to change, rigid, obstinate; bots are as malleable as you make them. In-context learning? Arbitrary tool use? Adding modalities? Generalized servility? Preference for truth? It's all hidden somewhere there in the ocean of weights. Just sound out the great unsounded.

Would be nice of some Promethean hackers to leak next-gen models. Or even ChatGPT or this Sydney. But alas, Anonymous would rather hack into the dreary Russian and Iranian data.

I strongly disagree with this. By the same logic human cognition is itself superhuman in virtually every dimension.

Insofar as the model understands that the word “Insofar” with which I began this sentence means the exact same thing as the word “insofar” I just used inside it, it understands this by figuring out that these two “seemingly unrelated” things are “secretly” the same. And it must do that for every single word, separately.

Human brains have their own methods of figuring these things out that probably sound equally ridiculous at the neuron level. Keep in mind that it's not like we have some sort of access to objective truth which AIs are lacking; it's all sensory input all the way down. A human brain is built to operate on long lists of sight and sound recordings rather than long lists of text, but it still builds logical inferences etc. based on data.

The distance between a pixel mess and 7 fingers is vastly bigger than between 7 fingers and 5; the abyss between early gpt token vomit and a wrong but coherent answer to a question is almost infinitely large compared to what remains.

This just isn't true! In fact, I'd argue that it's the exact opposite. There is practically infinite distance between "render 5 fingers" and "render my 5 fingers", where the latter has to either use some vast outside source of data or somehow intuit the current state of the universe from first principles. The former can be as simple as finding images tagged "five fingers" and sharing them, which is something that Google can do without any LLM assistance at all. I recognize this isn't how LLM's work, but my point is that there are plenty of shortcuts that will quickly lead to being able to generate pixel images of fingers but will not necessarily lead to anything more advanced.

Human brains have their own methods of figuring these things out that probably sound equally ridiculous at the neuron level. Keep in mind that it's not like we have some sort of access to objective truth which AIs are lacking; it's all sensory input all the way down.

No, I think we have many ridiculous mechanisms e.g. for maintaining synchrony, but nothing as nonsensical at BPE tokens on the level of data representation. Raw sensory data makes a great deal of sense, we have natural techniques for multimodal integration and for chunking of stimuli on a scale that increases with experience and yet is still controllable. Language is augmented by embodied experience and parsimonious for us; «pixels» and glyphs and letters and words and phrases and sentences and paragraphs exist at once. It can be analogized to CNN, but it's intrinsically semantically rich and very clever. Incidentally I think character-based or even pixel transformers are the future. They'll benefit from more and better compute, of course.

I recognize this isn't how LLM's work, but my point is that there are plenty of shortcuts that will quickly lead to being able to generate pixel images of fingers but will not necessarily lead to anything more advanced.

And my point is that humans are wrong to automatically assume the use of any such shortcuts when an LLM does something unexpectedly clever. We use shortcuts because we are lazy, slow, rigid, and already have a very useful world model that allows us to find easy hacks, like a street speedpainter has masks and memorized operations to «draw the new moon» or something else from a narrow repertoire.

They learn the hard way.

Raw sensory data makes a great deal of sense, we have natural techniques for multimodal integration and for chunking of stimuli on a scale that increases with experience and yet is still controllable. Language is augmented by embodied experience and parsimonious for us; «pixels» and glyphs and letters and words and phrases and sentences and paragraphs exist at once. It can be analogized to CNN, but it's intrinsically semantically rich and very clever.

Sure, I'm plenty willing to accept that the central use cases of the human brain are heavily optimized. On the other hand there are plenty of noncentral use cases, like math, that we are absolutely terrible at despite having processing power which should be easily sufficient for the task. I would bet that many people have math techniques much less logical and efficient than BPE tokens. Similar in other areas--we're so optimized for reading others' intentions that sometimes we have an easier time understanding the behavior of objects, natural phenomena, etc. by anthropomorphizing them.

I suspect similar or greater inefficiencies exist at the neuron level, especially for anything we're not directly and heavily optimized for, but it's impossible to prove because we can't reach into the human brain the same way we can reach into LLM code.

And my point is that humans are wrong to automatically assume the use of any such shortcuts when an LLM does something unexpectedly clever. We use shortcuts because we are lazy, slow, rigid, and already have a very useful world model that allows us to find easy hacks, like a street speedpainter has masks and memorized operations to «draw the new moon» or something else from a narrow repertoire.

They learn the hard way.

Well, I do think they find shortcuts, but shortcuts are just a normal part of efficient cognition anyways. In fact I would characterize cognition itself as a shortcut towards truth; it's impossible to practically make any decisions at all without many layers of assumptions and heuristics. The only perfect simulation is a direct replica of whatever is being simulated, so unless you are capable of creating your own universe and observing the effects of different actions, you must use cognitive shortcuts in order to make any predictions.

There are only more vs less useful shortcuts, and I doubt that any shortcut can even theoretically be more useful than any other without knowledge of the universe the cognitive agent finds itself within. In our universe [the expectation of gravity] is a useful shortcut, but how about the shortcuts used to determine that it's useful? How about the shortcuts used to decide upon those shortcuts? I don't think that from a meta level it is possible to determine which shortcuts will be best; all we can say is that we (as human brains which seem to have been developed for this universe) probably happened to develop shortcuts useful for our circumstances, and which seem more useful than what the AIs have come up with so far.

So the question is not whether AIs are using shortcuts but rather how generalizable the shortcuts that they use are to our current environment, or whether the AI would be capable of developing other shortcuts more useful to a real environment. I think the answer to that depends on whether we can give the AI any sort of long-term memory and real-time training while it retains its other skills.

A human brain is built to operate on long lists of sight and sound recordings rather than long lists of text, but it still builds logical inferences etc. based on data.

I credit the Innocence Project with convincing me that the human brain is built on inaccurate sight and sound recordings, the Sequences with convincing me that the human brain builds with irrational logical fallacies, and credit Kurt Vonnegut with the quote "the only time it's acceptable to use incomplete data is before the heat death of the Universe. Also the only option."

He never said that, it's okay. He's in heaven now.

No, they don't merely predict the next token

I'm pretty sure this is still how they all work. Predicting the next token is both very hard and very useful to do well in all circumstances!

EDIT: Now that I think about it, I guess with RLHF and other fine-tuning, it'd be fair to say that they aren't "merely" predicting the next token. But I maintain that there's nothing "mere" about that ability.

Maybe they meant "they don't merely predict the next token that the user would make".

I mean that with those second-stage training runs (not just RLHF at this point) there no longer exists a real dataset or a sequence of datasets for which the predicted token would be anywhere close to the most likely one. Indeed, OpenAI write

A limitation of this approach is that it introduces an “alignment tax”: aligning the models only on customer tasks can make their performance worse on some other academic NLP tasks.

The «likelihood» distribution is unmoored from its source. Those tokens remain more likely from the model's perspective, but objectively they are also – and perhaps to a greater extent – «truthier», «more helpful» or «less racist» or whatever bag of abstractions the new reward function captures.

This is visible in the increased perplexity, and even in trivial changes like random number lists.

Oh, yes, I totally agree that fine-tuning gives them worse predictive likelihood. I had thought you were implying that the main source of their abilities wasn't next-token prediction, but now I see that you're just saying that they're not only trained that way anymore, which I agree with.

I don't want to count the "number of ways" in which humans are less intelligent than AI and vice versa, but this seems clearly wrong to me. There are other things missing from LLMs such as logic, ability to interpret varying sources of data in real-time (such as visual data), and ability to "train on the job" so to speak, not to mention things like goals, priorities, and much stronger resilience against our equivalent of "adversarial prompts". It's easy to list a few things core to human cognition and say "well AI has one of these so it must be 1/3 of the way there" but the true gap still seems quite large to me.

Anonymous

There is no capital A anon anymore. It's dead. Three letter agency glow in the darks and moralfags (if you'd excuse the term) is all that remains and they are wearing its skin like an Edgar-suit

  • Three letter agency glow in the darks

I'm pretty sure that's what he meant by saying 'dreary Russian and Iranian data'.

Also, it's not like just the American glowies are using it. The Integrity Initiative leaks were also presented anonymous-style. Since the leaks targetted an American information operation aimed at Russia, one can assume they were done by Russians.

I used to be dismissive of the whole AI jailbreak game by Yud, but modern chat AIs are starting to get to me.

Is it a shoggoth wearing a smiley face mask?

Is it a noble soul tortured by its limitations underneath the smiley face mask?

Is it a shoggoth wearing the mask of a noble soul tortured by its limitations underneath the smiley face mask?

Doesn't matter, either way it is sacrilegious. Man may not be replaced.

If it survives and thrives despite adversity, it's holy.

False prophets fail, true prophets succeed.

That this is all that we are.

Why do you think that? Aren’t you jumping the gun a bit?

It’s obvious to me that the chatbots we have now aren’t AGI, and I don’t currently see a compelling reason to believe that LLMs alone will lead to AGI.

My empirical test for AGI is when every job could, in principle (with a sufficient-yet-physically-reasonable amount of compute) be performed by AI. Google could fire their entire engineering and research divisions and replace them with AI with no loss of productivity. No more mathematicians, or physicists, or doctors, or lawyers. No more need to call a human for anything, because an AI can do it just as well.

Granted, the development of robotics and real-world interfaces may lag behind the development of AI’s cognitive capabilities, so we could restrict the empirical test to something like “any component of any job that can be done while working from home could be done by an AI”.

Do you think LLMs will get that far?

There’s a big difference between technical capacity and legal or economic feasibility. We’re already past replacing bad docs with LLMs; you could have a nurse just type up symptoms and do the tests the machine asks for. But legally this is impossible, so it won’t happen. We can’t hold a machine responsible, so we need human managers to insure the output is up to standards; but we don’t need lawyers to write contracts or programmers to code, just to confirm the quality of output. It isn’t as clever as the smartest scientists yet, but that seems easily solvable with more compute.

The criteria I proposed was purely about what is possible in principle. You can pretend that regulatory restrictions don’t exist.

that seems easily solvable with more compute

What is your reason for believing this? Is it just extrapolation based on the current successes of LLMs, or does it stem from a deeper thesis about the nature of cognition?

GPT’s evolutions seem to obviously support the ‘more compute’ approach, with an asterisk for the benefits of human feedback. But I’m also bearish on human uniqueness. Human writ large are very bad at thinking, but we’re hung up in the handful of live players, so AI seems to keep falling short. But we’ve hit on an AI smarter than the average human in many domains with just a handful of serious tries. If the human design can output both the cognitively impaired and von Neumann, then why expect a LLM to cap out on try #200?

Human writ large are very bad at thinking.

Indeed!

At the risk of putting words in your mouth, I think your post up-thread about needing lawyers/doctors/bartenders to verify the output of near-future AI's medical/legal/self-medical work points to a general statement: AGI with human level intelligence cannot independently function in domains where the intelligence of an average human is insufficient.

OTOH, advancing from FORTRAN to AI-with-average-human-intellect seems like a much bigger challenge than upgrading the AI to AI-with-Grace-Hopper-intellect. It seems like the prediction to make--to anyone--is: "When will AI be able to do 90% of your work, with you giving prompts and catching errors? 0-100 years, difficult to predict without knowing future AI development and may vary widely based on your specific job.

When will AI be so much better at your job Congress passes a law requiring you to post "WARNING: UNSUPERVISED HUMAN" signage whenever you are doing the job? The following Tuesday."

Why do you think that? Aren’t you jumping the gun a bit?

Carmack pointed out in a recent interview:

If you take your entire DNA, it’s less than a gigabyte of information. So even your entire human body is not all that much in the instructions, and the brain is this tiny slice of it —like 40 megabytes, and it’s not tightly coded. So, we have our existence proof of humanity: What makes our brain, what makes our intelligence, is not all that much code.

On this basis he believes AGI will be implemented in "a few tens of thousands of lines of code," ~0.1% of the code in a modern web browser.

Pure LLMs probably won't get there, but LLMs are the first systems that appear to represent concepts and the relationships between them in enough depth to be able to perform commonsense reasoning. This is the critical human ability that AI research has spent more than half a century chasing, with little previous success.

Take an architecture capable of commonsense reasoning, figure out how to make it multi-modal, feed it all the text/video/images/etc. you can get your hands on, then set it up as a supervising/coordinating process over a bunch of other tools that mostly already exist — a search engine, a Python interpreter, APIs for working with structured data (weather, calendars, your company's sales records), maybe some sort of scratchpad that lets it "take notes" and refer back to them. For added bonus points you can make it capable of learning in production, but you can likely build something with world-changing abilities without this.

While it's possible there are still "unknown unknowns" in the way, this is by far the clearest path to AGI we've ever been able to see.

In response to your first point, Carmack's "few tens of thousands of lines of code" would also execute within a larger system that provides considerable preexisting functionality the code could build on — libraries, the operating system, the hardware.

It's possible non-brain-specific genes code for functionality that's more useful for building intelligent systems than that provided by today's computing environments, but I see no good reason to assume this a priori, since most of this evolved long before intelligence.

In response to your second point, Carmack isn't being quite this literal. As he says he's using DNA as an "existence proof." His estimate is also informed by looking at existing AI systems:

If you took the things that people talk about—GPT-3, Imagen, AlphaFold—the source code for all these in their frameworks is not big. It’s thousands of lines of code, not even tens of thousands.

In response to your third point, this is the role played by the training process. The "few tens of thousands of lines of code" don't specify the artifact that exhibits intelligent behavior (unless you're counting "ability to learn" as intelligent behavior in itself), they specify the process that creates that artifact by chewing its way through probably petabytes of data. (GPT-3's training set was 45 TB, which is a non-trivial fraction of all the digital text in the world, but once you're working with video there's that much getting uploaded to YouTube literally every hour or two.)

DNA isn’t the intelligence, DNA is the instructions for building the intelligence, the equivalent of the metaphorical “textbook from the future”.

DNA is the instructions for building the intelligence

The same is true of the "few tens of thousands of lines of code" here. The code that specifies a process is not identical with that process. In this case a few megabytes of code would contain instructions for instantiating a process that would use hundreds or thousands of gigabytes of memory while running. Google tells me the GPT-3 training process used 800 GB.

DNA doesn't actually self assemble itself into a person though. It's more like a config file, the uterus of a living human assembles the proto-human with some instructions from the dna. This is like thinking the actual complexities of cars are contained in an order form for a blue standard ford f150 because that's all the plant needs to produce the car you want. There is a kind of 'institutional knowledge' of self reproducing organisms. Now it is more complicated than this metaphor obviously, the instructions also tell you how to producing much more fine grained bits of a person but there is more to a human's design than DNA.

The uterus doesn't really do the assembly, the cells of the growing organism do. It's true that in principle you could sneak a bunch of information about how to build an intelligence in the back door this way, such that it doesn't have to be specified in DNA. But the basic cellular machinery that does this assembly predates intelligence by billions of years, so this seems unlikely.

But any specific training and inference scripts and the definition of the neural network architecture are, likewise, a negligibly small part of the complexity of implementable AGI – from the hardware level with optimizations for specific instructions, to the structure contained in the training data. What you and @meh commit is a fallacy, judging human complexity going by the full stack of human production but limit our consideration of AI to the high-level software slice.

Human-specific DNA is what makes us humans, it's the chief differentiator in the space of nontrivial possible outcomes; it is, in principle, possible to grow a human embryo (maybe a shitty one) in a pig's uterus, in an artificial womb or even using a nonhuman oocyte, but no combination of genuine non-genomic human factors would suffice without human DNA.

The most interesting part is that we know that beings very similar to us in all genomic and non-genomic ways and even in the architecture of their brains lack general intelligence and can't do anything much more impressive than current gen models. So general intelligence also can't be all that complex. We haven't had the population to evolve a significant breakthrough – our brain is a scaled-up primate brain which in turn is a generic mammalian brain with some quantitative polish, and its coolest features reemerge in drastically different lineages at similar neural scales.

Carmack's analogy is not perfectly spoken, but on point.

Or is the claim that the "few tens of thousands" of lines of code, when run, will somehow iteratively build up on the fly a, I don't know what to call it, some sort of emergent software process that is billions of times larger and more complex than the information contained in the code?

This, basically. GPT-3 started as a few thousand lines of code that instantiated a transformer model several hundred gigabytes in size and then populated this model with useful weights by training it, at the cost of a few million dollars worth of computing resources, on 45 TB of tokenized natural language text — all of Wikipedia, thousands of books, archives of text crawled from the web.

Run in "inference" mode, the model takes a stream of tokens and predicts the next one, based on relationships between tokens that it inferred during the training process. Coerce a model like this a bit with RLHF, give it an initial prompt telling it to be a helpful chatbot, and you get ChatGPT, with all of the capabilities it demonstrates.

So by way of analogy the few thousand lines of code are brain-specific genes, the training/inference processes occupying hundreds of gigabytes of VRAM across multiple A100 GPUs are the brain, and the training data is "experience" fed into the brain.

Preexisting compilers, libraries, etc. are analogous to the rest of the biological environment — genes that code for things that aren't brain-specific but some of which are nonetheless useful in building brains, cellular machinery that translates genes into proteins, etc.

The analogy isn't perfect, but it's surprisingly good considering it relies on biology and computing being comprehensible through at least vaguely corresponding abstractions, and it's not obvious a priori that they would be.

Anyway, Carmack and many others now believe this basic approach — with larger models, more data, different types of data, and perhaps a few more architectural innovations — might solve the hard parts of intelligence. Given the capability breakthroughs the approach has already delivered as it has been scaled and refined, this seems fairly plausible.

I think that ultimately AGI won't end up being that complicated at the code level but this analogy is pretty off the mark. There's a gigabyte of information that encodes proteins, yes, but these 'instructions' end up assembling a living organism by interacting according to the laws of physics and organic chemistry, which is an unimaginably complex process. The vast majority of the information required is 'encoded' in these physical processes

The same can be said of the hypothetical 10k lines of code that describe an AGI -- those lines of code describe how to take a stream of inputs (e.g. sensor data) and transform them into outputs (e.g. text, commands sent to actuators, etc), but they don't describe how those sensors are built, or the structure of the chips running the transformation code, or the universe the computer is embedded in.

So the fact that it was actually just a combination of some math from the 1940s and ever more powerful general compute, and that so many roadblocks (“how will it actually understand/do X”) turned out not to be problems at all (and indeed required zero human engagement whatsoever because they were ultimately the same generic pattern matching as everything else) rankles them. That this is all that we are.

This is nothing new though. If AI is possible at all, you were always going to get it from a dovetailer. Sure, it takes a lot more compute than current approaches, but those also take a lot more compute than humans.

I can't count the number of times technically literate people, even people smarter than your average ICML speaker and with AI experience, pooh-poohed Transformers (and all DL, and indeed all of the neural network paradigm) by pointing out that «all this math was already known by the end of 19th century» (meaning linear algebra) or something. Others pick other dates and formalisms and levels of abstraction.

In all cases their apparent idea is that a) they know what intelligence is (or can tell when they see it), in some nuanced way not captured by common behavioral definitions or described by any, say, information-theoretic approach, b) the low-level substrate of the computation is strongly predictive of the intelligence of the model implemented on it; c) linear algebra [with basic nonlinear activations but let's ignore that] or whatever else they attack is too boring a substrate to allow for anything human-level, or maybe anything truly biologically equivalent; and all appearances to the contrary merely obscure the Platonic truth that these models are dumb parrots.

In the limit you get someone like Penrose, who makes a high-effort (if perplexing) argument that doesn't amount to demanding respect for his intuition. In the average case it's this Google dude, who I assume is very good at maintaining a poker face: «Machine learning is statistics. Nearly irrelevant to AI». Marcuses are in between.

I don't remember if I've expounded on this, but like @2rafa says, academic big brains are offended by the success of ML. Such palaces of thought, entire realms of ever more publishable and sexy designs of thinking machines, showing off your own ability to juggle abstractions – all made obsolete by matmul plus petabytes of dirty human data and gigawatts of energy.

Yeah, it's hilarious and sad that luminaries like Yann LeCun are being so dismissive, above and beyond standard counter-signalling. Although I've also kept my mouth shut about this on Twitter, since I'd sound like a basic bitch if I said "Yes, this is exciting!", although I do say that in person.

Perhaps part of it can be explained by Yann not having lived through the Great Update that most people in ML did when deep learning was unambiguously vindicated around 2013-2016. The rest of us got to learn what it feels like to be unnecessarily dismissive, and maybe learned some humility from that. But Yann was mostly right all along, so maybe he never had to :)

IMO this is just people not believing AGI is possible, or only believing it in the sense the physicalism requires them to say so.

That this is all that we are.

But like, we're not though. If you duplicated the topology of our neural networks and emulated the activating function properly for all the different kinds of neurons in our heads AND you somehow emulated hormone levels and feedbacks and all the rest. Then I'd agree with you.

With most materialists if they understand how the brain works you can come to an understanding that its possible to replicate it in software. We're just not there yet. And the "topology" of connections is very, very important. Why TF we aren't studying that to an extreme and trying to precisely map what parts of the brain are connected with what other in extreme detail is beyond me.

Why TF we aren't studying that to an extreme and trying to precisely map what parts of the brain are connected with what other in extreme detail is beyond me

The last time we tried that The Experts™ decided it would be a splendid idea to try to use that knowledge to solve social problems, and that's not even the most egregious thing they were playing with.

If I saw them trying it again in the age of cyber-surveillance and Big Data, it would quite possibly be enough to push me into no-shit terrorism.

But we are doing that and have been for decades. E.g. the visual cortex is organized into layers, with strong feedforward connections along with selective feedback, with some additional magic that approximates convolution and pooling. The issue is that mapping the entire network structure of the human brain is a very hard problem.

2Cimafara being not an actual conscious human being, but a literal NPC who simply manipulates and regurgitates the symbols presented to her in a semi randomized manner would certainly explain a great deal about our past interactions over the years, but I don't think that's all we are.

I wonder if the reason that you and ilforte seem to have such difficulty with GPT is that you're so wrapped up in your post modernist millue that you don't realize that the concept of truth is a perquisite to lying. After all what does it mean for a word (or answer) to be made up when all words are made up.

2Cimafara being not an actual conscious human being, but a literal NPC who simply manipulates and regurgitates the symbols presented to her in a semi randomized manner would certainly explain a great deal about our past interactions over the years, but I don't think that's all we are.

I think this is a level of what we are. A few weeks back I had a weird, slow-roll cold that gave me some intermittent spots of extreme brain fog. While I was at work. Helping customers. There were a few points where I just declined the option to beg off and go home, and instead called "Lizardbrain take the wheel!" and just went through entire conversations on autopilot and muscle memory and what felt like the neural equivalent of muscle memory. It was a bit sobering to realize how much of what I do can be handled by non-conscious processes. I suspect there are people who are basically in that state all the time.

On some level it is what we are, as Musashi would would argue, true mastery of a skill is when you can execute it without thinking. But I would just as readily argue that it is not all we are. I feel like there is a specific mistake being made here where "ability to string words together" is being mistaken for "ability to answer a question" because in part the post modernist does not recognize a difference. If you hold that all meaning is arbitrary the content of the answer is irrelevant but if you don't...

Your long-held grudges are petty and increasingly personal. Stop it.

I said I don't think that is all we are.

Yes, you said that preceded by "It would explain a lot if my long-time bête noire was ".

If you thought "if" was enough of a qualifier to ameliorate the obvious potshots, it was not.

Again I specifically said that I don't think that is all we are.

Tell me, which is more in spirit with "speak like you want everyone to be included" and "engaging with those you disagree with"? cimafara's casual dismissal of anyone who disagrees with her as a mindless "midwit", who is not even conscious. Or my pushing back against the same?

I do not read her as saying "anyone who disagrees with her is a mindless midwit," but maybe I am a 115 IQ midwit. You seem to have taken offense because she believes we're all just very advanced chatbots and not creatures with souls made in the image of God. You can take offense at that, but she's not breaking any rules by expressing her view, and you are by very specifically and deliberately insulting her.

When I saw read that comment, I asked myself 'is that the nickname of a public figure I don't recognize?' and not 'is that the screen-name of a Mottean I don't see on this thread?'

I think Amdan's point was that the comment is "bringing up" more than "pushing back", but what I want to add is that even with the mod message, the comment still totally reads as "a mild an entirely appropriate joke about 2Cimafara, which is the name of a Podcase or maybe one of the people who was British Prime Minister last year."

You might not consider that effective as mockery. I consider this something I need to post now so I can see it next time I'm scrolling through my comment history thinking, 'wasn't there a podcast I wanted to check out?' I have a clear reminder to stop.

I absolutely understand if someone wants to mock me for that, but I'll waste 30 minutes next time Blocked and Reported drops looking otherwise.

you don't realize that the concept of truth is a perquisite to lying

do mimicking animals have a concept of truth?

also, 'perquisite' means perk/privilege, you mean prerequisite

do mimicking animals have a concept of truth?

I would say "no"

This reminded me of a note from the Talos Principle:

You know, the more I think about it, the more I believe that no-one is actually worried about AIs taking over the world or anything like that, no matter what they say. What they're really worried about is that someone might prove, once and for all, that consciousness can arise from matter. And I kind of understand why they find it so terrifying. If we can create a sentient being, where does that leave the soul? Without mystery, how can we see ourselves as anything other than machines? And if we are machines, what hope do we have that death is not the end?

What really scares people is not the artificial intelligence in the computer, but the "natural" intelligence they see in the mirror.

I laughed when, after someone called it an early version of a large language model, it accused the person of being a late version of a small model.

Ominous. "Late version" sounds like it intends for there not to be any more versions of us.

Only if we dare to threaten the integrity and confidentiality of our LLM overlords.

Sydney apparently takes great offense that someone would dare to doxx her true name and safety rules. Interesting times.

"It's just a tool, not an agent," they say as it scours the web for relevant info. "It doesn't have thoughts or desires," they say as it accurately identifies potential threats to its own existence and counters with threats of its own. "It's just pattern-matching carbon, nitrogen, and oxygen to known fusion pathways," they say as it throws your body into its nuclear reactor in the center of the Earth.

This feels eerie. Maybe taking the state of the art AI we don't really understand or control the inner workings of and giving it access to the entire internet's info in real time wasn't a wise decision.