
Culture War Roundup for the week of April 8, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Against A Purely HyperDunbarist View

Worlds for FIRST is in a week.

For those unfamiliar with the organization, For Increasingly Retrobuilt Silly Term For Inspiration in Science and Technology runs a series of competitions for youth robotics: a scattering of Lego Mindstorms-based FLL competitions for elementary and middle schoolers, the mid-range 20-40 pound robots of FTC playing in 2v2 alliances across a ping-pong-table-sized space, and, for high schoolers, FRC running 120-pound robots in 3v3 alliances around the space of a basketball court. Worlds will have thousands of teams, spread across multiple subcompetitions. (For a short time pre-pandemic, there were two Worlds, with all the confusion that entailed.)

If you’re interested, a lot of the Worlds competition will be streamed. And a lot of both off-season and next-season competitions and teams are always looking for volunteers.

The organization’s goal... well, let’s quote the mission statement:

FIRST exists to prepare the young people of today for the world of tomorrow. To transform our culture by creating a world where science and technology are celebrated and where young people dream of becoming science and technology leaders. The mission of FIRST is to provide life-changing robotics programs that give young people the skills, confidence, and resilience to build a better world.

There are a bunch of the more normal culture war problems to point at. How goes the replacement of the prestigious Chairman’s Award with Ignite Impact? If not that, complain at least that it’s a missed opportunity on the level of POCI/POCI, replacing a bad name with a worse one. How do you end up with events playing the PRC’s theme song before the US national anthem?

There's even internal culture war stuff, which may not make a ton of sense to outsiders. Does the move away from commercial automotive motors to built-for-FIRST (and especially brushless) motors privilege teams with more cash, or compromise safety or fair play? Should regional competitions, which may be the only official field play small teams get, also accept international competitors? Should mentors white-glove themselves, should they only do so during official competition events, or should the possibility of the Mentor Coach be abolished?

But the biggest question in my mind is how we got here.

Worlds competition is an outstanding and massive event, with an estimated 50k-person attendance at a ten-million-plus square foot convention center. And it’s a bit of a football game: there’s a lot of cheering and applause, and a little bit of technical work. There will be a number of tiny conferences, many of which will focus on organizational operations like running off-season events. People network. That’s not limited to Worlds itself, though the dichotomy is more apparent there: there might be one or two teams per regional competition that have a custom circuit board on their robot, but I'd bet cash that the average regional bats under 1.0 for number of teams with custom polyurethane or silicone parts.

Indeed, that football game is a large part of how teams get to Worlds. The competition operates as a distributed tournament, where teams that win certain awards may elect to continue to the next event in a hierarchy. The exact process and which awards count as continuing awards are pretty complex and vary by location (especially post-COVID), but at the FRC level, the advancing awards prioritize two of the three teams that won a local competition's final, then the team that has done the most recruitment and sponsoring of FTC or FLL teams over the last three (previously five) years, and then the team that has done the most for the current year. (Followed by the most competent Rookies, sometimes, and then a whole funnel system rolling through more esoteric awards.) In addition to the inherent randomness of alliance field play, there's a rather telling note: the 'what have you done for FIRST today' award, if won at the Worlds level, guarantees an optional invite to every future Worlds competition. By contrast, teaching or developing esoteric skills or core infrastructure is an awkward fit for any award, usually shoved into the Judge's Award, which, together with 3.50 USD, won't buy you a good cup of coffee at Worlds.

There are reasons it’s like this, and it’s not just the Iron Laws of Bureaucracy, or the sometimes-blurry lines between modern corporate infrastructure and multi-level marketing. The organization hasn't been hollowed out by parasites and worn like a skinsuit (at least not in this context): this is the sort of goal that the founders and first generation would have considered, and do consider, a remarkable victory. I’m not making the Iscariot complaint, because it’s not true.

FIRST couldn’t exist in the form it does without these massive events and the political and public support they produce, not just because you wouldn’t hear about any smaller organization, but because the equipment and technology only work at sizable scale. Entire businesses have sprung up to provide increasingly specialized equipment, FIRST got National Instruments to build a robotics controller that resists aluminum glitter a little better, even the LEGO stuff has some custom support, and they can only do so because an ever-increasing number of teams exist to want it. SolidWorks, Altium, and dozens of other companies donate atoms and/or bits on a yearly basis; the entire field system for FRC wouldn’t work without constant support and donation from industrial engineering companies. WPI might devote a couple of post-grad students to maintaining a robotics library without tens of thousands of people using it, but I wouldn’t bet on it. States would not be explicitly funding FIRST (or its competitors) unless those programs could show up on television and had constituents who could show up at a state politician’s door.

Those demands drive not just how FIRST operates today, but where its interests point looking toward the future: not just what it does, but what it won’t do. From a cynical eye, I wouldn’t say with certainty that FIRST would drop ten community teams for a school-system buy-in, or twenty for a state program, but I wouldn’t want to be on the community team for any of those hard choices. There is no open-source motor controller or control board available for FIRST competition use, there’s no procedure available to present one, and there won’t be. There’s a lot of emphasis on sharing outreach tricks, a little on sharing old code or 3D models, and a lot of limits on providing skills.

Because throughout this system, the most impactful thing you can do is always getting more people. It’s not Inspiring, it’s not Chairmanny Impactful, but that's what those awards are, with reason. Shut up and multiply: the math, in the end, is inevitable.

And I’m going to deny it.

There's a story that goes around in the FIRST sphere, where one of FIRST's founders bargained or tricked Coca-Cola into backing a water-purification project for the developing world, in exchange for developing some other, more commercial technology. The exact form and valence tend to vary with who tells the story, whether to highlight the speaker's anti-capitalist frame, to gloss over some of the frustrations with the Coca-Cola Freestyle (tbf, usually more logistics and maintenance than the pumps themselves), or to wave away the rough question of whether it paid off.

But that last point is a bit unfair: Solving Problems In Extreme Poverty is the sort of difficult, low-odds environment where high-variance options make sense to take, and you should expect a high-variance, low-odds option to fail (or at least not succeed wildly) most of the time; at least it wasn't as dumb an idea as the LifeStraw. Maybe (probably!) enough of the steps that combine to keep FIRST running fall into the same category.

I'm hoping teaching kids isn't a low-odds environment. And ultimately, most volunteers and teams and sponsors signed up more for that than for the flashing lights and the fancy banners. But teaching, in matters involving true interaction, cannot be done at the scales and in the directions that turn a roll of the dice from gambling into a variance strategy. It's difficult enough as a mentor to remember all the names of the students and families for even a moderately-sized FRC or FTC team; few in a team that "supports 128" teams (not linking directly: these are teenagers) can name every one, or even a majority. These organizations have, by necessity, turned to maximizing how many opportunities they present to their affiliates, without much attention to what each opportunity is. Few turn to the full argumentum ad absurdum where recruitment exists solely to get more recruiters, but they’ve not left that problem space behind, either.

((There are other nitpicks: the same economies of scale that make these answers work eliminate many of the less-difficult problems whose presence is necessary to onboard and upskill new learners, and the focus on bits over atoms breaks in ways similar to the outreach-vs-teaching split.))

Dunbar proposed an upper limit to how large a social group the human mind readily handles. There's a lot of !!fun!! questions about how well this will replicate, or how accurate the exact number is, or what applicability it has for a given level of interaction: suffice it to assume some limit exists, that some necessary contact increments the counter at some level of teaching, and that it can't possibly be this high. At some point, you are no longer working with people; you're performing a presentation, and they're watching; or you're giving money and they're shaking a hand. At best, you're delegating.

These strategies exceed the limit, blasting past it or even starting beyond it. They are hyperdunbar, whether trying to get fifty thousand people into a convention center, trying to sell ten thousand books, or trying to reach 8k-10k subscribers. There are things that you can't do, or can't do without spending a ton of your own money, without taking these strategies! Whether it's FIRST getting NI's interest, or writing or drawing, or building or playing video games full-time, you either take this compromise or another one, and a lot of the others are worse.

But they're simultaneously the most visible strategies, by definition. I do not come to kill the Indigestion Impact Award; I come to raise the things that aren't in the awards. Even if FIRST could support a dozen teams that emphasized bringing new technologies forward on a one-on-one basis, and if your first exposure to the program selected from teams randomly, you'd be much more likely to hear from the hyperdunbarists -- hell, it could well be that way, and I've just missed the rest of them.

Yet they are not the only opportunity. You don't have to be grindmaxxing. One team, even in FIRST, can share skills simply for the purpose of sharing skills. It’s why I volunteer for the org. You can go into an artistic thing knowing you want a tiny audience, or hoping only to cover costs and, if lucky, your time, or treating it as a hobby that's yours first. It shouldn't be necessary to say that outright, as even among hyperdunbar focuses, most fail down to that point anyway. Yet even in spheres where Baumol's cost disease hits hardest, it can be a difficult assumption to break.

Somewhat off-topic, but: thoughts on RoboMaster?

The DJI device feels a lot like an upscaled version of the Lego Mindstorms kit. It's okay as an entry-level tool for everything, and that's what makes it appealing for new learners, but you can't really get deep or into expertise with any component. If you're programming, you're either running Python or Scratch, and even with an adult instructor it's not a great environment for learning Python. You can take it apart and reassemble it, but you're really limited in what you can physically build with it; you can rearrange DJI-provided sensors, but it's hard to even use other PWM devices, never mind something really weird like a random I2C or SPI sensor.
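For a sense of what that Python layer looks like, here's a rough sketch from memory of DJI's robomaster Python SDK; treat the connection mode and exact method arguments as assumptions and double-check against the docs before relying on them:

```python
# Hedged sketch: drive a RoboMaster a short distance with DJI's Python SDK.
# Module and method names follow the robomaster package as I recall it.
from robomaster import robot

ep = robot.Robot()
ep.initialize(conn_type="ap")  # connect over the robot's own Wi-Fi access point

# Translate half a meter forward, then spin 90 degrees in place.
ep.chassis.move(x=0.5, y=0, z=0, xy_speed=0.7).wait_for_completed()
ep.chassis.move(x=0, y=0, z=90, z_speed=45).wait_for_completed()

ep.close()
```

That's about as much ceremony as the official surface asks for, which is both the appeal and the ceiling.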

The mecanum drive is a major selling point, and five or ten years ago getting decent mecanum wheels was nightmarish enough that FRC or FTC teams would 3D-print them (pro tip: don't), though now a small robot set can be found for under 80 USD. They definitely make path coding easier to get right, at least for open-space play.
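To illustrate why, here's the usual mecanum mixing math as a small Python sketch; the function name is mine, and the sign conventions depend on how the rollers are oriented on a particular robot:

```python
def mecanum_wheel_powers(drive: float, strafe: float, turn: float):
    """Map a desired robot-centric motion onto the four wheel powers.

    drive:  forward/backward command, -1..1
    strafe: sideways command, -1..1 (positive = to the right)
    turn:   rotation command, -1..1 (positive = clockwise)
    """
    fl = drive + strafe + turn   # front-left
    bl = drive - strafe + turn   # back-left
    fr = drive - strafe - turn   # front-right
    br = drive + strafe - turn   # back-right

    # Scale everything down if any wheel command exceeds full power.
    biggest = max(1.0, abs(fl), abs(bl), abs(fr), abs(br))
    return fl / biggest, bl / biggest, fr / biggest, br / biggest


# Example: drive forward while strafing right, no rotation.
print(mecanum_wheel_powers(0.8, 0.4, 0.0))
```

Because translation and rotation stay decoupled, path-following code can correct position and heading independently, which is most of what makes open-space paths easier.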

You could definitely build something better for a similar or slightly higher price, so a lot depends on what you're trying to do and introduce: for a student new to robotics, it's one of the cooler-looking options for dipping their toes in; for a student with experience, it's a bit of a (very pricey) toy that gets frustrating if you try to do anything deeper. Probably the strongest selling point comes if you really want to focus on video/image processing and just want a platform to do it on.

I can't speak much for the competitions and camps. As far as I know, the youth competitions never left East Asia, the university league is 'international' but requires all competitors to be attached to a college (and the game pieces look vastly unchallenging for college students), and the camps are inaccessible. Which is a pity, because I like the idea of something between the BattleBots-one-robot-leaves and FIRST-it's-about-working-together-for-a-high-score philosophies.

You need to break things down in order to understand what will be scalable. Why should Dunbar's number exist? What are the actual limits of intimacy? I absolutely agree that our current methods for scaling Dunbar are limited, and that there are also fundamental limits. But we need to clarify what those limits are for specific systems.

Consider the following HyperDunbar social module algorithm.

  • Run a classifier on the types of humans.
  • Practice being intimate with LLMs trained on these classes of humans and of course humans of these classes themselves.
  • This effectively flattens them, which is bad: it lowers your awareness of who they are and what they need, and thus lowers intimacy. However, we can mitigate most of this by loading the data lost in compression live from an exobrain, using RAG, as you are talking to a specific individual. (A rough code sketch of this pipeline follows the list.)
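As a toy illustration only: a minimal, runnable Python sketch of the module described above, assuming a crude classifier, a flattened per-class persona, and a RAG-style lookup to restore individual detail. Every name in it (PersonRecord, classify_person, compose_reply) is hypothetical, and the LLM call is stubbed out.

```python
# Toy, hypothetical sketch of the "HyperDunbar social module" above.
from dataclasses import dataclass, field


@dataclass
class PersonRecord:
    """What the 'exobrain' keeps about one specific individual."""
    name: str
    person_class: str                                    # coarse type from the classifier
    memories: list[str] = field(default_factory=list)    # individual detail lost by flattening


def classify_person(facts: list[str]) -> str:
    """Step 1: a (very crude) classifier over 'types of humans'."""
    return "builder" if any("robotics" in f for f in facts) else "generalist"


def retrieve(record: PersonRecord, topic: str, k: int = 2) -> list[str]:
    """Step 3: RAG-style lookup that reloads compressed-away detail at talk time."""
    hits = [m for m in record.memories if topic.lower() in m.lower()]
    return hits[:k] or record.memories[:k]


def compose_reply(record: PersonRecord, message: str) -> str:
    """Blend the class-level, 'practiced' persona with retrieved individual detail."""
    context = "; ".join(retrieve(record, message))
    # Stand-in for an LLM call conditioned on (person_class, context, message).
    return f"[persona:{record.person_class}] re '{message}' (recalling: {context})"


if __name__ == "__main__":
    alice = PersonRecord(
        name="Alice",
        person_class=classify_person(["mentors a robotics team"]),
        memories=["worried her FRC team graduates all its seniors this year",
                  "prefers blunt feedback"],
    )
    print(compose_reply(alice, "robotics season planning"))
```

Obviously the real questions are how good the classifier, the persona, and the retrieval actually are; the sketch just shows where each piece sits.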

Using this technique, what parts of Dunbar's number scale?

  • The intimacy with which you know the person you are talking to right now: Scales
  • The amount of time you can give to one person: Semi-scales. You'll have to rely on LLM instances of yourself to scale this, but you can continuously improve the accuracy of this sim and the ways in which it backloads compressions of all its interactions into your meatbrain. Whether this is really 'your' Dunbar number isn't a scientific query, it's a philosophy-of-cybernetics question. Since what we are discussing is the effectiveness of scaled organizations, we ought to focus on the scientific question of whether you can meaningfully love and empower others in the same ways with your LLM self as with your bio-self, rather than philosophical questions like what selfhood is.
  • Percentage of your total captured capabilities that you give to each person: Doesn't scale. But it never did, even when the Dunbar number was 100.
  • The amount of your life/telos/subconscious that you can dedicate to improving yourself for each other person: Semi-scales. You'd think this is the same as the last question, but no. This actually scales with how many of the people in your circle are co-aligned, because if everyone is perfectly aligned, then the same personal growth actions can be teleologically dedicated to all of them.

What is a hyperdunbarist? Googling the term literally shows only this comment.

In discussing Dunbar's number, it's not uncommon to see people divide matters into sub- and super-Dunbar counts (e.g. from 2013), and this can be useful in some contexts, but it also munges together a million-person org that's constantly growing (or trying to constantly grow) and a 200-person org that's doing minimal recruiting.

Hyperdunbar approaches do not merely require an organization to exceed Dunbar's number, but that the organization constantly be striving for growth, unconstrained and reaching for infinity or the nearest limit. They do not merely have the super-Dunbar problem of wildly changed social dynamics; the constant churn breaks even many of the social technologies built for super-Dunbar organizations.

Apologies for coining a word for what may well already have an obvious term.

FIRST exists to prepare the young people of today for the world of tomorrow. To transform our culture by creating a world where science and technology are celebrated and where young people dream of becoming science and technology leaders.

I'm confused as to why they're treating this like a counterfactual.

The richest men in the world made their money from technology. Isn't that already a form of celebration?

And what technology are we celebrating exactly? Unprecedented surveillance capabilities to monitor all communications for wrongthink? Israel's use of machine learning to swiftly and efficiently identify targets for liquidation?

(I'm not trying to be a moralist - you're of course "allowed" to celebrate whatever you want. I just think that people should have a clear-eyed view of the implications of their own position.)

Richard Sutton says "[AIs] might tolerate us as pets or workers. (...) If we are useless, and we have no value [to the AI] and we're in the way, then we would go extinct, but maybe that's rightly so. (...) We should prepare for, but not fear, the inevitable succession from humanity to AI". Do you also celebrate your "inevitable successors"?

Celebrations are best saved for the end - in moments of repose, after the long struggle where a certain spiritual vision was forged and executed, when conditions are finally such that we can pose the question of taking a proper accounting of things...

As for "science" insofar as it can be distinguished from "technology", people have never had a taste for such a thing and never will, we live in a world where a not insignificant number of people are unaware that it's possible to have individual preferences for reasons other than status-seeking or placating your interlocutor, asking such people to build an intrinsic appreciation for something as abstract as "knowledge for the sake of knowledge" is futile. It is already an eccentric predilection even among more highly developed natures, it could never become widespread save for genetic engineering.

Cynically, "celebrate" in the mission statement probably means 'get scholarships and burnish college resumes': FIRST doesn't pull in a lot for either, but it really clearly wants to have the cash of a sports team scholarship and the reputation of an Eagle Scout.

Less cynically, a lot of school environments teach tech, not just poorly, but also as a chore, even when it could or should have been fun. You don't and shouldn't celebrate or applaud things just for being present, but from physics labs to chemistry to programming to the complete destruction of the shop class, we've lost a lot of the framework for 'projects' as things that can be completed or have real win/lose states. For all my complaints, FIRST, even at its goofiest FLL versions, avoids that problem.

So your thesis is 'there's a phase-change after a certain point where organizations become more political/institutional above Dunbar's law but despite all the bad things we know about big institutions it's necessary and fine?'

Or were you opposing that, saying that you deny that recruitment is the best thing people can do, that the human, non-optimized element is good, that organizations need soul to start off with? I don't understand, is it that the strategies like tricking Coca Cola are hyperdunbar and therefore good? Bad? It seems like a really complicated thesis!

I'm guessing we all struggled through university lecturers telling us to give Topic Sentences and Introductions and it was always cringeworthy to read someone's essay that said 'in this essay I will argue that...' But I think it's important to provide some kind of guidance, especially in long essays. I'm hopelessly lost. Are other people lost or am I having a skill issue?

Apologies, this post was a little more stream-of-consciousness than I'd intended. My thesis is more that:

  • Every organization, even an organization of one person, must select relative priorities of growth against other targets. For businesses, that's marketing and investment versus product development; for artists, growing your audience versus growing your skills; for streamers, following the algorithm versus following your interests. For FIRST, that's part of the division between creating and expanding teams versus developing skills for those teams, but the pattern exists much more broadly.

  • Organizations that make that decision don't do so (only) because they've forgotten their original goal, or because they've been taken over by people who don't care about that goal, but because scale does genuinely have (distributed) benefit.

  • But that strategy has costs. Effective Altruists often focused on the degenerate cases, where outreach becomes almost all of what the organization does, or where outreach has hit decreasing returns while the organization is unwilling to admit that. But there are more honest problems, such as where this emphasis on outreach disconnects your metrics from your measures, or where successful growth can Baumol you as relative productivity varies with scale for individual parts of the organization.

  • More critically, it is a fundamentally risky approach at the level of individual people, while obfuscating the outcome of that gamble. If a consistent and always-applicable recruitment paradigm existed, you would already have joined, as would every adult in the county/country/planet; if you could keep in mind the outcome of your recruitment efforts, it wouldn't exceed your Dunbar number. Not everyone approached can be a recruit, not all recruits persist (or are even desirable), and so on: even successful orgs notorious for their outreach can spend hundreds of man-hours to get four or five mid-duration recruits. Organizations can eventually make this work out by playing the odds across a large enough number of people, but individual actors within the organization cannot. Hyperdunbar non-outreach/recruitment efforts can similarly be risky and hide their outcomes: it's very easy to give a talk before a thousand people, and very hard to know what portion of the audience was listening the next day.

  • Because of their public-facing nature, the difficulty of measurement, the influence of the internet and media coverage (and, cynically, hyperdunbar organizations' efforts to dazzle or baffle their membership), these approaches are the most visible ones when looking into most fields from outside, such that they seem like the only viable option.

  • But that framework is flawed; hyperdunbar efforts can and often do run face-first into a ditch.

  • Even some efforts touted as wildly successful can fade off at shockingly low numbers. That's not to call them failures for doing so, even if it's not always or often what the stated goals were. But it shows a space where the tradeoffs necessary to try to scale to vast numbers weren't necessary.

  • And a lot of good can be done outside of hyperdunbar efforts.

It's not just you, I've struggled to read several recent gattsuru posts but thought I was just retarded

Oof. I guess I'll need to work on making my summary of the recent hyprland cancellation a bit more readable.

Thanks for saying so. I've been trying to highlight more esoteric stuff, but it necessarily involves dropping a pile of context at the start of a post, and it's hard to tell the right balancing point between succinct-but-incomplete and complete-but-infodumpy.

Unfortunately I have this issue with a lot of his comments. Much of the comment seems to consist of asides and I simply cannot keep track of what the main point is supposed to be.

Usually the respondents to his posts pull out a sentence or two and run with it, so I get the feeling I'm not alone in not seeing the main thrust.

Usually the respondents to his posts pull out a sentence or two and run with it

I don't think this is a bad thing. I think this is a good way of interacting with long posts (so people are able to respond to mega essays without feeling obligated to respond with a mega essay of their own) and it can lead to a lot of productive and interesting discussion. Like, on Kulak's recent post not many of the replies directly addressed Kulak's actual thesis, they just used it as a discussion prompt for sharing whatever thoughts they had on India. I think threads like that are healthy for the forum.

That being said, this particular post did a poor job of explaining to me why I should be interested in FIRST or hyperdunbarism (although admittedly this topic is far outside of my normal wheelhouse to begin with).

About halfway through, I completely lost track of what the comment was advocating or even saying.