gattsuru

10 followers   follows 0 users
joined 2022 September 04 19:16:04 UTC
Verified Email
User ID: 94

No bio...
I don't know.

I'd like to believe that it merely proved people like Dan Savage would be tolerated and feted, rather than undermining and weakening their movements, and that in a counterfactual world where everyone instead focused on honest debate and open engagement, Obergefell would still have happened, perhaps with a bunch of references to South Park's Big Gay Al. In this world, though, we got that sorta stuff, and Savage bullying a bunch of teenagers as part of his anti-bullying campaign was just the most on-the-nose bit, rather than the worst or even highest-profile.

And a large portion of the progressive movement believes that sort of behavior was a large part of why they won, and it's not obvious that they're wrong. Attaching massive social, career, and legal costs to opposing gay marriage genuinely blew apart a lot of anti-gay-marriage movements; branding any opposition to favored goals as homophobia worked; leaking donation records and sending newscasters to individual randos' homes increased the cost of doing those things.

The steelman is that a lot of trans people are really obviously trans before they transition even socially (and sometimes even before they realize it themselves), and whether aware or not, a lot of these regulations can still impact them (or, less charitably, be reported as/forced into impacting them, a la Floridian teachers making news releases).

The ironman is that, while there's a lot of controversy even among the broader LGBT movement about where the Correct minimum age for specific types of transition in minors sits, setting it at 18 for hormonal transition is a very far outlier, and that's been that way for a while. I'll point to Venus Envy as an example of early-2000s media covering transition in late high schoolers (and much of its exploration of the theme focuses on the contrast between Zoe going through conventional processes and Larson dealing with the problems of gray-market self-administration), and that being completely unnoteworthy among readers.

That's not hugely honest to describe as kids, but it's not exactly dishonest, either.

The problem is that there's a genuine paradox, where the overwhelming majority of trans people can look back and honestly say it would have been better, easier, more complete, less traumatic, and so on, if they'd realized and started transition just slightly earlier, and gotten just that little bit more support. And then Zeno stumbles in like a drunken fool.

I'm going to say that a female child raised as female knows she's a girl. A female child raised as female declaring she is really a boy? I'm waiting to see on that one.

There's not exactly a shortage of trans men who can point to an upbringing and environment that required and enforced pretty strict gender norms for behavior. To the level of 'not allowed to wear pants' sorta thing.

Yes.

At the trivial level, code with strong or moderate typing is far less likely to introduce a pretty wide variety of fairly annoying bugs. You can theoretically hire coders who aren't going to make that sort of mistake, but then you have to actually find coders who don't make that class of mistake, and they have to put time and focus into avoiding it. Compile time can be the difference between iterating in seconds or minutes (or, in one miserable case, tens of minutes). If you need portability (whether Windows to Linux, or x86 to ARM to Apple Silicon), some languages are much more frustrating than others.
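To make that class of bug concrete, here's a minimal Python sketch, assuming a static checker like mypy runs somewhere in the pipeline; the function and data are made up for illustration:

```python
def total_cents(prices: list[int]) -> int:
    """Sum a list of prices expressed in integer cents."""
    return sum(prices)

# Dollar strings straight from a CSV look fine until runtime, where this
# raises TypeError only when the bad data finally arrives. A static checker
# flags it before the code ever runs, roughly:
#   Argument 1 to "total_cents" has incompatible type "list[str]";
#   expected "list[int]"
rows = ["3.50", "4.25"]
total = total_cents(rows)
```

A dynamically-typed codebase only discovers this when that exact path executes with that exact data; a checked one discovers it on every build.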

At the less obvious level, the availability of good, strong debuggers matters less for desktop (where the span runs from Firefox's Inspector to Visual Studio) than in the embedded or microcontroller worlds (where the low end might be 'you get nothing, good day sir!'), but for applications requiring multithreading or complex performance or memory management, the higher end still matters. There's also a tradeoff between succinctness and clarity of code, as evidenced by Java vs Kotlin vs Scala.

While you might consider them extremes of the "special requirements", some languages handle certain matters and frameworks better than others. MVVM makes a lot more sense in Java or C# than JavaScript, and may make sense for a common project type. Many things interfacing with hardware or certain databases may only have library support for a handful of languages, especially in the industrial-automation world -- at best you're going to end up writing a shim, at worst you may just be stuck. Some languages have really clever tricks justifying their use for certain specialty purposes (Matlab and matrix arithmetic) but are absolutely obnoxious otherwise. ((Some, like VC++, introduce weird user-environment-specific errors that can drastically increase your support costs and reduce user-friendliness; thank you, msvcr###.dll errors.)) For many internal-use tools, wanting something you can build-and-leave-for-a-decade can push you away from languages with a history of breaking changes.

For smaller businesses, you go to war with the army you have, and I say that as someone who's written more than a fair share of internal-use C#, Java, and Python code.

I don't think they're the only part of business success: the road is paved with the skulls of LLCs that had great software but struggled on the business side, or just bad luck. And there's definitely a coding fandom that endlessly chases the Next Best Thing, either to (charitably) keep themselves sharp or (less charitably) keep their resume up to date, in preference to mastering one language well, or building lasting projects, or just getting tasks done. You can definitely end up bike-shedding. But it's a mistake to not consider it seriously and in depth.

Does anyone know how accurate these studies are

I mean, they are social science, so we're not starting off at a great point to begin with.

The Cameron & Cameron (2017) piece you link is primarily a defense of their Homosexual Parents paper from 1996, but that consisted of sending out surveys during the 1980s, starting with a 1983 survey sent to 9k adults (4,340 responses) in LA, DC, Omaha, Denver, and Louisville, and a 1984 survey in Dallas going to 10k adults (5,182 responses). In those surveys, the closest question to "same-sex parents" was whether "one of [respondent's] parents was a homosexual"... which "was not asked in the 5-city study".

Being charitable to the level of naivety and assuming that the weird procedural changes were totally just meant to better serve the data, it's hard to think of worse ways to establish this question. Even outside of the lizardman-constant problems or the tiny sample size, this isn't the same question, especially in that day and age, and there's no way to separate 'are children raised by same-sex parents more likely to be victims of sexual abuse' from 'are children sexually abused by their parents more likely to know their parent's orientation', esp given that the paper never gives base rates or overall rates.

The Sullins paper is pointing toward his 2015 work, most relevantly "The Unexpected Harm of Same-sex Marriage: A Critical Appraisal, Replication and Re-analysis of Wainright and Patterson’s Studies of Adolescents with Same-sex Parents", which does have the section "Over two-thirds (71% SE 30) of the children with same-sex married parents who had ever had sexual intercourse reported that they had been forced to have sex against their will at some point" and, perhaps more shockingly, reports that 38% of all respondents, not just those who had had sexual intercourse, said they'd been forced to give or receive sexual touch or intercourse from a parent or caregiver.

There's some weirdness here, not all of which is from Sullins -- while he excludes almost half of what Wainright called lesbian parents on the basis of male adults in the household, the original survey gatherers made some bizarre decisions, where the same survey segment was used to ask only males whether they had raped someone and only females whether they had been raped -- but the combination makes the numbers less useful. Sullins is implying-without-stating that female children are being molested by lesbian parents in this sample at staggering rates, but it's far from clear that's what was actually asked in the question. Yet at the same time, unmarried parents have zero odds?

((There's also a GRIM failure; 37.8% doesn't come out as a reasonable division for any of the combinations I can propose as possible counts for total same-sex couples. Might just be a rounding error if it's the 17 'real' lesbian couples, around 40% of which identified as married, but then it's an N=3.))
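For the curious, that GRIM check is easy to run yourself: a reported percentage is only consistent with a sample size N if some whole-number count of respondents actually rounds to it. A minimal Python sketch, with the range of N being my guess at plausible counts rather than anything from the paper:

```python
# GRIM-style consistency check: which sample sizes N admit an integer count
# k of respondents such that 100*k/N rounds (to one decimal) to 37.8%?
target = 37.8
for n in range(2, 50):          # plausible same-sex-couple counts (my guess)
    for k in range(n + 1):
        if round(100 * k / n, 1) == target:
            print(f"N={n:2d}: k={k:2d} ({100 * k / n:.3f}%)")
```

Any N absent from the output (17, notably) cannot produce a reported 37.8% from whole respondents.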

Especially given the other assumptions (esp that men should only be asked if they forced someone into sex, and women only if they were forced), I'm curious whether this reflects a number of victims of familial sexual abuse in one family environment later ending up in the sole custody of, and/or adopted by, lesbians, but there's no data for it, just a story.

((Separately, he also wrote "Emotional Problems among Children with Same-Sex Parents" in 2015, and that's at least procedurally not-crazy: pull in NHIS surveys for sexual relationships, look at reported emotional problems and some developmental disabilities, and find larger values (generally 2x). There's a bunch of interesting modeling, but a lot of it points to gay parents having more emotional problems themselves, and adopted kids having more emotional problems and developmental disabilities (and gay parents being more likely to adopt). But it's not really relevant here.))

should this be dispositive evidence against allowing same-sex couples to adopt?

I think you need some data with more than double-digit total same-sex couples or a non-trivial number of bad actors, for starters, and then some more serious effort to isolate molestation within the same-sex couple (or adoption).

FOSS and The XZ Problem

Security Boulevard reports:

A critical vulnerability (CVE-2024-3094) was discovered in the XZ Utils library on March 29th, 2024. This severe flaw allows attackers to remotely execute arbitrary code on affected systems, earning it the highest possible score (10) on both the CVSS 3.1 and CVSS 4.0 scoring systems due to its immediate impact and wide scope.

The exploit would allow remote code execution as root on a wide majority of systemd-based Linux (and Mac OSX, thanks homebrew!) machines. There are some reasonable complaints that some CVE ratings are prone to inflation, but this one has absolutely earned its 10/10, would not recommend. Thankfully, this was caught before the full exploit made it into many fixed-release Linux distros, and most rolling-release distros either would not have updated so quickly or would not yet be vulnerable (and, presumably, will be updating to fixed versions of XZ quickly), with the exception of a handful of rarely-used Debian options. Uh, for the stuff that's been caught so far.
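If you want to check a machine, the known-bad releases are 5.6.0 and 5.6.1; a rough Python sketch, with the usual caveat that a clean version string only covers what's been caught so far:

```python
# Rough check for the known-bad XZ Utils releases (CVE-2024-3094).
import re
import subprocess

out = subprocess.run(["xz", "--version"], capture_output=True, text=True).stdout
match = re.search(r"xz \(XZ Utils\) ([\d.]+)", out)
if match and match.group(1) in ("5.6.0", "5.6.1"):
    print(f"xz {match.group(1)}: affected by CVE-2024-3094 -- update or roll back now")
elif match:
    print(f"xz {match.group(1)}: not one of the known-bad releases")
else:
    print("couldn't parse `xz --version` output")
```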

Summary and FAQ, for the more technically minded reader; the NIST CVE is here; background of the initial discovery is here.

Ok, most of us who'd care remember Heartbleed. What's different here?

In this case, the exploit was near-certainly introduced intentionally by a co-maintainer of the library XZ Utils, by smuggling code into a binary test file and then, months apart, adding calls to execute that test file from live environments, and then working to hide any evidence. The combination of complexity in the attack (requiring fairly deep knowledge of a wide variety of Linux internals) and bizarreness of exploit steps (his FOSS history is sprinkled with things like replacing safe functions with their unsafe precursors, or adding loose periods in cmake files) leaves nearly zero chance that this was unintentional, and the guy has since disappeared. He was boosted into co-maintainership only recently, and only after the original maintainer was pressured to pick him up by a strangely large barrage of very picky users. The author even pushed to have these updates shoved into Fedora early.

Most mainstream technical advisories aren't outright calling this a nation-state actor, but The Grugq is pretty willing to describe whoever did it as an 'intelligence agency', whether government or private, and with cause. Both the amount of effort and time put into this attack are vast, and the scope of vulnerability it produced extreme -- though this might be the 'cope' answer, since an individual or small private group running this level of complex attack is even more disturbing. It's paranoid to start wondering how much of the discussion aimed at encouraging XZ's maintainer to take on the bad actor here as a co-maintainer was manufactured, but as people are having more and more trouble finding evidence of those users' existence since, it might not be paranoid enough.

There's a lot of potential takeaways:

  • The Many Eyes theory of software development worked. This was an incredibly subtle attack that few developers would have been able to catch, by an adversary willing to put years into developing trust and sneaking the exploit in piecemeal.

  • Except it was caught because a Microsoft (Postgres!) developer, without looking at the code, noticed a performance impact. Shit.

  • This attack heavily exploited access through the FOSS community: the author was able to join sight-unseen through a year of purely digital communications, and the 'business decision' of co-maintainership came through a lot of pressure from randos or anons.

  • Except that's something that can happen in corporate or government environments, too. There are places where every prospective employee gets a full background check and a free prostate exam, but they're the outlier even for dotmil spheres. Many employers are having trouble verifying that prospective recruits can even code, and most tech companies openly welcome recent immigrants or international workers that would be hard to investigate at best. Maybe they would have recognized that the guy with a stereotypical Indian name didn't talk like a native Indian, but I wouldn't bet on even that. And then there's just the stupid stuff that doesn't have to involve employees at all.

  • The attack space is big, and probably bigger than it needs to be. The old school of thought was that you'd only 'really' need to do a serious security audit of services actually being exposed, and perhaps some specialty stuff like firewall software, but people are going to be spending months looking for weird calls in any software run in privileged modes. One of many boneheaded controversial bits of systemd was the increased reliance on outside libraries compared to precursors like SysV Init. While some people do pass tar.xz around, XZ's main use in systemd seems to be related to loading replacement keys or VMs, and it's not quite clear exactly why that's something that needs to be baked into systemd directly.

  • But compression libraries rank just after cryptographic libraries as a reasonable thing to not roll your own, and even if this particular use of this particular library might have been avoidable, you're probably not going to be able to trim that much out, and you might not even be able to trim this.

  • There's a lot of this that seems like the chickens coming home to roost for bad practices in FOSS development: random test binary blobs ending up on user systems, build systems that either fail-silently on hard-to-notice errors or spam so much random text no one looks at it, building from tarballs, so on.

  • But getting rid of bad or lazy dev practices seems one of those things that's just not gonna happen.

  • The attacker was able to get a lot of trust so quickly because a significant part of modern digital infrastructure depended on a library no one cared about. The various requests for XZ updates and co-maintainer permissions look so bizarre because, in a library that does one small thing very well, it's quite possible only attackers cared. 7Zip is everywhere in the Windows world, but even a lot of IT people don't know who makes it (Igor Pavlov?).

  • But there's a lot of these dependencies, and it's not clear that level of trust was necessary -- quite a lot of maintainers wouldn't have caught this sort of indirect attack, and no small part of the exploit depended on behavior introduced to libraries that were 'well'-maintained. Detecting novel attacks at all is a messy field at best, and this sort of distributed attack might not be possible to detect at the library level even in theory.

  • And there's far more varied attack spaces available than just waiting for a lead dev to burn out. I'm a big fan of pointing out how much cash Google is willing to throw around for a more visible sort of ownage of Mozilla and the Raspberry Pi Foundation, but the full breadth of the FOSS world runs on a shoestring budget for how much of the world depends on it working and working well. In theory, reputation is supposed to cover the gap, and a dev with a great GitHub commit history can name their price. In practice, the previous maintainer of XZ was working on XZ for Java, and you haven't heard of Lasse Collin (and may not even recognize xz as a file extension!).

  • ((For culture war bonus points, I can think of a way to excise original maintainers so hard that their co-maintainers have their employment threatened.))

  • There's been calls for some sort of big-business-sponsored security audits, and as annoying as the politics of that get, there's a not-unreasonable point that they should really want to do that. This particular exploit had some code to stop it from running on Google servers (maybe to slow recognition?), but there's a ton of big businesses that would have been in deep shit had it not been recognized. "If everyone's responsible, no one is", but neither the SEC nor ransomware devs care if you're responsible.

  • But the punchline to Google's funding of various FOSS (or not-quite-F-or-O, like RaspberryPi) groups is that even the best-funded groups aren't doing that hot, for even the most trivial problems. Canonical is one of the better-funded groups, and it's gotten them into a variety of places (default for WSL!), and they can't be bothered to maintain manual review for new Snaps despite years of hilariously bad malware.

  • But it's not clear that it's reasonable or possible to actually audit the critical stuff; it's easier to write code than to seriously audit it, and we're not just a little shy on audit capacity, but orders of magnitude short.

  • It's unlikely this is the first time something like this has happened. TheGrugq is professionally paranoid and notes that this looks like bad luck, and that strikes me more as cautious than pessimistic.

I can see that as a more general concern, but I'm not sure how much it applies to cases like this. Lasse, as far as I can tell from the outside, seems a very competent developer, just one with less than maximal interest in this project; I'm not sure what level of yelling at him would have avoided this. Jia Tan has managed the amazing feat of getting pretty much every FOSS dev of every political alignment to want to yell at him, and I doubt it's in his top ten list of concerns right now.

Indeed, there's an argument that the pressure campaign against Lasse to promote Jia Tan was downstream of FOSS tolerance of that sort of thing (though in turn, the attackers probably would have just picked different pressure had it not been around).

There's a problem in that people are aging out, from Stallman to Linus to Lasse, and few if any have anyone to step into their shoes, even at far more trivial projects, leaving those projects vulnerable. But that's a lot broader and scarier.

I may not understand what you mean by "raised as female", then.

What does that involve, if not covered by a traditional Christian family who had very strict understandings and very overt rules about not just social roles but also biological expectations (ie, it is your duty to marry and pump out 2-4 children)?

Very much appreciate the additional takeaways.

Rolling out your own compression is much less evil: there is certainly some potential for arbitrary code execution vulnerabilities, but not more than with handling any other file parsing.

Yeah, that's fair. There are some esoteric failure modes -- how do you handle large files, what level of recoverability do you want to handle, how do you avoid being the next zlib -- but for good-enough lossless compression you can get away with some surprisingly naive approaches, without the cryptography-specific failure mode where it can look like it's working fine but be vulnerable in ways you can't even imagine.
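As an illustration of just how naive 'surprisingly naive' can get, a run-length encoding sketch in Python -- it fails the good-compression bar on most real inputs, but it's lossless, trivially auditable, and has no hidden cryptography-style failure modes:

```python
# Deliberately naive run-length encoding: (count, value) byte pairs.
def rle_encode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out = bytearray()
    for count, value in zip(data[::2], data[1::2]):
        out += bytes([value]) * count
    return bytes(out)

sample = b"aaaabbbccd" * 3
assert rle_decode(rle_encode(sample)) == sample
```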

Data point: As some casual linux user, I recognize the xz file extension.

Huh, I stand corrected. I've seen it occasionally, but more often for Docker than anything else -- a lot of environments still use .gz almost everywhere.

On the plus side, the fact that the attackers stayed in userspace instead of having /usr/bin/sshd load some kernel module seems to indicate that a stealthy compromise of the kernel is hard? Yay for NSA's SELinux?

There is that on the plus side. I'm not hugely optimistic people would be as easily able to discover those sorts of attacks, but then again, there's a lot more eyes on the kernel and a lot more emphasis on finding weird or unexpected behaviors in it.

I for one do not want to scream at them because I consider them to be a sock puppet of some unknown agency. I am kind of gleeful that some agency burned through this identity they put a lot of work into propping up.

Yeah, that's probably the more Correct response.

Isn't that why we're all here on this site though?

How To Convince Me That 2 + 2 = 3 seems relevant.

The problem from that perspective isn't that guesswho's arguing; it's that he's awful at it. It's bad enough when posters provide weakmen of their enemies. No one's going to change minds by providing weakmen of the position they claim to be defending.

This was perhaps the most sophisticated attack on an open source repo ever witnessed, waged against an extremely vulnerable target, and even then it didn't come even close to broad penetration before it was stopped.

Witnessed is a little important, here; I'm not as sure as TheGrugq that this isn't the first try at this, if only because no one's found (and reported) a historical example yet, but I'm still very far from confident it is the first. And it did get really close: I've got an Arch laptop that has one part of the payload.

Despite being obvious it bears laboring that it wouldn't have been possible for our Hero Without a Cape to uncover it if he wasn't able to access the sources.

That's... not entirely clear. Visible-source seems to have helped track down the whole story, as did the development discussions that happened in public (though what about e-mail/discord?), but the initial discovery seems like it was entirely separate from any source-diving, and a lot of the attack never had its source available until people began decompiling it.

The tendency to overreact may very well serve to make open source more anti-fragile. Absolutely everyone in this space is now thinking about how to make attacks like this more difficult at every step.

Yeah, that part is encouraging; I've definitely seen places (not just in code! aviation!) where people look at circumstances like this and consider it a sign that there was enough redundancy, rather than enough redundancy for this one time. I think it's tempting to focus a little too much on the mechanical aspects, but that's more a streetlamp effect than a philosophical decision.

A Furry Cancellation

Mary E. Lowd, aka Ryffnah, has been removed from the Furry Writers' Guild, dropped by her publishers, and bounced as a Guest of Honour from the Oregon convention Furlandia, one week before the convention started. She's not one of the biggest furry writers, nor as skilled as someone like Tempo Kun, Robert Baird, Rukis Croax, or Kyell Gold, but she has had some success with out-of-fandom pieces at Baen, and her Otters In Space series was more normie-friendly than even other SFW writers' work (and even some normie anthropomorphic authors'). That must take some effort: what did she do?

It comes down to their decision to use AI-generated art as a tool in the creation of things such as book covers, the professional backlash that has accompanied it, and the general attitude towards this topic in the fandom.

Lowd has been open and explicit about her use of AI image gen, likely driven both by her husband's work in the field of AI research, and more seriously by the economics of the matter. To be fair, the FWG policy was officially published in January of last year, and was unofficially well-established for some time before; FurPlanet doesn't really do policy, but their stance has been just as open and explicit for nearly as long. There's some smoke-filled (er, smoke-free) backroom management that Happens for furcons, and I expect Lowd will find more than one or two doors have closed, here.

Businesses have policies reflecting their principles or interests or both, so it's not a huge surprise it came to this.

The interesting bit's that the next-to-last editions of her works had conventionally- or conventionally-digitally-produced art, some by pretty well-known artists like BlackTeagan. Emphasis on had: as is common in the book industry, the cover art belonged to her publisher; it may well fall off the planet outside of private collections. The current replacements aren't great, though it's not clear if that reflects the artistic limitations of Lowd's tools or her time crunch. She previously sold her newest books at convention tables with nice stickers marking the ones with AI art, and that's going to be a lot less common moving forward.

And she's not alone.

Of the exceptions I gave a year ago, e621 has officially shoved any AI-gen to the e6ai subsite, and while Weasyl hasn't yet updated its policies, it has updated its practices. Outside of AIgen-specific accounts on Twitter or servers on Discord, it can be hard to find the stuff. If you're a furry, you can avoid seeing AI art without even trying!... er... labelled AI art. Forget the awkward questions about how increasingly wide varieties of games integrate it into their graphics pipelines, or the not-so-clear division from more advanced 'brush' tech to some uses of AI-gen: the people coming up with the policies don't know how the tech works. They may never know anything other than Lowd's oh-god-I-gotta-get-a-new-publisher-whatever-works pieces well enough to recognize it.

Which is one potential end to the story, and to many stories, and a quiet one. Yet at the same time, it's an utterly frustrating ending: all of the worst fears of economic impact on lower-tier artists or of unlabelled AI spam overwhelming sincere creation, all the lost opportunities for conventional artists to focus more of their time on the parts of art they love or dedicated AI-genners to explore types of media that just wouldn't be practical for conventional artwork, all come true... and no one cares.

Seconding all of fishtwanger's recs.

Astro City is very much a comic fan's comic book, but it (and to a lesser extent, Common Grounds) is great not just by the low standards of superhero works, but more broadly as an exploration of the human spirit. Nextwave takes things the other direction, and despite that is the only Warren Ellis work I can stand -- hilariously zany, completely shredding the ideas of superheroic human spirit, and absolutely all the more enjoyable for it.

If you like Moore, Promethea isn't perfect in a lot of ways, but it's a generally underappreciated work.

Ursula Vernon's Digger is a weirder work, but fun.

For Eastern works, Kino's Journey is better-known for its anime (good) and light novel (outstanding), but the manga iterations are still pretty strong.

Sorry I'm a bit confused here, are you saying that this has already come to pass or are you offering this as a hypothetical?

I don't think it's already come to pass, or even that there'll be some clear demarcation between going-to-happen and has-happened, but it seems the likely result of netrunnernobody's hypothetical, where:

the year is 2045. no one can tell the difference between machine-made art and the work of masters. the supposed painter has been with his wife for twenty years without learning she was male at birth.

their opposition still exist, but are as rare as people without smartphones.

We're clearly not there right now, but it's definitely plausible, and maybe the timeline is pessimistic on one end or the other. Yet to actually resolve the conflicts and culture wars, the fighters would have to accept everything they wished for at the cost of not even mentioning quite a lot of what they really wanted.

People are still making money as professional artists and selling commissions online. AI has definitely impacted the market, but artists are still making money regardless. In fact the number of graphic design jobs on Upwork has increased since the release of DALL-E 2 and StableDiffusion.

I've seen that story bounce around a few times, but I'm not sure it's avoided the streetlamp effect. 'Jobs on Upwork' makes sense as a metric, but only because there's not much better visible data -- in addition to some number of these jobs now revolving around AIgen itself, they've also long been a saturated mix of a wide variety of roles, for which 'creating art' isn't all of it and might not even be a lot of it. More critically, even the more optimistic uses of AIgen would drop the price-per-job, either by reducing time investment or at least lifting some tedium, which could leave as many 'jobs' on the table from Upwork's perspective, but far fewer artists able to live off them.

I don't think we're at the point where the average manager puts together something in Midjourney, then pays a rando freelancer a pittance to fine-tune the piece and launder the corp's use, but if we were, it'd still look pretty good from Upwork's metrics.

Admittedly, I can't find better data, so I still have to recognize it.

The average non-specialist isn't going to mess around with running a local model, training custom LoRAs, using ControlNet and inpainting... it's still involved enough that it's reasonable to outsource the process to someone else.

Eh... outsourcing can remain, specialists can remain, and the market for artists can still fall apart. If a specialist exists who can output at thirty times the speed of a conventional artist, the work might even pay better than what the thirty people previously doing it earned combined... but it's going to mean thirty fewer artists in that field.

Edit: Uh, you might have been able to generate more discussion by waiting ~12 hours and posting this in the new week's thread?

Yeah, that's fair. I'm not sure this is really worth a ton of discussion, though, and not just for the reasons I didn't quote netrunnernobody's full hypothetical in the starting post.

Then again, the other post I'm ruminating on now is "Against Hyper-Dunbar Thinking", so maybe I'm just over-privileging the 'scream into the void' side of internet discussion.

Options:

  • Google's mainstay is Gemini (previously Bard), free(ish) for now if you have a Google account. Open it, start writing. Not private.

  • Anthropic pushes Claude. You can try Haiku and Sonnet, the lighter- and mid-weight models, for free, but Opus was more restricted last I checked. Tends to be one of the stronger fiction writers, for better or worse.

  • ChatGPT-3.5 is available for free here; 4.0 is a paid feature at the same site. The paid version is good for imagegen -- I think it's what a lot of Trace's current stuff is using. Flexible, if a bit prudish.

  • Llama is Facebook's big model, free. Llama 2 is also available for download and direct run, though it's a little outdated at this point.

  • LMSys Arena lets you pit models against each other, including a wide variety of the above. Again, not private. Very likely to shutter with little notice.

  • Run a model locally, generally through a toolkit like the OobaBooga webui (a minimal script-based route is sketched after this list). This runs fastest with a decent-ish graphics card, in which case you want to download the .safetensors version, but you can also use a CPU implementation for (slow) generation by downloading GGUF versions of some models. Mixtral 8x7B seems to be the best-recommended here for general purposes if you can manage the hefty 10+GB VRAM minimum, followed by SOLAR for 6GB+ and Goliath for 40+GB cards, but there's a lot of variety if you have specific goals. They aren't as good as the big corporate models, but you can get variants that aren't lobotomized, tune them for specific goals, and there's no risk of someone turning them off.
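For those who'd rather skip a full webui, a minimal sketch of the CPU/GGUF route using the llama-cpp-python bindings; the model filename is a placeholder for whatever GGUF you've downloaded:

```python
# pip install llama-cpp-python; download a GGUF build of your chosen model first.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,       # context window: the model's "memory", in tokens
    n_gpu_layers=0,   # 0 = pure CPU; raise (or use -1) if you have the VRAM
)

out = llm("Q: Explain tokens in one sentence.\nA:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"].strip())
```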

Most online models have a free or trial version, which will usually be a little dumber, limited to shorter context (think memory), or based on older data, or some combination of the above. Paid models may charge a monthly fee (eg, ChatGPT Plus gives access to DallE and ChatGPT4 for 20 USD / month), or they may charge based on tokens (eg, the ChatGPT API has a price per 1 million input and output tokens, varying by model). Tokens are kinda like syllables for the LLM, anywhere from a letter to a whole word (or rarely a couple of words), and are how the LLM breaks sentences apart into numbers. See here for more technical details -- token pricing is usually cheaper unless you're a really heavy user, but it can be unintuitive.
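A back-of-envelope comparison of the two billing schemes, with placeholder rates (check the provider's current pricing page; these numbers are made up for illustration):

```python
# Subscription vs token pricing, rough sketch. All rates are placeholders.
input_rate = 10.00     # USD per 1M input tokens
output_rate = 30.00    # USD per 1M output tokens
subscription = 20.00   # USD per month, flat

# Each turn of a chat re-sends the whole context as input tokens, which is
# why long conversations get disproportionately expensive under token pricing.
monthly_in, monthly_out = 2_000_000, 500_000   # a fairly heavy month
api_cost = monthly_in / 1e6 * input_rate + monthly_out / 1e6 * output_rate
print(f"API: ${api_cost:.2f}/mo vs subscription: ${subscription:.2f}/mo")
# -> API: $35.00/mo at these rates; lighter usage flips the comparison fast.
```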

For use:

  • Most models (excluding some local options) assume a conversational format: ask the program questions, and it will try to give (lengthy) answers. They will generally follow your tone to some extent, so if you want a dry technical explanation, use precise and dry technical terms; if you want colloquial English, be more casual. OobaBooga lets you switch models between different 'modes', with Instruct having that Q/A form and Default being more blank, but most online models can be set or talked into behaving that way.

  • Be aware that many models, especially earlier models, struggle with numbers, especially numbers with many significant figures. They are all still prone to hallucination, though the extent varies with model.

  • Long conversations, within the context length of the model, will impact future text; remember that creating a new chat will break from previous context, and this can be important when changing topics (see the sketch after this list).

  • They're really sensitive to how you ask a question, sometimes in unintuitive ways.
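On that context point: with a typical chat API, the model only ever sees the messages you send it, so 'history' is whatever you choose to resend, and a new chat is just a fresh list. A sketch in the openai-python style (model name is a placeholder):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "Answer in dry, precise technical prose."}]

def ask(question: str) -> str:
    # Every call re-sends the accumulated history; that IS the model's context.
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# Changing topics? Start a new history list, and the old context can no
# longer bleed into the answers.
```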

But also, I don’t see how it could be bad?

There's a thing in the Mormon church where they send teenagers to evangelize randos. It seems a little weird at first glance: everybody knows that they're not going to get any bites. But getting new recruits isn't the point -- the point is to absolutely demonstrate how bad non-Mormons can act.

That's probably not intended (either here, or in the Mormon church). Yet I wonder what, precisely, the proposer expects to have happen were he to ship cornfed rural folk (or even the Unnecessariat writer) to San Francisco, or vice versa.

I'm not sure if there's a specific term in the LDS community that separates it from more general missionary work, but it's sending 18-25ish young adults in suits on bicycles to knock on doors away from home, typically for stints of two years. TraceWoodgrains wrote about it from the perspective of someone then-inside the community who did the work in Australia, but I've seen it referenced by online and offline ex- and current-LDS.

Yes, ostensibly missionary work gets convert baptisms, and the official statistics come in at 4+ per missionary-year. Which is pretty respectable, even if it's an astounding amount of manhours to get there. But these numbers come about by merging the numbers from all jurisdictions, and by mixing explicit missionary work knocking on doors with talking to organically-developed friendships while on mission and with missionary service (such as volunteer work for the destitute).

Add in retention to baptism -- and from a non-LDS perspective, the LDS baptism requirements are a really low bar -- where knock-on-door numbers are awful and the entire program sells itself on members talking to or encouraging investigators that they found through personal efforts, and it turns into a wash pretty quickly for a lot of jurisdictions.

I don't think there are good public numbers for baptism-per-missionary by mission or country, but at least if your missionary work was recent, I'd really guess you were probably well above-average for your mission region.

The cynical view on Rumspringa is more that it shoves younger Amish out to see how weird "the English" are and how little they like it (akin to forcing someone caught sneaking a puff of a cigarette to smoke several in a row, knowing that the nicotine would be unpleasant at that dosage), rather than a hazing: a person on Rumspringa can often run into trouble, but they're not interrupting Troubles' soap operas.

The... samurai and their military leadership? At least in Japan.

Meiji- and Edo-era peasants (and especially hinin, who were somewhere between Indian Dalits and the American homeless) had extremely minimal rights, at the same time that the samurai class had an explicit right to strike those who offended their honour, a rule that was of significant relevance and controversy in an incident involving Westerners that Clavell references. (Tbf, especially 1600s-era social and economic stratification meant that people sympathetic to the peasants or, more often, the merchants were often writing the histories.)

But that didn't stop peasant uprisings from happening: Chichibu is similar in time to Tai-Pan and Gai-Jin, and Jōkyō is the best-known early Edo period peasant uprising that would have fit for Shōgun.

I’ve never understood that though. These people basically have a very expensive hobby and generally need to be told that.

For the most part, that's true for writers: even outside of the furry fandom, it's hard to beat minimum wage -- MorlockP has had three pretty successful works, and also a lot of commentary about how badly writing can pay. There are some furry writers who manage to supplement their income better than a minimum wage job would, but they tend to also be mixing art in (eg Rick Griffin, Rukis Croax) or riding the commission train hard (eg Amethyst Mare, Joshiah).

For artists, that's less true. There's a surprising number of people who can pull in low six figures through furry commission work, and while that's the top 1%ish of artists, that's in no small part because most artists don't want to make it a full-time job or a job at all, preferring to augment their more stable W2 income (eg Accelo) or just keep demand reasonable. The fandom is just heavily driven by artists -- while organizers and administrators are the 'kings' of their respective websites or conventions, an overwhelming majority of interest and more importantly cash moving around is driven by visual art (and comics, and games, etc containing visual art). And artists have been pushing to ban AI art in many contexts, with some success, seeing it as a direct threat to their income.

Why do hobbyist writers care what artists think, outside of cases where they're one and the same? FurPlanet's FuzzWolf commented at length a few years ago about the importance of a good cover artist, not just for quality or visibility, but because they will be able and willing to put your name out there. It's a marketing and networking expense, and even if it won't necessarily break even for hobbyist writers (though FurPlanet does order and pay for commissions itself, not just in Lowd's case, and presumably isn't doing so out of the goodness of its heart), the hobbyist writer can often get artwork that they'd want anyway. Furry writers are often, if not always, furries themselves, after all.

In many cases, artwork that they couldn't get otherwise: many bigger furry publishers have good enough relationships with well-known artists that they can jump a commission queue or get in contact with artists that don't do open commissions at all. Lowd almost certainly couldn't have gotten that BlackTeagan piece on her own for Nexus Nine, for a few different reasons; Gre7g Luterman's deal with Rick Griffin for Haven Celestia cover art is little different, but almost certainly a benefit on Luterman's side. And for obvious reasons it's one that isn't available to any writer who even hints at using AIgen.

If it is a hobby, why throw away a good part of the enjoyment to save a couple hundred bucks, when you're spending weeks or months?

There's some legal messiness about the standard of causation, but in an environment with any serious level of social trust, the Crumbleys would fall fast into the sphere where no one looks that closely at it, even had they just fallen down the stairs. Even gunnies whose literal jobs involve poking at the law agree on the moral question for this specific case. I'd be interested to know how consistently parents of teenagers who drive drunk are held criminally responsible, but I dunno that the data is really available in meaningful detail; guns are different enough, and it'd still be a good arg in favor of tightening up the law.

Part of that fall-through-the-cracks is because Michigan's statutes were pretty wonky: conviction for improper storage of a firearm with a minor would have been far more clear-cut, but those statutes didn't really clearly exist in 2021.

The court of appeals did, in fact, try to spell this one out as good-for-this-ride-or-worse-only:

Finally, we share defendants’ concern about the potential for this decision to be applied in the future to parents whose situation viz-a-viz their child’s intentional conduct is not as closely tied together, and/or the warning signs and evidence were not as substantial as they are here. But those concerns are significantly diminished by several well-established principles. First, the principle that grossly negligent or intentional acts are generally superseding causes remains intact. We simply hold that with these unique facts, and in this procedural posture and applicable standard of review, this case falls outside the general rule regarding intentional acts because EC’s acts were reasonably foreseeable, and that is the ultimate test that must be applied.13 Second, our decision is based solely on the record evidence, and the actions and inactions taken by defendants despite the uniquely troubling facts of which they were fully aware. And this point is important, as although the judiciary typically recognizes that a decision’s precedent is limited by the facts at issue, it is particularly true when the court expresses that limitation.

The trouble's that there's not much social trust. The Crumbleys are going to prison for a decade because their kid had hallucinations and intrusive thoughts that the parents blew off, and that's extremely bad. What if he'd just written a lot about depression, and they'd ignored that? If he'd had the same problems, but not gotten sent to the principal's office the same day? He was a 15-year-old they allowed to have effective control of a handgun; would that change if he was over 18? 21? 25? They didn't lock (or 'locked' with 0-0-0) firearms. If they'd used a cheap 20-USD trigger lock that doesn't actually work, would that have broken the chain of causation?

These are problems for any serious statute where the caselaw involves a ton of phrases like 'reasonably foreseeable', but most serious statutes don't have a sizable lobby pushing for (and often getting!) laws enforcing blanket criminal consequences in related contexts. The parade-of-horribles where someone is criminally liable because 'obviously' the seller knew this guy shouldn't have a gun -- he shot people! -- is an implicit goal for the Brady Bunch. I'll give Rov_Scam props for stating outright "a number of requirements that seem onerous but that's the point", but that only makes Rov honest; it doesn't help with the general problem.

Yeah, that's fair. There's definitely an unreasonable push toward a dichotomy of toy-or-career in everything, not just in the writing or arts sphere, but everywhere from electrical engineering to machinework to plastic fab to web design. I'm trying to get a post together talking about that in the context of FIRST, but it's a serious problem and undermines a lot of social behavior across a lot of fields.

I do think it's a broader issue than no-FUD; the internet has pushed a lot of fields to a point where it's reasonable to see outreach as the most impactful option, shut-up-and-multiply style, and even if some people do turn away from the Omelas that making that choice creates, the people inviting you in will be the ones who bit that bullet.

((That said, a lot of people who do write for supplementary wages in the furry fandom, and in many fields like TTRPGs, don't go through traditional publishing; FurPlanet is more of a print-on-demand and storefront facilitator, along with doing some ISBN bullshit. But even though they're really operating at print counts of 50-150, you'd have to do some digging to realize that.))

Against A Purely HyperDunbarist View

Worlds for FIRST is in a week.

For those unfamiliar with the organization, For Increasingly Retrobuilt Silly Term For Inspiration in Science and Technology runs a series of competitions for youth robotics, starting from a scattering of Lego Mindstorms-based FLL competitions for elementary and middle schoolers, through the mid-range 20-40 pound robots of FTC that play in 2v2 alliances across a ping-pong-table-sized space, up to FRC for high schoolers, running 120-pound robots in 3v3 alliances around the space of a basketball court. Worlds will have thousands of teams, spread across multiple subcompetitions. (For a short time pre-pandemic, there were two Worlds, with all the confusion that entailed.)

If you’re interested, a lot of the Worlds competition will be streamed. And a lot of both off-season and next-season competitions and teams are always looking for volunteers.

The organization’s goal... well, let’s quote the mission statement:

FIRST exists to prepare the young people of today for the world of tomorrow. To transform our culture by creating a world where science and technology are celebrated and where young people dream of becoming science and technology leaders. The mission of FIRST is to provide life-changing robotics programs that give young people the skills, confidence, and resilience to build a better world.

There’s a bunch of the more normal culture war problems to point at. How goes the replacement of the prestigious Chairman’s Award with Ignite Impact? If not that, complain at least that it’s a missed opportunity on the level of POCI/POCI for replacing a bad name with a worse one. How do you end up with events playing the PRC’s theme song before the US national anthem?

There's even internal culture war stuff, which may not make a ton of sense to outsiders. Does the move away from commercial automotive motors to built-for-FIRST and especially brushless motors privilege teams with more cash, or compromise safety or fair play? Should regional competitions, which may be the only official field plays small teams get, also accept international competitors? Should mentors white-glove themselves, should they only do so during official competition events, or should the possibility of the Mentor Coach be abolished?

But the biggest question in my mind is how we got here.

Worlds competition is an outstanding and massive event, with an estimated 50k-person attendance at a ten-million-plus square foot convention center. And it’s a bit of a football game: there’s a lot of cheering and applause, and a little bit of technical work. There will be a number of tiny conferences, many of which will focus on organizational operations like running off-season events. People network. That’s not limited to Worlds itself, though the dichotomy is more apparent there: there might be one or two teams per regional competition that have a custom circuit board on their robot, but I'd bet cash that the average regional bats under 1.0 for number of teams with custom polyurethane or silicone parts.

Indeed, that football game is a large part of how teams get to Worlds. The competition operates as a distributed tournament, where teams that win certain awards may elect to continue to the next event in a hierarchy. The exact process and which awards count as continuing awards are pretty complex and vary by location (especially post-COVID), but at the FRC level, the advancing awards prioritize two of the three teams that won a local competition's final, then the team that has done the most recruitment and sponsoring of FTC or FLL teams over the last three (previously five) years, and then the team that has done the most for the current year. (Followed by the most competent Rookies, sometimes, and then a whole funnel system rolling through more esoteric awards.) In addition to the inherent randomness of alliance field play, there's a rather telling note: the 'what have you done for FIRST today' award, if won at the Worlds level, guarantees an optional invite to every future Worlds competition. By contrast, teaching or developing esoteric skills or core infrastructure is an awkward fit for any award, usually shoved into the Judge's Award, which, plus 3.50 USD, won't buy you a good cup of coffee at Worlds.

There’s reasons it’s like this, and it’s not just the Iron Laws of Bureaucracy, or the sometimes-blurry lines between modern corporate infrastructure and multi-level marketing. The organization hasn't been hollowed out by parasites and worn like a skinsuit (at least not in this context): this is the sort of goal that the founders and first generation would have considered, and do consider, a remarkable victory. I’m not making the Iscariot complaint, because it’s not true.

FIRST couldn’t exist in the form it does without these massive events and the political and public support they produce, not just because you wouldn’t hear about any smaller organization, but because the equipment and technology only works at sizable scale. Entire businesses have sprung up to provide increasingly specialized equipment, FIRST got National Instruments to build a robotics controller that resists aluminum glitter a little better, even the LEGO stuff has some custom support, and they can only do so because an ever-increasing number of teams exist to want it. SolidWorks, Altium, dozens of other companies donate atoms and/or bits on a yearly basis; the entire field system for FRC wouldn’t work without constant support and donation by industrial engineering companies. WPI might devote a couple post-grad students to maintaining a robotics library without tens of thousands of people using it, but I wouldn’t bet on it. States would not be explicitly funding FIRST (or its competitors) unless those programs can show up on television and have constituents that can show up at a state politician’s door.

Those demands drive not just how FIRST operates today, but what its interests are looking toward the future, not just in what it does, but in what it won’t do. From a cynical eye, I wouldn’t say with certainty that FIRST would drop ten community teams for a school system buy-in, or twenty for a state program, but I wouldn’t want to be on the community team for any of those hard choices. There is no open-source motor controller or control board available for FIRST competition use, there’s no procedure available to present one, and there won’t be. There’s a lot of emphasis on sharing outreach tricks, a little on sharing old code or 3d models, and a lot of limits on providing skills.

Because throughout this system, the most impactful thing you can do is always getting more people. It’s not Inspiring, it’s not Chairmanny Impactful, but that's what those awards are, with reason. Shut up and multiply: the math, in the end, is inevitable.

And I’m going to deny it.

There's a story that goes around in the FIRST sphere, where one of FIRST's founders bargained or tricked Coca-Cola into sponsorship in exchange for developing some other, more commercial technology. The exact form and valence tend to vary with who tells the story, whether to highlight the speaker's anti-capitalist frame, to gloss over some of the frustrations with the Coca-Cola Freestyle (tbf, usually more logistic and maintenance than with the pumps themselves), or to wave away the rough question of whether it paid off.

But that last point is a bit unfair: Solving Problems In Extreme Poverty is the sort of difficult and low-odds environment where high-variance options make sense to take, and you should expect a high-variance low-odds option to fail (or at least not succeed wildly) most of the time, and at least it wasn't as dumb an idea as the lifestraw. Maybe (probably!) enough of the steps that combine to keep FIRST running fall into the same category.

I'm hoping teaching kids isn't a low-odds environment. And ultimately, most volunteers and teams and sponsors signed up more for that than for the flashing lights and the fancy banners. But teaching, in any manner involving true interaction, cannot be done at the scales and in the directions that turn a roll of the dice from gambling into a variance strategy. It's difficult enough as a mentor to remember all the names of the students and families for even a moderately-sized FRC or FTC team; few in a team that "supports 128" teams (not linking directly: these are teenagers) can name every one or even a majority. These organizations have, by necessity, turned to maximizing how many opportunities they present to their affiliates, without much attention to what that opportunity is. Few take the full argumentum ad absurdum where the recruitment exists solely to get more recruiters, but they’ve not left that problem space behind, either.

((There are other nitpicks: the same economies of scale that make these answers work eliminate many less-difficult problems whose presence is necessary to onboard and upskill new learners, and the focus on bits over atoms breaks in similar ways that the outreach-vs-teaching one does.))

Dunbar proposed an upper limit to how large a social group the human mind readily handles. There's a lot of !!fun!! questions about how well this will replicate, or how accurate the exact number is, or what applicability it has for a given level of interaction: suffice it to assume some limit exists, that some necessary contact increments the counter at some level of teaching, and that it can't possibly be this high. At some point, you are no longer working with people; you're performing a presentation, and they're watching; or you're giving money and they're shaking a hand. At best, you're delegating.

These strategies exceed the limit, blasting past it or even starting beyond it. They are hyperdunbar, whether trying to get fifty thousand people into a convention center, trying to sell ten thousand books, or chasing 8k-10k subscribers. There are things that you can't do, or can't do without spending a ton of your own money, without taking these strategies! Whether it's FIRST getting NI's interest, or writing or drawing, or building or playing video games full-time, you either take this compromise or another one, and a lot of the others are worse.

But they're simultaneously the most visible strategies, by definition. I do not come to kill the Indigestion Impact Award; I come to raise the things that aren't in the awards. Even if FIRST could support a dozen teams that emphasized bringing new technologies forward on a one-on-one basis, and if your first exposure to the program selected from teams randomly, you'd be much more likely to hear from the hyperdunbarists -- hell, it could well be that way, and I've just missed the rest of them.

Yet they are not the only opportunity. You don't have to be grindmaxxing. One team, even in FIRST, can share skills simply for the purpose of sharing skills. It’s why I volunteer for the org. You can go into an artistic thing knowing you want a tiny audience, or to cover costs and, if lucky, your time, or as a hobby that's yours first. It shouldn't be necessary to say that outright, as even in hyperdunbar focuses, most fail down to that point. Yet even in spheres where Baumol's cost disease hits hardest, it can be a difficult assumption to break.

Apologies, this post was a little more stream-of-consciousness than I'd intended. My thesis is more that:

  • Every organization, even an organization of one person, must select relative priorities of growth against other targets. For businesses, marketing and investment versus product development; for artists, growing your audience against growing your skills; for streamers, following the algorithm versus following your interests. For FIRST, that's part of the division between creating and expanding teams versus developing skills within those teams, but the pattern exists much more broadly.

  • Organizations that make that decision don't do so (only) because they've forgotten their original goal, or because they've been taken over by people who don't care about that goal, but because scale does genuinely have (distributed) benefit.

  • But that strategy has costs. Effective Altruists often focused on the degenerate cases, where outreach becomes almost all of what the organization does, or where outreach has hit decreasing returns while the organization is unwilling to admit that. But there are more honest problems, such as where this emphasis on outreach disconnects your metrics from your measures, or where successful growth can Baumol you as relative productivity varies with scale for individual parts of the organization.

  • More critically, it is a fundamentally risky approach at the level of individual people, while obfuscating the outcome of that gamble. If a consistent and always-applicable recruitment paradigm existed, you would already have joined, as would every adult in the county/country/planet; if you could keep in mind the outcome of your recruitment efforts, it wouldn't exceed your Dunbar number. Not everyone approached can be a recruit, not all recruits persist (or are even desirable), and so on: even successful orgs notorious for their outreach can spend hundreds of manhours to get four or five mid-duration recruits. Organizations can eventually make this work out by playing the odds across a large enough number of people, but individual actors within the organization cannot. Hyperdunbar non-outreach/recruitment efforts can similarly be risky and hide their outcomes: it's very easy to give a talk before a thousand people, and very hard to know what portion of the audience was listening the next day.

  • Because of their public-facing nature, difficulty of measurement, influence of the internet and media coverage (and, cynically, hyperdunbar organization efforts to dazzle or baffle their membership), these approaches are what are most visible when looking into most fields from outside, such that they seem like the only viable option.

  • But that framework is flawed; hyperdunbar efforts can and often do run face-first into a ditch.

  • Even some efforts touted as wildly successful can fade off at shockingly low numbers. That's not to call them failures for doing so, even if it's not always or often what the stated goals were. However, it shows a space where the tradeoffs necessary to try to scale to vast numbers weren't necessary.

  • And a lot of good can be done outside of hyperdunbar efforts.

In discussing Dunbar's number, it's not uncommon to see people divide matters into sub- and super-Dunbar counts (eg from 2013), and this can be useful in some contexts, but it also munges together a million-person org that's constantly growing (or trying to constantly grow) and a 200-person org that's doing minimal recruiting.

Hyperdunbar approaches do not merely require an organization to exceed Dunbar's number, but that the organization constantly be striving for growth, unconstrained and reaching for infinity or the nearest limit. They do not merely have the problem that superDunbar groups do of wildly changed social dynamics, but the constant churn makes even many of the social technologies built for superdunbar organizations break.

Apologies for coining a word for what may well already have an obvious term.