
blooblyblobl

Battery-powered!

0 followers   follows 0 users  
joined 2022 September 04 22:46:30 UTC

User ID: 232


To whom would you rather trust your well-being:

  • The medical professional who has spent their entire life developing strategies to meticulously check over their work to ensure consistency and accuracy
  • The average medical professional (they're confident they just don't make that kind of mistake)

I think you are significantly overestimating the scope of the problem - her failure mode was losing points for questions she did not have time to answer, as opposed to answering questions wrongly, on a timed test with pencil and paper. This is demonstrably not a representative model of the real world, in which computers, colleagues, and the spoken word exist, variables may be named at one's pleasure, operators correspond to explicit and distinct positions on the keyboard, and you get at least half a decade of extra practice before they let you loose on the unsuspecting populace. Today, her learning disabilities are effectively non-issues; in fact, her meticulousness means she tends to catch mistakes made by others as well (which has made for some colorful stories).

It is precisely this kind of tractable problem, which only really exists in a pedagogical spherical-cow setting, that requires accommodations, as opposed to nebulous claims of racial or mental victimhood from the lazy, the conniving, or the otherwise unqualified comprising the median. The challenge, as it has always been, is telling them apart. Again, there's an argument to be made that it's not worth it to try, and it may even be a good one. But it's not open-and-shut.

My sister wouldn't have graduated college without the extra time provided by disability accommodations for dyslexia and dyscalculia. I spent an entire semester of her undergrad with her on video calls (as emotional support, and as someone she could trust would get the right final answer), watching her torturously drag herself through mandatory remedial physics and algebra classes that have never once been relevant to her professional endeavors, and I had a front-row seat to the frustration and exhaustion induced by learning disabilities in otherwise exceptional people. It takes her minutes to do problems I can do in my head - not because I'm any smarter, but because she literally can't read what the problem is asking without making symbol transposition/translation errors, and has to redo every problem about five times to arbitrate between the inevitable failed attempts.

That extra time let her squeak through the remedial courses with a passing grade. Years later, she's now a successful practicing psychiatrist, and I'm confident that several of her needs-based clients would say she has utilized her education for the betterment of society.

I also don't think this had anything to do with our parents pushing parenting duties onto teachers. For all their other flaws, not once did they ever abdicate any parental responsibilities. They pushed for disability accommodations because they wanted my sister to be given a chance to prove herself, and spent years researching and trying different approaches, alongside private tutors and disability specialists, at great personal cost, to help my sister over her hump. And it worked! And if the schools didn't give her extra time on her tests, she would have flunked out of college and it would have all been for naught.

I agree that the disability accommodation system is full of parents making their children someone else's problem, and this is probably the majority of its use now. There's a level-headed argument to be made that the cost to society of exploiting that system is way more than the benefit for the handful of people like my sister. I just want to point out that there are people benefitting from disability accommodations in a way that doesn't encourage learned helplessness later in life.

Just to be clear: you are making the argument that Trump is a compromised agent on behalf of Israel? You didn't say which foreign power you meant, and I'd like you to speak plainly so I can confirm I understand your argument.

Calling an adoptive child "my son" is cromulent in the majority of the contexts where it comes up, because a majority of the mind-independent facts about reality conveyed by the term (chiefly, the processes involved in parenting a child) are still highly correlated with the term's usage - and the cases where the distinction matters (medicine, childbirth, cultural/legal distinctions) come up infrequently enough that these contexts typically warrant a clarifying distinction (adopted son), if they're ever mentioned at all.

Calling my wife's mother "mother-in-law" could only be described as unintuitive in the sense that nothing is left to the intuition, because the obvious distinction between objective and intersubjective information is directly encoded in the term.

I'll grant that there are languages and cultures where the same term can be used for "mother" and "mother-in-law", or where it is inappropriate to refer to a ward as "my son", and these use cases feel unintuitive to someone brought up without these linguistic or cultural practices. But I suggest that those languages and cultures arrived at their way of expressing these relationships because some of the mind-independent facts about reality conveyed by the terms in those languages or cultures are also more or less relevant to communication in those languages or cultures. And what's relevant to communication in those languages or cultures has historically been a consequence of many evolutionary adaptations generated by divergent selective pressures, such as geography, resource availability, proximity to other cultures and languages, etc.

I think the extent to which the language is being stretched and skewed in your examples is greatly overstated. Compare with: calling an adoptive child or my wife's mother "my flesh and blood" isn't intuitive, because it's not correlated with the (much more specific) mind-independent facts about reality that this language usually implies. A tenuous argument can be made for the wife's mother, in the sense that a flesh and blood bond is formed through a biological child, but it's indirect enough to be unintuitive. For an adopted child, I can't imagine any usage other than simile or metaphor, which is again indirect enough to be unintuitive. Calling an adoptive child and my wife's mother (with the implied familial relations) "my flesh and blood" is quite a stretch for the language, and we must retreat to subjective experiences (how I feel about the emotional bonds I share with my family) or abstract metaphors (religious covenant) to make sense of it - or maybe it doesn't make sense, and it's a lie.

It is precisely the degree to which the language is stretched and skewed by a non-central usage, relative to the information conveyed by a central usage, that determines how likely we are to permit it into everyday parlance.

With all of that in mind, consider: I've been reading a bunch of your comments to get a better understanding of your model of honorary social statuses, and I think the choice of the word "honorary" adds an implied meritorious connotation that isn't actually present. In my model of communication, languages are locally-optimizing compression schemes for transmitting information, relying on a common set of shared mind-independent facts about reality and presumed-to-be-shared subjective experiences, preferences, and tastes; intersubjective contexts such as culture and law are transforms applied to the language to modify the correlation between terms and the set of objective and subjective information they compress. The primary driver of the evolution of language is communicative fitness, which tends to map more closely to things like efficiency or clarity, than to something like merit. This isn't to say that deliberate linguistic engineering is impossible, or even necessarily unusual; nevertheless, I think a lot of your default examples of "honorary status" are not some top-down special award conferred by society upon the edge cases which then filtered down into everyday parlance, but are instead "close enough" practical communicative terminology that eventually required special intersubjective considerations as the edge cases naturally bubbled up from everyday parlance and encountered gaps, contradictions, and disputes in existing cultural, legal, and societal frameworks. In other words, I think calling this phenomenon "honorary status" inverts cause and effect by implication of merit.

In most everyday conversations, we do not make a distinction between social truths (intersubjective), matters of personal taste or opinion (subjective), and mind-independent facts about reality (objective).

Right, because in most everyday conversation, we don't need to. The mind-independent facts about "adoption" and "women" have historically been well-correlated with the usage of the words in subjective or intersubjective contexts, independent of the society in question.

Islam has a different intersubjective analogue ("guardianship") for something that correlates with the same mind-independent facts about "adoption". No one considers this "lying", it's just different societal rules for the same fact pattern.

The transgender memeplex attempts to redefine the meaning of the intersubjective "woman" in a way that completely divorces the terms from the existing correlation with the objective "woman". Is this lying? No, it's just changing the rules about using one of the most common words in everyday parlance to render it objectively meaningless, such that it's indistinguishable from lying to anyone using the old intersubjective rules; while also expecting everyone to honor the inherited intersubjective rules about mislabeling, special interests, etc. that only exist because of the now-deprecated objective meaning; except now those inherited intersubjective rules should apply to subjective, unobservable mind states we can all change on a whim.

Again, while I don't think the average person will put it in those terms, they can probably notice the "lie by the old rules" part and the political maneuvering one step behind it, conclude that this is a scam, and refuse to engage.

Obviously, neither adoption nor transness are objective facts about reality

Claiming someone is "adopted" is a falsifiable claim about an event that occurred in reality. Unless your job is to legislate the edge cases of what constitutes "adoption", the so-called "fuzzy boundary" of what constitutes adoption is beyond the horizon of normal parlance.

Claiming someone is "a woman" has been, for the overwhelming majority of the term's historical usage, a falsifiable claim about someone's sex. Unless your job is to legislate the edge cases of what constitutes "a woman", the so-called "fuzzy boundary" of what constitutes a woman has previously been beyond the horizon of normal parlance.

In both cases, the obvious evidence that these words mean something closely reflecting reality is that mislabeling someone is somewhere between a joke and an insult. The accidental category error is so uncommon that deliberate category error is a meaningful signal in communication.

The transgender memeplex wants to expand the usage of the word "woman" to include unfalsifiable claims about someone's internal mental state. If your job is to legislate the edge cases of what constitutes "a woman", your job is now by definition completely arbitrary: how is it possible to draw the distinction, other than to fully accept or deny the dubious metaphysics that allows anyone to be anything in their imagination? For all other parlance, the meaning of "woman" is now decoupled from centuries of ordinary usage - this is less of a "fuzzy boundary" creeping in, and more a total erasure of the fundamental falsifiable claim at the heart of the word. In spite of all this, the transgender memeplex expects to inherit both the insult of mislabeling (without also inheriting the objective distinctions that made this mislabeling insulting in the first place) and the legal and social statuses and carve-outs for whichever sex is most convenient to their whims.

There's a clear, obvious distinction between the usage of words that make concrete claims about reality (but for a handful of exotic edge cases no one ever thinks about), and the usage of words in the transgender memeplex that erodes centuries of colloquial understanding in favor of obfuscating, homogenizing, and booby-trapping the terminology with definitions based on unfalsifiable internal mental states. I wouldn't call the latter "lying" per se, but I don't blame the average Joe for pattern matching demands for uncritical acceptance of unfalsifiable claims that overwrite common sense to something very close to "lying", particularly when these demands are brazenly accompanied by power grabs and political maneuvering. Motives aside, I think a lot of people instinctively consider anyone deploying this kind of rhetorical trickery to be either crazy or up to no good, and deny it legitimacy by refusing to participate.

Chinese Skinner box uses Western Monomyth to appeal to Western audiences? Say it ain't so...

I think you expect too much from a mass-market product.

Again, this only matters if they're leaving D-leaning districts. If they're being chased out of the tiny handful of R-leaning districts, this is just changing the letters after the R in the House seat.

If a mixture of R and D voters is leaving blue states, this dilutes red states - actually a substantial structural flaw in the Republican electoral map. The same is true if mostly D voters leave, until the incredibly unlikely scenario where enough D voters leave to change Senate elections in previously blue states.

If R-leaning voters are leaving blue states for red states, this only moves the House if the R-leaning voters are coming from House districts that weren't already R-leaning.

If R-leaning voters are leaving predominantly blue districts in predominantly blue states for predominantly red or purple states, that could create a House advantage - assuming it doesn't get gerrymandered away during redistricting.

There's a very narrow path to D municipal governance having any significant structural impact on elections. I think it's correct to suggest their greatest threat lies elsewhere.

In the former case, unless everything goes shockingly well for you, including many things over which you have no control, you run a significant risk of literally destroying your life. Some would argue the entire purpose of participating in civilization is to avoid needing to take that risk in the first place.

In the latter case, unless you somehow don't have neighbors, and unless you're certain that every person who will ever pass in front of your house won't call up the police for an unpermitted job, there's no such thing as "when no one is looking" - your neighbors voted for the city government. Even then, it may still come up if you ever sell the property, or try to get other work done.

The previous poster spells out examples of obvious, deliberate, unequal enforcement of the law that specifically targets the kind of noncompliance you're suggesting no one would enforce against. Are you seriously suggesting this is a bluff we should call? If so, you go first.

We are finger-countable years away from AI agents that can meet or exceed the best human epistemological standards. Citation and reference tasks are tedious for humans, and are soon going to be trivial to internet-connected AI agents. I agree that epistemological uncertainty in AI output is part of the problem, but this is actually the most likely to be addressed by someone other than us. Besides, assuming AI output is unreliable doesn't address the problems with output magnitude or non-disclosure of usage/loss of shared trust, both of which are actually exacerbated by an epistemically meticulous AI.

This sounds like a recipe for paranoid accusations of AI ghostwriting every time someone makes a disagreeable longpost or a weird factual error, and a quick way to derail subsequent discussion into the tar pit of relitigating AI rules.

If a poster's AI usage is undetectable, your "common knowledge" is now a "common misconception". Undetectable usage is unquestionably where the technology is rapidly headed. In the near future when AI prose polishing and debate assistance are widespread, would you rather have almost every post on the site include an AI disclaimer?

Edit: misread your original post - you arguably want to include AI disclaimers as bannable offenses. This takes the wind out of my sails on the last point... I'm going to leave it rudderless for now, might revisit later.

I think this sounds fine in principle.

But suppose you make that post, and it actually sucks, and you didn't realize. I've definitely polished a few turds and posted them before without realizing, these things happen to the best of us. Now what? Does subsequent discussion get derailed by an intellectually honest disclosure of AI usage, and we end up relitigating the AI usage rules every time this happens?

On the one hand, I'd like to charitably assume that my interlocutors are responsible AI users, the same way we're usually responsible Google users. I don't necessarily indicate every time I look up some half-remembered factoid on Google before posting about it; I want to say that responsible AI usage similarly doesn't warrant disclosure[1].

On the other hand, a norm of non-disclosure whenever posters feel like they put in the work invites paranoid accusations of AI ghostwriting in place of actual criticisms. I've already witnessed this interaction play out with the mods a few days ago - it was handled well in this case, but I can easily imagine this getting out of hand when a post touches on hotter culture war fuel.

I don't think there's a practical way to allow widespread AI usage without discussion inevitably becoming about AI usage. I'd rather you didn't use it; and if you do, it should be largely undetectable; and if it's detectable, we charitably assume you're just a bad writer; and if you aren't, we can spin our wheels on AI in discourse again - if only to avoid every bad longpost on the Motte becoming another AI rule debate.

[1] A big part of my hesitation for AI usage is the blurry epistemology of its output. Google gives me traceable references to which I can link, and high quality sources include citations; AI doesn't typically cite sources, and sometimes hallucinates stuff. It's telling that Google added an AI summarizer to the search function, and they immediately caught flak for authoritatively encouraging people to make pizza out of glue. AI as a prose polisher doesn't have this epistemological problem, but please prompt it to be terse.

My $0.02: non-critical appendix-style references to AI output are probably okay. Usage for generating discussion or argument should be banned. We do need a rule to match expectations between users and mods, to avoid encouraging excessive attempts at AI ghostwriting, and to reduce paranoid accusations of AI slop in place of deserved criticism.

  • The cost of generating AI content is so low that it threatens to trivially out-compete human content. The volume of output and the speed of processing by AI makes for an extremely powerful gish-gallop generator.
  • Unlike our resident human gish-gallop generators, nothing I say to the AI will meaningfully change its mind. AI can simulate a changed mind, but with substantial limitations and ephemeral results. Personally, the draw of the Motte is the symmetric potential to have my own mind changed and to change others' minds by sharing our own unique experiences and perspectives. (I am open to future AI advances that make debating the AI similarly engaging, but we're not there yet.)
  • Quoting books, blog posts, etc is an acknowledgement of the perspective and effort applied by the human being cited, regardless of topic-level value alignment. AI does not develop perspectives or apply efforts in ways that warrant social considerations (at least, not presently).
  • Quoting a source also serves as a natural bridge to further learning and discovery of the source for anyone interested. There can be valuable context, history, or interpersonal relationships surrounding the quote. In this sense, AI mostly generates shallow engagement opportunities. Where it could be more engaging (e.g. reference discovery or Google search replacement), I'd prefer to take recommendations from someone with skin in the social game.
  • Importantly, quotation is typically brief, poignant, and insightful. I'll grant that brief, poignant, and insightful are possible properties of AI output, but I've yet to see anything worth quoting by those criteria.
  • Pastebinning or spoiler-tagging AI output is an invitation for me to skip it. I'm okay with this for mentions or references, where there is already an implicit understanding that I may skip or summarize the content. I am not okay with "see my response [here](www.aislop.com)" replies.
  • I strongly agree with @SubstantialFrivolity that responding to a human with a wall of AI text creates an impression of "I can't be bothered, talk to my assistant instead." It's very rude. Critically, no amount of initial prevaricating about the effort you spent prompting, tweaking, and blessing the output makes this any less rude. On the other hand, if I can't tell if you used AI, you're likely using it well enough that I don't mind. It is in principle possible that I am already interacting with several longstanding AI characters and I just don't realize. The quality of AI output to date is not compelling evidence for this possibility. I also suspect that for each person successfully using AI to ghostwrite their posts, there would be ten other clumsy attempts that obviously fail. I feel that anything other than a blanket ban on AI ghostwriting is an invitation for people to push their luck, and will lead to more AI slop, more paranoid accusations of AI slop when mere slop is sufficient, and more moderation headaches as a result.
  • The growing pool of modhat "we didn't order you not to do this, but don't do this" posts on AI slop is a strong indication of an impedance mismatch between the expectations of mods and users, and of a need for unambiguous rules about how AI should or should not be used here.
  • I'm open to reviewing any rules made about AI posting in the coming years as AI gains increasing agency.

Aside: is $0.02 competitive for this amount of inference?

(especially networking or protocol development, where hardware developers love throwing in 'this next four bytes could be an int or a float' in rev 1.0.1a after you've built your entire reader around structs)

On behalf of hardware developers everywhere, I apologize. We didn't want to do that either, but when the potential new customer opines "what a nice piece of hardware you've got, if only it could take a float" and glances meaningfully at their suitcase full of cash... well, we shake our heads and roll up our sleeves.

Possibly relevant concurrent events: DJI (the Chinese drone company) lifts geofencing restrictions in the US: https://viewpoints.dji.com/blog/geo-system-update

My sister loves Dragonball, and has several impressively sized DBZ tattoos in normally visible places. She has a job which involves regularly interacting with black communities across the country. The tattoos are literally the first thing that comes up in any introductions, and they are apparently instrumental in quickly gaining approval and credibility with any and all black men under the age of 35.

She tells me a story about stopping to get gas somewhere in Georgia, when she suddenly hears someone shout from across the street: "IS THAT VEGETA!? ON YOUR LEG?!" She shouts back "yeah man," and three black guys working in a car shop across the street start popping off, they all drop whatever they're doing and walk across the street and have a 20 minute conversation about their favorite DBZ fights.

It's true that an entire generation of Hispanic kids grew up on DBZ, but they're not the only ones!