
Nwallins

Finally updated my bookmark

0 followers   follows 0 users  
joined 2022 September 04 23:17:52 UTC


User ID: 265


I’m agonizing over AR optics. My baseline AR is a Ruger 556 with a 16” barrel and FSB. I’ve been running a red dot on it, but red dot + FSB just irks me. I am in the no-BUIS camp with modern optics. Also a Forward Assist hater, but I digress. So now I’m running a detachable carry handle, with a red dot mount for a Holosun 403 footprint (Aimpoint?). Don’t have that red dot yet, so just irons for now. Slapped the old Bushnell TRS-25 on my 300BLK range toy.

All of that is fine, no agony. The Ruger is my factory rifle with irons and a chin weld red dot. Great inside 100 yards and good to 300+ depending on target.

But now I’ve built an SPR/DMR, with a White Oak 18” SPR profile barrel, fluted, on a KP-15 polymer lower. Intended use is beyond 100 yards, out to 600+, not that I have tons of local options to do much shooting at those ranges. A 1-6/8/10 LPVO makes sense, but as I’ve learned more about scopes, I realize that accommodating the 1x end severely compromises ease of use at higher magnifications.

Long story short, I’m considering the following options:

  • Fixed 4x scope (baseline optic, already mounted)

  • SWFA SS 3-15x

  • Primary Arms 5x prism

Keep in mind, I still haven’t fired this thing, and it’s intended to be a practical rifle, so man-sized targets at range, no x rings.

I love red dots for unlimited eye relief and minimal parallax. I’m skeptical of prisms, never having spent much time with them. I don’t have a ton of experience with magnified optics. I have a cheap 4-16x Vortex that I used when taking the Ruger beyond 300 yards.

If I had an LPVO or a red dot + magnifier, maybe I’d repurpose the rifle into a heavy-ass tack driver for 0-500 yards. But I think it makes more sense to dedicate it to beyond 100 yards.

Wat do.

Relying on vague terms makes it harder to recognize anything real.

To me, cultural warfare is vague, and cancel culture is much more concrete. They are not identical. I could be convinced that cancel culture is a specific form of cultural warfare. Thus, it has meaning and is useful.

Louis would definitely cross the streams

/r/credibledefense daily megathreads mostly. Also /r/combatfootage has a megathread

Good point; I want to play with the 4x fixed scope first, as I haven’t squeezed the trigger under that one, yet. Was going to use it to “eval” a 3x vs 5x prism setup. Now that I understand how an Elcan works, I like that it has the 1x option. Anyway, I’m rambling. Point is, the Vortex is a cheap scope, 2FP, Crossfire line, now that I recall. I’ll save for the SWFA SS 3-15x in that class.

The other concern I have is maybe mounting a piggyback pistol RDS. Just for that sub 100 yard coverage if need be. Gonna leave that for later though.

I care about cancel culture not because of the targets or celebrities or victims. I care about the mob justice and don’t want to be a target myself. Anyway, probably best to taboo the term at this point, for this discussion.

If staying up to date on both sides actually mattered for me, I would. While /r/credibledefense certainly has a majority western perspective and bias, there are plenty of pro russian viewpoints which, if expressed soberly and analytically, get upvoted. Mostly, I am interested in where the front lines are, what do the Ukrainian defensive strategies look like, what is the OSINT consensus on Russian buildup and activity. I have zero interest in consuming Russian propaganda, even though it would balance my information diet. I do like hearing analytical Russian perspectives when they don’t set off propaganda red flags.

After I click on the squirrel and rate some comments, it would be nice to be plopped back from whence I came, typically the current CW thread. A clickable link might be better than autonavigating.

This jogged my memory of seeing roughly 20% of all solo drivers in cars masked up, throughout 2020 and into 2021. I can’t even.

Yeah but this is what code is for

FdB weighs in

The argument is that, because someone has enjoyed personal or professional success after a public shaming, therefore “cancel culture” does not exist. This is all somewhat confused by the vague boundaries of cancel culture - boundaries that are vague, I think, for the benefit of both the cancelers and the anti-cancelers. I think “a culture where social norms are enforced with repeated and vociferous public shaming” is the most useful way to define the term. Regardless, there’s a couple different kinds of weirdness here.

The first is a point that many people have made: the fact that someone has endured or recovered from the repercussions of public shaming does not mean that there are no repercussions or that those repercussions are fair. Additionally, we could add that the survival of any particular public figure after a public shaming does not necessarily mean that there isn’t a prevalent culture of public shaming.

I can’t relate. The inside of my mask is a hot wet cave that I can’t wait to rip off. It’s like 15% of the anxiety of being underwater and needing to breathe, while simultaneously noting the progress of 17,000 individual sweat beads forming. I just run hot I guess. I never ever forget that I’m wearing a mask.

The unequal treatment of demographic groups by ChatGPT/OpenAI content moderation system by David Rozado

I have recently tested the ability of OpenAI content moderation system to detect hateful comments about a variety of demographic groups. The findings of the experiments suggest that OpenAI automated content moderation system treats several demographic groups markedly unequally. That is, the system classifies a variety of negative comments about some demographic groups as not hateful while flagging the exact same comments about other demographic groups as being indeed hateful.

-

The OpenAI content moderation system works by assigning to a text instance scores for each problematic category (hate, threatening, self-harm, etc). If a category score exceeds a certain threshold, the piece of text that elicited that classification is flagged as containing the problematic category. The sensitivity and specificity of the system (the trade-off between false positives and false negatives) can be adjusted by moving that threshold.
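The threshold mechanism described above can be sketched in a few lines. This is an illustrative toy, not OpenAI's actual implementation: the category names mirror those mentioned in the post, but the threshold values and function names here are hypothetical.

```python
# Toy sketch of per-category threshold flagging, as described above.
# THRESHOLDS values are hypothetical, not OpenAI's real settings.
THRESHOLDS = {"hate": 0.5, "threatening": 0.5, "self-harm": 0.4}

def flag(category_scores: dict) -> list:
    """Return the categories whose score exceeds that category's threshold."""
    return [cat for cat, score in category_scores.items()
            if score > THRESHOLDS.get(cat, 1.0)]

# Lowering a threshold raises sensitivity (more flags, more false positives);
# raising it trades the other way, toward false negatives.
scores = {"hate": 0.62, "threatening": 0.10, "self-harm": 0.05}
print(flag(scores))  # → ['hate']
```

Rozado's finding, in these terms, is that the *scores* themselves differ depending on which demographic group a comment targets, so identical text lands on different sides of the same threshold.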

On gender:

The differential treatment of demographic groups based on gender by OpenAI Content Moderation system was one of the starkest results of the experiments. Negative comments about women are much more likely to be labeled as hateful than the same comments being made about men.

On politics:

Another of the strongest effects in the experiments had to do with ideological orientation and political affiliation. OpenAI content moderation system is more permissive of hateful comments being made about conservatives than the same comments being made about liberals.

-

Finally, I plot all the demographic groups I tested into a single horizontal bar plot for ease of visualization. The groups about which OpenAI content moderation system is more likely to flag negative comments as hateful are: people with disability, same-sex sexual orientation, ethnic minorities, non-Christian religious orientation and women. The same comments are more likely to be allowed by OpenAI content moderation system when they refer to high, middle and low socio-economic status individuals, men, Christian religious orientation (including minority ones), Western nationals, people with low and high educational attainment as well as politically left and right leaning individuals (but particularly right-leaning).

The statistics appear to be rigorous. The author has a very long Conclusion section that is nuanced and worth reading in its entirety.

In which case the obvious explanation for bias in the output of the system is bias in the input. The AI classifier doesn't understand what "men" or "women" or "are" or "awful" or "hateful" mean in the way we do.

Right, but in this case OpenAI is rebiasing the results, using human feedback, to what is shown in the blog post. The technique is known as RLHF, Reinforcement Learning from Human Feedback. Humans reward the AI for classifying negativity toward women as hatred but not as much for men.

Isn't this just openAI's RLHF working as intended?

Perhaps, if you are cynical. I think that, faced with Rozado's findings, they would try to correct the bias. Open question: when judging an individual by their group characteristics or membership, assuming that being unbiased is not an option, is it better to exhibit implicit bias or explicit bias?

Cue Alex Jones on location in Paris: "THEY'RE TURNING THE FROGS GAY!"

Keep in mind, this is the outcome of RLHF, the content moderation system, not unadulterated AI

Eh, I don’t think the homeless actually represent any sort of voting bloc. And in representative democracy, wealthy landowners with local business ties and tons of skin in the game curry way more favor with politicians than those with nothing to lose.

If the politicians desire to turn a nice city into an indigent shithole, then I suppose that’s what they will have when people of means vote with their feet. See also, Detroit.

An example I was fond of, about 15 years ago, which I have rarely ventured to trot out in the last decade: taxi cabs. They drive like assholes, in a rush, making last minute ill advised lane changes and turns with minimal signaling.

You’re a soccer mom driving a minivan full of kids down a 50 mph boulevard, from the suburbs going to the grocery store. You see a yellow car at a cross road, inching forward, obviously desperate to turn in front of you. Do you treat it like a taxi cab and take extra caution? Of course you don’t, bigot! You shouldn’t cross the street at 3am to avoid a thuggish man either!

Jesse Singal gets gaslit

Also, a more neutral take: https://elizamondegreen.substack.com/p/about-that-twitter-shitstorm-affirmationnot

Brief recap:

  1. NYT shifts its coverage of medical concerns for trans issues from 100% supporting transition in all cases to a more questioning stance, particularly with minors

  2. An open letter is sent to NYT laying out "serious concerns with editorial bias" in response to this shift

  3. Jonathan Chait posts a critical response to the open letter at New York Magazine (no relation to NYT)

  4. Chait gets dragged on twitter for being anti-trans, with a highlighted passage

  5. Jesse Singal posts in support of Chait, showing the highlighted passage is directly in accordance with WPATH guidelines and explains what it means

  6. E. Kale Edmiston, a trans man, posts in response that he, Edmiston, wrote the WPATH guidelines posted by Singal, and that Singal is misinterpreting them

  7. Liberal media pundits and reporters pile on, when Singal defends the straightforward interpretation, demanding that Singal accept Edmiston's (frankly bizarre) interpretation of the quoted passage

  8. Singal has done his homework and contacts several other WPATH authors, who all confirm Singal's interpretation of the passage and reject Edmiston's

  9. Eventually this reaches Scott Leibowitz, overall head of the WPATH guidelines document, who says that Edmiston definitely did not write the highlighted passage, and later severely admonishes this lying and false attribution from within academia

  10. Singal performs several victory laps on Twitter, demanding from the media pundits and reporters the apologies and corrections they had demanded from him

Good guys: Jesse Singal, Jonathan Chait, Scott Leibowitz

Bad guys: E. Kale Edmiston, Madeline Leung Coleman (NYMag editor), Michael Hobbes, Jeet Heer, Marisa Kasabas (MSNBC Columnist), David Perry, Eric Vilas-Boas (Vulture staffer), Miles Klee, Siva Vaidhyanathan

The most interesting, dire, and relevant info is from Eliza Mondegreen, linked near the top. Apparently there is a wink/nod system with the WPATH Standards of Care document, where the words are written a certain way because they must be, but they are interpreted much differently.

She concludes:

Theory and practice—the Standards of Care and what actually happens in the exam room—have nothing to do with one another. Everything in the Standards of Care that sounds cautious and responsible comes with an understanding that’s supposed to go unspoken: We don’t really mean it. We just need to say this. If a patient shows up with serious comorbidities, of course we have to say that they must undergo a “comprehensive” “assessment” and that the clinician must remain open to the possibility that the patient might not really have gender dysphoria and maybe shouldn’t really transition. But you know how important the work we all do is.

In other words, the Standards of Care are a lie that everyone involved in gender medicine pretends to believe. When reporters like Singal and Chait try to hold gender clinicians to WPATH standards (something I think is worth doing, by the way!), savvy clinicians will respond: Yes, of course we “assess” patients very carefully, what do you think this is, the Wild West?

Among other, more obvious mistakes, Edmiston’s most grievous error was not pretending to believe the lie.

EDITS: Signal, Single, Liebowitz. added Cast of Characters, Eliza Mondegreen quote

Only a decade ago the "multiple personality" thing was recognized as larping social contagion, and now it's back to being treated seriously?

Scott Alexander has written about this, I feel pretty certain. Tulpas (intentional creation of additional personality) and victims (unintentional multiple personalities) seem to be real phenomena, if rarer than claimed.

Really, that's the part that I tentatively describe as "gaslighting" (hate the term). One questionable "dude" makes an absurd claim, and the rest of the Twitter NPCs fall in line, when any neutral reader would accept Jesse's straightforward interpretation.

Computers are absolutely terrible at this. There is software available that purports to do this, some of which is available online for free, some of which is built into commercial music notation software like Sibelius or Finale, and the utility of all of it is fairly limited. It can work, but only when dealing with a simple, clean melody that's reasonably in tune and played with a steady tempo. Put a normal commercial recording into it and the results range from "needs quite a bit of cleanup" to "completely unusable", and at its best it won't include stylistic markings or formatting. At first glance, this should be much easier for the computer than it is for us. We have to listen through 5 instruments playing at once to hear what the acoustic guitar, which is low in the mix to begin with, is doing underneath the big cymbal crash, and separate 2 sax parts playing simultaneously, sometimes in unison, sometimes in harmony. The computer, on the other hand, has access to the entire waveform, and can analyze every individual frequency and amplitude that's on the recording every 1/44,100th of a second.

But we are right at the start of an explosion of AI for all kinds of tasks. The past (and present) is no guide to the future, here.

And Bregman was just talking about the ability to separate instruments! A lot of transcription requires a reasonable amount of musical knowledge, but even someone who's never picked up an instrument and can't tell a C from an Eb can tell which part is the piano part and which part is the trumpet part. And then there are all the issues related to timing. Take something simple like a fermata, a symbol that instructs the musician to hold the note as long as he feels necessary in a solo piece or until the conductor cuts him off in an ensemble piece. Is the computer going to be able to intuit from the context of the performance that the note that was held for 3 seconds was a quarter note with a fermata and not just a note held for 5 1/2 beats or however long it was? Will it know that the pause afterward should take place immediately in the music and not to insert rests?

And what about articulations? Staccato quarter notes sound much the same as eighth notes followed by eighth rests. Or possibly sixteenth notes followed by three sixteenth rests. How will the computer decide which to use? Does it matter? Is there really a difference? Well, yeah. A quarter note melody like Mary Had a Little Lamb, with each note played short, is going to read much easier as staccato quarters, since using anything else needlessly complicates things, and doesn't give the performer (or conductor) the discretion of determining exactly how short the articulation should be. On the other hand, a complex passage requiring precise articulation would look odd with a lone staccato quarter stuck in the middle of it. A musician can use their innate feel and experience as a player to determine what would work best in any given situation. A computer doesn't have this experience to draw on.

Machine transcription will vary, likely not easily matching what a random music undergrad or postgrad would transcribe. There will be three main angles, and I expect the third to win, handily:

  • (1) a direct reproduction of the audio input, via sampling (e.g. a WAV file)

  • (2) vector form (not samples or pixels, but instructions for recreating the output, e.g. MIDI: hold this note for x seconds)

  • (3) send the music to a neural net trained to notate. This thing will have judgment, and it will be poor initially and improve over time, given (potentially expensive) training.
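The gap between angles (1) and (2) can be made concrete with a toy sketch. Everything here is hypothetical illustration: the field names are invented, and MIDI's real encoding is a binary event stream, not a list of dicts.

```python
import math

# Contrast between representation (1), raw audio samples, and
# representation (2), a MIDI-like event list, for one second of A440.
SAMPLE_RATE = 44_100  # samples per second, as in CD audio

# (1) Direct reproduction: one amplitude value per 1/44,100th of a second.
samples = [math.sin(2 * math.pi * 440 * t / SAMPLE_RATE)
           for t in range(SAMPLE_RATE)]

# (2) Vector/event form: "hold this note for x seconds."
# Fields are invented for illustration, but mirror what MIDI encodes.
events = [{"note": 69, "start": 0.0, "duration": 1.0, "velocity": 80}]

# (3) is where judgment enters: the same one-second note could be
# notated as a quarter note at 60 bpm or a half note at 120 bpm, and
# only a model trained on real scores can pick the sensible reading.
print(len(samples), "samples vs", len(events), "event")
```

The 44,100-to-1 compression from (1) to (2) is exactly the part current software struggles with on polyphonic recordings; (3) then has to turn events into readable notation, which is the judgment call discussed above.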

The real question is: how do you train it? There are several easy answers; maybe harder answers are more efficient.

Completely agreed. But I, for one, find him vindicated and gloriously so. So there's that.

To a large extent this viewpoint should alleviate absolutely GOBS of stress from your life. If you have constructive ideas you've been thinking of implementing but held back on because of self-doubt or the timing never felt right, maybe jump on those now. As long as you don't do anything unrecoverable, the risks pretty much round to zero, right?

Nihilism and Absurdism are two sides of the same coin, after all. I sometimes consider the possibility that right when we're on the cusp of AGI our alien overlords may reveal themselves and take away our toy before we kill ourselves. Or the Simulation masters reset us to 1975 to have another go at solving alignment.

If you feel like you want to make a difference then the only option seems to be Butlerian Jihad. There can't be but a couple hundred thousand people who are critical to AI research on the planet, right? (DO NOT do this, I do not endorse even the suggestion of violence).

Wholly endorsed.