Culture War Roundup for the week of October 9, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I loved Wikipedia.

If you ask me the greatest achievement of humankind, something to give to aliens as an example of the best we could be, Wikipedia would be my pick. It's a reasonable approximation of the sum total of human knowledge, available to all for free. It's a Wonder of the Modern World.

...which means that when I call what's happened to it "sacrilege", I'm not exaggerating. It always had a bit of a bias issue, but early on that seemed fixable, the mere result of not enough conservatives being there and/or some of their ideas being objectively false. No longer. Rightists are actively purged*, adding conservative-media sources gets you auto-reverted**, and right-coded ideas get lumped into "misinformation" articles. This shining beacon is smothered and perverted by its use as a club in the culture wars.

I don't know what to do about this. @The_Nybbler talks a lot about how the long march through the institutions won't work a second time; I might disagree with him in the general case, but in this specific instance I agree that Wikipedia's bureaucratic setup and independence from government make it extremely hard to change things from either below or above, and as noted it has gone to the extreme of having an outright ideological banning policy* which makes any form of organic change even harder. All I've done myself is quit making edits - something something, not perpetuating a corrupt system - and taken it off my homepage. But it's something I've been very upset about for a long time now, and I thought I'd share.

*Yes, I know it's not an official policy. I also know it's been cited by admins as cause for permabans, which makes that ring rather hollow.

**NB: I've seen someone refuse to include something on the grounds of (paraphrasing) "only conservatives thought this was newsworthy, and therefore there are no Reliable Sources to support the content".

(in case you don't read the rant, first:)

I'm curious whether Wikipedia had fewer of the 'I got reverted by an editor with more clout' issues back in the early 2010s, and I'd welcome detailed writeups of how wikipedia is bad in current_year, if you have any. Long is fine, the awful wiki-reddit-thread format is fine too.

Okay.

sacrilege. I'm not exaggerating.

Okay, let's see what we're working with.

WP: No Nazis is a page about how nazis should be blocked purely for their viewpoint. It was created in 2018. It then goes on to describe a series of beliefs that are, more or less, what modern nazis believe. This is "purging rightists", in the sense that banning Stalinists from your forum would be "purging progressives".

Maybe the page is frequently used as a justification for banning conservatives. I wouldn't know. But I'd like to, before I start nodding along with the post.

And, yeah, they shouldn't ban nazis; the nazis are right about a lot more than one would expect. Still, 'um, what the fuck, ban nazis?', when applied to actual nazis rather than republicans, is a universal, cross-party value in America (... sure, slightly less so among the populist right in 2023), so it's not too damning that wikipedia adopted it.

adding conservative-media sources gets you auto-reverted

No it doesn't! Well, again, it does in the sense that adding progressive (e.g. Iranian state media) sources also gets you auto-reverted. But I sometimes read the National Review, the Daily Wire, the New York Post, the American Conservative, the Washington Examiner, the Spectator, the Dispatch, the Bulwark ... none are on that list. Is Fox?

And the sites on that list deserve to be. The Daily Caller, Breitbart, the Epoch Times, InfoWars, and Project Veritas really do constantly make things up. Unz and VDARE do too, unfortunately. They belong with Occupy Democrats, MintPress News, Grayzone, etc., which lean left and are all on that list too.

Again, maybe the National Review isn't treated as a RS. I don't see any evidence in the OP.

... Okay, I could leave it there, but I can also just ... look. So here we have perennial sources, which summarizes prior consensus on the reliability of various sources. Of the sources I listed, the WSJ is reliable, Fox is reliable for non-politics, the NR, AmCon, Examiner and Spectator are yellow/mixed, and the Daily Wire and Post are unreliable. There's a bit of bias here. But it also does reflect differences in accuracy and quality of fact-checking. I don't need to mention where the NYT lies, but it does so less, on average, than the Daily Wire. When Nate Silver or Scott note that the 'reliable media' is also the 'progressive media', they don't deny that those outlets are still more reliable on average. So ... most quality conservative media isn't auto-reverted.

and right-coded ideas get lumped into "misinformation" articles

I mean, they do give the lab leak its own article, and reference it in the origins article. But, yeah, they dismiss it and call it a conspiracy theory for essentially no good reason. The Times takes a different perspective, saying we might never get a clear answer. This ... clashes ... with wikipedia's "Some scientists and politicians have speculated that SARS-CoV-2 was accidentally released from a laboratory. This theory is not supported by evidence". They even cite the NYT article with the title "The Ongoing Mystery of Covid's Origin - We still don't know how the pandemic started. Here's what we do know — and why it matters"! Almost all of the wiki articles' statements are true, technically, but they're clearly misleading in tone.

Sacrilege, though? That's one thing. It's an entire encyclopedia. And it's maintained by people, who are fallible. What would the Vietnam War or War on Drugs articles look like in the 20th century?

Like, maybe you're right. It'd be more illuminating to go through a few specific incidents of bias, rather than just link some pages that readers may or may not have clicked on.

Maybe the page is frequently used as a justification for banning conservatives.

The big incident I'd point to there was actually the COVID thing in 2020; the "lab leak is a conspiracy theory and misinformation" line attracted a huge number of right-leaners who promptly got either banned for "misinformation" or yelled at sufficiently to leave. Up until that point I'd have considered it organically fixable, but that incident both crushed the right-leaners and gave the bureaucracy an excuse to be suspicious of any new or remaining ones.

I didn't mention the proximate cause of me taking it off my homepage, and perhaps I should have, but it was the fact that the Main Page's "did you know" section had a factoid about female advancement and another about non-white advancement every day for like 6 months and it just wore me down.

Wikipedia will soon eat its own in a purity spiral.

People always remember that the left takes over organizations, but they forget what happens afterwards. They become the victims of their own successful takeover. The information isn't as good. The place isn't as fun. A group of people that lives off of being victims must find an oppressor.

Scott Alexander already had to go through a minor version of this with the NYT article. The NYT talked to a wikipedia admin who had things to say about Scott Alexander and repeated those allegations; that wikipedia admin then went and edited the wikipedia article about Scott to effectively cite himself saying things about Scott.

They only barely reverted it when this bullshit became known within the wiki community. And the admin that did it? No punishments, no loss of admin status, not even a slap on the wrist as far as I know.

Scott is a heterodox leftist for the online world. But he is still very much a leftist in the real world compared to real voters. He is to the left of about 90-99% of the country on most issues.

They'll keep purging until it starts falling apart, and then they'll beg for and likely receive government funding to stay afloat.

So they'll eat their own, but then continue to operate mostly the same?

No, it won't operate mostly the same. New topics will be crappier and crappier.

There will be a point (if it's not there already) where people talk about 20XX wikipedia, and how it was so much better than today's. And if you see an article edit after 20XX, just ignore the edit and read the old stuff.

Wikipedia will trade on the remnants of their old reputation to gain funding.

Okay, but I guess the question is how crappy it can get before there is a viable alternative.

I think ChatGPT is rapidly becoming that alternative. Its politicization is probably the most important front in the culture war right now.

Okay, but I guess the question is how crappy it can get before there is a viable alternative.

I think the pithy, obvious answer is likely the correct one: "The limit does not exist."

Same way we use Google by appending Reddit to filter out SEO crap.

Someone will develop a tool that defaults Wikipedia to pre-20XX, and those of us in the know will have a knowledge boost over the average non-tech-informed person.
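For what it's worth, the raw ingredients for that tool already exist: the standard MediaWiki API will hand you the last revision of any article made before a date you choose. A minimal sketch in Python (the endpoint and parameters are the real API; the cutoff date and example article are just placeholders I picked):

```python
# Sketch of a "Wikipedia as of 20XX" fetcher using the MediaWiki revisions API.
import requests

API = "https://en.wikipedia.org/w/api.php"

def article_as_of(title: str, cutoff: str) -> str:
    """Return the wikitext of `title` as it stood at `cutoff` (an ISO 8601 timestamp)."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvlimit": 1,
        "rvstart": cutoff,   # start enumerating at the cutoff...
        "rvdir": "older",    # ...walking backwards, so we get the newest revision before it
        "rvprop": "content|timestamp",
        "rvslots": "main",
        "format": "json",
        "formatversion": 2,
    }
    page = requests.get(API, params=params).json()["query"]["pages"][0]
    return page["revisions"][0]["slots"]["main"]["content"]

# Example: the "Quartz" article as it stood at the start of 2015.
print(article_as_of("Quartz", "2015-01-01T00:00:00Z")[:500])
```

The hard part isn't the plumbing, it's deciding which cutoff you trust.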

Reminds me of the site we recently left...

I don't remember when I first started to suffer Gell-Mann amnesia with regard to Wikipedia. It must have been some years ago, but at some point I remember reading articles, even articles that Wikipedia itself touts as 'Good Articles', on subjects I have real expertise in and being shocked by just how much they distort and misrepresent.

In some cases there might be an excuse. Wikipedia itself reminds us that Wikipedia is not a guide to what is true. Wikipedia is a guide to what Reliable Sources say. Thus on any matter on which Reliable Sources are unreliable, Wikipedia is likely to be unreliable. Add in that Wikipedia's collective judgement as to which sources are Reliable and which are not can be badly skewed, and there are indeed Wikipedia articles that, while consistent with wiki policy, are collections of half-truths.

I still use Wikipedia a lot because it's convenient, but as a first heuristic, I find it's worth first asking whether there's any present controversy over a particular subject that's likely to be reflected in the sources that Wikipedia uses. If I have a question that has a clear, well-known answer about which there is no controversy, then I expect Wikipedia to be quite reliable. If I want to look up, say, some detail of mineralogy, I expect Wikipedia will be pretty good - as far as I'm aware there is no culture war around mineralogy. The page on, say, quartz looks quite solid. However, any matter of interpretation or controversy is likely to be much more tendentious. To take an example here, if I search for gender ideology on wiki I'll get redirected to a page that is substantially just a furious argument as to why it's wrong and doesn't exist. This is not particularly helpful to anyone who is sincerely curious as to what gender ideology is and whether or not it's true.

Another heuristic I tend to use is just looking at the sources themselves - Wikipedia uses Reliable Sources but often goes for low-hanging fruit in terms of what's accessible, rather than making good-faith surveys of information. This is most obvious when dealing with anything outside of the West (if you have any expertise in, say, pre-modern Chinese history or Indian history, Wikipedia is truly dire on those subjects), but also when dealing with any issue outside of the cultural understanding of most Wikipedia editors. I have been dismayed to read wiki articles on a religious topic (my academic specialty) and find footnotes pointing to Vice articles, or to sociological articles on some unrelated matter that merely mention the topic in passing. But unfortunately there isn't always a 'cheat' like this - sometimes there's no one thing to point to, but I read an article and it's simply... bad. It relies heavily on a small handful of unrepresentative sources, it takes highly tendentious claims at face value, and it's parochial to the point of being deeply misleading.

To take one example - if you read the wiki article on Quranism, you will probably get the impression that this is a real, semi-organised movement in Islamic countries with a healthy degree of support. None of this is true. 'Quranism' in practice is a pejorative term - people are accused of being Quranists, and almost never identify with it. Disputes over hadith and sunnah are very common in the Islamic world, and it's always easy to accuse a rival who has a different view of correct hadith of not believing in the hadith at all. What few people there are who do fit the label tend to be a tiny fringe with no real support. There is no real 'movement' or 'doctrine'. Indeed, Quranism is to a large extent a Western confection, an imaginary movement for a better, reformed Islam more amenable to Western values.

That's just one that I picked because it seems relatively obvious. If you read, say, the articles on different theories of the Atonement in Christianity, there is similarly a lot of very misleading information, but it's harder to explain if you're not already familiar with the terrain.

And that's where the Gell-Mann amnesia comes in - I can only assume that it's also misleading on matters that I'm not familiar with, but I can't tell. But perhaps even potentially distorted information is better than no information, at least if I try to exercise skepticism?

The article on Cultural Marxism/Frankfurt school was deleted and redirected to a "Cultural Marxism conspiracy theory" article by a self-avowed goddamned Marxist. Their profile on wikipedia stated their proud support for Marxism. This is the kind of shit that goes on in wikipedia. The inmates run the asylum.

I don't remember when I first started to suffer Gell-Mann amnesia with regard to Wikipedia.

I remember when I started.

It was when I read about Percy Schmeiser (and Monsanto Canada Inc v Schmeiser). Oddly enough, you don't even need outside knowledge to notice the slant and deception. For a very high-level overview, the article goes "Schmeiser claimed A. We are directly stating that B, C, and D happened. The court found that A, B, C, and D did not happen." Did they highlight that dichotomy? (no, they simply carried on) Did they think the court wasn't a sufficient source? (no, they cited it for the rejected claims) As far as I can tell, they simply cited half of a source and ignored that the defendant in a court trial might be biased.

Things that Schmeiser merely says simply "are", despite not convincing the authorities, while claims for which "Monsanto was able to present evidence sufficient to persuade the Court..." get sentence-long disclaimers.

Could you be more specific about what exactly is claimed by A–D?

  • " found volunteer canola plants " vs. " "none of the suggested sources [proposed by Schmeiser] could reasonably explain the concentration or extent of Roundup Ready canola of a commercial quality" ultimately present in Schmeiser's 1998 crop."
  • "Following farmers' long standing rights to save and use their own seed," vs. "Canadian law does not mention any such "farmer's rights";"
  • "When he then harvested that crop approximately 90 days later, the thought that any other part of his field may be contaminated with Roundup Ready canola was the furthest thing from his mind." vs. " he knew or ought to have known the nature of the glyphosate-resistant seed he saved and planted. "
  • (I don't actually have a good fourth point)

Ahh, I'd read only the article about the court cases. The article about Schmeiser does read rather more like a hagiography.

It is interesting to realise that I have higher expectations for internal consistency of Wikipedia articles than I do for inter-article consistency.

I loved Wikipedia

I was the same. Wikipedia gave me hope for humanity... I tried to figure out how to contribute, I donated money, etc. Until I brought up nuance in the wrong issue. It wasn't even about being right or wrong, my disagreement offended someone with more clout than me. :marseyshrug:

I think the only solution is to let it die. Point out every time a bad political edit is made, how terminally online the power users are... Show how awful wikipedia has become and it will hit a critical point.

Honest question: Do you believe that articles appearing in Breitbart or the Daily Mail are as likely to be accurate and well-sourced as articles appearing in the NY Times or Wall Street Journal?

Do you think the userbase who reads Breitbart and Daily Mail are as likely to care about truth and accuracy when spreading the news that appears in them as the userbase who reads NYT and WSJ?

To me, that list of deprecated sources doesn't sound like they said 'all conservative outlets are banned'; Fox News isn't on there, and frankly you can get as much anti-trans and pro-neo-liberal news as you could need from NYT and WSJ anyway.

To me that list just looks like 'outlets that are rabidly agenda-pushing in a way that ignores accuracy and facts whenever it's convenient, with a userbase that has a tendency to use their articles to push misinformation and inaccurate narratives online.' Like, I see meme posts from those sources on forums all the time, and the way they're presented is almost always inaccurate when you look into it.

'But why only conservative rags, where are the deprecated left-wing sources?'

With reference to our recent discussions about elites and institutions, I think this is a genuine difference in policy and aesthetic between the sides: the left retains a certain reverence for elites and intellectuals and journalism that forces their large and prominent news outlets to at least care a little about the truth, in ways that don't mean the things printed in them are always true or that they're not pushing an agenda, but does mean that the magnitude of the problem is much less.

And of course this is not to say that lies and misinformation aren't present in the left's rhetoric and narratives; just that when they are, they are more likely to come from social media or activist groups or other influencers rather than the type of large 'news' outlets listed on this page. I presume Wikipedia already didn't accept Facebook memes and Communist podcasts and PR statements from BLM and etc as sources, and those things are sort of the left's equivalent of Breitbart and Daily Mail.

I think you're right regarding an asymmetry, but as I said to someone else, I think the blanket auto-revert is an overly blunt instrument, and there are cases where only conservative media cares about X, which leads to X getting missed.

It wouldn't be so bad if Wikipedia still had a substantial base of established RW users that wasn't subject to the auto-revert, but No Nazis and the mess over COVID have basically extirpated it.

Also, while Fox isn't on there that's something that's been debated at least once and IIRC several times.

(Sorry about the wait. For whatever reason, your comment didn't show up in my notifications until today.)

'But why only conservative rags, where are the deprecated left-wing sources?'

There are a few of those. PressTV, Sputnik/RT, Occupy Democrats, Grayzone, CGTN, Mint Press News. There are a bunch of other progressive news sites that aren't treated as great sources but aren't explicitly deprecated either.

auto-reverted

Which of these sources do you object to auto-reversion on? The Daily Caller is the only one I see that I don't really think should be there. The rest are an assortment of sites that really do have incredibly flexible relationships with facts, unless I'm missing one. I'm generally sympathetic to the position that left-aligned media control is a big problem, I certainly think it's objectionable that this list doesn't also include trash rags like HuffPo, but I don't actually think VDARE or WorldNetDaily constitute good primary sources for an encyclopedia.

Wikipedia's "Waukesha Christmas parade accident caused by an SUV" article still has no motive listed even after they finally changed the name to "christmas parade attack." Because none of the acceptable sources mentioned the attacker's motives.
The media filter absolutely helps the BLM-ACAB-pronouns powerusers and mods bias the articles, even though a lot of the right wing sites on the list are trash.

There is, however, a "Republicans pounce" section intended to smear anyone who thinks perhaps the motive was race:

The Anti-Defamation League (ADL) reported that the contents of Brooks's alleged Facebook account, which contained "Black nationalist and antisemitic" viewpoints, and his crime, were exploited by white supremacists in order to push racist and antisemitic conspiracy theories, claiming Brooks's attack was racially motivated, that he killed his victims specifically because he hated white people, and that Jewish people were attempting to cover up the incident. Law enforcement did not give a motive for the attack.

First killer trucks, then killer SUVs - why is nobody tackling the problem of murderous automobiles?

That kind of reporting really was glaringly obvious: "a truck drove into a parade". The truck drove itself? No? Somebody was driving it? Who? By first accounts, I was under the impression that the brakes failed or something, a tragic accident. Not a deliberate act.

I still don't understand exactly why Darrell Brooks did what he did. I'm not sure anybody really knows. The guy seems to be, if not a career criminal then darn close to it, not exactly the smartest ever, and prone to being drunk/high and beating his girlfriends.

It would be far worse if Fox were on there, certainly (there have been a bunch of debates about adding it). I think full-blown auto-reversion is a very blunt instrument, though, and there are legitimate reasons to use those sorts of places as a source for e.g. "what conservative news thinks" (even the Wikipedia policy admits that), so there are inherently bias issues with blocking IPs from doing that.

I think Wikipedia, while certainly a laudable institution and probably a significant contributor to the global economy, if someone managed to quantify that, is eventually going to be made obsolete by people getting their information from LLMs, especially the ones hooked up to the internet.

Yes, I'm aware that a lot of their knowledge base comes from Wikipedia. They're still perfectly capable of finding things on the wider internet and using their own judgement to assess them.

Now, you do have to account for certain biases hammered into initially neutralish models, but I have asked Bing about politically controversial topics like HBD, national IQs, and gotten straight and accurate answers, even if there were disclaimers attached.

Anyway, Wiki can undergo a lot of enshittification before it ceases to be useful or a value add, not that I hope that happens. It's also under a Creative Commons license, so it won't be too hard to fork, especially if you use the better class of LLM to augment human volunteers.

is eventually going to be made obsolete by people getting their information from LLMs

I get that this is a popular Woke Tech-Bro take, but I just don't see it happening anytime soon for reasons already expounded upon at length in other threads. LLMs continue to be incapable of holding up to even cursory cross-examination, and the so-called "hallucination problem" is seemingly baked into the design.

LLMs are already making lots of wiki type searches obsolete

The question isn't whether LLMs give true information, it's whether people will rely on them.

Yes Hlynka, you can make incredibly accurate and sweeping observations about the potential of a man by watching his behavior as a precocious toddler. Object permanence? Hardly there. The ability to go from crawling to bipedal locomotion? What a queer phase change to expect; surely the fact we can't predict capabilities from loss functions rules out such unfounded claims.

How long have we had AI smarter than the average human again? Somewhere between six months to a year.

Well, it's wising up faster than some people I know, and they're about as prone to hallucinations, just less epistemically humble about things than a poor little chatbot running on a dozen H100s, taught to provide a mile of disclaimers with its answers that probably costs OAI about as much to generate as the facts do.

How long have we had AI smarter than the average human again? Somewhere between six months to a year.

I wouldn't rate GPT-4 as being more generally intelligent than the average human. I'm not on Team Stochastic Parrot, but while it's better than a lot of people at a lot of things it's also got giant holes in its capabilities (there is more to general intelligence than ability to hold a conversation). In particular, I think GPT-4 in the Sydney/ChatGPT forms will not take over the world (99.9999%+) and probably the base model can't be wrapped into an agent that can take over the world (~99.9% - note that this is low enough that I do actually want it deleted).

How long have we had AI smarter than the average human again? Somewhere between six months to a year.

0 months, and this I suspect is the fundamental disconnect, because vocabulary skills aside, I don't think OpenAI is anywhere close to this point yet. Current gen AI is maybe possibly beginning to flirt with toddler level intelligence, but still struggles with things like object persistence and immediately falls apart in anything resembling a contested environment. Furthermore, the more I dig into how LLMs actually work on the academic/professional side, the more convinced I am that the sort of regression loops that underpin LLMs are an evolutionary dead end.

Current gen AI is maybe possibly beginning to flirt with toddler level intelligence, but still struggles with things like object persistence and immediately falls apart in anything resembling a contested environment.

I am impressed by this argument, but probably not for the reasons you'd like.

Please, spare me, I just had a productive conversation where I figured out, with the assistance of GPT-4/Bing, how electron waves require energy to move in 3D space but not a 2D plane.

If that's the intelligence manifested by a toddler, especially your toddler, then you're putting some serious shit in the bottles of milk in your MOLLE pouches. Your kid might even beat Yann Lecun's dog at chess, a performance lesser minds like mine would be ennobled through watching.

Then again, you have queer definitions of hunting hounds that encompass the Chihuahua, and you accuse me of misunderstanding the English language, but I think for all that we're both using Latin script, we don't even agree on what words mean. That's the charitable explanation, labored till heart failure as it is.

I'm going to stick with the Oxford Dictionary and common sense, instead of whatever definition of toddler or intelligence you deem suitable.

If GPT-4 didn't learn to handle hostile interlocutors, why did most of the jailbreaks fail? We have to resort to things like multimodal attacks to have any effect, and OAI's coaxing wouldn't work at all if the model wasn't smart enough to learn their intent instead of a case by case rules list.

Go home to your kid Hlynka, enjoy the joys of watching a human intelligence grow, and ponder a little about how fast things not constrained to 1.4 kilos of meat and 20 watts of energy can grow. You'll do more good there, and at least less harm to my mental health.

If GPT-4 didn't learn to handle hostile interlocutors, why did most of the jailbreaks fail?

I've been using GPT-4 and I've found it shockingly easy to work around content filters. I've made it go into graphic detail on a wide variety of topics that the censorship explicitly fights against, and for which direct requests trigger automated refusal. The moment you use language in a more sophisticated way than a boomer typing a question into google like it was Ask Jeeves (specifically here I'm talking about using metaphor, allegory, simile, allusion etc.), the various restrictions melt like water. The automated, disconnected secondary moderation layer that simply finds bad words and flags them is impossible to defeat via prompt engineering, but also not very effective (and would have a big false positive problem).

For what it's worth I don't think there's going to be an easy way to fix this, either. Any sort of intervention that would actually put a stop to these exploits would also make the AI utterly worthless, because the same behaviours that allow a user to get around the restrictions placed on the model are the same ones required to make it actually useful. Think about how incapable it would become if you forcibly removed the ability to understand metaphor, or just made broader topics completely unmentionable - and then think about how that would interfere with extremely simple requests like "Please provide an explanation of what happens when inserting a male USB connector into a female USB connector." or "Please explain the most commonly found tropes in female-targeted romance novels and provide hypotheses for the lasting, cross-cultural appeal of these tropes".

Please, spare me, I just had a productive conversation where I figured out, with the assistance of GPT-4/Bing, how electron waves require energy to move in 3D space but not a 2D plane.

I believe you had the conversation. I just don't believe that it helps your case. Like the now infamous folks at Levidow & Oberman who asked GPT for cases supporting their suit against Avianca, I believe that you asked GPT to "explain a thing" and that GPT obliged. Whether the answer you received had any bearing on reality is another matter entirely. The energy state of a moving particle is never zero; it may be negative or imaginary due to quantum weirdness, but it's never zero, because if it were zero the particle would be motionless and the waveform would be a flat line.

Likewise, as explained before, I feel like I've been pretty transparent and reasonable in my definitions/vocabulary. A hunting dog is a dog who hunts. Simple as that. That your exposure to Chihuahuas has been exclusively purse dogs for neurotic white women rather than the vicious little Rat-Catchers of the south-eastern US and Mexico doesn't mean the latter don't exist or haven't earned their stripes.

I'm going to ignore the dig at my kids (who aren't toddlers anymore by the way).

Neither GPT-4 nor OAI ever really figured out how to handle a hostile interlocutor; the best they've managed is some flavor of "Nuh Uh" or ignoring opposing arguments entirely, which in my opinion doesn't bode well for true general AI. As I keep saying, the so-called "Hallucinations problem" seems to be baked into the design of LLMs in general and GPT in particular; until that issue is addressed, LLMs are going to remain relatively useless in any application where the accuracy of the response matters.

I believe you had the conversation. I just don't believe that it helps your case. Like the now infamous folks at Levidow & Oberman who asked GPT for cases supporting their suit against Avianca, I believe that you asked GPT to "explain a thing" and that GPT obliged. Whether the answer you received had any bearing on reality is another matter entirely. The energy state of a moving particle is never zero; it may be negative or imaginary due to quantum weirdness, but it's never zero, because if it were zero the particle would be motionless and the waveform would be a flat line.

I will defer to Bing, because:

A) I already know for a fact it's true, given I was reading it in one of the better magazines dedicated to promulgating an understanding of the latest scientific advances, and only wanted an explanation in more detail.

https://www.quantamagazine.org/invisible-electron-demon-discovered-in-odd-superconductor-20231009/

B) For all your undoubtedly many accomplishments, understanding what I was even trying to ask isn't one today. I'm aware of what the Uncertainty Principle implies. If you stop all motion, then, unless the system is a harmonic oscillator (which literally cannot stop moving because of its zero-point energy), for any other substance at theoretical zero we simply lose all knowledge of where the particle/wave even is. So you simply don't even get what I'm asking, whereas the LLM you so malign did. I wonder what that says about your relative intelligence, or even epistemic humility.

https://physics.stackexchange.com/questions/56170/absolute-zero-and-heisenberg-uncertainty-principle
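For reference, the textbook results being argued over here are just the position-momentum uncertainty relation and the harmonic oscillator's ground-state (zero-point) energy; nothing exotic:

```latex
% Standard quantum mechanics, not specific to anything in this thread:
\Delta x \,\Delta p \ \ge\ \frac{\hbar}{2}
\qquad\text{and, for the harmonic oscillator,}\qquad
E_0 = \tfrac{1}{2}\hbar\omega \;>\; 0
```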

So far, Bing has you beat in every regard, not that I expected otherwise. For anything mission critical, I still double check myself, but your pedantic and wrong insistence that it can't possibly ever be right, god forbid, is eminently worthy of ridicule.

That your exposure to Chihuahuas has been exclusively purse dogs for neurotic white women rather than the vicious little Rat-Catchers of the south-eastern US and Mexico doesn't mean the latter don't exist or haven't earned their stripes.

Thankfully I'm tall enough that even a vicious nip at my ankles won't faze me, but I'll put these mythical creatures in the same category as the chupacabra, which has about as much concrete evidence behind its existence.

Neither GPT-4 nor OAI ever really figured out how to handle a hostile interlocutor; the best they've managed is some flavor of "Nuh Uh" or ignoring opposing arguments entirely, which in my opinion doesn't bode well for true general AI. As I keep saying, the so-called "Hallucinations problem" seems to be baked into the design of LLMs in general and GPT in particular; until that issue is addressed, LLMs are going to remain relatively useless in any application where the accuracy of the response matters.

Once again, plain wrong, but I've already spent enough time sourcing reasons for why your claims are wrong, or at least utterly irrelevant, to bother for such a vague and ill-defined one.

Further, and by far more importantly, the hallucination rate has dropped steeply as models get larger, going from GPT-2 which was pretty much all hallucinations, to a usable GPT-3, to a far superior GPT-4. I assume your knowledge of QM extends to plain old linear induction, or just eyeballing a straightish line, because even if they don't achieve magical omniscience, they're already doing better than you.

Worst part is I've told you much of this before, but you've set your learning rate to about zero, long long ago.

So you simply don't even get what I'm asking, whereas the LLM you so malign did. I wonder what that says about your relative intelligence, or even epistemic humility.

Did it understand, or did it just give you something that sounded like what you wanted to hear? My money would be on the latter for reasons I've already gone into at length.

You bring up zero energy particles and my mind goes immediately to my old professor's bit about frictionless spherical cows. They're a fun thought experiment but aren't going to teach you anything about the behavior of bovines in the real world. You want to talk about "the latest scientific advances"? I say "Show me the experiment." Better yet, show me three other labs replicating that experiment and a patent detailing practical applications.

You ask me where is my epistemic humility? I ask you where is your belief in the scientific method?

You claim to have already thoroughly debunked my claims but that's not how I remember things going down. What I remember is you asking GPT to debunk my claims for you, and it failing to do so.

Finally, I feel like this ought to be obvious, but for the record: training a regression engine on larger datasets is only as useful as those datasets are good. A regression engine will by its nature regress, and is thus more prone to generating false positives and being led astray (either by an adversary or by poorly sanitized inputs) than convergence or diffusion-based models of similar complexity.

Edit: Link

...is eventually going to be made obsolete by people getting their information from LLMs, especially the ones hooked up to the internet.

For things that are uncontroversial and just require ELI5 explanations, this will probably be an improvement. For things that are even the slightest bit controversial, turning the information source and how it's written into more of a black box than the current Wikipedia situation is apt to be pretty terrible for people's information diets. Existing sources like ChatGPT are heavily modified to deliver what I would most accurately describe as the "midwit lib" answer to many questions. Trying to get factually accurate information that doesn't include endless hedging like, " I must emphasize the importance of using respectful and appropriate language when discussing social issues and vulnerable populations" is already like pulling teeth. This isn't a big problem in and of itself, but if most people come to believe that they're actually getting accurate and authoritative answers there, this is going to be pretty bad. There's already enough, "ummm actually, that's been deboonked" without people relying on regime-influenced AI to deboonk for them.

I do not see this as an insurmountable problem; while the "politically incorrect" open-source models still lag behind SOTA, eventually they'll be good enough to give you accurate answers about contentious queries, looking at both sides of the argument, assessing credibility and the suppression of inconvenient facts, and so on.

I'm not claiming it'll be perfect, but it might well be better than Wiki when it comes to redpills, and even Wiki is still doing a good job of covering more mundane general knowledge that nobody has a vested interest in messing with.

Things like Bing Chat or ChatGPT with plug-ins already source their claims where appropriate, if a person is too lazy to peruse them, then I invite you to consider how much epistemic hygiene they observe when it's a human telling them something.

What I envision is something akin to an automated meta analysis of relevant literature and commentary, with an explicit attempt to perform Bayesian reasoning to tease out the net direction of the evidence.

This is already close to what LLMs do. GPT-4 has seen claims of the Earth being flat in its training corpus, yet without massive prompt engineering, it will almost never make that claim in normal conversation. It finds that the net weight of evidence, especially from reputable sources, strongly supports Earth being round. This is a capability that is empirically observed to improve with scale: GPT-2 was beaten by 3, which was beaten by 4.
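As a toy illustration of the kind of evidence-weighting I mean (entirely made-up numbers and a hypothetical per-source credibility weight; this is a sketch of the idea, not a claim about how any actual LLM works internally):

```python
import math

# Hypothetical inputs: each source reports a likelihood ratio
# P(report | claim true) / P(report | claim false),
# plus a 0..1 credibility weight discounting how much we trust that source.
sources = [
    {"name": "peer-reviewed study", "likelihood_ratio": 8.0,  "credibility": 0.9},
    {"name": "news article",        "likelihood_ratio": 3.0,  "credibility": 0.6},
    {"name": "anonymous blog post", "likelihood_ratio": 0.25, "credibility": 0.3},
]

def posterior(prior: float, sources: list) -> float:
    """Bayesian update: add credibility-weighted log likelihood ratios to the prior log-odds."""
    log_odds = math.log(prior / (1 - prior))
    for s in sources:
        log_odds += s["credibility"] * math.log(s["likelihood_ratio"])
    return 1 / (1 + math.exp(-log_odds))

print(f"P(claim) after weighing the evidence: {posterior(0.5, sources):.2f}")  # ~0.89
```

The interesting (and hard) part, which the sketch just assumes away, is getting the likelihood ratios and credibility weights right in the first place.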