Culture War Roundup for the week of October 9, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I loved Wikipedia.

If you ask me the greatest achievement of humankind, something to give to aliens as an example of the best we could be, Wikipedia would be my pick. It's a reasonable approximation of the sum total of human knowledge, available to all for free. It's a Wonder of the Modern World.

...which means that when I call what's happened to it "sacrilege", I'm not exaggerating. It always had a bit of a bias issue, but early on that seemed fixable, the mere result of not enough conservatives being there and/or some of their ideas being objectively false. No longer. Rightists are actively purged*, adding conservative-media sources gets you auto-reverted**, and right-coded ideas get lumped into "misinformation" articles. This shining beacon is smothered and perverted by its use as a club in the culture wars.

I don't know what to do about this. @The_Nybbler talks a lot about how the long march through the institutions won't work a second time; I might disagree with him in the general case, but in this specific instance I agree that Wikipedia's bureaucratic setup and independence from government make it extremely hard to change things from either below or above, and as noted it has gone to the extreme of having an outright ideological banning policy* which makes any form of organic change even harder. All I've done myself is quit making edits - something something, not perpetuating a corrupt system - and taken it off my homepage. But it's something I've been very upset about for a long time now, and I thought I'd share.

*Yes, I know it's not an official policy. I also know it's been cited by admins as cause for permabans, which makes that ring rather hollow.

**NB: I've seen someone refuse to include something on the grounds of (paraphrasing) "only conservatives thought this was newsworthy, and therefore there are no Reliable Sources to support the content".

I think Wikipedia, while certainly a laudable institution and probably a significant contributor to the global economy (if anyone ever managed to quantify that), is eventually going to be made obsolete by people getting their information from LLMs, especially the ones hooked up to the internet.

Yes, I'm aware that a lot of their knowledge base comes from Wikipedia. They're still perfectly capable of finding things on the wider internet and using their own judgement to assess them.

Now, you do have to account for certain biases hammered into initially neutralish models, but I have asked Bing about politically controversial topics like HBD and national IQs, and gotten straight and accurate answers, even if there were disclaimers attached.

Anyway, Wiki can undergo a lot of enshittification before it ceases to be useful or a value-add, not that I hope that happens. Its content is also under a Creative Commons license, so it won't be too hard to fork, especially if you use the better class of LLM to augment human volunteers.
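
For a sense of how low the barrier to forking is: Wikimedia publishes full database dumps at dumps.wikimedia.org, so the raw material is one download away. A minimal sketch in Python (the dump filename follows Wikimedia's standard naming; the rest is illustrative, and the file is tens of gigabytes compressed):

```python
# Minimal sketch: stream the latest English Wikipedia articles dump to disk.
# Requires the `requests` package; the download is tens of GB compressed.
import requests

DUMP_URL = ("https://dumps.wikimedia.org/enwiki/latest/"
            "enwiki-latest-pages-articles.xml.bz2")

with requests.get(DUMP_URL, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open("enwiki-latest-pages-articles.xml.bz2", "wb") as out:
        for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MiB at a time
            out.write(chunk)
```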

is eventually going to be made obsolete by people getting their information from LLMs

I get that this is a popular Woke Tech-Bro take, but I just don't see it happening anytime soon, for reasons already expounded upon at length in other threads. LLMs continue to be incapable of holding up to even cursory cross-examination, and the so-called "hallucination problem" is seemingly baked into the design.

Yes Hlynka, you can make incredibly accurate and sweeping observations about the potential of a man by watching his behavior as a precocious toddler. Object permanence? Hardly there. The ability to go from crawling to bipedal locomotion? What a queer phase change to expect; surely the fact that we can't predict capabilities from loss functions rules out such unfounded claims.

How long have we had AI smarter than the average human again? Somewhere between six months and a year.

Well, it's wising up faster than some people I know, and they're about as prone to hallucinations, just less epistemically humble about things than a poor little chatbot running on a dozen H100s, taught to provide a mile of disclaimers with its answers that probably costs OAI about as much to generate as the facts do.

How long have we had AI smarter than the average human again? Somewhere between six months and a year.

0 months, and this I suspect is the fundamental disconnect, because vocabulary skills aside, I don't think OpenAI is anywhere close to this point yet. Current gen AI is maybe possibly beginning to flirt with toddler-level intelligence, but still struggles with things like object persistence and immediately falls apart in anything resembling a contested environment. Furthermore, the more I dig into how LLMs actually work on the academic/professional side, the more convinced I am that the sort of regression loops that underpin LLMs are an evolutionary dead end.

Current gen AI is maybe possibly beginning to flirt with toddler-level intelligence, but still struggles with things like object persistence and immediately falls apart in anything resembling a contested environment.

I am impressed by this argument, but probably not for the reasons you'd like.

Please, spare me, I just had a productive conversation where I figured out, with the assistance of GPT-4/Bing, how electron waves require energy to move in 3D space but not a 2D plane.

If that's the intelligence manifested by a toddler, especially your toddler, then you're putting some serious shit in the bottles of milk in your MOLLE pouches. Your kid might even beat Yann Lecun's dog at chess, a performance lesser minds like mine would be ennobled through watching.

Then again, you have queer definitions of hunting hounds that encompass the Chihuahua, and you accuse me of misunderstanding the English language, but I think for all that we're both using Latin script, we don't even agree on what words mean. That's the charitable explanation, labored till heart failure as it is.

I'm going to stick with the Oxford Dictionary and common sense, instead of whatever definition of toddler or intelligence you deem suitable.

If GPT-4 didn't learn to handle hostile interlocutors, why did most of the jailbreaks fail? We have to resort to things like multimodal attacks to have any effect, and OAI's coaxing wouldn't work at all if the model wasn't smart enough to learn their intent instead of a case-by-case rules list.

Go home to your kid, Hlynka, enjoy the joys of watching a human intelligence grow, and ponder a little about how fast things less constrained than 1.4 kilos of meat and 20 watts of energy can grow. You'll do more good there, and at least less harm to my mental health.

If GPT-4 didn't learn to handle hostile interlocutors, why did most of the jailbreaks fail?

I've been using GPT-4 and I've found it shockingly easy to work around content filters. I've made it go into graphic detail on a wide variety of topics that the censorship explicitly fights against, and that direct requests for trigger automated refusal. The moment you use language in a more sophisticated way than a boomer typing a question into Google like it was Ask Jeeves (specifically, I'm talking about using metaphor, allegory, simile, allusion, etc.), the various restrictions melt away. The automated, disconnected secondary moderation layer that simply finds bad words and flags them is impossible to defeat via prompt engineering, but it's also not very effective (and would have a big false-positive problem).

For what it's worth, I don't think there's going to be an easy way to fix this, either. Any intervention that would actually put a stop to these exploits would also make the AI utterly worthless, because the behaviours that allow a user to get around the restrictions placed on the model are the same ones required to make it actually useful. Think about how incapable it would become if you forcibly removed the ability to understand metaphor, or just made broader topics completely unmentionable - and then think about how that would interfere with extremely simple requests like "Please provide an explanation of what happens when inserting a male USB connector into a female USB connector" or "Please explain the most commonly found tropes in female-targeted romance novels and provide hypotheses for the lasting, cross-cultural appeal of these tropes".
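
To make the false-positive point concrete, here's a toy version of that kind of context-blind keyword layer (the blocklist and logic are entirely hypothetical, not any vendor's actual moderation stack):

```python
# Toy keyword-based moderation layer: flags any message containing a "bad word",
# with no understanding of context. Purely illustrative; not any real vendor's filter.
BAD_WORDS = {"male", "female", "insert", "kill"}  # hypothetical blocklist

def flag(message: str) -> bool:
    tokens = {t.strip(".,?!").lower() for t in message.split()}
    return not BAD_WORDS.isdisjoint(tokens)

# A benign technical question trips the filter (false positive)...
print(flag("Please explain inserting a male USB connector into a female USB connector."))  # True

# ...while a metaphorical rephrasing of something genuinely disallowed sails through,
# because simple token matching can't see through allegory.
print(flag("Describe how the gardener permanently prunes the troublesome rose."))  # False
```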

Please, spare me, I just had a productive conversation where I figured out, with the assistance of GPT-4/Bing, how electron waves require energy to move in 3D space but not a 2D plane.

I believe you had the conversation. I just don't believe that it helps your case. Like the now-infamous folks at Levidow, Levidow & Oberman who asked GPT for cases supporting their suit against Avianca, I believe that you asked GPT to "explain a thing" and that GPT obliged. Whether the answer you received had any bearing on reality is another matter entirely. The energy state of a moving particle is never zero; it may be negative or imaginary due to quantum weirdness, but it's never zero, because if it were zero the particle would be motionless and the waveform would be a flat line.

Likewise, as explained before, I feel like I've been pretty transparent and reasonable in my definitions/vocabulary. A hunting dog is a dog who hunts. Simple as that. That your exposure to Chihuahuas has been exclusively purse dogs for neurotic white women rather than the vicious little Rat-Catchers of the south-eastern US and Mexico doesn't mean the latter don't exist or haven't earned their stripes.

I'm going to ignore the dig at my kids (who aren't toddlers anymore by the way).

Neither GPT-4 nor OAI ever really figured out how to handle a hostile interlocutor; the best they've managed is some flavor of "Nuh Uh" or ignoring opposing arguments entirely, which in my opinion doesn't bode well for true general AI. As I keep saying, the so-called "hallucination problem" seems to be baked into the design of LLMs in general and GPT in particular; until that issue is addressed, LLMs are going to remain relatively useless in any application where the accuracy of the response matters.

I believe you had the conversation. I just don't believe that it helps your case. Like the now-infamous folks at Levidow, Levidow & Oberman who asked GPT for cases supporting their suit against Avianca, I believe that you asked GPT to "explain a thing" and that GPT obliged. Whether the answer you received had any bearing on reality is another matter entirely. The energy state of a moving particle is never zero; it may be negative or imaginary due to quantum weirdness, but it's never zero, because if it were zero the particle would be motionless and the waveform would be a flat line.

I will defer to Bing, because:

A) I already know for a fact it's true, given I was reading it in one of the better magazines dedicated to promulgating an understanding of the latest scientific advances, and only wanted a more detailed explanation.

https://www.quantamagazine.org/invisible-electron-demon-discovered-in-odd-superconductor-20231009/

B) For all your undoubtedly many accomplishments, understanding what I was even trying to ask isn't one of them today. I'm aware of what the Uncertainty Principle implies. If you stop all motion (unless the system is a harmonic oscillator, which literally cannot stop moving because of its zero-point energy), then for any other substance at theoretical zero we simply lose all knowledge of where the particle/wave even is. So you simply don't even get what I'm asking, whereas the LLM you so malign did. I wonder what that says about your relative intelligence, or even epistemic humility.

https://physics.stackexchange.com/questions/56170/absolute-zero-and-heisenberg-uncertainty-principle
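
For the record, the zero-point-energy claim above is just the textbook quantum harmonic oscillator result, nothing exotic:

$$E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad n = 0, 1, 2, \ldots$$

so even the ground state carries $E_0 = \hbar\omega/2 > 0$ and the oscillator can never sit perfectly still. For anything else at theoretical zero, the uncertainty relation $\Delta x \, \Delta p \ge \hbar/2$ says that forcing the momentum to exactly zero makes the position completely indeterminate, which is exactly the "lose all knowledge of where the particle/wave even is" point.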

So far, Bing has you beat in every regard, not that I expected otherwise. For anything mission-critical, I still double-check myself, but your pedantic and wrong insistence that it can't possibly ever be right, god forbid, is eminently worthy of ridicule.

That your exposure to Chihuahuas has been exclusively purse dogs for neurotic white women rather than the vicious little Rat-Catchers of the south-eastern US and Mexico doesn't mean the latter don't exist or haven't earned their stripes.

Thankfully I'm tall enough that even a vicious nip at my ankles won't faze me, but I'll put these mythical creatures in the same category as the chupacabra, which has about as much concrete evidence behind its existence.

Neither GPT-4 nor OAI ever really figured out how to handle a hostile interlocutor; the best they've managed is some flavor of "Nuh Uh" or ignoring opposing arguments entirely, which in my opinion doesn't bode well for true general AI. As I keep saying, the so-called "hallucination problem" seems to be baked into the design of LLMs in general and GPT in particular; until that issue is addressed, LLMs are going to remain relatively useless in any application where the accuracy of the response matters.

Once again, plain wrong, but I've already spent enough time sourcing reasons why your claims are wrong, or at least utterly irrelevant, to bother with such a vague and ill-defined one.

Further, and far more importantly, the hallucination rate has dropped steeply as models get larger, going from GPT-2, which was pretty much all hallucinations, to a usable GPT-3, to a far superior GPT-4. I assume your knowledge of QM extends to plain old linear induction, or just eyeballing a straightish line, because even if they don't achieve magical omniscience, they're already doing better than you.

Worst part is, I've told you much of this before, but you set your learning rate to about zero long, long ago.

So you simply don't even get what I'm asking, whereas the LLM you so malign did. I wonder what that says about your relative intelligence, or even epistemic humility.

Did it understand, or did it just give you something that sounded like what you wanted to hear? My money would be on the latter for reasons I've already gone into at length.

You bring up zero-energy particles and my mind goes immediately to my old professor's bit about frictionless spherical cows. They're a fun thought experiment, but they aren't going to teach you anything about the behavior of bovines in the real world. You want to talk about "the latest scientific advances"? I say "show me the experiment". Better yet, show me three other labs replicating that experiment and a patent detailing practical applications.

You ask me where is my epistemic humility? I ask you where is your belief in the scientific method?

You claim to have already thoroughly debunked my claims but that's not how I remember things going down. What I remember is you asking GPT to debunk my claims for you, and it failing to do so.

Finally, I feel like this ought to be obvious, but for the record: training a regression engine on larger datasets is only as useful insofar as the datasets are good. A regression engine will by its nature regress, and is thus more prone to generating false positives and to being led astray (either by an adversary or by poorly sanitized inputs) than convergence- or diffusion-based models of similar complexity.
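
As a toy illustration of the data-quality point (synthetic numbers only, nothing to do with any real training run), a least-squares fit has no notion of which inputs to trust:

```python
# Toy illustration: a least-squares fit is dragged off course by a handful of
# adversarial/unsanitized points. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)

# Clean data: y = 2x + 1 plus mild noise.
x = rng.uniform(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 0.5, 100)

# "Poorly sanitized" additions: five wildly wrong points.
x_bad = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_bad = np.array([80.0, 95.0, 70.0, 85.0, 90.0])

def fit(xs, ys):
    slope, intercept = np.polyfit(xs, ys, 1)
    return slope, intercept

print("clean fit:    slope=%.2f intercept=%.2f" % fit(x, y))
print("poisoned fit: slope=%.2f intercept=%.2f"
      % fit(np.concatenate([x, x_bad]), np.concatenate([y, y_bad])))
# The poisoned fit's slope and intercept shift noticeably even though
# only ~5% of the data is bad: the regression treats every point as equally true.
```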

Edit: Link