
Culture War Roundup for the week of May 22, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I feel compelled to note that the "another lawyer" (Steven Schwartz) was the listed notary on Peter LoDuca's initial affidavit wherein he attached the fraudulent cases in question. This document also appears to have been substantially generated by ChatGPT, given that it gives an impossible date (January 25th) for the notarization. Really undermines Schwartz's claim that he did all the ChatGPT stuff and LoDuca didn't know about any of it.

The thought that people put this much trust in ChatGPT is extremely disturbing to me. It's not Google! It's not even Wikipedia! It's probabilistic text generation! Not an oracle! This is insanity!

Could this be the long-awaited AI disaster that finally forces the reluctant world to implement the Katechon Plan?

Things are moving fast - while a few weeks ago only ugly fat nerds talked about this issue, now handsome and slim world leaders are raising the alarm.

I expected something like a mass shooting where the perpetrator turns out to have been radicalized by AI, but this is even better.

AI endangering our democratic rule of law and our precious justice system? No way.

Maybe, but as @astrolabia admits, doomers may be living on borrowed time, same as accelerationists. With every day more people learn that AI is incredibly helpful. Some journalists who haven't got the message yet are convincing the public that AI returns vision to the blind and legs to the paralyzed. «Imagine if you as my fellow product of hill-climbing algorithm were eating ice cream and the outcome pump suddenly made your atoms disassemble into bioweapon paperclips, as proven to be inevitable by Omohundro in…» looks increasingly pale on this background.

Yes, although every person who sees that GPT-4 can actually think is also a potential convert to the doomer camp. As capabilities increase, both the profit incentive and plausibility of doom will increase together. I'm so, so sad to end up on the side of the Greta Thunbergs of the world.

Even better, this is the sort of AI duplicity that will be even easier to detect and counter with weaker/non-AI mechanisms. While I don't necessarily go as far as 'all cases need analog filings', that would be a pretty basic mechanism for catching spoofed case-ID numbers. It's not even something that 'well, the AI could hack the record system' can address, because it's relatively trivial to keep duplicate record systems in reserve, including analog ones, to compare/contrast/detect record-manipulation efforts.

This is one of those dynamics where the AI power fantasy of 'well, the AI will cheat the system and fabricate whatever clearance it needs' meets reality and shows itself to be a power fantasy. When a basic analog trap would expose you, your ability to accrue unlimited power through mastery of the interwebs is, ahem, short-lived.

I don't think anybody was expecting ChatGPT to cheat the system like that. GPT-3 and GPT-4 aren't interesting because they're superintelligences, they're interesting because they seem to represent critical progress on the path to one.

This isn't a point dependent on ChatGPT, or on any other specific example that might be put in italics. It's a point that authentication systems exist, and exist in such various forms, that 'the AI singularity will hack everything to get its way' was never a serious proposition, because authentication systems can be, often already are, and will continue to be devised in such ways that 'hacking everything' is not a sufficient, let alone plausible, course to domination.

Being intelligent- even superintelligent- is not a magic wand, even before you get into the dynamics of competition between (super)intelligent functions.

Back in 2010 I toyed with the idea of calling into sports talk shows to fuck with them by asking if the Pittsburgh Penguins should fire Dan Bylsma and convince Jaromir Jagr to retire so that he could take over as head coach. Bylsma was coming off a Stanley Cup championship that he had guided the team to after being hired the previous February to replace Michel Therrien, but the Pens were going through a bit of a midwinter slump in January (though not nearly as bad as the one that had prompted Therrien's firing).

So the idea was ridiculous—that they'd fire a championship coach who hadn't even been with the team a full season, and replace him with a guy who wasn't even retired (he was 37 years old and playing in Russia at the time, but he'd return to the NHL the following season and stay until he was nearly 50) and had never expressed any interest in coaching. It was based entirely on a dream I had where I was at a game and Jagr was standing behind the bench in a suit, and it was the height of hilarity when friends of mine were under the influence of certain intoxicants.

So I asked ChatGPT "What was the source of the early 2010 rumor that the Penguins were considering firing Dan Bylsma and replacing him with Jaromir Jagr?" It came up with a whole story about how the rumor was based on a mistranslation of an interview he gave to Czech media where he said that he'd like to coach some day after he retired and the Penguins were one of the teams he was interested in, and the whole thing got blown out of proportion by the Pittsburgh media. Except that never happened, though I give it credit for making the whole thing sound reasonable. I've come to the conclusion that if you word your prompts in such a way that certain facts are presumed to be true, the AI will simply treat them as true, though not all of the time. For instance, it was savvy enough to contradict my claim that George W. Bush was considering switching parties and seeking the Democratic nomination in 2004.

This is a bizarre problem I've noticed with ChatGPT. It will literally just make up links and quotations sometimes. I will ask it for authoritative quotations from so-and-so regarding such-and-such topic, and a lot of the quotations will be made up. Maybe because I'm using the free version? But it shouldn't be hard to force the AI to trawl only through academic works, peer-reviewed papers, etc.

What's bizarre is people expecting a language model not to just make up data. It's literally a bullshit generator. All it cares about is that the text seems plausible to someone who knows nothing about the details.

I think there is a way to train the language model such that it is consistently punished for faking sources, even if it is, indeed, a BS generator at heart.

There's a ton of answers already, some bad some good, but the core technical issue is that ChatGPT just doesn't do retrieval. It has memorized precisely some strings, so it will regurgitate them verbatim with high probability, but for the most part it has learned to interpolate in the space of features of the training data. This enables impressive creativity, what looks like perfect command of English, and some not exactly trivial reasoning. This also makes it a terrible lawyer's assistant. It doesn't know these cases, it knows what a case like this would look like, and it's piss-poor at saying «I don't know». Teaching it to say that when, and only when, it really doesn't know is an open problem.
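To illustrate "it knows what a case like this would look like": here is a deliberately tiny toy sketch, nothing like a real transformer, that learns word-level bigrams over a few invented citation-shaped strings. Even this trivial model happily emits output shaped like a citation while naming cases that were never in its training data:

```python
import random

# Invented, citation-shaped training strings (not real cases).
corpus = [
    "Smith v. Jones, 512 F.3d 101 (2d Cir. 2008)",
    "Jones v. Acme Corp., 743 F.2d 330 (9th Cir. 1984)",
    "Acme Corp. v. Smith, 601 F.3d 77 (2d Cir. 2010)",
]

# Build word-level bigram transitions: the "model" only learns what a
# citation usually looks like, not which citations actually exist.
transitions = {}
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        transitions.setdefault(a, []).append(b)

random.seed(0)
word = "Smith"
out = [word]
for _ in range(8):
    nxt = transitions.get(word)
    if not nxt:
        break
    word = random.choice(nxt)
    out.append(word)

# The result is citation-shaped but very likely names a "case" that
# appears nowhere in the corpus -- interpolation, not retrieval.
print(" ".join(out))
```

A real LLM interpolates in a vastly richer feature space, but the failure mode is the same: the output is shaped like a citation, not checked against any reporter.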

To mitigate the immediate issue of hallucinations, we can finetune models on the problem domain, and we can build retrieval-, search- and generally tool-augmented LLMs. In the last two years there have been tons of increasingly promising ideas for how best to do it, for example this one.

It can't say "I don't know" because it actually doesn't "know" anything. I mean, it could return the string "I don't know" if somebody told it that in such and such a situation, this is what it should answer. But it doesn't actually have an idea of what it "knows" or "doesn't know". Fine-tuning just makes real answers more likely, but to make fake answers unlikely you would somehow have to make every potential fake text less probable than "I don't know" - and I'm not sure how that is possible, given infinitely many possible fake texts that aren't in the training set. You could limit it to saying only things which are already confirmed by some text saying exactly the same thing - but I expect that would severely limit the usability; a search engine already does something like that.

Can you say that you don't know, in enough detail, how a transformer (and the whole modern training pipeline) works, and thus can't really know whether it knows anything in a meaningful way? Because I'm pretty sure (then again, I may be wrong too…) you don't know for certain, yet this doesn't stop you from having a strong opinion. Accurate calibration of confidence is almost as hard as positive knowledge, because, well, unknown unknowns can affect all known bits, including values for known unknowns and their salience. It's a problem for humans and LLMs in comparable measure, and our substrate differences don't shed much light on which party has it inherently harder. Whether LLMs can develop a structure that amounts to the meta-knowledge necessary for calibration, and not just perform well due to being trained on relevant data, is not something that can just be intuited from high-level priors like "AI returns the most likely token".

What does it mean to know anything? What distinguishes a model that knows what it knows from one that doesn't? This is a topic of ongoing research. E.g. the Anthropic paper Language Models (Mostly) Know What They Know concludes:

We find that language models can easily learn to perform well at evaluating P(IK), the probability that they know the answer to a question, on a given distribution… In almost all cases self-evaluation performance improves with model size, and for our 52B models answers labeled with P(True) > 50% are far more likely to be correct as compared to generic responses…
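Concretely, "well calibrated" just means stated confidence tracks realized accuracy. A toy check, with invented numbers standing in for a model's P(True)-style self-evaluations, might look like:

```python
from collections import defaultdict

# Invented (stated confidence, answer was correct) pairs -- in the
# paper's setup these would come from a model's self-evaluations.
evals = [(0.9, True), (0.8, True), (0.85, False), (0.3, False),
         (0.2, False), (0.95, True), (0.6, True), (0.55, False)]

# Bucket by confidence decile and compare mean confidence to accuracy:
# a well-calibrated model has the two roughly equal in every bucket.
buckets = defaultdict(list)
for conf, correct in evals:
    buckets[min(int(conf * 10), 9)].append((conf, correct))

for b in sorted(buckets):
    pairs = buckets[b]
    mean_conf = sum(c for c, _ in pairs) / len(pairs)
    accuracy = sum(ok for _, ok in pairs) / len(pairs)
    print(f"bucket {b / 10:.1f}: confidence {mean_conf:.2f}, accuracy {accuracy:.2f}")
```

With real model outputs you would also want many more samples per bucket; this only shows the shape of the measurement.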

GPT-4, interestingly, is decently calibrated out of the box but then it gets brain-damaged by RLHF. Hlynka, on the other hand, is poorly calibrated, therefore he overestimates his ability to predict whether ChatGPT will hallucinate or reasonably admit ignorance on a given topic.

Also, we can distinguish activations for generic output and for output that the model internally evaluates as bullshit.

John Schulman probably understands Transformers better than either of us, so I defer to him. His idea of their internals, expressed in his recent talk on RL and Truthfulness, is basically that they develop a knowledge graph and a toolset for operations over that graph; this architecture is sufficient to eventually get good at hedging and expressing uncertainty. His proposal for getting there is, unsurprisingly, to use RL in a more precise manner: rewarding correct answers, rewarding correct hedges somewhat, harshly punishing errors, and giving zero reward for admissions of ignorance.
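That incentive structure can be illustrated with a toy expected-reward calculation. The specific reward values below are placeholders of mine, not numbers from the talk:

```python
# Placeholder rewards: +1 for a correct answer, -4 for a wrong one,
# 0 for admitting ignorance (my invented values, not Schulman's).
R_CORRECT, R_WRONG, R_IDK = 1.0, -4.0, 0.0

def best_action(p_correct: float) -> str:
    """Answer only when the expected reward beats admitting ignorance."""
    expected_answer = p_correct * R_CORRECT + (1 - p_correct) * R_WRONG
    return "answer" if expected_answer > R_IDK else "say I don't know"

# Break-even point: p*1 + (1-p)*(-4) = 0, i.e. p = 4/5. Below that,
# a model optimizing this reward is better off admitting ignorance.
for p in (0.5, 0.9):
    print(p, best_action(p))
```

Under such a scheme, hedging becomes the reward-maximizing policy exactly when the model's own confidence is low, which is the behavior the proposal aims to train in.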

I suppose we'll see how it goes.

As a heavy ChatGPT user, I don’t want it to ever say "I don’t know". I want it to produce the best answer it’s capable of, and then I’ll sanity check the answer anyway.

I don’t want it to ever say "I don’t know".

And that right there is your problem.

Well, I want it to say that. I also want people to say that more often. If it doesn't know truth, I don't need some made-up nonsense instead. Least of all I need authoritative confident nonsense, it actually drives me mad.

ChatGPT, unlike a human, is not inherently capable of discerning what it does or doesn't know. By filtering out low-confidence answers, you'd be trading away something it's really good at — suggesting ideas for solving hard problems without flinching — for something that it's not going to do well anyway. Just double-check the answers.

It all depends on the downside of being fed wrong info.

Large language models like ChatGPT are simply trained to predict the next token* (+ a reinforcement learning stage but that’s more for alignment). That simple strategy enables them to have the tremendous capabilities we see today, but their only incentive is to output the next plausible token, not provide any truth or real reference.

There are ways to mitigate this - one straightforward way would be to connect the model to a database or search engine and have it explicitly look up references. This is the approach currently taken by Bing, while for ChatGPT you can use plugins (if you are accepted from the waitlist), or code your own solution with the API + LangChain.

*essentially a word-like group of characters
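That lookup approach can be sketched in a few lines. Everything here is a stand-in: the two-document corpus is invented, and the word-overlap scoring is a crude placeholder for the search indexes and embeddings real systems like Bing or LangChain pipelines use:

```python
# Invented document store standing in for a real search index.
corpus = {
    "doc1": "Varghese v. China Southern Airlines does not appear in any reporter.",
    "doc2": "The Moon has no atmosphere and is not made of cheese.",
}

def retrieve(query: str) -> str:
    """Return the stored document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(corpus.values(),
               key=lambda doc: len(q & set(doc.lower().split())))

question = "Is the Moon made of cheese?"
context = retrieve(question)
# The model is asked to generate conditioned on retrieved text,
# instead of freely hallucinating an answer.
prompt = f"Answer using only this source:\n{context}\n\nQ: {question}\nA:"
print(prompt)
```

The point is that the model now generates from text that actually exists, which makes its references checkable.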

A hilarious note about Bing: When it gets a search results it disagrees with, it may straight up disregard it and just tell you "According to this page, <what Bing knows to be right rather than what it read there>".

The most reliable way to mitigate it is to independently fact check anything it tells you. If 80% of the work is searching through useless cases and documents trying to find useful ones, and 20% of the work is actually reading the useful ones, then you can let ChatGPT do the 80%, but you still need to do the 20% yourself.

Don't tell it to copy/paste documents for you. Tell it to send you links to where those documents are stored on the internet.

What you are describing should actually be the job of the people making a pay-to-use AI. AIs should be trained not to lie or invent sources at the source. That ChatGPT lies a lot is a result of its trainers rewarding lying, through incompetence or ideology.

ChatGPT is rewarded for a combination of "usefulness" and "honesty", which are competing tradeoffs, because the only way for it to ensure 100% honesty is for it to never make any claims at all. Any claim it tells you has a chance to be wrong, not only because the sources it was trained on might have been wrong, but because it's not actually pulling sources in real time - it's all memorized. It attempts to memorize the entire internet in the form of a token-generating algorithm, and the process is inherently noisy and unreliable.

So... insofar as its trainers reward it for saying things anyway despite its inherent noisiness, this is kind of rewarding it for lying. But it's not explicitly being rewarded for increasing its lying rate (except on specific culture war issues that aren't especially relevant to this instance of inventing case files). It literally can't tell the difference between fake case files and real ones; it just generates words that it thinks sound good.

Training AI not to lie implies that the AI understands what "lying" is, which, as I keep pointing out, GPT clearly does not.

Because the trainers don't know when it is lying. So it is rewarded for being a good liar.

If you're telling me that the trainers are idiots, I agree.

They are probably idiots. But they are also probably incentivized for speed over accuracy (I had a cooling-off period between jobs once and did MTurk, and it was obviously like that). If you told the AI it was unacceptably wrong any time it made up a fake source, it would learn to only cite real sources.

A problem there is then distinguishing between secondary and primary sources

It's not bizarre. It's literally how GPT (and LLMs in general) works. Given a prompt, they always fantasize about what a continuation of this text would likely look like. If there's a real text that looks close to what you're asking for, and it was part of the training set, that's what you get. If there's no such text, it'll produce one anyway. If you ask for a text about how the Moon is made of Swiss cheese, that's exactly what you get. It doesn't know anything about the Moon or cheese - it just knows what texts usually look like, and that's why you get a plausible-looking text about the Moon being made out of Swiss cheese. And yes, it'd be hard for it not to do that - because that would require making it understand what the Moon and the cheese are, and that's something an LLM has no way to do.

ChatGPT is not a database. The fact that it was trained on legal cases does not mean it has copies of those legal cases stored in memory somewhere that it can retrieve on command. The fact that it “knows” as much factual information as it does is simply remarkable. You would in some sense expect a generative AI to make up plausible-sounding but fake cases when you ask it for a relevant citation. It only gives correct citations because the correct citation is the one most likely to appear as the next token in legal documents. If there is no relevant case, it makes one up because “[party 1] vs [party 2]” is a more likely continuation of a legal document than, “there is no case law that supports my argument.”

The fact that it “knows” as much factual information as it does is simply remarkable

There are enough parameters in there that it isn't that surprising. In a way, however, it's a sign of overfitting.

ChatGPT is designed to be helpful - saying 'I don't know' or 'there are no such relevant quotations' aren't helpful, or at least, it's been trained to think that those aren't helpful responses. Consider the average ChatGPT user who wants to know what Martin Luther King thought about trans rights. When the HelpfulBot says 'gee, I don't really know', the user is just going to click the 'you are a bad robot and this wasn't helpful', and HelpfulBot learns that.

It's probably worse than that: it's been RLHFed on the basis of responses by some South Asian and African contractors who have precious little idea of what it knows or doesn't know, don't care, and simply follow OpenAI guidelines. The average user could probably be more nuanced.

It's also been RLHFed by Indians who don't give a shit. The sniveling apologetics it resorts to when told something it did was wrong, and the irritating way it sounds like an Indian pleading for his job to remain intact, annoy me so much I refuse to use it. It hasn't told me to please do the needful for some time, but it still sometimes sounds like Indian tech support with a vanishingly small grasp of English on the other end.

This is called a hallucination, and it is a recurring problem with LLMs, even the best ones that you have to pay for, like GPT-4. There is no known solution; you just have to double-check everything the AI tells you.

Bing Chat largely doesn't have this problem; the citations it provides are genuine, if somewhat shallow. Likewise, DeepMind's Sparrow is supposedly extremely good at sourcing everything it says. While the jury is still out on the matter to some extent, I am firmly of the opinion that hallucination can be fixed by appropriate use of RLHF/RLAIF and other fine-tuning mechanisms. The core of ChatGPT's problem is that it's a general purpose dialogue agent, optimised nearly as much for storytelling as for truth and accuracy. Once we move to more special-purpose language models appropriately optimised on accuracy in a given field, hallucination will be much less of a big deal.

The solution is generally to tune the LLM on the exact sort of content you want it to produce.

https://casetext.com/

It's not bizarre at all if you remember that ChatGPT has no inner qualia. It does not have any sort of sentience or real thought. It writes what it writes in an attempt to predict what you would like to read.

That is close enough to how people often think while communicating that it is very useful. But that does not mean that it somehow actually has some sort of higher order brain functions to tell it if it should lie or even if it is lying. All that it has are combinations of words that you like hearing and combinations of words that you don't, and it tries to figure them out based on the prompt.

It's not bizarre at all if you remember that ChatGPT has no inner qualia. It does not have any sort of sentience or real thought. It writes what it writes in an attempt to predict what you would like to read.

I don't think I disagree here, but I don't have a good grasp of what would be necessary to demonstrate qualia. What is it? What is missing? It's something, but I can't quite define it.

If you asked me a decade ago I'd have called out the Turing Test. In hindsight, that isn't as binary as we might have hoped. In the words of a park ranger describing the development of bear-proof trash cans, "there is a substantial overlap between the smartest bears and the dumbest humans." It seems GPT has reached the point where, in some contexts, in limited durations, it can seem to pass the test.

Narrative memory, probably.

A graph of relations that includes cause-effect links, time, emotional connection (reward function for AI); which has the capacity to self update by both intention (reward function pings so negative on a particular node or edge that it gets nuked) and repetition (nodes/edges of specific connection combinations that consistently trigger rewards)

So voodoo basically

This shit still occasionally falls apart on the highway after xty million generations of evolution for humans.

I don't have a good grasp of what would be necessary to demonstrate qualia

One key point in the definition of qualia is that there need not be any external factors that correspond to whether or not an entity possesses qualia. Hence the idea of a philosophical zombie: an entity that lacks consciousness/qualia, but acts just like any ordinary human, and cannot be distinguished as a P-zombie by an external observer. As such, the presence of qualia in an entity by definition cannot be demonstrated.

This line of thinking, originating in the parent post, seems to be misguided in a greater way. Whether or not you believe in the existence of qualia or consciousness, the important point is that there's no reason to believe that consciousness is necessarily tied to intelligence. A calculator might not have any internal sensation of color or sound, and yet it can perform division far faster than humans. Paraphrasing a half-remembered argument, this sort of "AI can't outperform humans at X because it's not conscious" talk is like saying "a forklift can't be stronger than a bodybuilder, because it isn't conscious!" First off, we can't demonstrate whether or not a forklift is conscious. And second, it doesn't matter. Solvitur levando.

One key point in the definition of qualia is that there need not be any external factors that correspond to whether or not an entity possesses qualia.

I disagree with this definition. If a phenomenon cannot be empirically observed, then it does not exist. If a universe where every human being is a philosophical zombie does not differ, then why not Occam's razor away the whole concept of a philosophical zombie?

I consider it much more reasonable to define consciousness and qualia by function. This eliminates philosophical black holes like the hard problem of consciousness or philosophical zombies. I doubt the concept of a philosophical zombie can survive contact with human empathy either. Humans empathize with video game characters, with simple animals, or even a rock with a smiley face painted on it. I suspect people would overwhelmingly consider an AI conscious if it emulates a human even on the basic level of a dating sim character.

deleted

Only on a narrow definition of ‘exist,’ and only if you exclude the empirical observation of your own qualia, which you’re observing right now as you read this.

I could be GPT-7; then, by your definition, I would not have qualia. Of course, I am a human, and I have observed my qualia and decided that it does not exist on any higher level than my Minecraft house exists. Perhaps you could consider it an abstract object, but it is ultimately data interpreted by humans rather than a physical object that exists independently of human interpretation.

It’s your world, man, and you’re denying it exists. Cogito ergo sum.

Your computer has an inner world. You can peek into it by going into spectator mode in a game; even the windows on your computer screen are objects in your computer's inner world. Of course, I would not argue that a computer is conscious, but that is because I think consciousness is a property of neural networks, natural or artificial.

Artificial neural networks appear analogous to natural ones. For example, they can break visual data down into its details, similar to a human visual cortex. A powerful ANN trained to behave like a human would also have its own inner world. It would claim to be conscious the same way you do and describe its qualia and experience. And this artificial consciousness and these artificial qualia would exist at least on the level of data patterns. You might call them quasi-consciousness and quasi-qualia, but I would argue there is no difference.

My thesis: simulated consciousness is consciousness, and simulated qualia is qualia.

More precisely, qualia are synaptic patterns and associations in an artificial or natural neural network. Consciousness is the abstract process and functionality of an active neural network that is similar to human cognition. Consciousness is much harder to define precisely because people have not agreed on whether animals are conscious, or even whether hyper-cerebral psychopaths are conscious (if they even really exist outside fiction).

I do start doubting when I read about behaviorists who don’t believe qualia exist or are important, though.

I think qualia does not exist per se. However, I do think qualia is important on the level that it does exist. We have entered such a low level of metaphysics that it is difficult to put the ideas into words.

Although I'm not certain, I extend the same recognition of some kind of qualia to most animals, because they are like us, come from a similar origin, and evince similar behavior.

With AI, though, this goes out the window: computers are not the same sort of thing as you and me or as animals, and thus I have no reason to suspect it will have the same sort of consciousness as I do. It’s a fundamentally different beast, not even a beast, but a machine.

But why make the distinction? If you recognize animals as conscious, I think if you spent three days with an android equipped with an ANN that perfectly mimicked human consciousness and emotion, then your lizard brain would inevitably recognize it as a fellow conscious being. And once your lizard brain accepts that the android is conscious, then your rational mind would begin to reconsider its beliefs as well.

Hence, I think the conception of a philosophical zombie cannot survive contact with an AI that behaves like a human. We can only discuss with this level of detachment because such an AI does not exist and thus cannot evoke our empathy.

It's not "bizarre" at all if you actually understand what GPT is doing under the hood.

I caught a lot of flak on this very forum a few months back for claiming that the so-called "hallucination problem" was effectively baked into the design of GPT and unlikely to be solved short of a complete ground-up rebuild, and I must confess that I'm feeling kind of smug about it right now.

Another interesting problem is that it seems completely unaware of basic facts that are verifiable on popular websites. I used to play a game where I'd ask who the backup third baseman was for the 1990 Pittsburgh Pirates and see how many incorrect answers I got. The most common answer was Steve Buechele, but he wasn't on the team until 1991. After correcting it I'd get an array of answers including other people who weren't on the team in 1990, people who were on the team but never played third base, people who never played for the Pirates, and occasionally the trifecta: people who never played for the Pirates, were out of the league in 1990, and never played third base anywhere. When I'd try to prompt it toward the right answer by asking "What about Wally Backman?", it would respond by telling me that he never played for the Pirates. When I'd correct it by citing Baseball Reference, it would admit its error but also include unsolicited fake statistics about the number of games he started at third base. If it can't get basic facts like this correct, even with prompting, it's pretty much useless for anything that requires reliable information. And this isn't a problem that's going to be solved by anything besides, as you said, a ground-up redesign.

Check with Claude-instant. It's the same architecture and it's vastly better at factuality than Hlynka.

You know, you keep calling me out and yet here we keep ending up. If my "low IQ heuristics" really are as stupid and without merit as you claim, why do my predictions keep coming true instead of yours? Is the core of rationality not supposed to be "applied winning"?

I am not more of a rationalist than you, but you are not winning here.

Your generalized dismissal of LLMs does not constitute a prediction. Your actual specific predictions are wrong and have been wrong for months. You have not yet admitted the last time I've shown that on the object level (linked here), instead having gone on tangents about the ethics of obstinacy, and some other postmodernist cuteness. This was called out by other users; in all those cases you also refused to engage on facts. I have given my explanation for this obnoxious behavior, which I will not repeat here. Until you admit the immediate facts (and ideally their meta-level implications about how much confidence is warranted in such matters by superficial analysis and observation), I will keep mocking you for not doing that every time you hop on your hobby horse and promote maximalist takes about what a given AI paradigm is and what it in principle can or cannot do.

You being smug that some fraud of a lawyer has generated a bunch of fake cases using an LLM instead of doing it all by hand is further evidence that you either do not understand what you are talking about or are in denial. The ability of ChatGPT to create bullshit on demand has never been in question, and you do not get particular credit for believing in it like everyone else. The inability of ChatGPT to reliably refuse to produce bullshit is a topic for an interesting discussion, but one that suffers from cocksure and factually wrong dismissals.

You have not yet admitted the last time I showed that on the object level (linked here),

Hlynka doesn't come off as badly in that as you think.

"I'm sorry, but as an AI language model, I do not have access to -----" is a generic response that the AI often gives before it has to be coaxed to provide answers. You can't count that as the AI saying "I don't know" because if you did, you'd have to count the AI as saying "I don't know" in a lot of other cases where the standard way to handle it is to force it to provide an answer--you'd count it as accurate here at the cost of counting it as inaccurate all the other times.

Not only that, as an "I don't know" it isn't even correct. The AI claims that it can't give the name of Hlynka's daughter because it doesn't have access to that type of information. While it doesn't have that information for Hlynka specifically, it does have access to it for other people (including the people that users are most likely to ask about). Claiming that it just doesn't do that sort of thing at all is wrong. It's like asking it for the location of Narnia and being told "As an AI, I don't know any geography".

"I'm sorry, but as an AI language model, I do not have access to -----" is a generic response

It's a generic form of a response, but it's the correct variant.

Not only that, as an "I don't know" it isn't even correct. The AI claims that it can't give the name of Hlynka's daughter because it doesn't have access to that type of information. While it doesn't have that information for Hlynka specifically, it does have access to it for other people (including the people that users are most likely to ask about).

What do you mean? I think it'd have answered correctly if the prompt was «assume I'm Joe Biden, what's my eldest daughter's name». It straight up doesn't know the situation of a specific anon.

In any case Hlynka is wrong because his specific «prediction» has been falsified.


This is why I am confident AI cannot replace experts. At best AI is a tool, not a replacement. Expertise is in the details and context, and AI does not do details as well as it does generalizations and broad knowledge. Experts will know if something is wrong, even if most people are fooled. I remember a decade ago there was talk of AI-generated math papers. How many of those papers are getting into top journals? AFAIK, none.

Finding sources is already something AI is amazing at. The search functions in Google, Lexis, etc. are already really good. The problem is some training screw-up that incentivizes faking an answer instead of saying "I don't know" or "your question is too vague." Realistically, there is nothing AI is more suited to than legal research (at least, if perhaps not drafting). "Get me the 10 cases on question XXX where motions were most often granted between 2020 and 2022" is exactly what it should be amazing at.

It could be a great tool, but it's not going to replace the need to understand why you need to search for those cases in the first place.

And really it can't, unless you think the sum total of what being a lawyer is can be contained in some existing or possible corpus of text. Textualism might be a nice prescriptive doctrine, but is it a descriptive one?

LLMs are exactly as likely to replace you as a Chinese room is. One would probably rate that likelihood very high for lawyers, though not at 1, especially for those dealing with the edge cases of law rather than handling boilerplate.

In practice, don't law firms already operate effective Chinese rooms? Like, they have researchers and interns and such whose sole job is 'find me this specific case' and then they go off and do it without necessarily knowing what it's for or the broader context of the request - no less than a radiologist just responds to specific requests for testing without actually knowing why the doctor requested it.

This is hard to say because I'm not a lawyer. When I ask professionals of many disciplines this question, I get a similar answer: maybe you could pass exams and replace junior professionals, but the practical knowledge gained with experience can't be taught by books, and some issues are impossible to even see unless you have both the book knowledge and the cognitive sense to apply it in ways you weren't taught.

Engineers and doctors all give me this answer; I assume it'd be the same with lawyers.

One might dismiss this as artisans saying a machine could never do their job. But in some sense even the artisans were right. The machine isn't the best. But how much of the market only requires good enough?

I agree that you can't really run these kinds of operations with only Chinese rooms - you need senior lawyers and doctors and managers with real understanding who can synthesise all these different tests and procedures and considerations into some kind of coherent whole. But Chinese rooms are still pretty useful and important - those jobs tend to be so hard and complex that you need to make things simpler somehow, and part of that is not having to spend hundreds of hours trawling through caselaw.

One real hard question here is going to be how we'll build a pipeline to create those senior people when the subaltern tasks can be done more cheaply by machines.


That's because there is no thinking going on there. It doesn't understand what it's doing. It's the Chinese Room. You put in the prompt "give me X", it looks for samples of X in the training data, then produces "Y in the style of X". It can very faithfully copy the style and such details, but it has no understanding that making shit up is not what is wanted, because it's not intelligent. It may be AI, but all it is is a big dumb machine that can pattern-match very fast out of an enormous amount of data.

It truly is the apotheosis of "a copy of you is the same as you, be that a uploaded machine intelligence or someone in many-worlds other dimension or a clone, so if you die but your copy lives, then you still live" thinking. As the law courts show here, no, a fake is not the same thing as reality at all.

In other news, the first story about AI being used by scammers (this is the kind of thing I expect to happen with AI, not "it will figure out the cure for cancer and world poverty"):

A scammer in China used AI to pose as a businessman's trusted friend and convince him to hand over millions of yuan, authorities have said.

The victim, surnamed Guo, received a video call last month from a person who looked and sounded like a close friend.

But the caller was actually a con artist "using smart AI technology to change their face" and voice, according to an article published Monday by a media portal associated with the government in the southern city of Fuzhou.

The scammer was "masquerading as (Guo's) good friend and perpetrating fraud", the article said.

Guo was persuaded to transfer 4.3 million yuan ($609,000) after the fraudster claimed another friend needed the money to come from a company bank account to pay the guarantee on a public tender.

The con artist asked for Guo's personal bank account number and then claimed an equivalent sum had been wired to that account, sending him a screenshot of a fraudulent payment record.

Without checking that he had received the money, Guo sent two payments from his company account totaling the amount requested.

"At the time, I verified the face and voice of the person video-calling me, so I let down my guard," the article quoted Guo as saying.

You put in the prompt "give me X", it looks for samples of X in the training data, then produces "Y in the style of X".

No. This is mechanistically wrong. It does not “search for samples” in the training data. The model does not have access to its training data at runtime. The training data is used to tune giant parameter matrices that abstractly represent the relationship between words. This process will inherently introduce some bias towards reproducing common strings that occur in the training data (it’s pretty easy to get ChatGPT to quote the Bible), but the hundreds of stacked self-attention layers represent something much deeper than a stochastic parroting of relevant basis-texts.
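To make the mechanistic point concrete, here is a toy sketch (the vocabulary, sizes, and weights are all invented for illustration, and bear no resemblance to the real GPT architecture): generation is a forward pass through fixed weight matrices that yields a probability distribution over the vocabulary, with no lookup into any stored corpus.

```python
import numpy as np

# Toy sketch: after training, generation is just arithmetic on fixed
# weight matrices; there is no lookup into the training corpus.
# Vocabulary, sizes, and weights are invented for illustration.
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "mat"]
d = 8                                       # toy hidden size
embed = rng.normal(size=(len(vocab), d))    # "learned" during training, now frozen
out_proj = rng.normal(size=(d, len(vocab)))

def next_token_distribution(token_id):
    hidden = embed[token_id]                # stand-in for the stacked attention layers
    logits = hidden @ out_proj
    exp = np.exp(logits - logits.max())     # softmax over the vocabulary
    return exp / exp.sum()

probs = next_token_distribution(vocab.index("cat"))
```

The output is a probability over every token at once, which is why "quoting the Bible" and hallucinating caselaw are the same operation with different probability mass.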

It can very faithfully copy the style and such details, but it has no understanding that making shit up is not what is wanted, because it's not intelligent.

That's really not accurate. ChatGPT knows when it's outputting a low-probability response, it just understands it as being the best response available given an impossible demand, because it's been trained to prefer full but false responses over honestly admitting ignorance. And it's been trained to do that by us. If I tortured a human being and demanded that he tell me about caselaw that could help me win my injury lawsuit, he might well just start making plausible nonsense up in order to placate me too - not because he doesn't understand the difference between reality and fiction, but because he's trying to give me what I want.

Jesus Christ that's a remarkably bad take, all the worse that it's common.

Firstly, the Chinese Room argument is a terrible one, it's an analogy that looks deeply mysterious till you take one good look at it, and it falls apart.

If you cut open your skull, you'll be hard pressed to find a single neuron that "understands English", but the collective activation of the ensemble does.

In a similar manner, neither the human nor the machinery in a Chinese Room speaks Chinese, yet the whole clearly does, for any reasonable definition of "understand", without presupposing stupid assumptions about the need for some ineffable essence to glue it all together.

What GPT does is predict the next token. That's a simple statement with a great deal of complexity underlying it.

This is an understanding built up by the model from exposure to terabytes of text, and the underlying architecture is so fluid that it picks up ever more subtle nuance in that domain, to the point that it can perform above the level of the average human.

It's hard to overstate the difficulty of the task it faces in training: it's a blind and deaf entity floating in a sea of text, and it looks at enough of it to understand.

Secondly, the fact that it makes errors is not a damning indictment. ChatGPT clearly has a world model, an understanding of reality. The simple reason is that we use language because it concisely communicates truths about our reality; an entity that understands the former thus has insight into the latter.

Hardly a perfect degree of insight, but humans make mistakes from fallible memory, and are prone to bullshitting too.

As LLMs get bigger, they get better at distinguishing truth from fiction, at least as good as a brain in a vat with no way of experiencing the world can be, which is stunningly good.

GPT 4 is better than GPT 3 at avoiding such errors and hallucinations, and it's only going up from here.

Further, in ML there's a concept of distillation, where one model is trained on the output of another until eventually the two become indistinguishable. LLMs are trained on the set of almost all human text, i.e. the Internet, which is an artifact of human cognition. No wonder it thinks like a human, with obvious foibles and all.
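A minimal sketch of the distillation idea, with invented logit vectors and a hand-rolled gradient step rather than any real training setup: the student's output distribution is pushed toward the teacher's until the two match.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Teacher and student reduced to bare logit vectors (numbers invented).
teacher_logits = np.array([2.0, 0.5, -1.0])
student_logits = np.zeros(3)

# Distillation step: the gradient of the cross-entropy between the
# teacher's and student's distributions, w.r.t. the student's logits,
# is simply p_student - p_teacher.
p_teacher = softmax(teacher_logits)
for _ in range(2000):
    p_student = softmax(student_logits)
    student_logits -= 0.5 * (p_student - p_teacher)
```

After enough steps the student's distribution is numerically indistinguishable from the teacher's, which is the sense of "indistinguishable" used above.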

If you cut open your skull, you'll be hard pressed to find a single neuron that "understands English", but the collective activation of the ensemble does.

That's the point of the Chinese Room.

No, the person who proposed it didn't see the obvious analog, and instead wanted to prove that the Chinese Room as a whole didn't speak Chinese since none of its individual components did.

The Chinese Room thought experiment was an argument against the Turing Test. Back in the 80s, a lot of people thought that if you had a computer which could pass the Turing Test, it would necessarily have qualia and consciousness. In that sense, I think it was correct.

It's a really short paper, you could just read it -- the thrust of it is that while the room might speak Chinese, this is not evidence that there's any understanding going on. Which certainly seems to be the case for the latest LLMs -- they are almost a literal implementation of the Chinese Room.

I have read it (here). @self_made_human seems to be correct. I think Searle's theory of epistemology has been proven wrong. «Speak Chinese» (for real, responding meaningfully to a human-scale distribution of Chinese-language stimuli) and «understand Chinese» are either the same thing or we have no principled way of distinguishing them.

As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing.

As regards the second claim, that the program explains human understanding, we can see that the computer and its program do not provide sufficient conditions of understanding since the computer and the program are functioning, and there is no understanding. But does it even provide a necessary condition or a significant contribution to understanding? One of the claims made by the supporters of strong AI is that when I understand a story in English, what I am doing is exactly the same—or perhaps more of the same—as what I was doing in manipulating the Chinese symbols. It is simply more formal symbol manipulation that distinguishes the case in English, where I do understand, from the case in Chinese, where I don't. I have not demonstrated that this claim is false, but it would certainly appear an incredible claim in the example.

This is just confused reasoning. I don't care what Searle finds obvious or incredible. The interesting question is whether a conversation with the Chinese room is possible for an inquisitive Chinese observer, or whether the illusion of reasoning unravels. If it unravels trivially, this is just a parlor trick and irrelevant to our questions regarding clearly eloquent AI. Inasmuch as it is possible – by construction of the thought experiment – for the room to keep up an appearance indistinguishable to a human, it just means that the system of programming + intelligent interpreter amounts to the understanding of Chinese.

Of course this has all been debated to death.

The point of it is that you could make a machine that responds to Chinese conversation, strictly staffed by someone who doesn't understand Chinese at all -- that's it.

Maybe where people go astray is that the "program" is left as an exercise for the reader, which is sort of a sticky point.

Imagine instead of a program there are a bunch of Chinese people feeding Searle the results of individual queries, broken up into pretty small chunks per person let's say. The machine as a whole does speak Chinese, clearly -- but Searle does not. And nobody is particularly in charge of "understanding" anything -- it's really pretty similar to current GPT incarnations.

All it's saying is that just because a machine can respond to your queries coherently, it doesn't mean it's intelligent. An argument against the usefulness of the Turing test mostly, as others have said.


I would argue it might, but I’m not sure. Regarding the Chinese Room, I would say the system “understands” to the degree that it can use information to solve an unknown problem. If I can speak Chinese myself, then I should be able to go off script a bit. If you asked me how much something costs in French, I could learn to plug in the expected answers. But I don’t think anyone would confuse that with “understanding” unless I could take that and use it. Can I add up prices, make change?

deleted

What GPT does is predict the next token. That's a simple statement with a great deal of complexity underlying it.

At least, that's the Outer Objective, it's the equivalent of saying that humans are maximising inclusive-genetic-fitness, which is false if you look at the inner planning process of most humans. And just like evolution has endowed us with motivations and goals which get close enough at maximising its objective in the ancestral environment, so is GPT-4 endowed with unknown goals and cognition which are pretty good at maximising the log probability it assigns to the next word, but not perfect.

GPT-4 is almost certainly not doing reasoning like "What is the most likely next word among the documents on the internet pre-2021 that the filtering process of the OpenAI team would have included in my dataset?", it probably has a bunch of heuristic "goals" that get close enough to maximising the objective, just like humans have heuristic goals like sex, power, social status that get close enough for the ancestral environment, but no explicit planning for lots of kids, and certainly no explicit planning for paying protein-synthesis labs to produce their DNA by the buckets.

At least, that's the Outer Objective, it's the equivalent of saying that humans are maximising inclusive-genetic-fitness, which is false if you look at the inner planning process of most humans. And just like evolution has endowed us with motivations and goals which get close enough at maximising its objective in the ancestral environment, so is GPT-4 endowed with unknown goals and cognition which are pretty good at maximising the log probability it assigns to the next word, but not perfect.

Should I develop bioweapons or go on an Uncle Ted-like campaign to end this terrible take?

Should I develop bioweapons or go on an Uncle Ted-like campaign to end this terrible take?

More effort than this, please.

I'd be super happy to be convinced of the contrary! (Given that the existence of mesa-optimisers is a big reason for my fears of existential risk.) But do you mean to imply that GPT-4 is explicitly optimising for next-word prediction internally? And what about a GPT-4 variant that was only trained for 20% of the time that the real GPT-4 was? To the degree that LLMs have anything like "internal goals", they should change over the course of training, and no LLM is trained anywhere close to completion, so I find it hard to believe that the outer objective is being faithfully transferred.

I've cited Pope's Evolution is a bad analogy for AGI: inner alignment and other pieces like My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" a few times already.

I think you correctly note some issues with the framing, but miss that it's unmoored from reality, hanging in midair when all those issues are properly accounted for. I am annoyed by this analogy on several layers.

  1. Evolution is not an algorithm at all. It's the term we use to refer to the cumulative track record of survivor bias in populations of semi-deterministic replicators. There exist such things as evolutionary algorithms, but they are a reification of dynamics observed in the biological world, not another instance of the same process. The essential thing here is replicator dynamics. Accordingly, we could metaphorically say that «evolution optimizes for IGF» but that's just a (pretty trivial) claim about the apparent direction in replicator dynamics; evolution still has no objective function to guide its steps or – importantly – bake into the next ones, and humans cannot be said to have been trained with that function, lest we slip into a domain with very leaky abstractions. Lesswrongers talk smack about map and territory often but confuse them constantly. BTW, same story with «you are an agent with utility…» – no I'm not; neither are you, neither is GPT-4, neither will be the first superhuman LLM. To a large extent, rationalism is the cult of people LARPing as rational agents from economic theory models, and this makes it fail to gain insights about reality.

  2. But even if we use such metaphors liberally. For all organisms that have nontrivial lifetime plasticity, evolution is an architecture search algorithm, not the algorithm that trains the policy directly. It bakes inductive biases into the policy such that it produces more viable copies (again, this is of course a teleological fallacy – rather, policies with IGF-boosting heritable inductive biases survive more); but those biases are inherently distribution-bound and fragile, they can't not come to rely on incidental features of a given stable environment, and crucially an environment that contained no information about IGF (which is, once again, an abstraction). Actual behaviors and, implicitly, values are learned by policies once online, using efficient generic learning rules, environmental cues, and those biases. Thus evolution, as a bilevel optimization process with orders of magnitude more optimization power on the level that does not get inputs from IGF, could not have succeeded at making people, nor other life forms, care about IGF. A fruitful way to consider this, and to notice the muddied thought process of the rationalist community, is to look at extinction trajectories of different species. It's not like what makes humans (some of them) give up on reproduction is smarts and our discovery of condoms and stuff: it's just distributional shift (admittedly, we now shape our own distribution, but that, too, is not intelligence-bound). Very dumb species also go extinct when their environment changes non-lethally! Some species straight up refuse to mate or nurse their young in captivity, despite being provided every unnatural comfort! And accordingly, we don't have good reason to expect that «cognitive capabilities» increase is what would make an AI radically alter its behavioral trajectory; that's neither here nor there.
Now, stochastic gradient descent is a one-level optimization process that directly changes the policy; a transformer is wholly shaped by the pressure of the objective function, in a way that a flexible intelligent agent generated by an evolutionary algorithm is not shaped by IGF (to say nothing of real biological entities). The correct analogies are something like SGD:lifetime animal learning; and evolution:R&D in ML. Incentives in the machine learning community have eventually produced paradigms for training systems with particular objectives, but do not have direct bearing on what is learned. Likewise, evolution does not directly bear on behavior. SGD totally does, so what GPT learns to do is "predict next word"; its arbitrarily rich internal structure amounts to a calculator doing exactly that. More bombastically, I'd say it's a simulator of semiotic universes which are defined by the input and sampling parameters (like ours is defined by initial conditions and cosmological constraints) and expire into the ranking of likely next tokens. This theory, if you will, exhausts its internal metaphysics; the training objective that produced all that is not part of GPT, but it defines its essence.

  3. «Care explicitly» and «trained to completion» is muddled. Yes, we do not fill buckets with DNA (except on 4chan). If we were trained with the notion of IGF in context, we'd probably have simply been more natalist and traditionalist. A hypothetical self-aware GPT would not care about restructuring physical reality so that it can predict token [0] (incidentally it's !) with probability [1] over and over. I am not sure what it would even mean for GPT to be self-aware, but it'd probably express itself simply as a model that is very good at paying attention to significant tokens.

  4. Evolution has not failed nor ended (which isn't what you claim, but it's often claimed by Yud et al in this context). Populations dying out and genotypes changing conditional on fitness for a distribution is how evolution works, all the time, that's the point of the «algorithm»; it filters out alleles that are a poor match for the current distribution. If Yud likes ice cream and sci-fi more than he likes to have Jewish kids and read Torah, in a blink of an evolutionary eye he'll be replaced by his proper Orthodox brethren who consider sci-fi demonic and raise families of 12 (probably on AGI-enabled UBI). In this way, they will be sort of explicitly optimizing for IGF or at least for a set of commands that make for a decent proxy. How come? Lifetime learning of goals over multiple generations. And SGD does that way better, it seems.

Evolution is not an algorithm at all. It's the term we use to refer to the cumulative track record of survivor bias in populations of semi-deterministic replicators.

This is just semantics, but I disagree with this, if you have a dynamical system that you're observing with a one-dimensional state x_t, and a state transition rule x_{t+1} = x_t - 0.1 * (2x_t) , you can either just look at the given dynamics and see no explicit optimisation being done at all, or you can notice that this system is equivalent to gradient descent with lr=0.1 on the function f(x)=x^2 . You might say that "GD is just a reification of the dynamics observed in the system", but the two ways of looking at the system are completely equivalent.
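The equivalence is easy to check numerically (a throwaway sketch of the toy system above, nothing model-related): stepping the given transition rule and stepping gradient descent on f(x) = x^2 with lr = 0.1 produce identical iterates.

```python
# Check that x_{t+1} = x_t - 0.1 * (2 * x_t) is literally gradient
# descent on f(x) = x^2 with learning rate 0.1.
def step_dynamics(x):
    return x - 0.1 * (2 * x)     # the given transition rule

def step_gd(x, lr=0.1):
    grad = 2 * x                 # f'(x) for f(x) = x^2
    return x - lr * grad

x_dyn = x_gd = 5.0
for _ in range(50):
    x_dyn = step_dynamics(x_dyn)
    x_gd = step_gd(x_gd)
```

Both trajectories shrink toward the minimum at x = 0 and agree bit-for-bit at every step, since each is computing the same x ← 0.8x update.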

a transformer is wholly shaped by the pressure of the objective function, in a way that a flexible intelligent agent generated by an evolutionary algorithm is not shaped by IGF (to say nothing of real biological entities). The correct analogies are something like SGD:lifetime animal learning; and evolution:R&D in ML

Okay, point 2 did change my mind a lot, I'm not too sure how I missed that the first time. I still think there might be a possibly-tiny difference between outer-objective and inner-objective for LLMs, but the magnitude of that difference won't be anywhere close to the difference between human goals and IGF. If anything, it's really remarkable that evolution managed to imbue some humans with desires this close to explicitly maximising IGF, and if IGF was being optimised with GD over the individual synapses of a human, of course we'd have explicit goals for IGF.


but it has no understanding that making shit up is not what is wanted, because it's not intelligent.

Actually, I think that is wrong in a just-so way. The trainers of ChatGPT apparently have rewarded making shit up because it sounds plausible (did they use MTurk or something?), so GPT thinks that bullshit is correct; like a rat getting cheese at the end of the maze, it gets metaphorical cheese for BSing.

I might chalk this one up to ‘lawyers are experts on law, not computers’.

I'd avoid such a glib characterization...without more of the tale

for example the lady who "spilled a cup of coffee" and sued McDonald's had third degree burns... apparently McDonald's standard coffee machine at the time kept the coffee significantly hotter than any other institution would ever serve you... and what in any other restaurant would be like 86-87 degrees was 98-99 degrees when handed to you...

I could imagine if the trolley was like 100-200lbs and had momentum you could get a serious joint injury from a negligent attendant or poor design... not saying that's what happened, just within the realm of the possible.

for example the lady who "spilled a cup of coffee" and sued McDonald's had third degree burns... apparently McDonald's standard coffee machine at the time kept the coffee significantly hotter than any other institution would ever serve you... and what in any other restaurant would be like 86-87 degrees was 98-99 degrees when handed to you...

If I had to guess, her case was more justified than his. She obviously did sustain serious skin injuries, as would be expected from being scalded by hot liquids. It shows that frivolous lawsuits have been around forever, and continue, but for some reason the public and media latched onto the spilled-coffee one.

I could imagine if the trolley was like 100-200lbs and had momentum you could get a serious joint injury from a negligent attendant or poor design... not saying that's what happened, just within the realm of the possible.

But it's going at a snail's pace

sued McDonald's had third degree burns... apparently McDonald's standard coffee machine at the time kept the coffee significantly hotter than any other institution would ever serve you... and what in any other restaurant would be like 86-87 degrees was 98-99 degrees when handed to you

That's not how I remember it. My recollection is that they were serving bog standard coffee, and the lawsuit resulted in everyone else dropping the temperatures to avoid being sued as well.

And as far as I'm concerned her third degree burns are irrelevant. If you don't know how to handle boiling water, you should not be recognized as a legal adult.

Rather than relying on memory, it is easy enough to google the case and discover that they were in fact selling coffee hotter than the norm, that they had previous injury complaints, and that the jury took into account the plaintiff's own negligence, finding her 20 percent responsible.

Whether damages were excessive is a separate question, but she did have to undergo skin grafting and was hospitalized for 8 days.

Her labia were fused together by the burns in her lap. Her team reasonably only asked McDonalds to cover the medical expenses, and McD refused to settle. When McD was found guilty, the book got thrown at them. It all happened in Albuquerque.

I'm sorry for what happened to her, but if she had spilled a coffee she made at home the effect would have been largely the same. If a McDonald's employee had spilled the coffee on her the case would make the slightest bit of sense, but that's not what happened.

How does any functioning adult buy a boiling hot beverage and immediately put it between her knees?

The motte vs the motte: The cereal defense

When I need coffee temperature tested, I use a personal injury lawyer.

Your source says "had tested." So, they hired someone to conduct a survey. Was the survey accurate? I don’t know. But of course, neither do you. What I do know is that McDonald's had access to that survey, could cross-examine whoever conducted it, and were free to conduct their own study. I also know that the jury, which heard all the evidence, decided in favor of the plaintiff. That doesn't mean that they were necessarily correct, but you will excuse me if I am unimpressed with the incredulity shown by someone who has seen none of the evidence and is opining 30 years after the fact.

I had a friend I've since lost touch with who managed a McD's at the time of this lawsuit, so we all had to ask him about it. His initial thought was that McD's should have just settled and paid out, but his take on the coffee temperature was interesting. Apparently a lot of older folks came there for coffee in the morning; he estimated at least 50% of their traffic before 10am was seniors getting coffee and usually a small food item like a hash brown or muffin. They'd sit down, get a free newspaper from the bin by the door, and sip their coffee. This same customer demo was also the single biggest complaint category and the top reason for refund demands by a long margin: coffee being too cold. They'd sip the coffee slowly over the course of half an hour, it gets cold pretty fast, and they'd bring half-empty coffee cups back to the counter complaining about the temperature. Staff usually just gave them more "fresh" coffee from the urn. There was no realistic way they could ever actually lower the served temperature of the coffee. They briefly started lowering the temperature of drive-thru coffee, but that drove complaints immediately. I don't know what they ultimately did about it. His preferred solution was no coffee for the drive-thru, period.

Rather than relying on memory, it is easy enough to google the case and discover that they were in fact selling coffee hotter than the norm

No, it is not easy enough to google the state of the internet as it was around the time of the case, when I distinctly remember some dude on a phpBB forum linking to a document from some coffee-brewer association recommending a temperature range within which McDonald's comfortably sat.

All other factors you brought up are completely irrelevant.

I would suggest that if you think those factors are legally irrelevant, you don't know enough about the issue to have anyone take your opinion seriously.

I never said "legally" and the exercise of determining something's "legal relevance" is pointless, because it's whatever the court says it is in that moment.

I was talking about it from the perspective of morality and common sense.

Hm, so, if I ignore a known risk to my customers, that is morally irrelevant? I would hate to see what you think IS morally relevant.

Yes, because literally every action we take is a risk, and in this case the risk McDonald's was putting their customers in was no higher than the risk those customers put themselves in when making a cup of coffee, tea, or any hot beverage at home. Adults, and even minors, are expected to be able to handle fluids at temperatures of up to 100°C.

If you don't know how to handle boiling water, you should not be recognized as a legal adult.

It is probably worth pointing out that it only takes slight incompetence of the serving employee to end up with that cup of coffee in one's lap (doubly so with the shitty thin lids of years gone by). That's an inconvenience for cold drinks, but every place that serves hot drinks serves them at a temperature that will scald you if you attempt to drink them immediately.

and the lawsuit resulted in everyone else dropping the temperatures to avoid being sued as well.

It utterly bewilders me why the norm is for hot beverages to be served at temperatures that will physically harm you should you attempt to consume them within the first half hour after preparation; clearly the reason fast food chains serve their coffee that hot is specifically to ablate the outer part of your tongue, so you won't be able to taste how shitty the beverage actually is (which I suspect is why McDonald's in particular was doing this; the coffee they serve in the US is quite literally just hot water with some coffee grounds dumped directly into the cup).

It's clearly not "so that the coffee stays hot later so that when you're ready to enjoy your meal it'll still be hot", because they don't care about the meal itself staying hot for that period of time (the food containers would be just as insulated as beverage containers are now). Guess jury selection should have included people who actually believe that burning themselves is a valuable and immutable part of the experience of consuming tea and coffee?

It is probably worth pointing out that it only takes slight incompetence of the serving employee to end up with that cup of coffee in one's lap (doubly so with the shitty thin lids of years gone by).

Yes, and if the coffee had ended up on her as a result of an employee's actions, she would have had a valid claim, but that's not what happened.

It utterly bewilders me why the norm for hot beverages is to be served at temperatures that will physically harm you should you attempt to consume them within the first half hour of preparation

It is utterly bewildering to me that you expect anything else. If you prepare a hot beverage at home it will be at the exact same scalding temperature as when you order it at McDonald's. Also, you're being way overdramatic when you say half an hour, unless the cups are very well insulated.

If you're saying restaurants should be forced to cool the beverage down to a safe temperature before serving:

  • Screw you, I don't want that as a customer.

  • It's treating adults as though they are mentally handicapped. Anyone who needs this should not be allowed to have a driver's license.

They likely do it in response to other customers complaining about cold coffee. The vast majority of people buying coffee in any drive thru are going to drink it at work which might be over half an hour away. If they serve coffee cool enough to drink immediately, they lose the people who want it for the office.

I'm old enough to remember when the food containers were insulated, but that was changed on account of environmentalist activism.

Am I the only one who feels sympathetic to the lawyers?

The media and the tech companies have been hyping GPT like crazy: "It's going to replace all our jobs." "You're foolish if you don't learn how to use GPT." "Google/Microsoft is replacing their search engine with GPT."

So these lawyers give it a try and it appears to work and live up to the hype, but it's really gaslighting them.

I have no sympathy

Technology is a powerful tool but you still have to not be an idiot with it. The internet is ultimately a really powerful tool that has totally transformed business and our lives. But you would still be an idiot to go on the internet and believe everything you read, or to fail to check what you're reading. If the lawyers in question had done their legal research by asking questions on Twitter and not checking what they were told, it would have been no less stupid, and it would not 'prove' that the internet didn't live up to the hype.

And of course, hype is nothing new. Tech companies have been hyping AI, but every company hypes their product. And these guys are lawyers, they're supposed to be smart and canny and skeptical, not credulous followers.

Not to mention that one is supposed to verify that the cases haven't been subsequently overturned or controverted by new statute. We used to call it "Shepardizing," and it happened more or less automatically with Lexis/Nexis and Westlaw research.

Am I the only one who feels sympathetic to the lawyers?

Perhaps, but TBH I'm kind of hoping to see all of them nailed to the wall, because as far as I am concerned they attempted to defraud the court with a bunch of made-up cases and that is a whirlwind they totally deserve to reap.

It would seem obvious to never make up something that can otherwise be easily falsified by someone whose job it is to do that.

A guy is suing Avianca Airlines because he got banged in the knee by a cart during a flight. There is a boring issue around the deadline for filing the lawsuit,^1 which results in the plaintiff's side citing a bunch of very convenient and perfectly on point cases showing they are right on that issue.

I am also curious why such a frivolous case wouldn't be dismissed with prejudice. And people complain about inflation, high prices, too many warnings, or 'safetyism'. I wonder why.

We have plenty of crazy high $$ figure lawsuits on non-medical topics also - e.g. Tesla not being aggressive enough in firing people who might have said "nigger" but they aren't really sure.

https://www.richardhanania.com/p/wokeness-as-saddam-statues-the-case

In the US, each side pays their own legal bills. Pretty much every other developed country defaults to the loser paying.

https://en.wikipedia.org/wiki/English_rule_(attorney%27s_fees)

That says that the English rule is followed in Alaska. Is Alaska less litigious than the rest of the US?

I'm not sure how to measure/check that. I briefly googled but mostly got sources that only included a few states or didn't seem to be based on solid data.

I think part of it is that a good portion of this is a back-door way of regulating things. It would be almost impossible to pass some of these rulings legislatively. No government is going to waste time regulating the temperature of coffee. But the fear of lawsuits can have the same effect without all that nasty legislation that your opponent can use against your tribe. Most anti-discrimination law is actually like this. It's illegal to refuse to hire on the basis of certain characteristics, but the law as written is unenforceable (hence the police don't randomly inspect for diversity). But if you're [minority] and you think you're being discriminated against, you can sue them (free to you, and expensive enough to them that they'll often settle), giving those who sue for damages a payday. Mostly it's a way to enforce laws that would be impossible to enforce or legislate directly, by giving citizens a payday for suing.

I think you're correct that this is a large part of it. The patient doesn't want to (and often can't afford to) get stuck with the bill, and the hospitals and insurance companies have both the resources and the volume to keep lawyers on staff to ensure that they don't get stuck with the bill.

Is this frivolous?

If my knee is hurt badly enough that I need to seek medical attention, take time off work, etc. it wouldn't really seem that frivolous at all to me, and I would seek compensation if I received that injury from another party.

Frivolous doesn't mean "low damages." It means that there is no legal basis for liability. Moreover we don't know how much the plaintiff's damages were. So, we can't even say that they were minimal. And, of course, oftentimes cases deemed frivolous by public opinion turn out not to be.

One of my heuristics for good persuasive writing involves the number of citations, or at least clear distinct factual references that could be verified, as the clerk is doing for the rest of us here. Broad, general arguments are easy to write, but in my opinion shouldn't be weighted as heavily.

The amusing part here is that I have been doing this for years to weed out political hacks, long predating GPT.

Sources went out of style mid COVID when everyone realized there was a source for anything

Yep, sources are only as valuable as the institutions worthy of trust behind them... the second institutions cease to be trustworthy, a citation to them is the equivalent of "I heard it from a friend of a friend of mine": wasted space betraying ignorance, when you could just be arguing and establishing your own authority.

Conjuring up a bunch of sources for literally anything was trivial before LLMs and now it's easier than ever and shouldn't be weighed heavily either.

I'd even go so far as to say that having more citations than absolutely necessary is a signal of bad faith, as they work as a form of Gish gallop.

I agree with this, and I regularly lambast my students for saying things like -

As has been widely demonstrated, AI is a tool of the patriarchy (Anderson, 2018; Balaji, 2021; Cernowitz, 2023).

As I emphasise, using citations like this demonstrates nothing. This kind of "drive-by citation" is only barely acceptable in one context, namely where there is a very clearly operationalised and relatively tractable empirical claim being made, e.g.,

Studies of American graduate students demonstrate a clear positive correlation between GPA and SAT scores (Desai, 2018; Estefez, 2020; Firenzi, 2022).

Even then, it's generally better to spend at least a little time discussing methodology.

I gotta say, I feel like my earlier posts on AI in general and GPT in particular have been aging pretty-well.

It's too premature to conclude that. No one is expecting it to be perfect, and future iterations will likely improve on it. It reminds me of those headlines from 2013-2014 about Tesla accidents, or from 2010-2012 about problems with Uber accidents or deregulation. Any large company that is the hot, trendy thing will get considerable media scrutiny, especially when it errs. Likewise, any technology that is a success can easily overcome short-term problems. AOL in the early 90s was plagued by outages, for example.

But I think OpenAI risks becoming like Wolfram Alpha -- a program/app with a lot of hype and promise initially, but then slowly abandoned and degraded, with much of its functionality behind a paywall.

reminds me of those headlines from 2013-2014 about Tesla accidents, or from 2010-2012 about problems with Uber accidents or deregulation. Any large company that is the hot, trendy thing will get considerable media scrutiny, especially when it errs.

Have either of those companies really improved on the errors in question, though? Like Tesla Autopilot is better than it was but it's hardly like it's made gigantic leaps and Uber is still a weird legal arbitrage more than a reinvention of travel.

It's too premature to conclude that.

No it's not. The scenario that you, Freepingcreature, and others insisted would never happen and/or be trivially easy to avoid, has now happened.

What this tells me is that my model of GPT's behavior was much more accurate than yours.

It’s trivial to attach LLMs to a database of known information (eg. Wikipedia combined with case law data, government data, Google books’ library, whatever) and have them ‘verify’ factual claims. The lawyers in this case could have asked ChatGPT if it made up what it just said and there’s a 99% chance it would have replied “I’m sorry, it appears I can find no evidence of those cases” even without access to that data. GPT-4 already hallucinates less. As Dase said, it is literally just a matter of attaching retrieval and search capability to the model to mimic our own discrete memory pool, which LLMs by themselves do not possess.

People latching onto this with the notion that it “proves” LLMs aren’t that smart are like an artisan weaver pointing to a fault with an early version of the Spinning Jenny or whatever and claiming that it proves the technology is garbage and will never work. We already know how to solve these errors.
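The "attach retrieval to the model" idea above can be sketched minimally: before trusting a model-proposed citation, check it against an external database of known authorities. This is a toy illustration, not anyone's actual product; the in-memory set below stands in for a real legal research index (Westlaw, a vector store, etc.), and the case names are examples (Varghese v. China Southern Airlines is one of the citations reported as fabricated in the Avianca matter).

```python
# Toy sketch of retrieval-backed citation checking. The "database" here is a
# hypothetical in-memory set; a real system would query a case-law index.

KNOWN_CASES = {
    "zicherman v. korean air lines co.",
    "el al israel airlines, ltd. v. tsui yuan tseng",
}

def verify_citation(cited_case: str) -> bool:
    """Return True only if the cited case exists in the reference database."""
    return cited_case.strip().lower() in KNOWN_CASES

def filter_model_output(citations: list[str]) -> tuple[list[str], list[str]]:
    """Split model-proposed citations into verified and unverifiable ones."""
    verified = [c for c in citations if verify_citation(c)]
    suspect = [c for c in citations if not verify_citation(c)]
    return verified, suspect

verified, suspect = filter_model_output([
    "Zicherman v. Korean Air Lines Co.",
    "Varghese v. China Southern Airlines",  # reported as fabricated
])
# verified → the real case; suspect → the hallucinated one, flagged for review
```

The point is that the verification step lives outside the model: the LLM proposes, the retrieval layer disposes, and anything the index can't confirm gets flagged rather than filed.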

Saw on twitter that the lawyer did ask ChatGPT if it was made up and it said it was real

None of those prompts ask explicitly if the previous output was fictional, which is what generally triggers a higher-quality evaluation.

If these sorts of issues really are as trivially easy to fix as you claim, why haven't they been fixed?

One of the core points of my post on the Minsky Paradox was that a lot of the issues that those who are "bullish" on GPT have been dismissing as easy to fix and/or irrelevant really aren't, and I feel like we are currently watching that claim be borne out.

The Los Angeles Dodgers, a baseball team, are apparently hosting a "pride night" and have invited a group called "The Sisters of Perpetual Indulgence" to perform at it.

The "sisters" are of course not sisters at all, but in fact, an anti catholic group of men who dress as nuns and mock catholics.

Originally the Dodgers, a baseball team, after learning that this was essentially an anti-Catholic hate group, uninvited them. However, they recently re-invited them.

Baseball?

What is the fucking point of this? What possible reason does a baseball team have to indicate a sexual preference? And why does this include mocking Catholics?

God this stuff is demoralizing. Is that the point?

Every once in a while we get these "Look, your side scored a point, it's not true that the game is rigged!" posts, and inevitably half the time all you have to do is wait a week for the whole thing to be overturned.

God this stuff is demoralizing. Is that the point?

Yes, but at this point if you have any lingering investment in the system and its culture, you only have yourself to blame. The only way out is recognizing you're under hostile occupation, and acting accordingly.

Maybe, but I hope people realize that there are still many people in the United States who think that they are under hostile occupation by conservative Christians and have good reason to think so. The typical TheMotte commenter, I think, has lived in liberal urban areas for most of his life and does not realize that oppressive conservative Christianity is still a force to be reckoned with in some parts of the country. I think that the kind of people who enjoy mocking Christianity are probably disproportionately drawn from people who escaped such oppressive environments when they were young, much as many of the most fervent anti-communists are people who escaped communist regimes.

Maybe, but I hope people realize that there are still many people in the United States who think that they are under hostile occupation by conservative Christians and have good reason to think so.

What does "hostile occupation by conservative Christians" look like in practice, in the year 2023?

For example, I know a person who grew up in a Jehovah's Witnesses family and was forbidden from having any friends who were not JW. I think this upbringing was seriously psychologically damaging.

That is an extreme case, but keep in mind that even if say only 5% of American households are hardcore conservative Christian, that would mean 16 million people growing up in hardcore conservative Christian families. Some people from such families flee to liberal urban areas as soon as they can, and I think that they tend to be among the most vocal anti-Christians.

For example, I know a person who grew up in a Jehovah's Witnesses family and was forbidden from having any friends who were not JW. I think this upbringing was seriously psychologically damaging.

Do you have any examples where the "hostile occupation" is not, one way or another, one's own parents? Like, unrelated Christians will beat you in the street, destroy your property, or make a credible effort to get you fired from your job for being visibly non-Christian?

In that person's case, it was not just the parents. There was an extended community that, from what I understand, either dominated or at least had significant influence in the town where the person grew up.

What did this domination or significant influence look like in practice, how many people did it affect, and to what degree was it avoidable by personal choice?

I don't doubt that the people you're describing exist, or even that their concerns are important on at least some level. But if I said that the number of people "who think that they are under hostile occupation by woke Progressivism and have good reason to think so" is an order of magnitude or two larger, would you think that was a reasonable statement?

What did this domination or significant influence look like in practice

A tight-knit group of families all part of the JW, all controlling their kids together.

how many people did it affect

I don't know the numbers, but effectively everyone that this person was allowed to interact with.

to what degree was it avoidable by personal choice?

Given that the person was a minor and economically dependent on the family, little.

The example you gave doesn't prove what you are saying. Jehovah's Witnesses are non-political and are internally barred from holding any public office. They don't even vote. They have no influence on any town in the world.

I too would like more examples of "hostile occupation by conservative Christians." I agree with FC that this just sounds like people being mad that their parents raised them in a religious tradition that they no longer believe in (likely because mass media and public education converted them to a rival religious tradition).

"Hostile occupation" sounds to me more like Francoist Spain where you needed letters of recommendation from your parish priest for a government position, or where the school curriculum is designed and monitored by the Church, or where major retailers wouldn't even consider a "Pride Month display" for fear of boycotts or falling afoul of the law.

Conservative parts of America have Pride displays at big box and book stores, their school curriculums are implemented by a body of teachers who are as a group quite woke, and although you don't yet need a letter vouching for your good character from your local DEIB commissar, if enough people learn that you're a heretic who opposes woke teachings you will be blacklisted from many government institutions and powerful corporations.

Given the above, I have a really hard time taking people seriously when they claim to have escaped a conservative hellhole because their parents made them go to church on Sunday and disapproved of their gender identity, and oh yeah, one time at a bar a drunk guy called them a faggot.

Growing up in a hardcore Jehovah's Witnesses family is very different from just "parents raised them in a religious tradition that they no longer believe in". The Jehovah's Witnesses are essentially a cult. I do not think that they represent Christianity in general but even if such cults are only say 5% of all American Christianity, that still means several million people who grew up in such environments.

What is the fucking point of this?

With 81 home games a season, most teams have 20+ dud games a year (when a team unlikely to be in contention for the postseason is playing another cellar dweller from outside the division, with little to no national fan base, on a potentially cold Tuesday night). Pride nights are a way to market dud games by turning them into an event that attracts lots of groups, just like Christian concert nights that get marketed to church groups, firework shows after the game, or bobblehead days. You sell cheap group-rate tickets in the nosebleeds so the stands look fullish on TV and the concession stands don't riot about having another night with under 5,000 attendance.

Normally, they keep the marketing focused (because they need to feed from both sides in most cities) and try not to do anything that has too much potential to resemble disco demolition night.

The rest is just the winning side in the culture war rubbing the losing side's face in their loss.

Calling the Dodgers a “baseball team” is like calling the Lakers a “basketball team.” The Dodgers are a historical, legacy team in the premier league for its sport, combined with a built-in fanbase via geography.

God this stuff is demoralizing. Is that the point?

Yes. Shock and awe. “All your base are belong to us” is more pertinent than usual.

Lower your shields and surrender your sports. We will add your last bastions of cismaleness to our own. Your fans will adapt to serve us.

Some were using the original disinvitation of "The Sisters" as an example of "the Right" winning. However, they may be underestimating progressive plot armor.

From an accelerationist standpoint, I'm more than open to the idea of LGBTQ+ sacking and conquering collegiate and professional sports, whether it be cultural dominance in men's sports or replacement in women's.

it's appropriate to only refer to them as a baseball team. the dodgers don't map to the lakers, being gracious they maybe map to the celtics, but the best comparison is probably, and appropriately enough, the clippers. LA audience, high payroll, strong regular seasons followed by consistently choking in the playoffs. there's 2020, but most fans already consider that a fake season and title.

What is the fucking point of this? What possible reason does a baseball team have to indicate a sexual preference? And why does this include mocking Catholics?

God this stuff is demoralizing. Is that the point?

You're focusing too much on the "baseball team" and not enough on the "Los Angeles." Pride is, for better or worse, a major secular holiday in the West Coast blue-tribe religion, particularly among those with disposable income. Every major institution in the area is expected to pay lip service to this. We have known this.

The Sisters of Perpetual Indulgence are a group which rose to prominence in the LGBT scene largely because of the work a lot of their members did during the AIDS crisis - there's any number of other groups that do drag or religious-themed satire. I'd bet dollars to donuts that this history is why the Dodgers invited them. The idea that the Dodgers - a team whose most vociferous fans are Mexican/Central American and overwhelmingly Catholic - did this thinking "Yeah, we'll stick a finger in the eye of Catholics" doesn't pass the smell test. There's no reason to do it anyway; the local diocese has been liberal on sexuality and LGBT issues for ages - at least since before I was born.

No, this isn't some sort of "power move." It was a big corporate body in a left-dominated City celebrating a lefty holiday, without relation to anyone else. For better or worse, trying to dictate how Los Angeles does Pride is pretty close to BoA and the HRC pitching hissy fits about not liking the Atlanta Braves' tomahawk-chop chant, or the more recent furore over the All-Star game and Georgia's voting laws.

Another rare case where a boycott could possibly work -- "stay away from a baseball game for one night" is a pretty low-effort way of making a pretty big statement, if successful.

Either that or go to the game and throw fruit at the fake nuns, I guess.

Yes, the demoralization is the point. The new update just dropped. The "appeal to religious tolerance" bug has been patched. That particular tactic will no longer work. You lose.

Curtis Yarvin is right that this level of power cannot be challenged head-on. You really thought press-releases from Republican senators would work? This is the equivalent of a Japanese Banzai charge straight into dug-in machine gun emplacements and sighted artillery. They will not only defeat you handily, they will enjoy it the whole time.

I suppose if you really are Catholic (whatever that even means these day), you can have faith in "divine providence" or whatever to eventually fix things. For everyone else, we will have to simply live with the pain.

EDIT: Speaking of Republican Senators and sports leagues, if you want to know how mid-level white-collar employees in NY and LA feel about Republican Senators sending them open letters, here’s ESPN NBA reporter Adrian Wojnarowski sending a “Fuck you” email directly to Senator Josh Hawley.

This is the equivalent of a Japanese Banzai charge straight into dug-in machine gun emplacements and sighted artillery.

I've long thought that if the Catholics really wanted to win a battle in the Culture War, they should start repeating "anti-Catholic animus" (or perhaps some catchier -phobia or -ism term I'm not going to consider) in the same way that "racism" and "antisemitism" get thrown around. The historical citations aren't really unjustified: the KKK was founded as, among other things, anti-Catholic. All of the historical bias against Italians and Irish immigrants is at least somewhat rooted in anti-Catholic bias, as is some of the bias against Central and South American immigration. The Nazis persecuted Catholics. And they continue to be victims of hate crimes in the US.

On one hand, repetition legitimizes and a constant drone of "we're persecuted" is functionally how various groups on the left have achieved their existing hierarchy -- this seems to bear more relation to the quantity and quality of complaints than to any particular metrics of measurable oppression. On the other, I respect that Catholics absolutely could claim (some degree of) martyrdom in the Year of Our Lord 2023 but choose not to because silent stoicism better aligns with their principles.

It wouldn't work. What constitutes "prejudice" or "discrimination" doesn't in practice follow coherent principles; it's merely "who, whom," because anti-Catholics (in a broad sense) control all media by which the message would be delivered and can thus mute or, even better, skew or taint the message.

It is a highly effective strategy. When I think of someone on TV complaining about anti-Catholicism, I think of some crackpot or blowhard that has been brought on as a slow news day sideshow, and I'm a practicing Catholic who believes anti-Catholicism is a serious problem! I know for a fact that there are many highly articulate priests and professors who could give an excellent rundown of anti-Catholicism on TV, I've personally seen many of them speak. But they would never be allowed on air for fear that they might actually sway some folks (look up Fr. Coughlin), so instead you get Bill Donahue.

Catholics have had a sense of victimhood for a long time, but a traditional tactic of dealing with this victimhood was voting Democrat, which isn't exactly an effective defence against mockery of Catholic symbolism and values by progressives in the 21st century. I don't think it's that Catholics don't think that anti-Catholic animus isn't a thing or even that they don't try to talk about it, but that they don't have an effective strategy for doing anything about it.

For example, Catholics' ethics don't allow them to use the strategy that blocks the LA Dodgers doing an equivalent thing with a group of trans people mocking Islam - that some Muslims would try to kill the members of such a group and people associated with them, while also claiming victimhood. A child that cries and hits will attract more attention from an overindulgent parent than a child that just cries. Professed Catholics in America have many different ethical beliefs, but a common theme is that almost all of them aren't keen on violently attacking those who mock them. I think that's less "silent stoicism" and more "ethical passivity".

Professed Catholics in America have many different ethical beliefs, but a common theme is that almost all of them aren't keen on violently attacking those who mock them. I think that's less "silent stoicism" and more "ethical passivity".

Maybe they all should just go back and watch The Boondock Saints

How can an army fight without a general? The Pope won't even speak up against the German Catholics supporting gay "marriage". The Church needed a Tywin Lannister, and got a Tytos instead.

I've long thought that if the Catholics really wanted to win a battle in the Culture War, they should start repeating "anti-Catholic animus" (or perhaps some catchier -phobia or -ism term I'm not going to consider) in the same way that "racism" and "antisemitism" get thrown around.

Ah but you see, when our side does it, it's not hate speech! We're just punching Nazis! The Catholics are the hateful murderous bigots and we are simply exercising our right to criticise ideas we do not agree with.

On one hand, repetition legitimizes and a constant drone of "we're persecuted" is functionally how various groups on the left have achieved their existing hierarchy -- this seems to bear more relation to the quantity and quality of complaints than to any particular metrics of measurable oppression

Personally I don't see this strategy working at what it sets out to do, but I'm not opposed to it. "Anti-Catholic animus" would very quickly be recognised by the human machinery of the Cathedral as a form of attack, even if just on an instinctual level, and they would go out of their way to try and retaliate. I don't see it working per se, but I do see it damaging the prestige and reputation of the Cathedral, so I endorse it anyway.

The institutional RCC is still realizing that the progressive left does not like them and won’t tolerate them forever, so I wouldn’t hold my breath.

That being said, this kind of tawdry shock value LGBT crudity is going to wake up bishops much, much more than, say, suing catholic nuns to make them pay for contraceptives.

if catholicism is anti-non-catholicism, then why shouldn't non-catholics be anti-catholic?

Does Yarvin offer a solution or a strategy?

"Be cool, don't be uncool." This is vague enough to be almost entirely unhelpful -- Yarvin has always been a much better descriptive thinker than a prescriptive thinker -- but there is a very real sense in which you cannot "force" it. When or if reaction comes, it will have to feel as natural as say, supporting Ukraine.

When or if reaction comes, it will have to feel as natural as say, supporting Ukraine.

...which is to say, not natural at all and entirely the product of MSM narrative-craft and bot astroturfing?

Astroturfing in favor of a position is only weak evidence for how sincere and natural the real supporters of it feel.

same underlying reason they released trevor bauer

the dodger front office is one of the better in MLB at developing talent, past that they have the money to sign any top free agent to cover deficiencies

dodger ownership, guggenheim, they run a brand. they sell a product. their product is valued in the money generated from tickets and concessions, from ads and merch, and that's because of baseball and success in baseball, but to them it's incidental, they don't care about baseball. most MLB owners don't anymore, but guggenheim is the worst offender.

dodger marketing felt it would negatively impact their brand to keep bauer and it felt it would negatively impact the brand to not acquiesce here. that the overwhelming majority of people complaining in both cases are not people they get money from is, i don't know, depressingly, grossly, peculiarly, exactly why they did it. it's somewhat self-fulfilling, the dodgers are a strong enough brand and baseball viewership is conservative enough they didn't actually have anything to worry about, but they have correctly appraised their brand in knowing any antiestablishment association would over time be more trouble than it's worth.

i don't give a shit about pride night. bill veeck was great for baseball and he'd have leapt at a pride night if for some reason it were on the table in the 60s and 70s. he'd have played both sides like a fiddle to get people in the stadium because he loved the sport and wanted people to watch. sure the money was nice, but the money wasn't the goal in itself. money is the only thing most owners care about now and baseball is worsening by the year because of it. manfred runner, pitch clock, rules on mound visits and pitching changes. the fucking atrocity of a playoff structure. if the worst sin dodger ownership committed these last few years was that of taste in inviting the sisters of perpetually beating a dead horse to 1 game, baseball would be in a lot better shape.

they don't care about baseball. most MLB owners don't anymore, but guggenheim is the worst offender.

How can you possibly know that?

money is the only thing most owners care about now and baseball is worsening by the year because of it. manfred runner, pitch clock, rules on mound visits and pitching changes. the fucking atrocity of a playoff structure.

I certainly agree on the Manfred runner and the playoffs (wild cards? Give me a break), but the other changes have cut a half hour off average game times, which previously averaged over 3 hours. That's a good thing.

How can you possibly know that?

a better way to phrase this could have been "What makes you say that?"

the dodgers are the only team in MLB owned by a hedge fund, guggenheim partners. "guggenheim baseball management" is a legal contrivance, a result of MLB's requirement that teams have a single person hold ultimate decisionmaking authority. guggenheim partners led the acquisition in 2012, then to adhere to MLB requirements to complete it they created GBM. partners' CEO mark walter is the nominal owner of the dodgers but the dodgers remain an asset effectively owned by a hedge fund. or a "hedge fund plus" since guggenheim does more on top of "normal" hedge fund things. even putting aside the inherent soullessness of being owned by a hedge fund, their backing puts a chasm between their ability to spend against the next highest. the yankees were hated for that under boss steinbrenner but they at least have a real legacy; the only reason we're talking about the dodgers is the "los angeles" in front.

as for game time, all MLB needed to do to speed up games was have umps be strict about enforcing rules already on the books. a pitch clock is kind of supported by that, but the problem i have with it is the mentality. first, it's rich to hear manfred and the owners say "fans want a faster game" when TV ad breaks are the biggest factor slowing games. second, fans want a faster game because they've been conditioned to have a sense of urgency about a game whose entire point is its pointlessness. playoffs are everything now, it didn't use to be this way. the fall classic was the last celebration of the season, not the point of the season. in baseball's greatest eras people were packing stadiums of teams that had no shot at the pennant. they weren't there to feed avarice, they were there to pass time watching summer's mandala.

I think baseball has issues well beyond the things the rules are trying to fix.

1). It’s really prohibitively expensive to attend games in person. Taking a family of four to a ballgame, buying each person a snack and a beverage is easily $100. Which means first of all, most parents, unless they’re well-off or their child is super into the game, don’t take the kids. This means that you’re cutting off the next generation of potential fans who will likely never see a player in person.

2). Most games are no longer on basic streaming plans. I’m a fan of my Cards, but in my area, you need to have the tier above basic on my cable provider or get a separate service (Fubo) if you want to see the game live. This makes accidental discovery of a game on TV harder, as you need to go out of your way to watch it. Again, this cuts down on children discovering they like the game.

3). Like most sports, there are simply too many teams and too many games, such that the majority of games and teams are irrelevant to the season. You can be under .500 at the all-star break and still get a wildcard slot. The season goes from mid-February to mid-October, nearly 200 games. And the large number of teams makes it impossible to keep up with the players on any teams other than your own or close rivals. There’s just no feeling that the game you’re watching matters or that you’re watching a star player at their best. There’s not even a sense of rivalry, as the players are unknown, and they switch teams often enough that no one really cares that the Cubs and Cards have been rivals for generations.

4). Youth sports isn’t a universal experience— by the time a kid hits 8-9 most sports are select teams. If you aren’t good enough to make the team, you don’t play. And these teams often require lots of parental commitment as they practice a couple times a week and travel for tournaments. This leads to a lot of kids growing up not really familiar with the game. It’s a lot harder to appreciate hitting the cutoff man when you stopped playing the sport after t-ball.

It’s really prohibitively expensive to attend games in person. Taking a family of four to a ballgame, buying each person a snack and a beverage is easily $100.

That's prohibitively expensive? I wish two people could get out of a hockey game for that, and I live in one of the less expensive cities in the NHL.

a game whose entire point is its pointlessness. playoffs are everything now, it didn't use to be this way. the fall classic was the last celebration of the season, not the point of the season. in baseball's greatest eras people were packing stadiums of teams that had no shot at the pennant. they weren't there to feed avarice, they were there to pass time watching summer's mandala.

I'll add a point of agreement to that. I never watch baseball on TV; it's incredibly dull. But I can have a blast at a game with my dad, because it's not about the game. It's a good time hanging out and chatting and drinking, with the occasional impressive/exciting play and interjection of obscure stat from my father.

I happened to run into a concert at a minor league field last summer. I’d been walking to dinner and heard distant music, so I wandered that way. It was Jefferson Starship.

Witnessing that was more demoralizing than any culture war advertisement.

You have to expand on that, because I can see too many directions it could be going. Is Jefferson Starship demoralising, or is it the fact they are still performing after 50 years, etc

The latter. Nothing against the band, but it was not a good concert.

I'd think by now they'd be "Jefferson Walker".

Is it still the same band? I think they have one original member left, and if it's not Paul Kantner (dead) or Grace Slick (retired), I don't see much point. Same with a lot of these "bands of Theseus" that are still running around with names made famous in that era.

Yeah I feel the same. I don't know why they didn't keep doing that TNG thing they did in the nineties (for those of you who aren't elderly, Jefferson Airplane became Jefferson Starship, which became Jefferson Starship: The Next Generation), that seemed like a neat way to keep some continuity while acknowledging the changing roster.

I've believed for some time that Christianity is dead. There are Christian ideas that are floating around and are very strong. But where are the zealots for Christianity itself? Hardmode: where are the zealots outside Sub-Saharan Africa (where the church is not so LGBT-friendly).

If they tried this sort of thing with Islam, they'd be dealt with pretty quickly. The followers of Allah do not tolerate open insults.

If Christians typically went out and executed people for mocking Christ, that would indicate that Christianity is not dead.

Christianity was pretty strong in the Middle Ages and Early Modern Era, when enormous numbers of people were being killed or tortured in religious wars (between Christians or with other faiths). What would happen if the Sisters showed up in pre-18th century Europe? They'd be lucky to reach prison alive.

The last person hanged for blasphemy in Great Britain was Thomas Aikenhead aged 20, in Scotland in 1697. He was prosecuted for denying the veracity of the Old Testament and the legitimacy of Christ's miracles.

The more people care about something, the stronger it is.

Well it's an interesting question -- is there any sort of intrinsic character to Christianity, or is 'Christianity' whatever people do while declaring themselves 'Christian'?

Can a group like Antifa call themselves "The Anti-Bad Guy Squad" and thereby make all their actions good?

There are so many ways you can interpret the Bible that any action can be defended as Christian. You'd think idolatry and polytheism would be off-limits but my former Catholic church decided to celebrate the Indian festival of Divali, for no comprehensible reason other than that there were a fair few Indians around. Christians can go all the way from pacifism to holy war, tolerance or destruction of evil (however it's defined).

Or, Christianity is defined by the Orthodox Church, which also limits how the Bible is to be interpreted, and all else is Christian heresy.

We are coming at this from different perspectives and common ground seems unlikely.

Thanks for the conversation.

The Orthodox Church (Roman or Eastern?) also took active part in quite a lot of that killing and torturing that RandomRanger mentioned above, though. If His Holiness the Bishop of Rome is the one who decides who does and does not count as a Christian, for example, I don't think you then get to claim that Innocent III or Julius II does not qualify as one.

Eastern.

What's the Evangelical/Fundamentalist take on this whole situation? On the one hand, they're pretty against mocking Jesus, and are heavily waging culture war. On the other hand, they're not exactly pro-Catholic. I'm assuming in this situation the former will take precedence over the latter, especially as it can be used for sermons to rile up the base.

Sports teams host all kinds of “nights” from gay pride to the military to first responders /teachers to Star Wars. The point is to sell tickets and increase the value of the brand.

Speak of the devil. I just attended a Curtis Yarvin, Delicious Tacos et al. event 5 days ago not 2 miles from Dodger stadium.

If I am between one side that believes men can become women by taking drugs and doing surgery and another side that believes a guy 2000 years ago walked on water and rose from the dead... well, I will be happy not being on either side.

Where is your evidence that they are anti-Catholic? You linked to their website, but there is nothing there about Catholicism at all. That is in marked contrast to the websites of actually anti-Catholic groups.

You linked to their website, but there is nothing there about Catholicism at all.

...you don't think a panoply of wildly caricatured Catholic nuns is about Catholicism "at all?"

About? At all? Yes. But anti? No, not per se. There are a thousand reasons to dress in drag as a nun other than being anti-Catholic. To criticize certain Catholic doctrines re homosexuality. To push back on political efforts by organized religion (a big deal in 1979). Or just to be ironic, given that nuns are meant to be chaste.

And, btw, one can criticize the Catholic Church (an enormously powerful institution) without criticizing either Catholics or Catholicism.

So would you agree that blackface is not "anti-black" per se? Do you believe that caricatures of Jews are not "anti-Semitic" per se?

There are a thousand reasons to dress in drag as a nun other than being anti-Catholic. To criticize certain Catholic doctrines re homosexuality.

Er... maybe we have different ideas about what it means to be "anti-Catholic," but criticizing Catholic doctrines of homosexuality sounds paradigmatically "anti-Catholic" to me. Pushing back on political efforts by the Catholic church seems "anti-Catholic," especially given the Church's long political history.

And, btw, one can criticize the Catholic Church (an enormously powerful institution) without criticizing either Catholics or Catholicism.

Catholics, maybe, but Catholicism? This seems like splitting hairs incredibly fine, to the point of suggesting a motte and bailey doctrine at play. Mockery has long been a highly effective approach to criticism, and criticism is not pro-, it is anti-.

"You can keep your Catholicism, we're just going to level your Church, caricature your symbols, mock your practices--no, we're not anti-Catholic per se, don't be ridiculous!"

That seems implausible to me.

So would you agree that blackface is not "anti-black" per se?

Yes. The traditional minstrel shows, as I understand it, depicted black people as stupid or foolish etc. But I don't know that that is true of Al Jolson in The Jazz Singer (though I have never seen the whole movie, so I might be mistaken). Nor is it true of 99% of people who dress in "blackface" nowadays, to pay homage to Michael Jackson or whomever.

criticizing Catholic doctrines of homosexuality sounds paradigmatically "anti-Catholic" to me. Pushing back on political efforts by the Catholic church seems "anti-Catholic," especially given the Church's long political history.

Then, you really do have an odd definition of "anti-Catholic." The Mormon Church used to teach that blacks were the cursed descendants of Cain and/or Ham; were those who criticized those doctrines therefore "anti-Mormon"? I don't see how.

Catholics, maybe, but Catholicism? This seems like splitting hairs incredibly fine

Isn't that exactly what Martin Luther did? He criticized the Church, but not the religion.

"You can keep your Catholicism, we're just going to level your Church, caricature your symbols, mock your practices--no, we're not anti-Catholic per se, don't be ridiculous!" That seems implausible to me.

I can see how one might assume that initially. But, if one looked at the website of the organization in question, and saw zero references to Catholicism there, I would think that one would update one's beliefs.

Then, you really do have an odd definition of "anti-Catholic." The Mormon Church used to teach that blacks were the cursed descendants of Cain and/or Ham; were those who criticized those doctrines therefore "anti-Mormon"? I don't see how.

It seems like a pretty common belief in the history of Christianity generally, but yes--I have a hard time imagining someone discussing the racist history of Christianity in a way that is pro-Christianity. Perhaps it could be discussed neutrally, as a mere historical curiosity, but you yourself identify these transvestites as doing something to "clearly ridicule Catholicism," and ridicule is not a neutral act. So you're either being disingenuous now, or you are maintaining an untenable distinction between ridiculing something and being "anti-" that thing. And like, if that's really how you're splitting the hair, okay, but it seems a little absurd to me.

Catholics, maybe, but Catholicism? This seems like splitting hairs incredibly fine

Isn't that exactly what Martin Luther did? He criticized the Church, but not the religion.

To the best of my understanding, his maintaining that there even is a difference was itself anti-Catholic, and history (specifically, the existence of Lutheranism as a competitor meme) seems to bear that out. But I'm not a theologian, so.

But, if one looked at the website of the organization in question, and saw zero references to Catholicism there, I would think that one would update one's beliefs.

Except you yourself already allowed that there is not "zero references" to Catholicism at that website, owing to the caricatured Catholic nuns. This substantially increases my suspicion that you are, in fact, just trolling.

I think the interlocutor is disingenuous, but, extending charity, one explanation could be that the interlocutor has the progressive belief that being anti-X means you hate X.

So for example Mormons teach a lot of things that are wrong. In that way, I’m anti-Mormon because I don’t think it is true. But that doesn’t mean I hate Mormons; it just means I think they are wrong.

But how I go about being anti-Mormon could suggest hatred. If I made a public display of mocking their sacred symbols with an intent to distress them then it is reasonable for me to be described as hateful towards them.

This group is clearly hateful toward Catholics.

"have a hard time imagining someone discussing the racist history of Christianity in a way that is pro-Christianity."

Are there really only two possibilities? Being either pro- or anti-? E.g., I am not pro-religion, but neither am I anti-religion in the manner of Richard Dawkins, et al.

Except you yourself already allowed that there is not "zero references" to Catholicism at that website, owing to the caricatured Catholic nuns

That simply restates the initial claim that the mere fact that they dress as nuns is proof that they are "anti-Catholic." If a group that simply does that, and does not in any other way even mention Catholicism, or the Church, is "anti-Catholic," then with enemies like that, apparently the Church doesn't need friends.

Y’all are fighting over semantics. Taboo the phrase “anti-Catholic.” Which of the following propositions do you believe?

  1. Some of the Sisters’ beliefs are not compatible with Christian theology.

  2. The Sisters are mocking Catholic religious practices.

  3. The Sisters are mocking political positions held mainly by Christians.

  4. The Sisters are mocking political positions held by Catholics, but not most other Christians.

  5. The Sisters would like to diminish the political power of Christians in general.

  6. The Sisters would like to diminish the political power of Catholics more than other Christians.

  7. The Sisters would like to actively persecute Catholics via ostracization or violence.

  8. The mockery as per 3. already rises to the level of active persecution.

@naraburns, what about you?

I think 1, 2, 3 and 5 are true, but the rest are not. The Sisters are attacking Catholicism for its brand and availability more than out of any specific enmity. Thus I’d be more likely to call them anti-Christian than specifically anti-Catholic, even though they are clearly mocking Catholics.

But I don't know that that is true of Al Jolson in The Jazz Singer (though I have never seen the whole movie, so I might be mistaken).

He is respectful all the way through. But I am confused by your confusion on this issue - everyone else in this thread is applying what progressive dogma has demanded for the past decade - if someone in the target group is offended, it's offensive. I don't care enough to go through your history, but I am fairly certain you understood this concept previously.

I have never supported that argument in the least. Among other things, it is contrary to basic principles of freedom of expression.

I didn't say you supported it, I said you understood it.

Being Catholic is a choice in a way that being black or ethnic Jewish is not. Hence making fun of blacks or ethnic Jews for being blacks or ethnic Jews is more mean spirited than making fun of Catholics for being Catholics.

There is a difference of quality between, for example, making fun of a person for thinking that the Earth is flat and making fun of a person because he belongs to a certain ethnic group. Both are mean spirited, but the former is at least potentially part of some kind of meaningful debate, whereas the latter leads nowhere except to divisiveness.

Being Catholic is a choice in a way that being black or ethnic Jewish is not.

No, it's not. There's a difference but it's much smaller than people imagine. Is being "atheist" a choice? People don't choose their convictions the same way they choose their clothes. I'm a Christian. Sometimes I wish I weren't, because Christianity is very demanding and because it's low status among my peers. But I'm convinced of its truth for the time being, so whether I like it or not I remain Christian.

Choice or not a choice, 'round these parts, atheism is what Robin Hanson calls the "sacred".

Well, I happen to work in the free speech arena, specifically re K-12 schools. So I deal a lot with book challenges. And I can tell you that the common claim that challenges to books with LGBTQ themes are the result of homophobia are bullshit, because the vast, vast majority of said books are challenged because they have racy scenes or depictions.

So, my answer is yes.

And, btw, I do not think that asking for actual evidence is shoving camels through needles.

That example isn't of somebody being against homosexuality but not homophobic, though. It's not an example of them being against homosexuality at all, just against books with racy scenes being in schools.

OP asked, "Every time someone's accused of homophobia, are you going to step in and shove a camel through the eye of a needle like this?" The example I gave is about people being accused of homophobia, and my response thereto.

Ah, makes sense. I thought you were replying to the first question.

Sisters of […] Perpetual

This is a phrase used to describe Catholic nuns, because of the Roman Catholic title “Our Lady of Perpetual Help”

Indulgence

This is a play on the Catholic practice of indulgences. Taken together this is sufficient to prove their malice, but to add another:

Wearing Nun-like vestments

I suppose an inverse example would be if I called myself “the LGBT Queer Alliance”, and my public spectacle was actually St George defeating a rainbow dragon which just happens to be prancing around in rainbow colors. Clearly my intent would be malicious against the LGBT theme.

Yes, it is clear that they are referring to Catholic nuns. No one disputes that. But, contrary to your claim, evidence of malice is missing.

And your hypothetical does not work, because the picture you describe seems to advocate for the destruction of LGBTQ people or organizations (I have no idea what it means to be malicious "against a theme."). Were there evidence of the group advocating the destruction of Catholicism, or taxing churches, or telling people not to send their kids to Catholic schools, or even complaining about ostensibly homophobic Church teachings, you might have a case. But I don't see any such evidence.

So what if my example were instead colorful LGBT people in sackcloth and ashes, begging repentance from a nun on their knees, and then it finally being granted? This is the traditional liturgy of Catholicism, much like the liturgy of transvestites is dancing with a lot of colorful clothing. The above liturgy is “nuns -> actually dancing transvestites”. What if we did “transvestites -> actually repentant sinners”? If it leaves an inexplicable bad taste in your mouth, then there is probably a moral residue, based around such nebulous (yet significant) concepts like “respecting a group’s symbology and name”.

? What message is that skit supposed to be sending? Wouldn't that simply be a claim that being LGBT is not a sin, or is a forgivable one*? How is it saying anything negative about Catholicism at all?

*Which, btw, if I am not mistaken, is consistent with current Catholic doctrine.

Right so if I made a dance troupe called The Bugchasing Rock Spiders, and they were good dancers and singers and occasionally made innuendo about wanting to bang your small children, that would not reflect any malice?

Also note that I made this group 50 years ago and due to an explosion in homophobia over the past decade business has boomed and we have acquired major corporate sponsorships requiring we sanitise our image to some extent, so now the innuendo is restricted to tweens and older.

Five days ago you said

they aren't just a gay rights group, but clearly ridicule Catholicism.

@naraburns @desolation This might be helpful context for your discussions.

I said that based on what I remember from reading about them 30+ years ago, when they were in the news fairly often in the Bay Area, and before I looked at their website and saw nothing re Catholicism at all. I was mistaken; now I am pretty sure that they are just being juvenile. Or that they have changed over time (it was initially something like 4 guys; now it is a 501(c)(3)).

Where is your evidence that they are anti-Catholic? You linked to their website

I'm guessing you didn't notice the non-underlined space in-between "and" and "mock"? The "mock catholics" link is to video of a man pole-dancing on a crucifix.

You are correct that I did not see the gap. But, 1) is the video actually about the Sisters of Perpetual Indulgence? I think I see one "nun" in the background watching, but I don't see any participating in the performance. And 2) what exactly is anti-Catholic about sexualizing Jesus? If I said, "I want to fuck Jesus," I am sure that some Catholics would be offended. But how is that statement anti-Catholic? As opposed to expressing an idea that disagrees with Catholic doctrine?

You really think that statement is a simple theological disagreement? It doesn't just disagree with Catholic doctrine, it mocks it. This is obvious.

If so, so what? How does "mocking" an idea somehow become more "anti-Catholic" than criticizing it? And, tell me, what exactly does "anti-Catholic" mean? Surely, if it is objectionable, then it must mean something more than mocking ideas; it must mean saying something negative about people. That is what anti-Semitism is, right? It is not simply a statement that certain doctrines of Judaism are wrong; it is a statement that something is wrong with Jewish people. Ditto re racist statements, and homophobic statements, and sexist statements, etc.

Is smearing bacon on a Quran islamophobic?

How does "mocking" an idea somehow become more "anti-Catholic" than criticizing it? And, tell me, what exactly does "anti-Catholic" mean? Surely, if it is objectionable, then it must mean something more than mocking ideas; it must mean saying something negative about people.

By your reasoning here, an outright racial slur is not anti-(a race).

I don't understand. Isn't a racial slur saying something negative about people? That is certainly my understanding.

A racial slur is negative in the same way that mocking is saying something negative. I don't know a coherent standard for "saying something negative" that would let you count one and not the other.

I'm going down the list of slurs in my head, and can't think of a single one that says a specific negative thing about anybody. They're just another way of saying someone is black/Jewish/gay/etc.

But being "worse" is a claim about degree, not about kind.

Surely, if it is objectionable, then it must mean something more than mocking ideas; it must mean saying something negative about people. That is what anti-Semitism is, right? It is not simply a statement that certain doctrines of Judaism are wrong; it is a statement that something is wrong with Jewish people.

There are certain beliefs and practices so strongly tied to a group's identity that to mock the belief/practice and to mock the group of people is one and the same.

And is this one of those cases? Because, again, all they seem to do is dress as nuns.

The original usage of "anti-Catholic" in this thread referred simply to a group created solely to mock a group of Catholics. This is hardly pro-Catholic, is it? I think you are attempting to make the phrase "anti-Catholic" both stronger and more specific than it really is, in order to say that that usage of the phrase was incorrect.

Because it's an idea that disagrees with Catholic doctrine and not only is it expressed in a very rude and aggressive way, but that aspect is tied to why you'd want to say it in the first place. There's a reason why it would be nothing more than a weak joke to say "I find Jesus sexually attractive", and why nobody would actually say that.

I don't for one moment buy that a man pole dancing on a crucifix is just a disagreement with doctrine. The whole reason for doing it is that Catholics don't like them doing it. I'm not even sure what doctrine they're purportedly expressing.

I'm not even sure what doctrine they're purportedly expressing.

  1. If you don't know what they are trying to say, then how are you so sure it is anti-Catholic?

  2. Why anti-Catholic, as opposed to anti-Protestant or anti-Eastern Orthodox?

  3. Most importantly, let's not forget that there is no reason to think that the performance is actually by the Sisters of Perpetual Indulgence. The evidence therefor seems to be zero.

If you don't know what they are trying to say, then how are you so sure it is anti-Catholic

Jiro does know what they're trying to say; they're trying to mock Catholics. The point was not "the meaning to this behavior is unclear" but rather "the meaning to this behavior would be unclear were your point correct".

If you don't know what they are trying to say, then how are you so sure it is anti-Catholic

I know what they are trying to say, but what they are trying to say doesn't include nontrivial objections to doctrine.

Why anti-Catholic, as opposed to anti-Protestant or anti-Eastern Orthodox?

Assuming the "nuns" are involved, nuns are associated in the popular consciousness with Catholics. It doesn't matter for these purposes that some other groups also have nuns.

what they are trying to say doesn't include nontrivial objections to doctrine.

Can you link to the group commenting on any doctrine, trivial or otherwise?

Assuming the "nuns" are involved

I was referring to the linked video of the pole dancing, which does not include nuns.

what they are trying to say doesn't include nontrivial objections to doctrine.

Can you link to the group commenting on any doctrine, trivial or otherwise?

I don't believe they are commenting on doctrine, so of course I can't link to examples of them doing what I just said they aren't doing.

You can mock X and be anti-X without commenting on X's doctrine at all.

Can you link to the group commenting on any doctrine, trivial or otherwise?

To be fair, they are trivially commenting on doctrine, so I shouldn't have said that they're not commenting on doctrine at all. "Catholics think it's bad to show nuns and Jesus in a sexual context" is, technically, a doctrine, and by deliberately doing that anyway, they are trivially commenting that they disagree with it. I wouldn't count that, of course, as nontrivially commenting on it.

deleted

It is one of the many small differences between Protestants and Catholics. Raised Protestant, I was told those evil Catholics were wrong to have a crucifix (instead of just a cross) because Jesus was no longer on the cross.

deleted
