ControlsFreak

5 followers   follows 0 users
joined 2022 October 02 23:23:48 UTC

User ID: 1422

No bio...

Perhaps part of it is that married women who changed their name want to vote too.

As someone whose wife came from a country where women don't tend to change their names, and who can thus attest to a significantly higher-than-normal level of grief over a wife changing her name: getting US documentation that would be sufficient for voting is probably the easiest part of a married woman changing her name.

This is correct. My comment is mostly trash; it's a pointer to something interesting with just enough summary to put it in context and get over the top-level comment barrier. Rov_Scam's is good. McKenzie's is good.

How about in Variant 2? Should Alice do some weird anthropic probability shifting for what she puts into the computer for Bob? Should she do two different weird anthropic probability shifting things, one for herself and a different one for Bob?

...wouldn't it be sooooo much simpler to just say, "Alice is capable of distinguishing between the probability of the coin flip itself, the probability that she observes an outcome, and the probability that Bob observes an outcome," rather than some conceptual mess garbage about her simultaneously anthropically probability shifting for Bob opposite her own? Like, what do you even mean "anthropically probability shifting" now? I thought it was supposed to be something about updating a belief on the coin flip, itself, but it seems like you've already just admitted that that is not happening. She still has "the naive weight of the coin". She still knows about this probability as a distinct probability.
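For what it's worth, the distinction I'm pointing at is easy to make concrete. Here's a toy sketch (my own invented numbers and observation filters, not the thread's actual Variant 2): a coin with a known weight, plus separate filters governing when Alice and Bob get to observe an outcome, gives three distinct, separately computable probabilities.

```python
import random

P_HEADS = 0.6  # the "naive weight of the coin" (value assumed for illustration)
# Hypothetical filters: probability each observer sees the outcome, by result.
P_ALICE_SEES = {True: 0.9, False: 0.5}
P_BOB_SEES = {True: 0.3, False: 0.8}

def p_heads_given_seen(filter_probs, p_heads=P_HEADS):
    """Exact Bayes: P(heads | this observer sees an outcome)."""
    seen_and_heads = filter_probs[True] * p_heads
    seen = seen_and_heads + filter_probs[False] * (1 - p_heads)
    return seen_and_heads / seen

def monte_carlo(filter_probs, n=200_000, seed=0):
    """Simulate flips; among outcomes this observer saw, fraction that were heads."""
    rng = random.Random(seed)
    heads_seen = total_seen = 0
    for _ in range(n):
        heads = rng.random() < P_HEADS
        if rng.random() < filter_probs[heads]:
            total_seen += 1
            heads_seen += heads
    return heads_seen / total_seen

# Three distinct quantities, none of which requires "shifting" the others:
# P(heads)                  = 0.6
# P(heads | Alice observes) = 0.54 / 0.74 (about 0.73)
# P(heads | Bob observes)   = 0.18 / 0.50 = 0.36
```

The point being: Alice can hold all three numbers at once. Reporting a conditional probability for Bob doesn't require her to "shift" anything about her belief in the coin itself.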

This is not a Quality Contribution.

This is a Quality Contribution. You really ought to just read the whole thing and maybe not even bother reading my comment.

Patrick McKenzie, if you don't know, knows a lot about financial infrastructure and its interaction with tech, regulatory, and human systems. He routinely shares his knowledge in mostly accessible form online. He is also one of the few authors for whom I would be shocked if I learned that he used LLMs in his written work. When I read him, he often plays incredibly subtly, almost understating his point, often making me think again to see whether he's making the implication I think he might be making. His writing is quite unique in my mind. The linked post is his sizable contribution to the conversation about the SPLC indictment.

When the indictment came out, I didn't really say much. I didn't have a lot of specific expertise on the legal case. I was generally suspicious of how one could draw proper lines around the idea of 'donor fraud', where non-profits are defrauding donors who usually give money to non-profits without any strings attached.1 I upvoted @Rov_Scam's comment to that effect. I don't want to denigrate it; I think it was a great comment, fully deserving of a Quality Contribution in its own right. However, I now think (only with the benefit of hindsight from McKenzie's post) that it may have glossed a bit too much over the bank fraud charge.

McKenzie is very serious about the bank fraud charge. He appears to have lived and breathed a world where bank fraud charges are routinely brought and routinely won by the government. He recounts how incredibly easy it seems to be for the government to routinely win on these cases. I don't know that I have a good summary of this; again, you kind of should just read it. He seems to think that basically any lie to a bank will do (a single piece of paper or a single word, he says), and he goes on at length about the extensive record-keeping done by banks and how these systems allow both internal-to-banks investigators and external regulators to easily find the documents or communications to make such charges a done deal. He gives a plethora of examples of actual people going to prison for these exact charges to make his case.

He then turns to what may be more important for the broader Culture War. Sure, lots of conservatives are vaguely annoyed with the SPLC, but even if its people get brought up on charges, how much does that really change in the world? He lays out the technical means by which banks evaluate their customers and their transactions. Some of this might be known to people who were already steeped in this portion of the Culture War, but I hadn't really realized it until he laid it all out. Sure, I knew of stuff like OFAC, where the Treasury gives a list of foreigners/entities that US banks are prohibited from dealing with, and sure, banks pay close attention to that list and scrutinize their customers/transactions accordingly. But they also use all sorts of other 'data products' to screen out potentially 'problematic' customers/transactions. One of the most widely used was developed by the SPLC. If you're one of those conservatives who were vaguely annoyed by the SPLC but didn't know this already, get ready for your blood to boil.

Admittedly, as he points out, much of this was actually public information. I just never had it laid out in one place, in a way that really made it sink in what was going on.

Not just banks, but all kinds of other tech/finance companies, including regular companies with employer matching contributions to non-profits, use lists like those generated by the SPLC to filter who they transact with. They want to tell regulators that they take steps not to transact with The Bad People, and how else can they feasibly do that other than to just use the SPLC list? In one of those 'public, but I didn't really know about it/internalize it' moments, he talks about how Amazon used the SPLC list, and how Jeff Bezos talked about it in public Congressional testimony:

Jeff Bezos, in Congressional testimony, describing Amazon's reliance on the SPLC data product for AmazonSmiles, a now-discontinued charitable product they offered:

"We use the Southern Poverty Law Center data to say which charities are extremist organizations. We also use the U.S. Foreign Asset Office [sic] to do the same thing.”

Bezos was interrupted before he could finish his next thought; you're welcome to read the testimony for full context. He is clearly referring to the OFAC SDN list.

Bezos went on to elaborate that the Fortune 2 company could not operate AmazonSmile without some way to kick out the extremist organizations and that SPLC was, effectively, the only reasonable option. He asked Congress for other suggested data providers. None were offered. (No, really, he did that.)

Let us pause to acknowledge that Bezos, one of the richest men in the world, considers these two four-letter organizations as peers. One of them is created by statute, operates within constitutional and administrative-law constraints, and answers to Congress, the courts, and ultimately the people of the United States of America. It could jail Bezos, personally, for willful non-compliance. And the other is …some people in Montgomery with a very specific interest, whose decisions are subject to review by no court, and whose only power appears to be moral suasion.

Bezos was equally and entirely committed to satisfying both.

Why? We’ll return to it in a minute.

[Me here: returning to it after a minute]

Well, remember, when you bought the data product, you were also buying someone anticipating your concerns before you even voice them and preparing options before you ask. Jeff Bezos’ words echo in San Francisco today: Does anyone know another option?

[Me here: returning to it after another minute]

About a month later 15 Republican lawmakers wrote Bezos a letter, saying:

Amazon’s ongoing reliance on the SPLC, with its documented anti-conservative track record, reinforces allegations that Big Tech is biased against conservatives and censors conservative views.

The letter did not contain a recommendation for an alternative data product.

What's next is what may be the biggest impact of the SPLC indictment. Not some guys from some non-profit, no matter how influential, going to prison. Instead:

Now, a quiz: do you think Compliance at a bank is neutral on “Can the bank delegate transaction-level decisioning authority, in any part of the business, however small, to an entity under federal indictment for bank fraud? Does the answer change if they are convicted of bank fraud?”

No! Compliance will not let you do that! Not because they are worried about the integrity of the blacklist. An accused bank fraudster has the final say to approve money movement out of a regulated financial institution. That is very likely intolerable to Compliance.

That is, he thinks that all those companies (banks, finance companies, internet companies, employers who match contributions to non-profits, etc.) will probably have to stop letting the SPLC tell them who The Bad Guys are that they shan't transact with.

His post goes on.

He describes an alliance of non-profits, organized by the SPLC, that he says engaged in an extremely lengthy campaign to pressure companies. He describes the mechanics of how their pressure campaign worked, how they burrowed themselves into the policies and workings of many companies. Again, I find it hard to summarize, and you should read it, but his persistent theme is that these folks claimed to be non-partisan in this non-profit work, while he builds an extensive case that they were clearly pursuing partisan targets, and that their entire operation dried up once their partisan targets seemed to no longer be targets.

In his typical understated fashion, right near the end, he tells a parable, presumably for those who have eyes to see and ears to hear. My interpretation of his parable is that non-profit law requires folks to actually be non-partisan. Of course, non-profit law is not McKenzie's specialty, so others closer to that world will have to chime in. But it seems to me that he's clearly indicating that he thinks it's plausible, perhaps likely (and if The Powers That Be haven't thought of it yet, they probably should), that the gov't will continue going after various folks who were involved in this.

1 - For, uh, reasons, I am aware that people can and do attach strings to donations plenty of times. Moreover, I'm aware that from the non-profit's perspective, this can be quite annoying unless they've already chosen to build boxes for those particular strings (e.g., "We have a 'X Fund', and donations marked as going to the X Fund will be used in the X Fund"). In fact, my sense is that plenty of non-profits will simply refuse donations that try to attach additional strings that they don't already have boxes for.

I don't act particularly Indian, beyond a fondness for biryani.

It would be monumentally difficult for anyone to not act particularly Indian in this particular way.

In fairness, it trips up a lot of people. I would probably say including you. Last time we discussed it, you didn't come back to explain how your position worked, but my best interpretation was that your position was:

Alice is smart enough and capable of distinguishing between "the probability that Bob observes an outcome" and "the probability of the coin flip, itself"... but is too stupid to distinguish between "the probability that I, Alice, observe an outcome" and "the probability of the coin flip, itself"?

I paid a decent amount of attention when they did the LLM-vs-LLM chess tournament. You could read a bunch of the 'thinking' tokens (I use single quotes not to make fun of the term, but only to note that it is genuinely difficult to unpack what the word does/does not mean beyond being conventionally used for a particular set of tokens). Some of them were genuinely impressive. Some were outright gibberish. Obviously, they were typically better in the opening phase of the game, where there are likely gobs of information on the internet/in books spelling out the reasoning behind particular moves. But that is not to say that it was never impressive later in the game. Of course, that competition used a pretty significant harness that objectively retained the true state. To what extent that matters and/or can be overcome is an ongoing question.

One possibility for trying to make progress in testing this distinction is to consider chess variants, particularly novel ones that are very unlikely to have anything in the training data. Chess960 is almost this, but it at least appears in the training data, even if only minimally in comparison; and to start, I don't even know that I'd go that far. "Let's play a game of chess where the knights and the bishops switch starting places," might be a good start. A harder version would be, "Let's play a game of chess where the knights move like bishops and the bishops move like knights." It's logically the same game, but you have to keep track of a difference in notation as well as reasoning.

I imagine this would actually make the game harder for most people, since they're so used to thinking in one way. Good players will likely make more reasoning mistakes in calculating longer lines, but will probably be able to double-check well enough immediately before making a move that they're not likely to attempt all that many illegal moves (unless they are pretty severely time-constrained). Classic engines would have essentially no degradation in performance (because you'd just have to bake in the difference). I'm not quite sure what kind of degradation to expect from LLMs or, having observed some level of degradation (or none), how one would interpret it; but I'd be interested to see.

One could get a bit more wacky, like, "Knights can no longer simply jump over pieces; at least one of the two possible L-shaped paths needs to be open," possibly also throwing in, for the fun of it, "Bishops may now jump over one piece along their route," or something. I played Knightmare Chess long ago when I was young. There are a ton of tweaks you can do to mess with stuff. For humans, it is fun to keep track of various rule modifications and try to reason through them.
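To make the "bake in the difference" point concrete, here's a minimal sketch (my own, purely illustrative, ignoring kings, checks, and the rest of the rules) of a move generator where a piece's movement pattern is just a table entry, so "knights move like bishops" is a one-line swap of the table:

```python
# Movement patterns as data: "leapers" jump to fixed offsets,
# "riders" slide along directions until blocked.
KNIGHT_STEPS = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
BISHOP_DIRS = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

NORMAL_RULES = {"N": ("leap", KNIGHT_STEPS), "B": ("ride", BISHOP_DIRS)}
# The variant is literally a swap of table entries:
SWAPPED_RULES = {"N": ("ride", BISHOP_DIRS), "B": ("leap", KNIGHT_STEPS)}

def on_board(sq):
    return 0 <= sq[0] < 8 and 0 <= sq[1] < 8

def moves(piece, sq, rules, own=frozenset(), enemy=frozenset()):
    """Pseudo-legal destination squares for `piece` at `sq` (file, rank)."""
    kind, pattern = rules[piece]
    out = []
    if kind == "leap":
        for df, dr in pattern:
            t = (sq[0] + df, sq[1] + dr)
            if on_board(t) and t not in own:
                out.append(t)
    else:  # rider: slide until hitting the edge, own piece, or a capture
        for df, dr in pattern:
            t = (sq[0] + df, sq[1] + dr)
            while on_board(t) and t not in own:
                out.append(t)
                if t in enemy:
                    break
                t = (t[0] + df, t[1] + dr)
    return out

# A knight on d4 (file 3, rank 3) of an otherwise empty board:
# normal rules give 8 squares; "moves like a bishop" gives 13.
```

That's why a classic engine sees no degradation: the rule change is data, and the search is untouched. Whatever LLMs are doing, it is evidently not this.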

At the very least, if LLMs absolutely tank in these sorts of variants, just spamming illegal moves all the time, while humans are able to at least moderately cope, it would be some amount of useful information. Of course, one must always have the disclaimer that it is certainly possible that with enough progress and compute, LLMs may even outperform humans. We sort of just don't know.

My concern would be that for someone who gets up in the morning and doesn't like who they see in the mirror, that surgery will not fix what ails them.

Are you being honest with yourself that you could just get one surgery, and then you would be happy? That it would remedy what gnaws at you?

I had a plastic surgeon come up in one of the podcasts I listen to. I remember him saying that there was a category of people (I don't remember the whole set of criteria, but I remember that it included young-ish men) that he simply would refuse to operate on, specifically for this reason. He had too many experiences of people in that category (again, I don't remember all of the qualifiers) exhibiting this exact phenomenon, and they'd keep coming back for something else, then something else, then something else, and it just wasn't healthy for them.

I would have thought that it would be women who are more likely to have this problem, which is why it stuck out in my memory that he called out men.

Which is just another way of saying "they're irrelevant".

No, they're just as relevant as any other individual voter.

Speaking generally, I don't know that I have a useful definition of being "relevant" or "irrelevant". I'm hearing very similar claims that, after Callais, gerrymandering can or will make many black voters "irrelevant". One could pithily retort that they are just as relevant as any other individual voter, but I don't think that would be satisfying to the person making the claim.

This is sort of precisely where I think there is a simmering culture war, the clash between your comment and that of @JTarrou.

Scoping out a bit, the stylized story I might tell would be that back in ye olde days of Snowden/Assange, there was this sense of "information is meant to be free" and "sunlight is the best disinfectant". My sense is that at least some of those folks had a change of heart when their own ox was gored. But I think it's still a significant culture war.

Are soldiers supposed to keep secret military operations secret? Or is part of the point of things like prediction markets specifically to say that "information is meant to be free," that even governments shouldn't be able to keep that sort of stuff secret, and that it's good to build tools whose "whole point" is to prevent folks from being practically capable of keeping even stuff like that secret?

I certainly don't think this culture war has been won in either direction. It's just sitting there, menacingly, underneath a variety of these related debates.

We have an indictment of a special forces soldier, who participated in the planning/execution of the Maduro raid, for making Polymarket bets on questions about Maduro and US involvement in Venezuela.

Specifically, Gannon Ken Van Dyke used USDC.e to trade on at least four markets: "Maduro out by ... January 31, 2026", "US forces in Venezuela by ... January 31, 2026", "Trump invokes War Powers against Venezuela by ... January 31, 2026", and "Will the US invade Venezuela by ... January 31, 2026?" The last of the four markets actually resolved to NO, but he sold his position at some point before he took losses.

He apparently didn't do a great job of hiding it. He transferred his winnings to "a foreign cryptocurrency 'vault' which advertises that it generates interest for depositors" and then a couple of weeks later, transferred them to his crypto exchange account. At some point after (the indictment doesn't say), he cashed it out and transferred it to a brand new Interactive Brokers account (which was presumably in his real name). The only steps they mention him taking to try to cover his tracks were asking Polymarket to delete his account (claiming that he had lost access to the associated email address) and changing the email address on his crypto exchange account. I think the implication in the indictment is that the original email account associated with his crypto exchange account was "subscribed to in his name".

To my knowledge, this is the first US prosecution of someone trading in 'war prediction markets' using classified insider information. Unsurprisingly, they throw in quite a few different counts, and I'm not qualified (and would have to do more work) to have a sense of whether some of them are unlikely to succeed (did he commit "fraud" in some technical sense? somebody would probably have to know the case law of the particular statute).

Everyone has known that this sort of thing was possible; some have criticized prediction markets for even having specific markets that are vulnerable to this type of insider trading on sensitive national security matters. The buzz from soldiers on military subreddits is that they're confident civilian politicos have also made a bunch of money trading on this stuff. Are they not getting prosecuted because they're connected to the powers that be, while lowly grunts have examples made out of them? Or are others just better at hiding their tracks?

If I had one observation of my own to add, I would reflect on the nature of monetary incentives. They're potentially large; this guy allegedly made about $400k. I think back to the story of cyber crime generally. Some stylized accounts say that long ago, internet viruses or whatever were kind of a game that people sort of did for fun. Some people just liked causing damage or they just wanted to see what it was possible to do. There weren't super easy ways to make a bunch of money with it. It certainly wasn't non-existent, but there were genuine, significant frictions. Then, when crypto made it vastly easier to extort folks for real money from the other side of the world, it took off on industrial scales.

In some sense, I feel a bit of that here. People getting in trouble for bad use of their access to classified information obviously isn't a new problem. Folks have been doing it because of a girl they like or because they decided they now believe in some other government/social or political movement/whathaveyou more than their promises to their own government. Maybe some folks even just found it fun. There was at least the one guy who posted classified information in the forums of a video game, because he wanted the US tanks in the game to be stronger. Foreign governments have long been trying to monetize this, as well, paying handsomely for information provided by insiders. But that path to money is kind of hard and cumbersome. You have to find some legit way to contact some component of the foreign government, possibly build a relationship, etc. Now, there are big piles of money, just sitting there, ready to be taken, and my guess would be that it's probably easier for folks to think that they can figure out how to cover their tracks while they bank a bunch of money this way. Many of them might actually be wrong, be bad at covering their tracks, and get caught. Others might succeed, and I have no real sense for how much this phenomenon will grow.

I think the one I'm remembering might have been a different one that came out later, but yeah, probably similar. There is, of course, a wide range of estimates, depending on model details.

climate science itself is based on thousands of different interactions that are hard to model out with degree of accuracy

I actually kind of go the other way on this, depending on how strict one is about the "degree of accuracy". There's an at least plausible way that you can approximate the high-dimensional system with a low-dimensional representation with one primary input (carbon-equivalents). Of course, one needs to consider a range of possible time series inputs and acknowledge that it's a pretty noisy model, but you can do okay-ish, about as okay-ish as you can do with other noisy models. And of course, you have to acknowledge that your estimates are genuinely dependent on the chosen time series inputs (e.g., it took a long time and a lot of people saying, "RCP 8.5 probably isn't very likely," for folks to sort of grudgingly accept that it wasn't the most useful time series input; but maybe things could change and it becomes more likely! There's a genuine dependence on the time series input). But you can do alright.
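To illustrate what I mean by a low-dimensional representation with one primary input, here's a toy zero-dimensional energy-balance sketch (my own, with illustrative parameter values, not a calibrated model): the standard logarithmic CO2 forcing approximation feeding a single heat-capacity/feedback ODE, integrated by forward Euler.

```python
import math

# Illustrative parameters (not calibrated):
LAMBDA = 1.3   # climate feedback, W m^-2 K^-1
C_HEAT = 8.0   # effective heat capacity, W yr m^-2 K^-1
C0 = 280.0     # pre-industrial CO2, ppm

def forcing(co2_ppm):
    """Standard logarithmic CO2 forcing approximation, W m^-2."""
    return 5.35 * math.log(co2_ppm / C0)

def temperature_path(co2_series, dt=1.0):
    """Forward-Euler integration of C dT/dt = F(t) - lambda * T."""
    T, path = 0.0, []
    for co2 in co2_series:
        T += dt * (forcing(co2) - LAMBDA * T) / C_HEAT
        path.append(T)
    return path

# One primary input: a CO2-equivalent concentration time series.
# E.g., a linear ramp from 280 to 560 ppm over 140 years:
ramp = [280 + 2.0 * t for t in range(141)]
```

The temperature anomaly it produces rises smoothly toward the equilibrium implied by the doubled concentration. This is the sense in which the answer is genuinely a function of the chosen input time series: swap in a different concentration scenario and you get a different, but still well-behaved, path.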

It's when we glom on a coupled system, that operates in a vastly different timescale regime, that we run into serious theoretical problems.

I appreciate that you've probably reviewed the literature more closely than I have. Maybe a month or so ago, I saved a review paper, and I was wanting to go digging through the cites, but I haven't had time yet (the motivating question was concerning which/how many papers dealt with effects specifically of gum/patches). Perhaps you could help me with a few specific questions:

(1) You say, "It's a shit nootropic". Is this because you think that the worthwhile effects are, indeed, minimal? Is it worse than, say, caffeine? Or is this judgment coupled significantly with the dependency risk?

(1a) Is any of the above possibly conflated by possible interactions with, say, ADHD meds or even caffeine alone?

(2) Is there any dependency risk data you can point me to for gum/patches? I think Zyn is likely to be closer to vaping/chew tobacco than gum/patches (I can accept that perhaps Gwern got this one wrong). I've seen plenty of statements like that FDA one; note that it calls out pouches. Is there anything in the literature specifically for gum/patches?

For disclosure, I have toyed around with gum on a few occasions. I would use it for specific parts of my day where I wanted a mild stimulant and perhaps some increased habit formation, like going to the gym. When I would, for example, go on trips where I wasn't expecting to have gym access, I never experienced any withdrawal or cravings. It's more of a pain for me to buy than, say, protein/creatine, so I've also just gone long stretches without having any, with no difficulty. If anything, I feel like I feel more withdrawal effects from coffee or even caffeinated tea. This may be personal variation apart from the data, which is why I would be interested in whether you've seen any data specifically for gum/patches.

Couldn’t some of the models just be…wrong? Bad? Maybe even dishonest?

Possibly? Of course, there's the "all models are wrong..." quote, so it would take additional caveats there. But they don't have to be bad/dishonest. They're just trying to do something that we can't do.

People have been smugly telling me that climate change isn’t real(ly a problem) for years. They had studies and everything. Why is this time different?

Because before, you had people actively arguing that climate change wasn't real(ly a problem), and there were tons of vibes/momentum to stamp that out. Now, it's come around to my position, and the vibes are more, "Yeah, you're probably right, we probably can't actually estimate damage." Maybe with some feel-goods about how we can still do some things the author likes (e.g., batteries, public transit, methane) that are more or less economically viable. I think your comment is a good example of that. Similar to what I said to @quiet_NaN:

Back a decade ago, when I would give my position, the patterns were matched; the knives were out; I was classified as a "denier" who must be refuted. There's Nobel-winning work giving us estimates and everything!

I would have never gotten something so... tame... in response a decade ago at the old old old place. That's a pretty significant shift.

Humans make decisions under uncertainty all the time.

Sure, but usually they acknowledge when situations contain deep uncertainty. For a long time, many folks were acting like there wasn't any uncertainty with the human effects of climate change, or if there was, it was too minimal to matter in comparison to the known effect. Even shifting to, "Yeah, we probably can't estimate this and have very little clue, so we have to operate in a situation of deep uncertainty," is a pretty significant change.

I think your comment is a good example of the vibe shift. Back a decade ago, when I would give my position, the patterns were matched; the knives were out; I was classified as a "denier" who must be refuted. There's Nobel-winning work giving us estimates and everything! Now, when I say, "Yeah, we probably can't estimate that," the response seems more likely to be along the lines of, 'Sure, you're probably right that we probably can't estimate that. So what? We can still make decisions under uncertainty and maybe even do some things I prefer.'

Sign - yes. It will be a net negative.

How do you know this? From what I see, you just sort of guesstimated some things on different sides and don't take the timescales of the coupled systems into account. For example, you say:

Unless birth rates see radical change, the north simply isn't a place with enough people

Isn't this one of those things that could plausibly change? There have been entire reports on possible migrations of people, and people can move, populations can grow/dwindle on much faster timescales than climate changes. How do you account for the timescale effects?

Magnitude and scaling characteristics - no.

This is likewise concerning. Reddit search is hopelessly broken, but a while back, I remember one of these "estimates of climate change economic effects" papers coming out, and while I've already said that I think the task is actually impossible, I took the claim at face value and compared it to a contemporary Krugman NYT column that talked about tariffs. His approximate estimate of the reduction in GDP due to tariffs was ~3%, and I don't recall exactly what the particular climate paper gave, but the number that sticks in my mind was like 0.7%; I'm pretty sure it was something less than 1%. If we have no clue whether it's more like 1% or 10%, why should one think that it's more like 1% than 0%... or -1%? Like, how do we actually estimate this with anything other than vibes?

Didn't you go for vaping, whereas Gwern specifically distinguished between gum/patches and vaping, even in the abstract of the essay?

But there will be an impact, that's for sure.

How do you have any idea what order of magnitude the impact will be? How do you have any idea what the sign of the impact will be?

I think the vibes have fully shifted on climate change damage estimates. Tyler Cowen posted this morning with a terse:

The whole climate to gdp transmission thing does not seem to be working very well?

He's referring to this paper and this thread about it. They perform an empirical review of previous major estimates, focusing on replicating them and analyzing the methodology. One thing I found interesting is that they distinguished between damage estimates themselves and applications of damage estimates, like the social cost of carbon (SCC). They say that the latter have already been shown to be irreducibly uncertain; but even if the damage-to-SCC pathway were not irreducibly uncertain, they argue that since the damage estimates themselves are irreducibly uncertain, so too would be things like the SCC.

They spell out multiple factors that create identification challenges and show how small changes to the inputs of prior models can result in huge changes in the outputs, in strange and unstable ways. They don't necessarily think prior authors did anything actively bad or malicious in their approach, just that the entire endeavor is probably doomed from the start:

Importantly, we don’t think these particular papers are uniquely flawed; our point is that they are attempting an impossible feat...
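A toy illustration of the kind of identification fragility they're describing (mine, not theirs, with made-up numbers): regress an outcome on two nearly collinear regressors, perturb one regressor by plus-or-minus 0.01, and watch the fitted coefficients swing wildly even while both fits reproduce the data essentially exactly.

```python
def ols2(x1, x2, y):
    """Least squares for y ~ b1*x1 + b2*x2 (no intercept), via 2x2 normal equations."""
    s11 = sum(a * a for a in x1)
    s22 = sum(a * a for a in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s1y = sum(a * b for a, b in zip(x1, y))
    s2y = sum(a * b for a, b in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return ((s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det)

x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.02, 1.98, 3.02, 3.98, 5.02]        # essentially y = x1 plus tiny noise
wiggle = [0.01, -0.01, 0.01, -0.01, 0.01]

# Two "specifications" of the second regressor, differing by 0.01 per point:
x2_a = [a + d for a, d in zip(x1, wiggle)]
x2_b = [a - d for a, d in zip(x1, wiggle)]

b_a = ols2(x1, x2_a, y)   # coefficients near (-1, 2)
b_b = ols2(x1, x2_b, y)   # coefficients near (3, -2)
# The coefficients flip sign and triple in size from a 1% data perturbation.
```

Of course, real damage-estimation models are far more elaborate, but near-collinearity among warming, time trends, and everything else that co-moves with them is exactly the sort of thing that makes "small changes to inputs, huge changes to outputs" unsurprising.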

Their tweet thread has the typical disclaimer needed to get out in front of the typical objections one would immediately hear upon taking such a position:

Importantly: we are not claiming that climate change is economically harmless. We're arguing that the magnitude of damages is deeply and irreducibly uncertain, and trillion-dollar decisions need to stop being made as if it isn't.

I feel a bit vindicated by the vibe change, because I had been arguing something similar a full decade ago at the old old old place, pretty much on my lonesome. Obviously, I didn't have the exact set of empirical critiques that these authors present today, but I feel like it's a good example of where you can have very strong theoretical knowledge in a related/relevant area (timescale-separated dynamical systems) that leads to a correct intuition along the lines of, "I don't actually have to know the details of the methods they're using (though I did look at several back in the day); I can't imagine they could possibly accomplish what they're setting out to accomplish, just because of the nature of the type of system they're working with."

There has, from time to time, been some discussion concerning doctor salaries. I don't personally care all that much about this. They're highly-trained professionals in an in-demand field, and doctor salaries probably aren't the main driver of overall healthcare costs.

Nevertheless, there's often some debate over what the numbers actually look like. I was just linked to this tweet in one of my econ link aggregators. (Yay, built-in browser translation!)

Their claim is that 84% of American physicians are in the top 10% of incomes, and 26% of American physicians are in the top 1%. Their paper makes comparisons to other countries. They also broke it down into primary care vs. specialists.

So, at least this is one snapshot view of the actual distribution of doctor salaries, which I hadn't really seen before in these discussions. Assuming, of course, that their methodology is sound, which I'm not qualified to assess.

if that made it work better

It seems to me that you are saying that you have goals for what you want the end product to be like. As such, I think you're implicitly affirming that you would choose to not do things like train on the test set. That is, you wouldn't just clearly and directly give it the answers, even though you could.

Now, the question seems to me, "What do you even mean by benevolence?" You originally said:

Lack of benevolence: God created the world and all that is in it, and is able to interact with it, but doesn't actually care about us.

But this sort of doesn't make direct sense. You care about the LLM you're creating. You deeply care about it, at least in that you very much care to "ma[k]e it work better". It seems like you're using some other sense of the words that is not fully fleshed out. Like, maybe to be benevolent, you have to care about some particular type of goal or in some particular way, and other types of caring/goals do not count, or something. I think we just don't have enough information to figure out whether this reasoning makes much sense.

I drive 99% of the time, and my wife very very occasionally says things. She always apologizes about it, but somehow every. single. time. it is valid and useful information. For example, maybe I'm looking back to initiate a lane change, and something suddenly happens in front of us and to the other side.

That sexual revolution thing didn't turn out so well for women, did it?

If you were creating an LLM, would you train on the test set? If not, does that mean that you lack benevolence? You could just clearly and directly give it the answers!