What can one learn about how to get away with serious crimes from this?
Very little. This isn't about a man getting away with serious crimes, it is about the fact that elites don't consider sexual abuse of chavettes a serious crime. It's a nothingburger when well-connected celebrities do it, it's a nothingburger when Mirpuri Pakistani gangs do it, and it's a nothingburger when Mum's new boyfriend does it.
After the Acosta plea deal and Epstein's "release" from "jail", he returned to being a star of the Manhattan social scene despite everyone knowing he was a sex offender. Nobody who mattered cared - apparently Neri Oxman's female graduate students were upset at being drafted into a dog-and-pony show put on for Epstein as a major donor to the MIT Media Lab, but Oxman's boss expected her to shut them up with the usual tools senior academics use to discipline junior ones, and she did.
Late to the party I started
Fashionably late, I would say.
but spending money to incentivize a change in outcomes in my opinion is categorically different than legally enforcing those outcomes
I'm only relying on the example of "DEI" provided in your original comment. Unless DEI encapsulates "spending money to incentivise a change in outcomes" (in a discriminatory way I might add), why would you include "Women-owned businesses" as an example of a DEI initiative? Is there a law mandating that women-owned businesses must be X% of businesses? AFAIK most of the benefits that women-owned businesses receive involve preferential access to funding and grants and so on, but they don't amount to an explicit mandate that women-owned businesses must be 50% of the businesses in a given field.
Unless that actually exists and the situation is even more ridiculous than I initially thought (I seriously hope it is not, but I won't rule it out), or unless your opinion is that it must be written into legislation to qualify as DEI, which seems overly pedantic about how the incentive should be implemented, I find the statement you've made here to be in conflict with your previous ones.
the former is not strictly DEI imo, whereas the latter is.
I would think they are both DEI, given their shared objective of achieving representation for "marginalised groups", and that most people would consider them such. DEI isn't defined by a hyperspecific set of actions so much as by a loose set of beliefs and objectives, IMO.
But to paint it as DEI is imo aggressively retroactive, because the West has a century of history of programs that attempt to bring about positive social change through funding, while the phrase DEI only recently came into the lexicon.
This reasoning is quite odd, to say the least. The concept of social programs is an old one; however, that doesn't mean the word "DEI" can't be used to refer to a particular set of (largely discriminatory) social programs that attempt to bring about social change through funding, grounded in a specific ideological outlook and a certain cultural context. Just because something can be defined as part of a broader phenomenon does not mean it can't also be singled out for its peculiarities.
And even if DEI-like things existed before the term was coined, I don't think a term's being retroactively applicable inherently makes it invalid. If that were so, a large swath of the terms scholarship uses to describe systems of social organisation that have existed since forever would need to be thrown out.
I have frequently taken vacations from this hobby, and come back to it. Sometimes it's a few days, sometimes a few months. But I always come back. In any case, I want to do this. I just want to regain the productivity I used to have (granted, in a more accessible environment and with much more time for it), so that I can sit down for a session, implement an idea, and see it pan out at a more predictable pace than right now. This entire aspect of my life is "optional" only insofar as having a family or a job is optional. Yes, I could abandon it, but I'd just end up obsessing over it again sooner or later.
Maybe I need to age into some more wisdom before I can accept your advice, but right now I'm philosophically of the opinion that being a human is cheap. In all ways that matter, you are what you do. We're human doings, not human beings.
Would it maybe be an idea to take a vacation from this hobby? I'm a pretty regular reader here, and I don't think it's psychologically healthy to always be so critical of oneself about the progress one is making in an optional aspect of one's life. I struggle with this myself, which is why I'm playing the old wise man now. As Oliver Burkeman says: you don't have to think about it in terms of a productivity debt that you have to pay off to be considered a human being. You are one.
Sure, there was still a role for cavalry as mounted dragoons or scouts in WW1 and WW2, but European doctrine was genuinely still theorizing actual cavalry charges with lances and sabers.
You're telling me that I am wrong and that I am ignorant, but what I'm describing is the core functionality of both DeepSeek's and Google's flagship products. As I recall, you are all-in on DeepSeek. Do you actually read and understand any of the technical material they publish, or the subsequent commentary thereon? Or are you a mere "think piece" writer?
In elementary school grammar classes, students are admonished for saying things like “Me and Tim played baseball yesterday”.
I always thought about it this way: if the sentence makes sense with just one person, then I should use "I". For instance, "I went to school yesterday" means I should say "My brother and I went to school yesterday." But when the original sentence works with "me", I should copy that too, e.g. "My mother gave me a cookie" becomes "My mother gave a cookie to me and my brother." I am not sure if this is correct, but that is what I use as a heuristic.
The hypercorrection makes sense, except that, given how the English language evolves, it will actually become acceptable very soon. Similar to how literally/metaphorically are now basically synonyms, except when they are not.
Do you mean everyone trying to implement age verification on their platforms and in their countries all of a sudden?
I don't think so. Our original train was supposed to get us to Dover directly. This one took a parallel route, stopping at Ramsgate. Wasn't too hard to go to Dover from there, thankfully.
Our train was a combined Dover/Ramsgate train that separated in Faversham. The Dover half was the one that stopped at Canterbury.
Jesus. I would have turned back. Anything north of 27° rules out a hike for me. That said, when the sun made a surprise appearance on this trip, it actually made the whole experience way more pleasant.
It was another loop trail; we were halfway around it, and it was uphill both ways. Turning back wouldn't have helped. Can't say I enjoyed it myself, but now I know why cross-country skiers are all asthmatics. No exercise-induced shortness of breath for me, thank you, montelukast.
Detail 3: Software scales infinitely and is eating the world despite being tragically, hopelessly, pathetically shit. The fact that it's shit hasn't stopped it from running the world and a not-insignificant portion of people already live being told what to do by a computer instead of the other way around. I see this as the most likely outcome for AI if it's not already the case, barring us cracking AGI way sooner than expected.
Software was better when making it was low status and not super lucrative.
I agree with most of what you said, though I do not think ASI or AGI or whichever term people wish to use is ever going to happen.
The bubble popping will cause a lot of pain to the world; I hope I can get some money before that happens.
It is indeed a bubble, and it indeed will pop!
However, much of modern first-world labor consists of non-productive/bullshit jobs, and AI is going to Change Things for these people. If most of the growth is in the office-tier services industry, those jobs stand the greatest chance of being decimated first.
It's already Changing Things for people in my industry, which deals in physical and practical reality: we are leveraging use cases in computer vision and smart sorting/learning algorithms, and many of the less talented devs have admitted to using it to format and/or proofread code.
What people are missing in the entire debate are a couple of details that turn out to be immensely powerful in practice:
Detail 1: It doesn't have to be smart to change the world. It already has. It can in fact be moronic and still change the world. As some doomers have pointed out, it doesn't need to be smart to kill us all.
Detail 2: Many of the movers and shakers, people with Real Money, hate their fellow man and trust them way less than even a trained orangutan. As pointed out by others already, the metric is stupid. Nobody is seriously comparing the AI to an orangutan. But even if I took the metric seriously, I have been in many conversations with these people where it becomes clear they consider others marginally less intelligent than an empty aquarium. These are the people making decisions, and in those decisions the utility of their fellow man counts for as little as they can make it.
2b: AI occupies a marvelous space in the legal world right now, where it is conceivably black-box enough for normies not to understand it or what's in it, and the complexity may grow, as a lot of research is currently leveraged towards using AI tools to build better AI. This is magical for companies that wish to absolve themselves of legal responsibility - they didn't screw up, the AI did! Talk to our legal team, which we replaced with one lawyer and a bunch of AI tools.
Detail 3: Software scales infinitely and is eating the world despite being tragically, hopelessly, pathetically shit. The fact that it's shit hasn't stopped it from running the world and a not-insignificant portion of people already live being told what to do by a computer instead of the other way around. I see this as the most likely outcome for AI if it's not already the case, barring us cracking AGI way sooner than expected.
In point of fact, I do literally believe that a great many Western environmentalists are only tooting the horn about climate change as a convenient pretext to instate global communism or something approximating it. (I think Greta Thunberg had a bit of a mask-off moment in which she more or less copped to this.) But even if that was true of 100% of them, it wouldn't change the factual question of whether or not the earth is actually getting hotter because of human activity. "You're only sounding the alarm as a pretext to instate global communism" could be literally true of the entire movement's motivations, and yet completely irrelevant for the narrow question of fact under discussion.
I would like to see someone do some kind of analysis of whether writing style is genetic. How you would adjust for the confounder of culture, I have no idea.
Pretty predictable overall, but it's fascinating how things like Palestinians having shitty leadership and Lebanese killing Palestinians are still Israel's fault, because what isn't? Israel is apparently expected by default to hold such sky-high moral standards that it would feel obligated to protect the very organization that declared itself Israel's mortal enemy and is actively waging war against it, or to conduct a policy that benefits the enemy's leadership. It's a bit like declaring Hitler's suicide an Allied war crime because the Allies didn't work hard enough to prevent it.
WTF are lancers gonna do to bolt-action riflemen, let alone machine guns?
Cavalry did pretty well in the ACW, and in the Franco-Prussian war.
I find the tactical insanity of WW1 pretty understandable if you remember that bolt-action rifles and light machine guns were incremental changes. It was hard to foresee that just making everything slightly faster and more portable would make most doctrine obsolete.
There was a tendency for cavalry to get lighter and serve more as scouts than shock forces. But the total obsolescence of the concept was hard to fathom.
Moreover, outside of the Western Front, cavalry did an outstanding job even in WW1. On both the Eastern Front and in the Balkans, with their fast-moving fronts, the advantages of mobility started to outweigh firepower.
It's only in WW2, with the infamous Polish failures, that cavalry was rendered soundly obsolete. And only really because motorized units took over the role.
It's far more understandable to me than some air forces deciding to stick to scouting and refusing to entertain combat flight despite obvious trends. But then again, the future of aviation was as mysterious at the time as that of AI is today.
The fact you've never been tempted to use the 'stochastic parrot' idea just means you haven't dealt with the specific kind of frustration I'm talking about.
Yeah, the 'fallible but super-intelligent human' is my first shortcut too, but it actually contributes to the failure mode the stochastic parrot concept helps alleviate. The concept is useful for those who reply 'Yeah, but when I tell a human they're being an idiot, they change their approach.' For those who want to know why it can't consistently generate good comedy or poetry. For people who don't understand that rewording the prompt can drastically change the response, or those who don't understand, or feel bad about, regenerating or ignoring the parts of a response they don't care about, like follow-up questions.
In those cases, the stochastic parrot is a more useful model than the fallible human. It helps them understand they're not talking to a who, but interacting with a what. It explains the lack of genuine consciousness, which is the part many non-savvy users get stuck on. Rattling off a bunch of info about context windows and temperature is worthless, but saying "it's a stochastic parrot" to themselves helps them quickly stop identifying it as conscious. Claiming it 'harms more than it helps' seems more focused on protecting the public image of LLMs than on actually helping frustrated users. Not every explanation has to be a marketing pitch.
What do you mean, "that prison"? I'm certain they're all alike, because they all have the same incentives.
Epstein is like the 1 in 10,000,000 prisoner that society didn't want to wind up dead in his prison cell with no surveillance.
I don't think even this is the right framing. It's not a question of a tiny population of nutjobs of one stripe or another that we hope to disincentivize. We know from history that a large proportion of human beings will kill in cold blood, or at least approve of it, if conditioned and pressured to do so. Apologia and celebration of this killing will only shift the margin of how rabid an anti-corporation true believer needs to be to undertake such an action.
Here’s how I understand this tic to have originated (but do take this with a grain of salt). In elementary school grammar classes, students are admonished for saying things like “Me and Tim played baseball yesterday”. (The error in that sentence is that “me” is one of the subjects of the sentence, so it should be “I” instead.) The problem is, when the teachers correct their students, they do so by saying “it’s not ‘me and Tim’, but ‘Tim and I.’” Of course, most kindergarten teachers don’t know what a noun case is, so they sure as hell aren’t going to be able to explain to their students the precise nature of the error. Thus, many native English speakers grow up with this strong sense that “[person] and I” is correct and anything else is wrong. I know that at least for me, even a perfectly grammatical sentence like “I and Tim went to play baseball” feels wrong somehow, presumably due to this childhood conditioning. So if this theory is true, then bizarre locutions like “Elon and I’s” are examples born from hypercorrection based on this conditioning. (And hey, it turns out that the very first English example provided on that Wikipedia page is precisely this one; I actually didn’t know that when I was writing this.)
I would say the advantage of ChatGPT over a traditional translator is that you can interrogate it. For example, say you get an email from your boss you do not understand. You can ask it not only for a translation but also about subtext or tone, even to rephrase the translation in a way that preserves meaning. It seems to me that if you take advantage of this even 20% of the time, you come out ahead, because despite obvious model weaknesses and potential errors, direct translation has its own misunderstandings too (which seem worse).
Ditto for the composition side of things. You can do stuff like compose a foreign-language email and then have it back-translate it to you as a way of double-checking that you said what you intended to say. Sure, AI might worsen the writing, but at least you can verify that the message says what you meant it to.
Alas, most humans lack this kind of imagination, but optimistically we can teach people how to get more out of their LLM usage.
All that said, the original post as I understood it was more about using LLMs as a language-learning tool, and I think there they have a potential point. The biggest counterpoint also comes from interactivity: ever tried using the advanced voice mode? It's pretty neat, and it allows verbal practice in a safe, no-judgement, infinite-time environment, which is quite literally the biggest obstacle to language learning that 95% of people face! So if the AI sometimes misleads when correcting a passage, I think that's a worthwhile tradeoff for the extra practice time, considering how frequently language learners basically stop learning, or give up, at a certain point.
Just as a linguistic aside: a "bold-faced" lie is not incorrect according to some dictionaries, but it is probably the wrong word. The original is "bald-faced" (or the less common "barefaced"), meaning unconcealed, as opposed to "bold-faced", which meant impudent. The two have been confused for long enough that most won't call it wrong, but IMO it properly still is.
It's only impressive if the base rate is cameras have 99.999% uptime and guards never ever sleep through shifts.
No, at those odds it's not "impressive"; it actually starts leaning towards "unlimited". Even if the chance of each failure (either camera, or either guard) is as high as 50%, you end up with a ~93% chance of some part of the system catching the incident.
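For the record, here's the arithmetic behind that figure (a minimal sketch, assuming the two cameras and two guards fail independently):

```python
# Chance that at least one of four independent safeguards
# (two cameras, two guards) catches the incident, assuming
# each fails with a generous 50% probability.
p_fail = 0.5
p_all_fail = p_fail ** 4   # all four fail at once: 0.0625
p_caught = 1 - p_all_fail  # at least one safeguard works: 0.9375
print(f"{p_caught:.2%}")   # -> 93.75%, the ~93% figure above
```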
Now, you can argue that a 7% chance is nothing to scoff at, but aren't conspiracy theorists the ones being accused of picking the less likely option for ideological reasons?
Also, if the rate of these incidents is so high in that prison, at some point you have to start questioning the decision to send Epstein there in the first place.
It's not unlimited, but two cameras going out, and two guards taking a nap simultaneously, is pretty impressive, no?
It's only impressive if the base rate is cameras have 99.999% uptime and guards never ever sleep through shifts.
What if cameras being in a general state of disrepair and guards routinely falsifying records because "they didn't see nothing" is the norm, and you generally never know, because this huge gap in accountability usually never counts against the corrections officers and in fact works to their benefit?
Irrespective of whether that's true, there is no explicit intent by Congress here.
There is not some kind of magic escape hatch from constitutional law that is invoked by putatively combating racism. If anything, I would have expected the Biden DOJ to put forward that kind of wonky theory (e.g. in SFFA), not the Trump one.
I'm absolutely in favor of doing things and not just being. It's more about a shift in mentality. I think beating yourself up about not getting things done is long-term harmful. A real break would maybe get you out of the cycle.
Maybe I should've added that one has to try to adopt a new mentality: "Getting Things Done by Being Friendly to Yourself", instead of taking the self-criticism angle and then, when regular self-criticism doesn't work, just dialing it up to 11, because that's the whip that has always worked. "Self-love" and all those terms are not ones a certain kind of person will accept, but maybe "being friendly to oneself" works for you.
I get shit done after this ongoing transformation; I worked on my most ambitious software architecture so far last quarter, and I use the whip less than 20% of the time.