
Showing 25 of 9734 results for

domain:greyenlightenment.com

Do you mean everyone trying to implement age verification on their platforms and in their countries all of a sudden?

I don't think so. Our original train was supposed to get us to Dover directly. This one took a parallel route, stopping at Ramsgate. Wasn't too hard to go to Dover from there, thankfully.

Our train was a combined Dover/Ramsgate train that separated in Faversham. The Dover half was the one that stopped at Canterbury.

Jesus. I would have turned back. Anything north of 27° rules out a hike for me. The sun made a surprise appearance on this trip, and it actually made the whole experience way more pleasant.

It was another loop trail, we were halfway around it and it was uphill both ways. Turning back wouldn't have helped. Can't say I enjoyed it myself, but now I know why cross-country skiers are all asthmatics. No exercise-induced shortness of breath for me, thank you, montelukast.

Software was better when making it was low status and not super lucrative.

I agree with most of what you said, though I do not think ASI or AGI or whichever term people wish to use is ever going to happen.

The bubble popping will cause a lot of pain to the world, I hope I can get some money before that happens.

It is indeed a bubble, and it indeed will pop!

However, much of modern first world labor is non-productive/bullshit jobs, and AI is going to Change Things for these people. If most of the growth is in the office-tier services industry, these stand the greatest chance of being decimated first.

It's already Changing Things for people in my industry, which deals in physical and practical reality: we are leveraging it for use cases in computer vision and smart sorting/learning algorithms, and many of the less talented devs have admitted to using it to format and/or proofread code.

What people are missing in the entire debate are a couple of details that turn out to be immensely powerful in practice:

Detail 1: It doesn't have to be smart to change the world. It already has. It can in fact be moronic and still change the world. As some doomers have pointed out, it doesn't need to be smart to kill us all.

Detail 2: Many of the movers and shakers, people with Real Money, hate their fellow man and trust them way less than even a trained orangutan. As pointed out by others already, the metric is stupid. Nobody is seriously comparing the AI to an orangutan. But even if I took the metric seriously, I have been in many conversations with these people where it becomes clear they consider others marginally less intelligent than an empty aquarium. These are the people making decisions, and in those decisions the utility of their fellow man counts for as little as they can make it.

2b: AI occupies a marvelous space in the legal world right now, where it is conceivably black-box enough for normies not to understand it or what's in it, and the complexity may grow as a lot of research is currently leveraged towards using AI tools to build better AI. This is magical for companies that wish to absolve themselves of legal responsibility - they didn't screw up, the AI did! Talk to our legal team, which we replaced with one lawyer and a bunch of AI tools.

Detail 3: Software scales infinitely and is eating the world despite being tragically, hopelessly, pathetically shit. The fact that it's shit hasn't stopped it from running the world and a not-insignificant portion of people already live being told what to do by a computer instead of the other way around. I see this as the most likely outcome for AI if it's not already the case, barring us cracking AGI way sooner than expected.

In point of fact, I do literally believe that a great many Western environmentalists are only tooting the horn as a convenient pretext to instate global communism or something approximating it. (I think Greta Thunberg had a bit of a mask-off moment in which she more or less copped to this.) But even if that was true of 100% of them, it wouldn't change the factual question of whether or not the earth is actually getting hotter because of human activity. "You're only sounding the alarm as a pretext to instate global communism" could be literally true of the entire movement's motivations, and yet completely irrelevant for the narrow question of fact under discussion.

I would like to see someone do some kind of analysis of whether writing style is genetic. How you would adjust for the confounder of culture, I have no idea.

Pretty predictable overall, but it's fascinating how things like the Palestinians having shitty leadership and the Lebanese killing Palestinians are still Israel's fault, because what isn't? It looks like Israel is by default expected to have such sky-high moral standards that it would feel obligated to protect the very organization that declared itself Israel's mortal enemy and is conducting active warfare against it, or to conduct a policy beneficial to the enemy's leadership. It's a bit like declaring Hitler's suicide an Allied war crime because the Allies didn't work hard enough to prevent it.

WTF are lancers gonna do to bolt-action riflemen, let alone machine guns?

Cavalry did pretty well in the ACW, and in the Franco-Prussian war.

I find the tactical insanity of WW1 pretty understandable if you remember that bolt action rifles and light machine guns are incremental changes. It was hard to foresee that just making everything slightly faster and more portable would make most doctrine obsolete.

There was a tendency for cavalry to get lighter and serve more as scouts than as shock forces. But the total obsolescence of the concept was hard to fathom.

Moreover, outside of the Western Front, cavalry did an outstanding job even in WW1. On both the Eastern Front and in the Balkans, with their fast-moving fronts, the advantages of mobility started to outweigh firepower.

It's only in WW2, with the infamous Polish failures, that cavalry was rendered soundly obsolete. And only really because motorized units took over the role.

It's far more understandable to me than some air forces deciding to stick to scouting and refusing to entertain combat flight despite obvious trends. But then again, at the time the future of aviation was as mysterious as that of AI is today.

The fact you've never been tempted to use the 'stochastic parrot' idea just means you haven't dealt with the specific kind of frustration I'm talking about.

Yeah, the 'fallible but superintelligent human' is my first shortcut too, but it actually contributes to the failure mode the stochastic-parrot concept helps alleviate. The concept is useful for those who reply, 'Yeah, but when I tell a human they're being an idiot, they change their approach.' For those who want to know why it can't consistently generate good comedy or poetry. For people who don't understand that rewording the prompt can drastically change the response, or who don't understand, or feel bad about, regenerating a response or ignoring the parts of it they don't care about, like follow-up questions.

In those cases, the stochastic parrot is a more useful model than the fallible human. It helps them understand they're not talking to a who, but interacting with a what. It explains the lack of genuine consciousness, which is the part many non-savvy users get stuck on. Rattling off a bunch of info about context windows and temperature is worthless, but saying "it's a stochastic parrot" to themselves helps them quickly stop identifying it as conscious. Claiming it 'harms more than it helps' seems more focused on protecting the public image of LLMs than on actually helping frustrated users. Not every explanation has to be a marketing pitch.

What do you mean, "that prison"? I'm certain they're all alike, because they all have the same incentives.

Epstein is like the 1 in 10,000,000 prisoner that society didn't want to wind up dead in his prison cell with no surveillance.

I don't think even this is the right framing. It's not a question of a tiny population of nutjobs of one stripe or another that we hope to disincentivize. We know from history that a large proportion of human beings will kill in cold blood, or at least approve of it, if conditioned and pressured to do so. Apologia and celebration of this killing will only shift the margin of how rabid an anti-corporation true believer needs to be to undertake such an action.

Here’s how I understand this tic to have originated (but do take this with a grain of salt). In elementary school grammar classes, students are admonished for saying things like “Me and Tim played baseball yesterday”. (The error in that sentence is that “me” is one of the subjects of the sentence, so it should be “I” instead.) The problem is, when the teachers correct their students, they do so by saying “it’s not ‘me and Tim’, but ‘Tim and I.’” Of course, most kindergarten teachers don’t know what a noun case is, so they sure as hell aren’t going to be able to explain to their students the precise nature of the error. Thus, many native English speakers grow up with this strong sense that “[person] and I” is correct and anything else is wrong. I know that at least for me, even a perfectly grammatical sentence like “I and Tim went to play baseball” feels wrong somehow, presumably due to this childhood conditioning. So if this theory is true, then bizarre locutions like “Elon and I’s” are examples born from hypercorrection based on this conditioning. (And hey, it turns out that the very first English example provided on that Wikipedia page is precisely this one; I actually didn’t know that when I was writing this.)

I would say the advantage of ChatGPT over a traditional translator is that you can interrogate it. For example, say you get an email from your boss you do not understand. You can ask it not only for a translation but also about subtext or tone, even to rephrase the translation in a way that preserves meaning. It seems to me that if you take advantage of this even 20% of the time, you come out ahead, because despite obvious model weaknesses and potential errors, direct translation has its own misunderstandings too (which seem worse).

Ditto for the composition side of things. You can do stuff like compose a foreign-language email and then have it back-translate it to you as a way of double-checking that you said what you intended to say. Sure, AI might worsen the writing, but at least you can verify that the meaning you intended comes through.

Alas, most humans lack this kind of imagination, but optimistically we can teach people how to get more out of their LLM usage.

All that said, the original post as I understood it was more about using LLMs as a language-learning tool, and I think there they have a potential point. The biggest counterpoint also comes from interactivity: ever tried using the advanced voice mode? It's pretty neat, and allows verbal practice in a safe, no-judgement, infinite-time environment, which is quite literally the biggest obstacle to language learning 95% of people face! So if the AI sometimes misleads in correcting a passage, I think it's a worthwhile tradeoff for the extra practice time, considering how frequently language learners basically stop learning, or give up learning, at a certain point.

Just as a linguistic aside: "bold-faced" lie is not incorrect according to some dictionaries, but it is probably the wrong word. The original is "bald-faced" (or the less common "barefaced"), meaning unconcealed, as opposed to "bold-faced", which meant impudent. The two have been confused for long enough that most won't call it wrong, but IMO it properly still is.

It's only impressive if the base rate is cameras have 99.999% uptime and guards never ever sleep through shifts.

No, at those odds it's not "impressive", it actually starts leaning towards "unlimited". Even if the chance of each failure (either camera or either guard) is as high as 50%, you end up with a ~94% chance of some part of the system catching the incident.

Now, you can argue that a ~6% chance is nothing to scoff at, but aren't conspiracy theorists the ones being accused of picking the less likely option for ideological reasons?
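For anyone who wants to check that figure, the arithmetic is a toy independence calculation: the incident goes unrecorded only if every safeguard fails at once. (This assumes the two cameras and two guards fail independently, each with the same probability, which is of course the generous simplification being made above.)

```python
# Toy model of the independence argument: with n independent safeguards
# each failing with probability p_fail, the incident is caught unless
# ALL of them fail simultaneously.
def catch_probability(p_fail: float, n_safeguards: int = 4) -> float:
    """Chance that at least one of n independent safeguards works."""
    return 1 - p_fail ** n_safeguards

# Even with a generous 50% failure rate per camera/guard:
print(catch_probability(0.5))  # 1 - 0.5**4 = 0.9375
```

Dropping the per-safeguard failure rate to anything remotely realistic pushes the catch probability far closer to certainty, which is the commenter's point.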

Also if the rate of these incidents is so high in that prison, at some point you have to start questioning the decision of sending Epstein there to begin with.

It's not unlimited, but two cameras going out, and two guards taking a nap simultaneously, is pretty impressive, no?

It's only impressive if the base rate is cameras have 99.999% uptime and guards never ever sleep through shifts.

What if cameras being in a general state of disrepair, and guards routinely falsifying records because "they didn't see nothing," is the norm, and you generally never know because this huge gap in accountability usually never counts against, and in fact is to the benefit of, the corrections officers?

Irrespective of whether that's true, there is no explicit intent by Congress here.

There is not some kind of magic escape hatch from constitutional law that is invoked by putatively combating racism. If anything, I would have expected the Biden DOJ to put forward that kind of wonky theory (e.g. in SFFA), not the Trump one.

Right, well, the FBI stats are not the BLS stats.

The BLS stats have been generally correct (and getting better) and, more importantly, have erred both upwards and downwards approximately equally.

It's all unfalsifiable. You see exactly this same rejoinder to anyone who does a deep dive on JFK, or 9/11, or Elvis.

Or Imane Khelif, oh wait, they were right about that one, let's pretend it never happened. Oh, how about the Lab Leak? What, it's gaining mainstream acceptance? Quick, pretend we were only deboonking the bio-weapon people!

There's always a They with unlimited evidence-manipulating powers.

It's not unlimited, but two cameras going out, and two guards taking a nap simultaneously, is pretty impressive, no?

In the eight decades since Lewis coined the term, the popularity of this fallacious argumentative strategy shows no signs of abating, and is routinely employed by people at every point on the political spectrum against everyone else. You’ll have evolutionists claiming that the only reason people endorse young-Earth creationism is because the idea of humans evolving from animals makes them uncomfortable; creationists claiming that the only reason evolutionists endorse evolution is because they’ve fallen for the epistemic trap of Scientism™ and can’t accept that not everything can be deduced from observation alone; climate-change deniers claiming that the only reason environmentalists claim that climate change is happening is because they want to instate global communism; environmentalists claiming that the only reason people deny that climate change is happening is because they’re shills for petrochemical companies. And of course, identity politics of all stripes (in particular standpoint epistemology and other ways of knowing) is Bulverism with a V8 engine: is there any debate strategy less productive than “you’re only saying that because you’re a privileged cishet white male”? It’s all wonderfully amusing — what could be more fun than confecting psychological just-so stories about your ideological opponents in order to insult them with a thin veneer of cod-academic therapyspeak?

Your post is overall good, but I think you take this part too far. There are questions, indeed including on issues you've listed here, where a genuine issue of material fact exists, and is not and likely cannot be resolved in the near term.

My example would be climate change. I have only slight confidence, approximately 65%, that the climate is warming faster than it would without human CO2 emissions. This is hardly the sort of confidence level one should have when deciding major issues. It gets even lower when I ask the question, "assuming it is true that the climate is warming because of human CO2 emissions, is that bad?" On even that question I am at 50% max; most credible people I have looked at seem to indicate slight warming is probably good for the earth and humanity. And then there is the next question of whether the policy proposed by this politician/advocate will meaningfully change the outcome, and there I estimate abysmal results in the 1-5% range.

So I am left with a confidence chart (when being favorable to environmentalists):

A) Global warming is true and humans contribute: 70%
B) That is bad: 50%
C) The proposed policies can fix it: 5%

For a composite confidence level of 1.75% that environmentalist proposals will solve the problem they are purporting to solve.
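Multiplying the chain out confirms the figure (a quick check, treating the three estimates as conditionally independent, which is the framing of the comment):

```python
# Composite confidence: A × B × C from the chart above.
p_warming = 0.70  # A) warming is real and humans contribute (favorable figure)
p_bad     = 0.50  # B) given A, the warming is net bad
p_policy  = 0.05  # C) given A and B, the proposed policies can fix it

composite = p_warming * p_bad * p_policy
print(f"{composite:.2%}")  # 1.75%
```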

And yet, environmentalists act as if they have 100% confidence, and they commonly reject market solutions in favor of central planning. The logical deduction from this pattern of behavior is that the central planning is the goal, and the global warming is the excuse. It is not bad argumentation to say to the environmentalist, "you are just a socialist who wants to control the economy, and are using CO2 as an excuse," because a principled environmentalist would never bother raising a finger in America. They'd go to India and chain themselves to a river barge dumping plastic, or go to Africa and spay and neuter humans over there. If you are trying to mess with Americans' cars, heat, and AC, it's because you don't like that Americans have those things, because other environmental concerns have been much more pressing for several decades at this point, and that isn't likely to change.

Hey, I wanted to say thanks under our other Epstein conversation, but I'll do so here. I appreciate your correction on Acosta's alleged statement in particular, and your taking the time to write a response in general.

I don't know when I'll get the time to read through this one, but I'll try to go through the whole thing as well.

Excellent deep dive. Probably mostly wasted, alas. I could have predicted the weak conspiratorial rebuttals. "Well...I just don't believe it. Because we know They're lying!" It's all unfalsifiable. You see exactly this same rejoinder to anyone who does a deep dive on JFK, or 9/11, or Elvis. "Okay, yeah, that's what the 'official reports' and the 'evidence' says, but of course we can't trust it."

There's always a They with unlimited evidence-manipulating powers.

Every narrative will have holes you can poke in it with enough motivated reasoning. Some people can cast doubt on the color of the sky. Once they become attached to a theory that properly identifies a nefarious They, nothing is going to convince them that reality is actually mostly tawdry and just what the evidence says it is.

I, too, found this article extremely annoying. This guy is for real accusing Scott Alexander, of all people, of not laying out his opinions on and justifications for AI acceleration in enough detail? Could he maybe have tried reading any of his writing on the topic?

My wife's father died of cancer. She tends to notice and react more strongly to stories about cancer in fictional shows, and real cases in people around us. Our brains are not perfect logic engines. Traumatic enough events can have an outsized impact on how we judge and notice other events around us.

I generally reserve usage of the term "racist" for people who hate others because of their race. I know that is not how everyone uses the term, but I'll stick with it. If you feel no hate but treat other races differently, that is what I'd call "prejudice". I do not consider it "racist" merely to notice things about the world. I might not agree with what they have noticed, but we can definitely have a discussion about it.

I try to pick my words carefully, and I prefer words that add light, not heat, to the discussion. The term "racist" usually just adds heat. I would almost always prefer to write out the whole definition of what I mean rather than use "racist" as a shorthand. I know it's verbose; I don't care. whiningcoil responded to me and didn't seem to come away with your interpretation. So you are butting your head in and trying to make it look like I picked a fight when no one actually involved seems to have felt that way.

Patchett's an absolute blast. Hope you enjoy his books.