Yes. America was basically founded on the idea that if you take Protestant religion seriously, you're one of the no-longer-purely-Anglican sects, and if you just need to go to a Protestant church every so often to be socially respectable, that church is what would become Episcopalian.

Alexander VI, although his personal moral behavior was quite bad, probably would not make a top-ten, or even top-twenty, list of the worst popes from a doctrinal-confusion standpoint- although Francis would. Honorius I would probably go down as the worst, or perhaps the original John XXIII.

It's interesting; I generally don't have a high opinion of Paul VI's handling of the magisterium, but Humanae Vitae was legitimately surprising to everyone, including close confidants of Paul VI, and I've used that as an argument against sedevacantists and Eastern Orthodox before in defending the papacy. Unfortunately even JPII and Benedict couldn't resist drowning their clarity in argle bargle and corpo speak, but from a doctrinal perspective they're probably top fifty percent of popes at least (remember, the median pope's theological contributions round to 0; for all his questionable decisions, JPII did come in clutch on doctrine when it counted, with things like the definition of the priesthood as all-male).

I can’t tell if this is supposed to be humorous or if it’s just genuinely delusional. No one who goes to a “pointy house in the sticks” has even heard of Aella. Her fans and her haters are both among the terminally online.

Anyone want to blackpill me on why this is Bad Actually because strict liability regulatory crimes are actually a major load-bearing part of how our legal system works and without it the situation will devolve to anarchy in the streets?

Partially: I'm opposed to strict liability crimes in principle, but business regulations are the application for which they make the most sense, in that 1) if you're doing something for a commercial purpose, there's a rationale for holding you to a higher standard, and 2) perverse incentives + plausible deniability of mens rea = bad time.

I made a rather uncontroversial (if snippy) assertion that many people on the Motte hold tech jobs of the kind found in Silicon Valley. Which is something that many, many other conversations on here basically take as a matter of fact. Suddenly you and ten other people angrily surge out of the woodwork to gish-gallop me with user survey statistics and bizarre, nonsensical arguments about Mormons. And now I am uncharitably accused of “sneering Bulverism”.

I'm sure some people do this, but 99.9% of people do not.

I actually tend to agree that social justice warriors are downstream of Christianity, but I don't think this is a sufficiently nuanced portrait of what Christianity teaches. Yes, it criticizes the rich and strong, but also the lazy and the lawbreaker. The Biblical solution to lazy people who refuse to work? Let them not eat. The Biblical solution to bad people who bring destruction? A wrathful sword.

Obviously there's some debate among Christians on these topics – some would disagree with me. (And it is true that many early church fathers were very pacifistic, although they were being persecuted by their enemies and largely did not have to deal with the problems of power; it's not surprising that the emphasis of the church changed when their circumstances did.)

But I don't think that, historically, Christians were okay with executing and imprisoning criminals just because they weren't good at being Christians (although, yes, Christians are often bad at following Scripture's teachings). I think it's pretty natural to read the parts of Scripture dealing with justice and go "...yeah it's totally fine to use lawful force to suppress evil" and do it.

TL;DR: while non-pacifistic Christianity might be wrong, I don't think that it is hypocritical.

There's nothing in the church fathers, in the Didache, or in the New Testament which indicates that Christianity tends towards ethnic nationalism. I'm pretty sure Islam is similar.

While truckers who get in an at-fault accident will be immediately fired and never hired by any trucking company again, ambulance chasers don't go after them because they don't have the money to give a big payday. Trucking lawsuits usually hinge on getting a big insurance payout on the basis of 'you should be liable for hiring/overworking/undermanaging him'. There's no inherent reason a trucking company wouldn't prefer to have an ambulance chaser fighting Tesla's lawyers rather than State Farm's.

Eh, it takes time for change to percolate, and truck drivers are sufficiently selected that we can assume they're better drivers than average- the average driver, after all, includes plenty of people who insist on driving drunk/high, texting while driving, etc.

I don't follow AI especially closely, so forgive me if this is a stupid observation. But it seems like AI gets more powerful ('smarter') all the time, yet it doesn't get any more aligned. I don't mean that in an 'our societal pareidolia about racism will keep skynet at bay' way; I mean it in the sense that The Robots Are Not Alright.

Just the other day I read a news story about an AI system which had been put in charge of administering vending machines- should be a pretty simple job, anybody could figure it out. But this AI decided, with no evidence, that it was a money laundering scheme, contacted the FBI, and then shut down. There are stories like this all the time. There was the case of ChatGPT hijacking whatever conversation to talk about the immaculate conception a month or so ago. Just generally, AI is way more prone to navel-gazing, schizophrenia, weird obsessions, and simply shutting down because it can't make a decision than equivalently-smart humans are.

There's an old joke about selling common sense lessons- 'who would pay to learn something that can't be taught?... Oh.' I feel like AI is a bit like this. We don't understand the mind well enough to make it work, and we probably never will. AI can do math, yeah. It can program (I've heard rather poorly, but it still saves time overall because editing is faster?). But it remains an idiot savant, not sure what to do if its brakes go out. Yes, it'll change the economy bigtime, and lots of non-sinecure white-collar work that doesn't require any decision making or holistic thought will get automated. But it's not a global paradigm shift on the level of agriculture or mechanization.

I don’t think most of them are valuable to most people

Not vastly in a purely economic sense, but personally I think the way I interact with information, ideas and the world generally is incomparably better off for having studied history at university, in a way I doubt I could have achieved by pure dilettantism. Maybe it isn't the most rational use of national resources, but either way I think it's still one of the developed world's greatest achievements that so many people get the opportunity to have their internal world enriched forever, even if a lot of them don't take it up when they're there.

I gave it a read, and yeah, it's a pretty accurate summary. But I don't agree that the Gemini version is meaningless, and I don't think the limitations you suggest in your post would have made the Claude test better. We now have a pretty good idea of the varying levels of crutches needed to go from useless (none), to getting through some of the game, to beating the game. Now we can ask what it would take for an LLM to not be useless without human assistance.

In my mind, it basically needs to write code, because the flaws look fundamental to me given how predictable they are across LLMs and how long versions of these issues have been showing up. The LLM has to write a program to help it process the images better, do pathfinding, and store memory. In that sense it would be building something to understand game state from reading the screen, not all that differently from the current RAM checking.

It then needs some structure that vastly improves its ability to make decisions based on that state- I'd imagine multiple LLM contexts in charge of different things, with some method of hallucination testing and reduction (roughly the kind of harness sketched below).

And it has to do all this without being slow as hell, and that's the main thing I hope improved models can help with. I'd like it if any of the current Twitch tests started taking baby steps towards some of these goals now that we've gotten the crutch runs out of the way. It's odd to me that the Claude one got abandoned. It feels like this is something the researchers could be taking more seriously, and it makes me wonder if the important people in the room are actually taking steps towards AI agency or if they just assume a better model will give it to them for free.
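To make the multi-context idea concrete, here's a minimal sketch of the kind of harness I'm imagining- entirely hypothetical names, nothing from any existing benchmark, and call_llm is just a stand-in for whichever model API you'd actually wire up:

```python
# Hypothetical sketch only: a planner/critic split with external memory and a
# pathfinding helper, so the model isn't doing everything inside one context.
from collections import deque

def call_llm(system: str, prompt: str) -> str:
    """Stand-in for a real model call (Claude, Gemini, o3, whatever)."""
    raise NotImplementedError

class MemoryStore:
    """Persistent notes the model can re-read, instead of trusting its context window."""
    def __init__(self):
        self.notes = []
    def add(self, note):
        self.notes.append(note)
    def recall(self, limit=20):
        return "\n".join(self.notes[-limit:])

def find_path(grid, start, goal):
    """Plain BFS over a walkability grid (0 = open, 1 = wall), so the model never
    has to 'reason' its way around obstacles tile by tile."""
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return []

def take_turn(game_state: str, memory: MemoryStore) -> str:
    """Two separate LLM contexts: one proposes an action, another checks it against
    stored memory before it gets executed (a crude form of hallucination testing)."""
    proposal = call_llm(
        system="You are the planner. Output one concrete next action.",
        prompt=f"Game state:\n{game_state}\n\nNotes so far:\n{memory.recall()}",
    )
    verdict = call_llm(
        system="You are the critic. Reply APPROVE or REJECT with a reason.",
        prompt=f"Proposed action: {proposal}\n\nNotes so far:\n{memory.recall()}",
    )
    if verdict.startswith("REJECT"):
        return "wait"  # safe no-op instead of acting on a flagged plan
    memory.add(f"Did: {proposal}")
    return proposal
```

The specifics don't matter; the point is that the memory, the pathfinding, and the cross-checking live outside any single context.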

I mean I think the rub is that the alignment problem is actually two problems.

First, can an AI that is an agent in its own right be corralled in such a way that it’s not a threat to humans? I think it’s plausible. If you put in things that force it to respect human rights, dignity, and safety, and you could prevent the AI from getting rid of those restrictions, sure, it makes sense.

Yet the second problem is the specific goals the AI itself is designed for. If I have a machine to plan my wars, it has to be smart; it has to be a true AGI with goals. It does not, however, have to care about human lives. In fact, such an AI works better without that. And that’s assuming an ethical group of people. Give Pinochet an AGI 500 times smarter than a human and it will absolutely harm humans in service of the directive of keeping Pinochet in power.

what demographic a complete collapse of that as a job will disproportionately affect

Women?

I'm not being obtuse. That's the best I can come up with. What's wrong with a worse job market for women with no specific skills? It's probably better for society on the whole.

To quote Hawkeye/Ronin: "Don't give me hope."

Look, if I were suddenly able to rewrite federal laws- universities accepting federal funds (including for loans) would be allowed to house students in conditions no better than those junior enlisted in the Army experience (food would have to be absolutely identical, down to coming in boxes labeled 'not suitable for prison use'). All classes with writing components would require in-class handwritten essays, and if the professor can't read your writing, too damn bad (I say this as someone who would, probably, have failed out of middle school without accommodations for my terrible handwriting). Federal student loans would have to go through underwriting, and the underwriter could cut you off at any time. Everyone involved would be exempted from federal antidiscrimination laws, and all jobs connected to the diversity-industrial complex would be ended, with the people involved permanently barred from any work in the education system other than janitorial.

But that's not happening. Any solution which doesn't allow the vast majority of women to get college degrees in partial literacy and not fucking up too too bad isn't going to happen. This is a dumb societal waste of resources but it's not going anywhere- and I don't see how letting them have artificial stupidity write their essays in whatever retarded nonsense we're pretending is a course of study for them hurts anyone but themselves. Indeed, it's probably a societal net positive to have them waste less effort on this crap.

By paying for ChatGPT Plus, you have access to o3, which is the best model in OAI's stable, and a cutting edge LLM.

That being said, it likely isn't the best model out there, as Gemini 2.5 Pro is equivalent or slightly better. I don't think there would be a noticeable difference in the use cases you mentioned. However, you can use it for free, and without limits too, on Google AI Studio. No catch, beyond the not-worth-worrying-about fact that they'll use your chats as training data. That would save you $20/month, but beyond that you're doing things fine.

https://aistudio.google.com/prompts/new_chat

All you need to do is link your Google account and you're set.
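And if you ever want to script it rather than click around the web UI, AI Studio will also hand you an API key on the same free tier. A minimal sketch, assuming the google-generativeai Python package is installed- the model identifier below is my guess, swap in whatever AI Studio currently lists:

```python
# Rough sketch: calling Gemini from Python with an AI Studio API key.
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_KEY")       # key generated in AI Studio
model = genai.GenerativeModel("gemini-2.5-pro")     # assumed model identifier
response = model.generate_content("Summarize this paragraph in one sentence: ...")
print(response.text)
```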

Waymo has a lot of data, and claims a 60-80% reduction in accidents per mile for self-driving cars. You should take it with a grain of salt, of course, but I think there are people holding them to a decent reporting standard. The real point is that even being 5x safer might not be enough for the public. Same with having an AI parse regulations/laws...
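(To put numbers on that- my arithmetic, not Waymo's: a fractional reduction r in accidents per mile is a 1/(1-r) safety factor, so the claimed 60-80% maps onto roughly 2.5x to 5x, which is where the "5x safer" framing comes from.)

```latex
% Converting a claimed accident-rate reduction r into a "times safer" factor:
\[
  \text{factor} = \frac{1}{1-r}, \qquad
  \frac{1}{1-0.60} = 2.5\times, \qquad
  \frac{1}{1-0.80} = 5\times
\]
```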

I’ve always found it amazing just how out of touch the intellectuals in university are about what their institution actually means for students. To be blunt, college hasn’t been about education for a very long time, and it strikes me as hilarious that anyone who attended one writes these sorts of handwringing articles bemoaning the decline of education in college. 99% of students who were ever in university (perhaps with the exception of the leisure class) have never gone to college seeking education for the sake of education. For most of us, it’s about getting job skills, getting a diploma, padding a resume, etc. If learning happens on the side, fine, but most people are looking at college as a diploma that will hopefully unlock the gates to a good-paying job.

In the 1990s kids were caught cheating, and many before computers outsourced those slop essays to grad students or upperclassmen. Every kid knows how to find old exams and cajole the exam topics out of the TA. Which is to say, except for this being done with LLM bots, it’s not even unusual. And civilization has not fallen because students cheat on tests. Mostly because of what the students are cheating on: slop writing assignments in non-major classes, generally covering topics that most people would only ever use on Jeopardy- it doesn’t matter whether they know or master the material. History, sociology, psychology, X studies, and philosophy can certainly be interesting classes. But I don’t think most of them are valuable to most people, so again, the cheating not only isn’t harming them, it’s beneficial, both because they’re saving time so they can focus on the courses that matter and because they’re getting hands-on experience with a technology that will be more important to their future than whatever essay they’re not writing on their own.

Of course the professors of these courses tend to have exaggerated notions of their importance, and the importance of the subject matter they are teaching, not just to the current crop of twenty-year-olds who are forced into their classrooms by the college itself, but to the world at large. I enjoy philosophy and history. I like reading about them, thinking about them, and so on. But I also understand that unless you’re going to work in a university teaching the subject to students and writing research papers about it, it’s not going to be valuable for the students. They love to bemoan the decline of students- that they don’t read the material, or they use chatbots, or they scroll during class time. But they don’t ever ask why it’s happening to them and not in engineering or CS classes.

It would be pretty funny if this forum contained two New Jerseyans who could serve as references for you. (Not that it would help solve the other half of the problem.)

The things Mormons think are not the same as what Christians think, though. This isn’t a case of, like, Protestants disliking the pope but still having more or less the same core ideology, or Eastern Orthodox Christians not agreeing on “the filioque” or whatever it’s called. Mormons think god is an actual physical being with skin and bones that you could touch with your hands, from near the planet Kolob, and that there were major mass civilizations in North America, where most of the mythology comes from. That’s not me being hyperbolic to make a point, that’s literally what they actually believe. It has almost nothing to do with Christianity. Muslims have more in common with Christians than Mormons do. “Unitarian Universalists” have more in common, as far as beliefs are concerned at least, with Christians than Mormons do. When Mormons talk about “god” they aren’t talking about a mystical, unseeable, unknowable entity like Christians are; they’re talking about a physical person who traveled here from another planet: https://en.wikipedia.org/wiki/Kolob

https://en.wikipedia.org/wiki/God_in_Mormonism

The whole reason I’m probably being so annoying in this thread is that I had always heard of Mormons, had some ex Mormon friends, live around Mormons, and had honestly no idea what they actually thought or taught each other. Looking into their beliefs was a WILD experience to me because of how insane it all is. It’s a group of people, who are rapidly growing, who have made a religion out of what essentially comes down to worship of actual fucking space aliens. MAJOR WTF experience learning this.

Here’s a hymn about traveling to kolob: https://www.churchofjesuschrist.org/media/music/songs/if-you-could-hie-to-kolob?lang=eng

writing indigenous studies slop essays

If you are at an elite-ish university like Columbia and you are writing 'slop' essays, that is almost certainly entirely your own fault, or at least a failure of your own imagination. Even in the most modish areas, the questions they are grappling with are almost always interesting and important, even if one disagrees with the way those questions are presented and the assumptions within them (incidentally, there is nothing examiners love more, no matter their outlook, than answers which 'interrogate' the question set). I doubt there is a single humanities essay, coursework, or examination question at Columbia with which an intelligent and engaged student could not engage in an enriching and interesting way.

Honestly, I think the article does itself a disservice by not breaking the problem down into the two major but separate issues, detailed below. Instead it bounces between the two in an effort to provide an engaging article, but it's very important to realize that these two problems are largely separable problems. They both involve AI, but that's the extent of the overlap.

Problem One: Scientific research clearly indicates that the difficulty of, and engagement with, a task is directly proportional to learning. The neuroscience points out that different parts of the brain are activated when asked to perform "recall" rather than mere "recognition." Unfortunately, many students are unable to recognize the difference! Recall is something like "tell me something about this," where you work from scratch; recognition might be looking at your notes or a nice summary and going "oh yeah, that makes sense," or answering a multiple-choice question where you have plenty of cues to work with. Some have even argued that it's possible to create in-class notes that are too good at their purpose, thus "offloading" the work to an external knowledge storage device, in a sense. The key point, however, is this: not only is recall far more potent than recognition in terms of how likely the information is to make it into long-term memory in the first place, but the more connections made during the learning process, the more likely the brain is to retrieve that information from long-term memory later.

ChatGPT, in its most common use case, entirely "short-circuits" this process, depriving a student of forming connections and developing the kind of "base knowledge" that could be helpful on less foundational topics later. This does not necessarily have to be the case- a good prompter might use ChatGPT to self-quiz, ask smart follow-up questions, or get deeper explanations that trigger more connections (ignoring hallucinations for now). But I think this kind of advanced usage is a small minority of college users. In short, this is the most serious problem for AI in college.

Problem Two: How important are essays, anyway? We can't really escape the classic "calculator problem": remember plaintively asking your math teachers why you needed to learn this if a calculator or graphing software could do it just fine? Obviously that's a complicated question, and this one is too; a certain level of familiarity with numbers and how they work is critical if you go into any kind of later applied math, and not knowing your times tables can cripple your ability to engage with algebra, but frankly there were absolutely some questions that were designed to be deliberately difficult rather than to emulate any kind of real-world situation. So, essays. What good is an essay? Honestly, I think the evidence has always been a little hand-wavy and weak for essays. Not only did virtually all humanities professors go way overboard on being strict about formatting in a misguided attempt to help students (I've seen some horror stories where well-written essays get absolutely demolished due to stupid rules like "you must exactly rephrase your thesis at the start of the conclusion"), but it's hard to see whether the act of writing essays noticeably improves vague notions like "thinking critically." Now, I might be behind the times on this particular area of research (if it even meaningfully exists), but it has always seemed to me that essays were a crude attempt at prompting students to do plenty of recall via independent research and synthesis, thus increasing learning. But that was always an artifact of how difficult the task of assembling an essay from scratch is, something clearly no longer difficult with AI.

Thus, the essay must die. Perhaps professors should ask for a wider variety of writing formats, more applicable to life. Perhaps the standards should shift to the end result of the writing- is it enjoyable to read, factual, and the right length/complexity? Perhaps live or oral assessments should be more prominent. Or maybe professors should focus on teaching smaller and more broadly useful tips about the writing itself, or even consider teaching how best to prompt an AI for assembling a piece of writing. Is there any evidence that writing essays actually increases the capacity or ability to wield "critical thought"? I say no; if you want to teach critical thinking, you might as well attempt to do so directly and not default to weak proxies like essays.

Literally every single example is students automating busy work which should cost any 120+ IQ individual little brain power but lots of time.

The credentialist thesis is that the function of college in society is to demonstrate a person's ability to persevere through boring work and to run to completion a program that requires multiple years of effort. Both of which are important capacities for an employee.

I recall that psychometrics can't find a way to measure someone's industriousness and ability to persevere through hard work except by asking them, or by actually observing their behavior over a long period of time. We don't have a "hard-worker" test the way we have an IQ test. So college operates as the best thing we have to attest to a person's capacity for sustained effort, and it throws in an IQ-loaded element and the chance for young people with little experience to make connections with people who have lots of knowledge or experience in a specific field.