hurts anyone but themselves.
Well it doesn't hurt them at all, otherwise it wouldn't be cheating. The injured parties are non-cheating competitors (although good luck finding one in these majors, methinks), and society, which is ostensibly being tricked into thinking that a slip of paper shows this woman can do a lot of mind-numbing gruntwork, when in fact she just scrolls TikTok all day.
In this scenario, if the US imposes tariffs, the target country retaliates, and then after some negotiation they settle on a rate that is higher than the prior status quo but lower than the initial tariff imposition, is that a win or a loss?
Definitely a win in my books.
This is the danger that economists like Tyler Cowen say is most pressing, i.e. not some sci-fi scenario of Terminator killing us all, but of humans using AI as a tool in malicious ways. And yeah, if we don't get to omni-capable superintelligences then I'd say that would definitely be the main concern, although I wouldn't really know how to address it. Maybe cut off datacenter access for third-world countries as part of sanctions packages? Maybe have police AIs that counter them? It's hard to say when we don't know how far AI will go.
You're broadly correct, although your terminology is a bit off. When you say "aligned", people almost always use that word to mean "it doesn't behave in a deliberately malicious way". What you're talking about is more along the lines of 'it can't stay on task', which has long been a huge concern for basic usability. People claim this is getting better (Scott's AI 2027 post is predicated on continuous growth in this regard), although Gary Marcus has concerns about this claim. From my perspective, AI is very good at coding, but you really have to break the problems down into bite-sized chunks or else it will get very confused. I'd love to have an AI that could understand an entire large codebase without a ton of drawbacks or cost, and then execute multi-step plans consistently. Maybe that's coming. In fact, if there are any further AI improvements, I'd bet that would be on the list. But it's not guaranteed yet, and I've been waiting for it for over a year now.
If a student wrote a "based" indigenous studies essay, would that help them pass the class to get the degree they're paying two hundred thousand dollars for?
Of course, there's the opportunity to write and think about things that aren't either kind of slop. But I'm very skeptical that equal standards would be applied. Though I would say it's unlikely for any student to actually flunk out of Columbia for the content of their essays (or the quality of them, or anything really).
"Bullshit jobs" and the like
Any advice on getting one?
What if your child falls asleep while smoking and the cigarette lights their jammies and they just get fucking immolated?
Nicotine is a stimulant and prudent parents make sure that, like caffeine, it is confined to the morning and well away from nap time.
Admittedly, it IS kind of wild that this is a tech where we can seriously talk about singularity and extinction as potential outcomes with actual percentage probabilities. That certainly didn't happen with the cotton gin.
Very true on that front. LLMs were pretty magical when I first tried playing with them in 2022. And honestly they're still kind of magical in some ways. I don't think I've felt that way about any other tech advancement in my life, except for maybe the internet as a whole.
I'm a lot more optimistic than you.
Any particular reason why you're optimistic? What are your priors in regards to AI?
I don't think money will save you from a government that wants you dead or destitute.
The South African government is shitty, corrupt, incompetent, and unwilling to address the needs of its white population, but the ANC does not want to kill the goose that lays the golden egg (after all, they very much want to steal that egg for themselves). The party that wants to drive out/kill/dispossess the whites is a minority party which, like most socialist parties, is most popular among college kids.
You said the Motte works in SV; that is incorrect. I haven't claimed that the Motte is full of normal Christians, just that some Christians living in flyover country are here and know about and complain about Aella.
Yes. America was basically founded on the idea that if you take protestant religion seriously, you're in one of the no-longer-purely-Anglican sects, and if you need to go to a protestant church every so often to be socially respectable, that church is what would become Episcopalian.
Alexander VI, although his personal moral behavior was quite bad, probably would not make a top-ten, or even top-twenty, list of worst popes from a doctrinal-confusion standpoint- although Francis would. Honorius I would probably go down as the worst, perhaps the original John XXIII.
It's interesting; I generally don't have a high opinion of Paul VI's handling of the magisterium, but Humanae Vitae was legitimately surprising to everyone, including close confidantes of Paul VI, and I've used that as an argument against sedevacantists and Eastern Orthodox before in defending the papacy. Unfortunately even JPII and Benedict couldn't resist drowning their clarity in argle-bargle and corpo-speak, but from a doctrinal perspective they're probably in the top fifty percent of popes at least (remember, the median pope's theological contributions round to 0; for all his questionable decisions, JPII did come in clutch on doctrine when it counted, with things like the definition of the priesthood as all-male).
I can’t tell if this is supposed to be humorous or if it’s just genuinely delusional. No one who goes to a “pointy house in the sticks” has even heard of Aella. Her fans and her haters are both among the terminally online.
Anyone want to blackpill me on why this is Bad Actually because strict liability regulatory crimes are actually a major load-bearing part of how our legal system works and without it the situation will devolve to anarchy in the streets?
Partially: I'm opposed to strict liability crimes in principle, but business regulations are the application for which they make the most sense, in that 1) if you're doing something for a commercial purpose, there's a rationale for holding you to a higher standard, and 2) perverse incentives + plausible deniability of mens rea = bad time.
I made a rather uncontroversial (if snippy) assertion that many people on the Motte hold tech jobs of the kind found in Silicon Valley. Which is something that many, many other conversations on here basically take as a matter of fact. Suddenly you and ten other people angrily surge out of the woodwork to gish-gallop me with user survey statistics and bizarre and nonsensical arguments about Mormons. And now I am uncharitably accused of “sneering Bulverism”.
I'm sure some people do this, but 99.9% of people do not.
I actually tend to agree that social justice warriors are downstream of Christianity, but I don't think this is a sufficiently nuanced portrait of what Christianity teaches. Yes, it criticizes the rich and strong, but also the lazy and the lawbreaker. The Biblical solution to lazy people who refuse to work? Let them not eat. The Biblical solution to bad people who bring destruction? A wrathful sword.
Obviously there's some debate among Christians on these topics – some would disagree with me. (And it is true that many early church fathers were very pacifistic, although they were being persecuted by their enemies and largely did not have to deal with the problems of power; it's not surprising that the emphasis of the church changed when their circumstances did.)
But I don't think, historically, Christians were okay with executing and imprisoning criminals just because they aren't good at being Christians (although, yes, Christians are often bad at following Scripture's teachings.) I think it's pretty natural to read the parts of Scripture dealing with justice and go "...yeah it's totally fine to use lawful force to suppress evil" and do it.
TL;DR: while non-pacifistic Christianity might be wrong, I don't think that it is hypocritical.
There's nothing in the church fathers, in the Didache, or in the New Testament which indicates that Christianity tends towards ethnic nationalism. I'm pretty sure Islam is similar.
While truckers who get in an at-fault accident will be immediately fired and not hired by any trucking company ever again, ambulance chasers don't go after them because they don't have the money to give a big payday. Trucking lawsuits usually hinge on getting a big insurance payout on the basis of 'you should be liable for hiring/overworking/undermanaging him'. There's no inherent reason a trucking company wouldn't prefer to have an ambulance chaser fighting Tesla's lawyers than State Farm's.
Eh, it takes time for change to percolate, and truck drivers are sufficiently selected that we can assume they're better drivers than average- the average, after all, includes lots of people who insist on driving drunk/high, texting while driving, etc.
I don't follow AI especially closely, so forgive me if this is a stupid observation. But it seems like AI gets more powerful ('smarter') all the time, yet it doesn't get any more aligned. I don't mean that in a 'our societal pareidolia about racism will keep skynet at bay' way, I mean it in the sense that The Robots Are Not Alright.
Just the other day I read a news story about an AI system which had been put in charge of administering vending machines- should be a pretty simple job, anybody could figure it out. But this AI decided, with no evidence, that it was a money laundering scheme, contacted the FBI, and then shut down. There are stories like this all the time. There was the case of ChatGPT hijacking whatever conversation to talk about the Immaculate Conception a month or so ago. Generally, AI is way more prone to navel-gazing, schizophrenia, weird obsessions, and just shutting down because it can't make a decision than equivalently-smart humans are.
There's an old joke about selling common sense lessons- 'Who would pay to learn something that can't be taught? ...Oh.' I feel like AI is a bit like this. We don't understand the mind well enough to make it work, and we probably never will. AI can do math, yeah. It can program (rather poorly, I've heard, but it still saves time overall because editing is faster?). But it remains an idiot savant, not sure what to do if its brakes go out. Yes, it'll change the economy big time, and lots of non-sinecure white-collar work that doesn't require any decision making or holistic thought will get automated. But it's not a global paradigm shift on the level of agriculture or mechanization.
I don’t think most of them are valuable to most people
Not vastly in a purely economic sense, but personally I think the way I interact with information, ideas and the world generally is incomparably better off for having studied history at university, in a way I doubt I could have achieved by pure dilettantism. Maybe it isn't the most rational use of national resources, but either way I think it's still one of the developed world's greatest achievements that so many people get the opportunity to have their internal world enriched forever, even if a lot of them don't take it up when they're there.
I gave it a read, and yeah, it's a pretty accurate summary. But I don't agree that the Gemini version is meaningless, and I don't think the limitations you suggest in your post would have made the Claude test better. We now have a pretty good idea of the varying levels of crutches needed to go from useless (none), to getting through some of the game, to beating the game. Now we can ask what it would take for an LLM not to be useless without human assistance.
In my mind, it basically needs to write code, because the flaws look fundamental to me, given how predictable they are across LLMs and how long versions of these issues have persisted. The LLM has to write a program to help it process the images better, do pathfinding, and store memory. In that sense it would be building something to understand game state, not all that different from the current RAM checking, except derived from reading the screen.
It then needs some structure that vastly improves its ability to make decisions based on that state- I'd imagine multiple LLM contexts in charge of different things, with some method of hallucination testing and reduction.
And it has to do all this without being slow as hell, and that's the main thing I think improved models can hopefully help with. I'd like it if any of the current Twitch tests started taking baby steps towards some of these goals now that we've gotten the crutch runs out of the way. It's odd to me that the Claude one got abandoned. It feels like this is something the researchers could be taking more seriously, and it makes me wonder if the important people in the room are actually taking steps towards AI agency or if they just assume a better model will give it to them for free.
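To make the above concrete, here's a minimal sketch of the kind of harness I'm imagining. It's entirely hypothetical: the names (`GameState`, `bfs_path`, `call_llm`, `step`) are mine, not anything from the actual Claude or Gemini runs, and the model calls are stubbed out. The idea is that deterministic code handles vision output, pathfinding, and memory, while a second LLM context screens the first one's claims:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class GameState:
    """State produced by screen-parsing code the LLM wrote for itself,
    rather than read out of RAM by a human-built crutch."""
    grid: list[list[int]]           # 0 = walkable tile, 1 = wall
    player: tuple[int, int]         # (row, col)
    goal: tuple[int, int]           # (row, col), e.g. the next door
    facts: set[str] = field(default_factory=set)  # things actually seen on screen

def bfs_path(state: GameState) -> list[tuple[int, int]]:
    """Deterministic pathfinding, so the model never has to 'imagine' a route."""
    frontier = deque([state.player])
    came_from = {state.player: None}
    while frontier:
        cur = frontier.popleft()
        if cur == state.goal:
            path = []
            while cur is not None:       # walk back to the start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(state.grid) and 0 <= nc < len(state.grid[0])
                    and state.grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cur
                frontier.append((nr, nc))
    return []  # goal unreachable; the planner context must pick a new one

def call_llm(role: str, prompt: str) -> str:
    """Stand-in for a real model call; each role keeps its own context."""
    return "ok: head to the goal"   # canned answer so the sketch runs

def step(state: GameState, memory: list[str]) -> list[tuple[int, int]]:
    # Context 1: a planner proposes what to do next, given persistent memory.
    plan = call_llm("planner", f"memory={memory} position={state.player}")
    # Context 2: a critic rejects plans that cite things never observed-
    # a crude form of hallucination testing.
    verdict = call_llm("critic", f"plan={plan!r} observed={state.facts}")
    if not verdict.startswith("ok"):
        memory.append(f"rejected plan: {plan}")   # remember the failure
        return []
    return bfs_path(state)  # the deterministic tool does the actual walking
```

The point of splitting things up this way is that the parts LLMs reliably botch (spatial reasoning, remembering what happened, noticing they're confabulating) get offloaded to ordinary code and a second context, so the model only has to be right about one small decision at a time.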
I mean I think the rub is that the alignment problem is actually two problems.
First, can an AI that is an agent in its own right be corralled in such a way that it’s not a threat to humans? I think it’s plausible. If you put in things that force it to respect human rights, dignity, and safety, and you could prevent the AI from getting rid of those restrictions, sure, it makes sense.
Yet the second problem is the specific goals that the AI itself is designed for. If I have a machine to plan my wars, it has to be smart; it has to be a true AGI with goals. It does not, however, have to care about human lives. In fact, such an AI works better without that. And that’s assuming an ethical group of people. Give Pinochet an AGI 500 times smarter than a human and it will absolutely harm humans in service of the directive of keeping Pinochet in power.
These people don't believe that. They're simply using a very different definition of 'education' than you are, one centering on having the appropriate credentials rather than knowing things or how to do things. This isn't totally new, either- much as grievance studies are particularly blatant, lots of psych and ed research is just polished turds too, and the people getting these degrees don't really seem to care. Like The Hitchhiker's Guide to the Galaxy says about itself: well then, reality is the one that's got it wrong.