And do you want to know another animal with a clearly non-human form of cognition? A parrot.
Touché. I walked into that one.
We are looking at this from two different angles. My angle helps people. Your angle, which seems to prioritize protecting the LLM from the 'insult' of a simple metaphor, actively harms user adoption.
Look, come on. We are literally in a thread dedicated to avoiding Bulverism. Do you honestly think I'm out here defending the honor of a piece of software? My concern is not for the LLM's public image. Sam Altman is not sending me checks. I pay for ChatGPT Plus.
I think the charitable, and correct, framing is that we are both trying to help people use these things better. We just disagree on the best way to do that. My entire point is that the "stochastic parrot" model, while it might solve the one specific problem of a user getting frustrated, ultimately creates more confusion than it solves. It's a bad mental model, and I care about users having good mental models.
You're right that a metaphor is a subway map, not a satellite image. Its value is in its simplification. But for a subway map to be useful, it has to get the basic topology right. It has to show you which stations connect. The parrot map gets the topology fundamentally wrong.
It tells you the machine mimics, and that's it. It offers zero explanation for the weird, spiky capability profile. Why can this "parrot" debug Python but not write a good joke? Why can it synthesize three different academic papers into a novel summary but fail to count the letters in a word? The parrot model just leaves you with "I guess it's a magic parrot". It doesn't give the user any levers to pull. What's the advice? "Just keep feeding the parrot crackers and hope it says something different?"
Compare that to the "fallible but brilliant intern" model. It's also a simplification, but it's a much better map. It correctly predicts the spikiness. An intern can be a world-class expert on one topic and completely sloppy with basic arithmetic. That feels right. More importantly, it gives the user an immediate, actionable strategy. What do you do with a brilliant but fallible intern? You give them very clear instructions, you provide them with all the necessary background documents, and you always, always double-check their work for anything mission-critical. That maps perfectly onto prompt engineering, RAG, and verification. It empowers the user. The parrot model just leaves them shrugging.
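If you wanted to make the intern model concrete, the three strategies it suggests map onto a prompt-assembly pattern like this (everything below is illustrative; the function names and the crude check are my own, not any particular library's API):

```python
# The "brilliant but fallible intern" strategy, as code: clear
# instructions (prompt engineering), background documents (RAG),
# and a verification pass before trusting the output.

def build_prompt(instructions, background_docs, task):
    """Assemble a prompt the way you'd brief an intern: explicit
    instructions first, then all the context they'll need."""
    context = "\n\n".join(background_docs)
    return f"{instructions}\n\nBackground:\n{context}\n\nTask: {task}"

def verify(answer, must_mention):
    """Double-check the 'intern's' work: a crude check that the
    answer actually draws on the supplied background."""
    return all(term.lower() in answer.lower() for term in must_mention)

prompt = build_prompt(
    instructions="Answer only from the background. Cite the doc you used.",
    background_docs=["Doc A: The launch is on March 3rd."],
    task="When is the launch?",
)
# A model's answer would go through verify() before being trusted:
print(verify("The launch is on March 3rd (Doc A).", ["March 3rd", "Doc A"]))
```

The point is just that the intern metaphor hands the user this whole workflow for free; the parrot metaphor hands them nothing.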
Shaming users for not having the correct mental model is precisely how we end up with people who are afraid of their tools
I'm pretty sure I haven't done that. My frustration isn't with your average user. It's with people who really should know better using the term as a thought-terminating cliché to dismiss the whole enterprise.
If my own grandmother told me she was getting frustrated because "Mr. GPT" kept forgetting what she told it yesterday, I wouldn't lecture her on stateless architecture. I'd say something like, "Think of it as having the world's worst long-term memory. It's a total genius, but you have to re-introduce yourself and explain the whole situation from scratch every single time you talk to it."
That's also a simple, not-quite-accurate metaphor. But it's a better one. It's a better map. It addresses her actual problem and gives her a practical way to think that will get her better results next time. It helps her use the tool, which is the goal I think we both agree on.
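For what it's worth, the "world's worst long-term memory" metaphor maps directly onto how these chat systems actually work: the model is stateless, and the client resends the entire conversation with every request. A minimal sketch of that pattern (`fake_model` here is a stand-in for a real completion call, not a real API):

```python
# Sketch of the stateless chat pattern: the "memory" lives entirely
# in the message list the client maintains and resends each turn.
# fake_model is a placeholder for a real completion endpoint.

def fake_model(messages):
    # A real endpoint would generate a reply from the full history;
    # here we just report how much context it was handed.
    return f"(reply based on {len(messages)} prior messages)"

def chat_turn(history, user_text):
    """Append the user's message, call the model with the WHOLE
    history, and append the reply. Nothing persists server-side."""
    history.append({"role": "user", "content": user_text})
    reply = fake_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
chat_turn(history, "Hi, I'm Grandma.")
chat_turn(history, "What's my name?")  # only answerable because history was resent
print(len(history))  # the client carried all the "memory": 4 messages
```

Forget to resend the history and "Mr. GPT" has genuinely never met you, which is exactly what Grandma was running into.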
I think @problem_redditor was referring to “other humans” as aliens.
Congratulations on not being a furry. (I also don't know what he's talking about; what're the odds there's a Kiwifarms thread about it, though, lol)
Some women not-so-deniably elicit rape 'threats' too.
This is exactly the realization I've come to. Nothing I will say will convince you to adopt my moral position because it's not a logical position to hold (like any and every moral proposition). Rather than heckle people who will not be receptive, it would be much better for vegans to strategize about practical ways to reduce average meat consumption by focusing on non-moral incentives that can actually be debated, such as removing subsidies for animal ag, encouraging the development of lab grown meat, etc.
It is. I'd prefer it was overturned and we got the First Amendment back, but that ain't going to happen, so sharpening the other edge of that blade is the next best thing.
I would argue that it is also one of the least actionable topics. On priors, I would expect that most suffering is not even in the context of predators but just animals having a long and painful death due to disease or the environment becoming unable to sustain them (e.g. starvation, rising saline concentration in a drying pond). The blind idiot god who designed them cares not for making their end painless.
However, at the moment I would rather be reborn as the median wild mammal than as the median mammal kept by humans. Fitting all the wild animals with suicide implants (or gene-editing them to that effect) is something which can wait until we have made sure that farm animals have a good life.
Uh, I hope posthumans wouldn't be aging or dying, for one. I think that's a pretty big deal; every hour I spend playing a video game becomes a painful tradeoff as my life expectancy grows ever shorter. Leaving aside everything else, it would be very nice not to have thanatochrony be an issue.
We could be much smarter, and thus able to enjoy far more complex and strategic games. I don't think someone with mental retardation would enjoy Crusader Kings or Civ, even at the baby difficulties.
We could be faster, be it mentally or when it comes to physical reflexes. That would make games that rely on that more enjoyable.
We could have immense amounts of computing power, such that the lines between virtual and real become blurry, and you could live a billion years doing whatever your heart desires, without being able to tell your experience apart from reality.
We could be more physically durable, so that Airsoft with real guns might be on the cards. We could back ourselves up to external storage, such that we could play recreational nuclear warfare with H-bombs. (Someone will make nuclear tennis from Infinite Jest into a real thing)
Is it a good life if you dedicate your life to video games? On what terms could such a life be considered good?
The answer to that is mu. If someone enjoys video games, then a good life for them involves video games. If they don't, it doesn't.
What is "good" about gardening? Or painting? Or getting into debates with strangers on the internet?
What is laudable about being a doctor when the AI can do your job better? What is so great about travel when you can catch a flight to anywhere in the world and get there in less than 24 hours? What if it takes no time at all, subjectively, and we send a scan of your brain to Enceladus at the speed of light?
I do not rely on the approval of others to define my interests. I hope others have the courage to do the same.
Prison sucks though. The Tasmanian devils are getting expert and attentive care with the goal of meeting their needs as best we can.
They even get laid! And not sexually or violently assaulted.
The lack of freedom of movement is analogous to prison, but basically nothing else is.
Homeless people are a better analogy, although shelters dedicate WAY less effort/money to making the homeless happy than sanctuaries do for their animals.
So is eating humans also OK, since humans have been known to engage in cannibalism?
Only for non-humans. This seems to track: while I would defend myself from a wolf, bear, lion, etc., I would hardly begrudge them wanting to eat me.
Are dogs aliens? I must admit I've never seen a wolf piloting a flying saucer, but my lab has sent many a normal saucer flying. Unfortunately, they were only domesticated between 20-40k years ago, not 200k.
Cannibals from Papua New Guinea tell us that human flesh tastes just like pork. "Long pork", if you will. I like bacon, and I'm not averse to alternative sources. Maybe lab-grown meat will let me have a me-burger, with the ethical downsides kinda removed.
So is eating humans also OK, since humans have been known to engage in cannibalism?
In any case, you are missing the point that jdizzler brought up. Your arbitrary judgement of a species to be "dishonorable" doesn't justify breeding and raising tens of millions of them in horrible conditions.
I don't care about the moral worth of non-human animals, if they didn't want me to eat them, they should have been less tasty.
On a more serious note, I have no innate preference for cruelty, I simply do not care. If lab-grown meat (or even meat substitutes) tasted just like meat, and were cheaper, I'd eat them with equanimity.
I have a cousin in the UK who is a vegan, initially to get laid (his ex was vegan), but apparently the moral draw remained. He's stuck fast to it, even if his current soon-to-be fiancée is merely vegetarian. He's not preachy; when we meet, he makes sure to look for mutually acceptable options, and I have no issue with his lifestyle. I can see it makes his life significantly harder, but that's his choice. I introduced him to an Indian friend of mine in Edinburgh, who started lecturing him on nutritional deficits. I pointed out that he looked perfectly healthy to me, and if there were the kinds of serious issues he was positing, the man would have been dead by now. Each to their own, and me to a plate of bacon rashers please.
I'm not sure where you're going with that. That Epstein prepared himself by hiding / deleting evidence? Isn't Rov's point in this post that one way or the other, there was more than enough to lock him up, and it was only due to Acosta's poor judgement that he got off easy?
And again, what was his re-prosecution based on then, some decade later, with even less physical evidence, and witness testimony even more dubious? How was Maxwell sentenced for so long?
Or am I misunderstanding your argument, and you meant something else entirely?
Huh, interesting. I mostly hear weebs made fun of by Asian girls who want to complain about fetishes.
Has this actually been done? I'm aware of people talking about it, but not of it actually happening.
See Colossal Biosciences and their dire wolf project. Regardless of at what point you consider it "true de-extinction", they have demonstrated that you can modify key genome locations of a species related to the one you want to de-extinct, and that these modifications do indeed generate the desired traits the original species was known for. At the moment it's, as said, quite limited (they only made 20 edits with large phenotypic impact), but from here it's mostly just a question of doing this repeatedly to get arbitrarily close to the original species. And dire wolves went extinct in ancient times; it should be much easier with contemporary animals, due to the better availability of varied genomic information and more closely related species to start from. That approach is probably not viable for every extinct animal, though.
To the second paragraph, I guess my opinion is probably close enough; I'd be lying if I claimed that I consider every human life more valuable than every extinction imaginable.
This presumes that "consensus" remains an unbreached scaffold. What do you do when consensus itself melts, and all that's left is a Will to Power knife fight?
That was partially poetic, partially literal. I don't know exactly what she thought of me. But I've never, not once, had another medical professional clearly uncomfortable with literally just touching a patient during an examination.
I’m as much a redditor as I am a Digg user, or a 4chan user, or a SomethingAwful user (goon), or a MySpace user, or a Facebook user.
Which is to say, not at all. I don’t use any of those things anymore, even if I still have an active account.
You’re projecting since you’re still using Reddit. I probably spend less than an hour a month on Reddit, only reading it when shared in some other context on some other site. Or when I do a search and the other results are Reddit. Been that way for 3 years at this point.
It got boring. I appealed my ban in a sort of weary indifference, mostly out of curiosity. I knew it wouldn’t work, the site was too far gone and had been that way for years.
When I finally copped the ban I was using it maybe 2hrs a week when I was very bored, I simply didn’t enjoy it much anymore and most of the good communities had long ago gotten zapped by the eye of Sauron. I knew it was just a matter of time before there was literally nothing of value left so I pulled the plug early.
Just like Digg; not with a bang, but a whimper.
That's fascinating. I knew about mallard forced copulation, but I didn't know that the hens tried to elicit it. Science factoid providers not wanting to victim-blame mallards, maybe?
Reminds me of boxing hares.
He wrote that one should retreat down to the lowest level of the tower one finds necessary to fulfill one's moral obligations.
I disagree, the last exchange of his example suggests that when you've retreated to that lowest level, someone like Scott should come along to keep nudging you up the layers:
Q: FINE. YOU WIN. Now I’m donating 10% of my income to charity.
A: You should donate more effectively.
The person is not left to be comfortable at their fulfillment level.
I also continue to think it's interesting that he opposed this kind of shenanigan in his What We Owe The Future review, published the next day, TINACBNIEAC:
This series of commitments feels basically right to me and I think it prevents muggings.
But I’m not sure I want to play the philosophy game. Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms.... I realize that will intuitively feel like leaving some utility on the table - the first step in the chain just looks so much obviously better than the starting point - but I’m willing to make that sacrifice.
He perceives the muggings can't really be prevented, that there's always going to be another switch, and a rational choice is to avoid the whole game and choose different axioms.
I have some experience with games and algorithms, and that leads to some thoughts.
The big headline is that all the various methods we know (including humans) have problems. They often all have some strengths, too. The extremely big picture conceptual hook to hang a variety of particulars under is the No Free Lunch Theorem. Now, when we dig in to some of the details of the ways in which algorithms/people are good/bad, we often see that they're entirely different in character. What happens when you tweak details of the game; what happens when you make a qualitative shift in the game; what happens on the extremes of performance; what you can/can't prove mathematically; etc.
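For reference, the No Free Lunch result can be stated compactly. This is Wolpert and Macready's framing, with the notation a paraphrase from memory: $d_m^y$ is the sequence of objective values an algorithm has sampled after $m$ steps, and summing over all possible objective functions $f$, every pair of search algorithms $a_1, a_2$ does equally well:

```latex
% No Free Lunch (Wolpert & Macready): averaged over all objective
% functions f, the distribution of sampled values d_m^y after m steps
% is identical for any two algorithms a_1 and a_2.
\sum_f P(d_m^y \mid f, m, a_1) = \sum_f P(d_m^y \mid f, m, a_2)
```

The practical upshot for this thread: any strength an approach shows on one family of games is paid for somewhere else, so comparing humans, engines, and LLMs is really about *where* each one pays.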
To stick with the chess example, one can easily think about minor chess variants. One that has gotten popular lately is Chess960. Human players are able to adapt decently well in some ways. For example, they hardly ever play illegal moves, at least if you're a remotely experienced player. You miiiiight screw up castling at some point, or you could forget about it in your calculation, but if/when you do, it will 'prompt' you to ruminate on the rule a bit, really commit it to your thought process, and then you're mostly fine. At top-level human play, we almost never saw illegal moves, even right at the beginning of when it became a thing. Of course, humans clearly take a substantial performance hit.
Traditional engines require a minor amount of human reprogramming, particularly for the changed castling rules. But other than that, they can pretty much just go. They maybe also suffer a bit in performance, since they haven't built up opening books yet, but probably not as much.
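That "minor amount of reprogramming" for castling really is small: in Chess960 the king and rooks start on variable squares, but the post-castling squares are the standard ones (Kg1/Rf1 kingside, Kc1/Rd1 queenside, mirrored for Black). A toy sketch of just that rule; the function shape and names are my own illustration, not taken from any real engine:

```python
# Chess960 castling: start squares vary with the setup, but the
# destination squares are fixed. An engine mostly swaps in a table
# like this (plus the usual legality checks on the squares between).

CASTLE_TARGETS = {
    ("white", "kingside"):  ("g1", "f1"),  # (king_to, rook_to)
    ("white", "queenside"): ("c1", "d1"),
    ("black", "kingside"):  ("g8", "f8"),
    ("black", "queenside"): ("c8", "d8"),
}

def castle_destination(color, side, king_start, rook_start):
    """Return (king_to, rook_to). The start squares are accepted but
    unused -- that's the whole point of the 960 rule."""
    return CASTLE_TARGETS[(color, side)]

# Standard chess (king on e1, rook on h1) and a 960 setup with the
# king on d1 and kingside rook on f1 castle to the same squares:
print(castle_destination("white", "kingside", "e1", "h1"))  # ('g1', 'f1')
print(castle_destination("white", "kingside", "d1", "f1"))  # ('g1', 'f1')
```

So the engine change is a lookup-table edit; the opening book is the real loss.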
An LLM? Ehhhh. It depends? If it's been trained, like a chess LLM, entirely on full move sequences of traditional chess games, I can't imagine that it won't be spewing illegal moves left and right. It's just completely out of distribution. The answer here is typically that you need to curate a new dataset (somehow inputting the initial position) and retrain the whole thing. Can it eventually work? Yeah, maybe. But all these things are different.
You can have thought experiments with allll sorts of variants. Humans mostly adapt pretty quickly to the ruleset, with not so many illegal moves, but a performance hit. I'm sure I can come up with variants that require minimal coding modification to traditional engines; I'm sure I can come up with variants that require substantial coding modification to traditional engines (think especially to the degree that your evaluation function needs significant reworking; the addition of NNs to modern 'traditional' engines for evaluation may also require complete retraining of that component); others may even require some modification to other core engine components, which may be more/less annoying. LLMs? Man, I don't know. Are we going to get to a point where they have 'internalized' enough about the game that you could throw a variant at it, turn thinking mode up to max, and it'll manage to think its way through the rule changes, even though you've only trained it on traditional games? Maybe? I don't know! I kind of don't have a clue. But I also slightly lean toward thinking it's unlikely. [EDIT: This paper may be mildly relevant.]
Again, I'm thinking about a whole world of variants that I can come up with; I imagine with interesting selection of variants, we could see all sorts of effects for different methods. It would be quite the survey paper, but probably difficult to have a great classification scheme for the qualitative types of differences. Some metric for 'how much' recoding would need to happen for a traditional engine? Some metric on LLMs with retraining or fine-tuning, or something else, and sort of 'how much'? It's messy.
But yeah, one of the conclusions that I wanted to get to is that I sort of doubt that LLMs (even with max thinking mode) are likely to do all that well on even very minor variants that we could probably come up with. And I think that likely speaks to something on the matter of 'general'. It's not like the benchmark for 'general' is that you have to maintain the same performance on the variant. We see humans take a performance hit, but they generally get the rules right and do at least sort of okay. But it speaks to that different things are different, there's no free lunch, and sometimes it's really difficult to put measures on what's going on between the different approaches. Some people will call it 'jagged' or whatever, but I sort of interpret that as 'not general in the kind of way that humans are general'. Maybe they're still 'general' in a different way! But I tend to think that these various approaches are mostly just completely alien to each other, and they just have very different properties/characteristics all the way down the line.
Yes I would. Sure, it depends on the length of time - someone who lives in another country for a few months or even a few years does not cease to be an American that quickly. But when that person has been in the other country for a few decades, I think it's fair to say they aren't American any more. And I think @MaximumCuddles' case is more analogous to the American living overseas for a few decades - if he hasn't had an account in 9 years, that's an eternity in Internet time.