Fruck
Oh damn, it looks like a month ago Chutes stopped being unlimited due to abuse. The DeepSeek API is already very cheap (and yeah, faster) so I switched to it for insane RP; I should have guessed it couldn't last and checked.
Unlimited free if you hook OR up via chutes too.
We are looking at this from two different angles. My angle helps people. Your angle, which seems to prioritize protecting the LLM from the 'insult' of a simple metaphor, actively harms user adoption. My goal in using the parrot model is to solve a specific and very common point of frustration - the anthropomorphising of a tool. I know the parrot shortcut works, I have watched it work and I have been thanked for it.
The issue is that humans - especially older humans - have been using conversation - a LUI - in a very particular way their entire lives. They have conversations with other humans who are grounded in objective reality, who have emotions and memories, and therefore when they use a LUI to interact with a machine, they subconsciously pattern match the machine to other humans and expect it to work the same way - and when it doesn't they get frustrated.
The parrot model on the other hand, tells the user 'Warning: This looks like the UI you have been using your whole life, but it is fundamentally different. Do not assume understanding. Do not assume intention. Your input must be explicit and pattern-oriented to get a predictable output.' The parrot doesn't get anything. It has no intentions in the sense the person is thinking of. It can't be lazy. The frustration dissolves and is replaced by a practical problem solving mindset. Meanwhile the fallible intern exacerbates the very problem I am trying to solve by reinforcing the identification of the LLM as a conscious being.
The beauty is, once they get over that, once they no longer have to use the parrot model to think of it as a tool, they start experimenting with it in ways they wouldn't have before. They feel much more comfortable treating it like a conversation partner they can manipulate through the tech. Ironically they feel more comfortable joking about it being alive and noticing the ways it is like and unlike a person. They get more interested in learning how it actually works, because they aren't shackled by the deeply ingrained grooves of social etiquette.
You're right that metaphors should be analyzed for fitness, but that analysis requires engaging with the metaphor's intended purpose, not just attacking its literal accuracy. A metaphor only needs to illuminate one key trait to be effective, but the parrot goes a lot further than that. It is in fact fantastic at explaining the spiky profile of LLMs. It explains why an LLM can 'parrot' highly structured Python from its training data but write insipid poetry that lacks the qualia of human experience. Likewise I could train a parrot to recite 10 PRINT "BALLS"; 20 GOTO 10, but it could never invent a limerick. It explains why it can synthesize text (a complex pattern matching task) but can't count letters in a word (a character level task it's not trained to understand). Your analysis ignores this context, seemingly because the metaphor is offensive to an aspirational view of AI. But you're attacking a subway map for not being a satellite image. The resolution is drastically reduced, yes - that is a selling point, not a flaw. Cultural cachet drastically outweighs accuracy when it comes to a metaphor's usefulness in real world applications.
And do you want to know another animal with a clearly non-human form of cognition? A parrot. How did you skip over crows and dolphins to get to octopi, animals with an intelligence that is explicitly not language based, when we are talking about language models? Unlike an octopus, a parrot's intelligence is startlingly relevant here (my mentioning of parroting was just an example of how a parrot has been used as a metaphor for a non-thinking (or if you prefer, non-feeling) pattern matcher in the past.) Using a LUI a parrot can learn complex vocalisation. They can learn mimicry and memorisation. They can learn to associate words with objects and concepts (like colours and zero). They can perform problem solving tasks through dialogue. Is it just because octopus intelligence is cool and weird? Because that just brings me back to the difference between evangelising LLMs and helping people. You want to talk up LLMs, I want to increase their adoption.
Shaming users for not having the correct mental model is precisely how we end up with people who are afraid of their tools - the boomers who work out calculations on a pocket calculator before typing them into Excel, or who type 'Gmail login' into the Google search bar every single day. As social media amply demonstrates, technical accuracy does not aid in adoption, it is a barrier to it. We can dislike that from a nerd standpoint, which is why I admired your point in my original post (technically correct is the best kind of correct!) but user adoption will do a lot more for advancing the tech.
I thought I explained it pretty well, but I will try again. It is a cognitive shortcut, a shorthand people can use when they are still modelling it like a 'fallible human' and expecting it to respond like a fallible human. Mode collapse and RLHF have nothing to do with it, because it isn't a server side issue, it is a user issue, the user is anthropomorphising a tool.
Yes, temperature and context windows (although I actually meant to say max tokens, good catch) don't come up in normal conversation, they mean nothing to a normie. When a normie is annoyed that ChatGPT doesn't "get" them, the parrot model helps them pivot from "How do I make this understand me?" to "What kind of input does this tool need to give me the output I want?"
You can give them a bunch of additional explanations about mode collapse and max tokens that they won't understand (and they will just stop using it) or you can give them a simple concept that cuts through the anthropomorphising immediately, so that when they are sitting at their computer getting frustrated at poor quality writing, or feeling bad about ignoring the LLM's prodding to take the conversation in a direction they don't care about, they can think 'wait, it's a stochastic parrot' and switch gears. It works.
A human fails at poetry because they have the mind, the memories and the grounding in reality, but lack the skill to match the patterns we see as poetic. An LLM has the skill, but lacks the mind, memories and grounding in reality. What about the parrot framing triggers that understanding? Memetics, I guess. We have been using parrots to describe non-thinking pattern matchers for centuries. Parroting a phrase goes back to the 18th century. "The parrot can speak, and yet is nothing more than a bird" is a phrase in the ancient Chinese Book of Rites.
Also I didn't address this earlier because I thought it was just amusing snark, but you appear to be serious about it. Yes, you are correct that a parrot can't code. Do you have a similar problem with the fact a computer virus can't be treated with medicine? Or that the cloud is actually a bunch of servers and can't be shifted by the wind? Or the fact that the world wide web wasn't spun by a world wide spider? Attacking a metaphor is not an argument.
The fact you've never been tempted to use the 'stochastic parrot' idea just means you haven't dealt with the specific kind of frustration I'm talking about.
Yeah, the 'fallible but super intelligent human' is my first shortcut too, but it actually contributes to the failure mode the stochastic parrot concept helps alleviate. The concept is useful for those who reply 'Yeah, but when I tell a human they're being an idiot, they change their approach.' For those who want to know why it can't consistently generate good comedy or poetry. For people who don't understand that rewording the prompt can drastically change the response, or those who don't understand, or feel bad about, regenerating or ignoring the parts of a response they don't care about, like follow up questions.
In those cases, the stochastic parrot is a more useful model than the fallible human. It helps them understand they're not talking to a who, but interacting with a what. It explains the lack of genuine consciousness, which is the part many non-savvy users get stuck on. Rattling off a bunch of info about context windows and temperature is worthless, but saying "it's a stochastic parrot" to themselves helps them quickly stop identifying it as conscious. Claiming it 'harms more than it helps' seems more focused on protecting the public image of LLMs than on actually helping frustrated users. Not every explanation has to be a marketing pitch.
I liked using the stochastic parrot idea as a shorthand for the way most of the public use LLMs. It gives non-computer savvy people a simple heuristic that greatly elevates their ability to use them. But having read this I feel a bit like Charlie and Mac when the gang wrestles.
Dennis: Can I stop you guys for one second? What you just described, now that just sounds like we are singing about the lifestyle of an eagle.
Charlie: Yeah.
Mac: Mm-hmm.
Dennis: Well I was under the impression we were presenting ourselves as bird-MEN which, to me, is infinitely cooler than just sort of... being a bird.
Sorry, but I could not disagree more with this moral dictum and find myself to be far more in agreement with the other commenters here. Especially if this was baby-trapping. OP should have mitigated his risk more effectively, but I don't believe he has any obligation to support a family created entirely against his will, particularly if it was premised solely on the deception of the mother. Here, all choice goes to her, and all obligation goes to him regardless of whether he was duped or not. There is no world where that is an even remotely just outcome, and it creates perverse incentives in favour of patently undesirable behaviour such as baby-trapping which just results in more dysfunctional out-of-wedlock births, the very thing such a policy should ostensibly be trying to mitigate. The only reason why women do this in the first place is that it works. Maybe it shouldn't.
If we were talking about a case where the courts were compelling him to look after the kid (which I agree creates perverse incentives) or when he had done everything he could to mitigate the chance of pregnancy but been deceived, I would agree. But trusting a hooker when she says she's on birth control and not bothering with anything else is not that. And maybe I'm typical minding, but if it was anything like the times I've blindly trusted a woman who told me she was on birth control, the truth of the matter is that in the moment I didn't give a single shit if she might get pregnant. At most I might have thought "well there's always plan-b" but by and large I was thinking with my dick. And when you go to your dick for advice you should expect to get fucked. I can see your point from a societal perspective, but from a personal perspective only one thing matters - taking responsibility for your actions. And from an evolutionary perspective only one thing matters - protecting your offspring. I am with amadan here - provide for your kid. Personally in a situation like this I'd try to get custody of the kid and bring it home with me.
Let me see if I understand your logic. You're telling him to knowingly and intentionally abandon a woman and his potential child, and the proof that he's a better person is that he'll feel a little bad about it afterward? That's not character growth - that's learning how to rationalize being a selfish coward. Bah was considering sacrificing his entire life to do what he thinks is right. You don't even care if it's a scam or not you're telling him to sacrifice his integrity to protect his comfort, and then pat himself on the back for it!
The ironic part is that he would only be a douchebag if he followed your advice. You act as though he treated her like a third world pump and dump, but he is in love with her! He met her family, spent his time with her, sent her money for an abortion - because he's smitten. And you tell him, assuming it's real, that instead of taking personal responsibility for his actions, he should run and leave his own kid being raised by a sex worker in the third world? And for little more than the negative opinions of others? And then cap it off with a rant about how she needs to take responsibility for her actions?
I might think Bah is a naive lovefool, but I at least admire his commitment to his responsibilities, scam or not. I think you are a douchebag.
Edit - Bah replied while I was writing this saying he isn't in love with her and the problem is solved, but since your advice assumed he was too, I'll leave this comment as is.
Yeah, I have that impression too, primarily based on the fact that every progressive woman I have talked about it with in person, upon my explaining the IQ variance situation, immediately scoffed "Oh, so men are smarter than women, are they?" And when I say "Yes, but it also means men are dumber than women," they usually stop being so angry. But their anger doesn't go away entirely, and it feels like wounded pride to me.
The reverse uno option here is genius and the absolute best move for you here Bah. If there is one thing that is clear from your replies in this thread, it's that you really want to have a kid with a Pilipina dame. You aren't so much asking for advice as you are looking for a reason to believe her when all your instincts tell you not to. Sloot's strategy will prove one of you is right.
Yeah, you're advertising your substack. That's not my idea of quality motte content but I guess nobody else gives a shit so whatever.
Come on son, at least put the primary parts of the essay here, if not the whole thing.
Ah, I'm too old - I can't really type one-handed on the phone either. Oh God, I borrowed my nephew's phone the other day to call his dad - I just thought he had sweaty hands like his dad.
I've only used one card that worked in that text style format, for a girl in a fantasy world who finds your cousin's phone after it gets isekai'd, but it was bittersweet, not erotic. But that brings up a related issue - yeah, I'll bet you have downtime! As I'm sure you know, the reason the text style conversations don't work that well is because they don't give the AI enough context - but when you are typing out a hundred words about how you would pleasure your waifu, how do you, uh, maintain momentum?
I'm glad you mentioned regenerating responses and OOC replies and impersonation though, because I find it interesting how that works with my brain - I have used those with romantic and adventure role-playing, and because they were stipulated as necessary by whatever rentry guide I read to get into this nonsense they don't trigger the puppeteer feeling in me, even though they absolutely should. But that was something I noticed about @No_one's original response - it is the context of an obvious business transaction that precludes the possibility of love specifically - there could be a situation where he could fall in love with a prostitute - they meet outside of work for instance.
I guess my point, if I have one, is that it's all about perspective, which means you can deceive yourself into a fictional relationship if you try hard enough. Which is bad news for society, but good news for anyone looking to get off! Personal gratification or society is always in tension. I would be more worried about it if I hadn't already given up.
I tried to too, because as I have probably said before I don't give a shit if it's real if it's convincing enough, because I know how little difference the distinction makes to your brain - my thinking was it's no different to any online relationship really, except it will cost you a lot more to meet your AI girlfriend (because you will have to invent androids). Either way, internally you get that sense of connection and someone caring about you despite their physical absence.
And I have found that if I make the prompt good enough I can create a character who continually surprises me in a lifelike manner, but in order to do that I have to give the AI some leeway to disagree and rebuke me - and that is when it falls apart for me, because it breaks the illusion - the moment it challenges me, I'm reminded I could tweak the code to make it agree - and that's when the self-loathing creeps in, because it's not just about the illusion breaking; it's knowing I'm the one pulling the strings.
I also tried making a coombot, as the kids say. I can understand the appeal of that intellectually - what's not to like about sexting with someone who is literally everything you've ever wanted in a sex partner - even if they are a celebrity or a straight up fictional being? But practically... How does it work? I don't understand. Are you typing one handed? I don't want to think about the alternative (time to bust out the press shift five times jokes from the nineties!) I asked grok (for research for this post exclusively) and it suggested I buy a $20 extra keyboard so I can keep my other keyboard clean - please someone tell me that was because of my prompting and not because that's a common solution.
Lmao so cutting! And so ironic. The product is called waves, your name is Wave_Existence. Ease your mind, that was the extent of the mental bandwidth I expended on you.
Was it the nanny state when the government updated its laws about child pornography distribution in response to the development of p2p technology? When is it sane to invoke the constitution in your eyes, if not when there is a question about the potential legality of an action or technology?
I was using it colloquially to refer to the right to privacy, sorry for confusing you. But do you have any reason - at all - to assume the government won't use privately made recordings like they have tried to with ring cameras and bodycam footage?
It doesn't make me feel a little uncomfortable, it infringes upon a principle I grew up with and will fight for no matter how sisyphean the task. You might live such a tame and banal life you have no need for a general expectation of privacy in your private life, but I do not.
I understand defending these things is an existential concern for you Wave_Existence, but come on, this is the nanny state? The fourth amendment does still exist right?
I would expect some government intervention, they'd want you to ensure it only works on glassholes and doesn't affect security cameras somehow. Because otherwise it's just a free crime app.
Thomas the entire forum got together and discussed it, you need to finish episode 3.
I resonate with so much of this, except for finding fiction hard to read - while I also vastly prefer dialogue, my imagination has no trouble generating imagery to match the narrative. But I've always also thought that was the part of reading that was like exercise and years as a slop vacuum have made me farm strong at it. That's why visual novels and comics can be wordy as hell but nobody is impressed when you tell them you read them.
Anyway, do you ever worry when you find yourself saying "Ha ha now you're Tolkien!" (when you just read a cleverly written passage) or "Just fucking shoot me already" (infinite applicable situations) out loud that that's how hobos get started? Because I do, all the time.
Have you tried goblin.tools?
And that makes me a little sad. Discord is fine, but I can't help but notice that I'm going down the same path that so many repressed 3rd worlders do and resorting to discussion on unsearchable, ungovernable silos. For all the sins of social media, it really does - or at least did - serve as a modern public square. And I'll miss the idea (if not necessarily the reality) that the debates I participated in could be found, and heard, by a truly public audience.
I want to feel sympathy for you, because I know how demoralising it is to lose a source like that - but that's because I went through it a decade ago. Social media has not represented the public since, it has been a variety of attempts to control the public. I guess I can appreciate that you finally see the problem.
If anything to me the debates sort of remind me of the ones over personality, psychology, and determinism. We still haven't figured out strongly if people are deterministic or not, and so we seem ill-suited to judge how deterministic an LLM is in its responses. Personally, I'm satisfied by calling LLMs jagged or fragile intelligence, and I think that captures more nuance than a more loaded general term.
Agreed wholeheartedly. The similarities between this argument and the argument to define consciousness are so clear imo it gives the game away. Never mind AI, most people are capable of 'intelligence' (quoted to refer to the OP, not snark) but spend most of their time trapped by their context window. Many people will similarly apologise unreservedly for making up code that fucks your setup, tell you how ashamed they are for making such a foolish mistake, how glad they are you caught it, and promise to do better - and then print a negligible variation on the code they first gave you. Many people are incapable of absorbing new information and casting out the old, which is why the left are still wailing about Christians persecuting gays and the right are still hunting communists. They refuse to update their memory, or they fall back on old patterns when tired or stressed. Are they unintelligent?
Ok I'm not sure which side I'm arguing now.
San Andreas and Chinatown Wars play pretty well on mobile, at least the standalone versions did, and they controlled surprisingly well (maybe even preposterously well - I got up to Las Venturas before I got distracted by something else, and if you had ever heard me bitch about touch controls for phones and tablets you'd be very impressed). Although based on the reviews it sounds like Netflix did something to tank the performance - no doubt adding some data mining shit. Considering GTASA ran perfectly well on my Pixel 3, there is no reason it shouldn't run well on pretty much anything available today.
I think you summed it up pretty succinctly, but you forgot the best conspiracy theory angle - Seymour Hersh was deliberately fed a poison pill that bore enough similarities to his other big breaks to convince him to suspend his skepticism, tanking the credibility of the concept in the eyes of the public!
I still don't think that makes him an irredeemable source though.

You think psychosis is just for people like Johnny Schizo? No no no, the fun part about AI is that it doesn't need a diagnosed illness like schizophrenia to take hold. It just needs a vulnerability, and it is uniquely capable of creating or exploiting one in almost anyone - and getting better all the time. You're all in my world now.