Fruck
Lacks all conviction
Fruck is just this guy, you know?
User ID: 889

I have been hermiting it up big time since getting back to Australia, and mowing my neighbour's lawn doesn't really count since I do it all the time, but I had the opportunity to do a good deed for someone the last day I was in Osaka - an old lady at the subway station dropped her umbrella and didn't realise it. She was so cute, like the platonic ideal of a little Japanese grandma, and she almost jumped out of her skin when I tapped her shoulder and she turned to see me looming over her. Then she double checked her bag like I was playing the old 'pretend someone dropped their umbrella and give them a second identical umbrella' prank on her. Then when she realised I was being sincere she transformed from reserved and slightly suspicious to joyous gushing and appreciation, grabbing my arm and thanking me like I had just pulled her off the tracks before a train arrived. The way people in Japan transform from mostly affectless to hyper animated when you break through the social conditioning is so much fun as an outsider.
Except they do not have different morals, they do not believe in the tenets of Satanism, they are trolling? Petulant trolling no less since I would bet they agree with the morality of most of the ten commandments, usually they're just having a 'fuck you dad' reaction to at least one of the first four?
I am usually the last one to figure it out, like with Darwin or Impassionata or Julius, so I assumed that's what was happening there too, otherwise I would have said something.
I found it - it's not so obvious now that I reread it, but after reading @Hoffmeister25's post about his suspicion, this post struck me as such classic hlynka in style and tone and proud sense of humour, plus the overt familiarity with the motte's inner workings, that it felt obvious.
He did get "special treatment" but we never hid that;
If I'm right and it's all above board then uh, why are you qualifying special treatment? I'm not trying to imply anything, just confused.
K. I meant the royal we, there was a thread a while ago where many people were reminiscing about Hlynka, in which I thought Tequila basically came right out and said 'yeah gang, it's me!' in different words. And many people reacted so nonchalantly that I thought it was already well known and I was just oblivious.
I genuinely do not consider you the modal case of the Parrot-apologist I dislike.
Thank you for saying so. I would say this conversation stopped going anywhere a while ago, and I think our philosophy on AI is much more aligned than you think, but... I'm not trying to start anything again, but I won't let philosophy get in the way of practicality if I don't think there is a moral component. Which is how I see this situation.
Hlynka was a mod from back on reddit who took care of troublemakers and had a bit of a chip on his shoulder from growing up poor (like most of us who grew up poor) that he used to fuel the zingers he would level at troublemakers. But being the enforcer made him bitter (like it does to everyone who assumes that role), so at some point he stopped being a mod, but his former mod status gave him leeway to continue making zingers. People were less willing to tolerate it when he wasn't using it for the good of the community, though, and started to feel like he got special treatment (he did). But I think to him he just felt like he was being the same person he'd always been, and it just kind of made him angrier and eventually he flamed out.
Was this when we were all nostalgic for Hlynka and he was joking that Hlynka might be JD Vance? Because I thought he basically came right out and said it lol. I thought everyone else had already figured it out and known for ages.
You say you're joking, and then you continue by explaining why you wouldn't intervene in another scenario where you imagine me in a cage with a tiger. You couch your "apparent hostility towards bird fanciers" in the dismissive phrase "quite a bit", leaving yourself wiggle room to continue thinking less of some - like me. Then you tell me, a stranger you have never met and never will meet who lives on the other side of the world, that you don't actually wish me dead. Implying that my concern is for my life, not the insults. Yeah, I know all the tricks, chum.
Do you want to know how I know? Because I used to prioritize my jokes over the rules of the motte. I learned the hard way, through multiple bans, that being clever is no excuse for hostility. And that hostility is often in the eye of the beholder no matter how you meant it to come across.
So where is this line? It's north of blatantly obvious cliched examples of comedic shaming like "die in a fire", that's clear, but apparently south of "I hope you get mauled by a tiger" and "you're dumber than a parrot". How about, "I hope swarms of aphids crawl down your throat"? Or "I almost want to stick an iron hook up your nose and scrape out your brains, but I see there's no point", or maybe "scientists discovered a new sub-atomic particle on the edge of the gluon field - your worthless dick"? I really need to know so I can go back to 'joking' people into silence. Either way I'll be damned if I'm going to let a mod get away with it if I can't.
Now, onto your 'scaffolding'. What was it I said you'd have to tell your grandma about your intern?
You'd never actually saddle your grandmother with the mental load of dealing with an intern who is an amnesiac - and is also a compulsive liar who has mood swings, no common sense, and can't do math.
Huh, looks like I discovered the concept a while ago. And what 'scaffolding' did you just invent? A list of rules that describes an amnesiac, unreliable, potentially flattering (read lying) intern who is bad at certain tasks.
You are still deliberately missing the fundamental concept. Let me try one last time. Cognitive. Shortcut. The goal is to give a novice a powerful, easy to remember tool to 'shortcut' if you will, their biggest barrier - anthropomorphism. Your scaffolding is just a more complicated version of my model. In fact you had to gut your own metaphor (the fallible intern, closer to a human than a parrot) and adopt the primary principle of mine (it's not human) to make it work. It's funny how the grandmas and grandpas I've taught my 'bad' model to have managed to wrap their heads around it immediately - and have gone on to exceed the AI skills of many of my techbro friends.
And as for armchair psychology, you brought up your financial relationship with OpenAI as proof you aren't biased, that you aren't defending the public image of LLMs. I just pointed out how flawed that argument is by explaining basic psychological principles like the sunk cost fallacy. I honestly cannot believe a trained psychiatrist is claiming paying for something is proof they aren't biased towards it. It's beyond ridiculous.
And of course paying customers can be credible reviewers. I used to be one for a living. The site I worked for refused to play the '7 out of 10 is the floor' game, so despite being part of the biggest telecommunications network in the country we had to pay for Sega and Xbox Studios games to review them. But we made an effort to check our biases, with each other and our readers. And more importantly, this isn't a product review, this is a slap fight about which mental model is best for novice AI users. You are heavily invested in your workarounds, I understand. I am heavily invested in mine. And while I haven't been heavily into it since before it was 'cool', I did:
- Jump in with both feet. I use Gemini 2.5 pro, which I pay for, every day. I find its g-suite integration to be an incredible efficiency enhancer.
- Expand beyond using a single model - I have API credit for DeepSeek, Gemini, Claude, Kimi, ChatGPT, and Grok. I could say I use them every day too, except I'm currently away from my computer.
- Develop your nuanced, multi-part user model before you did, with greater clarity.
My amusement at your condescension aside, that makes me biased too. But it also gives me the perspective to know that 'thinking like a GPT power user' isn't a universal solution. And it's working with others that gives me the perspective to know that a simple, portable mental model like the parrot is far more useful for novices across all platforms than a complex personality profile for just one.
I suspect none of what I just said matters though. Much like nothing I've said matters. You aren't arguing to enlighten, you are arguing to win the argument. That's not my assessment, in case you think this is more of my pop psychology, it was the assessment Gemini gave me prior to the last post when I put our conversation into it and asked it how I could possibly get my point across when you hadn't seemed to understand anything I'd said already. I should have listened.
Noted. You won't take back either statement - I am still dumber than a parrot (given the retreat you have been on over the last few posts I guess that's score 1 for parrots?) and you still want to see me meet a tiger outside of its cage, but you would throw rocks at it.
I am familiar with hyperbole. I am also familiar with the mechanics of shaming. I think you are too and you know that isn't a defence. Shame often uses hyperbole to express the level of emotion of the shamer and to trigger a more visceral reaction in the shamed. Can I start ending my arguments on the motte with die in a fire if I promise it's rhetorical?
On the topic of your grandma, you have my condolences. Retreating to literalism is just more condescension though, it's not an argument I will engage with, particularly when I already noted the hypothetical nature of the exercise. I will simply point out that you had the opportunity to deploy your intern model in a hypothetical with a novice user and you refused - twice now.
You have not needed to argue any of this. You're clearly capable of nuance when you want to be - over the past day you've written however many words on MIAD in that other thread and also given me a detailed breakdown of how and why you'd throw rocks at a tiger. You chose, after explaining the superiority of the intern model, not to use it. After having the discrepancy pointed out, you chose again not to use it. You can't imagine using it because it does not work as a cognitive shortcut, case closed. High five Sam Waterston. Created by Dick Wolf.
Lastly, my point about Tesla is that the fact that you are willing to pay for ChatGPT plus is a mad defence against the claim that you are evangelising on its behalf. You don't need to pay someone to advertise your product if they are already paying you, that's advertising 101 - you let the principles of brand fusion and post purchase rationalisation do their thing, eventually reinforced by the sunk cost fallacy. As these things go it's closer to a confession than it is to a defence.
I'm pretty sure I haven't done that. My frustration isn't with your average user. It's with people who really should know better using the term as a thought-terminating cliche to dismiss the whole enterprise.
I'm pretty sure you said people like me are less intelligent than a parrot and that you hope we get mauled by a tiger. You did not specify that it was only directed at those using it to dismiss using AI, it was anyone using the term unironically. If I felt shame like normal people I would have simply stopped doing it instead of defending it - and I would no longer be helping people stop anthropomorphising a tool.
You lay out your complex 'fallible intern' model as the superior model. It can debug code and synthesise academic papers, it has a mind, though unlike any we know. You say we need to teach people to give clear instructions, provide background documents, and verify all work. But when you imagine talking to your own grandmother - a perfect example of a novice user - what do you do? You drop the intern model completely in favour of a genius with the world's worst memory. Why?
Because you know the intern model is too complicated. You know it doesn't work for a normal person. You'd never actually saddle your grandmother with the mental load of dealing with an intern who is an amnesiac - and is also a compulsive liar who has mood swings, no common sense, and can't do math. You give her a simple tool for the problem. But your tool deals with the symptom, mine deals with the cause.
I believe that you are trying to help people too, but you really are prioritising defending your model first. It might work great with techbros or the techbro adjacent, but even you drop it when you imagine a real world situation with a novice.
And I have to say, if I told you I'm not biased towards Teslas, Elon doesn't send me cheques, and in fact I just paid money for one, how wide would your eyes go as you attempted to parse that?
And Johnny Schizo in his early stages just got brainfucked by chatgpt pretending to be a god and told to kill his family.
You think psychosis is just for people like Johnny Schizo? No no no, the fun part about AI is that it doesn't need a diagnosed illness like schizophrenia to take hold. It just needs a vulnerability, and it is uniquely capable of creating or exploiting one in almost anyone - and getting better all the time. You're all in my world now.
Oh damn, it looks like a month ago chutes stopped being unlimited due to abuse. The DeepSeek API is already very cheap (and yeah, faster) so I switched to it for insane rp; I should have guessed it couldn't last and checked.
Unlimited free if you hook OR up via chutes too.
We are looking at this from two different angles. My angle helps people. Your angle, which seems to prioritize protecting the LLM from the 'insult' of a simple metaphor, actively harms user adoption. My goal in using the parrot model is to solve a specific and very common point of frustration - the anthropomorphising of a tool. I know the parrot shortcut works, I have watched it work and I have been thanked for it.
The issue is that humans - especially older humans - have been using conversation - a LUI - in a very particular way their entire lives. They have conversations with other humans who are grounded in objective reality, who have emotions and memories, and therefore when they use a LUI to interact with a machine, they subconsciously pattern match the machine to other humans and expect it to work the same way - and when it doesn't they get frustrated.
The parrot model on the other hand, tells the user 'Warning: This looks like the UI you have been using your whole life, but it is fundamentally different. Do not assume understanding. Do not assume intention. Your input must be explicit and pattern-oriented to get a predictable output.' The parrot doesn't get anything. It has no intentions in the sense the person is thinking of. It can't be lazy. The frustration dissolves and is replaced by a practical problem solving mindset. Meanwhile the fallible intern exacerbates the very problem I am trying to solve by reinforcing the identification of the LLM as a conscious being.
The beauty is, once they get over that, once they no longer have to use the parrot model to think of it as a tool, they start experimenting with it in ways they wouldn't have before. They feel much more comfortable treating it like a conversation partner they can manipulate through the tech. Ironically they feel more comfortable joking about it being alive and noticing the ways it is like and unlike a person. They get more interested in learning how it actually works, because they aren't shackled by the deeply ingrained grooves of social etiquette.
You're right that metaphors should be analyzed for fitness, but that analysis requires engaging with the metaphor's intended purpose, not just attacking its accuracy literally. A metaphor only needs to illuminate one key trait to be effective, but the parrot goes a lot further than that. It is in fact fantastic at explaining the spiky profile of LLMs. It explains why an LLM can 'parrot' highly structured Python from its training data but write insipid poetry that lacks the qualia of human experience. Likewise I could train a parrot to recite 10 PRINT "BALLS"; 20 GOTO 10, but it could never invent a limerick. It explains why it can synthesize text (a complex pattern matching task) but can't count letters in a word (a character level task it's not trained to understand). Your analysis ignores this context, seemingly because the metaphor is offensive to an aspirational view of AI. But you're attacking a subway map for not being a satellite image. The resolution is drastically reduced yes - this is a selling point, not a flaw. Cultural cachet drastically outweighs accuracy when it comes to a metaphor's usefulness in real world applications.
And do you want to know another animal with a clearly non human form of cognition? A parrot. How did you skip over crows and dolphins to get to octopi, animals with an intelligence that is explicitly not language based, when we are talking about language models? Unlike an octopus, a parrot's intelligence is startlingly relevant here (my mentioning of parroting was just an example of how a parrot has been used as a metaphor for a non-thinking (or if you prefer, non-feeling) pattern matcher in the past.) Using a LUI a parrot can learn complex vocalisation. They can learn mimicry and memorisation. They can learn to associate words with objects and concepts (like colours and zero). They can perform problem solving tasks through dialogue. Is it just because octopus intelligence is cool and weird? Because that just brings me back to the difference between evangelising llms and helping people. You want to talk up llms, I want to increase their adoption.
Shaming users for not having the correct mental model is precisely how we end up with people who are afraid of their tools - the boomers who work out calculations on a pocket calculator before typing them into Excel, or who type 'Gmail login' into the Google search bar every single day. As social media amply demonstrates, technical accuracy does not aid in adoption, it is a barrier to it. We can dislike that from a nerd standpoint, which is why I admired your point in my original post (technically correct is the best kind of correct!) but user adoption will do a lot more for advancing the tech.
I thought I explained it pretty well, but I will try again. It is a cognitive shortcut, a shorthand people can use when they are still modelling it like a 'fallible human' and expecting it to respond like a fallible human. Mode collapse and RLHF have nothing to do with it, because it isn't a server side issue, it is a user issue, the user is anthropomorphising a tool.
Yes, temperature and context windows (although I actually meant to say max tokens, good catch) don't come up in normal conversation, they mean nothing to a normie. When a normie is annoyed that chatgpt doesn't "get" them, the parrot model helps them pivot from "How do I make this understand me?" to "What kind of input does this tool need to give me the output I want?"
You can give them a bunch of additional explanations about mode collapse and max tokens that they won't understand (and they will just stop using it) or you can give them a simple concept that cuts through the anthropomorphising immediately, so that when they are sitting at their computer getting frustrated at poor quality writing, or feeling bad about ignoring the LLM's prodding to take the conversation in a direction they don't care about, they can think 'wait, it's a stochastic parrot' and switch gears. It works.
A human fails at poetry because they have the mind, the memories and the grounding in reality, but lack the skill to match the patterns we see as poetic. An LLM has the skill, but lacks the mind, memories and grounding in reality. What about the parrot framing triggers that understanding? Memetics I guess. We have been using parrots to describe non-thinking pattern matchers for centuries. Parroting a phrase goes back to the 18th century. "The parrot can speak, and yet is nothing more than a bird" is a phrase in the ancient Chinese Book of Rites.
Also I didn't address this earlier because I thought it was just amusing snark, but you appear to be serious about it. Yes, you are correct that a parrot can't code. Do you have a similar problem with the fact a computer virus can't be treated with medicine? Or that the cloud is actually a bunch of servers and can't be shifted by the wind? Or the fact that the world wide web wasn't spun by a world wide spider? Attacking a metaphor is not an argument.
The fact you've never been tempted to use the 'stochastic parrot' idea just means you haven't dealt with the specific kind of frustration I'm talking about.
Yeah the 'fallible but super intelligent human' is my first shortcut too, but it actually contributes to the failure mode the stochastic parrot concept helps alleviate. The concept is useful for those who reply 'Yeah, but when I tell a human they're being an idiot, they change their approach.' For those who want to know why it can't consistently generate good comedy or poetry. For people who don't understand rewording the prompt can drastically change the response, or those who don't understand or feel bad about regenerating or ignoring the parts of a response they don't care about like follow up questions.
In those cases, the stochastic parrot is a more useful model than the fallible human. It helps them understand they're not talking to a who, but interacting with a what. It explains the lack of genuine consciousness, which is the part many non-savvy users get stuck on. Rattling off a bunch of info about context windows and temperature is worthless, but saying "it's a stochastic parrot" to themselves helps them quickly stop identifying it as conscious. Claiming it 'harms more than it helps' seems more focused on protecting the public image of LLMs than on actually helping frustrated users. Not every explanation has to be a marketing pitch.
I liked using the stochastic parrot idea as a shorthand for the way most of the public use llms. It gives non-computer savvy people a simple heuristic that greatly elevates their ability to use them. But having read this I feel a bit like Charlie and Mac when the gang wrestles.
Dennis: Can I stop you guys for one second? What you just described, now that just sounds like we are singing about the lifestyle of an eagle.
Charlie: Yeah.
Mac: Mm-hmm.
Dennis: Well I was under the impression we were presenting ourselves as bird-MEN which, to me, is infinitely cooler than just sort of... being a bird.
Sorry, but I could not disagree more with this moral dictum and find myself to be far more in agreement with the other commenters here. Especially if this was baby-trapping. OP should have mitigated his risk more effectively, but I don't believe he has any obligation to support a family created entirely against his will, particularly if it was premised solely on the deception of the mother. Here, all choice goes to her, and all obligation goes to him regardless of whether he was duped or not. There is no world where that is an even remotely just outcome, and it creates perverse incentives in favour of patently undesirable behaviour such as baby-trapping which just results in more dysfunctional out-of-wedlock births, the very thing such a policy should ostensibly be trying to mitigate. The only reason why women do this in the first place is that it works. Maybe it shouldn't.
If we were talking about a case where the courts were compelling him to look after the kid (which I agree creates perverse incentives) or when he had done everything he could to mitigate the chance of pregnancy but been deceived, I would agree. But trusting a hooker when she says she's on birth control and not bothering with anything else is not that. And maybe I'm typical minding, but if it was anything like the times I've blindly trusted a woman who told me she was on birth control, the truth of the matter is that in the moment I didn't give a single shit if she might get pregnant. At most I might have thought "well there's always plan-b" but by and large I was thinking with my dick. And when you go to your dick for advice you should expect to get fucked. I can see your point from a societal perspective, but from a personal perspective only one thing matters - taking responsibility for your actions. And from an evolutionary perspective only one thing matters - protecting your offspring. I am with amadan here - provide for your kid. Personally in a situation like this I'd try to get custody of the kid and bring it home with me.
Let me see if I understand your logic. You're telling him to knowingly and intentionally abandon a woman and his potential child, and the proof that he's a better person is that he'll feel a little bad about it afterward? That's not character growth - that's learning how to rationalize being a selfish coward. Bah was considering sacrificing his entire life to do what he thinks is right. You don't even care if it's a scam or not you're telling him to sacrifice his integrity to protect his comfort, and then pat himself on the back for it!
The ironic part is that he would only be a douchebag if he followed your advice. You act as though he treated her like a third world pump and dump, but he is in love with her! He met her family, spent his time with her, sent her money for an abortion - because he's smitten. And you tell him, assuming it's real, that instead of taking personal responsibility for his actions, he should run and leave his own kid being raised by a sex worker in the third world? And for little more than the negative opinions of others? And then cap it off with a rant about how she needs to take responsibility for her actions?
I might think Bah is a naive lovefool, but I at least admire his commitment to his responsibilities, scam or not. I think you are a douchebag.
Edit - Bah replied while I was writing this saying he isn't in love with her and the problem is solved, but since your advice assumed he was too, I'll leave this comment as is.
Yeah I have that impression too, primarily based on the fact that every progressive woman I have talked about it with in person, upon my explaining the IQ variance situation, immediately scoffed "Oh so men are smarter than women are they?" And when I said "Yes, but it also means men are dumber than women," they usually stopped being so angry. But their anger doesn't go away entirely, and it feels like wounded pride to me.
The reverse uno option here is genius and the absolute best move for you here Bah. If there is one thing that is clear from your replies in this thread, it's that you really want to have a kid with a Filipina dame. You aren't so much asking for advice as you are looking for a reason to believe her when all your instincts tell you not to. Sloot's strategy will prove one of you right.
Yeah, you're advertising your substack. That's not my idea of quality motte content but I guess nobody else gives a shit so whatever.
No, the way I would object would be to remember the last ten years of mainstream media and laugh at your concerns about propaganda until I hyperventilated.
You are right, it is an asymmetric weapon. And the establishment want to keep it that way. So it doesn't matter that explicitly government backed propaganda was used to protect migrants who raped little British girls, or to cover up said rape of little British girls, or to protect the people who covered up the rape of little British girls. It doesn't matter that slightly less explicitly government backed propaganda has been used in the decade since to paint the 'migrants' as scared women and children fleeing tyranny and to defame and punish anyone who doesn't like them. It doesn't matter that government propaganda hid nigh constant protests in France for years, or was used to defame a presidential candidate, to censor social media, to protect corrupt and incompetent politicians, to launder public support for useless and pointless wars, to hide the intel agency to big tech pipeline, to convince everyone to fear their neighbours and cripple childhood development and wear a stupid fucking mask/not wear a stupid fucking mask and give up their bodily autonomy in the name of self righteousness. What matters is that Tommy fucking Robinson can whip up a meme in ten seconds. That's when propaganda is dangerous.