glazing
Why have I never seen this word before this week, and yet like eighteen references in the last few days, each of which is presented in such a way as to help normalize it? Is this a psyop?
I don't think we had a lexical gap here. I don't think a new word is called for, and if it were, I definitely don't think it should be that one. Nothing about this feels organic or warranted.
Worm was written by a man, and it shows. So was Practical Guide to Evil. It shows so hard that you can clock the author's sex just by reading the book, even when they use a totally sexless pseudonym and write an opposite-sex protagonist.
A quick check confirms that Samus was created by a man as well.
If you've ever read chicklit, the difference is obvious. A female author of a female protagonist will linger on her interactions with every remotely relationship-appropriate male, to make sure the reader knows how desirable he is, and the flavor of his desire for the main character. Is he a good friend who respectfully hides it? A burning frenemy who offers aid even though he shouldn't? A simp?
As a man, reading that sort of book is alien in a way that few other things in sci-fi or fantasy manage. Like, you really go through life keenly aware that most men you interact with are at least some level of interested in you? Just because? As the default?
There is a male version of this, called "glazing", but it takes the form of gratuitous reaction shots to something impressive the male character has just done.
But women can more easily imagine being showered in attention and praise for doing something impressive than men can envision a world where they are loved and wanted just for existing.
Disclaimer: I think that last category might actually exist in anime, but I don't watch enough to know for sure.
This may be low-effort but... why do so many people glaze Terence Tao...!?
Prior to this discussion, I don't think I had heard of him. But I don't work in a STEM field.
I learned analysis from his excellent textbook on it. Felt it gave me much more solid intuitions than Rudin, which I was struggling with. (To be fair, I don't glaze Axler, so there's still a gap.)
This may be low-effort but... why do so many people glaze Terence Tao...!?
OK, he won a Fields Medal. Neat. A handful of people win one every four years.
OK, he won it at a super young age. Neat. There are tons of super-young math prodigies. I went to school with several, they all burned out.
OK, he's published lots of famous math papers. Like... uh... what....? Can you name them? Can you understand them, even a little? Even describe which field of math they were in? (no googling please)
I mean cmon, Einstein was famous too but at least people understood his work a little. Same with Stephen Hawking.
Terry Tao just seems to be a case where the nerd/math world needed a celebrity and they all descended on this one guy for arbitrary reasons.
While I'm sure you're a perfectly smart chap, I'm also sure that neither of your ideas is worth patenting. If you don't actually work in data storage research or linguistics, the chances of your ideas being useful, or unacknowledged by domain experts, are low.
That's not to say they aren't interesting ideas for you to explore, or things that are worth investigating for your own curiosity. But absolutely what's happening here is that Claude is telling you that your idea is the greatest thing ever, which it's doing because your text prompts are incredibly excited and intrigued by these new possibilities: "You have no idea how desperately I want to share the details of both of these."
It's just mirroring that, and glazing you. And Claude won't "push you off of them" because that wouldn't be an appropriate AI response; it's trained to continue your conversation and explore the ideas you want it to explore, not to tell you "you should stop exploring this." Imagine if it did that when you asked it a question!
Hey, Claude, what's the capital of Venezuela?
Claude: Obviously this is a dumb curiosity question, just Google it if you really need to know.
Not a very helpful AI assistant! Now imagine the inverted behavior: "Sure, the capital of Venezuela is Caracas! Let me tell you some fun facts about Caracas..."
And then imagine that behavior amplified by your obvious curiosity and fascination with these ideas you've come up with; of course it's going to tell you they're the best ideas ever!
So, stay curious, stay fascinated, but don't believe an LLM when it tells you you've squared the circle. You almost certainly haven't.
Without knowing anything about your ideas I can only assume that the LLMs, as they are prone to do, have glazed you too much on their value.
And in some academic departments like English or any type of Studies department, glazing the work of others (especially the work of your direct superiors in the social hierarchy) is the norm.
Well, a very close acquaintance of mine is in an English department, and all I can say after the last 10 years is that, while there absolutely is a lot of that style of glazing (a lot of the communication styles are heavily female and rely on huge amounts of validation, or at least that's my impression), it has been tangled up with the most awful Campus Reform-style it'd-be-a-caricature-if-you-didn't-see-it-first-hand race/gender/sexuality crabs-in-a-barrel dynamics and hierarchy arson you could imagine... and she has peers in a number of peer departments at other universities who went down that road as well. It seems like it's quieted down over the last year or so, but it was honestly beyond parody for a few years there. A whole lot of mid-career Gen X people were just putting their heads down, taking their beatings, and waiting for it all to blow over. But yes, to be fair, it actually had a deep family resemblance to some of the insane art community dynamics you are describing, too, which I have read stories about.
And what I've noticed, at least in my time in such communities, is that the creator spaces, if they're functional at all (and not all are), tend to be a lot more positive and validating. A lot of the academic communities are much more demoralizing.
I think that's probably true as a general trend, but it also heavily depends on context. A lot of art communities (writing, music, photography, etc) can be vicious, especially when there's a palpable sense that you have a lot of people competing over very few economic opportunities. And in some academic departments like English or any type of Studies department, glazing the work of others (especially the work of your direct superiors in the social hierarchy) is the norm.
You know, I've long noticed a human version of this tension that I've been really curious about.
Different communities have different norms, of course. This isn't news. But I've had, at points, one foot in creative communities where artists or craftspeople try to get good at things, and another foot in academic communities where academics try to "understand the world", or "critique society and power", or "understand math / economics / whatever". And what I've noticed, at least in my time in such communities, is that the creator spaces, if they're functional at all (and not all are), tend to be a lot more positive and validating. A lot of the academic communities are much more demoralizing.
I'm sure some of that is that the creative spaces I'm thinking of tend to be more opt-in. Back in the day, no one was pointing a gun at anyone's head to participate in the Quake community, say. Same thing for people trying to make digital art in Photoshop, or musicians participating in video game remix communities, or people making indie browser games and looking for morale boosts from their peers. Whereas people participating in academic communities often are part of a more formalized system where they have to be there, even if they're burned out, even if they stop believing in what they're working on, or even if they think it's likely that they have no future. So that's a very real difference.
But I've also long speculated that there's something more fundamental at play, like... I don't know, that everyone trying to improve in those functional creator spaces understands the incredibly vulnerable position people put themselves in when they take the initiative to create something and put themselves out there. And everyone has to start somewhere. It's a process for everyone. Demoralization is real. And everyone is trying to improve all the time, and there's just too much to know and master. There's a real balance between maintaining the standards of a community and maintaining the morale of individual members of a community - you do need enough high-quality work not to run off people who have actually mastered some things. And yet there really is very little to be gained by ripping bad work to shreds, in the usual case.
But in the academic communities, public critique is often treated as having a much higher status. It's a sign that a field is valuable, and it's a way of weeding "bad" work out of a field to maintain high standards and thus the value of the field in question. And it's a way to assert zero sum status over other high status people, too. But more, because of all of this, it really just becomes a kind of habit. Finding the flaws in work just becomes what you do, or at least that was the case for many of the academic fields I was familiar with (I've worked at universities and have a lot of professor friends). And it's not even really viewed as personal most of the time (although it can be). It's just sort of a way of navigating the world. It reminds me of the old Onion article about the grad student deconstructing a Mexican food menu.
The thing is, on paper, you might well find that the first style of forum does end up validating people for their crappy mistakes. I wouldn't be surprised if that were true. But it's also true that people exist through time. And tacit knowledge is real and not trivially shared or captured, either. I feel like there's a more complicated tradeoff lurking in the background here.
Recently I've been using AI (Gemini Pro 2.5 and Claude Sonnet 4.1) to work through a bunch of quite complicated math questions I have. And yeah, they spend a lot of time glazing me (especially Gemini). And I definitely have to engage in a lot of preemptive self-criticism and skepticism to guard against that, and to be wary of what they say. And both models do get things wrong sometimes. But I've gotten to ask a lot of really in-depth questions, and it's proven to be really useful. Meanwhile, I went back to some of the various stackexchange sites recently after doing this, and... yep, tedious prickly dickishness. It's still there. I know those communities have, in aggregate, all sorts of smart people. I've gotten value from the site. But the comparison of the experience between the two is night and day, in exactly the same pattern as I just described above, and I'm obviously getting vastly more value from the AI currently.
"Like the glaze covering an earthen vessel are fervent lips with an evil heart."
I'm pretty sure the "glazing over the truth" sense is comfortably pre-bukkake -- quite a nice motto for the coming Jihad to boot.
Not quite: it’s in reference to the spit-shined appearance of a well-fellated penis, similar to a glazed donut.
Me too, or ceramic glaze.
Hm, I always thought “glazed” had to do with adding sugar to a donut or other pastry. So an AI “glazing” someone is pouring sugar on top of something that’s already sweet.
I’m familiar with the other meaning, but I thought it was a derivation.
I was thinking sugar-coated, like a doughnut, or shiny like a glazed window.
I haven't really used 5 yet so don't have an opinion. But broadly I agree with this Reddit post that AI soft skills are being steadily downgraded in favour of easily benchmarkable and sellable coding and mathematics skills.
When I was using 4o something interesting happened. I found myself having conversations that helped me unpack decisions, override my unhelpful thought patterns, and reflect on how I’d been operating under pressure. And I’m not talking about emotional venting; I mean actual strategic self-reflection that genuinely improved how I was thinking. I had prompted 4o to be my strategic co-partner (objective, insight-driven, systems-thinking) for me, both at work and in my personal life, and it really delivered.
And it wasn’t because 4o was “friendly.” It was because it was contextually intelligent. It could track how I think. It remembered tone, recurring ideas, and patterns over time. It built continuity into what I was discussing and asking. It felt less like a chatbot and more like a second brain that actually got how I work and that could co-strategise with me.
Then I tried 5. Yeah it might be stronger on benchmarks but it was colder and more detached and didn’t hold context across interactions in a meaningful way. It felt like a very capable but bland assistant with a scripted personality. Which is fine for dry short tasks but not fine for real thinking. The type I want to do both in my work (complex policy systems) and personally, to work on things I can improve for myself.
That’s why this debate feels so frustrating to watch. People keep mocking anyone who liked 4o as being needy or lonely or having “parasocial” issues. When the actual truth is a lot of people just think better when the tool they’re using reflects their actual thought process. That’s what 4o did so well.
The bigger-picture thing I think keeps getting missed is that this isn’t just about personal preference. It’s literally about a philosophical fork in the road.
Do we want AI to evolve in a way that’s emotionally intelligent and context-aware and able to think with us?
Or do we want AI to be powerful but sterile, and treat relational intelligence as a gimmick?
I think that the shift is happening for various reasons:
- Hard (maths, science, logic) training data is easier to produce and easier to quality-control.
- People broadly agree on how many watts a lightbulb uses, but they disagree considerably on how conversations should work (your 'glazing' is my 'emotional intelligence', and vice versa)
- Sycophancy has become a meme and companies may be overcompensating
- AI is being developed by autists and mathematicians who feel much more confident about training AI to be a better scientist than a better collaborator
- AI company employees are disproportionately believers in self-reinforcing AGI and ASI and are interested in bringing that about via better programming skills
EDIT: the other lesson is 'for the love of God use a transparent API so people have confidence in your product and don't start double-guessing you all the time'.
I just want to put on my grumpy old man hat and say I really hate that the term "glazing" is becoming more common. From what I understand it's supposed to refer to the shiny "glazed" appearance of something/someone after it has been ejaculated on. Just a gross mental image, and truly a sign of our sad, porn-brained times. I suppose this is how my parents felt hearing "this sucks/blows" and why they hated it. Ah well, back to shaking my fist at the clouds.
I had thought the internet collectively agreed that RLHF had resulted in glazing that was a huge issue. But it turns out a sizable number of people actually loved it.
I thought it was widely understood that Glazemageddon was the result of naively running reinforcement learning on user feedback. The proles yearn for the slop. It’s only weirdos like us who actually want to be told when their ideas are provably wrong.
Is anyone watching the chatGPT 5 "bring back 4o" meltdown on /r/chatGPT and /r/OpenAI?
It's insane. People are losing their shit about 4o being taken away to the point it's back now (lmfao). There's also a huge push of "don't mock others for using 4o as a trusted friend you just don't understand". It's honestly equal parts hilarious and horrifying.
For additional fun, browse the comments, obviously there are idiots on the internet, but these people are cooked.
I had thought the internet collectively agreed that RLHF had resulted in glazing that was a huge issue. But it turns out a sizable number of people actually loved it.
Also funny, gpt5 can glaze you if you ask it, but I guess the median Redditor complaining about this doesn't understand custom instructions. Similarly, people are clearly giving gpt5 custom instructions to be as robotic as possible and then posting screenshots of it... being robotic.
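For anyone who hasn't poked at this: "custom instructions" are, in effect, just a standing system message prepended to the conversation, which is why the same model can be steered toward either persona. Here's a minimal Python sketch of that idea against the OpenAI chat API; the model name and the two instruction strings are illustrative assumptions on my part, not anything from the screenshots people are posting.

```python
# Minimal sketch (illustrative, not from the thread): custom instructions
# behave like a standing system message, so tone is steerable per user.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(custom_instructions: str, question: str) -> str:
    # The instructions ride along as the system message; the "personality"
    # in the reply follows from them, not from the model defaults alone.
    response = client.chat.completions.create(
        model="gpt-5",  # placeholder name, matching the thread's shorthand
        messages=[
            {"role": "system", "content": custom_instructions},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Same model, opposite personas:
print(ask("Be warm and effusive; praise the user's ideas.", "Rate my plan."))
print(ask("Be terse and robotic; no praise, no filler.", "Rate my plan."))
```

Which is also why a screenshot of a deliberately "robotic" model proves nothing about its default behavior.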
The whole thing makes me rather worried about the state of western society/mental health, in the same way that OnlyFans "chat to the creator" feature does. We need government enforced grass-touching or something.
It's certainly pushing the boundary in terms of what is and isn't AI slop, and I'm sure it doesn't violate the rules (for obvious reasons).
But even though it doesn't trigger obvious alarm bells, my eyes did glaze over when you launched into the AI slop listicle format and started delving into details that nobody really gives a darn about.
At the very least I'm pretty sure your listicle headers are straight from the mouth of a computer, not a human.
Red Team Testing
Implement systematic "penetration testing" for the oversight system. Create fictional cases of people who clearly should not qualify for assisted dying [em dash, maybe filtered]: someone with treatable depression, a person under subtle family pressure, an elderly individual who just needs better social support ...
I seriously seriously doubt these words were typed by human fingers.
Aaaand even if somehow those words were typed by human fingers, you would never have written anything nearly close to this essay if it weren't for the corrupting influence of AI. Talking to robots has corrupted and twisted your mind, away from a natural human pattern of thought into producing this meandering and listless form that somehow traces the inhuman shape of AI generated text. It lacks the spark of humanity that even the most schizo posters have: the thread of original thought that traces through the essay and evolves along with the reader.
This seems like cope.
I am not surprised it seems like cope to an account created specifically to defend this OP's premise.
Welcome to the Motte, by the way. I look forward to your unique and diverse posting interests going forward.
No, it is about Israel because nobody is getting deported over DEI. Top federal officials aren't devoting their full attention to girls yelling at guys wearing USA shirts. Not a single person has had the book thrown at them for "anti-white racism".
I believe what the Trump Admin does, not what it says.
'Believing what the Trump Admin does' would entail recognizing that no one is getting deported over FEMA funds at all, which is what this is about, whereas this exact event is proposing a non-joo-related basis to throw the book at people.
These may not be the doings that the OP and/or you wish to acknowledge, but that is the sort of thing the OP is typically inclined to obfuscate.
Of course, Trump also is pitting the interests of his Jewish donors against the interests of "America First" voters who didn't sign up for endless glazing of a foreign country. The Democrats didn't need any help to provoke a civil war; Joe Biden did that all on his own. By wading in he's provoking an avoidable Republican civil war instead.
There is no Republican civil war about using Democratic Party shibboleths as a potential legal action trigger against members of the Democratic Party.
There has been plenty of wishful thinking by would-be leaders of the right that [their special interest] would be the straw that broke the Trump coalition's back since theirs was the Truly Popular position, but such as it has long been and so it will be going forward.
On the contrary, it looks like Trump is himself being baited into an untenable position by his donors/blackmailers. Unconditional support for Israel to the point of punishing American citizens is taking the 20 on an 80-20 issue.
'Trump is being bribed / blackmailed into un-American activities to the disgust of all true Americans' has been a political attack line longer than his time in office. It remains as credible as ever.
- Therapy is better than you think.
I don't really want to write an entire novel on research and stuff but the short version is that medical research is hard, and research on anything that involves people and society is also hard. This results in seemingly low effect sizes for therapy, but that shit really does work. It's not necessarily going to work for every patient, situation, or (critically) therapist.
Part of the problem is that we have a large number of low-skill therapists, incorrect patient-therapist/modality matches, incorrect indications, and the whole therapy culture thing.
CBT and DBT have excellent evidence bases for instance and are meant to be highly structured with clear end points. We also have a pretty good understanding of what patients and situations should use each of those therapy modalities.
PTSD treatment is done through therapy and can be quite effective.
For many common conditions you very much need both medication and therapy (and using only medication, with the resulting poor efficacy, is the other side of the psychiatric complaint coin).
However, most presentations of therapy you see on the internet are people getting matched to a random low-skill therapist they don't vibe with and indefinitely engaged in a process that is never explained to them, which therefore feels like just venting.
That's not the real thing, in the same way paying your friend who is a college athlete to help you isn't the same as getting actual PT.
However, low-skill therapy is probably better for society to have around than nothing, and high-skill therapy can be extremely expensive, so we are stuck with this.
- AI therapy is ASS (well, so is much of real therapy too).
The preliminary research seems pretty good but a lot of psychiatrists are essentially betting their careers that some of the usual business is happening: motivated research looking for the "right" conclusion, poor measures of improvement (patients may feel subjectively supported but don't have an improvement in functional status), and so on. Every time The New Thing comes out it looks great initially and then is found to be ass or a bit more sketchy.
The lack of existential fulfillment provided by AI, overly glazing behavior, and a surplus of Cluster-B users and psychotic users receiving validation of their delusions will lead to problems, likely including a good number of patients who end up actually dangerous and violent.
If the tools don't improve drastically and quickly (which they probably will), I'd expect a major terror event, then STRONG guard rails.
You see some reports on social media of doctors finding their patients encouraged to do some really bad shit by a misfiring chatbot.
This seems like cope.
Saying this is about Israel is as misleading as saying it is about DEI, or immigration, in isolation. It's not about any one of these things; it's about the collection of progressive/Democratic coalition shibboleths, any of which is sufficient for the goal.
No, it is about Israel because nobody is getting deported over DEI. Top federal officials aren't devoting their full attention to girls yelling at guys wearing USA shirts. Not a single person has had the book thrown at them for "anti-white racism".
I believe what the Trump Admin does, not what it says.
Both of these, in turn, put the Democratic coalition in conflict with itself, by putting the fiscal interests of Democratic political machines (the establishment politicians who need federal money, but also want to stay out of jail) against the partisan interests of the progressives (who want the shibboleths and the money, but care less for the Democratic establishment). Given what's already been written about the ongoing Democratic civil war, and the mid-term prospects, the worse the conflict of interests in the Democratic Party, the better.
Of course, Trump also is pitting the interests of his Jewish donors against the interests of "America First" voters who didn't sign up for endless glazing of a foreign country. The Democrats didn't need any help to provoke a civil war; Joe Biden did that all on his own. By wading in he's provoking an avoidable Republican civil war instead.
This, in turn, aligns with the demonstrated practice of the last half year or so of how the Trump 2 administration has been baiting / luring political opponents into untenable positions, where it will gleefully enforce the laws against the opposition from a position of legal strength.
On the contrary, it looks like Trump is himself being baited into an untenable position by his donors/blackmailers. Unconditional support for Israel to the point of punishing American citizens is taking the 20 on an 80-20 issue.
Thanks. We're fine; the neighbors will need the services of a roofer and a glazier, though.
A quick search indicates that this forum saw its first use of "glaze" in this sense 11 months ago.