That's an interesting and fair point. Obviously a Big Mac in NYC is not substantially better than a Big Mac in Boise, but at the same time there is probably some amount of value in what you said about the Big Mac in NYC being worth more because you can eat it while being in NYC; that convenience is real.
I'll have to think on this more. Would you mind expanding on the "But PPP would disagree with you there" part?
Also, in general, PPP aside, I just think it's ridiculous to make sweeping judgements about the subjective value of things to other people. It's not just clearly wrong; it adds very little value to a conversation to say "I like X more than Y, thus X is better in all cases".
Slowly, snail-fashion, glacially, at a pace I dread to think about (because at this rate, I will not get anything done in my life), I have chewed away at my Actor positioning/scaling/parenting issues in Unreal. I'm getting a progressively better handle on it, but it's still infuriating: very similar to Unity/Godot, but off by just enough to confound me at every turn.
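For anyone fighting the same battle, here's a minimal sketch of the kind of "off by just enough" I mean, in UE5-style C++ (the helper function names are mine; AttachToActor and FAttachmentTransformRules are the actual engine API). Unity hides the keep-world-position decision in a single bool on transform.SetParent, while Unreal makes you pick explicit rules on every attach:

```cpp
#include "GameFramework/Actor.h"

// Hypothetical helpers mapping Unity habits onto Unreal's attachment API.
void AttachLikeSetParentTrue(AActor* Child, AActor* Parent)
{
    // Unity: child.transform.SetParent(parent, /*worldPositionStays=*/ true);
    // Unreal: the child keeps its world transform; nothing visibly moves.
    Child->AttachToActor(Parent, FAttachmentTransformRules::KeepWorldTransform);
}

void AttachLikeSetParentFalse(AActor* Child, AActor* Parent)
{
    // Unity: child.transform.SetParent(parent, false) reinterprets the child's
    // current transform values as being relative to the new parent.
    Child->AttachToActor(Parent, FAttachmentTransformRules::KeepRelativeTransform);
    Child->SetActorRelativeLocation(FVector(0.f, 0.f, 100.f)); // 100 units above parent
}
```

And scale is its own trap: FAttachmentTransformRules carries a separate scale rule alongside location and rotation, which is usually where my Unity instincts betray me.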
Sure, I agree with this for the most part, although "as they should" is really funny given the proven lack of efficacy in the real world.
I find the distinction somewhat unneeded, because I'm only concerned with actual effectiveness, not with semantically splitting it into component parts.
My thesis is that telling your kids not to have sex is demonstrably a bad way of preventing teenage pregnancy, and to think otherwise is to be willfully ignorant, generally due to ideology.
I guess in general, what I'm also trying to say is that if something is theoretically effective but isn't actually effective in practice, then who cares? What matters is what real humans do in real life, not what hypothetical outcomes could happen if hypothetical humans did or did not do things (especially when we know the real humans won't act like the hypothetical humans).
I don't know basically any Japanese, and I'm pretty far from streaming culture -- but based on my knowledge of how these things work I'm pretty sure both are failing badly in their interpretation of 'support on live stream'; I'm pretty sure he means that he plans to, like, give her money? Probably not all of his money ("support her with everything I've got"), but certainly "cheering my lungs out" would be atypical behaviour on a livestream, no?
Thanks!
Hm. It may not be as applicable to my topic as I thought. "Without getting in too much trouble" doesn't seem very accurate at the current stage.
Edit: No, I'm wrong. It remains accurate. It's not like POTUS Trump is getting into any trouble he wouldn't have been in anyways.
So the difference between dating a 21-22 year old and a 28-29 year old on a maturity level is often negligible.
What's your sample? As a highly social individual / serial dater between long-term relationships, I've noticed that there are shocking differences in maturity between even a 26-year-old and a 28-year-old. To be clearer, when I was 29-ish I found nearly every 26-year-old I dated (n=6ish) insufferably superficial and indecisive, but I had much more success the further into the late 20s I went. That's when you usually get your first big pay raise, graduate from post-secondary education, change jobs, move cities, etc. These are all highly formative events that may afford you different privileges or even humble you. As a woman, you may even shift your dating priorities from "want to find love" to "want to find someone suitable to raise children with".
But, my bias is urban and at least the "some university" bullet option on the census form - i.e. since high school, I haven't dated anyone who only has a high school education.
And yeah, a lot of dudes don't mature much through their 20s either.
My bias is also that most people who do not seek complexity in their life (not a value judgment, just an observation) beyond the age of 22 also do not tend to develop personalities beyond the age of 22 - they are essentially frozen in time. In comparison to individuals who do stretch themselves, those "frozen in time" tend to appear less emotionally and socially mature. Those groups also highly correlate with people who chose to (or accidentally) have children "early" (< 22) - but I don't personally believe that's necessarily causal in either direction. It also brings to mind the insult "peaked in high school" which I think has some classist / blue-tribe-on-red-tribe undertones.
Sometimes those emotionally or socially stunted people have a midlife crisis or some sort of later-in-life mellowing that causes a shift ("Barry really got his life together!"). In sadder scenarios, they may fall into alcoholism or other crippling addictions that are associated with an underdeveloped prefrontal cortex. In the worst case, they get elected to Congress because they manage to get other like-minded people to the voting booth just by screaming and tweeting that complex problems have simple solutions (populism).
Personally speaking, I had some major shifts in maturity at around these ages:
- 12 (puberty)
- 17 (parental independence)
- 21 (humility through a challenging experience)
- 24 (first big job / no longer a "broke college kid")
- 27 (end of first long-term relationship / lots of dating / big pay raise)
- 30 (mortgage / no longer talk shit at pick-up basketball)
I was an insufferable asshole at the age of 20. I'm still an insufferable asshole, but in a much different way now.
Aside, as I didn't want it to detract from the thrust of my main statement:
Most women don't take bad experiences and learn from them and improve... they become more bitter about it, and it makes them less appealing overall
This sounds like a character problem, not an estrogen problem. I've met plenty of bitter men who never learn from their bad experiences.
You used to get this sorta thing on ratsphere tumblr, where "rapture of the nerds" was so common as to be a cliche. I kinda wonder if deBoer's "imminent AI rupture" follows from that and he edited it, or if it's just a coincidence. There's a fun Bulverist analysis of why religion was the focus there and 'the primacy of material conditions' from deBoer, but that's even more of a distraction from the actual discussion matter.
There's a boring sense where it's kinda funny how bad deBoer is at this. I'll overlook the typos, because lord knows I make enough of those myself, but look at his actual central example, that he opens up his story around:
“The average age at diagnosis for Type II diabetes is 45 years. Will there still be people growing gradually older and getting Type II diabetes and taking insulin injections in 2070? If not, what are we even doing here?” That’s right folks: AI is coming so there’s no point in developing new medical technology. In less than a half-century, we may very well no longer be growing old.
There's a steelman of deBoer's argument here. But the one he actually presented isn't engaging, in the very slightest, with what Scott is trying to bring up, or even with a strawman of what Scott was trying to bring up. What, exactly, does deBoer believe a cure for aging (or even just a treatment for diabetes, if we don't want to go all tech-hyper-optimism) would look like, if not new medical technology? What, exactly, does deBoer think of the actual problem of long-term commitment strategies in a rapidly changing environment?
Okay, deBoer doesn't care, and/or doesn't even recognize those things as questions. It's really just a springboard for I Hate Advocates For This Technology. Whatever extent he's engaging with the specific claims is just a tool to get to that point. Does he actually do his chores or eat his broccoli?
Well, no.
Mounk mocks the idea that AI is incompetent, noting that modern models can translate, diagnose, teach, write poetry, code, etc. For one thing, almost no one is arguing total LLM incompetence; there are some neat tricks that they can consistently pull off.
Ah, nobody makes that claim, r-
Whether AI can teach well has absolutely not been even meaningfully asked at necessary scale in the research record yet, let alone answered; five minutes of searching will reveal hundreds of coders lamenting AI’s shortcomings in real-world programming; machine translation is a challenge that has simply been asserted to be solved but which constantly falls apart in real-world communicative scenarios; I absolutely 100% dispute that AI poetry is any good, and anyway since it’s generated by a purely derivative process from human-written poetry, it isn’t creativity at all.
Okay, so 'nobody' includes the very person making this story.
It doesn’t matter what LLMs can do; the stochastic parrot critique is true because it accurately reflects how those systems work. LLMs don’t reason. There is no mental space in which reasoning could occur.
This isn't even a good technical understanding of how ChatGPT, as opposed to just the LLM, works, and even if I'm not willing to go as far as self_made_human does against people raising the parrot critique here, I'm still pretty critical of it. But the more damning bit is that deBoer is either unfamiliar with or choosing to ignore the many domains where LLMs have been evaluated, in favor of One Study Rando With A Chess Game. Will he change his mind if someone presents a chess-focused LLM with a high Elo rating?
I could break into his examples and values a lot deeper -- the hallucination problem is actually a lot more interesting and complicated, and questions of bias are usually just smuggling in 'doesn't agree with the writer's politics', though there are some genuine technical questions -- but if you locked the two of us in a room and only provided escape if we agreed, I still don't think either of us would find discussing it with each other more interesting than talking to the walls. It's not just that we have different understandings of what we're debating; it's whether we're even trying to debate something that can be changed by actual changes in the real world.
Okay, deBoer isn't debating honestly. His claim about the New York Times fact-checking everything is hilarious, but he links to a special issue of which he literally claims "not a single line of real skepticism appears", even though its first headline is "Everyone is Using AI for Everything. Is That Bad?" and it includes the phrase "The mental model I sometimes have of these chatbots is as a very smart assistant who has a dozen Ph.D.s but is also high on ketamine like 30 percent of the time". He tries to portray Mounk as outraged by the "indifference of people like Tolentino (and me) to the LLM 'revolution.'" But look at Mounk's or Tolentino's actual pieces, and there are actual factual claims they're making, not just vague vibes they're bouncing off each other; the central criticism Mounk has is whether Tolentino's piece and its siblings are actually engaging with what LLMs can change, rather than complaining about a litany of lizardman evils. (At least deBoer's not falsely calling anyone a rapist, this time.)
((Tbf, Mounk, in turn, is just using Tolentino as a springboard; her piece is actually about digital dissociation and the increasing power of AIgen technologies that she loathes. It's not really the sorta piece that's supposed to talk about how you grapple with things, for better or worse.))
But ultimately, that's just not the point. None of deBoer's readers are going to treat him any less seriously because of ChessLLM (or because many LLMs will, in fact, both say they reason and quod erat demonstrandum), or because deBoer turns "But in practice, I too find it hard to act on that knowledge." into "I too find it hard to act on that knowledge [of our forthcoming AI-driven species reorganization]" when commenting on an essay that does not use the word "species" at all, only uses "organization" twice (in the same paragraph, to talk about regulatory changes), and where "that knowledge" is actually just Mounk's (imo, wrong) claim that AI is under-hyped. That's not what his readers are paying him for, and that's not why anyone who links to him in even a mildly laudatory manner is doing so.
The question of Bulverism versus factual debate is an important one, but it's undermined when the facts don't matter, either.
I mentioned it in another comment, but the typical workflow for intermediate speakers would be to write the work directly in the target language rather than translating it. Machine translation with LLMs is certainly pretty good right now, but using it doesn't help anyone learn the language.
We certainly might be heading towards a world where everyone uses machine translation by default and nobody bothers learning a new language, but I'm certainly a luddite in that lane.
the availability of ChatGPT represents a massive improvement over the previous status quo.
I don't agree. Better translation tools like DeepL have been around for a while, and arXiv papers haven't shown that GPT-series models seriously dominate dedicated translation models. But on the other hand, ChatGPT is giving everyone a huge gun that people can shoot themselves with, because it does things besides translate.
I would even argue that, by virtue of using ChatGPT wrong, the user ended up with a worse result than if they'd just used a shitty translation tool like Google Translate.
A week ago I said that I'd finished cutting stuff out of the first draft of my NaNoWriMo project and was ready to start adding new things in. On reflection I decided I hadn't killed quite enough darlings yet, so I'm halfway through a second pass. It's now down to 109k words, with the goal being for the second draft to be no more than 85% of the first draft, i.e. 113k words.
It should be “X’s and my Y”, not “X and I’s Y”. So for example, “Elon’s and my moral systems are deeply at odds.” Like people saying “me and him” instead of “he and I”, something being a common mistake can make it acceptable in everyday speech but does not make it correct usage in a more formal context.
Not only did I reach my "replacement for Nitter" milestone, it seems like it's even a functional replacement for the Miniflux component. I deployed it yesterday and am currently ironing out compatibility issues as they pop up (my web server runs a different OS than my home machine, and not all of the packages available on one are available on the other at the same version). If there are no big surprises I'll push it to GitHub this week.
How are you doing @Southkraut?
What's a "mirror dweller"?
Likewise! And I appreciate you always trying to take the heat out of things. I've rewritten and deleted my own share of posts on things that are hot-button topics for me, so I know it's not easy!
'Don't do [course of action] unless you're going to do it the right way.'
When he says "if you take out Saddam, I guarantee you that it will have enormous positive reverberations on the region... and I think that people sitting right next door, young people, and many others, will say 'the times of such regimes, such despots, is gone'...", to you it sounds like "don't invade Iraq, and if you're going to do it, hit Iran first (or at least do both)"?
When the result of private discussions are later publicized, and have been public for nearly two decades now, it is a distinction without a difference.
If there was no difference, why did they not discuss it in public to begin with?
but absent that we can absolutely blame people for not knowing a historical record exists.
That's like blaming people for being familiar with front-page headline news, but not with the correction notice on page 19, stuck between the obituaries and the classifieds.
-
This is a very bad way to showcase ChatGPT's writing chops. When I use it, by virtue of personally having native fluency, I am far better equipped to critique its output. The original user, a Japanese person only somewhat conversational in English, is simply not in a position to tell where it could have made better stylistic choices. Note that my objection above is to do with the fact that the availability of ChatGPT represents a massive improvement over the previous status quo. If he had simply asked it to translate his original Japanese, it would have done a perfectly fine job.
-
LLMs are not a monolith. When most people talk about ChatGPT, they're ignorant of the fact that there are options beyond the slop-tier 4o. You may or may not know the difference, but the average normie complaining doesn't. My own workflow involves using multiple different models, multiple attempts, prompts that I know qualitatively improve results, and my own ability to discern between outputs.
For example, when I ask for a translation that preserves the writer's rather unique character:
Kimi K2:
Today's stream was absolutely perfect! The second I dropped the comment "Please step on me, Step-on-me!" my oshi, Haachama herself, shot back with "Gross!"—and I was already soaring. Then she hit me with "You're such a perv!" and I just about ascended straight to heaven. I'm floating in pure bliss right now; this is the most peaceful moment of my entire life. And the thing that's got me buzzing more than anything? Haachama's birthday live is going down on Sunday, August 10 at 9:00 PM! I'm ready to cheer my lungs out. Just picturing that day makes me feel like I'm literally sipping her bathwater—yeah, that level of hype!
Gemini 2.5 Pro:
Tonight's stream was PERFECT! I commented, "Funde kuda-STEP ON ME!"* and my oshi, Haachama, actually responded with "Kimoi!" ("Gross!"). And then she even added, "You're such a perv!" THIS IS INSANE!! I feel like I'm in heaven right now. This is the most peaceful moment of my entire life. And to top it all off, the thing I'm looking forward to most is Haachama's birthday live on Sunday, August 10th at 21:00!! I'm gonna support her with everything I've got. Just imagining that day's stream makes me feel like I'm drinking her bathwater.
You're not giving me an answer. Is it an Army or a Navy? Saying "we decide" isn't an answer. What is your hypothetical decision? What are the reasons?
TFW she breaks up with you by text and you notice the em dashes
There's the Dodo Bird Verdict take, where the precise practice of psychotherapy doesn't matter much so long as certain very broad bounds of conduct are followed. If an hour talking with a slightly sycophantic voice is all it takes to ground people, that'll be surprising to me, but it's not bad.
Of course, there are common factors to the common factors theory. Some of the behaviors that are outside of those bounds of conduct can definitely fuck someone up. Some of them aren't very likely for an LLM to do (I guess it's technically not impossible for an LLM to 'sleep with' a patient if we count ERP, but it's at least not a common failure mode), but others are things LLMs are more likely to do that human therapists won't even consider ('oh it's totally normal to send your ex three million texts at 2am, and if they aren't answering right away that's their problem').
I'm a little hesitant to take any numbers for ChatGPT psychosis seriously. The extent to which reporting is always tied to the most recognizable LLM is a red flag, and self_made_human has made a pretty good argument that we wouldn't be able to distinguish the signal from the noise even presuming there were a signal.
On the other hand, I know about mirror dwellers. People can and do use VR applications as a low-stress environment for developing social skills or overcoming certain stressors. But some portion go wonky in a way that I'm really skeptical they would have otherwise. Even if they were going to have problems anyway, I don't think they'd have been the same problems.
((On the flip side, I'll point out that Ani and Bad Rudi are still MIA from iOS. I would not be surprised to see large censorship efforts aimed at even all-ages-appropriate LLM actors, if they squick the wrong people out.))
I don't know whether the article only asked him about the race part or only used his answer for that part
I'm starting to feel like some Motte two-buttons meme: "who is worse, the journalist or the public health expert?" Just joking. Well. Maybe 50/50.
Now that the conversation has run its course, I'll say one last time how much I appreciate your patience and thoughtfulness. We don't always agree but I always enjoy our conversations here.
Same answer. We decide whether it's more like the army or navy for the purpose of being authorized. As both of them are authorized, the answer is easy: it's authorized either way.
I replied to your OP a few weeks ago expressing skepticism that using ChatGPT was actually improving your writing. This comment reinforces my skepticism. Yes, the ChatGPT output has fewer "errors" but it does a worse job of conveying the message than the user's own error-ridden text. Even from a purely stylistic standpoint, the ChatGPT output is worse. One of the hallmarks of bad English prose is using extra words to say nothing, and ChatGPT is guilty of this in virtually every sentence. It's not the perfect being the enemy of the good. The ChatGPT output is not good.
The commissioner is the boss, in charge of all the people who do things. Or more likely in charge of several layers of sub-bosses before you get to the people who do things.
Which is why the quote isn't damning; with that authority comes the responsibility as well.
That would get into the weeds of what exactly PPP means. You could also say that a big mac in NYC is worth more because you can eat it while being in NYC. But PPP would disagree with you there.
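To make the "PPP would disagree" point concrete, here's a toy worked example (the prices are hypothetical, not real Big Mac Index figures). PPP treats the two burgers as the identical good, so the price gap becomes a pure cost-of-living deflator:

$$ r_{\mathrm{PPP}} = \frac{P_{\mathrm{NYC}}}{P_{\mathrm{Boise}}} = \frac{\$7.00}{\$5.00} = 1.4 $$

Under that ratio, a \$70k nominal salary in NYC has the same burger-purchasing power as \$50k in Boise (70k / 1.4 = 50k). The amenity value of eating the burger while being in NYC has, by construction, nowhere to show up in the calculation, which is exactly where the disagreement lives.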
Sadly Turok's discussion of class was less than worthless, and seemed to mostly be about his own unexamined class insecurities. As I said elsewhere, "It's a funny barber-pole-of-status-signaling thing. I have never encountered someone on the internet who is actually upper-class for whom "lower-classness" is an object of vitriol rather than of disinterested study." But bringing that directly into discussion would also violate the norms of this space, such that any discussion from his posts was already drawing from a poisoned well.