07mk

1 follower   follows 0 users   joined 2022 September 06 15:35:57 UTC
User ID: 868
Verified Email

No bio...
As soon as some company invents a good version of a full male AI celebrity and provides it en masse to teenage girls, it will have the capacity to oneshot them all.

Is this actually possible, though? A celebrity, by definition, has to be famous, and if they're famous, they'll be in demand by lots of other teenage girls (one might say that this demand, rather than the celebrity status itself, is the more directly attractive feature of any such celebrity). Now, an AI celebrity could theoretically have genuine, heartfelt one-on-one conversations with a million different teenage girls at the same time. But would that simulacrum of a relationship with a celebrity be good enough?

I suspect that the lack of rivalrousness will substantially take away from the power of celebrity. If you're the one woman in 4 billion who gets to date Leonardo DiCaprio, that probably makes you feel much better than being one of 2 billion who are dating Leonardo DiCaprio at the same time, even if Leo is able to give you his full, undivided attention 24/7, just like he can with the other 2 billion women.

That which can be destroyed by the truth should be, except if destroying it would have huge cost in terms of negative utils

This seems functionally identical to "I will exercise complete, arbitrary freedom to pick and choose which truth-destroyable things to destroy, while feeling morally virtuous along the way." Human bias being what it is, if you dislike an outcome for any reason, any "good-faith, honest" calculation of the utils of that outcome will certainly come out negative, and sufficiently so to meet whatever bar is needed to justify avoiding that outcome.

The shorter quotation is going to be wrong sometimes, but that's expected of any simplistic pithy line that tries to describe huge, overarching principles in ethics. I think it's more useful than this longer one which makes no concessions or commitments at all to any principles beyond one's own whims and preferences.

If AI GFs / AI generated porn becomes good and cheap enough, I fully expect their human-generated variants to crash.

This will be interesting to see play out as the tech improves. I'd fully expect a sort of bimodal distribution, with cheap/free AI-generated porn taking over much of the market, alongside expensive niches where verified human performers make bespoke videos for people who demand footage of the real thing, or text typed out by real thumbs. As generative AI gets better, a credible way of verifying that the performer/chatter is real might be downright impossible. Perhaps professional organizations could rise up to certify that performers are actually performing the old-fashioned way, but how would they gain credibility?

But if that happens, we could see a landscape of basically free, basically limitless custom AI-generated porn that makes the current Pornhub look limited and small in comparison, along with expensive, luxury-priced services that guarantee the real fake GFE with a real human who is really filmed for the videos she produces and really types out her messages. Pornhub itself should probably look into pivoting to the former, while I wonder if OnlyFans could find a way to build an organization that can credibly certify its performers as real humans doing real things on camera, to capitalize on the demand for that.

Also, apropos of nothing, this reminded me of how so many pro-trans-rights-activist comics I've seen on social media feature a punchline of someone with the wrong opinions being murdered, and how slogans like "punch a transphobe" or "throw bricks at transphobes" tend along those lines. My pet theory is that this is due to TRAs being dominated by people with far more testosterone than a typical woman, but who also have a tendency to indulge their emotional urges, as is the common stereotype of a woman in contrast to a man.

Yes, but are they really all that wrong to model them — or at least some ideas — that way?

I think they're mostly wrong. There's some truth in that knowledge of an idea can spread by the idea being publicized, and knowledge of an idea is obviously a prerequisite for belief in its veracity, so if you squint you can see the truth of it. But modeling belief in an idea as something helplessly thrust upon a person, like a cold virus, is far less useful than modeling ideas as things that people can and do accept and reject. Not always based on reason and logic - not often based on reason and logic, even - but not helplessly.

I mean, isn't this a key part of why, traditionally, heresy was considered such a serious matter?

I think the belief that ideas spread that way is a key part of why, traditionally, heresy was considered such a serious matter. There are many things that people have done traditionally based on the belief that something is true.

One thing AntZ certainly got right about ants that A Bug's Life didn't was the fact that ants have 6 legs. The 4-limbed ants in the latter film used to bug (heh) the hell out of me as a kid.

On the sensation of hunger, and why I'm using the specific wording "the sensation of hunger" and not simply the term "hunger": this is part of the meditative practice that I think has allowed me to maintain the weight loss. In Buddhism we talk about dependent phenomena and conditional arising, and the fundamental emptiness of all such things. In this understanding, hunger is not an indication of needing to eat, nor even related at all to the nutritional state of my organism; it's a sensation, like the temperature of the air or ambient sounds. It never, ever, ever goes away. If I am awake, I am hungry. Starving. Even now that I'm "better", I'm hungry from the moment I wake until I return to sleep. No amount of eating of any type of food has any effect whatsoever on my sensation of hunger. In fact, eating generally makes me even hungrier, as well as exhausted. I could eat so much food that I had trouble walking, feel like I was on the verge of vomiting from how stuffed I was, and still be starving. I think something like this drives the behaviors of many, if not all, obese people to some extent. I am fortunate that the same techniques I use to manage chronic pain work pretty well with chronic, inescapable hunger.

My experience of weight loss is quite different from yours, but this portion speaks to me. I became fat as a child and was obese into my low 20s (around 31 BMI) before dieting down to a normal weight (around 22 BMI) over the course of about a year, and I've maintained it for a couple of decades now. Before I started dieting in my low 20s, I felt like a slave to my hunger, helpless to do anything but eat until I was sated. Then, when I started following CICO, I realized that hunger was just a sensation that I could choose to follow or not.

I think that one insight was the key to helping me lose weight after I'd failed so much in my teenage years and low 20s. Instead of focusing on strategies to keep myself from being hungry, it was learning to see hunger as just a sensation that can be ignored. I wish there were a way to transfer this to people who constantly complain about how being hungry makes them unfocused, angry, distracted, etc., but it seems to me that, sadly, not everyone is capable of it.

4 is indeed the one with the Dragon's Breath rounds. They were featured in a really popular no-cut scene that drew a lot of inspiration from Hotline Miami with its top-down camera view. The visual of John Wick shooting a shotgun at enemies who blow up in flames was pretty cool, but the scene was definitely overrated, with the top-down view basically negating the benefit of a no-cut scene, which is usually supposed to give a visceral, exciting sense of actually being there in the middle of the action.

I'd say you didn't miss much by missing that, but you did miss the best scene of the film, the long take of John Wick being kicked and rolling down several flights of stairs (the actual gun combat scene surrounding that was pretty meh).

I agree with this take a lot. I think there's a real "if there's a table with 10 people and 1 of them is a Nazi, then it's a table with 10 Nazis" kind of phenomenon going on here. It probably doesn't exclusively go one way, but I seem to observe it always going one way, which I think reflects the way the modern mainstream left models ideas as akin to infectious diseases, which can spread from person to person merely through contact and which can contaminate an entire area merely by existing in one section of it.

Even many people who are aware, in principle, that echo chambers exist seem to have a remarkably poor time recognising when they've found themselves inside one. Echo chambers, like "biases", are things that happen to other people. I'm actually not persuaded that the average person with an undergraduate degree would be better equipped to recognise that they're in an echo chamber than the average person without an undergraduate degree.

Empirically, I can't disagree. What I find confusing is that everything you wrote here is also basically common knowledge. Everyone who knows anything about bias knows that considering oneself above the biases other people fall for is itself a very common bias. As such, if you observe other people's biases and think yourself above them, the obvious conclusion is that you're falling prey to exactly that bias, and that you should break out of it by seeking out objective research that challenges you.

At least, if you're motivated to write a good work of fiction that can appeal to people outside of your echo chamber. I have to conclude that a high proportion of major fiction writers have no such motivation. The hunger for status within one's echo chamber is often greater than the hunger for money, I suppose.

I very much disagree that college students know that they are sheltered and don’t have life experiences.

You're probably correct on this. But it's still confusing to me why. Everyone knows that everyone is missing something due to having limited experiences. Everyone knows that they fall under the category of "everyone" and therefore must be missing something. It doesn't take much research to find out that life in the modern West, even as a lower-class person, is extremely sheltered and protected compared to the norm of humanity. College students have disproportionately high access to research material and disproportionately high experience doing research. If they truly want to write a good novel or film script about a setting or characters they have little personal experience with, anyone of middling intelligence in that situation should be able to put 2 and 2 together and realize that they need to step out of their bubble and dive into research to learn about lives and circumstances far different from their own.

Which is why I have to conclude that these people don't have motivations to write good fiction.

It does directly address the issue, and it has nothing to do with hypocrisy. The issue being raised is that LLMs are fundamentally unreliable due to being unfixably prone to hallucinations. The way it's addressed is by pointing out that humans are similarly unreliable at a fundamental level, yet we've built reliable systems out of them, which proves by example that fundamental unreliability isn't an insurmountable hurdle to building reliable systems.

I don't understand how this doesn't address the issue in the most direct, straightforward way possible, while avoiding anything to do with accusations of hypocrisy. The only way it could be better is if someone actually provided a specific method of building reliable systems out of modern LLMs.
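For concreteness, here's one classic pattern for getting reliability out of unreliable components: redundancy plus a quorum. This is a toy sketch of my own (the unreliable_worker stand-in and all the numbers are made up; it's not anyone's production method), but the shape is the same one we already use with humans.

```python
import random
from collections import Counter

def unreliable_worker(question: str, error_rate: float = 0.2) -> str:
    """Stand-in for any unreliable answerer, human or LLM:
    right most of the time, wrong the rest."""
    return "wrong" if random.random() < error_rate else "right"

def reliable_answer(question: str, n: int = 7, quorum: int = 5):
    """Ask n independent unreliable workers; accept an answer only
    if at least `quorum` of them agree, otherwise escalate (None).
    With independent 20% error rates, 5-of-7 agreement on the wrong
    answer happens under 1% of the time."""
    votes = Counter(unreliable_worker(question) for _ in range(n))
    answer, count = votes.most_common(1)[0]
    return answer if count >= quorum else None

if __name__ == "__main__":
    outcomes = Counter(reliable_answer("2 + 2?") for _ in range(10_000))
    print(outcomes)  # "wrong" wins the quorum far less often than 20%
```

Juries, code review, and double-entry bookkeeping are all the same trick applied to unreliable humans.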

Also, given the falls he survives in 3 and 4, it seems the bulletproofness applies to all impacts. I was actually reminded somewhat of the Die Hard franchise, where John McClane went from a competent off-duty cop barely making it out of his depth in 1 to a Marvel superhero in 4, taking down a fighter jet with a Mack truck and then surviving a slide down a crumbling bridge. Hollywood might have something of an issue with making everything bigger and more over-the-top in sequels.

Now, if you feel that I've been unfairly dismissive, antagonistic, or uncharitable in my response towards you, then perhaps you might begin to grasp why I hate the whole "bUt HuMaNs ArE FaLaBlE ToO UwU" argument with such a passion. I'm not claiming that LLMs are unreliable because they are "less than perfect"; I am claiming that they are not merely unreliable, but unreliable by design.

I don't understand why anyone would hate that argument. Humans are also unreliable... not by design, perhaps, but intrinsically, due to the realities of biology. The point of the argument is that, even though humans are intrinsically and inescapably unreliable, we still manage to build reliable systems around relying on them, and as such, the intrinsic, inescapable unreliability of LLMs doesn't make them incapable of serving as the basis of reliable systems.

There are good arguments to be made against this. It's possible that we can't get LLMs' unreliability lower than humans' at the same cost. It's possible that, even if we could, the nature of LLMs' unreliability will always remain less predictable than that of humans, in a way that makes building reliable systems on top of them impossible. The fact that LLMs can't be shamed or punished for failures of reliability could be a fatal flaw for creating reliable systems based on them. And there are probably a myriad of other, better reasons I haven't even thought of.

But I'd like to actually see those arguments being made. Maybe that video you say you linked makes them, but I'm one of those users of a text-based forum like this who have neither the interest nor the ability to watch long-form videos during normal usage of the forum.

If she's the type of person who would quit over her company's LLM generating text like that, then it's certainly a good thing that she did quit.

I think there's some truth to this argument, and I've seen people point out examples like how Tolkien was a World War I veteran, which helped shape his writing, versus modern TV shows that depict military officers in some sci-fi or medieval setting bantering with each other like coworkers at a Starbucks. But I'm also left thinking that this just moves the question back a step.

Everyone knows that life experiences can enrich one's fiction writing. Everyone knows that sheltered people exist. Everyone knows that echo chambers exist. People educated in colleges are often even more aware of these things than the typical layman. Therefore, if I'm a sheltered college graduate wanting to write the next great American novel or the script to some TV show or film I'm pitching, I'm going to do as much research as I can to get past the limitations of my sheltered upbringing and limited experiences. I'm going to dive into research - at a bare minimum, a search on Wikipedia, which it's quite evident that many of these writers didn't even care to do - to present the characters and settings as believably and compellingly as possible, reflecting what someone with true life experience of those things would have written, even if I never had that experience to draw from myself.

It seems evident to me that very little of that kind of research, done to break out of one's own limitations, is occurring in professional TV and film writing - perhaps in all fiction writing. This speaks to a general lack of passion or pride in the work being put out, a lack of desire to actually put together something good. Perhaps it reflects an education that says writing is primarily about expressing your true self or whatever, not about serving the audience. That would also, at least partially, explain why so much criticism is directed at the audience when these projects fail because the (potential) audience refuses to hand over their money for the privilege of viewing them.

Computer scientists call their field computer science despite it being more about mathematics and logic than science, and despite the field having far less to do with computers than one might expect.

Sure, and when I say that I have a "theory" about who took the cookies from the cookie jar, it doesn't meet the same bar that the "theory of relativity" or "theory of evolution" meet in terms of scientific evidence and consensus. That doesn't make my theory not a theory, it just reflects the squishiness of word definitions. Likewise for "science" and "intelligence."

Normies have been calling computer opponents in video games "AI" since the 80's despite them knowing that they clearly aren't "intelligent"

I disagree. I think people do consider, say, the ghosts in Pac-Man or the imps in 1993's Doom "intelligent." Not sentient, not logical, not conscious, but certainly intelligent. Hence the willingness to use the term "enemy artificial intelligence" to describe them. This willingness reflects a (possibly subconscious) understanding that "intelligence" doesn't require sentience, consciousness, logical thinking, etc.
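To make that concrete, the entire "intelligence" of a Pac-Man-style chaser can be a few lines of greedy pathing. This is a hypothetical sketch of my own, not the actual arcade logic:

```python
# Goal-directed enough to read as "intelligent" to a player,
# with no sentience or consciousness anywhere in sight.
MOVES = [(0, -1), (0, 1), (-1, 0), (1, 0)]  # up, down, left, right

def chase_step(ghost, player, walls):
    """Return the ghost's next position: the legal neighboring
    tile with the smallest Manhattan distance to the player."""
    options = []
    for dx, dy in MOVES:
        nxt = (ghost[0] + dx, ghost[1] + dy)
        if nxt not in walls:
            dist = abs(nxt[0] - player[0]) + abs(nxt[1] - player[1])
            options.append((dist, nxt))
    return min(options)[1] if options else ghost

print(chase_step((5, 5), (2, 5), walls={(4, 5)}))  # (5, 4): routes around the wall
```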

I don't buy your appeal to normal people here. I think that most normal people do not think that chatbots are intelligent.

It's hard to say what "normal people" think about this (or even what "normal people" are), but in my experience, people I would consider in that category use the label "AI chatbots" to describe things like ChatGPT or Copilot or Deepseek, while also being aware that "AI" is short for "artificial intelligence." This seems fundamentally incompatible with believing that these things aren't "intelligent."

Now, almost every one of these "normal people" I've encountered also believes that these "AI chatbots" lack free will, sentience, consciousness, an internal monologue, and often even logical reasoning abilities. "Stochastic parrots" or "autocomplete on steroids" are phrases I've seen used by the more knowledgeable among them. But given that they're still willing to call these chatbots "AI," I think this indicates that they take "intelligence" to mean something that doesn't require such things.

I don't think "optimal" needs such an asterisk. That's encoded in the word itself. I think the beauty in the phrase is in how it obscures; it is, on its face, offensive in a way that gets cleared up when the reader or listener slows down and considers what "optimal" actually means and how that challenges the black-and-white thinking that tends to be typical in discussions relating to things that are almost universally considered "good" or "bad."

I disagree. I think that part of the argument isn't being elided; it's being summarized by the word "optimal." At least, it's being summarized at least as well as in the phrase "the cost of reducing fraud to zero is too high to be worth it," and that latter phrase is far less elegant and beautiful than the much simpler, pithier "the optimal amount of [bad thing] is not zero" while not really adding any extra meaning or explanation.
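A toy model (with numbers I've made up purely for illustration) shows exactly what "optimal" is summarizing: when fraud losses fall as enforcement spending grows, but with diminishing returns, total cost bottoms out at a non-zero amount of fraud.

```python
# Made-up toy model: each unit of enforcement spending costs 1,
# and fraud losses shrink hyperbolically as spending grows.
def total_cost(enforcement: int) -> float:
    fraud_losses = 100.0 / (1.0 + enforcement)
    return enforcement + fraud_losses

best = min(range(200), key=total_cost)
print(best, total_cost(best))  # 9 19.0 -> the optimum tolerates 10 units of fraud
# Pushing fraud toward zero (enforcement=199) costs ~199.5 in total,
# i.e. far more than the fraud it eliminates.
```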

I'm sure many people with greater expertise in pedagogy than me could come up with better ideas, if they looked at the students based on truth rather than wishful thinking. One idea that comes to mind is having different tracks based on student competency, and making sure that school performance isn't measured by overall achievement but by how well the students on each track meet that track's goals. And perhaps the lower tracks could focus on career training for low-IQ jobs instead of academics, at least beyond the 3 Rs.

GuessWho quite literally answered a direct question about whether he was the Reddit user darwin2500 with "Yes, obviously."

It's possible that GuessWho is a lying liar. But I'd say at the very least, the preponderance of evidence points to them being the same person.

I don't have any examples off the top of my head, since, again, I stopped interacting with him after he revealed that GuessWho was his account, and part of what made TheMotte better than the subreddit for me was the absence of that user. You can probably find plenty of examples if you just go to GuessWho's user profile, where I see that his last comment here was about a year ago.

I've read and interacted with Darwin2500 a lot, both on the OG SlateStarCodex site and on Reddit, and as someone who's ideologically aligned much more with him than with the modal commenter in these places - back in ye olde dayes of Trump's 1st term, I'd say there was basically no daylight between our political beliefs - I couldn't stand his arguments for being so transparently bad-faith and dishonest that they made our side look evil or stupid or both. There are plenty of great arguments that can be made in favor of left-wing/progressive ideology, and Darwin2500 basically never made them, opting instead for overt, blatant bad faith, often using (off the top of my head) Bulverism and the non-central fallacy (i.e. the Worst Argument in the World, as coined by Scott Alexander).

Every time someone has accused me of being bad faith on this site, it's been exactly that: a stronger, somewhat more intellectual way of saying "I disagree with you".

Besides this, you've also said elsewhere that plenty of right-wingers have resorted to making a series of personal attacks on you without getting modded. Do you have any examples of either? I don't read every comment on TheMotte, or even most of them, or even most comments that you personally make, but I don't recall a single example that matches this description.

People just hated Darwin since he was unabashedly left-wing.

Hard disagree. Darwin had a particular style of bad faith in the way he argued his left-wing positions that made left-wing arguments appear dishonest and manipulative, and that's why I personally was glad he didn't come to this site and stopped interacting with GuessWho once GuessWho revealed that he was Darwin2500 from Reddit.

Presuming that all that stuff about IQ in HBD is true, we can make those schools more efficient at turning 75 IQ people into 90 IQ people, and measure our success by how well the schools accomplish this, instead of being upset that we're not consistently turning 75 IQ people into people capable of working 120 IQ jobs and trying to fix that by pouring ever more money into a futile project.