The people I was responding to were just as complicit in using metaphors carelessly for impact. It's not exclusively the left's fault that language is a mess. Every culture war meme on the right, "groomer," "murder" (in the case of abortion) etc, is chosen deliberately for impact in the culture war.
In the case of "murder" I think conservatives are using the ordinary definition of the word ("the crime of unlawfully and unjustifiably killing a person") and the disagreement is not over definitions but rather an object level debate about whether abortion is, in fact, murder.
In the case of "groomer" this is clearly a redefinition of the term, and I believe Rufo has admitted this. He popularized the term with the explicit understanding that the term is being redefined in an attempted mimicry or parody of the left's propensity to redefine words.
But I can't think of an example of the right doing what the left frequently does, which is to redefine a term while denying that the term has been redefined. I'm curious if you can think of any examples.
While I agree mostly ... are the top 20% of that page really any worse than 99.9% of popular existing art?
No, it's not any worse, it's about the same, and that's my point. The midjourney stuff is just as crappy as most of the art that gets made today. The "beauty" that OP thinks he's identified is just hyperpalatability. Unlike most modern art, the midjourney art is inoffensive, but that doesn't make it good or beautiful.
I'm not criticizing midjourney, I'm criticizing OP's standards of beauty. It's probably possible to get actual good art out of midjourney in its current form, but OP's examples are crap and were selected based purely on what was most popular among users.
That midjourney stuff is utter pabulum. It's only beautiful by the most shallow and insipid standards of beauty. The kind of "beauty" that would rank Thomas Kinkade's paintings above Rembrandt's, because the former is bright and sparkly while the latter is brown and muddy. Or the kind of "beauty" that would consider N*SYNC's music superior to Bach's because the former's is free of dissonance and the latter's is rife with it.
I don't particularly like the human art you linked either, but at least the artists are trying to do something interesting. We can do better than ugly modern art without resorting to saccharine crap and calling it beauty.
AI gives people what it gets positive feedback from. It gives people what they want.
Marvel movies and McDonalds chicken nuggets are examples of giving people what they want. Mass appeal produces boring hyperpalatability, not greatness.
If you're arguing about why AI will kill us all, yes, you need to establish that it is indeed going to be superhuman and alien to us in a way that will be hard to predict.
I don't even think you need to do this. Even if the AI is merely as smart and charismatic as an exceptionally smart and charismatic human, and even if the AI is perfectly aligned, it's still a significant danger.
Imagine the following scenario:
- The AI is in the top 0.1% of human IQ.
- The AI is in the top 0.1% of human persuasion/charisma.
- The AI is perfectly aligned. It will do whatever its human "master" commands and will never do anything its human "master" wouldn't approve of.
- A tin-pot dictator such as Kim Jong Un can afford enough computing hardware to run around 1000 instances of this AI.
An army of 1000 genius-slaves who can work 24/7 is already an extremely dangerous thing. It's enough brain power for a nuclear weapons program. It's enough for a bioweapons program. It's enough to run a campaign of trickery, blackmail, and hacking to obtain state secrets and kompromat from foreign officials. It's probably enough to launch a cyberwarfare campaign that would take down global financial systems. Maybe not quite sufficient to end the human race, but sufficient to hold the world hostage and threaten catastrophic consequences.
I meant "malignant" in the same sense as "malignant tumor." Wasn't trying to imply any deeper value judgment.
The problem with the nanobot argument isn't that it's impossible. I'm convinced a sufficiently smart AI could build and deploy nanobots in the manner Yud proposes. The problem with the argument is that there's no need to invoke nanobots to explain why super intelligent AI is dangerous. Some number of people will hear "nanobots" and think "sci-fi nonsense." Rather than try to change their minds, it's much easier to just talk about the many mundane and already-extant threats (like nukes, gain of function bioweapons, etc.) that a smart AI could make use of.
I say this as someone who's mostly convinced of Big Yud's doomerism: Good lord, what a train wreck of a conversation.
Couldn't agree more. In addition to Yud's failure to communicate concisely and clearly, I feel like his specific arguments are poorly chosen. There are more convincing responses that can be given to common questions and objections.
Question: Why can't we just switch off the AI?
Yud's answer: It will come up with some sophisticated way to prevent this, like using zero-day exploits nobody knows about.
My answer: All we needed to do to stop Hitler was shoot him in the head. Easy as flipping a switch, basically. But tens of millions died in the process. All you really need to be dangerous and hard to kill is the ability to communicate and persuade, and a superhuman AI will be much better at this than Hitler.
Question: How will an AI kill all of humanity?
Yud's answer: Sophisticated nanobots.
My answer: Humans already pretty much have the technology to kill all humans, between nuclear and biological weapons. Even if we can perfectly align superhuman AIs, they will end up working for governments and militaries and enhancing those killing capacities even further. Killing all humans is pretty close to being a solved problem, and all that's missing is a malignant AI (or a malignant human controlling an aligned AI) to pull the trigger. Edit: Also it's probably not necessary to kill all humans, just kill most of us and collapse society to the point that the survivors don't pose a meaningful threat to the AI's goals.
Why do you assume that having low standards means ending up with someone with a self-destructive lifestyle or crippling health problems? You describe yourself as an ugly autistic man who is in good health and pretty much has his life together. Why don't you look for an ugly autistic woman who is in good health and pretty much has her life together?
Post-Vietnam, the pattern is that republicans support wars started under republican presidents and oppose wars started under democrat presidents. Vice-versa for democrats. In the case of Vietnam, opposition from the left spiked significantly when Nixon took office. The draft also drove the dynamics in Vietnam - the left opposed Vietnam for the simple reason that young people are disproportionately on the left and young people were disproportionately impacted by the draft.
Prior to Vietnam I think it's harder to find the same patterns because the parties' left-right alignment shifted in the 60s and cultural attitudes about what role the US should play in foreign policy shifted significantly after WWII.
All of the output I've ever seen from ChatGPT (for use cases such as this) just strikes me as... textbook. Not bad, but not revelatory. Eminently reasonable.
There was a post from Scott, can't recall which one at the moment, where he made a point along the lines of "maybe the reason therapy seems to help some people a great deal while not helping others at all is because some people benefit from hearing reasonable, common sense feedback, whereas that kind of feedback is completely obvious to other people." Sort of like how some people lack an internal monologue, others lack an internal voice of common sense and reason. I wonder if that's what's going on here.
I don't really know how it will play out, but personally the situation reminds me of Wendy Davis' 2013 filibuster of an abortion ban in the Texas senate. This made her a darling of the Texas Democratic party, rocketing her to the gubernatorial nomination in 2014, where she lost badly to Abbott and faded into obscurity. Given that Montana is a red state, I don't see Zephyr being viable as a statewide candidate, so I would predict a similar type of outcome here. But who knows.
I see a politician doing politics. Neither "brave woman" nor "ridiculous man" seem like apt descriptors to me, just as I wouldn't describe a chess move as "brave" or "ridiculous." It's either a good move or a bad move, and we'll find out which as the game progresses.
Imagine a doctor gets it right 90% of the time, and the other 10% of the time he says "I'm not really sure what's going on" and either consults with another doctor, suggests you get a second opinion, or even just sends you home with no treatment.
Now imagine a LLM gets it right 95% of the time, and the other 5% of the time it gets it confidently wrong and prescribes you an incorrect course of treatment.
In this hypothetical scenario, even though the LLM is "better," I'd rather have the human doctor, because getting treated for the wrong thing is often much worse than not getting treated at all.
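The trade-off above can be made concrete as a quick expected-harm calculation. All the cost numbers here are illustrative assumptions (not from the original comment): the only load-bearing assumption is that being treated for the wrong thing is substantially worse than going untreated while you seek a second opinion.

```python
# Assumed relative harms (hypothetical units, chosen for illustration only):
COST_CORRECT = 0           # correct diagnosis and treatment
COST_NO_TREATMENT = 1      # honest "not sure": sent home or referred, untreated for now
COST_WRONG_TREATMENT = 10  # confidently treated for the wrong condition

def expected_harm(p_correct, p_honest_unsure, p_confidently_wrong):
    """Expected harm given the probabilities of each outcome."""
    assert abs(p_correct + p_honest_unsure + p_confidently_wrong - 1.0) < 1e-9
    return (p_correct * COST_CORRECT
            + p_honest_unsure * COST_NO_TREATMENT
            + p_confidently_wrong * COST_WRONG_TREATMENT)

doctor = expected_harm(0.90, 0.10, 0.00)  # 90% right, 10% honestly unsure
llm = expected_harm(0.95, 0.00, 0.05)     # 95% right, 5% confidently wrong

print(f"doctor: {doctor:.2f}, llm: {llm:.2f}")  # doctor: 0.10, llm: 0.50
```

Under these assumed costs, the "less accurate" doctor comes out five times safer, because the failure mode of admitting uncertainty is far cheaper than the failure mode of confident error. The conclusion flips only if wrong treatment is nearly as benign as no treatment.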
I've been doing the same for a few years and it really has been a panacea for any and all GI issues. I haven't read the data on it, but I know human diets used to contain a lot more fiber than they typically contain today, so it seems like a no brainer.
The EMH is like the way many physics models ignore things like air resistance and friction. No one is claiming that air resistance and friction do not exist, but they are sometimes negligible, and it's often simpler to get insight into a problem when you disregard them.
Nobody (as far as I know) claims the EMH is literally true, but it's often close enough to being true that it serves as a useful simplifying assumption when trying to understand markets.
It was Commander Keen, but I'll have to give the others a shot.
It's Commander Keen, nicely done.
This is pretty close. The graphics were better than this, and the maps were larger than a single screen and would scroll with the character as you moved. Maybe it could be a sequel to this?
My wife and I both remember playing a PC game in the mid to late 90s that neither of us have been able to remember the name of or track down. It was a 2d platformer with pixel graphics. The setting was some kind of factory or laboratory, and the enemies were monsters or aliens. I remember one type of enemy being like a floating ball with eye stalks. The player character was a human, and I believe you could collect various weapons and items as you progressed through the game. I'm pretty sure it wasn't part of any well-known game series, such as Metroid or Lode Runner. Anyone have any guesses what this could be?
Edit: It's Commander Keen, thanks for the help everyone.
I have to say I don't find this line of argument persuasive at all. Your arguments could just as easily be used to justify and support youth transition. "Given all these massive biological and social differences between men and women, it's critical you socially transition your five-year-old as soon as possible and get them on blockers and hormones so you can minimize the mismatch between who they feel they are and how they are perceived by others."
To me it's the opposite argument that's far more persuasive: society today treats men and women pretty much equally and allows them to express themselves how they choose. Given this freedom and flexibility, there's no reason why a boy who wants to wear dresses and play with Barbies needs to become a girl. Just let him be a boy who wears dresses and plays with Barbies. Teach your son he can be as masculine or feminine as he wants to be without getting hung up on sex and gender.
I guess I come out on this the way Sam Harris comes out on torture. He argues torture should be illegal, but nevertheless there are situations where it should be done anyway, such as if a terrorist has hidden a nuclear bomb in a city and torture is the only way to discover its whereabouts. In truly extreme situations, morally repugnant acts may be necessary.
I think censorship is repugnant even when it's used to prevent the disclosure of nuclear secrets, but perhaps it's a necessary evil in extremis.
So would you be opposed to "cancelling" Hitler if it was guaranteed to prevent his rise to power? Or what if it provided a 50% chance of preventing his rise?
It depends on what you mean by cancelling, but if you mean violating his right to speak freely, then yes I would be opposed. The whole point of rights is that everyone has them, including bad people. The whole point of free speech is that it protects the right to say vile and reprehensible things.
I think there are religious people who basically believe this. That a person can be "born trans" in a metaphysical sense, but that it's a sin to act on it. In the same way they might think someone could be "born an alcoholic" but it's still incumbent on them to avoid the sin of drunkenness.
I think "socialism" is a good example, thanks. I could quibble with the other examples but I think this one hits the nail on the head.