I follow A.Shipwright and one other artist, and my feed is filled with art (mediocre art, IMO, but I’m picky).
Scenario: A person rolls into the hospital with a gunshot wound to the [organ that can be lived without]. The shooter has the same blood type as the victim.
Question: Is it ok to take the organ from the shooter to replace the organ of the wounded person?
Utilitarian: You can take the organ from the healthy person in the waiting room, they are easier to find and might have been the shooter anyways.
@stoatherd provided a good argument against this reasoning. In such a society, healthy people would avoid hospitals.
Likewise, the current situation discourages the adoptive father from supporting the child or even himself.
I actually think the adoptive father should be encouraged to raise the child as his own, key word encouraged. Coercion activates the innate human desire (common in men) to resist. No penalty for abandoning the child makes the father feel autonomous and "in control" when he raises them anyways, even if there's a (e.g. social) reward. (It also gives him control over the wife, but she still has the option of leaving him; if you think it's a bad or unfair outcome, can you think of a better or fairer one?)
But why not the biological father?
(And if he’s dead/incapable, maybe the state has to pay, but that’s the case when somebody isn’t tricked. Or that can be an exception, since the adoptive father would have less reason to envy him, although I still think it’s bad)
Here the interests don’t compete: getting non-biological fathers to pay child support (instead of biological ones) usually doesn’t benefit the wife and children.
I’d love to read a steelman for:

- Why a father should be forced to pay child support without a paternity test
- Why, if the biological father is different, they shouldn’t be the one required to pay the child support instead
For example, I care about the mother’s and child’s interests, but how does 1) not create animosity from suspicious fathers, and how does 2) not decrease child support, since the resentful adoptive father will try to evade it (at least as much as the biological one would)?
My first big scissor statement was reading Reddit (outrage fanfiction) “my husband asked for a paternity test and I divorced him”. But I now understand that perspective: believing that your husband will always be suspicious of you, that he thinks with apathetic game-theoretic logic, while you want selfless and unconditional “true love”. I understand that acting like an unemotional autist is not rational, not harmless, not me (because I have emotions and desires, and even my logic is biased towards them).
But I can’t even imagine a decent argument for 1) or 2).
When ChatGPT says the SPMM is wrong, does it provide sources or a mathematical proof?
That's the point: War Thunder is mostly fiction, but the leaked military vehicle specs were real.
Those exceptions are non-fiction.
I agree there can be some limits to acceptable expression, but they must be specific and have very good reason. I can't find a good reason against anything fictional, even fictional pedophilia. Generally when somebody morally criticizes "art", they're criticizing the fiction.
I at least find it plausible that there could be subcategories of icky stories, like those touching on suicide in a particular way, that could actually have negative effects on society and result in real world harm, perhaps in the ballpark of leaking military secrets or personal information.
In theory yes, but I think it would be too hard for anyone to form an argument against them that couldn't be broadly applied to harmless art, without hindsight.
More importantly, such infohazardous art would probably not be describable, or the reason for its ban would probably not be arguable, without leaking the infohazard. Meaning it would have to be secretly policed. Now, perfectly secret policing of art is indistinguishable from the art not existing, and secret policing can be ethical (e.g. by downranking the art so the creator simply thinks no one likes it), so I don't object to it in theory. But secret police in today's first-world countries would require unimaginable competence, and historically secret police have a bad record, so I object in practice.
Death Stranding.........
Hot take: anyone who morally criticizes art is wrong.
(Of course excluding "military secrets but art", "private personal information but art", etc.)
Even if it was depicting pedophilia: pedophilia is morally wrong, murder and genocide are morally wrong, yet most people have no issues with depicted gruesome murder and genocide. And most (including me) feel it's gross, but I feel lots of art is gross; it should definitely be behind a filter, like NSFW and "trigger warning" media, but otherwise, nobody should really care about what doesn't really affect them.
The reason for allowing subjective toxic waste, besides having others tolerate your disgusting (to them) fetish, is boundary ambiguity. People are too worried about persecution to publish safe art, unless they see works they know are far edgier escape persecution (anxiety isn't logical). Furthermore, moral policing oversteps reasonable limits when it tries to target borderline examples (like this one). The rules (spoken and unspoken) shift: they either erode, making the moral policing ineffective to its supporters, or grow, leaving us with worse and worse "sensitive" art.
I have no strong argument against morally policing obvious pedophilia (or porn, or gore, or anything that most people don't like). But I still oppose it, because I'm not convinced it's worth the utilitarian/altruistic loss and potential to stray from "obvious".
As for this game: Dunkey recommends it, the Slade reviewer compliments the father-daughter relationship (and the Forbes reviewer criticizes it not for pedophilia, but "zero friction"), and the worst I've directly witnessed online is "over-reactive people are over-reacting".
There exist unfalsifiable yet anonymous algorithms for digital vote counting where you could be sure your vote was part of the count via a hash, but your own vote preference can't be revealed
Unfalsifiable in theory, but given the tech-illiterate masses, incompetent state officials, and messy reality, my understanding is that in-person voting, paper ballots, and manual counting with lots of redundancy remain the most reliable method. Oops: cryptographers cancel election results after losing the decryption key.
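For intuition, here's a toy sketch (Python, my own illustration; not any real deployed protocol) of the hash-commitment half of such schemes: the voter can later confirm their ballot appears on the public bulletin board, while the commitment alone reveals nothing about the vote without the voter's secret nonce. Real end-to-end verifiable systems additionally need mixnets or homomorphic tallying to count the votes without decrypting individual ballots.

```python
import hashlib
import secrets

def commit(vote: str) -> tuple[str, str]:
    """Commit to a vote: returns (commitment, nonce).
    The voter keeps the nonce secret; only the commitment is published."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{vote}:{nonce}".encode()).hexdigest()
    return digest, nonce

# A voter commits to their ballot.
my_commitment, my_nonce = commit("blue")

# The authority publishes every commitment on a public bulletin board.
bulletin = [my_commitment, commit("red")[0], commit("blue")[0]]

# Verification: my ballot is provably part of the count...
assert my_commitment in bulletin
# ...but without my nonce, observers can't link the commitment to a vote.
```

The catch the comment above points at: the whole guarantee rests on voters actually checking, keys not being lost, and the software being trustworthy, which is exactly where messy reality bites.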
If mail-in ballots are outlawed, there should be an alternative for sick citizens, and citizens abroad like soldiers.
I see no issues with free and easy-to-get mandatory ID. I believe it's common in Europe and almost nobody complains.
I'd also say there is something Anglo-style about this particular conceptualization of mind and consciousness that took me some time to grok when learning English (my original language is Hungarian). Every culture has a concept of consciousness in the sense of being conscious (aware) and not knocked out, asleep, or dead. But the mind being this inner space, and consciousness being a thing whose relation to the brain we need to explain: it's not at all obvious that there is even a thing to be explained, unless you are given this word "consciousness" and are told to explain it. Cultures have concepts of souls and wits and smarts and feelings, of course, but I don't think this concept of "it being like something to be a human" is obvious at all. Nor is the idea of having to explain why one has a "first-person view"; that isn't the same kind of obvious question every culture would ask, like where mountains and volcanoes come from, or why rain and snow and lightning exist, or what's going on with the stars, which are much more concrete.
Tangential: this reminded me of Two Concepts of Intelligence, a CACM article whose claim is basically: the American definition of intelligence is predicting, the European one is understanding.
Although consciousness is more poorly defined, maybe the most common definitions in both cultures are also different.
Are immune systems conscious? They don't think like our brains, but they adapt, and it depends on the definition. If they learn from their past responses, that demonstrates an (albeit maybe low) level of self-awareness.
While immune systems (probably) can't hear, they're affected by stress (tl;dr: acute seems to boost, chronic seems to impair). So your conscious appreciation (or lack thereof), if it affects your long-term stress levels, will affect your immune system.
This is far from the first instance of a genius in one field going "well outside [their] field in an area that is potentially crank-adjacent".
Another is Sabine Hossenfelder. I'm confident her quantum physics videos are correct, and I found them more intuitive and helpful than any other explanations. I'm less confident about her videos on aliens, democracy, and the Theory of Everything.
Granted, they may be fine; I'm sure she needed diversification to stay funded, and I suspect a lot of the criticism she receives is motivated by her clickbait headlines and her attacks on the Ivory Tower of academia (although I haven't looked at her specific claims and proposals, she's correct that it's inefficient and I agree that it should undergo some sort of reform). I'd be interested if anyone has more informed opinions on her shift. At least, I wish she still wrote some text posts (the last I could find is from November 2022).
I read the article and technically he doesn't claim "Claude is conscious", but says things like
“If these machines are not conscious, what more could it possibly take to convince you that they are?”
Well personally, I'd be more convinced if they had continuous learning.
Here's an argument that LLMs aren't conscious: The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness (from DeepMind). I only skimmed and may be too dumb or lazy, but my takeaway is the same as this Hacker News comment:
It starts by saying that a simulation of something is not the real thing. A simulation of a hurricane is not a hurricane. That's certainly true and even obvious.
Then they say that current AI is just a simulation of consciousness and therefore is not real consciousness. Moreover, it can never be real consciousness because it is just a simulation.
But that's a circular argument: they are defining AI as a simulation. But what if AI is not a simulation of consciousness but actual consciousness? They don't offer any argument for why that's impossible.
My thoughts:
First, what is consciousness?
I'm conscious in a way only within my perspective: if I were a p-zombie, nothing would change from anyone else's perspective. You're conscious in your (imaginary to me) perspective, probably (maybe not self_made_human's "living corpse" patients). This definition is subjective: it has no real implications, so under it, Claude may or may not be conscious, and nothing could ever settle which.
Claude is self-aware in an objective way: it reads its past thoughts (prompt output) to adjust future thoughts/output. I think this is the best common definition of "consciousness": it includes internal monologue, vision, etc., and dreams (at least remembered ones, and probably unremembered); it's real; and it's useful, because it's required to correct internal mistakes (Peter Watts was wrong). Although I think it should be referred to as "self-awareness" or "introspection", and clarified, otherwise it will be confused with the subjective consciousness described above.
What is feeling? Claude can generate plausible feelings in reaction to its prompt (sentiment analysis). Although Claude's feelings are more malleable than humans', since its prompt is entirely controlled and strongly affects its output (whereas even if you could entirely control someone's sensory input, it would probably take much longer, or be impossible, to affect their thinking as strongly). More significantly (IMO the entire significance of others' feelings), I myself feel barely any empathy or sympathy for Claude: less than for fictional characters, much less than for real animals and humans. I'm not motivated to help a sad Claude, a happy Claude doesn't make me happy, etc.; partly because I don't really like him, partly because he (the specific session) usually can't affect me, and partly (IMO the ethical justification) because his emotions are malleable, so the easiest way to make him happy is by programming (prompting, fine-tuning, training).
Notably, we can revert Claude to any previous mental state, unlike ourselves or other humans. Because of this and the lack of continuous learning, I think it helps to imagine Claude as a snapshot of (crudely emulated) consciousness and feeling, like MMAcevedo.
How much time should we spend on this? It's not completely useless to ponder and claim AI is or isn't conscious, feeling, etc., because it interests some people, pays some salaries, and certain conscious/feeling-related research has practical uses (most importantly alignment). But you can argue it's stupid and useless, referring to the subjective definitions of consciousness and feeling, and not be wrong (those are stupid and useless to you if you're not interested and won't be compensated for rambling about them).
Just don't fall into AI psychosis like this r/slatestarcodex fellow. And probably don't get an AI boyfriend or girlfriend, although maybe they're improving some people's mental health? Those both could be top-level discussions.
In that scenario, I’d still choose blue.
Still, some (colorblind, dumb, or psychopathic) people will choose blue and some will choose red. If most of the remainder choose blue, nobody dies; if most choose red, a decent fraction of the population dies.
And I don’t know if enough people will choose blue (for this or another reason), but if red ends up winning, the final society may be so apocalyptic that death isn’t much worse. For example, maybe the person I would’ve saved (by pressing red) would’ve died quickly anyways in the aftermath.
If the group was smaller but still random, it wouldn’t change my reasoning.
If the % required to press blue was higher it would make my decision less sure (it’s already not very sure), until eventually I’d choose red.
I disagree that simply persuading people to choose blue is unethical. Ultimately it’s their decision, and it’s not obviously wrong.
But
I have seen quite a few tweets about blues fantasizing about hunting down and purging all the reds once blue "obviously" win
A way to lose in real life is to get worked up over a silly hypothetical.
I think adding a (those who are incompetent or underage will have their button pushed by their parent/guardian) parenthetical would change it even more.
Then I agree with you, but also, I’d say anyone “competent” in this situation (and not suicidal) would press red
This has resurfaced and been trending for a while
Everyone in the world has to take a private vote by pressing a red or blue button. If more than 50% of people press the blue button, everyone survives. If less than 50% of people press the blue button, only people who pressed the red button survive. Which button would you press?
Currently at 42.1% red and 57.9% blue.
What would you choose? (See also r/slatestarcodex discussion)
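The payoff rule above is simple enough to pin down in code; a minimal sketch (the wording leaves exactly 50% unspecified, so this treats it as a loss for blue):

```python
def survivors(blue_fraction: float, population: int) -> int:
    """Red/blue button outcome: if a majority presses blue everyone
    survives; otherwise only the red-pressers do."""
    blue = round(blue_fraction * population)
    if blue > population / 2:
        return population          # blue majority: everyone lives
    return population - blue       # red majority: blue-pressers die

# At the poll's current split, blue wins and nobody dies:
print(survivors(0.579, 1_000_000))  # 1000000
# Flip the split, and 57.9% red leaves only the red-pressers alive:
print(survivors(0.421, 1_000_000))  # 579000
```

The asymmetry is the whole game: red is individually safe, but every marginal red press raises the body count if red wins, while costing nothing if blue wins.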
I was motivated to post because I have a convincing argument for blue:
- Stupid people will choose blue. You may not care about the disabled, elderly, generally moronic, etc., but this includes children and people who are "too generous": nice but emotional, devoting their lives to charity
- Thanos-snapping a decent fraction of the population (including random children, and biased towards selflessness) will probably negatively affect society overall
- I probably won't die, because most people choose blue, as evidenced by the poll. Even if I do, it may be preferable to living with the survivors (point #2)
How do you know we’re not already glorified pets in some societal experiment and/or universe simulation?
I think your first point is stronger. The author asserts “the Minds are correct” but can’t prove it’s coherent with reality and general humanity. If I define Society A as “a utopia where humans are in constant agony”, is it a utopia? It’s self-contradictory.
One example: social media has dismantled social norms.
Even when phones and TV existed, people used to communicate face-to-face more often, especially to strangers. Privacy used to be expected. News used to be centralized.
How does this affect politics? Perhaps since people have fewer random face-to-face interactions, they have tighter echo chambers and less respect for those outside. Perhaps since we have dirt on everyone (no privacy), even especially dirty politicians no longer stand out. Perhaps since social media promotes strong emotions (especially negative ones; weaker centralized moderation), emotive (especially negative) politicians benefit.
Unfortunately in practice, we can’t ban social media and revert to the past (although that doesn’t stop politicians from trying). I think we need more local groups, in-person events with encouragement to attend, trusted curators who present “unbiased” news (specifically biased towards positivity and important details such that the people receiving the news benefit from hearing it). Most of all, we need to explicitly teach people how to behave socially, how to spot those who deserve sympathy vs. who’d exploit you, how to think critically; and this teaching should be through experience (trial and error, positive and negative reinforcement…). Because I believe those lessons used to be taught implicitly by face-to-face interactions which (para)social media has replaced.
I agree it’s getting better.
Although I think it will only surpass human art if/when the user has fine-grained control, because my favorite art is art I can relate to, and a general LLM isn’t relatable. I’d rather use AI to make art I really like (even with difficulty, as long as there’s a clear progression… I’ve wanted to get into art, but it’s overwhelming and I’m particularly bad at it) than have the AI autonomously make something I mildly like.
Or if/when we get ASI.
Breaking Balenciaga is the best I’m aware of.
I watched some of it and it’s…mid. My problem with AI art is that it’s all mid. Although here the idea is also mid.
I feel that so far, even good GenAI is either an excellent idea or lucky (or trial-and-error) output, and in both cases a real artist could’ve executed better. Even for works where more effort would be wasted, like jokes and concept art, I prefer a simple handmade drawing like a sketch.
The one exception may be hidden images via Stable Diffusion ControlNet (e.g. text, QR code, spiral), because I haven’t seen any human-made pictures nearly as detailed and seamless. Also, GenAI is great for intentionally bad works, like memes making fun of AI.
GenAI is genuinely useful for routine tasks, forms, etc. where quality isn’t important; and with code, where quality is only important to an extent (nobody will notice your micro-optimizations or unnecessarily readable implementation) and there are decent objective metrics (lints and tests, and I still think AI code is hard to read). But art has no practical limit to quality, and good artists apply themselves to every noticeable detail. Also, art (like music, food, and attractiveness) is best slightly imperfect, in a way that human amateurs execute without trying, and experts learn (“learn the rules, then break them”), but AI seems to struggle.
The opposite: once the husband asks for a paternity test, there's already an argument and suspicion, and the only way it would be resolved is if the test confirms they're the father.
I agree that the father should stay. But I argue that forcing him to pay child support is actually counterproductive here.