Brainwavez

0 followers   follows 0 users
joined 2025 December 28 04:50:10 UTC
Verified Email
User ID: 4102


I disagree that simply persuading people to choose blue is unethical. Ultimately it’s their decision, and it’s not obviously wrong.

But

I have seen quite a few tweets about blues fantasizing about hunting down and purging all the reds once blue "obviously" wins.

A way to lose in real life is to get worked up over a silly hypothetical.

I think adding a parenthetical, "(those who are incompetent or underage will have their button pushed by their parent/guardian)", would change it even more.

Then I agree with you, but also, I’d say anyone “competent” in this situation (and not suicidal) would press red

This has resurfaced and been trending for a while

Everyone in the world has to take a private vote by pressing a red or blue button. If more than 50% of people press the blue button, everyone survives. If less than 50% of people press the blue button, only people who pressed the red button survive. Which button would you press?

Currently at 42.1% red and 57.9% blue.

What would you choose? (See also r/slatestarcodex discussion)


I was motivated to post because I have a convincing argument for blue:

  1. Stupid people will choose blue. You may not care about the disabled, elderly, generally moronic, etc. but this includes children and people who are "too generous": nice, but emotional, and devote their lives to charity

  2. Thanos snapping a decent portion of the population (including random children, and biased towards selflessness) will probably affect society negatively overall

  3. I probably won't die because most people choose blue, as evidenced by the poll. Even if I do, it may be preferable to living with the survivors (point #2)
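The button mechanic above is simple enough to sketch (a toy illustration of the stated rules; the function name and return strings are my own):

```python
def survivors(frac_blue: float) -> str:
    """Apply the hypothetical's rule: if more than 50% press blue,
    everyone survives; otherwise only red-pressers survive."""
    return "everyone" if frac_blue > 0.5 else "only red-pressers"

# At the poll's current split, 57.9% blue clears the majority threshold:
print(survivors(0.579))  # everyone
print(survivors(0.421))  # only red-pressers
```

Note that exactly 50% blue counts as a loss for blue, since the rules require more than 50%.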

How do you know we’re not already glorified pets in some societal experiment and/or universe simulation?

I think your first point is stronger. The author asserts “the Minds are correct” but can’t prove it’s coherent with reality and general humanity. If I define Society A as “a utopia where humans are in constant agony”, is it a utopia? It’s self-contradictory.

One example: social media has dismantled social norms.

Even when phones and TV existed, people used to communicate face-to-face more often, especially to strangers. Privacy used to be expected. News used to be centralized.

How does this affect politics? Perhaps since people have fewer random face-to-face interactions, they have tighter echo chambers and less respect for those outside. Perhaps since we have dirt on everyone (no privacy), especially dirty politicians are seen no differently. Perhaps since social media promotes strong emotions (especially negative ones; weaker centralized moderation), emotive (especially negative) politicians benefit.

Unfortunately in practice, we can’t ban social media and revert to the past (although that doesn’t stop politicians from trying). I think we need more local groups, in-person events with encouragement to attend, trusted curators who present “unbiased” news (specifically biased towards positivity and important details such that the people receiving the news benefit from hearing it). Most of all, we need to explicitly teach people how to behave socially, how to spot those who deserve sympathy vs. who’d exploit you, how to think critically; and this teaching should be through experience (trial and error, positive and negative reinforcement…). Because I believe those lessons used to be taught implicitly by face-to-face interactions which (para)social media has replaced.

I agree it’s getting better.

Although I think it will only surpass human art if/when the user has fine-grained control, because my favorite art is art I can relate to, and a general LLM isn’t relatable. I’d rather use AI to make art I really like (even with difficulty, as long as there’s a clear progression…I’ve wanted to get into art, but it’s overwhelming and I’m particularly bad at it), than have the AI autonomously make something I mildly like.

Or if/when we get ASI.

Breaking Balenciaga is the best I’m aware of.

I watched some of it and it’s…mid. My problem with AI art is that it’s all mid. Although here the idea is also mid.

I feel that so far, even good GenAI is either an excellent idea or lucky (or trial-and-error) output, and in both cases a real artist could’ve executed better. Even for works where more effort would be wasted, like jokes and concept art, I prefer a simple handmade drawing like a sketch.

The one exception may be hidden images via Stable Diffusion ControlNet (e.g. text, QR code, spiral), because I haven’t seen any human-made pictures nearly as detailed and seamless. Also, GenAI is great for intentionally bad works, like memes making fun of AI.

GenAI is genuinely useful for routine tasks, forms, etc. where quality isn’t important; and with code, where quality is only important to an extent (nobody will notice your micro-optimizations or unnecessarily readable implementation) and there are decent objective metrics (lints and tests, and I still think AI code is hard to read). But art has no practical limit to quality, and good artists apply themselves to every noticeable detail. Also, art (like music, food, and attractiveness) is best slightly imperfect, in a way that human amateurs execute without trying, and experts learn (“learn the rules, then break them”), but AI seems to struggle.

You’re right that plenty of good works rely on unexplained premises/plot (e.g. anything involving magic, or why animals are anthropomorphized in Bojack Horseman). So I take back my first theory.

Second theory: “trash” can be substituted for anything and the general point holds: when the work is clearly Isekai, people have predefined expectations, people who like / dislike the genre like / dislike those expectations respectively.

Why this applies to Isekai more than other genres…because Isekai tends to be predictable, so the expectations are stronger.

There are well-received “normal(ish) person transported to alternate world” works, like Gravity Falls, Narnia, Idiocracy, Harry Potter.

My guesses:

Isekai doesn’t even try to justify why the normal person is in the alternate world. Presumably writers who choose Isekai instead of Isekai-like prefer not justifying major plot points.

More likely, because most Isekai are trash, people who like Isekai tend to prefer trash, and people who dislike trash tend to have prejudice against Isekai. So either a) the author makes an Isekai-like to avoid the prejudice, b) they make a trash Isekai, or c) they have a small audience.

I haven't read the novels...but your comment reminded me of this discussion. I agree with it and with this reply.

I think a life of only simple pleasures (eating, sleeping, etc.) would get boring, because I desire achievement, and I believe most people agree. I also think such a life isn't realistically human, it's what animals do, while most humans have long-term plans. Achievement also requires adversity, because one needs to at least imagine they could fail.

However, if the Minds were really intent on "preserving humanity", they could also give humans fake achievement and adversity, up to recreating life as it is now.

If you believe The Culture is a dystopia, what would make it a utopia?

If all those 11 deaths have zero evidence of Chinese involvement (same for the 9 Chinese deaths), wouldn’t coincidence be more probable?

The alternative is that one (or both) intelligence agencies are anomalously good at espionage and the other is much worse at detecting it (or at hiding that they know, but why would they?)

Many questions remain over the July 1, 2023 death of Feng Yanghe, a professor at the National University of Defense Technology, who had won national competitions with his pioneering "War Skull" platform. Such as, why did an obituary in the state-run science news website, Sciencenet.cn, say he was "sacrificed"? Why was the brilliant scientist from Gansu province buried in a special cemetery in Beijing for the Communist Party elite, state heroes, and revolutionary martyrs?

“Why did China visibly honor this model Chinese citizen so much after his newsworthy death?” In a socialist country, every life is a sacrifice to the nation.

Your style transfer example has the obvious AI tells (frequent em-dashes, ends with “it’s not X it’s Y”) and scores 100% on GPTZero. I can’t read the attachment; does it really reflect the style?

Modern-day journalism. The woman is self-aware and knows the article makes her look histrionic. It’s probably exaggerated, it may not be real. She’s doing it for attention and ad revenue, like “stupid” TikTokers and reality TV stars. She’s two steps ahead.

My main issue with LLM writing is that it's overly verbose. The biggest sign I'm reading AI is when I subconsciously start skimming, and even after skipping entire paragraphs, feel like I haven't skipped anything important.

If AI could write concisely, I'd see no issue with it in technical documents and news articles. If AI could write in someone's voice given a sample of their previous text, I'd see no issue with it at all. Maybe even in the former scenario, like how practically nobody cares that most writing is no longer hand-written; the "writer's voice" would shift to the subject and focused details.

6. National service should be a universal duty. We should, as a society, seriously consider moving away from an all-volunteer force and only fight the next war if everyone shares in the risk and the cost.

I like the corollary that, if not everyone shares the risk and cost, society should not fight the next war. Meaning we’d probably fight only when it becomes existential.

A position I heard elsewhere that I agree with: ideally, every nation should have a mandatory (for everyone) “Service Corps”, which isn’t just war preparation but also community service. Unfortunately, in most nations today, it would probably be corrupted.

The interesting part here is that the limit could be much further than practical relevance and context not actually that degraded

The limit is far enough that today’s models are useful: for example, they can code and (allegedly) find vulnerabilities in production software.

But I don’t believe today’s architecture can accurately emulate human intelligence, unless the model is retrained very frequently (daily?) on omni-local data (including everyone’s personal details and private codebases), effectively brute forcing continuous learning. Because today’s (consumer) models have been trained on practically the entire internet with the world’s compute, plus synthetic data and tool use, yet still they consistently hallucinate in long complicated tasks that humans after adjustment consistently solve.

In the last thread, my opinion was that LLMs are missing something essential. And I still think that, but I wouldn't be surprised at all if LLMs required very little theoretical augmentation to reach AGI.

I believe they’re missing good continuous learning.

By definition: with human-level continuous learning, any class of human-solvable problems could be solved by guiding the LLM through examples until it generalizes. After enough generalization, it would be hard to find problems it can’t solve. Granted, “human-level” is doing a lot of heavy lifting, it’s not far from “LLMs are just missing intelligence”.

By observation: the vast majority of LLM failures seem to stem from needing to store everything in context and losing track when it gets too large. Most are stupid mistakes that, it seems, the LLM would not repeat if they were prepended to a small-context prompt.

What about Cr1TiKaL?

I found one freakout (and he’s back now). For a guy who’s become inconceivably rich and famous by monologuing in front of a camera about internet drama, over a decade, I consider that heroically sane.

In fairness, the goalposts were moved because we realized LLMs couldn't do certain AGI things despite passing the "AGI" tests.

For example, they can pass a Turing test consisting of independent questions with short answers, but could never pass a "Turing test" over years, because they have limited context windows (and even with tools and a filesystem, too many things change for them to store and organize). They've effectively passed ARC-AGI 1 and ARC-AGI 2, but not yet ARC-AGI 3, while a median (from their tests) human passes all three (play it yourself).

They'll be "true AGI" when we can no longer create (non-physical) tests they don't immediately pass.

Although I agree with SnapDragon that they're "partial AGI". I believe the missing component is continuous learning: their output starts out human-like, as they've been trained to be, so if they continued to be "trained" on their observations, presumably they'd continue to output like a human.

I think you're right about social media companies not making their sites less toxic on their own. So...I do think we should regulate kids' social media more, and maybe adult social media past a certain size. I'm specifically wary of regulating small sites, because for example that hinders hobbyists and startups.

I think parental controls work. Current parental controls aren't good, but better ones are possible; and it's true that particularly smart and determined kids will subvert practically any controls, but not all kids are smart and determined. An example of a better parental control is a phone OS that, without an admin password, blocks sites not in a "kid-friendly" whitelist provided by a third party. I don't see why that's particularly hard to implement or configure.
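A minimal sketch of that whitelist check (hypothetical domain list and function name; a real OS would hook this into its network stack behind the admin password):

```python
# Hypothetical third-party "kid-friendly" whitelist.
KID_FRIENDLY = {"pbskids.org", "khanacademy.org"}

def is_allowed(host: str) -> bool:
    """Allow a host only if it, or one of its parent domains,
    appears in the whitelist."""
    parts = host.lower().split(".")
    return any(".".join(parts[i:]) in KID_FRIENDLY
               for i in range(len(parts)))

print(is_allowed("www.pbskids.org"))  # True
print(is_allowed("example.com"))      # False
```

The whole mechanism is a default-deny lookup, which is why it's easier to get right than the default-allow blocklists most current parental controls use.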

I think mandatory ID for specific services is fine, my objection is mandatory ID to use the internet.

Service providers have plausible deniability since you can't prove or verify a user's age beyond just asking the user, like they do now.

The problem isn't kids clicking "yes" on "am I over 18?" and seeing porn, the problem is kids clicking "no" and seeing porn anyways, because it's in YouTube Kids. If governments don't hold YouTube accountable for this today, I don't see why they would after mandatory ID.

Also note that the OS "age verification" currently implemented in some states is just asking the user's age:

Provide an accessible interface at account setup that requires an account holder to indicate the birth date, age, or both, of the user of that device... (CA-AB-1043)

Provide an accessible interface at account setup that requires an account holder to indicate the birth date or age of the user of that device... (CO-SB26-051)

I do think this OS age verification will reduce kids being exposed to harmful content, and mandatory ID would reduce it further. I agree that's a good thing. The problem is these laws may introduce other problems that make them overall negative.

Specifically, I don't really object to the age verification in California and Colorado because it's lackluster: one can enter a fake birth date, and probably use an OS that refuses to implement it without enforcement. But I would object to mandatory ID, because governments and companies have repeatedly failed to secure sensitive data, and people should have an outlet to express views unsavory to those around them (since many people would retaliate against or be deeply hurt by certain views, even mundane views (from a general perspective)).

Estacada High School in Oregon seems to have succeeded.

I think mods should intervene

Another call for a recurring Butlerian Jihad Roundup, so AI/tech drama doesn’t detract (or get detracted by) Trump/woke drama.

Keep in mind, governments and companies are more or less incompetent.

Be that to do business with the bank or government offices that would have required you to go there in person, but can now be solved with a few swipes or clicks.

Banks and government offices already have your ID. They still require you to go in person, because 1) people steal each others' IDs, and 2) they haven't upgraded their systems since before the mainstream internet.

I would in fact be quite partial to the idea that certain demographics would never see a gambling ad ever again.

Gambling ads and suggestive content are visible even on kids' sites designed to block it. The blocks don't work, because 1) selective blocking is a hard problem, and 2) companies don't invest enough because they want to maximize profit (and governments don't fine them enough).

If what kids see on the internet matters so much that parents should revoke access to it, why isn't what's on there a bigger deal? We've already seen fine posts on here regarding the subject of foreign interference in media with the recent forced sale of TikTok. That, on top of the promulgation of hard and soft pornography, should be dealt with head on rather than being excused away under the guise that this is all somehow a meaningful avenue of anonymous expression whilst your ability to express your political views is a total sink or swim predicament based entirely on the whims of billionaires and the political extremists they bankroll, who can revoke your ability to meaningfully express yourself at will.

It is a big deal, but: foreigners steal locals' IDs, and convince them (sometimes by visiting in person) to spread foreign propaganda. Pornography is popular, some pornstars are already public and some viewers have no shame.

Theoretically your identity could be veiled to the public on certain platforms in a formalized manner, and unneeded breaches of information could be prosecuted similar to a libel suit. The big companies could now properly curate content based on a very firm 'don't show porn to under 18's' criteria. Meaning the government has a foot in the door of their algorithms. Maybe we could finally stop pretending that technology is all too complicated to legislate. And maybe, just maybe, this will lead to my YouTube frontpage sucking less. Maybe.

Companies already aren't allowed to leak PII: it leaks anyways, they get sued and lose, but the final payout is negligible. YouTube already controls your frontpage and tries not to show porn to under 18s. Technology is already legislated, but governments abuse and/or ignore the legislation and companies find workarounds.


I do suspect mandatory ID would reduce kids' exposure to harmful content, foreign interference, and porn (distribution and consumption). But it would significantly increase the consequences of political (and non-political petty) speech, which would be worse, because governments and companies will leak the IDs of users with views they dislike, and leaking everyone's views won't work as explained here.

Voicing support for party X whilst your boss hates party X will, in practice, get you fired. Even if there are laws against it, your boss will assign you annoying tasks, over-scrutinize your mistakes, etc. to force you out for a different official reason. And there's no way to detect this without false positives.

The loss of online anonymity would also damage relationships, and not just ones with irreconcilable political beliefs. People "code-switch" all the time; imagine no code-switching because everything you write online is visible under your ID. Men talking about women around other men, women talking about men around other women, kids talking about their teachers to other kids, teachers talking about kids and parents to other teachers, etc. Autists would love to know everyone's views about them and might easily adjust, but I suspect most people would be turned off by others' behavior in other groups. Importantly, 1) even when they logically know such back-talk was always happening, they would struggle to emotionally handle concrete examples; and 2) some back-talk is criticism aimed at helping the target or those around them.

---

Thinking more about it:

Direct P2P would also avoid these problems, while maybe mitigating the social harms of today’s internet which were less common when in-person and telephone communication were dominant, like social isolation and a certain type of (embarrassing) meanness and brainrot.

If online anonymity were eliminated for everyone, while providing a way for everyone to communicate (with ID) only with those they choose - that may be better than today. People could even make public political statements without repercussion, by privately communicating them to a trusted speaker for their party…so this doesn’t actually eliminate anonymity, just makes it harder…but doesn’t anything?