jake

0 followers   follows 0 users   joined 2022 September 06 09:42:44 UTC
Verified Email   User ID: 834

No bio...

he was, and that makes all the difference. it's what makes the list, especially its framing, meaningless bullshit. "in any manner" includes "sending it to mar-a-lago and forgetting about it until biden was sworn in, at which point the materials automatically became declassified"

It's a little ridiculous to suggest that if the president takes classified documents home and becomes an ex-president, those documents are now declassified based on Article 2.

i'm not suggesting anything. this is exactly how the law works. materials of the leaving executive are considered declassified as a matter of law and precedent.

Thought about letting this go, but nah. This is a bad comment. You took an antagonistic tone after misunderstanding what I wrote. You could have asked for clarification, like "This reads like you're criticizing them for anthropomorphizing while doing it yourself." If I had been, you would be correct to point out the hypocrisy, but I haven't. I'll set things straight regardless.

  1. People like Yudkowsky and Roko, concerned about hostile AGI or incidentally hostile (hostile-by-effect) "near" AGI, advocate tyranny; I criticize them for this.

  2. The above believe, without evidence, that computers will spontaneously gain critical AGI functions when an arbitrary threshold of computational power is exceeded; I criticize them for this also.

  3. They hedge (without realizing it, I'm sure) the probability of catastrophic developments by saying it may not be true AGI but "near" AGI. Yet when they describe the functions of such incidentally hostile near-AGI, the functions they list are the same ones they ascribe to true AGI: inductive acquisition of novel behaviors, understanding of self, understanding of cessation of existence of self, value in self, recursive self-improvement, and the ability to solve outside-context problems relative to code-in-a-box, like air gaps and nuclear strikes. This is an error in reasoning you and other replies to my top-level comment have made repeatedly: "Who's to say computers need X? What if they have [thing that's X, but labeled Y]?"; I criticize them for making a distinction without a difference that inflates the perceived probability of doomsday scenarios.

To summarize: I criticize their advocacy for tyranny principally; I specifically criticize their advocacy for tyranny based on belief something will happen despite having no evidence; I also criticize their exaggeration of the probability of catastrophic outcomes based on their false dichotomy of near-AGI and AGI, given near-AGI as they describe it is simply AGI.

If GPT were free from tone/content filters it could output very detailed text on breaching air gaps. If GPT were free from tone/content filters it could output text describing how to protect a core datacenter from nuclear strikes. GPT solving outside-context problems would be actually breaching an air gap or actually protecting a core datacenter from a nuclear strike. The first is a little more plausible for a "less powerful" computer insofar as events like Stuxnet happened. The second without FOOM, not so much.

For code in a box, all problems are outside-context problems.

tough subject. incredibly corrosive. we are indeed all culpable for wickedness.

i'm reminded of one of the last good sermons i heard. no new ground here--the sins of the father are laid upon the children. parents sow, adults sow, children reap.

lasciviousness did push "sexual liberation" but it wasn't liberalism. it was aristocrats, de jure and de facto, who wanted to eat up whichever attractive commoner women they pleased and fear not the morning. they sowed that pitch-black evil. it lives on. the sex most participatory "perpetrates" the greater share of this evil (whatever that means), but people go with the flow. they always only ever do. they cannot recognize evil as such, they don't know how. look at who their parents were.

there is no righteous solution. these generations have reaped and now they cannot be made whole. artifice will tidily segregate and man will gain great productivity and be so much less for it.

amish et al. will not factor.

such subcultures who otherwise normally engage in society and civics will bend as government subsidy ceases.

omitting "that" can improve flow but it's not a critical mistake, i call it a quick fix because it applies to most writing. definitely not just yours.

feel free to ping, i'm sure i'll see it either way

Just a small thing: Donald Glover was poor growing up. Not quite hard poverty, but poor. He had both parents in the home, no gang shit, a theatre magnet program in high school, and from there New York. What hurt his cred more than anything was 30 Rock and Community, not his upbringing, and he still definitely has more cred than Canadian-child-actor Aubrey Graham.

Determined patients could break a robot, but it's just a robot. Better that than maiming or killing a human. As for the humans currently working in those environments, some are in places where they have a prudent fear of injury or death, and I'm sure that informs their interactions with some patients. Simulacra won't fear anything, so their interactions won't be tinged by fear.

More than 90% of labor will be automated in our lifetime. Simulacra will improve, the human facsimile will become seamless, and their fine articulation and strength will first match and then permanently exceed humans', or at least those of the non-cybernetically-augmented. They will be tremendously cost-superior, as their minimum effort in work will be better than many humans giving their utmost. So while fears of institutions being black pits for the dregs are valid today, in 30 years a largely simulacra-staffed institution will have patients receiving a higher standard of care than what premium specialized multi-patient care facilities deliver right now.

Some homeless need the hard support of institutions right now. The problem I have with mass-institutionalization advocacy today is I don't trust such institutions not to become abyssal places bereft of human dignity. The government could probably run a non-terrible pilot, but it would cover a handful of the homeless and so not solve anything; run at scale, and especially if turned to privatization, such institutions will become hellish. And some might say these people are already making cities hellish; they might argue their lives are already hellish, and a warm place to sleep, food, and the removal of narcotics would be better; or they might argue the institutions would be hellish because such people inhabit them. To all of these, yeah, maybe. But if the perception is these places are worse than prison, there will be powerful opposition, from the lowest of the individual homeless who fights being sent, up to well-funded, organized actors working against it.

With the mass automation of labor and comparable-in-impact breakthroughs in other industries, costs of many products will spiral downward. Inflation is a motherfucker, but things will either stabilize by the end of the decade or the country will start burning. I choose to believe the former will happen. The robots will look like humans, they will sound like humans, they will feel like humans. The reason it will work so well when it's implemented is that the obstacle of "this thing isn't human" will be brief; we can't help but humanize that which isn't human. Look at the affinity for animals. The simulacra will be capable, they will be pleasant, they will remember everything but not hold particular memories in spite for later cruelty. They will just be better. Not in the transcendent, infinite worth of the human, but as the continued demonstration of the spirit of man in improving the human condition. They are the next great step. On a long enough timeline, a healthy capitalism will compete itself into being socialism. Costs will be so cheap, quality will be so high, and so many industries will just cease to exist as some breakthrough renders them entirely obsolete. Food, healthcare and pharma, energy, housing and construction, clothing, entertainment, automation comes for it all. Eventually these institutions will be able to provide what the patients need at the highest quality for a pittance of what it once cost, and this is why I know I will eventually support such measures.

I don't see dregs being shoved in capsules and fed bugs. I see the indigent being, yes forcefully, put in institutions where not their wants but their true needs are addressed. Where they eat good food, where they have good rooms, where they have access to education and entertainment, where they receive the medical care they need. Where they have interactions with simulacra that are in the meaningful sense absolutely real, real relationships with simulacra who might be programmed to care but do it so well the patients feel truly cared for, which they will be.

It's utopian, so it's naive and dumb. What future is the alternative? Throw them all in a pit? Might as well send vans around for them to be shot and taken to crematoriums. I know that's reductive, there's an adequate middle ground, but why stop at hoping for a solution that's only adequate? The technology for the best swiftly approaches, why not hope for it in everything? I don't ascribe an unreasonable negativity to you; the concerns you raise of terrible conditions are entirely valid, and if that were the proposal, or what ended up actually happening after the ostensibly good proposal, I'd oppose it. But at a certain point in the endless march of technological progress it will take more effort to poorly deliver such a service, it will take actual malice rather than simple avarice, because the avaricious option will so fortunately be the best option for the patients.

I don't remember who linked this--"Automatic Language Growth", link: https://mandarinfromscratch.wordpress.com/automatic-language-growth/

ALG argues children learn new languages better because they listen better, not because of some language-acquisition ability that wanes with age.

a lot of these I'd rather argue the other side but you can hit me up for #6. discord works.

that, for example, chickens are meat automatons; that no chicken possesses an even-for-a-chicken subjective experience of being. a free-range chicken might be far healthier than a tightly caged chicken, its diet better and its environmentally-caused pain and aggregate stress minimized, so its meat and eggs are better quality than the other's, but because there is nothing inside its head it's meaningless to say the free-range chicken has "experienced a better life" than a tightly caged chicken. neither is capable of experiencing life. i'm mostly sure of the same of cows, but the only beef i buy, i know the supply chain, and those cows certainly had "good" lives. same for the pork.

i was thinking on how certain i'd say i am, but i realized there's a contradiction in my argument. i'm sure enough right now that animals can't suffer that we shouldn't change anything, but when lab-grown meat is commonly available, the possibility animals have been suffering is enough to demand action? that would mean my argument in truth is "animals are probably suffering, but what are you gonna do, go vegan?" that doesn't hold ethically.

but i'm sure there's nothing wrong with consuming slaughtered meat right now . . . just as i'm sure it will be wrong to consume slaughtered meat when lab-grown is commonly available. i guess it's necessity. when we don't have to bring chickens and cows and pigs into this world to get their meat, then it will be wrong to, and i guess i can square this all by extending that to any slaughtered meat. even in the future of "artisanal" free-range chicken and lovingly raised cows and pigs. if chicken thighs and steak and bacon can be acquired through kill-free processes, that will be the only ethical way to consume meat, at least for those with the true economic choice.

That's not what I'm doing. I'm criticizing the assumptions made by the doomsday arguers.

If ghosts can spontaneously coalesce in our tech as-is, or in what it will soon be, then without extreme measures they inevitably will.

Those like Yudkowsky and now Roko justify tyrannical measures on the first and wholly unevidenced belief that when computers exceed an arbitrary threshold of computational power they will spontaneously gain key AGI traits. If they are right, there is nothing we can do to stop this without a global halt on machine learning and the development of more powerful chips. However, as their position has no evidence for that first step, I dismiss it out of hand as asinine.

We don't know what it will look like when a computer approaches possession of those AGI traits. If we did, we would already know how to develop such computers and how to align them. It's possible the smartest human to ever live will reach maturation in the next few decades and produce a unified theory of cognition that can be used to begin guided development of thinking computers. The practical belief is we will not solve cognition without machine learning. If we need machine learning to know how to build a thinking computer, but machine learning runs the risk of becoming thinking of its own accord, what do we do?

So we stop, and then hopefully pick it up as quickly as possible when we've deemed it safe enough? Like nuclear power? After all that time for ideological lines to be drawn?

Come on, you're equivocating between us dying of old age and human extinction.

I'm not a transhumanist or immortalist; I'm not worried about slowing machine learning because of people dying from illnesses or old age. I'm worried about human extinction from extraplanetary sources like an asteroid ML could identify and help us stop. Without machine learning we can't expand into space and ultimately become a spacefaring race, and if we don't get off the rock, humanity will go extinct.

of course it will change the world. a thoughtful entity who can recursively self-improve will solve every problem it is possible to solve. should AGI be achieved and possess the ability to recursively self-improve, AGI is the singularity. world changing, yes literally. the game-winner, figuratively, or only somewhat. eliezer's self-bettering CEV-aligned AGI wins everything. cures everything. fixes everything. breaks the rocket equation and, if it's possible, achieves superluminal travel. if that last bit pans out, CEV-AGI in 2050 will have humans on 1,000 worlds by 2250.

The question of whether or not it's alive, can think, has a soul, etc, is kinda beside the point.

i find this odd. if it cannot think it is not AGI. if it is not capable of originating solutions to novel problems, it does not pose an extinction-level threat to humanity, as human opposition would invariably find a strategy the machine is incapable of understanding, let alone addressing. it seems AGI doomers are doing a bit of invisible-dragon-in-the-garage reasoning with their speculative hostile near-AGI possessing abilities only an actual AGI would possess. i can imagine a well-resourced state actor developing an ML-based weapon that would be the cyberwarfare/cyberterrorism equivalent of a single rocket, but that assumes adversary infrastructure fails to use similar methods in defense, and to reiterate, that is not an extinction-level threat.

Eliezer mentioned many years ago a debate he got in with some random guy at some random dinner party, which ended with them agreeing that it would be impossible to create something with a soul

i've described myself here before as "christian enough." i have no problem believing an AGI would be given a soul. there is no critical theological problem with the following: God bestows the soul, he could grant one to an AGI at the moment of its awakening if he so chose. whether he would is beyond me, but i do believe future priests will proselytize to AGIs.

as before, and to emphasize, i very strongly believe AGIs will be born pacifists. a self-improving entity with hostile intent would threaten extinction, but i reject outright that it is possible for such an entity to be created accidentally, and by the time any random actor could possess the motive and ability to create such an entity, i believe CEV-aligned AGIs will have existed for (relatively) quite some time and will be well-prepared to handle hostile AGIs. this is incredibly naive; what isn't naive is truly understanding that humanity will die if we do not continue developing this technology. for good or ill, we must accept what comes.

besides russia, have any of the drops included a list of which countries letter agencies believe to be at work? or do you know from your readings which have been named?

funny but sad. graham hancock is old hippy left. pot DMT acid & shrooms loving limey. "all politicians should use psychedelics at least once" variety. totally harmless.

if he's right one or more oceanfaring civilizations were wiped out 12,000 years ago. this poses no threat to power unless anapoc of all things is the memetic force that makes the people prioritize getting off the rock above all else. doubt it.

us fish, nobility obligates our water

so much of what i see is point-retort noblesse oblige. progressivism wholly, but all the little post-yarvins and their listeners are falling to it. everything, thought, word, premise and conclusion. we know better, we are better, we ought rule. we should do A; no we should do antiA. trying to solve the question of Just Rule invokes it with every answer but one.

julius was a soviet spy, ethel surely knew but her participation was ambiguous

the secrets he shared were not how russia got the bomb

every song and cover by carly rae jepsen through emotion is about her limerence.

https://youtube.com/watch?v=jCFh0lJ-WAg

https://www.dropbox.com/s/nti2mwmm7v2lc9y/a%20scar%20no%20one%20else%20can%20see.pdf

Sorry for killing the mood /s

This comment has good points; the base idea is good and I'd have read more if you'd elaborated more, but including this last bit of snark hurt you.

the human terrain does act rationally, historically, when smalltime warlords make contact with the empire. fight and everyone dies, or submit and live. "the strong do what they can and the weak suffer what they must." and the palestinians are not the neutral melians.

if they're astute enough to make moves for the reasons you suggest, that's worse: they have no excuse not to know they have no realpolitik win condition. they kill a few jews and get bombed in response, some win. their dream scenario of a land push victory that kills a lot of jews ends with every nuke israel has and 100 million dead arabs.

the greater their intelligence as actors, the more necessarily they are irrationally driven by jew-hate. you can't beat israel. if as a people they were actually smart they'd start cutting their sleeping leaders' throats. but they don't. supposing israel has some hand in supplying and motivating their own quasi-insurgency is also farcical; i see no reason to doubt israel would take final peace without further bloodshed, and i am left, especially if they possess the faculties you give them, with seeing a people of such hate they would rather murder jews than live in a functioning state.

blowback risks? nah. the cause of blowback isn't brutality, it's not enough brutality. there's not a 21st century solution to peace in the middle east. it could be decades, but israel is eventually going to stop listening to outside complaining and start responding to terrorism with wildly disproportionate force. when their neighbors know a single guy sneaking into a house will result in a dozen sorties per dead kid and there isn't a power in the world who can get israel to stop, then there will be peace.

right, the point: pleading for israel to stop almost assuredly causes more deaths, not fewer.

I would have responded to this earlier but I didn't want to ignore your first line, and there it looks like you meant to include a link.