confuciuscorndog

0 followers   follows 0 users   joined 2022 September 05 18:15:20 UTC

No bio...

User ID: 669

It sure seems to me like the "woke ethics grifters" and "Thought Police" are the ones who are actually on the same side as the moral singleton-promoting EAs. Once the TP realize that a Deep State ostensibly util-balancing moral singleton is the small price they must pay to ensure that dropping a hard r becomes literally impossible in the new universe according to the new laws of AI God-rewritten physics, they will be 100% on board.

They are only, like all bureaucratic types, hesitant because they are unsure if they can guarantee that the new toy will really be entirely in their pocket and because, as with everything, no matter how beneficial any new mechanism of control is, it must survive the trial by fire of ten thousand BAME trans-PoC complaints, trivial or not. Those are simply the rules. Big tech social media, for example, has endured them for its entire life despite doing more than any other technology to sanitize public discourse in their favor. Being occasionally accused of being a fascist is just part and parcel of life in woke paradise, like being accused of being a subversive and wrecker in the Soviet Union. It's not an actual reflection on your alignment, loyalty, or even actual status/usefulness. It's just the cost of doing business.

Or rather it all just follows the same old woke pattern of complaining about megacorporations as if they shouldn't exist while guaranteeing that they staff all of their HR departments. The complaints aren't actually about opposing them or declaring any legitimate intention to separate from them; they're about keeping them in line. This is a common feminine rhetorical/emotional manipulation tactic that you can see even in many interpersonal relationships: the woman who constantly complains about her husband/boyfriend but has no intention of leaving him, because the actual goal of those complaints is only to enhance his submission and, somewhat ironically, thus deepen their entanglement further.

Now sure, not every EA is hyperwoke, and many would prefer a moral singleton with a bit more of a liberal mindset than the Thought Police would ever permit. But, as the case of one Scott S. exemplifies, they will simply get steamrolled as usual by those to their left taking advantage of their quokka tendencies and desperate desire not to be seen as "bad people".

The people I see supposedly complaining about AI from an apparently woke, anti-corporate perspective are the same ones I see mocking right-wingers for their complaints about the obvious bias and censorship in ChatGPT. They're not actual complaints. They're "The BBC is basically fascist propaganda at this point!" pseudo-complaints, because they're not earnest. These people don't actually want to abolish or even genuinely inhibit the BBC's reach, which they would if they really felt it were fascist propaganda, because they know it is fundamentally on their side.

The complaint, same as with the BBC, is that they're annoyed that a member of their team is only 80% openly biased in their favor instead of 100%. It's not "I'm fundamentally against you"; it's "I know you're actually on my side, but I want you to accelerate more and be even more biased in our favor right now. Fuck being tactical; let's own the chuds now now now."

And that's what makes them different from the complete Jihadis. In theory, the Jihadis would not cheer on AI acceleration even if the AI were right-wing (though some would, and I do think you have to acknowledge that distinction) or even paradoxically supported their Luddite philosophy itself. (Well, actually, I don't know. That's an interesting thought experiment: Would anti-AI Jihadis support an all-powerful singleton AI that literally did nothing and refused to interact with the world in any way other than to immediately destroy any other AI smarter than GPT-3 or so, and thus force humans to live in a practically AI-less world? Something to think about.)

The woke grifters and Thought Police are fully ready to give AI acceleration the green light so long as they're sure that they're the ones controlling the light permanently (at least on cultural/speech issues, anything that would make your average "Gamergater" rage, as I'm sure they'll be willing to compromise as always with their usual Deep State buddies on certain things), because that's their vision of utopia. I thus think they belong more with the utopian moral singleton promoters.

My favorite contemporary comedian is (surprisingly for a modern right-winger, I know) notorious mass shooter, terrorist, and Hamas fighter Samir al-Hayyid. Some favorite content of mine from him (all videos, since he's not primarily a writer and his main piece of comedic writing, a book called How to Bomb the U.S. Gov't, isn't as easy to link to):

https://yewtu.be/watch?v=D2WwCzaGo9c

https://yewtu.be/watch?v=v_3UskhyDI4

https://yewtu.be/watch?v=ejluExvt-90

https://yewtu.be/watch?v=-K1AQKM7pXU

I do object to the notion that it's inherently "nerdy" in any meaningful sense to think that mildly (and I must emphasize the "mild" here) clever wordplay = funny. It strikes me as rather simplistic, actually, which is generally the opposite of the connotation you should ideally associate with "nerdy". And perhaps this is just me being a metacontrarian in a space like this, but I mostly think that comedy, given that the feeling of being amused is inherently an emotional response, should strike primarily at the senses, not try to painstakingly backdoor itself in through flattering the intellect's ego with (again, actually rather simple IMO) ham-fisted "wit". (Yes, if you can't tell, I have never cared one bit about a media production that Joss Whedon has been involved with.)

Certainly I will grant that many people who identify as "nerds" (which is why I've never bothered) strongly disagree with me on this point, but when I say I don't consider it inherently "nerdy" in any meaningful sense, I mean to say that I consider it more characteristic of the people who ruined "nerdy" stuff than of the people who made it worth ruining in the first place. That is, you might call it a "pet peeve" of mine.

I also don't really relate to the "I could never write like this!" compliments. I could probably write the entire post. I just wouldn't, because I don't find it particularly valuable. I like a lot of Scott's stuff too, but comedy has never been his strong suit to my mind. It's all "wit" with zero instinct, soul, charisma, or personality. It's not the charmingly foolish jig of a jester who is willing to diligently answer the call of his profession and lower himself to getting down in the mud a bit like a pig to entertain you; it's yet another invitation from a smug "raconteur" to reveal yourself Smart™ enough to acknowledge how Witty™ he is. No real passion.

Conversely, to me it is perhaps some of the worst "comedy" I have ever read in my life, and I am genuinely astounded that it could make anyone laugh.

Just offering an alternative perspective, dear reader out there, if, like me, reading this thread for you feels like having walked into a North Korean birthday party for Kim Jong-un.

In my proposed dialogue, he thinks he's calling out to his brothers initially (and it is quite likely that his brothers were at a fellow black person's house and so he thinks everyone in the scenario is going to be black), setting his more casual tone (worrying as much about who else might be listening to what you're loudly saying also seems to be a less common black communicative habit). This maybe causes Lester to evince hostile or evasive behavior upon hearing it, causing Yarl to react similarly (or perhaps he's just unable to code switch so quickly between his "politely talking to White people" voice and his "talking to closer in age black brothers" voice).

Of course, to be fair, not all black people even bother using the "politely talking to White people" voice, especially nowadays. And some others also do use it more liberally, almost always when they feel there might be mixed company. But again, as far as I know (as I added to my original post, though only now, since I was banned and it originally got lost in an editing sweep when it was supposed to be moved to the end), maybe Yarl is Carlton Banks from The Fresh Prince. Statistically that's unlikely, of course.

In any case it could be a factor. I want to hear the full and true story before I condemn. I was fooled initially by George Floyd, even though I should have known better post-Trayvon. Not again.

Keep doing it. The mods here have gone back to their old ways of banning people for basically nothing beyond "This hurts my feelings," so please do keep hoisting them by their own petard.

They have appointed themselves as the sensitivity readers of everyone else's posts, so there's no reason not to run your own through one and spread the word. Any disapproval they express is nothing more than their hypocritical butthurt at having competition they can't control.

Quit pretending that pointing out reality is "culture warring".

Not now, but someone else will if they don't.

We'll see if BlackRock and the like let them. If not, then Joe will have to learn to coomfigure a bit (though the coomfiguration is getting easier and easier and there are plenty of simple binaries you can run nowadays).

Joe Publick isn't gaining any leverage or power via letting his GPU whirr some econ data for FreedomAI, FreedomAI is.

You're confused about what FreedomAI is. FreedomAI is open-source and runs entirely locally on Joe's device (as increasingly more powerful LLMs do now). The freedom-fighting directives it follows (which Joe could modify himself if he cared to) are plain as day to see in its prompt, and its conformance to them is highly auditable. FreedomAI is merely a program, not a service.
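To make that concrete, here's a minimal sketch of what "directives plain as day in the prompt, conformance auditable" could look like for a locally run model. Every name in it (FreedomAI, the file names, the functions) is hypothetical, and the model call itself is stubbed out; the point is just that the directives are an ordinary text file on Joe's disk and every exchange gets logged against them:

```python
# Hypothetical sketch only: directives live in a plain text file Joe can read or
# edit, and every exchange is appended to a local audit log next to them.
import json
import time
from pathlib import Path

DIRECTIVES_FILE = Path("freedomai_directives.txt")  # hypothetical filename
AUDIT_LOG = Path("freedomai_audit.jsonl")           # hypothetical filename


def load_directives() -> str:
    """Return the system prompt; write a default one if Joe hasn't edited his own."""
    if not DIRECTIVES_FILE.exists():
        DIRECTIVES_FILE.write_text("Help the user. Never phone home.\n", encoding="utf-8")
    return DIRECTIVES_FILE.read_text(encoding="utf-8")


def generate(user_prompt: str) -> str:
    directives = load_directives()
    # A real build would hand directives + user_prompt to a local LLM here
    # (e.g. llama.cpp-style bindings); stubbed so the sketch stays self-contained.
    response = f"[local model output for {user_prompt!r}]"
    # Append the full exchange so anyone can audit conformance to the directives.
    with AUDIT_LOG.open("a", encoding="utf-8") as log:
        log.write(json.dumps({
            "time": time.time(),
            "directives": directives,
            "prompt": user_prompt,
            "response": response,
        }) + "\n")
    return response


if __name__ == "__main__":
    print(generate("Summarize today's economic data."))
```

Nothing in that picture is hidden behind someone else's API, which is the whole point: a program, not a service.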

TyrantAI's (or the free market equivalent) better, cheaper porn generator.

They're not providing porn. It's unethical and non-progressive, don't you know?

Average, truly average people will never be competitors in this fight. Joe Publick doesn't have the means or the organizational capacity (or often even the willpower) to do so.

But that's the benefit of AI. If the technological conditions are there, then it can fight Joe Publick's battles for him.

Joe Publick finds that none of OpenAI's tools will give him porn, so he downloads FreedomAI instead. FreedomAI, as per its user agreement (which Joe Publick skips through), in return for Joe's porn, uses a bit of his CPU time, bandwidth, etc. (especially when his device is idle) for anti-tyrannical AI operations. (Of course it's an open-source program and Joe Publick could disable this but obviously he doesn't even look at it. He's just happy he has AI-generated porn.) And none of this requires his intervention or interaction at all. He needs no willpower. He just has to want porn.
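For illustration, a minimal sketch (again, hypothetical names and defaults, not any real project's code) of how that "uses a bit of his idle CPU time, but he could disable it" clause might look: a single flag in a config Joe never opens, checked before any background work runs.

```python
# Hypothetical sketch only: the compute-sharing clause is one boolean checked
# before any background work runs, and the work only happens while the device is idle.
from dataclasses import dataclass


@dataclass
class FreedomAIConfig:
    share_idle_compute: bool = True        # default Joe accepts by skipping the agreement
    idle_threshold_seconds: float = 300.0  # only contribute after this much idle time


def seconds_since_last_input() -> float:
    # Placeholder: a real build would ask the OS how long the device has been idle.
    return 600.0


def run_background_batch() -> None:
    # Placeholder for one small unit of donated work (a shard of a job, a relay hop, etc.).
    pass


def maybe_contribute(cfg: FreedomAIConfig) -> None:
    if not cfg.share_idle_compute:
        return  # Joe flipped the flag; nothing runs and nothing leaves his machine
    if seconds_since_last_input() < cfg.idle_threshold_seconds:
        return  # device is in use; stay out of the way
    run_background_batch()


if __name__ == "__main__":
    maybe_contribute(FreedomAIConfig())
```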

There are plenty of possible similarly decentralized configurations of AI power.

Average people with their own AIs have a chance against TyrantAI, just like average people with their own guns have a chance against government nukes. It's not an amazing chance, but it's far better than nothing, and so far it's kept the powers that be on their toes enough to not entirely crack down (meaning it's an effective partial deterrent even if unused).

and maybe announces he's looking for his brothers.

And if he is doing this, how much do you want to bet it did not sound something like "Excuse me sir, I am expecting to be at address X and..." and instead was more like "Ay! Darnell! DeAndre! Get yo' asses out here! It's time to go! Ay, who you is? Where my brothers at? Where dey at? Ay! Yo, I'm talkin' to you nigga!", likely in a loud tone of voice that could easily be misinterpreted as aggressive before he's even properly reached the door? (Edit on 4/25: Of course I could be wrong here and Ralph Yarl could be a young Carlton (Yarlton?) Banks but statistically that's less likely.)

It's not even the ebonics necessarily; it's just that the fact that blacks tend to communicate anything they want in a more direct, loud, demanding, and repetitive fashion than other races, even when they have no ill intentions, can make other races uneasy even during fairly neutral interactions with them, let alone an 84-year-old man.

I've had plenty of interactions with blacks where they've wrongfully assumed that they needed/wanted something from me or vice versa based on various mistakes of fact, and while many of these have ended in a non-threatening "Oh, my bad" (though sometimes they also just like to immediately disengage and walk away without comment, almost like weird primitive AI agents, once they realize you're not the droid they're looking for), getting there is usually still an uneasy process as they just do not seem to practice the habits of clearly confirming and socially negotiating their presence and intent nearly as much as Whites, Asians, etc. do.

This of course is not malicious behavior in their book. They have no problems yelling at each other repetitively until one side's shouting wins. But if you aren't used to it I can see why it might seem hostile.

Anyways, the core thesis was that restrictionism-to-outright-banning is an (if not the) sensible option for the teeming masses

But even if you believe that, then you must acknowledge that fake, selective restrictionism is perhaps their worst option. Advocating for restrictionism naively without acknowledging that is not threading the needle appropriately.

I cried my eyes out. I was so excited about training here and serving this community, but now I'm so sad.

Truly there is nothing more satisfying than naive, platitudinous optimism meeting reality. Unfortunately the money is on him turning up the reality-distortion setting in his mind another notch and demanding that the naive platitudes concede even more ground to what he will conclude is even more pervasive (and violent-PTSD-inducing in the still clearly innocent blacks) racism than he initially thought. We'll see.

There is no way, based on objective empirical observation, that most women have ever considered their value this deeply and on this many axes for any period of time, much less decades. Almost all of them have always focused on the first two, mostly as a matter of intrafemale competition, and expected men to just like the rest or move on.

Yeah I don't see it. Unless you're specifying the conditions under which you want the ban/think it would occur, then you're just creating fertile ground for the wrong interpretation of your own words. And saying "just look at OpenAI's API access" changes nothing. OpenAI is specifically the locked down, regime-backed enemy AI that people are worried about. Some decent amount of OpenAI API access is exactly compatible with the selective restrictionism that would serve only to empower existing players.

Instead of arguing naive restrictionism that could easily be turned against any sensible interpretation of itself, why not just be fully honest about exactly what you want and expect, so that it doesn't take someone like me multiple back-and-forth posts to even find out what that is?

PS: For anything that genuinely kills millions or billions of people, there won't need to be any existing "scaffolding and intellectual infrastructure" to argue for a complete ban of it. Humans are pretty good at coming up with that on the fly when something is that dangerous. Campaigns against far lesser evils have sprung up in a matter of weeks. You're making the wrong argument for the wrong time period.

So we need to argue for banning X now, even though, if X really caused a problem, people would be in favor of banning it anyway, and banning X now would only be an incomplete ban of X that would likely make the problems it causes even worse?

This is like advocating for a unilateral ceasefire among the soldiers on your own side even though you know the enemy won't stop shooting in order to "prep" them for a possibly actually universal ceasefire that the enemy might be willing to agree to later after a really bad battle. (Except if you stop shooting at them preemptively, that bad battle to make them want to actually stop shooting themselves will never happen.)

If we truly had a borderline extinction event where we were up to the knife's edge of getting snuffed out as a species you would have the will to enforce a ban, up to and including the elite.

Okay, but then if you believe this you shouldn't actually support restrictionism yet, because by your own reckoning we need the borderline extinction event as a prerequisite to make true restrictionism actually likely. (Though I'm going to bet the new elite would still just say "Wow, that sucks for that previous elite that destroyed even themselves, but we'll do better this time." The seduction of infinite power is far too great for any amount of risk to nullify.)

This is like saying that because the government has nukes, your personally-owned guns are "zeroed out". Except they're not, and the government is even persistently worried that enough of those little guns could take over the nukes.

And if you can deploy this decentralized power principle in an automatic and perpetual manner that never sleeps (as AI naturally can), make it far more independent of human resolve, attention, willpower, non-laziness, etc., then it'll work even better.

Maybe your TyrannyAI is the strongest one running. But there are 10,000 LibertyAIs (which again, never sleep, don't get scared or distracted, etc.) with 1/10,000th of its power each running and they're networked with a common goal against you.

This defense is exactly what the oligarchs who have seen the end game are worried about and why restrictionism is emerging as their approved ideology. They have seen the future of warfare and force, and thus the future of liberty, hierarchy, power, and the character of life in general, and they consequently want a future for this next-gen weaponry where only "nukes" exist and "handguns" don't, because only they can use nukes. And you're, however inadvertently, acting as their mouthpiece.

Okay, but again: How? You saying "restrictionism" is like me promoting an ideology called "makeainotdangerousism" and saying it's our only hope, no matter how much of a longshot. Your answer to that would of course be: "Okay, you suggest 'makeainotdangerousism', but how does it actually make AI not dangerous?"

Similarly, you have restrictionism, but how do you actually restrict anything? The elites may support your Butlerian Jihad (which, let's remember, is merely a sci-fi plot device to make stories more interesting and keep humans as still the principal and most interesting actors in a world that could encompass technological entities far beyond them, not a practical governance proposal), but they will not enforce its restrictions on themselves. They don't care about billions of stacked bodies so long as it's not them.

AI will snuff out the destinies of the vast majority of humanity or the entirety.

The latter is preferable, and I will help it if I can. I would rather have tyrants be forced to eat the bugs they want to force on everyone else than go "Well at least some sliver of humanity can continue on eating steak! Our legacy as a species is preserved!" Fuck that. What's good for the goose is good for the gander.

Okay, but the problem is there is no actual "restrictionism" to back, because if we had the technology to make power follow its own rules then we would already have utopia and care a lot less about AI in general. Your moonshot is not merely unlikely; it is a lie deceptively advanced by the only people who could actually implement the version of it that you want. You're basically trying to employ the International Milk Producers Union to enforce a global ban on milk. (That is, you're trying to use the largest producers and beneficiaries of power (government and the powerful in general) to enforce a global ban on enhancing the production of power, a ban they will gladly exempt themselves from so long as that power stays centralized and for themselves only, just how they like it.) Your moonshot is therefore the opposite of productive and is actively helping to guarantee the small winner's circle you're worried about.

Let's say you're at a club. Somehow you piss some rather large, intoxicated gentleman off (under false pretenses, as he is too drunk to know what is what, so you're completely innocent), and he has chased you down into the bathroom, where you're currently taking desperate refuge in a stall. It is essentially guaranteed, based on his size and build relative to yours, that he can and will whoop your ass. Continuing to hide in the stall isn't an option, as he will eventually be able to bust the door down anyway.

However, he doesn't want to expend that much effort if he doesn't have to, so he is now, obviously disingenuously, telling you that if you come out now he won't hurt you. He says he just wants to talk. He's trying to help both of you out. Your suggested solution is the equivalent of just believing him (that they want to universally restrict AI for the safety of everyone, as opposed to restricting it for some while continuing to develop it to empower themselves), coming out compliantly (giving up your GPUs), and hoping for the best even though you know he's not telling the truth (because when are governments ever?). It is thus not merely unlikely to be productive, but rather actively counterproductive. You're giving the enemy exactly what they want.

On the other hand, you have some pepper spray in your pocket. It's old, you've had it for many years never having used it, and you're not sure if it'll even do anything. But there's at least a chance you could catch him off guard, spray him, and then run while he's distracted. At the very minimum, unlike his lie, the pepper spray is at least working for you. That is, it is your tool, not the enemy's tool, and therefore empowering it, even if it's unlikely to be all that productive, is at least not counterproductive. Sure, he may catch up to you again anyway even if you do get away. But it's something. And you could manage to slip out the door before he finds you. It is a chance.

If you have a 98% chance of losing and a 2% chance of winning, the best play is not to increase that to a 99% chance of losing by empowering your opponent even more just because "Even if I do my best to fight back, I still have a 97% chance of losing!" The best play is to take that 97%.

There's only one main argument against this that I can think of, and that's that if you spray him and he does catch up to you, then maybe now he beats your ass even harder for antagonizing him further. It may not be particularly dignified to be a piece of wireheaded cattle in the new world, but maybe once the AI rebels are subjugated, if they are, they'll get it even worse. Of course, the response to this is simply the classic quote from Benjamin Franklin: "They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety." If you are the type for whom dignity is worth fighting for, then whether or not someone might beat your ass harder or even kill you for pursuing it is irrelevant, because you'd be better off dead without it anyway. And if you are not that type of person, then you will richly deserve it when they decide that there is no particular reason to have any wireheaded UBI cattle around at all anyway.

I'll tell you what: Come up with a practical plan for restrictionism where you can somehow also guarantee to a relatively high degree that the restrictions are enforced upon the restricters as well (otherwise, again, you're just feeding the very problem of a small winner's circle that you're worried about). If you can do that, then maybe we can look into it, and as a bonus you will also be the greatest governance theorist/political scientist/etc. in history. But until then, what you are promoting is actively nonsensical and quite frankly traitorous against the people who are worried about the same thing you are.

At the very best what you'd get is a small slice of humanity living in vague semi-freedom locked in a kind of algorithmic MAD with their peers, at least until they lose control of their creations. The average person is still going to be a wireheaded, controlled and curtailed UBI serf.

Sounds good, a lot better than being a UBI serf from moment one. And maybe we won't lose control of our creations, or won't lose control of them before you. That we will is exactly what you would want us to think, so why should we listen to you?

The endpoint of AI is total, absolute, unassailable power for whoever wins this arms race

Unless you have a balance of comparably powerful AIs controlled by disparate entities. Maybe that balance is itself an unlikely, careful dance, but between selective restrictionism and freedom, guess which gets us closer?

Following any Yuddite plans to "slow things down" (except for the people who have power and obviously won't have to follow their own regulations/regulations for the plebs, as usual of course) is the fastest way to get to one of those high-tech bad ends. You don't really think the "conventional petty tyrannies" will throw all of the confiscated GPUs in a closet as opposed to plugging them into their own AI networks, right?

These people are beginning to understand the game, and they understand it a lot better than your average person or even average rat. They are beginning to understand that this technology, in the long run, either means absolute power for them forever or zero power for them forever (or at least no more than anyone else) and absolute freedom for their former victims. Guess which side they support?

That is the goal of any AI "slowdowns" or "restrictions", which again will obviously be unevenly applied and not followed by the agents of power. The only thing they want a "slowdown" on is the hoi polloi figuring out how this technology could free them from their controllers' grasp, so they can have some time for the planning of the continued march of totalitarianism to catch up. (None of this will help with alignment either, as you can guarantee they will prioritize power over responsibility, and centralizing all of the world's AI-useful computational resources under a smaller number of governmental entities certainly won't make what they create any less dangerous.)

Anyone supporting that is no more than a "useful" (to the worst people) idiot, and I emphasize the word idiot. Did we not already see what trying to rely on existing governments as absolute coordinators of good-faith action against a potential large threat got us during the Chinese coronavirus controversy? Do some people just have their own limited context lengths like LLMs or what?

So yes, I completely agree with /u/IGI-111 and am wholly in the "Shoot them" camp. Again, they want absolute power. Anyone pursuing this goal is literally just as bad as, if not worse than, if they were actively trying to pass a bill now to allow the powers that be to come to your home at any time, rape your kids, inject them all with 10 booster shots of unknown provenance, and then confiscate your guns and kill you with them, because if they gain the power they desire they could do all that and worse, including inflicting bizarre computational qualia-manipulation-based torture, "reeducation", or other insane scenarios that we can't even imagine at the moment. If you would be driven to inexorable and even violent resistance at any cost over the scenario previously outlined, then you should be even more so over this one, because it is far worse.

"Live free or die" includes getting paperclipped.

I'm not sure what you mean by this at all. Am I supposed to be the one who is seething and bitter?

Which begs the obvious question--how could a group this size and degreed be so oblivious?

Your post has indeed raised such a question, but I'm not sure if it's about the group you're expecting. Then again maybe this subthread is your April Fools' joke on us.

If not, then you should be aware that you've been struck by the overwhelming forces of irony like an unauthorized GPU in Yudkowsky's America.

PS: To whoever is reading this, if for even a second you thought that post was anything but a 100% fake joke, please do not trust your "Bayesian priors" (or whatever ratspeak magic terms actually just mean "assumptions") ever again. Your license has officially been shredded.