
ArjinFerman

Tinfoil Gigachad

2 followers   follows 4 users  
joined 2022 September 05 16:31:45 UTC
Verified Email

User ID: 626

No bio...

Of course all politicians have a strong ego, but I do not think hers was pathological. I cannot imagine Merkel watching the Tagesschau, noticing that she was not mentioned once, and deciding to do something about it.

Yeah, and I don't think that's how Trump operates either. Ages ago, before Trump got into politics, when everybody and their dog was opening startups, I watched a video from some techno-entrepreneur whose name I can't even recall. He was talking about the different motivations for starting a company. Money was the obvious one, and he brought up one or two more that I can't remember, but the one that stuck in my head was "legacy". He gave Trump as the example for that one, as someone willing to forgo profit just to put his family name on top of buildings. For me this remains the best explanation for his behavior. It's probably the whole reason he ran for president: now his name will have to be recorded in history books.

Getting rid of nuclear, letting in refugees, and dismissing all concerns with a one-liner show the same kind of obsession with how history will remember you, in my opinion.

Of all the honors Obama received, the Nobel is the one he deserved least, and one of the weakest Nobels awarded...

Sadly, Trump severely lacks awareness of how the mind of the Nobel committee works....

...At this point, he would have to persuade the Middle East to live in harmony and friendship, negotiate with Russia and China for a treaty which reduces nuclear weapon stockpiles by 90% and be hailed as 'The Peacebringer' by archangels (or equivalent) representing at least three world religions before he had a shot at getting his own instead of a hand-me-down like Goebbels or Infantino's sad participation trophy.

That's a much stronger condemnation of the Nobel Peace Prize, and of the entire social class responsible for its stewardship, than it is of Trump.

In accordance with Nobel's will, the (Nobel Peace) prize is selected by the Norwegian Nobel Committee, a five-member committee appointed by the Parliament of Norway, unlike all the other awards, which are chosen by the Swedish Nobel Committee.

https://en.wikipedia.org/wiki/Nobel_Peace_Prize

Is your contention that these discussions are predicated on “full automation” scenarios while you think that there aren’t any obstacles stopping an AI-powered tyranny from happening now?

Sort of. My contention still boils down to "under-discussed": the issues that are more likely to happen get less focus than the ones that are less likely. The "full automation" thing is an example of this. AI developing to the point where it replaces literally everyone, or the vast majority of people, may happen somewhere down the line, but a scenario where everybody still has a job, because it makes more sense to let AI specialize in data processing while humans focus on menial jobs, is more likely, and unpleasant enough to warrant discussion.

I only had a skim of the essay you linked, and it's indeed more like what I'd like to see, but not quite there yet.

on something you don't even seem to think AI is needed to make happen.

Huh? No, AI is necessary to make it happen, but the current version that we have is sufficient. Like you point out, it would make no sense for me to bring it up in an AI conversation otherwise.

Their most famous proponent, Big Yud, wants to nuke the AI datacenters.

Yes, because he's obsessed with fantasy doomsday scenarios, rather than far more realistic ones. That's my criticism.

And as an aside, all the thinkers I've read that you would consider AI-Safety aligned have in fact voiced concerns about things like turning drones over to AI.

You're just describing a subset of unaligned AI where the AI is aligned with a despot rather than totally unaligned

Everything I saw from the rat-sphere on the subject, including the concept of "alignment", assumes AI will have agency and goals that it will be pursuing. None of that is necessary for the dangers that AI will bring.

Or, if the general intelligence isn't necessary for this, then it's a bog standard anti-surveillance stance that isn't related to AI-safety.

Again, defining the field in such a way that it ignores the most likely risks is exactly the issue I have with AI-safety.

The AI-Safety contingent would absolutely say that this is an unaligned use of AI and would further go on to say that if the AI was sufficiently strong it would be unaligned to its master and turn against their interests too.

How is that useful? I don't care about what they call "aligned" and "not aligned", I care about how a given scenario could come about, and how it could be prevented (and no, "nuke data centers" doesn't count). This would be another part of the criticism I have of the entire field.

It's not just automation; the discussed scenario was "AI gets just strong enough to keep the resulting bunch of purposeless, humiliated humans under control". My emphasis would be on the "under control" part. Even when discussing automation, they have a tendency to veer off into fantasy scenarios of full automation, when the more likely ones involve a comparative-advantage-mediated push towards menial labor in service of the AI god.

In any world where AI is good enough to replace all or most work then it can be put towards the task of improving AI.

Yeah, "make a completely unsubstantiated statement in order to justify a singular focus on fanciful scenarios that you don't know will ever take place" is exactly the sort of thing that prevents me from taking the field seriously.

Alignment is about existential risk, we don't need a special new branch of philosophy and ethics to discuss labor automation

The issue isn't even labor automation, it's things like "we now have technology that makes the world of 1984 possible", and we're already there without even reaching full labor automation. It's just a question of building out infrastructure, and this isn't even one of the more imaginative scenarios.

Calling it non-existential is cope. As a threat it's far more likely, and we have zero countermeasures for it. Focusing on scenarios that we don't even know are possible, over ones we know are possible and are visibly heading towards, is exactly my criticism.

Yeah, but... how is that news? Trump having a massive ego is something anyone who took one look at the guy can tell. What's more, I don't know if it can be any other way for someone in his position. Years ago I was watching one of Ethan van Sciver's streams (an ex-DC Comics guy, for the unfamiliar), and he got into some drama with some indie guy from Brazil. The Brazilian guy posted something on Twitter to the effect that he was making the best comic book in the history of the world, and when the stream audience saw that, superchats started rolling in taking the piss out of the guy. Funnily enough, van Sciver came to his defense: "You guys don't get it, he's doing it right. You need to have a massive ego in this field, because if you don't, the amount of negative feedback you get will make you crumble". If this is true for comic book artists, I really don't see how it can be any other way for politicians.

You could try to argue that his particular brand of narcissism is particularly destructive for a world leader, but is it really that unique? Was Angela Merkel bringing over ~1.5 million Syrians and Afghans, and brushing off all concerns with a mere "we'll manage it", all that different?

Yeah. These middle-ground scenarios are so absurdly under-discussed that I can't help but see the entire field of AI-safety as a complete clownshow. It doesn't even take a lot of imagination to outline them.

Uh... now that you mention it, I'm not sure. I could swear that in the past it was a one-man operation.

It does sound like a reasonable assumption at first, if we want to be honest.

If we want to be honest, no, it doesn't. It requires having absolutely no theory of mind to believe that what people hate about angry purple-haired thots is them being angry and having purple hair. If they're particularly high on their own supply, they might also believe they hate them for being women. Only with these assumptions does the idea make any semblance of sense.

if your goal is to create a fictional right-wing character who’s a repulsive woman by normie standards, surely this task cannot be that hard, can it? I mean, maybe just make her an obese, frumpy, obnoxious chavette. Maybe also a single mother and a smoker to boot. There’s no way such a character will compel thirsty dudebros to create piles of fanart of her.

No, you still don't get it. You'd have to make her a literal goblin, and even that wouldn't guarantee the effect.

Side note: the Know Your Meme guy is without doubt the single best living journalist/editor on planet Earth.

Are we just gonna skip over how Trump ended up with a mugshot being taken, and "muh 34 felonies"? If you're including pardons, why ignore Biden's signoff pardoning Fauci, his son, and a bunch of other people for anything they could possibly have been charged with, before any accusation was even made?

I think that the politicization of the DoJ is bad no matter who is in charge, and I will grant you that the Dems started the cycle, but clearly Trump drove it to new heights.

That's a reasonable position, but I don't know if you can derive "turnabout is fair play" from it.