
Primaprimaprima

Bigfoot is an interdimensional being

2 followers   follows 0 users   joined 2022 September 05 01:29:15 UTC

"...Perhaps laughter will then have formed an alliance with wisdom; perhaps only 'gay science' will remain."


				

User ID: 342

Primaprimaprima

Bigfoot is an interdimensional being

2 followers   follows 0 users   joined 2022 September 05 01:29:15 UTC

					

"...Perhaps laughter will then have formed an alliance with wisdom; perhaps only 'gay science' will remain."


					

User ID: 342

I don't think anyone understands what "AGI" means

Most "confusion" over what AGI means seems to come from people who want to shift the goalposts to make ridiculous claims (e.g. that GPT-3 is "already an AGI").

One thing that would clearly qualify an AI as an AGI is if it could do everything a human could do. Obviously this would entail that it has agency, that it has some sort of body that it can use to interact with the physical world, etc.

Maybe some less powerful systems could qualify as AGI as well, e.g. a non-embodied agent that we could only interact with through text. But the fact that there are edge cases doesn't mean that the concept of AGI is particularly difficult to grasp, or that most people don't intuitively understand what is meant by the concept.

A fact-checker-checker, a Regime-meme detector, a metaverse scrambler

Come on man.

Automated chaff generator against "radicalization experts"

The "radicalization experts" would of course be AI bots themselves, running on more powerful server clusters than whatever The Resistance could cobble together, so they would be able to respond to both human dissidents and the AI-generated content meant to distract them, without missing a beat.

I think we have a serious issue with diversity of opinion.

Any forum that discusses pretty much anything will tend to develop consensus viewpoints over time. It's especially bad with the culture war, because leftists will mostly self-select out of participating in any forum where people are allowed to express reactionary viewpoints.

I wish we had more diversity of opinion, but there's only so much we can do to foster that, unfortunately.

Consider proselytizing at /r/stupidpol. They're anti-woke Marxists.

Does the existence of openly available cryptographic tools and communication channels, in your mind, undermine the power of state security to quash dissidents?

Not really.

The US government did a perfectly fine job of crushing the alt right, and it had nothing to do with their communications not being secret enough.

If not, why does Beijing insist on everyone using not Matrix/Element or Briar or even Telegram (with keys beyond their reach) but WeChat, where the Tovarisch Commissar can check up on you? Why do FSB and NSA and everyone else of that Big Brother mindset fight e2e encryption?

A variety of reasons. I'm quite certain that they could get by even with e2e encryption being easily and publicly accessible though.

Largely the same principle applies to all areas where AI promises drastic improvements: any sort of generative tools, content curation tools, personal assistants, scientific instruments, CAD, robot control software, you name it.

So... how are any of these things going to help you achieve your desired anti-establishment political aims? Is your AI assistant going to put a reminder on your calendar telling you when it's time to take your AI robot buddies and go storm the palace? What happens when the palace guards have bigger and better AI robot buddies?

I'm not really trying to be cheeky. I'm just asking you to describe in sufficient detail what you're imagining. People thought throughout history that lots of different things were going to revolutionize human relations and put an end to tyranny - democracy, reason, public education, communism. None of them did. We're mostly still dealing with the same old shit that humanity has always dealt with. You can't just stop at "AI is awesome and I want it". You need a concrete argument for why things will actually be different this time - otherwise you end up with the classic communist problem where everyone just assumed "well of course if you tear down existing society then everyone will spontaneously rearrange themselves into new social relations that are perfectly just and equitable" without actually stopping to consider the details of how that was going to work.

You conveniently assume linear or superlinear returns to capability, where AI will necessarily benefit the incumbent actors even more than commoners.

Of course it will necessarily benefit the incumbent actors. The US has a rather high rate of gun ownership, and who do guns benefit more: the people or the government?

I would like to see a higher standard of charitability for criticisms of progressive leftism.

The standards that are currently enforced here are the standards that I would be happy to apply to anyone, friend and foe alike. So I don't feel that the current standards are deficient or unfair, and I don't feel any need to change them. If people feel that the current standards aren't fair to leftism, well, that's on them. I think that rigorously enforced absolute neutrality in every post would just stifle discussion and make it more cumbersome to write effortposts.

I want leftists to come here and start posts with "so we all know that Trump supporters are on the verge of a full fascist takeover of the US, and here are their latest moves on that front". I would then explain to them why I think they're wrong. That's how I think the forum should work.

If I had to update my beliefs every time I encountered evidence against them, I'd be able to hold very few beliefs about anything of importance.

As a general methodological point, I don't think there's anything objectionable about noting that you don't find an argument convincing, even though you're not prepared to give a fully-formed response to it.

I can’t imagine that you’re talking about anything except direct calls to action, in which case: no shit you’re not allowed to say that. That’s frowned upon everywhere, right-wing circles included.

I don't feel like TheMotte is that fast of a forum. The CW thread only gets a couple top-level comments per day. You can also reply to any comments in the CW thread throughout the week and continue conversations with people that way.

In an ideal world, everyone's comments would get posted in bulk at 6AM every day, so I could read the day's content over my morning coffee, contemplate how I'd like to reply, and then post those replies at my leisure later in the day.

That sort of reminds me of the system that Stallman has:

I generally do not connect to web sites from my own machine, aside from a few sites I have some special relationship with. I usually fetch web pages from other sites by sending mail to a program (see https://git.savannah.gnu.org/git/womb/hacks.git) that fetches them, much like wget, and then mails them back to me. Then I look at them using a web browser, unless it is easy to see the text in the HTML page directly. I usually try lynx first, then a graphical browser if the page needs it (using konqueror, which won't fetch from other sites in such a situation).
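Not his actual script, of course (that's the womb/hacks program linked above), but the idea is simple enough that a rough sketch in Python can capture it. Everything below is an illustrative assumption - the hosts, mailbox name, and credentials are placeholders, not real infrastructure:

```python
import imaplib
import smtplib
import urllib.request
from email import message_from_bytes
from email.message import EmailMessage

# Placeholder settings for a hypothetical local mail setup.
IMAP_HOST = "localhost"
SMTP_HOST = "localhost"
USER = "fetch"
PASSWORD = "secret"

def fetch_and_mail_back():
    imap = imaplib.IMAP4(IMAP_HOST)
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    # Treat the subject line of each unseen message as a URL to fetch.
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        request = message_from_bytes(msg_data[0][1])
        url = request["Subject"].strip()
        # Fetch the page, roughly what wget would do.
        with urllib.request.urlopen(url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        # Mail the HTML back so it can be read offline in lynx or a browser.
        reply = EmailMessage()
        reply["From"] = f"{USER}@localhost"
        reply["To"] = request["From"]
        reply["Subject"] = f"Fetched: {url}"
        reply.set_content("Requested page attached.")
        reply.add_attachment(html, subtype="html", filename="page.html")
        with smtplib.SMTP(SMTP_HOST) as smtp:
            smtp.send_message(reply)
    imap.logout()

if __name__ == "__main__":
    fetch_and_mail_back()
```

In practice you'd want to restrict which senders can make requests and sanitize the URL, but the loop above is the whole request-by-mail pattern: the browser never touches the network directly.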

My dude, I listed three services that provide what I believe to be good quality AI pornography.

I am not aware of a single high-quality AI image of two people having sex. Hell, I haven’t even seen a convincing example of masturbation. To say nothing of the more obscure fetishes you find on /d/. Do such pictures already exist?

It seems to be a special case of a more general problem with existing models: as you increase the number of objects in the scene and have the people engage in more complex actions, your chances of getting incoherence and body horror go up.

This is something I’ve thought about before and it dovetails nicely with the point I made about accepting boredom.

Human relations are messy and unpredictable. When you’re interacting with real people in real life, you can’t just “change the channel” and order up someone who’s a better fit for your temperament and interests. If we’re committed to maintaining a world of real relationships rather than virtual ones, then we have to accept some limitations on our power to control the people around us, and accept that those people will come with certain unavoidable flaws.

In my meandering experience most niche stuff trends towards being low quality anyway

I’ve seen this response multiple times now in discussions of AI art, and it’s pretty baffling. “It doesn’t matter if the AI can’t do X because X type of art doesn’t have to be that good in the first place.” That’s not exactly a reassuring marketing pitch for the product you’re trying to sell.

Obviously determinations of quality should be left to people who appreciate the type of art in question in the first place, which you clearly do not.

Honestly it’s kind of concerning seeing how much internet communities have already done for sexual dysfunction

The discussion is over whether the AI can satisfy the requirements in question, not the moral status of the requirements themselves.

Reading the replies to this post has convinced me that pleasure is unironically a bad thing and that the very concept of pleasure-for-its-own-sake should be regarded with the same moral suspicion with which we now regard cigarettes and junk food.

Sometimes it’s not about tactics. Sometimes it’s just hopeless, and that’s it.

Were there any “tactics” that dissidents under Stalin could have used to make the fall of communism happen faster than it did? Doesn’t seem very plausible.

The point was that there is such a thing as a hopeless political situation. I wasn’t comparing our present situation to Stalinist Russia, nor was I offering any thesis on what the necessary and sufficient conditions for a “hopeless political situation” are.

The left is like a scientist who runs 1000 experiments trying to find a physics breakthrough. Most of the time the experiment fails. The right's job is to block the left from doing too much, but once an idea appears to be working, then they take the position.

That's a nice attempt to make sure that everyone has their proper place, but I can't actually endorse this.

I'm not just an admin who signs off on the left's "experiments". I have my own substantive moral view of how the world should work, one that is not merely reducible to "keep things the way they are".

Finally something that explicitly ties AI into the culture war: Why I HATE A.I. Art - by Vaush

This AI art thing. Some people love it, some people hate it. I hate it.

I endorse pretty much all of the points he makes in this video. I do recommend watching the whole thing, if you have time.

I went into this curious to see exactly what types of arguments he would make, as I've been interested in the relationship between AI progress and the left/right divide. His arguments fall into roughly two groups.

First are the "material impact" arguments - that this will be bad for artists, that you're using their copyrighted work without their permission, that it's not fair to have a machine steal someone's personal style that they worked for years to develop, etc. I certainly feel the force of these arguments, but it's also easy for AI advocates to dismiss them with a simple "cry about it". Jobs getting displaced by technology is nothing new. We can't expect society to defend artists' jobs forever, if they are indeed capable of being easily automated. Critics of AI art need to provide more substantial arguments about why AI art is bad in itself, rather than simply pointing out that it's bad for artists' incomes. Which Vaush does make an attempt at.

The second group of arguments could perhaps be called "deontological arguments", as they go beyond the first-person experiential states of producers and consumers of AI art, and the direct material harm or benefit caused by AI. The main concern here is that we're headed for a future where all media and all human interaction is generated by AI simulations, which would be a hellish dystopia. We don't want things to just feel good - we want to know that there's another conscious entity on the other end of the line.

It's interesting to me how strongly attuned Vaush is to the "spiritual" dimension of this issue, which I would not have expected from an avowed leftist. It's clearly something that bothers him on an emotional level. He goes so far as to say:

If you don't see stuff like this [AI art] as a problem, I think you're a psychopath.

and, what was the real money shot for me:

It's deeply alienating, and if you disagree, you cannot call yourself a Marxist. I'm drawing a line.

Now, on the one hand, "leftism" and "Marxism" are absolutely massive intellectual traditions with a lot of nuance and disagreement, and I certainly don't expect all leftists to hold the same views on everything. On the other hand, I really do think that what we're seeing now with AI content generation is a natural consequence of the leftist impulse, which has always been focused on the ceaseless improvement and elevation of man in his ascent towards godhood. What do you think "fully automated luxury gay space communism" is supposed to mean? It really does mean fully automated. If everyone is to be a god unto themselves, untrammeled by external constraints, then that also means they have the right to shirk human relationships and form relationships with their AI buddies instead (and also flood the universe with petabytes of AI-generated art). At some point, there seems to be a tension between progress on the one hand and traditional authenticity on the other.

It was especially amusing when he said:

This must be how conservatives feel when they talk about "bugmen".

I guess everyone becomes a reactionary at some point - the only thing that differs is how far you have to push them.

It won’t neatly map onto a left/right divide

Pandemics and vaccines weren’t supposed to be a left/right issue either, but we saw how that turned out.

No, the good reason I have for creating is because I have something to say.

You can’t see the problem with AI art if you just focus on you, yourself, and your personal capacities and motivations for artistic production.

The problem lies in how AI art alters the nature of art and how we relate to it, at a societal level.

Vaush gestured towards this by attempting to locate the problem in communication - highlighting the relationships between people rather than focusing on individual people in isolation.

I hope to have more to say on these points in a future post.

counter-trans ideology

I have noticed the analogy, which is part of why I’m slightly surprised that this forum is so pro-AI. I mean, given the LessWrong origins of this forum, it makes sense they’d be pro-AI. But this is a decidedly reactionary slice of that original LW readership. How can the same group of people be so reactionary on so many issues while also supporting the prospect of AI-induced complete social disruption? Yes yes, it doesn’t have to be the same individuals making both types of posts, but still.

Did you actually watch the video?

I don’t see how you can walk away from it thinking that Vaush doesn’t deeply care about this issue on a personal level. And I went in skeptical, assuming that he didn’t care about it on a personal level.

How can the same group of people be so reactionary on so many issues while also supporting the prospect of AI-induced complete social disruption?

Do you think there are any psychological motivations that don’t ultimately reduce to personal power?

If the AI is going to obsolete anyone, then it will quite literally be my enemies. I am very much opposed to the twitterati ruling class. I make a living by writing code. And yet I am in complete agreement with Vaush's views on AI art. How do you explain that?

You're really grasping at straws here.