Primaprimaprima

Bigfoot is an interdimensional being

2 followers   follows 0 users   joined 2022 September 05 01:29:15 UTC

User ID: 342

It’s painful not being able to browse the site with the old.reddit.com interface (or visual equivalent).

Seconding! Someone please clone the UI from old.reddit.com

It's wrong to kill humans. But it's fine to kill ants. Plausibly, every non-human animal species is closer to ants than to humans along the relevant axes. So it's fine to kill non-human animals for the same reasons it's fine to kill ants (whatever those reasons actually turn out to be).

Is the current push for social acceptance of gender-based body modification something that will spread into other kinds of artificial body modification, such as plastic surgery for appearance or medications for weight loss?

Not necessarily. People are good at compartmentalizing.

Has increased support for trans people translated into broader social support for transracial people?

Now that Stable Diffusion has been public for a week - what will be the next field to be revolutionized by AI?

(And if your answer is "writing" or "music", I'd like to hear what field you think will be next after those. They're the obvious candidates, since AI systems are already in use in those fields or soon will be; but due to structural differences between them and the visual arts, I'm skeptical that AI will have the same seismic impact there that it's currently having in art.)

tech oligarchs attempting to push us all into the Metaverse

Can you give me a plausible narrative about how we will be "pushed" into the Metaverse?

Currently I have no plans to ever buy a VR helmet. I don't want one. What will make me change my mind?

One possible model of the situation is that AI will be so disruptive that it should be thought of as akin to an invading alien force. If the earth were under attack from aliens, we wouldn't expect one political party to be pro-alien and one to be anti-alien. We would expect humanity to unite (to some degree) against their common enemy. There would be some weirdos who would end up being pro-alien anyway, but I wouldn't expect them to be concentrated particularly on either the left or the right.

In the short- and medium-term, your views on AI will be largely correlated with how strongly your personal employment prospects are impacted. As you point out, left-aligned artists and journalists aren't going to be too friendly to AI if it starts taking their jobs (especially if it leaves many right-coded industries unaffected), regardless of what other political priors they might have.

I wrote an essay on the old site about how techno-optimism and transhumanism fit more comfortably in a leftist worldview than a rightist worldview, and I still think there's some truth to that. But people can be quick to change their views once their livelihoods are on the line.

I just see AI as perniciously resistant to regulation

People said that about the internet too.

Hasn’t helped out Kiwifarms that much.

I’m not sure, but you may be proving too much here. There is nothing that floats totally free of all regulation (understood in a sufficiently broad sense). You can’t say “well, it’s technology, and technology is above such petty concerns”. Technology gets regulated all the time: nukes, guns, etc.

It depends on how good the technology gets, and how quickly.

It’s pretty limited right now. By that I mean there’s a wide range of prompts and scenarios that simply don’t give good results at all (and aren’t helped very much by img2img, fine-tuning, textual inversion, etc.; see the sketch below). That’s the main thing keeping artists’ jobs secure right now.

The better it gets, the more artists’ jobs will be on the chopping block.

Just a year ago the predecessors of the current models were barely passable at art. One year from now, they could be exponentially better still.

https://xkcd.com/605/
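To make the img2img reference above concrete, here's a minimal sketch using Hugging Face's diffusers library. The checkpoint name, file names, and parameter values are illustrative assumptions, not recommendations:

    # Minimal img2img sketch with diffusers; names and values are illustrative.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # assumed SD 1.x checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    # img2img starts denoising from a partially noised version of an input
    # image instead of pure noise, so the output keeps its rough composition.
    init_image = Image.open("sketch.png").convert("RGB").resize((512, 512))

    result = pipe(
        prompt="a detailed oil painting of a forest at dusk",
        image=init_image,
        strength=0.75,       # how far to depart from the input (0 = keep, 1 = ignore)
        guidance_scale=7.5,  # how strongly to follow the prompt
    ).images[0]
    result.save("out.png")

The point of the comment stands, though: when the base model can't render a prompt at all, knobs like strength and guidance_scale only reshape a bad result rather than rescue it.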

Even in the “good old days”, incidents like that were relatively rare.

Modern cancel culture is, on average, far more vicious than what the old internet used to dish out, and its impacts on people’s lives are far more lasting.

I've seen the term "AI Art Bro" thrown around the same way as "NFT Bro", which makes me a bit sad.

Sad in what sense?

I see the people behind the development of this tech as essentially launching a malicious DDoS attack on human culture. Don’t be surprised when you get pushback.

I'm not interested in approaching the question from the perspective of, "what is permissible for an individual artist to do?". I'm interested in approaching the question from the perspective of, "what impact will this technology have on culture and the nature of art?".

Consider the impact that AI is already having on the genre fiction market. It's easy to imagine that writers will soon feel compelled to collaborate with AI, even if they don't want to, in order to match the output rate of authors who do use AI. I think that's a rather deplorable state of affairs. But that problem doesn't come into view when we only consider individual actors in isolation; it only becomes apparent when we zoom out and look at culture as a whole.

I recommend reading Walter Benjamin's The Work of Art in the Age of Mechanical Reproduction if you haven't. Not because I necessarily endorse his conclusions, but because his thought process illustrates how technology can change the meaning and nature of art, independent of any one person's thoughts or actions.

Curious how you mentioned failures internal to the right, but you didn’t mention The_Donald getting banned from reddit.

I never encountered a contradictory set of definitions for sex and gender.

As far as I know, the sex/gender “distinction” was invented wholesale in very recent history for overt political purposes. I reject that there is such a distinction.

This is pretty clearly a woman.

To me it’s clearly a man, due to his facial structure. But it’s possible I could be mistaken.

Being a man or being a woman isn’t about what clothes you wear or how long your hair is. They’re biological categories.

This is not a high-quality contribution.

I wonder whether there are interests behind the scenes that do not want to see a Fischer-esque personality rise in popularity.

I appreciate the attempt to link this to the CW, but, I really don’t see it here. I think Magnus just got pissed that he lost to someone he “shouldn’t” have lost to.

I was browsing the latest new journal articles on philpapers.org, an archive of (mostly analytic, mostly Anglophone) philosophy papers, and came across the following: Demanding a halt to metadiscussions:

How do social actors get addressees to stop retreating to metadiscussions that derail ground-level discussions, and why do they expect the strategies to work? The question is of both theoretical and practical interest, especially with regard to ground-level discussions of systemic sexism and racism derailed by qualifying “not all men” and “not all white people” perform the sexist or racist actions that are the topic of discussion. [...] I find that social actors use strategies that may at first glance appear to be out of bounds in an ideal critical discussion—e.g., demanding, shouting, cussing, sarcasm, name-calling—to cultivate a context where using not-all qualifiers becomes increasingly costly.

Something amusing about this abstract is that a statement of the form "not all men are like that" hardly qualifies as "metadiscussion". Challenging your opponent's assertion by pointing out counterexamples isn't metadiscussion - it's just discussion. I would expect "meta" discussion to be something more along the lines of "what epistemology allows you to KNOW that ALL men are sexist?" or "let's examine the sociological history of the concept of sexism and what political or psychological factors may be causing you to deploy it in this context".

Anyway, philpapers is pretty indiscriminate in what they archive, so I checked to see what journal this was actually published in. Argumentation is "an international and interdisciplinary journal that gathers academic contributions from a wide range of scholarly backgrounds and approaches to reasoning, natural inference and persuasion: communication, classical and modern rhetoric, linguistics, discourse analysis, pragmatics, psychology, philosophy, formal and informal logic, critical thinking, history and law" (i.e. the type of publication that would have uncritically accepted the original Sokal paper), so I wouldn't expect the publications in this journal to all conform to the standards and values of analytic philosophy.

Ultimately, though, I don't think this paper is an isolated incident; it seems to me to be representative of broader trends in all schools of western philosophy, including analytic philosophy. The Philosophical Quarterly, for example, published a glowing review of a book entitled The Case for Rage: Why Anger is Essential to Anti-Racist Struggle. My general impression of academic philosophy over the last few years is that departments have shifted focus away from "pure" research into questions of metaphysics and epistemology and put more emphasis on hiring for positions in social and political philosophy, and the faculty who fill those positions are of course expected to produce research that advances the party line.

If even analytic philosophy, which was founded on norms of disinterested rigor and an explicit suspicion of moral and political philosophy, can become subject to institutional capture for political purposes, then it seems like truly nowhere is safe. The hard sciences are certainly more resilient than the humanities, although not completely immune.

This might provoke a reaction here: Effective altruism is the new woke

Effectively, both longtermism and woke progressivism take a highly restricted number of emotional impulses many of us ordinarily have, and then vividly conjure up heart-rending scenarios of supposed harm in order to prime our malleable intuitions in the desired direction. Each insists that we then extend these impulses quasi-rigorously, past any possible relevance to our own personal lives. According to longtermists, if you are the sort of person who, naturally enough, tries to minimise risks to your unborn children, cares about future grandchildren, or worries more about unlikely personal disasters rather than likely inconveniences, then you should impersonalise these impulses and radically scale them up to humanity as a whole. According to the woke, if you think kindness and inclusion are important, you should seek to pursue these attitudes mechanically, not just within institutions, but also in sports teams, in sexual choices, and even in your application of the categories of the human biological sexes.

I do think it could be worthwhile to have a discussion about the parallels between EA and wokeism, but unfortunately the author's actual comparison of the two is rather sparse, focusing on just this one methodological point about how they both allegedly amplify our moral impulses beyond their natural scope. She also runs the risk of conflating longtermism with EA more broadly.

To me, an obvious similarity between EA and wokeism is that they both function as substitutes for religion, giving structure and meaning to individuals who might otherwise find themselves floating in the nihilistic void. Sacrifice yourself for LGBT, sacrifice yourself for Jesus, sacrifice yourself for malaria nets - it's all the same story at the end of the day. A nice concrete goal to strive for, and an actionable plan on how to achieve it, so that personal ethical deliberation is minimized - that's a very comforting sort of structure to devote yourself to.

I'd also be interested in exploring how both EA and wokeism relate to utilitarianism. In the case of EA the relation is pretty obvious, with wokeism it's less clear, but there does seem to be something utilitarian about the woke worldview, in the sense that personal comfort (or the personal comfort of the oppressed) will always win out over fidelity to abstract values like freedom and authenticity.

I have fun reading /r/sneerclub

it's unclear whether this late into the war it will be sufficient to turn the tide.

Are you implying that this thing isn't going to go on for years?

First volley in the AI culture war? The EU’s attempt to regulate open-source AI is counterproductive

The regulation of general-purpose AI (GPAI) is currently being debated by the European Union’s legislative bodies as they work on the Artificial Intelligence Act (AIA). One proposed change from the Council of the EU (the Council) would take the unusual, and harmful, step of regulating open-source GPAI. While intended to enable the safer use of these tools, the proposal would create legal liability for open-source GPAI models, undermining their development. This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public’s understanding of AI.

The definition of "GPAI" is vague, but it may differ from the commonly understood usage of "AGI" and may be broad enough to cover systems like GPT-3 and Stable Diffusion.

I will be very curious to see how much mainstream political traction these issues get in the coming years and what the left/right divide on the issue will look like.

if you want our business, you must abide by the rules we set that are stricter than everybody else's

It works for China.

Can you give me an example of how AI could undermine the power of “the bureaucrats in Brussels”?