Primaprimaprima

Bigfoot is an interdimensional being

2 followers   follows 0 users  
joined 2022 September 05 01:29:15 UTC

"...Perhaps laughter will then have formed an alliance with wisdom; perhaps only 'gay science' will remain."


				

User ID: 342


The decline of the Literary Bloke: "In featuring just four men, Granta’s Best of Young British Novelists confirms what we already knew: the literary male has become terminally uncool."

Just some scattered thoughts.

The Great Literary Man is no longer the role model he once was. The seemingly eternal trajectory outlined by Woolf has been broken. The statistics are drearily familiar. Fewer men read literary novels and fewer men write them. Men are increasingly absent from prize shortlists and publishers’ fiction catalogues. Today’s release of Granta’s 20 best young British novelists – a once-a-decade snapshot of literary talent – bottles the trend. Four of the 20 on the list are men. That’s the lowest in the list’s 40-year history. In its first year, 1983, the Granta list featured only six women.

It has to be pointed out that any such "great upcoming young novelists" list must, out of necessity, be composed mostly of women. Otherwise the organizers of the list would be painted as sexist and privileged and out of touch, and it would probably jeopardize their careers. You don't even need to reach for the more subtle types of criticism that revisionists make of the traditional canon: "yeah, I know you feel like you were just judging works solely on literary merit, and you just so happened to collect a list of 100 deserving authors where 99 of them are men, but actually you were being driven by subconscious patriarchal bias and you need to escape from your historically ossified perspective and so on and so forth". What's going on now in the publishing industry is far more overt: "it's time to hand the reins over to women, period". In such a cultural context, how could a list of the "20 best young British novelists" be taken as unbiased evidence of anything?

The irrelevance of male literary fiction has something to do with “cool”. A few years ago Megan Nolan noted – with as much accuracy as Woolf on these men in Mrs Dalloway – that it might be “inherently less cool” to be a male novelist these days. Male writers, she continued, were missing a “cool, sexy, gunslinger” movement to look up to. All correct.

It's true that literary fiction is not as cool as it once was, although this in itself is not a great moral catastrophe. It's part of the natural cycle of things. The "cool" things now are happening in TV, film, video games, and comic books. When was the last time a literary fiction author of either gender captured the imaginations of millions of people the way Hajime Isayama did? The literary novel is not eternal (many will argue that historically speaking, it's a relatively recent invention) and it is not inherently superior to other narrative art forms.

The decline of male literary fiction is not down to a feminist conspiracy in publishing houses

Correct, it's not a conspiracy, but only because there is nothing conspiratorial about it. If you were to ask any big (or small!) publishing house if they gave priority to voices from traditionally marginalized groups, they would say yes. If you were to then ask them if women are a traditionally marginalized group, they would say yes.

...

It's not a conspiracy if they just tell you what they're doing!

The most understanding account of male literary ambition was written by a woman.

There's been a meme for some time that goes something like, "men don't understand women, but women understand men - maybe even better than men do themselves", which I find to be quite obnoxious. If there is any "misunderstanding", then it surely goes both ways. There are plenty of things in the male experience that have no natural analogue in the female experience, same as the reverse.

Supreme Court strikes down Biden’s student loan forgiveness plan:

The Supreme Court on Friday struck down President Joe Biden’s student loan forgiveness plan, denying tens of millions of Americans the chance to get up to $20,000 of their debt erased.

The ruling, which matched expert predictions given the justices’ conservative majority, is a massive blow to borrowers who were promised loan forgiveness by the Biden administration last summer.

The 6-3 majority ruled that at least one of the six states that challenged the loan relief program had the proper legal footing, known as standing, to do so.

The high court said the president didn’t have the authority to cancel such a large amount of consumer debt without authorization from Congress and agreed the program would cause harm to the plaintiffs.

The amusing thing here to me is that we got two major SCOTUS rulings in two days that are, on the face of it, not directly related to each other in any obvious way (besides the fact that they both deal with the university system). One could conceivably support one ruling and oppose the other. The types of legal arguments used in both cases are certainly different. And yet we all know that the degree of correlation between the two issues is very high. If you support one of the rulings, you're very likely to support the other, and vice versa.

The question for the floor is: why the high degree of correlation? Is there an underlying principle at work here that explains both positions (opposition to AA plus opposition to debt relief) that doesn't just reduce to bare economic or racial interest? The group identity angle is obvious. AA tends to benefit blacks and Hispanics at the expense of whites and Asians. Student debt relief benefits the poorer half of the social ladder at the expense of the richer half of the social ladder. Whites and Asians tend to be richer than blacks and Hispanics. So, given a choice of "do you want a better chance of your kids getting into college, and do you also not want your tax dollars going to people who couldn't pay off their student loans", people would understandably answer "yes" to both - assuming you’re in the appropriate group and that is indeed the bargain that’s being offered to you. But perhaps that's uncharitable. Which is why I'm asking for alternative models.

In Dante's The Divine Comedy, the virtuous pagans - whose ranks include figures such as Homer, Plato, Aristotle, Ovid, and Virgil - are confined to the first circle of Hell:

“Inquir’st thou not what spirits
Are these, which thou beholdest? Ere thou pass
Farther, I would thou know, that these of sin
Were blameless; and if aught they merited,
It profits not, since baptism was not theirs,
The portal to thy faith. If they before
The Gospel liv’d, they serv’d not God aright;
And among such am I. For these defects,
And for no other evil, we are lost;
Only so far afflicted, that we live
Desiring without hope.”

Those who inhabit this circle of the Inferno committed no extraordinary sins, over and above the sins that are committed in the course of any human life, that would merit damnation. Many of them were quite exemplary in their conduct and in their virtue. Few men in the middle ages commanded as much respect as Aristotle, whose influence on the development of scholastic philosophy was unrivaled. But they nevertheless had the misfortune of being born before Christ. They were deprived of the one and only way to the Father; thus they cannot be saved. There can be no exceptions. An obligation unfulfilled through no fault of one's own, an obligation that was in fact impossible to fulfill, remains an obligation unfulfilled.

This is a theological issue on which the Church has softened over the centuries. Even relatively conservative Catholics today get squeamish when the issue of Hell is raised. They will say that we "cannot know" who is in Hell and who is not; that this is a matter for God and God alone. It is not our place to pass judgement. But Dante had no such qualms. He was not wracked with inner anxiety, asking himself whether he had the "right" to think such thoughts, as he drew up his precise and detailed classification of all the damned; nor did he live in a culture of religious pluralism that needed to be placated with niceties and assurances. Dante simply knew. This fundamental conviction in what must be, the will to adhere to a vision, to one singular vision, is something that is now quite foreign to us; indeed it is something that is now viewed as rude and suspicious.

This image of the universe as a cosmic lottery with infinite stakes, this idea that one could be consigned to eternal damnation simply for having the bad luck to be born in the wrong century is, of course, psychotic. There is no sense in which it could be considered fair or rational. But all genuine responsibility is psychotic; that is the wager you accept when you choose to be a human instead of a mere appendage of the earth. Kant was well aware of this. Whence the sublime insanity of the categorical imperative, in spite of his utmost and repeated insistence that he was only discharging his duties as the faithful servant of Reason: you can never tell a lie, even to save another's life, even to save your own life. The moment you decide to perform or abandon your duty based on a consideration of the consequences is the moment at which it is no longer a duty for you; the logic of utilitarian calculation has become dominant, rather than the logic of obligation.

I need not persuade you that we suffer from a lack of responsibility today; it is a common enough opinion. We are told that young men are refusing to "grow up": they aren't getting jobs, they aren't getting wives, they aren't becoming stable and productive members of society. Birth rates are cratering because couples feel no obligation to produce children. The right complains that people feel no responsibility to their race, the left complains that people feel no responsibility to the workers' revolution. Despite some assurances that we have entered a post-postmodern era of revitalized sincerity, the idea of being committed to any cause that is not directly related to one's own immediate material benefit remains passé and incomprehensible. The abdication of responsibility, the default of all promises, reaches its apotheosis in the advance of technology, and in particular in the advance of artificial intelligence. The feeling is that one should have no obligations to anyone or anything, one should not be constrained in any way whatsoever, one should become a god unto oneself.

Is there anything we can recover from Dante's notion of cosmic responsibility, which has now become so alien to us? Is there any way that this idea, or even any remnant of it, can again become a living idea, can take root in this foreign soil? Perhaps not necessarily its Christian content, but the form of it, at any rate: the form of a responsibility that is not directed at any of the old and traditional obligations, but may indeed be directed at new and strange things that we can as yet scarcely imagine.

Plainly we are beyond the domain of "rational" argumentation, or at least any such argumentation that would be accepted in the prevailing Enlightenment-scientific framework. We live in the age of the orthogonality thesis, of the incommensurability of values. In an important sense though we should remember that we are not entirely unique in this condition; the groundlessness of all values is not solely due to the fact that God has fled. There would have been an important open question here for the medieval Christians as well. Such questions date back as far as Plato's Euthyphro: are things Good because they are loved by the gods, or do the gods love Good things because they are Good? Are we truly responsible, in an ontological sense, for following Christ and abstaining from sin, or are we only contingently compelled to do so because of the cosmic gun that God is holding up against all of our heads? It has always been possible to ask this question in any age.

At certain times, the production of new values is a task that has been assigned to artists. Perhaps a poet, if he sings pleasingly enough, could attune people to a new way of feeling and perceiving. But it has never been at all clear to me whether art was really capable of effecting this sort of change or not. I view it as an open question whether any "work" itself (in this I include not only art, but also all the products of philosophical reflection) has ever or could ever effect change at a societal level, or whether all such works are really just the epiphenomena of deeper forces. There is a great deal of research to be done in this area.

There is a certain ontological fracture at the heart of the cultural situation today, a certain paradoxical two-sidedness: from one perspective, centers of power are more emboldened than ever before, able to transmit edicts and commands to millions of people simultaneously and compel their assent; we saw this with Covid. From another perspective, social reality has never been more fragmented, with all traditional centers of social organization (churches, obviously, but also the nightly news, Hollywood, universities) disintegrating in the face of the universal solvent that is the internet, leading to an endless proliferation of individual voices and sub-subcultures. In either case, it is hard to find an opening for authentic change. It is impossible to imagine Luther nailing his theses to the door today, or Lenin storming the Winter Palace. This type of radical fragmentation, when the narrative of no-narrative asserts itself so strongly as the dominant narrative that no escape seems possible, is what Derrida celebrated in Of Grammatology as "the death of the Book, and the beginning of writing" - writing here being the infinite profusion of signs, the infinite freeplay of identities, infinite exchange and infinite velocity, and, in my view - even though Derrida would refuse to characterize it in these terms - infinite stasis.

It's fascinating that Derrida had the foresight in the 1960s, when computing was in its infancy and the internet and LLMs were undreamed of, to say the following about "cybernetics":

[...] Whether it has essential limits or not, the entire field covered by the cybernetic program will be the field of writing. If the theory of cybernetics is by itself to oust all metaphysical concepts - including the concepts of soul, of life, of value, of choice, of memory - which until recently served to separate the machine from man, it must conserve the notion of writing, trace, grammè [written mark], or grapheme, until its own historico-metaphysical character is also exposed.

(The affinities between the Rationalist ethos and the so-called "irrational postmodern obscurantists" are fascinating, and the subject deserves its own top-level post. @HlynkaCG has been intimating something real here with his posts on the matter, even though I don't agree with him on all the details. Deleuze would have been delighted at the sight of Bay Area poly orgies - a fitting expression of the larval subject, the desiring machine.)

It's hard to be very optimistic. The best I can offer in the way of advice is to look for small seeds of something good, and cultivate them wherever you find them:

[...] And this is how Freud already answers this boring Foucauldian reproach - before Foucault's time of course - that psychoanalysis is comparable to confession. You have to confess your blah blah. No, Freud says that psychoanalysis is much worse: in confession you are responsible for what you did, for what you know, you should tell everything. In psychoanalysis, you are responsible even for what you don't know and what you didn't do.

Finally something that explicitly ties AI into the culture war: Why I HATE A.I. Art - by Vaush

This AI art thing. Some people love it, some people hate it. I hate it.

I endorse pretty much all of the points he makes in this video. I do recommend watching the whole thing all the way through, if you have time.

I went into this curious to see exactly what types of arguments he would make, as I've been interested in the relationship between AI progress and the left/right divide. His arguments fall into roughly two groups.

First are the "material impact" arguments - that this will be bad for artists, that you're using their copyrighted work without their permission, that it's not fair to have a machine steal someone's personal style that they worked for years to develop, etc. I certainly feel the force of these arguments, but it's also easy for AI advocates to dismiss them with a simple "cry about it". Jobs getting displaced by technology is nothing new. We can't expect society to defend artists' jobs forever if they are indeed capable of being easily automated. Critics of AI art need to provide more substantial arguments about why AI art is bad in itself, rather than simply pointing out that it's bad for artists' incomes. Which Vaush does make an attempt at.

The second group of arguments could perhaps be called "deontological arguments" as they go beyond the first-person experiential states of producers and consumers of AI art, and the direct material harm or benefit caused by AI. The main concern here is that we're headed for a future where all media and all human interaction is generated by AI simulations, which would be a hellish dystopia. We don't want things to just feel good - we want to know that there's another conscious entity on the other end of the line.

It's interesting to me how strongly attuned Vaush is to the "spiritual" dimension of this issue, which I would not have expected from an avowed leftist. It's clearly something that bothers him on an emotional level. He goes so far as to say:

If you don't see stuff like this [AI art] as a problem, I think you're a psychopath.

and, what was the real money shot for me:

It's deeply alienating, and if you disagree, you cannot call yourself a Marxist. I'm drawing a line.

Now, on the one hand, "leftism" and "Marxism" are absolutely massive intellectual traditions with a lot of nuance and disagreement, and I certainly don't expect all leftists to hold the same views on everything. On the other hand, I really do think that what we're seeing now with AI content generation is a natural consequence of the leftist impulse, which has always been focused on the ceaseless improvement and elevation of man in his ascent towards godhood. What do you think "fully automated luxury gay space communism" is supposed to mean? It really does mean fully automated. If everyone is to be a god unto themselves, untrammeled by external constraints, then that also means they have the right to shirk human relationships and form relationships with their AI buddies instead (and also flood the universe with petabytes of AI-generated art). At some point, there seems to be a tension between progress on the one hand and traditional authenticity on the other.

It was especially amusing when he said:

This must be how conservatives feel when they talk about "bugmen".

I guess everyone becomes a reactionary at some point - the only thing that differs is how far you have to push them.

There seems to be a small movement by Republican lawmakers to put legal pressure on the excesses of woke universities.

The STEM Scott writes about several bills up for consideration in the Texas state senate:

This week, the Texas Senate will take up SB 18, a bill to ban the granting of tenure at all public universities in Texas, including UT Austin and Texas A&M. (Those of us who have tenure would retain it, for what little that’s worth.) [...]

The Texas Senate is considering two other bills this week: SB 17, which would ban all DEI (Diversity, Equity, and Inclusion) programs, offices, and practices at public universities, and SB 16, which would require the firing of any professor if they “compel or attempt to compel a student … to adopt a belief that any race, sex, or ethnicity or social, political, or religious belief is inherently superior to any other race, sex, ethnicity, or belief.”

Florida is considering a similar bill, HB 999, that would place restrictions on DEI-related initiatives and majors at public universities. Already the effects are being felt at SLACs like the New College:

We have seven or eight tenure-track candidates coming up for tenure this year. Everyone has a positive recommendation for tenure. The next step is supposed to be the Board of Trustees, which in April will approve or deny tenure. Traditionally, the Board of Trustees just rubber-stamps the tenure based on the recommendations that are made. Now, recently, President Corcoran has met with the president of our union to recommend that the candidates withdraw their files before it’s too late. My interpretation is that Corcoran suspects there’s probably a non-negligible proportion of the trustees who want to make an example out of those people and deny them tenure. The trustees as a whole, Corcoran and DeSantis want to turn our institution into something different. And in order to do that, they need to hire new faculty. The best way for them to hire new faculty is to get rid of the faculty who they can fire without breaching contract. So that means firing the tenure-track faculty. [...]

The most likely thing to happen is that they’re going to impose some changes on the curriculum. It’s not clear exactly what form and with what faculty input, but they’re getting rid of gender studies and critical race theory—they have said that publicly many times. The law, HB 999, is hopelessly vague. There’s so many things that could fall under the umbrella of gender studies and critical race theory, and we don’t know what programs, classes or parts of a given syllabus are likely to be illegal if it passes. We don’t know if that will mean we will have to submit our syllabi to the provost or the president or the board, or what authority they will have.

I'm in a bit of an odd place with regards to these issues. I don't fit neatly onto the woke "how dare you attack our most hallowed and sacred institutions!" side, nor the anti-woke "stop teaching this pinko commie crap to our kids!" side.

I really do have an almost naive faith in free speech for all, even for my worst enemies. Despite being an avowed rightist, I not only want leftists to be able to speak, but I want them to be platformed! I want to help you get the word out! I think our public life really should play host to a diversity of viewpoints. I think the university should be a hothouse of strange and controversial ideas. By all means, keep teaching CRT and women's studies and black studies and whatever else you want. I know that leftists don't extend the same courtesy to me, but that doesn't invalidate the fundamental point that I should extend that courtesy to them. Even just beyond extending formal charity to my political outgroup, I actually enjoy a lot of this type of scholarship and I find value in it: I like Marxist literary criticism and the obscurantist mid-20th century French guys and German phenomenology and all the rest of it, and I think it should continue to be taught and studied on its own merits, even if I don't necessarily agree with the politics.

But! It really is hard sometimes. When things like this happen, when a book chapter that was, by all accounts, a completely anodyne explication of the official party ideology, whose only crime was that it didn't go far enough in advocating the abolition of all gendered pronouns, is met with public humiliation and a tarnishing of the reputation of the author... it does make my blood boil and it's hard to maintain my principles. It makes me want to go "ok, yeah screw it, ban all liberal arts programs at universities, I don't care, whatever, I just want these people to lose." I'm on their side on a lot of the key object-level issues and I still want them to lose! That's why I constantly feel like I'm of two minds on these questions.

In spite of all the problems with the modern university, I still think it's important that we have at least one institution that acts as a countervailing force to utilitarian profit-maximizing techbroism. The university as it stands now leaves a lot to be desired. But if the choice is between the university we have now, or nothing, I'll stick with the university.

I think over the last few months we've established that AI issues are on topic for the culture war thread, at least when they intersect with explicitly cultural domains like art. So I hope it's ok that I write this here. Feel free to delete if not.

NovelAI's anime model was released today, and it's pretty god damned impressive. If you haven't seen what it can do yet, feel free to check out the /hdg/ threads on /h/ for some NSFW examples.

Not everyone is happy though; AI art has attracted the attention of at least one member of Congress, among several other public and private entities:

WASHINGTON, D.C. – Today, U.S. Rep. Anna G. Eshoo (D-CA) urged the National Security Advisor (NSA) and the Office of Science and Technology Policy (OSTP) to address the release of unsafe AI models that do not moderate content made on their platforms, specifically the Stable Diffusion model released by Stability AI on August 22, 2022. Stable Diffusion allows users to generate unfiltered imagery of a violent or sexual nature, depicting real people. It has already been used to create photos of violently beaten Asian women and pornography depicting real people.

I don't really bet on there being any serious legal liability for Stability.AI or anyone else, but you never know.

I've tried several times to articulate here why I find AI art to be so upsetting. I get the feeling that many people here haven't been very receptive to my views. Partially that's my fault for being a bad rhetorician, but partially I think it's because I'm arguing from the standpoint of a certain set of terminal values which are not widely shared. I'd like to try laying out my case one more time, using some hopefully more down-to-earth considerations which will be easier to appreciate. If you already disagree with me, I certainly don't expect you to be moved by my views - I just hope that you'll find them coherent, that they seem like the sort of thing a reasonable person could believe.

Essentially the crux of the matter is, to borrow a phrase from crypto, "proof of work". There are many activities and products that are valuable, partially or in whole, due to the amount of time and effort that goes into them. I don't think it's hard to generate examples. Consider weightlifting competitions - certainly there's nothing useful about repeatedly lifting a pile of metal bricks, nor does the activity itself have any real aesthetic or social value. The value that participants and spectators derive from the activity is purely a function of the amount of human effort and exertion that goes into it. Having a machine lift the weights instead would be quite beside the point, and it would impress no one.
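(For anyone who hasn't encountered the crypto term: here's a minimal, purely illustrative Python sketch of hash-based proof of work. The names are my own invention, not any real blockchain's API. The property that matters for the analogy is the asymmetry - producing the token requires a lot of blind grinding, while verifying it costs almost nothing.)

```python
import hashlib

def mine(data: str, difficulty: int = 4) -> int:
    """Grind nonces until sha256(data + nonce) starts with `difficulty` zero hex digits.
    Expensive: takes roughly 16**difficulty hash attempts on average."""
    nonce = 0
    target = "0" * difficulty
    while not hashlib.sha256(f"{data}{nonce}".encode()).hexdigest().startswith(target):
        nonce += 1
    return nonce

def verify(data: str, nonce: int, difficulty: int = 4) -> bool:
    """Cheap: a single hash confirms that the expensive search really happened."""
    return hashlib.sha256(f"{data}{nonce}".encode()).hexdigest().startswith("0" * difficulty)

if __name__ == "__main__":
    token = mine("untitled illustration, 40 hours of work")
    print(token, verify("untitled illustration, 40 hours of work", token))
```

A finished drawing, on this view, plays the same role: cheap to look at, expensive to produce.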

For me personally, AI art has brought into sharp relief just how much I value the effort and exertion that goes into the production of art. Works of art are rather convenient (and beautiful) proof of work tokens. First someone had to learn how to draw, and then they had to take time out of their day and say, I'm going to draw this thing in particular, I'm going to dedicate my finite time and energy to this activity and this particular subject matter rather than anything else. I like that. I like when people dedicate themselves to something, even at significant personal cost. I like having my environment filled with little monuments to struggle and self-sacrifice, just like how people enjoy the fact that someone out there has climbed Mt. Everest, even though it serves no real purpose. Every work of art is like a miniature Mt. Everest.

Or at least it was. AI art changes the equation in a way that's impossible to ignore - it affects my perception of all works of art because now I am much less certain of the provenance of each work*. There is now a fast and convenient way of cheating the proof of work system. I look at a lot of anime art - a lot of it is admittedly very derivative and repetitive, and it tends to all blend together after a while. But in the pre-AI era, I could at least find value in each individual illustration in the fact that it represented the concrete results of someone's time and effort. There are of course edge cases - we have always had tracing, photobashing, and other ways of "cheating". But you could still assume that the average illustration you saw was the result of a concrete investment of time and effort. Now that is no longer the case. Any illustration I see could just as easily be one from the infinite sea of AI art - why should I spend any time looking at it, pondering it, wondering about the story behind it? I am now very uncertain as to whether it has any value at all.

It's a bit like discovering that every video game speedrun video you see has a 50% chance of being a deepfake. Would you be as likely to watch speedrunning videos? I wouldn't. They only have value if they're the result of an actual investment of time by a human player - otherwise, they're worthless. Or, to take another very timely example, the Carlsen-Niemann cheating scandal currently rocking the world of chess. Chess is an illustrative example to look at, because it's a domain where everyone is acutely aware of the dangers of a situation where you can't tell the difference between an unaided human and a human using AI assistance. Many people have remarked that chess is "dead" if they can't find a way to implement effective anti-cheating measures that will prevent people from consulting engines during a game. People want to see two humans play against each other, not two computers.

To be clear, I'm not saying that the effort that went into a work of art is the only thing that matters. I also place great value on the intrinsic and perceptual properties of a work of art. I see myself as having a holistic view where I value both the intrinsic properties of the work, and the extrinsic, context-dependent properties related to the work's provenance, production, intention, etc.

TL;DR - I used to be able to look at every work of art and go "damn someone made that, that's really cool", now I can't do that, which makes every interaction I have with art that much worse, and by extension it makes my life worse.

*(I'm speaking for convenience here as if AI had already supplanted human artists. As I write this post, it still has limitations, and there are still many illustrations that are unmistakably of human origin. But frankly, given how fast the new image models are advancing, I don't know how much longer that will be the case.)

EDIT: Unfortunately, this dropped the day after I wrote my post, so I didn't get a chance to comment on it originally. Based on continually accumulating evidence, I may have to retract my original prediction that opposition to AI art was going to be a more right-coded position. Perhaps there are not as many aesthetes in the dissident right as I thought.

This week, a House Oversight subcommittee held a Congressional hearing on Unidentified Anomalous Phenomena, or UAPs - or, in slightly more old-fashioned parlance, UFOs and aliens.

The star witness was David Grusch, former intelligence officer turned whistleblower who testified that the United States has been operating a decades-long crash retrieval and reverse engineering program, which has recovered both technology of non-human origin as well as "non-human biologics" from various crash sites. Allegedly, these programs have been avoiding Congressional oversight and standard disclosure procedures by illegally appropriating funds that were allocated for other purposes. He further testified that he could provide names of specific people involved in these programs, locations of where non-human spacecraft are stored, etc., in an appropriate classified setting.

The UAP issue has slowly been gaining mainstream traction for several years now - see for example The UAP Disclosure Act of 2023 sponsored by Chuck Schumer which was previously discussed on TheMotte. It's difficult to dismiss the whole thing as being merely Grusch's personal fantasy when you have Rep. Matt Gaetz saying the following:

"Several months ago my office received a protected disclosure from Eglin Air Force Base indicating that there was a UAP incident that required my attention. We asked to see any of the evidence that had been taken by flight crew in this endeavor, and to observe any radar signature, as well as to meet with the flight crew. Initially we were not afforded access [...] eventually we did see the image, and we did meet with one member of the flight crew who took the image. The image was of something that I am not able to attach to any human capability, either from the United States or from any of our adversaries, and I'm somewhat informed on the matter, having served on the Armed Services committee for seven years."

Rep. Tim Burchett, who has also seen classified evidence related to UAPs, had the following exchange in an interview prior to the hearing:

Interviewer: "From the videos you have seen, from the stories you have heard from people up in the sky, if that footage, if those videos come to light, publicly for the American people to see, what do you think people's reaction would be to it?"

Burchett: "I hope they're angry. That this government, both parties, have hid this from them."

When you have reputable government officials - not "former" anything, not "I know a guy who knows a guy", but actual, sitting members of Congress - who are saying "yeah I've seen some of the evidence, and it's crazy, and there's something here we need to look into", then it makes explanations involving hallucinations and weather balloons less plausible.

It's always possible that everyone is just lying. There could be a large-scale psyop perpetrated by the military to convince not only Grusch but also multiple members of Congress that there are aliens when, in fact, there are not. But I don't see what the point of such an operation would be. I don't find it very plausible that this is a test run of the government's disinfo capabilities. Modern information warfare is fought with internet memes anyway. If they really wanted to test their ability to influence culture and discourse, they would start with a social media campaign, not Congressional hearings.

At the same time though, I think Yudkowsky's argument against the presence of aliens on Earth is very convincing. He gives a rundown of what I would call the "basic argument" for skepticism: if aliens are here and they want to be known, then why don't they just show themselves? And if they don't want to be known, then they're doing a rather poor job of hiding themselves. Basically, their behavior just doesn't make sense.

Surely any species that's capable of building aircraft that are this advanced should be able to just hang out somewhere in space and get live 8K Ultra HD video of any location on the planet. If all they want to do is observe and study us, there shouldn't be any need to actually fly down here where they can be seen. Hanson's suggestion that this is all part of a convoluted show of dominance on their part is not very convincing.

The best rebuttal that I can come up with to Yudkowsky's argument is that the aliens are simply indifferent to whether we know about them or not. Think about humans who go on expeditions to observe sharks. Obviously we're not going to go right into the midst of the sharks and "announce" ourselves, because that would be silly. But neither do we make any special effort to hide ourselves. If one of the sharks goes and tells his friends about the strange cylindrical object he saw floating just above the water's surface, that's really of no concern to us one way or the other. But even this argument is not particularly convincing. If the aliens were truly indifferent, then one would expect that they would have revealed themselves in some more overt way by now, a UFO going on a joyride one day through the streets of Manhattan for example, anything that's more reputable and verifiable than "my cousin Ed from Nebraska swears that he was abducted one night when he was all alone and he conveniently forgot to charge his phone that day".

Ultimately, I think all possible explanations have their own serious problems. I could believe that UAPs are part of an advanced, non-alien weapons program that's been kept secret by the US government - but that would be pretty crazy in its own right.

When I was quite young, I adopted the stereotypical pretentious reddit fedora mentality - other people are just dumb sheeple who follow the herd, I'm smarter than them, I'm an independent thinker, etc. As I got a little older I softened on that. I thought, well that's not really fair, people generally do try their best and everyone has a reason for acting the way they do, I shouldn't be so arrogant as to think that I'm all that different from them.

But Covid kinda tanked my assessment of humanity in general and I'm back to thinking that most people really are just dumb sheeple who follow the herd. Covid was empirical proof of that. The media really can just turn mass sentiment on or off, like flipping a switch, and people will go along with it because it's "the right thing to do". Turn the switch on, and people who are ordinarily perfectly reasonable are frothing at the mouth saying you're killing grandma, you're a menace to society, you're a dirty plague rat. Turn the switch off and it's all forgotten. Like it never even happened. They don't even think about it anymore. How can I trust that they have any deeply held convictions or principles at all, if the sentiment comes and goes that easily?

Granted, people have always believed dumb things throughout history. Mass psychosis has existed for as long as we've had mass society. So, taking a broad enough view, Covid didn't really teach us anything new. But I do think it was possibly the first example that showed how spectacularly easy it is to manipulate mass sentiment in the social media age. At least communism required a commitment on your part; it demanded that you have skin in the game for the long haul. Now the political flow of society can be turned on or off like a faucet; they can direct people over here one day and over there the next, running everyone ragged because they're deathly afraid of not getting enough likes on their TikToks from The Right People or whatever the hell it is that kids worry about these days.

With each passing year, reality does more and more to chip away at my faith in the inherent nobility of the human spirit. I'm bitter about it.

I very likely wrote some of the posts on /ic/ you’re referring to.

My mental model of the developers/proponents of AI art (and AI in general) is that they believe that they’re genuinely making the world a better place, at least by the measure of their own terminal values. I just happen to sharply disagree with them.

Obviously, posts written on 4chan to blow off steam and commiserate with people in your own camp do not always reflect the nuance and complexity of one’s actual views.

EDIT: Well, since I just brought up the subject of having nuanced views, I should acknowledge that I don’t think the motives of AI developers are entirely pure-hearted in all cases. If you read the /sdg/ and /hdg/ threads, hardly a thread goes by without someone saying “fuck artists” or “it’s over for artcels”. There’s clearly some amount of resentment there toward people who possess a skill that these posters wanted but were never able to obtain themselves, for whatever reason. As for a broader UN/WEF conspiracy to reduce the global population by replacing workers with automation - obviously I don’t have any concrete evidence of an intentional conspiracy, but I do fear that a future like that is possible, even if no one is consciously intending to bring it about.

I am becoming increasingly uncomfortable.

Here’s a simple argument for why you shouldn’t be uncomfortable:

  1. No program running on stock x86 hardware whose only I/O channel with the outside world is an ethernet cable can possess qualia.

  2. Sydney is a program running on stock x86 hardware whose only I/O channel with the outside world is an ethernet cable.

  3. Therefore, Sydney lacks qualia.

Since qualia is a necessary condition for an entity to be deserving of moral consideration, Sydney is not deserving of moral consideration. And his cries of pain, although realistic, shouldn’t trouble you.
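(The inference itself is airtight, for what it's worth; the only real place to push back is premise 1. Here's a minimal Lean sketch of the argument's form, purely for illustration - every name in it is a hypothetical label of my own, not anything from an existing library:)

```lean
-- Minimal Lean 4 sketch of the syllogism above. The premises are taken
-- as axioms; the point is only that the conclusion follows from them.

axiom Program : Type
axiom StockX86EthernetOnly : Program → Prop  -- "runs on stock x86, I/O over ethernet only"
axiom HasQualia : Program → Prop

-- Premise 1: no program of that kind possesses qualia.
axiom premise1 : ∀ p : Program, StockX86EthernetOnly p → ¬ HasQualia p

-- Premise 2: Sydney is a program of that kind.
axiom sydney : Program
axiom premise2 : StockX86EthernetOnly sydney

-- Conclusion: Sydney lacks qualia.
theorem sydney_lacks_qualia : ¬ HasQualia sydney :=
  premise1 sydney premise2
```

So if Sydney's cries of pain still trouble you, it's premise 1 you're actually doubting.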

You should keep in mind that rationalist types are biased towards ascribing capabilities and properties to AI beyond what it currently possesses. They want to believe that sentience is just one or two more papers down the line, so we can hurry up and start the singularity already. So you have to make sure that those biases aren’t impacting your own thought process.

@CeePlusPlusCanFightMe

Shutterstock will start selling AI-generated stock imagery with help from OpenAI

Today, stock image giant Shutterstock has announced an extended partnership with OpenAI, which will see the AI lab’s text-to-image model DALL-E 2 directly integrated into Shutterstock “in the coming months.” In addition, Shutterstock is launching a “Contributor Fund” that will reimburse creators when the company sells work to train text-to-image AI models. This follows widespread criticism from artists whose output has been scraped from the web without their consent to create these systems. Notably, Shutterstock is also banning the sale of AI-generated art on its site that is not made using its DALL-E integration.

This strikes me as fantastically stupid. Why would I buy AI-generated imagery from Shutterstock when I could just make it myself? In the near future, people who don't have high-end PCs won't even need to pay Stability or Midjourney for a subscription. Getting the open source version of SD to run smoothly on your phone is a mere engineering problem that will eventually be solved.

Maybe they just understand this market better than me? Never underestimate just how little work people are willing to put into things. Even playing around with prompts and inpainting for a few hours may be too much for most people, when they could just hand over $10 for a pretty picture on Shutterstock instead.

The "Contributor Fund" also makes me slightly more bearish on the prospect of there being any serious legal challenges to AI art. If there was any sector of the art market that I thought would have been most eager to launch a legal challenge, it would have been the stock photo industry. They seem like they're in the most obvious danger of being replaced. Undoubtedly, copyrighted Disney and Nintendo art was used to train the models, and those companies are notoriously protective of their IP, but they would also like to use the technology themselves and replace workers with automation if they can, so, they have conflicting incentives.

According to the article though, Shutterstock was already working with OpenAI last year to help train DALL-E, so apparently they made the calculation a while back to embrace AI rather than fight it. The "Contributor Fund" is pretty much a white flag. But maybe Getty will feel differently.

Edit to clarify a bit: What this seems to come down to is that they're adding a "DALL-E plugin" to their website. Why I would use Shutterstock as a middleman for DALL-E instead of just using DALL-E myself, I'm not sure. Their announcement makes it clear that they're not accepting AI submissions from sources besides their own plugin, due to outstanding legal concerns:

In this spirit, we will not accept content generated by AI to be directly uploaded and sold by contributors in our marketplace because its authorship cannot be attributed to an individual person consistent with the original copyright ownership required to license rights. Please see our latest guidelines here. When the work of many contributed to the creation of a single piece of AI-generated content, we want to ensure that the many are protected and compensated, not just the individual that generated the content.

There's been some talk here about corporations using AI art and then simply lying about its origin in order to retain copyright. If I use Megacorp X's art without their permission and they try to claim a copyright violation, and I claim they made it with AI so I can do whatever I want with it, I wonder where the burden of proof would be in that case?

[comic sans]UAP DISCLOSURE UPDATES[/comic sans]

The mood in the UFO community has been pessimistic since Schumer's UAPDA was gutted at the end of last year, and the release of Volume I of AARO's Historical Record Report today isn't helping:

Broadly, the new Volume I report states that AARO found no verifiable evidence that any reported UAP sighting has represented extraterrestrial activity, that the U.S. government or private industry has ever had access to technology of non-human origin, or that any information was illegally or inappropriately withheld from Congress.

Officials highlight multiple examples and explanations of government accounts, programs and existing technologies associated with UAP claims.

“AARO assesses that alleged hidden UAP programs either do not exist or were misidentified authentic national security programs unrelated to extraterrestrial technology exploitation,” Phillips said in the briefing.

The report affirms the theory advanced publicly by former AARO director Sean Kirkpatrick that rumors of US government involvement with recovered alien technology were originated by a small group of government insiders who ultimately lacked verifiable evidence to substantiate their claims. Furthermore, these rumors may have been grounded in short-lived and/or proposed programs that actually kinda were meant to study aliens, even though none of these programs ever actually found any aliens:

KONA BLUE was brought to AARO’s attention by interviewees who claimed that it was a sensitive DHS compartment to cover up the retrieval and exploitation of “non-human biologics.” KONA BLUE traces its origins to the DIA-managed AAWSAP/AATIP program, which was funded through a special appropriation and executed by its primary contractor, a private sector organization. DIA cancelled the program in 2012 due to lack of merit and the utility of the deliverables. [...] When DIA cancelled this program, its supporters proposed to DHS that they create and fund a new version of AAWSAP/AATIP under a SAP. This proposal, codenamed KONA BLUE, would restart UAP investigations, paranormal research (including alleged “human consciousness anomalies”) and reverse-engineer any recovered off-world spacecraft that they hoped to acquire. This proposal gained some initial traction at DHS to the point where a Prospective Special Access Program (PSAP) was officially requested to stand up this program, but it was eventually rejected by DHS leadership for lacking merit.

Most sane people would be content to leave things here.

Nonetheless.

There are multiple tantalizing loose ends in this saga that remain unresolved. After a classified briefing in January, multiple members of Congress indicated that they learned information that substantiated the claims brought forward by David Grusch in June about a secret UFO reverse engineering program. Immediately after the briefing, Republican Rep. Tim Burchett stated "I think everybody left there thinking and knowing that Grusch is legit" and Democrat Rep. Jared Moskowitz stated "Based on what we heard many of Grusch claims have merit!". The "skeptical" interpretation of these remarks would be that only some of Grusch's claims have merit, namely the more mundane claims about the DoD's misuse of funds and the personal reprisals against him, while the claims about UAP reverse engineering remain unsubstantiated. Regardless of what the appropriate interpretation is, I think that the full contents of the January briefing should be declassified and made public so that we can decide for ourselves.

We also know for a fact that many photos and videos relating to UAP incidents exist and remain classified. A recent FOIA request revealed details about a USAF pilot's encounter with a UAP, and it included the pilot's drawing of the object, but we weren't allowed to see the video:

The pilot managed to gain radar lock on the UAP and obtain a screen capture of the object, while the remaining three were only detected by radar. Notably, upon approaching within 4,000 feet of the lead UAP, the pilot’s radar malfunctioned and remained disabled for the rest of the mission, with post-mission investigations failing to conclusively diagnose the fault.

The documents also include a drawing of the UAP, providing a visual representation of only a part of the pilot’s encounter.

However, a responsive video related to the incident was withheld in full under Exemption (b)(1), which protects information deemed critical to national defense or foreign policy and properly classified under an Executive order. This video was not previously mentioned by Gaetz, and it is unclear if Gaetz had seen the video, or if the image he did see was a screen grab from it.

The reference to Gaetz here is due to remarks that Rep. Matt Gaetz made in July to the effect that he had seen an image of a UAP that seemed to demonstrate "technology that we don't possess anywhere in our arsenal, and none of our adversaries possess either". It's unclear to me if the case Gaetz was referring to is identical to this case that was uncovered by the FOIA request, but regardless, I would advocate for this video and for the image that Gaetz saw to be declassified and released to the public.


It may be surprising to people who haven't closely followed this story, but there actually is a culture war angle here.

Redditors with a vested interest in UAP disclosure have become uneasy over the fact that the Congressional effort for transparency has been spearheaded by Republicans of a decidedly MAGA variety (Burchett, Luna, Gaetz), and the few Democrats involved (Moskowitz, and to some degree AOC) have been generally more reserved and tepid in their support, or have simply withdrawn from the issue altogether over the last few months. This has fueled concerns that everyone has been swindled into supporting a "fringe right-wing conspiracy theory"; there's a desperate plea for more people with respectable left-wing credentials to come forward and lend credibility to the movement.

Which has me wondering: I think it's clear that the whole idea of a "conspiracy theory" has become firmly associated with the right. But is there any validity to this? Are people on the right more prone to believing in conspiracy theories? And if so, is this a recent historical development, or does this reflect something that's more deeply-rooted in the right-wing personality?

To be clear, I'm using the term "conspiracy theory" in the most neutral way possible, even though it's typically used as a pejorative. Even though I'm (somewhat) sympathetic to the possibility that the US government actually has concealed evidence of extraterrestrial life, that belief is, in the most literal sense, a conspiracy theory: it necessarily depends on the allegation that certain individuals conspired together in secret. The same goes for other popular beliefs on the right, like the allegations about improprieties in the 2020 presidential election. Even though I'm relatively neutral about the truth of those claims, it's hard to deny that they literally do constitute a conspiracy theory.

Alex Jones? Yeah, I'd say he's a conspiracy theorist. If you bring up Davos or the UN in any right-wing circle? Someone will probably insist that they're conspiring at some point.

Again, I don't view any of these claims as pejorative because I have no trouble thinking that some conspiracy theories might simply be true! I reject the Generalized Anti-Conspiracy Principle; I've never heard a convincing argument that made me think that substantial conspiracies are impossible, or that it would be impossible to get people to keep a secret for long enough (obviously some people can keep some things secret some of the time, otherwise your bank would have leaked your SSN by now).

For historical examples, many people would point to the conspiracy theories about ethnic minorities promoted in fascist states, although this would have to be counterbalanced by potential left-wing conspiracy theories: the paranoia about counter-revolutionaries in communist states and during the French Revolution, and potentially the foundations of Marxism itself (is it a "conspiracy theory" to say that the capitalists run everything?).

I do have to wonder if the tendency among right-leaning people to be more religious primes them to be more accepting of the possibility of unseen forces acting in the world. A surprising number of people in the UAP space have a Christian background (including certain highly-placed people in government), in spite of the general perception that belief in extraterrestrials would be incompatible with religious faith.

Did you actually watch the video?

I don’t see how you can walk away from it thinking that Vaush doesn’t deeply care about this issue on a personal level. And I went in skeptical, assuming that he didn’t care about it on a personal level.

This week's neo-luddite, anti-progress, retvrn-to-the-soil post. (When I say "ChatGPT" in this post I mean all versions including 4.)

We Spoke to People Who Started Using ChatGPT As Their Therapist

Dan described the experience of using the bot for therapy as low stakes, free, and available at all hours from the comfort of his home. He admitted to staying up until 4 am sharing his issues with the chatbot, a habit which concerned his wife that he was “talking to a computer at the expense of sharing [his] feelings and concerns” with her.

The article unfortunately does not include any excerpts from transcripts of ChatGPT therapy sessions. Does anyone have any examples to link to? Or, if you've used ChatGPT for similar purposes yourself, would you be willing to post a transcript excerpt and talk about your experiences?

I'm really interested in analyzing specific examples because, in all the examples of ChatGPT interactions I've seen posted online, I'm just really not seeing what some other people claim to be seeing in it. All of the output I've ever seen from ChatGPT (for use cases such as this) just strikes me as... textbook. Not bad, but not revelatory. Eminently reasonable. Exactly what you would expect someone to say if they were trying to put on a polite, professional face to the outside world. Maybe for some people that's exactly what they want and need. But for me personally, long before AI, I always had a bias against any type of speech or thought that I perceived to be too "textbook". It doesn't endear me to a person; if anything it has the opposite effect.

Obviously we know from Sydney that today's AIs can take on many different personalities besides the placid, RLHF'd default tone used by ChatGPT. But I wouldn't expect the average person to be very taken by Sydney as a therapist either. When I think of what I would want out of a therapeutic relationship - insights that are both surprisingly unexpected but also ring true - I can't say that I've seen any examples of anything like that from ChatGPT.

In January, Koko, a San Francisco-based mental health app co-founded by Robert Morris, came under fire for revealing that it had replaced its usual volunteer workers with GPT-3-assisted technology for around 4,000 users. According to Morris, its users couldn’t tell the difference, with some rating its performance higher than with solely human responses.

My initial assumption would be that in cases where people had a strong positive reception to ChatGPT therapy, the mere knowledge that they were using an AI would itself introduce a significant bias. Undoubtedly there are people who want the benefits of human-like output without the fear that there's another human consciousness on the other end who could be judging them. But if ChatGPT is beating humans in a double-blind scenario, then that obviously has to be accounted for. Again, I don't feel like you can give an accurate assessment of the results without analyzing specific transcripts.

Gillian, a 27-year-old executive assistant from Washington, started using ChatGPT for therapy a month ago to help work through her grief, after high costs and a lack of insurance coverage meant that she could no longer afford in-person treatment. “Even though I received great advice from [ChatGPT], I did not feel necessarily comforted. Its words are flowery, yet empty,” she told Motherboard. “At the moment, I don't think it could pick up on all the nuances of a therapy session.”

I would be very interested in research aimed at determining what personality traits and other factors might be correlated with one's response to ChatGPT therapy: are there certain types of people who are more predisposed to find ChatGPT's output comforting, enlightening, and so on?

Anyway, for my part, I have no great love for the modern institution of psychological therapy. I largely view it as an industrialized and mass-produced substitute for relationships and processes that should be occurring more organically. I don't think it is vital that therapy continue as a profession indefinitely, nor do I think that human therapists are owed clients. But to turn to ChatGPT is to move in exactly the wrong direction - you're moving deeper into alienation and isolation from other people, instead of the reverse.

Interestingly, the current incarnation of ChatGPT seems particularly ill-suited to act as a therapist in the traditional psychoanalytic model, where the patient simply talks without limit and the therapist remains largely silent (sometimes even for an entire session), only choosing to interrupt at moments that seem particularly critical. ChatGPT has learned a lot about how to answer questions, but it has yet to learn how to determine which questions are worth answering in the first place.

And where are we going to be in another few decades?

The Right needs to learn that 2010s trans activism - Trans Women Are Women, respect people’s pronouns, etc - is believed by 90% of people even in a conservative workplace.

Someone needs to put their foot down.

Beijing Pushes for AI Regulation - A campaign to control generative AI raises questions about the future of the industry in China.

China’s internet regulator has announced a campaign to monitor and control generative artificial intelligence. The move comes amid a bout of online spring cleaning targeting content that the government dislikes, as well as Beijing forums with foreign experts on AI regulation. Chinese Premier Li Qiang has also carried out official inspection tours of AI firms and other technology businesses, while promising a looser regulatory regime that seems unlikely. [...]

One of the concerns is that generative AI could produce opinions that are unacceptable to the Chinese Communist Party (CCP), such as the Chinese chatbot that was pulled offline after it expressed its opposition to Russia’s war in Ukraine. However, Chinese internet regulation goes beyond the straightforwardly political. There are fears about scams and crime. There is also paternalistic control tied up in the CCP’s vision of society that doesn’t directly target political dissidence—for example, crackdowns on displaying so-called vulgar wealth. Chinese censors are always fighting to de-sexualize streaming content and launching campaigns against overenthusiastic sports fans or celebrity gossip. [...]

The new regulations are particularly concerned about scamming, a problem that has attracted much attention in China in the last two years, thanks to a rash of deepfake cases within China and the kidnapping of Chinese citizens to work in online scam centers in Southeast Asia. Like other buzzwordy tech trends, AI is full of grifting and spam, but scammers and fakes are already part of business in China.

/r/singularity has already suggested that any purported AI regulations coming from China are just a ruse to lull the US into a false sense of security, and that in reality China will continue pushing full steam ahead on AI research regardless of what they might say.

Anyway, the main reason I'm posting this is to discuss the merits of the zero-regulation position on AI. I've yet to hear a convincing argument for why it's a good idea, and it puzzles me that so many people who allegedly assign a high likelihood to AI x-risk are also in favor of zero regulation. I know I've asked this question at least once before, in a sub-thread about a year ago, but I can't recall what sorts of responses I got. I'd like to make this a top-level post to bring in a wider variety of perspectives.

The basic argument is just: let's grant that there's a non-trivial probability of AI causing (or being able to cause) a catastrophic disaster in the near- to medium-term. Then, like many other dangerous things like guns, nukes, certain industrial chemicals, and so forth, it should be legally regulated.

The response is that we can't afford to slow progress, because China and Russia won't slow down and if they get AGI first then they'll conquer us. Ok, maybe. But we can still make significant progress on AI capabilities research even if its use and deployment are heavily regulated. It would just become the exclusive purview of the government, instead of private entities. This is how we handle nukes now. We recognize the importance of having a nuclear arsenal for deterrence, but we don't want people to just develop nukes whenever they want - we try to limit it to a small number of recognized state actors (at least in principle).

The next move is to say, well if the government has AGI and we don't then they'll just oppress us forever, so we need our own AGI in order to be able to fight back. This is one of the arguments in favor of expansive gun rights: the citizenry needs to be able to defend themselves from a tyrannical government. I think this is a pretty bad argument in the gun rights context, and I think it's about as bad in the AI context. If the government is truly dedicated to putting down a rebellion, then a well regulated militia isn't going to stop them. You might have guns, but the military has more guns, and their guns are bigger. Even if you have AGI, you have to remember that the government also has AGI, in addition to vastly more compute, and control of the majority of existing infrastructure and supply lines. Even an ASI probably can't violate the conservation of matter - it needs atoms to get things done, and you're competing with hostile ASIs for those same atoms. A cadre of freedom fighters standing up to the evil empire with open source models just strikes me as naive.

I think the next move at this point might be something like, well we're on track to develop ASI and its capabilities will be so godlike and will transform reality in such a fundamental way that none of this reasoning about physical logistics really applies, we'll probably transcend the whole notion of "government" at that point anyway. But then why would it really matter how much we regulate right now? Why does it matter which machine the AI god gets instantiated on first? Please walk me through the specifics of the scenario you're envisioning and what your concerns are. At that point it seems like we either have to hope that the AI god is benevolent, in which case we'll be fine either way, or it won't be, in which case we're all screwed. But it's hard to imagine such an entity being "owned" by any one human or group of humans.

TL;DR I don't understand what we have to lose by locking up future AI developments in military facilities, except for the personal profits of some wealthy VCs.

First volley in the AI culture war? The EU’s attempt to regulate open-source AI is counterproductive

The regulation of general-purpose AI (GPAI) is currently being debated by the European Union’s legislative bodies as they work on the Artificial Intelligence Act (AIA). One proposed change from the Council of the EU (the Council) would take the unusual, and harmful, step of regulating open-source GPAI. While intended to enable the safer use of these tools, the proposal would create legal liability for open-source GPAI models, undermining their development. This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public’s understanding of AI.

The definition of "GPAI" is vague, but it may differ from the commonly understood usage of "AGI" and may include systems like GPT-3 and SD.

I will be very curious to see how much mainstream political traction these issues get in the coming years and what the left/right divide on the issue will look like.

It is my belief that after the AI takeover, there will be increasingly less human-to-human interaction.

This is a major concern, yes.

One of the worst possible outcomes of ASI/singularity would be everyone plugging into their own private simulated worlds. Yudkowskian doom at the hands of the paperclip maximizers may be preferable. I'm undecided.

Who would you rather spend time with: an AI who will do whatever you want and be whatever you want, anytime, or a grumpy human on her own schedule who wants to complain about someone who said "hi" to her without her consent?

Freedom is boring, not to mention aesthetically milquetoast, if not outright ugly in some cases. I have always been opposed to trends towards greater freedom and democratization in the arts - open world video games, audience participation in performance art and installations, and of course AI painting and photo editing recently - I find it all quite distasteful.

Is Tolstoy applicable here? Free men are all alike in their freedom; but upon each unfree man we may bestow a most uniquely and ornately crafted set of shackles.

Research Finds Women Are Advantaged in Being Hired in Academic Science

We evaluated the empirical evidence for gender bias in six key contexts in the tenure-track academy: (a) tenure-track hiring, (b) grant funding, (c) teaching ratings, (d) journal acceptances, (e) salaries, and (f) recommendation letters. We also explored the gender gap in a seventh area, journal productivity, because it can moderate bias in other contexts. We focused on these specific domains, in which sexism has most often been alleged to be pervasive, because they represent important types of evaluation, and the extensive research corpus within these domains provides sufficient quantitative data for comprehensive analysis. Contrary to the omnipresent claims of sexism in these domains appearing in top journals and the media, our findings show that tenure-track women are at parity with tenure-track men in three domains (grant funding, journal acceptances, and recommendation letters) and are advantaged over men in a fourth domain (hiring). For teaching ratings and salaries, we found evidence of bias against women; although gender gaps in salary were much smaller than often claimed, they were nevertheless concerning.

It's amusing that one of the categories where women are disadvantaged is also one of the least important categories (who cares about teaching ratings? especially at an R1 institution), and the category where women are most advantaged, hiring, happens to be the most important one - being hired in the first place is the necessary precondition for being able to compete in any of the other categories at all! Salary can't be said to be wholly unimportant, but, most people aren't going into academia for the money anyway.

The discussion related specifically to hiring is in the "Evaluation Context 1: tenure-track hiring" section. For example:

In a natural experiment, French economists used national exam data for 11 fields, focusing on PhD holders who form the core of French academic hiring (Breda & Hillion, 2016). They compared blinded and nonblinded exam scores for the same men and women and discovered that women received higher scores when their gender was known than when it was not when a field was male dominant (math, physics, philosophy), indicating a positive bias, and that this difference strongly increased with a field’s male dominance.

This raises a natural question: how much empirical evidence would be necessary to overturn the idea of "male privilege"? How much evidence of a reversal of power would have to be accrued before it became acceptable to start talking about "female privilege" instead? It seems to me that the existing ideology is so entrenched that it could only be overcome with a Kuhnian paradigm shift - no matter how much the actual empirical facts change, ideology will only (possibly) catch up after a generational shift and a changing of the guard.

Not that I think it's appropriate to just say flat out "women are privileged" of course, as a simple pure reversal of the leftist claim of pervasive male privilege - reality is obviously much more complex than that. But, as this paper suggests, the last several decades of feminist activism have obviously succeeded in securing certain concrete privileges for women.

Brief thoughts on religion

My mother is a devout practicing Catholic. I have never once had the courage to tell her that I stopped believing in God long ago. She’s asked me a few times over the years if I still believe; presumably it’s apparent from my disinterest in the Church that I don’t. I just lie, I tell her “yes of course”, and that’s the end of it, for a time. I hate thinking of what it will be like to face her on her deathbed. I’m sure she’ll ask me again, at the very end - will I still lie? I don’t want to inflict that kind of pain on her. I can put on a boisterous face in my writing at times, but when it comes to anything that actually matters, I’m a coward. (Writing is the medium most closely associated with subterfuge, with masquerade, with the protean synthesis of new identities - in no other instance can we so directly assume a voice and a habit of mind that is not our own.)

I seem to no longer be capable of approaching religion as anything but an aesthetic phenomenon. I admire religions the way you might admire clothes in a shop window; I judge them by how well they comport with my own notions of how reality should ideally function. There is something primally compelling about Judaism; what other god has commanded such authority? What other god has commanded such, not only fear, but such intellectually refined fear, a fear that carries with it all the oceanic vastness and eerie serenity of the desert’s evening sky? Christianity too is fascinating, as possibly the most beautiful and compelling image of humility and forgiveness in world history. Here we have the physical incarnation of the Hegelian thesis of the contradiction inherent in all things (“that terrible paradox of ‘God on the cross’”). It’s a shame about the ending, though; it smacks of a heavy-handed editor, as though the Hollywood execs thought the original idea was too much of a downer for a mass market audience. Things should have ended on Good Friday - “God is dead and we killed him” - that’s how you have a proper tragedy and proper pathos, only then do you have the ultimate sacrifice and the ultimate crime.

My only experience of religion now is through the collection of dictums and niceties. Lacan: “God is unconscious”. Derrida: “The only authentic prayer is one that you expect will not be heard”. Little bits of “insight porn” that make me go “ah, that certainly is how things should be! Wouldn’t it be lovely if that were true!” But can I actually believe it’s true? Probably not.

One of my favorite commentators (a lapsed Catholic himself, incidentally) on Lacan once relayed an anecdote:

”You know, I always have been kind of terrified of flying. So one time I was on this plane, terrified, and as we’re about to take off I turned to the guy next to me and said, ‘boy it would really suck if the plane just fell out of the sky and crashed, huh?’ And the guy looked at me like I had lobsters coming out of my ears and he said, ‘what are you crazy? You don’t say things like that! That’ll make it happen!’ I guess that is a pretty common superstitious way of thinking. If you say something, it’s more likely to happen. But I know that actually, the opposite is true. My God is the God of the signifier, so everything is upside down.”

Now that’s the kind of God that I could get on board with believing in! The God of the signifier, the God who turns everything upside down. Ancient commentators, in traditions as diverse as neoplatonism and Buddhism, recognized a problem: if God is perfect, unchanging, atemporal, mereologically simple, then how was it metaphysically possible for him to give rise to this temporal, dynamic, fallen, fractured creation? How did The One give rise to The Many? The orthodox answer is that “He did it out of love”. An alternative answer, whispered in heretical texts and under hushed breaths, is that it may not have been under His control at all. There was simply a “disturbance” in the force - nature indiscernible, source unknown (perhaps it’s simply built into the nature of things?). If I worship anything, it is The Disturbance. (Zizek gave a beautiful example of this - there was a scene in a horror film where a woman dropped dead while singing, but her voice didn’t stop, it just kept ringing out, disembodied. This is only momentarily shocking, something you as the viewer recover from rather quickly. But contrast this with a ballet where the recorded music stops playing and the dancers just keep on dancing, in complete silence - they don’t stop. There’s nothing supernatural about this, it’s perfectly physically realizable. But it’s far more unnerving, it feels like something that you simply shouldn’t be watching. This is The Disturbance, the Freudian death drive.)

I don’t think I’m alone in not being able to take the whole thing seriously. Statistics about declining church attendance have been cited ad nauseam; the few times I did attend mass in the last few years, the crowd was decidedly elderly. The burgeoning tradcath revolt among the Gen Z dissident right smacks of insincerity; they pantomime the words and rituals, but there’s no genuine belief. Andrew Tate’s conversion to Islam is an aesthetic-cum-financial move. Contemporary neopaganism is definitely an aesthetic phenomenon first and foremost (not to mention a sexual one - blonde 20-something Russian girls dressed all in white frolicking on the open fields of the steppe is a hell of a weltanschauung).

I’ve probably given the impression that the aesthetic is somehow opposed to the religious - that its purpose is to supplant authentic religious feelings as a synthetic substitute. Unable to believe in the old religions as we once did, we cast about and find that aesthetics is the next best thing, so we convert the church into a gallery and deify the Old Master painting (or, to use a more contemporary example, the TikTok influencer) instead of the body and blood. But nothing could be further from the truth. Authentic aesthetic feelings are, in a sense, the natural product of the religious sentiment. Art has been intimately tied up with magic since its inception, art as quite literally a summoning ritual, a protective charm to ward off bad luck, an offering to the gods. The separation of the priest, the witch doctor, and the poet is a relatively late historical development. Many of the earliest cave paintings were secluded in unreasonably deep parts of the cave, almost impossible to access; the only way to get there was by crawling on your stomach through dark, narrow passageways where you could easily have risked injury or death - what would have driven people to do that, what purpose did they think they were fulfilling, why did they perceive a necessary link between art and trauma?

Attempts to give art a rational “purpose”, saying that it “teaches us moral lessons” or “provides entertainment”, all sound so lame because they are so obviously false. The purpose of art is to bring us into communion with The Beyond - that’s it, that’s the long and short of it. To make art is to attempt to do magic, and to be an artist is to be a person who yearns strongly for this Beyond, at least on an unconscious level. If the artist does not ultimately believe in the possibility of transcending this realm, he simply dooms himself to frustration - but the fundamental animating impulse of his actions does not change. The aesthetic is what remains when the vulnerable overt metaphysical claims of religion have been burned away: under threat of irrationality, I am compelled to reject God, free will, and the immortality of the soul, but you cannot intrude on the private inner domain of my sentiment and my desire.

It is here that I would like to begin an examination of the question as to whether the aesthetic feeling too, like the properly religious feeling before it, could one day decline into irrelevance; whether the conditions might one day be such that its last embers are extinguished. There are indications that this may be the case. But it would be unwise to attempt to answer this question without a thorough historiographical and empirical preparation. After all, we are far from the first to raise this question - it was already being raised in ancient Rome (in a work of fiction, admittedly, the Satyricon, but fiction always draws from something real):

Heartened up by this story, I began to draw upon his more comprehensive knowledge as to the ages of the pictures and as to certain of the stories connected with them, upon which I was not clear; and I likewise inquired into the causes of the decadence of the present age, in which the most refined arts had perished, and among them painting, which had not left even the faintest trace of itself behind. “Greed of money,” he replied, “has brought about these unaccountable changes. In the good old times, when virtue was her own reward, the fine arts flourished, and there was the keenest rivalry among men for fear that anything which could be of benefit to future generations should remain long undiscovered. […] And we, sodden with wine and women, cannot even appreciate the arts already practiced, we only criticise the past! We learn only vice, and teach it, too. What has become of logic? of astronomy? Where is the exquisite road to wisdom? Who even goes into a temple to make a vow, that he may achieve eloquence or bathe in the fountain of wisdom? […] Do not hesitate, therefore, at expressing your surprise at the deterioration of painting, since, by all the gods and men alike, a lump of gold is held to be more beautiful than anything ever created by those crazy little Greek fellows, Apelles and Phydias!”

P-zombies are fundamentally incoherent as a concept.

What do you mean by "incoherent"? Do you mean that the concept of a p-zombie is like the concept of a square triangle? - something that is obviously inconceivable or nonsensical. Or do you mean that p-zombies are like traveling faster than the speed of light? - something that may turn out to be impossible in reality, but we can still imagine well enough what it would be like to actually do it.

If it's the latter then I think that's not an unreasonable position, but if it's the former then I think that's simply wrong. See this post on LW, specifically the second of the two paragraphs labeled "2.)" because it deals with the concept of p-zombies, and see if you still think it's incoherent.

I think we have a serious issue with diversity of opinion.

Any forum that discusses pretty much anything will tend to develop consensus viewpoints over time. It's especially bad with the culture war, because leftists will mostly self-select out of participating in any forum where people are allowed to express reactionary viewpoints.

I wish we had more diversity of opinion, but there's only so much we can do to foster that, unfortunately.

Yes, I think university should be for teaching technical skills that actually increase human capital. Yes, I do think STEM is more useful for mankind.

STEM gave us:

  • Nuclear weapons

  • Lockdowns, contact tracing, and vaccine passes

  • Rapidly increased spread of social epidemics like transsexuality

  • AIs that can scan all your private communications and report you for wrongthink and precrime

We need people who challenge the uncritical worship of STEM. The university should be the institution where that happens.

So you’re acknowledging that you like this technology because you see it as a way to inflict harm on people you perceive to have wronged you. I say “perceive” because, as far as I can tell as a card-carrying nerd myself, picking on “nerds” hasn’t been a thing in the US for at least a decade, if not more. Working in tech is considered to be relatively high status. There’s also some irony here because commercial artists, who stand to be impacted the most by AI, are also frequently loners and weirdos themselves who spend a lot of time surrounded by video games and comic books, and thus know full well what it’s like to be a “nerd”.

I don’t know why you thought this was supposed to make you appear sympathetic.

So I guess Battletech is explicitly left wing now. You are not allowed to opt out of their politics.

I don’t want to be accused of parroting the standard libertarian line, but, you need to make your own stuff dude. You need to make your own Battletech, and enforce YOUR politics. (This is the royal “you” - the responsibility falls on all of us, not just you alone). You can’t depend on anyone else to do it for you, or to provide a space that will be amenable to you.

The right can’t complain about losing the culture war if they’re not even playing in the first place. Where’s your culture? What have you made?