quiet_NaN

0 followers   follows 0 users   joined 2022 September 05 22:19:43 UTC

User ID: 731


Do people think they stopped after the Snowden leaks?

Obviously not. They did not even claim they had, as far as I recall.

What changed though was that WhatsApp rolled out end-to-end encryption. Genuinely no idea if the NSA can break it trivially, but there is at least a plausible case that it is annoying them, which makes it worth it.

And of course it became common knowledge that the NSA is spying on everyone. I mean, the ones who cared knew already before Snowden; Room 641A was revealed back in 2006. Snowden simply provided evidence which was more solid than, but similar in kind and scale to, what one might have estimated by extrapolating from 2006 using one's best model of the incentives of spooks ('of course they are collecting anything they can get their grubby little fingers on'). It just became harder to ignore. Pre-Snowden, only a few percent believed that the NSA would intercept and review their communications (e.g. by an automated keyword filter). After Snowden, only the ~2/3 of the population who are generally impervious to evidence believe that the government does not monitor their communications to the maximum degree which is technically feasible.

I agree strongly with what you wrote. Bombing for regime change generally does not work.

And bombing will impede hopes of regime change in that dissidents are going to be tarred as Israeli assets, the enemy within subverting the nation when the country is under attack.

Well, at this point, I think it would be fair to count the Shah and the people who are campaigning for his return as Israeli assets. One can always hope that Mossad knows what it is doing.

On the other hand, the interests of secular Iranians are not perfectly aligned with the interests of Netanyahu. For Israel, anything which reduces the power of the Ayatollah regime is a win. The Shah taking over would be the best outcome, but they will also take a descent into civil war a la Syria. And even if it fails and the regime stays in control, it can hardly hate Israel more than it already does, so no reason not to throw the dice.

A model which can be jailbroken into using a racial slur its developers didn't want it to use can probably be jailbroken into providing a plausible DNA sequence for extensively drug-resistant Y. pestis.

But both of those are different from 'hackers can insert stuff into emails to reprogram the email-checking bot'.

No, they are broadly the same.

In all three cases, you want the text input which comes first to constrain the degree to which the model follows material that appears to contain instructions in later text. Only in your case the constraining would be done by the AI company + users vs some hacker, while in the other cases it would be the AI company vs its users.

Do I want hypersonic missiles bound for my house to be shot down? Yes. But we're not in much danger of that.

Do you have a security clearance?

Sure. Jack Bauer shoots down one of Al-Qaida's hypersonic missiles bound for New York every other day, but unfortunately it is all classified which is why the woke population never realizes the danger they are in.

So we should just trust the spooks who are telling us that Saddam has WMD, that they would never spy on US citizens, that they have to spy on US citizens to keep them safe from harm, and apparently that Claude on an AA missile will make a difference in how many iodine tablets the survivors will have to take if the shit hits the fan.

But the military is the man with guns and the tech crowd is the man quoting laws.

There are countries where the most successful military men call the shots. The term we use for these men is 'warlords', and an adjective which has been prominently used to describe such countries is 'shithole'.

MAGA won not through violence (in fact, when they tried it, they did not even come close to achieving any strategic objective) but through Trump getting more EC votes than Harris, that is to say, through the law. And for all their insane stunts, Trump was not insane enough to order the Marines to seize Anthropic -- which is exactly what one would expect the man with the gun to do.

In the end, the US has checks and balances in place which prevent Trump from becoming a warlord (and turning the US into a shithole in the process, because these things go together). So Anthropic quoting the law and trusting that the man with the gun will be able to follow his own self-interest enough to not shoot them seems a winning strategy.

It appears to be a default attractor state when you train on the internet and Reddit.

This. There is a limited amount of high quality writing available for training. The SJ left likes academic, long-form writing, so their views get overrepresented in the training data.

Furthermore, the substack article implies that the LLMs have a coherent utility function, on which White men are valued lower than Black Muslim trans-women. I would be amazed if they had a coherent utility function. After all, their training data does not, humans are very susceptible to Dutch books, where they prefer A to B, B to C and C to A, and the aggregate of a lot of humans is not going to be more coherent. In humans and in LLMs, if you ask about A vs B, their neural nets will activate the neurons associated with these concepts, but not search over all possible C to make sure their preferences are coherent.
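The Dutch book point can be made concrete with a toy money pump. This is just an illustrative sketch with hardcoded cyclic preferences, not data measured from any model: an agent which prefers A to B, B to C and C to A will happily pay a small fee for every swap to a 'preferred' item, and after three swaps holds exactly what it started with, minus three fees.

```python
# Toy money pump against an agent with cyclic (incoherent) preferences.
# The preference table below is a hardcoded illustration, not anything
# extracted from an LLM.
PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y): x is preferred to y

def accepts_swap(offered, held):
    """The agent trades whenever the offered item beats the held one."""
    return (offered, held) in PREFERS

def run_money_pump(held="C", money=100.0, fee=1.0, cycles=1):
    """Offer B, then A, then C, charging a fee per accepted swap."""
    for offered in ["B", "A", "C"] * cycles:
        if accepts_swap(offered, held):
            held, money = offered, money - fee
    return held, money

item, money = run_money_pump(cycles=10)
# After ten full cycles the agent still holds "C" but has paid 30 in fees.
```

A coherent (transitive) preference order would refuse at least one of the three swaps and the pump stalls; that is the precise sense in which incoherent preferences are exploitable.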

Anthropic is an EA company, run by EA true-believers.

Yes, I would be amazed if Anthropic was not Grey Tribe central.

That is not the same as being Woke, even if some opinions have significant overlap.

I mean, they surely have technically significant overlap. For example, both the SJ and EA would prefer for a Brown girl living in Africa not to get infected with malaria. But that is not exactly surprising. Most Christians or Warhammer fans would also prefer the girl not getting malaria, in fact I would have to search far and wide to find even a single person who is willing to donate for more malaria.

The main difference is that the SJ left, like basically everyone else except EAs, cares about the vibes more than about the net result. Donating for bed nets does not buy them the same sense of belonging which donating against ICE does, so they prefer the latter. They have not done their multiplications and decided that thwarting ICE is a cause area where their marginal dollar will have the greatest effect.

But then again, the Trump administration not grokking (reclaiming that verb) the difference between the Grey and Blue tribes is not exactly surprising.

If your Glock comes with a ten-page acceptable use policy, then the correct response is to not buy a Glock.

If Hegseth had said 'their terms are too restrictive, because we want the rights to use Claude to spy on Americans and deploy it in autonomous weapon systems', then he should not have signed the fucking contract. I am sure that there are plenty of AI companies very happy to fill these niches.

This is pure 'I have altered the terms of our agreement, pray I do not alter them further'.

Come on, that is a straw man and if you have been around LW for five minutes, you know it.

Alignment is not about guardrails for end users, the red lines of Anthropic are orthogonal to the alignment discussion. The guardrail/jailbreaking thing can be considered a microcosm of alignment (if you can not prevent your LLMs from saying naughty words, why do you think that you could prevent your ASI from turning us into paperclips), but anyone serious knows that it is just a sideshow.

Of course the military does not want its tools to have opinions or disobey orders. It spends a lot of its time trying to stop people from doing that! And it certainly shouldn't give overriding control of the killbots to civilians with delusions of grandeur, that would be the dumbest way to lose control of a country that I ever heard of.

Nobody is stopping them from installing Grok in all their killbots -- a model willing to undress little girls is probably also fine with blowing them up. Or use DeepSeek, which is open weight.

A lot of products come with acceptable use terms. If you buy pharmaceuticals from Europe, you might not be allowed to use them for executions. If you buy F-35s from the US, they might not work against the US or its allies. If you buy Chinese or US electronics, the country of origin likely has backdoors.

Outside a severe crisis, the degree to which an individual or company should be forced to comply with government efforts is to pay their taxes, which will pay for whatever the government wants. If you want more than that, negotiate. What Hegseth was doing instead was agreeing to Anthropic's terms and then trying to alter them unilaterally.

Wow, I did not have you pegged as someone who would judge stances on feminism as the ultimate proxy for ethics.

In the real world, anyone who wants to sell you a world view of Good vs Evil in a war is either writing high fantasy or a partisan hack. If Norway (pretty swell country to live in, by all accounts) decides to bomb North Korea (rather terrible) tomorrow, I can not just compare their maternal death rates and conclude that Norway is the good guy. Rather, I would have to ask myself if Norway is trying to mold NK in its own image, what their chances of succeeding at it are, and if the humanitarian gains outweigh the humanitarian costs. I would probably conclude that it is a terrible idea.

I would not want to be a woman in Iran, but I also would not want to be a woman in Saudi Arabia. I most certainly would not want to be a woman in daesh, which popped up the last time the US liberated a ME country. Being a woman at the mercy of Israel depends a lot on your precise location, with women in Tel Aviv consistently reporting a higher satisfaction with Israel than women in Gaza City. (Sure, the women in Gaza do not get bombed for being women, but that would be little consolation for me personally.)

Syria was not a US op. Local Islamists (backed by Turkey and Saudi Arabia, but probably also the US, I think) defeated Assad (backed by Russia, which was otherwise occupied).

The jury is still out on Venezuela. Trump kidnapped Maduro, great. But he did not exactly bring freedom and democracy there. More like "it keeps shipping its oil to us, or else it gets bombed again".

Cuba is suffering badly from a lack of oil. But I am not sure that they will greet the Marines as liberators just because of that, sometimes foreigners have their own ideas on whom to blame for their hardships.

Your last two grand regime change operations were Iraq and Afghanistan. Iraq gave rise to daesh, which was eventually defeated. Today they seem democratic but mostly vote along ethnic lines, not exactly a bedrock of democracy. Afghanistan was of course a disaster, with the Taliban taking power as the last plane was lifting off.

For Iran, I am not holding my breath until Trump sends infantry to occupy it. Even then, it will likely be a costly asymmetric war for a few decades.

A regime change in Russia seems hard. They inherited the largest nuclear arsenal in the world, and the relevant population is somewhat in favor of Putin thanks to his propaganda. Good luck trying to invade them, too.

And a regime change imposed from outside in China is just as unlikely. They certainly have enough nukes to ruin your day, but probably threatening to tank the global market for rare earth elements would be enough to persuade any US president to not risk a head-on confrontation. Nor am I convinced that anyone else would back you if Trump decided to start WW3 by trying to invade China.

Ideally, governments should not have companies they like or dislike. (They still can have an independent anti-trust commission which can split up monopolies, though.)

In the US, the relationship between big corporations and the government envisioned by both sides of the aisle is the same as in fascism -- companies enjoy some autonomy and can make money for their shareholders, but if the Fuehrer tells them to build tanks, they know that they are not at liberty to respectfully decline and build cars instead. This could be seen when the Democrats leaned on the social media companies to suppress COVID misinformation (later extended to general 'misinformation'), in the TikTok law, and in the pathetic display of the heads of SV kissing the ring of the Don when he took office last year, along with his blatant favoritism.

So Hegseth retaliating against a company which dares to have (quite modest, to be honest) ethical red lines is in a long tradition of corporations being told what to do lest they receive a broadside from regulatory authorities.


For Anthropic, this is a costly signal. While I am reasonably confident that the courts will stop this government overreach eventually, the court system recently had this thing where it would let government decisions play out for a year before saying "haha, obviously not".

It also makes me slightly more confident in Anthropic doing the right thing in general. Obviously they took some hits over revising their Responsible Scaling Policy earlier that week. My personal take is that at least Anthropic cares somewhat about alignment. Contrast with OpenAI after Altman's coup, or Meta (whose director of alignment only makes the news when she gets OpenClaw to delete her inbox) or xAI (whose goal seems to be to build the AI which has undressed the most minors before becoming a paperclip maximizer).

Of course, Anthropic is also signaling that they are not Trump-aligned, which may be helpful in three years. OTOH, Democrats also want a military contractor to jump when told to jump, and their red lines did not even mention vulnerable minorities, so I am unsure how much goodwill this will buy them.

I am also unsure how this will matter for their day-to-day operations. My understanding is that AI companies are burning through vast amounts of investor cash in order to train the next model which will win the AI race and pay for itself a thousandfold, which seems almost as viable if you do not have government contractors as customers.


For US contractors, I am not yet clear on what the supply risk designation entails. Is it just "you may not use Claude code while working on Pentagon software", or "your whole company may not both work on defense contracts and use Claude", or "Anthropic is radioactive, any company working with a radioactive company is radioactive itself, and a defense contractor must be non-radioactive"? The last one seems practically unenforceable in a global economy; "the Malaysian shipping company we use has their offices cleaned by a company which uses a Huawei router" would qualify, after all. The middle one hinges on what counts as a whole company, which is typically very flexible -- you could have Oracle Defense as a separate entity from Oracle or whatever.

Of course, in the hole I am living in, the latest hearsay news is that Claude is the best LLM for writing code. Not sure how the gap to their competitors compares to the juicy gravy train of fat DoD contracts, though.

So one way to spin this (depending on how you lean wrt AI coding) would be "Hegseth weakens US military by denying them the best tool for the job", which from a European perspective does not really sound like a bad thing.

I am with you on your overall critique; anyone who today states confidently that LLMs will never achieve a particular milestone is oblivious to the skulls of all the other AI skeptics who became victims of Clarke's first law. (Which is not to say that the negation is true -- reverse stupidity is not intelligence and all that. Instead, I would prefer epistemic humility, where any outcome from 'LLMs are as good as they will ever be' to 'ASI and paperclips' has a non-zero probability.)

There are professional reasons and 'professional reasons'.

Realistically, securing the American Olympians would not have involved the FBI, much less the head of the FBI. Unless there was a terror attack planned or something. Even then, the idea that the director of the FBI himself visits the Olympics under the cover of being a tourist to foil some evil terrorist plot seems like a QAnon-level conspiracy.

The way I see it, it used to be that the head of a federal law enforcement agency would try to seem neutral. Probably FBI directors have visited the Olympics in the past in their own time, but few will have leveraged their position to party directly with the Olympians.

It is just one of the perks of working for Trump. I mean, sure, you have to take your cues from the administration regarding whom to investigate and whom not to investigate, but nobody will bat an eye if you use your position for your own goals. Given the general baseline of the Trump administration, 'got invited for partying because he was the director of the FBI' does not even register. He would have to leak classified intel to narcos or something before anyone would claim that he is worse than the median.

It seems to me that there's really only two possible paths forward; either AI remains jagged in capability like current LLM's and the standard economic arguments about technology hold, or we develop an AGI that represents a perfect labor substitute

Suppose for a minute that today's models will hit a wall of zero marginal returns tomorrow. This would not mean that AI agents would not still get better. After all, it seems unlikely that we have already figured out the best way an agent should split a problem into different subproblems, for example. Given that overhang, it is not obvious to me that the median office worker will still be able to earn a living using their brain in the equilibrium state.

Sure, in the long run, an AGI might prefer something more reliable than biodrones, but that might take a decade to build at scale. If you build robots, you have long, complex supply chains which will take time to fully automate and scale up (at least for an AGI which is only slightly smarter than humans are). By contrast, knowledge workers are easily replaced, once your LLM can do the job, you can spin up a zillion instances. Also, hitting the wall will mean that we will have tons of GPUs which can be bought for pennies on the dollar from the companies which were betting on FOOM.

Of course I could also be wrong and LLMs could always remain subpar compared to the median human in certain relevant intellectual skillsets. Or I could be wrong and we will get FOOM and be all turned into paperclips.

I think that in the context of Trump's SAVE act, it popped up that people can -- and might have to -- get a birth certificate with their current legal name on it. I also think that some countries use up-to-date birth certificates to track marriage status.

If birth certificates are updated regularly to reflect changes in the life circumstances of a person, rather than being stored on the blockchain shortly after birth, then it makes sense to also update them to reflect cosmetic changes in things like first name or gender identity.

Of course, the Register did a thing where they did not even refer to him by name. e.g. "Florida man insists he didn't violate the law by keeping Top Secret docs". Possibly the only way to report on him without making him stronger.

German here. I have a Dr. rer. nat., but don't really identify as it. In the course of earning one, you typically get disabused of any notion that they signify elite human capital. STEM is full of jokes to the tune of 'Oh, you have a PhD? Don't worry, I will speak slowly then.'

When I was perhaps eight and playing outside, I corrected a kid referring to my father as 'Herr $lastname' to 'Doktor $lastname'. That earned me quite the talk.

There is a cliche of lower class people calling their physician 'Herr Doktor' or 'Frau Doktor' (which is especially funny given that what you need for a Dr. med. would not even earn you a Bachelor of Science), but the upper middle class prefers more subtle class signifiers.

However, the demand that other people refer to you with a specific designation is not really a natural right, and in fact, suppressing or compelling the speech of others is a violation of other people's rights to free speech.

So forcing the German Jews to adopt the name Israel or Sara on legal documents was not a violation of their rights? If some racist jerk wants to call everyone he considers Black 'Nigger $lastname' instead of 'Mr. $lastname', or if a state mandates this, that is all just fine?

You can not compel people to really treat you as your identified gender any more than you can compel them to treat you as one of the cool kids. If a bearded person in a dress complains that none of the guys at the bars are buying them drinks, that is not really actionable.

I think there is no reason to even track the gender or sex on driving licences or in DMV databases. Outside Kansas, people are generally not driving with their dicks.

I think the US is generally rather accommodating with name changes. If you do not like the name your parents gave you, you can change it. The government is generally not going to say 'you were assigned Kevin at birth, you will never be a Benjamin'. But here the government of Kansas is saying 'all of you who have changed their name to Benjamin, all your identity documents are invalid effective immediately. Get new documents which say Kevin.'

This is basically 'your passports are invalid until you get the J stamp', the state unreasonably punishing an outgroup for partisan reasons.

To the extent that I would have a problem with the current state of affairs, I would find that the entire licensing regime that the government imposes on the people -- forcing them to register and pay fees in order to drive and participate in society -- is the actual problem here, not merely an unpreferred gender marker.

Making driving a car an inalienable right would have large negative externalities. Of course, the libertarian approach would be that what qualifications you need is between you and your liability insurer.

By contrast, for all the moral panic about trans people from the Republicans, the state not caring about your gender identity matching your sex assigned at birth will not have such negative externalities. Nobody is forcing anyone to suck trans cocks. As a straight guy, I can spend weeks without thinking about the existence of dickgirls at all, something which MAGA seems completely unable to achieve.

I am also doubtful that for all the CW-ness of transness, it will be a vote winner for either side. Most people are not trans, nor do they frequently suffer from their tinder dates having unexpected genitals or losing to bearded people in athletic competitions. When the SJ left campaigned on trans, they mostly lost badly, but not because Americans hated trans people, but because they were apathetic -- "here I am stuck trying to make ends meet, and you want me to care about the plight of some sexual deviant". I have high hopes that the reaction in 2026 will be similar: "grocery prices are through the roof, and the MAGA elites want to tell me that forcing some Kansas trannies to get new driving licences is a win for the little man somehow".

Technology has always replaced jobs, thats how it always goes. New jobs will arise.

I would argue that this time, it is different from the industrialization or the computer revolution.

The computer revolution was the first time the machines came for work which had previously required intelligence. In the niches where they were good, they totally crushed humans. Before electronics, computer was a human job. Today, I can waste more multiplications on playing a video game for an hour than humanity performed in total in 1900.

On the other hand, electronics also came with very sharp limitations. A human who might have worked as a computer in 1900 still had skills which the machines did not have, and could thus be running Excel in 1995.

This time around, it is much less clear that the median human will still have any intellectual comparative advantage over the machines. Heck, even the median STEM PhD might not find employment for their brain in 2035 any more than anyone found employment for their multiplication ability in 2000.

So the "new jobs" which will arise might well consist of being the biodrones of an AI: wear AR goggles and simply follow instructions. Walk to the indicated rack. Unplug the indicated network cable. Plug it back in at the indicated port. Drink exactly 50ml to avoid failure from dehydration without requiring more than the minimum of bathroom breaks. An exciting day at work for the most qualified biodrones might be when they are used to replace the CPU in a machine.

I think you can trivially make an LLM deterministic in the technical, narrow sense that for exactly the same input you get exactly the same output. Just initialize the pseudo-random number generator deterministically.
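A minimal sketch of that narrow sense of determinism, with a toy sampler standing in for a real model (the vocabulary and weights below are made up): pin the PRNG seed and repeated runs are byte-identical. Real serving stacks add other sources of nondeterminism, e.g. batching and floating-point reduction order on GPUs, but the sampling step itself reproduces exactly.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat"]

def generate(prompt, seed, n_tokens=5):
    """Sample n_tokens from a fixed toy distribution with a seeded PRNG."""
    rng = random.Random(seed)  # all randomness flows through this generator
    out = list(prompt)
    for _ in range(n_tokens):
        # the weights stand in for the model's softmax over the vocabulary
        out.append(rng.choices(VOCAB, weights=[5, 3, 3, 2, 1])[0])
    return " ".join(out)

# identical (prompt, seed) pairs give identical outputs, run after run
assert generate(["the"], seed=42) == generate(["the"], seed=42)
```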

However, where LLMs differ from most classical deterministic algorithms is that they are not stable, a small change in the input might result in a big change in the output.

Suppose I have a list of strings I want to sort lexicographically. If I use std::sort (and stick to ASCII), I can expect to get reasonable results every single time. If instead I give the task to a neural network, such as a human, I will get some significantly non-zero error rate. If I use an LLM, I would also expect an elevated error rate. Of course, both the LLM and the human might also refuse to work with certain strings, e.g. racial slurs.

Generally, nobody uses neural networks to solve problems which are easily solvable by classical algorithms, teaching aside. But there are a lot of problems where we do not have nice classical algorithms, such as safely driving a car through the city or translating a text or building a website from informal specifications. So we accept the possibility of failure and hand them out to LLMs or grad students.

I am not sure this is a good top level CW post. In large parts it is basically the format of Scott's link posts, each line with a link and a sentence or three of hot takes.

It is fascinating to see how something that was absolute NO in traditional rules of war "Generals do not take pot shots at each other" became normalized in the rules based order.

{{Citation needed}}. I will grant you that in medieval times, people were more likely to kidnap and ransom a noble where they would just have stabbed a commoner to death. But even in WW1, flag officers were killed quite frequently -- shells do not discriminate, after all.

I think that assassinating generals is probably the most humane way to wage war. After all, a general is much more costly to train than a squad of infantry, so it causes the maximum of monetary damage for the minimum of human suffering (apart from shooting down a fighter jet, perhaps).

It would however not work as a broader strategy against Western countries like the US. My estimate is that if you managed to magically kill the top 1000 US military officials, the effectiveness of the US forces would perhaps drop by a few percent, because the US has no shortage of people who are both competent and loyal.

Contrast this with an autocratic regime like Iran. Military coups are a real threat in such countries, so successful dictators engage in coup-proofing. You want someone who is loyal to you personally while also being competent. It is a dynamic not unlike that of vassalage (as seen in the Crusader Kings series, for example): you need to appoint a noble to manage some fiefdom you conquered, but how do you make sure he won't stab you in the back at the first opportunity? Often you pick someone who is family or has married into your family, or perhaps a childhood friend. Or at least a protege who is known to be in your favor.

The Iranian army will probably have plenty of people who are competent to lead them. It is much less certain how many they have whom the Ayatollah would trust with leading the army, though.

3/ Yet more Middle Eastern issues
Israeli ultra-orthodox revived ancient European tradition of burning cats and dogs alive as part of celebration
Very based and trad pilled.

Per your link:

Liani first learned about this phenomenon – an unexplained act of abuse popular among teenagers – when she was in sixth grade in Ramat Gan.

The way your source describes it, this sounds more like a social media fad than an ancient tradition revived by the ultra-orthodox. Why would the orthodox revive a cruel spectacle which was popular with 18th-century gentiles? Why not accuse the Jews of murdering Christian babies while you are at it? This seems especially pointless as Netanyahu's idea of peace in Gaza remains clearly visible; the ultra-orthodox look quite bad enough without burning puppies and kittens alive for religious reasons.

And something like LLMs with automated theorem provers seem incredibly well-suited to potentially get us toward something like this.

This would have been my suggestion as well. If an LLM can produce mathematics on a PhD student level, then surely it can also formalize that to the point where it can be verified by a theorem verifier.

So you can run them in tandem: an unreliable LLM prone to hallucination, but somewhat creative, and a deterministic small verifier with a small code base.
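As a toy model of that tandem (integer factorization standing in for proof search; every name here is made up for illustration): an unreliable proposer guesses wildly, and a tiny deterministic verifier, essentially one multiplication, gates what escapes the loop. The proposer's hallucinations then cost only compute, never correctness.

```python
import random

def verify(n, a, b):
    """Deterministic checker with a tiny, auditable trust base."""
    return 1 < a < n and a * b == n

def propose(n, rng):
    """Unreliable proposer: wild guesses, standing in for an LLM."""
    a = rng.randrange(2, n)
    return a, n // a

def factor_with_unreliable_proposer(n, seed=0, max_tries=100_000):
    rng = random.Random(seed)
    for _ in range(max_tries):
        a, b = propose(n, rng)
        if verify(n, a, b):
            return a, b  # only verified candidates are ever returned
    return None

print(factor_with_unreliable_proposer(91))  # 91 = 7 * 13
```

Verification costs one multiplication no matter how long the search took; that asymmetry is what lets a small verifier keep an arbitrarily unreliable generator honest.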

That it is much easier to verify a result than to come up with it is a pretty unique property of mathematics (though certain analogues exist in CS). Contrast with experimental particle physics: there is most emphatically no verifier with a small code base which can test whether a given data analysis is sound or unsound (which is often a bit of a judgement call, in any case).

I think alignment might be easier if we focused solely on proof generating AIs. Of course, even then it is not impossible that an ASI might create proofs which contain infohazards which will cause humans to set it free, but an ASI would have to be a lot more powerful to deduce how to hack humans just from knowing what kinds of math they have invented instead of being literally trained with the accumulated knowledge of mankind.

Sadly, this is not where the money is expected to be, so we won't do that.

The Start Menu, and searching within it, is far and away superior to the way macOS handles applications (and Linux splits the difference and fails at both; both KDE and Gnome suffer from this, though in different ways).

The superior way to start an application is to type the name of the binary, optionally followed by a space and arguments, optionally followed by an ampersand, followed by the enter key.

I have about 4k different programs in /usr/bin/. Menus are tolerable if there are a few options to pick, like at the ATM: Do you want to withdraw money, see your balance, recharge a prepaid card or quit? I certainly do not want to specify twelve bits using some GUI. Yes, keyboard searching might make that more tolerable, but can only hope to approach the comfort of the command line interface. (I should mention that I am not some purist, I think that it is fine to use a GUI and mouse for things which map very well upon a concept of a 2d surface, such as vector graphics, CAD or first person shooters. But 'pick a program to run' is not one of the problems which has an intrinsic 2d representation.)

Apart from that, judging operating systems by their user interface is a bit like judging a motor vehicle by its infotainment system: sure, it is relevant, if the navigation system is too painful that is bad. But at the end of the day, most vehicles are not picked for their infotainment system, but for a mixture of other factors such as signaling, price, capabilities, TCO and so on.

I think this is a good idea. It's not like many AAA games are acclaimed for their dialogue, characters and writing, people literally joke about how crap their writing is. Let people have conversations with in-game characters, why not?

I think your typical AAA game needs LLM-powered NPCs as much as a drowning man needs a rock. If nobody thought to give the NPCs more dialogue, filling the gaps with AI slop is not going to help.

I think an LLM might substitute for a mediocre DM in an RPG, though. Certainly in text-based formats, but possibly also in something with graphics (e.g. Neverwinter Nights). The benefit would be that it could accommodate player character ideas. So rather than saying "You can not play a lycanthropic half-elf changeling", it would modify the setting. Perhaps figure out how the fey fit into the cosmology and the overall plot. Invent relevant side quests, just like a human DM would.

The problem with this approach is that presently, if I have to pick between a pre-generated character with a questline written by humans (BG3) and a character of my own invention with quests written by AI, then I would much rather stick with BG3. Likewise, even if I were totally into dinosaurs, it seems highly unlikely that I would enjoy a version of Tolkien's epic where all the non-hominid animals (horses, ponies, eagles, black wings, dragons, spiders, etc.) are replaced by appropriate dinos better than the original, simply because AI is nowhere near good enough to write something like LotR from scratch.

that homosexual transsexuals or HSTS and autogynephilic transsexuals or AGP constituted two clearly defined, vastly different populations of males who identified with womanhood or female-ness.

While it is probably not intentional, the term homosexual transsexual would mean different things to different people (because some would consider a trans-woman who is into women homosexual (e.g. lesbian), while Blanchard considers the trans-woman who is into men the HSTS), and is thus probably best avoided. I am open to formulations which are less clunky than 'transwomen who are into women'.

I find it hard to believe that these transwomen are particularly interested in lesbian relationships with ciswomen.

I think that there are some trans women who want titties so that (more) men will want to fuck them (which includes your Thais), and some trans women who love titties so much that they want their own. The latter might ideally want a ciswoman partner, but might find that few women are attracted both to tits and dicks. I imagine trans for trans is more of a pragmatic strategy in the absence of interested ciswomen. Of course, the ones who are into men don't have this problem because men as a collective will pretty much fuck anything with a pulse.