doglatine

17 followers   follows 2 users
joined 2022 September 05 16:08:37 UTC

No bio...

User ID: 619

You can look for correlated clusters of symptoms. It’s not that women and men with autism present with entirely qualitatively different features, it’s just that men and women present them to different degrees (men usually more so). If the scale is calibrated to men it will be relatively insensitive for women (though more specific).

Toy example: playing Warhammer and MtG is not especially diagnostic of autism in men. Lots of non-autistic men play Warhammer and MtG [citation needed]. I would expect a far higher proportion of cis women who play Warhammer/MtG to be autistic. So if our toy autism scale only puts a small amount of weight on this variable it will miss autism in women.
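The toy example can be made concrete with a sketch (all weights, traits, and thresholds invented for illustration): a checklist calibrated on men gives little weight to tabletop gaming, so a woman whose canonical symptoms present more mildly slips under the cutoff, while a recalibrated scale that up-weights the trait for women catches her.

```python
# Toy screening scale (invented numbers): sum the weights of whichever
# traits are present and flag if the total crosses a threshold.

def screen(traits, weights, threshold):
    """Return True (flagged) if the weighted symptom score meets the threshold."""
    score = sum(w for trait, w in weights.items() if traits.get(trait, 0))
    return score >= threshold

# Weights calibrated on a male sample: lots of non-autistic men play
# Warhammer/MtG, so tabletop gaming earns almost no weight there.
male_calibrated = {"social_difficulty": 3, "rigid_routines": 3, "tabletop_gaming": 1}

# A hypothetical autistic woman: canonical symptoms present too mildly to
# register item-wise, but she plays Warhammer/MtG.
autistic_woman = {"social_difficulty": 0, "rigid_routines": 1, "tabletop_gaming": 1}

print(screen(autistic_woman, male_calibrated, threshold=6))   # False -- missed

# Recalibrated for women, where the trait is far more diagnostic:
female_calibrated = {"social_difficulty": 3, "rigid_routines": 3, "tabletop_gaming": 4}
print(screen(autistic_woman, female_calibrated, threshold=6)) # True -- caught
```

The same symptom profile flips from a miss to a hit purely because the weights change, which is the sense in which a male-calibrated scale is "relatively insensitive for women".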

Given the already high rates of data fabrication inside but especially outside the West, I’d assign very little weight to any data from a paper where the authors, reviewers, and editors don’t even check for howlers like the ones quoted.

More broadly, speaking from the sausage factory floor, I can say that the trend in high-level publishing in the humanities increasingly seems to be towards special issues/special series where all papers are by invitation or commissioned. This creates some problems (harder for outsiders to break in, easier for ideologue editors to maintain a party line), but in general seems like an acceptable stopgap measure for wordcel fields to cover the next 5-10 year interregnum where LLM outputs are good enough to make open submission impossible, but not quite good enough to replace the best human scholars.

Fascinating; I seem to see quite a lot of small Ford Focuses, Fiestas, and Mondeos here in the UK.

Even ‘right-wing’ sci-fi has this motif. The heroes in ‘Starship Troopers’ are two White men leading the multicultural coalition of Earth against the brutalistic ‘bugs’

In the Heinlein novel, Johnny Rico is explicitly stated to have a Filipino background.

100% agree on all points. Not clear whether Google will be able to adapt AdWords for LLMs but at least they have a chance if they’re the ones leading the revolution.

And also completely agree about the changing shape of LLMs. They’ll just become a mostly invisible layer in operating systems that eg handles queries and parlays users’ vague requests (“show me some funny videos”) into specific personalised API calls.

Just some quick thoughts on the future of the internet. In short, I expect the way we use the web and social media to change quite dramatically over the next 3-5 years as a result of the growing sophistication of AI assistants combined with a new deluge of AI spam, agitprop, and clickbait content hitting the big socials. Specifically, I’d guess most people will have an AI assistant fielding user queries via API calls to Reddit, TikTok, Twitter, etc. and creating a personalised stream of content that filters out ads, spam, phishing, and (depending on users’ tastes) clickbait and AI-generated lust-provoking images. The result will be a little bit like an old RSS feed but mostly selected on their behalf rather than by them directly, and obviously packed with multimedia and social content. As the big social networks start to make progressively more of their money from API charges from AI assistant apps and have fewer high-value native users, they’ll have less incentive to control for spambots locally, which will create a feedback loop that makes the sites basically uninhabitable without AI curation.
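The curation layer described above can be sketched in a few lines (every field name, threshold, and preference key here is hypothetical, not any real platform's API): the assistant pulls raw items from several platforms, drops ads and spam unconditionally, drops clickbait only if the user opted out, and ranks the survivors by per-source preference.

```python
# Minimal sketch of an AI-assistant feed curator under invented assumptions.
from dataclasses import dataclass

@dataclass
class Item:
    source: str        # e.g. "reddit", "tiktok" -- whichever API it came from
    text: str
    is_ad: bool
    spam_score: float  # 0..1, from the assistant's own local classifier
    clickbait: bool

def curate(items, prefs):
    """Filter then rank raw items into a personalised feed."""
    keep = [
        it for it in items
        if not it.is_ad
        and it.spam_score < prefs["spam_threshold"]
        and (prefs["allow_clickbait"] or not it.clickbait)
    ]
    # Rank whatever survives by the user's per-source preference weights.
    return sorted(keep,
                  key=lambda it: prefs["source_weight"].get(it.source, 0),
                  reverse=True)
```

The point of the sketch is that the filtering happens client-side, after the API call: the platform still serves the ads and spam, the user just never sees them, which is exactly what erodes the platform's incentive to police spambots itself.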

One result of this is that Google is kind of screwed, because these days people use it mainly for navigation rather than exploratory search (eg you use it to search Reddit, Twitter, or Wikipedia, or find your way to previously-visited articles or websites when you can’t remember the exact URL). But AI assistants will handle navigation and site-specific queries, and even exploratory search will be behind the scenes, meaning Google Ads will get progressively less and less exposure to human eyeballs. This is why they urgently need to make Gemini a success, because their current business model won’t exist in the medium-term.

All of this feels incredibly predictable to me given the combination of AI assistants and spambots getting much better, but I'm curious what others think, and also what the consequences of this new internet landscape will be for society and politics.

Behind the scenes Google is having a bit of an identity crisis. The DEI radicalism is there, but in the last 12 months there’s apparently been a big shift away from creative research towards short-term deliverables (source: two very good friends there).

But if you’re going to be highly ideologically constrained yet also extremely focused on bottom lines and rushing products out the door, who wants to work for you? The brilliant hippies and autistic weirdos will go work somewhere they’re not chasing deadlines. The ruthlessly efficient pragmatists will go somewhere they can make sick bank without having to tithe to the DEI god.

Of course, you’ll still have plenty of mediocre middle managers, but you’ll be alienating a good chunk of the top talent who can choose where to go. And they’re the ones who can reliably deliver the big new ideas.

I don’t think it’s an insuperable problem. A difficult one to be sure, but academic incentive structures are a lot more mutable than a bunch of other social problems if you have the political will. There’s also the fact that the current blind review journal-based publishing system is on borrowed time thanks to advances in LLMs, so we’ll need to do a fair amount of innovating/rebuilding in the next decade anyway.

Another problem is that there are more scientists than plausible paths of scientific enquiry.

Philip Kitcher has some useful insights here on the division of epistemic labour in science. In short, it's not always ideal to have scientists pursuing just the most plausible hypotheses. Instead, we should allocate epistemic labour in proportion to something like expected utility, such that low-probability high-impact hypotheses get their due. Unfortunately, this can be a hard sell to many researchers given the current incentive structures. Do you want to spend 10 years researching a hypothesis that is almost certainly false and is going to give you null results, just for the 1% chance that it's true? In practice this means that science probably skews too much towards epistemic conservatism, with outlier hypotheses often being explored only by well-funded and established eccentric researchers (example: Avi Loeb is one of the very few mainstream academics exploring extraterrestrial intelligence hypotheses, and he gets a ton of crap for it).
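The contrast between the two allocation rules is easy to show with toy numbers (all probabilities and impact figures invented): allocating 100 researchers by raw plausibility starves the long shot, while allocating by expected utility (probability times impact) gives it a serious share.

```python
# Toy comparison of two ways to divide 100 researchers across hypotheses.
hypotheses = {
    "safe_incremental": {"p": 0.60, "impact": 1},
    "moderate":         {"p": 0.25, "impact": 4},
    "long_shot":        {"p": 0.01, "impact": 120},
}

def allocate(scores, n=100):
    """Split n researchers in proportion to each hypothesis's score."""
    total = sum(scores.values())
    return {h: round(n * s / total) for h, s in scores.items()}

# Rule 1: chase plausibility alone.
by_plausibility = allocate({h: v["p"] for h, v in hypotheses.items()})

# Rule 2: Kitcher-style, weight plausibility by impact.
by_expected_utility = allocate({h: v["p"] * v["impact"] for h, v in hypotheses.items()})

print(by_plausibility)      # long_shot gets ~1 researcher
print(by_expected_utility)  # long_shot gets ~43 researchers
```

Under these invented numbers the long shot goes from 1 researcher to 43, which is the Kitcher point in miniature: the community-level optimum can assign a lot of labour to hypotheses no individually rational careerist would touch.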

There are also of course some fields (maybe social psychology, neuroscience, and pharmacology as examples) where the incentives stack up differently, often because it's easy to massage data or methodology to guarantee positive results. This means that researchers go for whatever looks bold and exciting and shiny because they know they'll be able to manufacture some eye-catching results, whereas a better division of epistemic labour would have them doing more prosaic but valuable work testing and pruning existing paradigms and identifying plausible mechanisms where they exist (cue "it ain't much but it's honest work" meme).

All of which is to say, I think there's plenty of work to go around in the sciences, enough to absorb all the researchers we have and more, but right now that labour is allocated highly inefficiently/suboptimally.

Not an expert on this by any means but I have seen some encouraging results on in vivo (as opposed to in utero or in vitro) gene editing. Here's a sample paper discussing the state of the field. There's also a further question whether in vivo gene editing for intelligence would produce the kind of behavioural impacts we care about; as far as I know, that's an open question.

Just FWIW as someone engaged in academic work on these issues, I broadly agree with your take. That said, two quick points of disagreement -

(1) Even supposedly friendly personalisation can be dangerous. Really effective personalised advertising can boost consumption, but if you're anything like me, you should probably be consuming less. You're like a dieter walking through a buffet restaurant filled with dishes perfectly targeted to your palate. By controlling the data held on you by third parties, you can limit how appealing the menu they offer you is. Now, of course, sometimes it will be your cheat day and you can eat to your heart's content, and having an amazing menu offered to you is positively desirable. But most of the time, having this personalised menu is going to be bad for your ability to achieve your reflectively-endorsed goals. Data privacy is one way to protect yourself from having your own most voracious instincts exploited.

(2) Privacy concerns don't seem to me to be male-coded. If anything, more of my female students are very worried about it. More than anything else, I'd say it skews continental European; Germans above anyone else seem obsessed with it. Brits are radically unconcerned about it.

Couldn’t Abbott announce that state law enforcement would prevent federal agents from making arrests of guardsmen in that case? Obviously it would be an escalation but seems like there’s a whole ladder here with progressively more extreme rungs for both players.

When writing formal letters in Japanese, there are a variety of extra steps you have to do above and beyond fancy salutations and signoffs, including - my favourite - the seasonal observations beginning the letter (e.g., in August you could say "The oppressive heat continues to linger") and closing it ("please give my regards to everyone"). These are so stereotyped that I think most recipients of letters regard them more as a structural element of the composition than a semantic one, just as in English we don't really think of the virtue of sincerity when reading "Yours Sincerely".

I think this is basically what LLMs will do to writing, at least on the 5-10 year time scale. Everything will be written by LLMs and interpreted and summarised by LLMs, and there will be a whole SEO-style set of best practices to ensure your messages get interpreted in the right way. This might even mean that sometimes when we inspect the actual first-order content of compositions created by LLMs, there are elements we find bizarre or nonsensical, that are there for the AI readers rather than the human ones.

To get back to your point, I absolutely think this is going to happen to bureaucracy in academia and beyond, and I think it's a wonderful thing, a process to be cherished. Right now, the bureaucratic class in education, government, and elsewhere exert a strongly negative influence on productivity, and they have absolutely no incentives to trim down red tape to put themselves out of jobs or reduce the amount of power they hold. This bureaucratic class is at the heart of cost disease, and I'm not exaggerating when I say that their continued unchecked growth is a civilisation-level threat to us.

In this regard, LLMs are absolutely wonderful. They allow anyone with limited training to meet bureaucratic standards with minimal effort. Better still, they can bloviate at such length that the bureaucracy will be forced to rely on LLMs to decode them, as noted above, so they lose most of the advantage that comes with being able to speak bureaucratese better than honest productive citizens. "God created men, ChatGPT made them equal."

If you're worried that this will lead to lax academic standards or shoddy research practices, I'd reassure you that academic standards have never been laxer and shoddy research is absolutely everywhere, and the existence of review boards and similar apparatchik-filled bodies does nothing to curb these. If anything, by preventing basic research being done by anything except those with insider connections and a taste for bureaucracy, they make the problem worse. Similarly, academia is decreasingly valuable for delivering basic research; the incentive structures have been too rotten for too long, and almost no-one produces content with actual value.

I'm actually quite excited about what LLMs mean in this regard. As we get closer to the point where LLMs can spontaneously generate 5000-10000 word pieces that make plodding but cogent arguments and engage meticulously with the existing literature, huge swathes of the academic journal industry will simply be unable to survive the epistemic anarchy of receiving vast numbers of such submissions, with no way to tell the AI-generated ones from the human ones. And in the softer social sciences, LLMs will make the harder bits - i.e., the statistics - much easier and more accessible. I imagine the vast majority of PhD theses that get completed in these fields in 2024 will make extensive use of ChatGPT.

All of these changes will force creative destruction on academia in ways that will be beautiful and painful to watch but will ultimately be constructive. This will force us to think afresh about what on earth Philosophy and History and Sociology departments are all for, and how we measure their success. We'll have to build new institutions that are designed to be ecologically compatible with LLMs and an endless sea of mediocre but passable content. Meanwhile I expect harder fields like biomed and material sciences to (continue to) be supercharged by the capabilities of ML, with the comparative ineffectiveness of institutional research being shown up by insights from DeepMind et al. We have so, so much to look forward to.

Nice! Note that it’s iecit rather than iacuit, and I feel like Latin wouldn’t do two coordinate clauses joined with a conjunction. Maybe a participle phrase, eg Abbotus numquam fideliter credens aleam iecit.

Before I even clicked, I knew this would either be NileRed or Action Lab 🤣

My understanding is also that any airline that was perceived as doing anything other than maximally cooperating with immigration authorities in a given country would probably be denied landing slots in future.

The UK is particularly bad here. At this point I’m no longer shocked by how much American friends make compared to British friends in similar jobs, often 2-3 times as much.

I’ve heard this expressed pithily as “one of the worst things about being poor is having to live alongside other poor people.” It sounds cruel and god knows there are plenty of rich arseholes but my friends from genuinely deprived backgrounds seem to have to endure a huge amount of interpersonal drama with family, from people needing to be bailed out (literally) to female relatives or friends needing help after getting beaten up by abusive boyfriends or spouses or friends stealing from them. Even if not all poor people have high time preferences and low willpower, the large majority of people with high time preferences and low willpower end up poor, and make life miserable for others in their community trying to escape.

Yes, well put. I don’t think the “woke establishment” has a good play here insofar as large swathes of the vanguard progressive movement are actually anti-Semitic by normie standards, while large swathes of the journalistic, financial, and political leadership of the movement are themselves Jewish and many of them feel betrayed by the wider left in the wake of October 7th.

I see two main possible outcomes. Either the leadership reins in the vanguard and has an anti-semitism purge as per Starmer in the UK. The effect of this would be disillusionment in the vanguard and a sense of betrayal. Many of the most passionate and/or psychotic progressives will splinter off. Alternatively, if the leadership is too weak to rein in the vanguard, then a lot of powerful Jewish Americans will splinter from the woke fringe (a la Luciana Berger in the UK), probably mostly flocking to centrist Democrat spaces.

Either way, it’s not a fight that can be brushed under the rug.

I'm coming late to this fantastic post, and most things worth saying have been said, but one issue no-one's tackled: how will AI affect all this? That might sound tenuous but I think it's potentially significant. We're on the cusp of -

  • Vastly more accessible/effective homeschooling and self-education via AI tutors
  • Massive skill equalisation for low- and mid-level white collar work
  • Likely evisceration of large parts of the Blue Tribe base
  • Easy creation of reasonably smart AI media/propaganda bots
  • Emergence of new more salient axes of disagreement splitting society down the middle (e.g. pro-tech/anti-tech)

Maybe a silly question, but given that Canada is a massive country whose population is concentrated in a few urban areas, why aren’t there more initiatives to build new cities and associated infrastructure, with migration plans explicitly focused on bringing migrants to the new cities rather than existing overcrowded urban areas?

Extremely reassuring 😄

Honestly some of the reactions here make me feel we’ve drifted away from the high-decoupling crowd we used to be, closer to normie conservatism. Pray god some of these people never get into a moral philosophy class or their heads will explode. “Why are you even thinking about pushing fat men off bridges? Are you some kind of sicko?”

Top or bottom?

FWIW I like your answer a lot and I don’t think preventing violence against Israel would be unattainable for a Gazan leader with a strong enough power base. I’m thinking here of Kadyrov in Chechnya. You’d want to start by finding a smart powerful and mercenary figure within Hamas. Give them enough money to build up their power base, bribe minor players, have major rivals killed. Give them weapons and allow them to build a Praetorian Guard of elite Hamas fighters who live like kings and get all the chicks. Develop very strict internal messaging norms around Israel and violence — general calls for a unified Palestine one day are fine, but no direct exhortations to violence. Make it so that anyone who fucks with you ends up dead, and anyone who works with you gets money and women.

This shouldn’t be politically impossible. Everyone is responsive to multiple social incentives and these in turn can be influenced with money. It will just take a lot of time, money, and finding the right people.