astrolabia

0 followers   follows 0 users   joined 2022 September 05 01:46:57 UTC

User ID: 353
No bio...

I agree that this was a stupid troll. I thought Hanania was better than that, but I guess it was just a lapse of judgement.

I agree that the resilience of labor-force participation in the face of the industrial revolution is a strong argument that there'll always be jobs. And I agree that the government will be able to mandate more bullshit jobs. But I also think there are currently a large fraction of non-bullshit jobs (e.g. truck drivers, nurses, policemen), and that our civilization could easily be a lot shittier if they were automated away. I'm imagining something like Saudi Arabia, where everyone just plays politics to get the cushiest BS job.

until the public reaction is so bad it demands a crackdown

Do you think this is realistic? Why hasn't a crackdown been demanded in L.A., even though it's apparently much worse than the TTC already?

PPC-style actual law-and-order conservatism is still completely verboten amongst all of my Canadian friends and colleagues.

I agree that there is bizarrely little focus on the possibility of our current institutions simply becoming worse, more powerful, and more totalizing versions of themselves. That said, Andrew Critch and Paul Christiano have written detailed doom scenarios that look something like this, e.g. https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like

I think I agree with your analysis. I also would love to hear if you have any suggestions for a better value-aggregation method. I suspect Jan doesn't love his own proposal all that much, but realizes that AIs are going to become the de facto arbiters of human value to some extent, and is trying to propose something that's feasible and that people might actually agree to.

I'd also be sympathetic to the idea that this whole class of approaches is horrible, and that we should be aiming for more monarchistic (i.e. OpenAI's leadership decides) or libertarian (we try to avoid having to agree to values as much as possible) approaches. But I'm coming around to the idea that we have to propose something and start trialing and debugging it asap, precisely because we fear it'll end up being a horrible amalgamation of special interest lobbying and social desirability bias even worse than our current democratic institutions.

How many kids do you have? What is the sticking point for more kids? An older nanny might be able to help you with some informal child-rearing lessons.

I agree it sounds harsh, but you're the one that said

Why even bother with bio-souls?

If I was planning to spend less time with my friends in favor of an AI, it might be a dickish thing to be honest about, but saying why would at least give them a chance to fix the problem.

Good point that we already are the hyper-socially-competent layer mediating between unpleasant chimp brains.

Maybe I need to tell my friends that they had better step up their focus, social graces, and diction if they want to compete with ChatGPT.

To me, it primarily means "a young person really into theatre". For example, I think the OP meant this sense of the word above when they were describing themselves.

Are you trying to redefine "theatre kid" to be purely pejorative? Please don't.

I'm right there with you. The big question in my mind is: will socializing with hyper-socially-intelligent AIs make people more or less socially retarded when interacting with other humans? I can see it going either way. Maybe it won't matter much - our future human friendships and marriages (if any) might simply explicitly be mediated by AI, and perhaps be better for it.

That sounds like a nice prompt for an "odd couple" comedy - an old married couple whose AI intermediary / life coach breaks down, and they're forced to interact bare-brained, so to speak, for the first time.

How is everyone being replaced not scary to you?

I mean, our brains (presumably) experience qualia under some circumstances and not under others, e.g. deep sleep or comas, even though it's still the "exact same general purpose computing hardware".

Yeah, it's hilarious and sad that luminaries like Yann LeCun are being so dismissive, above and beyond standard counter-signalling. Although I've also kept my mouth shut about this on Twitter, since I'd sound like a basic bitch if I said "Yes, this is exciting!", although I do say that in person.

Perhaps part of it can be explained by Yann not having lived through the Great Update that most people in ML did when deep learning was unambiguously vindicated around 2013-2016. The rest of us got to learn what it feels like to be unnecessarily dismissive, and maybe learned some humility from that. But Yann was mostly right all along, so maybe he never had to :)

Oh, yes, I totally agree that fine-tuning gives them worse predictive likelihood. I had thought you were implying that the main source of their abilities wasn't next-token prediction, but now I see that you're just saying that they're not only trained that way anymore, which I agree with.

No, they don't merely predict the next token

I'm pretty sure this is still how they all work. Predicting the next token is both very hard and very useful to do well in all circumstances!

EDIT: Now that I think about it, I guess with RLHF and other fine-tuning, it'd be fair to say that they aren't "merely" predicting the next token. But I maintain that there's nothing "mere" about that ability.
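For what it's worth, the "predict the next token" loop is mechanically simple even though doing it well is very hard. Here's a minimal toy sketch with a character-level bigram model; the names `train_bigram` and `generate` are mine, and this is nothing like a real transformer, just an illustration of the autoregressive sampling loop:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    # Count, for each character, which characters follow it.
    model = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        model[a][b] += 1
    return model

def generate(model, seed, n):
    # Autoregressive loop: repeatedly append the most likely next character.
    out = seed
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break  # never saw this character mid-text; nothing to predict
        out += followers.most_common(1)[0][0]
    return out

model = train_bigram("abab")
generate(model, "a", 3)  # -> "abab"
```

The fine-tuned models change *which* continuation is preferred, but the generation loop itself is still this: condition on everything so far, emit one token, repeat.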

As far as I understand, this happened when George Mason University decided to hire a bunch of overlooked straight white male libertarian economists, and got a powerhouse in the form of a single department at an otherwise unknown school, with Robin Hanson, Bryan Caplan, and Tyler Cowen.

You also see extreme "over-representation" in any new, unregulated area of economic growth. E.g.

  1. The founding teams of most tech startups and the first few rounds of technical employees

  2. Cryptocurrency

  3. E-sports

  4. Tech venture capital (e.g. Y Combinator)

  5. AI and AI safety

  6. Effective altruism

It takes time for the problematizers to notice a new power center and bring the eye of sauron to bear. But this is becoming quicker and more predictable, so first movers are pre-emptively playing the optics game with more effort and finesse. I just worry that someday there won't be any new growth centers to move to.

I think we might agree? I'm saying that it's rent control and public / subsidized housing that make the difference. In my vague understanding, Atlanta, Dallas, and Phoenix have much less of this than old metros like NYC.

My guess is that rent control and public / subsidized housing in places like NYC make much more visible the de facto, mostly organic segregation that would normally keep groups separate and mostly invisible to one another. That is to say (never having lived there), I don't think NYC is actually much more segregated; it just looks that way because people can see the other side.

As a toy example, imagine a city laid out in a line from richest to poorest. Everyone would probably have a lot in common with their neighbours, and the racial mix would mostly change slowly across space. Now if we wrap that line into a circle, suddenly this city looks super racially segregated and inhumane where the ends meet! Anyways just a hypothesis.

Yes, I agree with all 4 points. I think you're also right that HlynkaCG agrees with me on these points.

I think what happened here is that HlynkaCG saw me defend discussion of the possibility that there might be group differences in behavior (possibly due to poverty, or whatever the palatable explanation of the month is, I didn't say), saw this (correctly) as allowing more avenues for arguments in favor of HBD, and became mind-killed.

I don't get to talk with many social scientists, but the two I've talked to about these things were so appalled by the mere suggestion that I quickly shut up. For example, a Bayesian ecologist told me that his prior on there being differences in behavior driving differential arrest rates was 0 (I'm not even sure what that's supposed to mean). A mathematician who said epigenetic trauma was an explanation for poor black outcomes astoundingly also suggested that Jews' excellent outcomes after the Holocaust were due to epigenetic trauma. Like, that hypothesis wouldn't have even occurred to me in a million years.

The behavior I've seen is consistent with people sensing that they are discussing something sacred and not to be questioned. I've made my peace with this - except when it comes up in relation to policy discussions. In those cases, I wish we had some galaxy-brained norm about separation of church and state that we could invoke. In fact, that might be a great contribution to defusing the culture wars - some version of "Render unto the racists..."

I'm sorry, I never raised the issue of genetics, I was only talking about group differences in behavior in general. I also heartily agree that the IRS could easily be following perverse incentives. I have no idea what you mean about Hegelian dynamics here, nor how individual character + agency precludes discussions of average differences in behavior between groups, which the article raised as a possibility.

I would really love it if you'd read my first reply again - I wasn't claiming that group differences explain anything here. I was saying that it's astounding the variety of behaviours people will display if prompted to acknowledge, in principle, the possible existence of average group differences (genetic or otherwise). Do you think such differences are possible?

Hmmm, I'm not sure I understand your point. To be uncharitable, this looks like exactly the sort of creative misdirection I was talking about. The NYT dismisses the possibility of different amounts of tax fraud between races for any reason. Whether or not it's genetic, or whether other factors might be more important, are separate questions, and are secondary to the question of whether the fraud detection algorithms are biased. Again, I'm saying that even acknowledging group average differences in behavior as a possible explanation for group average differences in outcome is already less mind-killed than most of my interlocutors.

Since I have you here, what do you mean when you say that a group-level difference could "outweigh" individual level variation? They're just two levels of variation, and nothing changes if one is bigger than the other - they're both still there.
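To make that last point concrete, here's a toy numeric sketch (entirely made-up numbers, not data about any real groups): the within-group spread can be larger than the between-group gap, and yet both levels of variation remain measurable.

```python
import statistics

# Hypothetical toy numbers: a small between-group mean difference
# sitting on top of much larger individual (within-group) spread.
group_a = [10, 14, 18, 22]  # mean 16
group_b = [12, 16, 20, 24]  # mean 18

between = statistics.mean(group_b) - statistics.mean(group_a)  # 2.0
within = statistics.stdev(group_a)                             # ~5.16

# Here the within-group spread "outweighs" the between-group gap,
# but that doesn't make either level of variation go away.
```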

I wouldn't even call it "mind-killing", given the impressive mental gymnastics required to avoid ever even considering the idea that there could be meaningful group differences. The bizarre hypotheses, type errors, and misdirections that my friends and colleagues come up with when I ask whether there could even in principle be a difference in group averages are a constant source of surprising creativity in my life.

The fact that the NYT article even mentions the possibility (to immediately dismiss it) already puts it in the top tier of clear thinking on the issue in my experience.

Regarding fusion in particular, I'm not an expert but I'm pretty sure that the prospects for fusion being much of an improvement on fission are very low with most current designs (because they'll still have similarly high capex costs), but this is almost never mentioned in articles about fusion power.