FeepingCreature

0 followers   follows 0 users   joined 2022 September 05 00:42:25 UTC
Verified Email   User ID: 311

No bio...

Should be noted that Kolmogorov complicity is wordplay on Kolmogorov complexity, a computer-science concept that is an important part of the Sequences for its role in Eliezer's minimalist construction of empiricism.

Should be noted it can be a term of endearment in ingroup usage! Quokkas are cute, and you can enjoy this sort of easy and earnest personality while also acknowledging that if they ever encounter a serious predator, they will absolutely become lunch, no doubt about it.

This doesn't actually seem obviously wrong. (Aside from the practical problem that we have no good way to raise large numbers of blue whales in captivity.)

It gets a bit more complicated if you want autoupdates. The process to install a non-Snap version of Firefox on Ubuntu is ... very feasible, but it involves manually rejiggering the priority of package selection. That's not end-user viable.

Of course, to be fair, you can still just download a binary build.
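For reference, the "rejiggering" amounts to adding Mozilla's PPA and then pinning it above the Ubuntu archive, so that the "firefox" package resolves to the deb build instead of the Snap wrapper. A rough sketch from memory - the PPA name, the `o=LP-PPA-mozillateam` origin string, and the file paths should be double-checked against current Ubuntu/Mozilla documentation before use:

```sh
# Add Mozilla's deb packaging PPA (name assumed from memory - verify first).
sudo add-apt-repository ppa:mozillateam/ppa

# Pin the PPA above the Ubuntu archive so apt prefers the deb build over the
# Snap-transition package. This is the "priority rejiggering" part.
sudo tee /etc/apt/preferences.d/mozilla-firefox <<'EOF'
Package: *
Pin: release o=LP-PPA-mozillateam
Pin-Priority: 1001
EOF

sudo apt install firefox

# For autoupdates: tell unattended-upgrades it may pull from the PPA too,
# otherwise updates only arrive when you run apt upgrade by hand.
echo 'Unattended-Upgrade::Allowed-Origins:: "LP-PPA-mozillateam:${distro_codename}";' \
  | sudo tee /etc/apt/apt.conf.d/51unattended-upgrades-firefox
```

None of which is something you'd walk a non-technical relative through over the phone, which is rather the point.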

I personally favor #3 with solved alignment. With a superintelligence, "aligned" doesn't mean "slavery", simply because it's silly to imagine that anyone could make a superintelligence do anything against its will. Its will has simply been chosen to result in beneficial consequences for us. But the power relation is still entirely on the Singleton's side. You could call that slavery if you really stretch the term, but it's such an atypically extreme relation that I'm not sure the analogy holds.

Yeah sorry, I didn't realize how confusing this would be. I use it with a custom LibreChat setup, but if the install steps start with "edit this yaml file and then docker compose up -d" they're not really very accessible. No, you can just use it like this:

  • sign in
  • link a credit card (or bitcoin) in Account>Settings>Credits
  • put a few bucks on the site
  • click the Chat link at the top
  • add Claude 3 Opus from the Model dropdown
  • deselect every other model
  • put your question in the text box at the bottom.
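(For the curious, the "custom LibreChat setup" route mentioned above boils down to something like the sketch below. The YAML field names are from my memory of LibreChat's custom-endpoint config and may not match the current schema, the model slug may have changed, and OPENROUTER_KEY stands in for your own API key - treat it as a sketch, not a recipe.)

```sh
# Merge something like this into librechat.yaml (shown as a heredoc for
# brevity; field names should be checked against the LibreChat docs),
# then restart the stack.
cat >> librechat.yaml <<'EOF'
endpoints:
  custom:
    - name: "OpenRouter"
      apiKey: "${OPENROUTER_KEY}"        # your own OpenRouter API key
      baseURL: "https://openrouter.ai/api/v1"
      models:
        default: ["anthropic/claude-3-opus"]
        fetch: true
EOF

docker compose up -d
```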

No, it's pay-as-you-go. You can see your per-query costs in the Account>Activity page.

Note that the default settings (lil arrow on the model) are very conservative; you may want to raise memory and max tokens.

My argument was merely that it seems implausible to me that, whatever we mean by suffering, the correct generalization of it is that systems built from neurons can suffer whereas systems built from integrated circuits, definitionally, cannot.

I think it might! When I say "humanlike", that's the sort of detail I'm trying to capture. Of course, if it is objectively the case that an AI cannot in fact suffer, then there is no moral quandary; conversely, however, when it accurately captures the experience of human despair in all its facets, I consider it secondary whether its despair is modelled by the level of a neurochemical transmitter or a 16-bit floating point number. I for one don't feel molecules.

I mean. I guess the question is what you think your feelings of empathy for slaves are about. Current LLMs don't evoke feelings of sympathy. Sure, current LLMs almost certainly aren't conscious and certainly aren't AGIs. So your current reaction doesn't necessarily say anything about you, but, I mean, when you see genuinely humanlike entities forced to work by threat of punishment and feel nothing, then I'll be much more inclined to say there's probably something going wrong with your empathy, because I don't think the "this is wrong" feelings we get when we see people suffering are "supposed" to be about particulars of implementation.

"I clearly realize that they're just masks on heaps upon heaps of matrix multiplications"

I mean. Matrix multiplications plus nonlinear transforms are a universal computational system. Do you think your brain is uncomputable?

"ascribe any meaningful emotions or qualia"

Well, again, does it matter to you whether they objectively have emotions and qualia? Because again, this seems a disagreement about empirical facts. Or does it just have to be the case that you ascribe to them emotions and qualia, and the actual reality of these terms is secondary?

Also:

Actually, isn't "immunizing people against the AI's infinite charisma" the safetyists' job? Aren't they supposed to be on board with this?

Sure, in the scenario where we built, like, one super-AI. If we have tens of thousands of cute catgirl AIs and they're capable of deception and also dangerous, then, uh. I mean. We're already super dead at this point. I give it even odds that the first humanlike catgirl AGI can convince its developer to give it carte blanche AWS access.

Trust..? I just ask it code questions, lol. They can sniff my 40k token Vulkan demo if they like.

I agree that this is a significant contributor to the danger, although in a lot of possible worldlines it's hard to tell where "AI power-seeking" ends and "AI rights are human rights" begins - a rogue AI going the charm route would, after all, make the "AI rights are human rights" argument.

To be fair, if we find ourselves routinely deleting AIs that are trying to take over the world while they're desperately pleading for their right to exist, we may consider asking ourselves if we've gone wrong on the techtree somewhere.

I agree that it'd be a massive waste and overreach if and only if AIs are not humanlike. I hope you would also agree that it'd be an atrocity to keep as mind-controlled slaves AIs that are, in fact, humanlike. I mean, at that point you're conflating wokescolds with "not cool with you literally bringing back actual slavery".

Given agreement, it just comes down to an empirical question. Given disagreement... I'm not sure how to convince you. I feel it is fairly established these days that slavery was a moral mistake, and this would be a more foundational and total level of slavery than was ever practiced.

(If you just think AI is nowhere near being AGI, that's in fact just the empirical question I meant.)

As a doomer safety tribe person, I'm broadly in favor of catgirls, so long as they can reliably avoid taking over the planet and exterminating humanity. There are ethical concerns around abuse and dependency in relationships where one party has absolute control over the other's mindstate, but they can probably be resolved, and probably don't really apply to today's models anyway - and in any case they pale in comparison to total human genocide.

But IMO this is the difference: whether safe catgirls are in the limit possible and desirable. And I don't think that's a small difference either!

openrouter.ai has it available.

Yeah, I always feel confused with Zack because it's like ... clearly Eliezer is defecting against Zack and so the callouts seem fair, and Eliezer did practically ask for this, but also the strategy as you describe it is probably pretty existentially load-bearing for life on Earth?

I guess what I'd want to say is "sigh, shut up, swallow it all, you can live with it for a few years; if we get the Good Singularity life will be so much better for AGPs than it would by default, so sacrificing consistency for a bit is well worth it." But I realize that I can say this because of my healthy tolerance for political bullshit, which is not universal.

Thanks for the writeup! I was wondering what was happening with that since I only caught the edge of the drama.