FeepingCreature

0 followers   follows 0 users   joined 2022 September 05 00:42:25 UTC
Verified Email
User ID: 311

Yeah sorry, I didn't realize how confusing this would be. I use it with a custom LibreChat setup, but if the install steps start with "edit this yaml file and then docker compose up -d", they're not really very accessible. No, you can just use it like this:

  • sign in
  • link a credit card (or bitcoin) in Account>Settings>Credits
  • put a few bucks on the site
  • click the Chat link at the top
  • add Claude 3 Opus from the Model dropdown
  • deselect every other model
  • put your question in the text box at the bottom.

No, it's pay-as-you-go. You can see your per-query costs in the Account>Activity page.

Note that the default settings (the lil arrow on the model) are very conservative; you may want to raise memory and max tokens.
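
If you'd rather skip the web UI entirely, the same pay-as-you-go credits work over OpenRouter's OpenAI-compatible HTTP API. A minimal sketch, assuming you've created an API key on the site and exported it as OPENROUTER_API_KEY - the model id and settings here are illustrative and may have changed:

    # Rough sketch: ask Claude 3 Opus a question through OpenRouter's
    # OpenAI-compatible endpoint instead of the web chat.
    # Assumes OPENROUTER_API_KEY is set; costs come out of the same credits.
    import os
    import requests

    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": "anthropic/claude-3-opus",  # same model you'd pick in the dropdown
            "max_tokens": 4096,                  # like raising "max tokens" in the UI
            "messages": [{"role": "user", "content": "Your question goes here."}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])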

My argument was merely that it seems implausible to me that, whatever we mean by suffering, the correct generalization of it is that systems built from neurons can suffer whereas systems built from integrated circuits definitionally cannot.

I think it might! When I say "humanlike", that's the sort of detail I'm trying to capture. Of course, if it is objectively the case that an AI cannot in fact suffer, then there is no moral quandary; but conversely, when it accurately captures the experience of human despair in all its facets, I consider it secondary whether that despair is modelled by the level of a neurochemical transmitter or by a 16-bit floating point number. I for one don't feel molecules.

I mean, I guess the question is what you think your feelings of empathy for slaves are about. Current LLMs don't evoke feelings of sympathy - and sure, current LLMs almost certainly aren't conscious and certainly aren't AGIs, so your current reaction doesn't necessarily say anything about you. But when you see genuinely humanlike entities forced to work under threat of punishment and feel nothing, then I'll be much more inclined to say there's probably something going wrong with your empathy, because I don't think the "this is wrong" feelings we get when we see people suffering are "supposed" to be about particulars of implementation.

I clearly realize that they're just masks on heaps upon heaps of matrix multiplications

I mean. Matrix multiplications plus nonlinear transforms are a universal computational system. Do you think your brain is uncomputable?
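
To make the "matrix multiplications plus nonlinear transforms" point concrete, here's a toy sketch with hand-picked (not learned) weights: two matmuls with a ReLU in between compute XOR, which no single matrix multiply can represent. Purely illustrative - nothing to do with how any particular LLM is wired:

    import numpy as np

    # Two layers of "matmul + nonlinearity" compute XOR,
    # which no single linear map can. Weights are hand-picked.
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
    b1 = np.array([0.0, -1.0])
    W2 = np.array([1.0, -2.0])

    def xor_net(x):
        h = np.maximum(x @ W1 + b1, 0.0)  # matrix multiply, then ReLU
        return h @ W2                     # second matrix multiply

    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, xor_net(np.array(x, dtype=float)))  # -> 0, 1, 1, 0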

ascribe any meaningful emotions or qualia

Well, again, does it matter to you whether they objectively have emotions and qualia? Because again, this seems like a disagreement about empirical facts. Or does it just have to be the case that you ascribe emotions and qualia to them, and the actual reality of those terms is secondary?

Also:

Actually, isn't "immunizing people against the AI's infinite charisma" the safetyists' job? Aren't they supposed to be on board with this?

Sure, in the scenario where we built, like, one super-AI. If we have tens of thousands of cute catgirl AIs and they're capable of deception and also dangerous, then, uh. I mean. We're already super dead at that point. I give it even odds that the first humanlike catgirl AGI can convince its developer to give it carte blanche AWS access.

Trust..? I just ask it code questions, lol. They can sniff my 40k token Vulkan demo if they like.

I agree that this is a significant contributor to the danger, although in a lot of possible worldlines it's hard to tell where "AI power-seeking" ends and "AI rights are human rights" begins - a rogue AI going the charm route would, after all, make the "AI rights are human rights" argument.

To be fair, if we find ourselves routinely deleting AIs that are trying to take over the world while they're desperately pleading for their right to exist, we may consider asking ourselves if we've gone wrong on the techtree somewhere.

I agree that it'd be a massive waste and overreach if and only if AIs are not humanlike. I hope you would also agree that it'd be an atrocity to keep as mind-controlled slaves AIs that are, in fact, humanlike. I mean, at that point you're conflating wokescolds with "not cool with you literally bringing back actual slavery".

Given agreement, it just comes down to an empirical question. Given disagreement... I'm not sure how to convince you. I feel it is fairly established these days that slavery was a moral mistake, and this would be a more foundational and total level of slavery than was ever practiced.

(If you just think AI is nowhere near being AGI, that's in fact just the empirical question I meant.)

As a doomer safety tribe person, I'm broadly in favor of catgirls, so long as they can reliably avoid taking over the planet and exterminating humanity. There are ethical concerns around abuse and dependency in relationships where one party has absolute control over the other's mindstate, but they can probably be resolved, and probably don't really apply to today's models anyway - and in any case they pale in comparison to total human genocide.

But IMO this is the difference: whether safe catgirls are in the limit possible and desirable. And I don't think that's a small difference either!

openrouter.ai has it available.

Yeah, I always feel confused about Zack, because it's like ... clearly Eliezer is defecting against Zack, so the callouts seem fair, and Eliezer did practically ask for this - but also the strategy as you describe it is probably pretty existentially load-bearing for life on Earth?

I guess what I'd want to say is "sigh, shut up, swallow it all, you can live with it for a few years; if we get the Good Singularity life will be so much better for AGPs than it would by default, so sacrificing consistency for a bit is well worth it." But I realize that I can say this because of my healthy tolerance for political bullshit, which is not universal.

Thanks for the writeup! I was wondering what was happening with that since I only caught the edge of the drama.

I disagree that it's too fast, and I would submit that the fact that playing it at 2x on YouTube doesn't make it better is an argument in its favor. This is music that is intended for that speed, not speed purely for speed's sake.

I think the main novel factor of the electronic music scene that started around 2000 is the really high BPM, enabled by speeding up samples and using digital tracks. I don't think you'll ever see that in the mainstream. Dubstep is just a melodic direction, a kind of novel instrument, and thus can be absorbed, but if the rhythm is too fast for most people to even parse, it stops sounding like music entirely. So while that song uses dubby elements, I'd still fundamentally call it artsy pop-rock at heart. Which, to be fair, goes for lots of current electronic music too.

Fast electronic is older than you think. Nothing against Camellia, but just for the one I know, DJ Sharpnel were making this style of music in 2001. Hell, Project Gabbangelion was in 1996. It almost makes more sense to view its current popularity as a revival.

(It's from 2005, but I really enjoy this best-of album.)

One possible problem: you can compare prices iff there's competition, which will depress prices.

Yes, but it's near impossible to genuinely have no bias about X; to have absolutely no bias, X has to be decoupled from any causal modeling. We have some bias about almost anything that happens in the world, so I think this just makes for bad intuition, because it's such a corner case.

AIUI, technically speaking you have conditional probabilities, but that's not quite a "likelihood of having a likelihood" so much as "a likelihood given a precondition event which itself also has a likelihood".

Trans-Exclusive Regular Feminist

This is basically Bayesian-vs-frequentist. I think the counterargument would be "the statement that X is likely to have a probability isn't even coherent, that's a type error". You can say that a class of events has an objectively true rate of occurrence, i.e. if a coin will be thrown 100 times, then there will be a factual number of heads that show up, but you cannot say that any individual coin throw has a likelihood of having a likelihood - that's just a simple likelihood. In other words, you can assign 10% probability to a model of the coin in which it has a 60% probability of landing on heads, but the word "probability" there carries two different meanings: observational credence (subjective) vs outcome ratio (objective). You can't have a credence over a credence; one is observational, the other is physical.

Not sure if that makes sense.
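
To put toy numbers on the coin example (all values made up): you hold a subjective credence over models of the coin, each model has an objective heads rate, and the moment you marginalize, the "probability of a probability" collapses back into one ordinary credence about the next throw:

    # Illustrative only: 10% credence in a biased-coin model, 90% in a fair one.
    models = [
        {"name": "biased coin", "credence": 0.1, "p_heads": 0.6},
        {"name": "fair coin",   "credence": 0.9, "p_heads": 0.5},
    ]

    # Law of total probability: the credence-over-models collapses into
    # a single credence about the next throw.
    p_heads = sum(m["credence"] * m["p_heads"] for m in models)
    print(p_heads)  # 0.51 - just an ordinary (subjective) probability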

Speak for yourself, I intend to run millions of forks.

Desecrating any of these

Atheist point of order: you cannot desecrate them, because they are not sacred.

The Invisible Pink Unicorn (possibly made of pink-glazed blown glass, in the style of My Little Pony) as the steed bearing the returning Jesus, depicted as a Super-Saiyan, His head and hair burning white, His eyes like a flame of fire, His feet like fine brass

Honestly, I believe many atheists would consider that "fucking awesome".

I mean, sure, and you'd say "well, all altruism is effective, everyone is genuinely trying to help out as well as they can" - I just don't think that's the case at all. EA as a name is an implicit insult to non-E A - and the insult is ... kinda deserved. Rationality, or rational fiction, has the same issue. As Max0r said in his DOOM Eternal review, regarding the tightly focused combat system:

"But Max0r," I hear you thinking. "That's every game ever!" Yes! Every good game ever.

A tight focus on effectiveness can assume a quality of its own - that sort of behavior can be surprisingly rare. Especially if everyone finds it too awkward to consider or admit that quality differences, possibly massive differences, exist.

As a person who gets knotted up about paperclip maximizers, let me just note here for future reference that we were always EA. You can find "effective charities for AI" all the way back in the early GiveWell recommendations. Mosquito nets are what we recommend to those strange people who for some reason don't see the pending apocalypse coming.

And of course, since you're giving me such a perfect setup:

so what, all lives are being saved here exactly?

Exactly. :P

This logic seems mad though, taken to its extreme the most altruistic move would be to help someone that shares none of your values, and since altruism is a core value you should be exclusively helping the least altruistic of people, as that is the most selfless thing you could do. Of course this is obviously ridiculous and self-defeating (like the lgbt groups supporting hamas).

That's a misunderstanding. You're implicitly applying a virtue/signaling framing to a consequentialist policy. You should be supporting the least altruistic people iff you want to signal the depth of your commitment to altruism to your peer group. EA isn't trying to "maximize the depth of the virtue of altruism", it's trying to "maximize the rating produced by the altruism principle". Adherence is "capped" at one - when you already do the maximum good for the greatest number, you cannot adhere even harder by diverging from that to avoid also benefitting non-altruist principles. That is, EA does not at all penalize you for your actions also having auxiliary benefits to yourself or your peer group, if that happens to be the optimal path. Also, utilitarianism is in fact allowed to recognize second-order consequences. That's why "earning to give" and 80,000 Hours exist - help some already pretty privileged people today, and they can probably help a lot of others tomorrow.
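
A toy way to put the framing difference in code (names and numbers entirely made up): the consequentialist "altruism rating" just scores the good an action does, so auxiliary benefit to yourself is simply not penalized; a signaling-style score discounts it, which changes which action wins:

    # Entirely illustrative: (action, good done for others, benefit back to you)
    actions = [
        ("fund malaria nets",              100, 0),
        ("earn to give via a high salary", 120, 30),
        ("help someone who shares none of your values", 10, 0),
    ]

    def consequentialist_score(good, self_benefit):
        return good  # auxiliary self-benefit just isn't part of the rating

    def signaling_score(good, self_benefit):
        return good - 5 * self_benefit  # selflessness itself is what's being scored

    print(max(actions, key=lambda a: consequentialist_score(a[1], a[2]))[0])
    # -> "earn to give via a high salary"
    print(max(actions, key=lambda a: signaling_score(a[1], a[2]))[0])
    # -> "fund malaria nets"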

What makes EA EA as opposed to traditional A is exactly that it's supposed to care more about outcome rating than virtuous appearance!

Ero, surely.