CeePlusPlusCanFightMe

Self-acceptance is bunk. Engineer that shit away.

0 followers   follows 5 users   joined 2022 September 05 17:01:33 UTC
Verified Email

User ID: 641


Higher prices aren't just about encouraging quantity supplied, they are about reining in quantity demanded.

Higher gas prices mean consumers will attempt to avoid using gas where they can, and less-productive uses of gas in industry will fall by the wayside. This is as true of gas as it is of every product. This is an important function of prices in an economy.
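The logic is mechanical enough to sketch. A toy model (all numbers invented) showing how the price move itself does the rationing:

```python
# Linear demand Qd = a - b*p against linear supply Qs = c + d*p. A negative
# supply shock (lower intercept c) raises the market-clearing price, and the
# higher price is exactly what pushes quantity demanded down.

def equilibrium(a, b, c, d):
    """Solve a - b*p = c + d*p for the clearing price and quantity."""
    p = (a - c) / (b + d)
    return p, a - b * p

p0, q0 = equilibrium(a=100, b=2.0, c=10, d=1.0)   # before the shock
p1, q1 = equilibrium(a=100, b=2.0, c=-20, d=1.0)  # supply curve shifts left

print(f"price {p0:.0f} -> {p1:.0f}, quantity demanded {q0:.0f} -> {q1:.0f}")
# price 30 -> 40, quantity demanded 40 -> 20
```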

Also worth pointing out: "Russia invaded a country so we aren't taking their gas anymore" is not a black swan, being as it is an entirely logical outcome of modern Russian warmongering. "We could hardly have foreseen Russia invading their neighbors again" is unpersuasive.

But it only makes sense for potential suppliers to build out the capacity to take advantage of a shortage if they will actually be able to profit from such speculative preparations when prices spike.

Gotcha. I appreciate this insight into the anti-EA perspective.

I feel like one core insight of cancel culture is that if you have 1000 detractors and 20000 supporters the detractors can still make your life shit in ways your supporters can't really help with (phoning your boss, doxxing you, sending rape threats, harassing organizations that have the capacity to inconvenience you, etc.)

Are there any charities to which you would endorse sending 10 percent of your income each year?

There is, I feel, a degree to which cancel culture is just... Twitter culture. Where do mobs find stuff to hate? Twitter. Where do they organize? Twitter. Where are the employers nice and easy to contact via, essentially, short-form open letter? Twitter again.

Don't get me wrong, cancel culture can still exist without Twitter, but I expect it to be a far more minor and localized phenomenon.

Anyway, this is a silver lining if shit all goes south and Twitter dies. Though for my part I still gain value from Twitter, and I'd be bummed out.

And on reflection, cancel culture is just the dark mirror of legitimate accountability-- MeToo would not have gotten off the ground without Twitter, nor would protests over various police abuses of power.

A failed Twitter would have lots of cultural consequences.

The answer seems obvious: instead of taking someone's word for or against AI x-risk being a thing, the arguments for it have to be evaluated on their merits, and you can decide for yourself whether it is something to be concerned about on that basis.

Is there, do you think, any coherent moral framework you'd endorse where you should donate to the AMF over sending money to friends and family?

I think it's more like pointing out that there's no particular reason the EA charities should have been able to spot a fraud when the fraud went unspotted by a huge number of highly motivated traders whose job is, in part, to spot that sort of thing (so that they can either avoid it or make trades based around its existence).

Fair in general, but he is a central figure in EA specifically, and arguably its founder.

Yeah, fair, I'll cop to him being the founder (or at least popularizer) of EA. Though I disclaim any obligation to defend the weird shit he says.

I think one thing that I dislike about the discourse around this is it kinda feels mostly like vibes-- "how much should EA lose status from the FTX implosion"-- with remarkably little in the way of concrete policy changes recommended even from detractors (possible exception: EA orgs sending money they received from FTX to the bankruptcy courts for allocation to victims, which, fair enough.)

On a practical level, current EA "doctrine" or whatever is that you should throw down 10% of your income to do the maximum amount of good you think you can do, which is as far as I can tell basically uncontroversial.

Or to put it another way-- suppose I accepted your position that EA as it currently stands is way too into St. Petersburging everyone off a cliff, and way too into violating deontology in the name of saving lives in the third world. Would you perceive it as a sufficient remedy for EA leaders to disavow those perspectives in favor of prosocial varieties of giving to the third world? If not, what should EAs say or do differently?

Why on earth would a deontologist object to throwing someone in prison if they're guilty of the crime and were convicted in a fair trial?

Fair enough! I suppose it depends on whether you view the morally relevant action as "imprisoning someone against their will" (bad) vs. "enforcing the law" (good? depending on whether you view the law itself as a fundamentally consequentialist instrument).

That's like saying that Christians don't actually believe that sinning is bad because even Christians occasionally sin. You can genuinely believe in moral obligations even if the obligations are so steep that (almost) no one fully discharges them.

I think the relevant distinction here is that not only do I not give away all my money, I also don't think anyone else is obligated to give away all their money. I do not acknowledge this as an action I or anyone else is obligated to perform, and I believe that view is shared by most everyone who's not Peter Singer. (Also, taking Peter Singer as the typical utilitarian seems like a poor choice; I have no particular desire to defend his utterances, and neither do most people.)

On reflection, I think that actually everyone makes moral decisions based on a system where every action has some (possibly negative) number of Deontology Points and some (possibly negative) number of Consequentialist Points, and we weight those in some way, tally them up, and if the outcome is positive we do the action.

That's why I not only would myself, but would also endorse others, stealing loaves of bread to feed my starving family. Stealing the bread? A little bad, deontology-wise. Family starving? Mega-bad, utility-wise. (You could try to rescue pure-deontology by saying that the morally-relevant action being performed is "letting your family starve" not "stealing a loaf of bread" but I would suggest that this just makes your deontology utilitarianism with extra steps.)

I can't think of any examples off the top of my head where the opposite tradeoff realistically occurs, negative utility points in exchange for positive deontology points.
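In toy-model form, with weights and point values entirely made up just to make the structure concrete:

```python
# A toy formalization of the "tally the points" model: each action gets some
# Deontology Points and some Consequentialist Points, we take a weighted sum,
# and do the action iff the total is positive. All numbers are invented.

def should_do(action, w_deon=1.0, w_util=1.0):
    score = w_deon * action["deon"] + w_util * action["util"]
    return score > 0

steal_bread = {"deon": -1, "util": +20}     # a little bad rule-wise, mega-good outcome-wise
frame_innocent = {"deon": -50, "util": +2}  # small gain, huge deontology hit

print(should_do(steal_bread))     # True: the starving family dominates
print(should_do(frame_innocent))  # False: the deontology hit dominates
```

Shifting the weights is how you slide along the deontologist-to-utilitarian spectrum.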

Sure, except for when it really matters

I mean... yeah? The lying-to-an-axe-murderer thought experiment is a staple for a reason.

Of course, utilitarians don't believe in honesty -- it's just one more principle to be fed into the fire for instrumental advantage in manufacturing ~~paperclips~~ malaria nets.

There's a bunch of argument about what utilitarianism requires, or what deontology requires, and it seems sort of obvious to me that nobody is actually a utilitarian (as evidenced by people not immediately voluntarily equalizing their wealth), or actually a deontologist (as evidenced by our willingness to do shit like nonconsensually throwing people in prison for the greater good of not living in a crime-ridden hellhole). I mean, really any specific philosophical school of thought will, in the appropriate thought experiment, result in you torturing thousands of puppies or letting the universe be vaporized or whatever. I don't think this says anything particularly deep about those specific philosophies, aside from the fact that it's apparently impossible to explicitly codify human moral intuitions but people really, really want to anyway.

That aside, in real life self-described EAs universally seem to advocate for honesty, based on the pretty obvious point that actors' ability to trust one another is key to getting almost anything done ever, and is what stops society from devolving into a Hobbesian war of all-against-all. And yeah, if you're a good enough liar that nobody ever finds out you're dishonest, I guess you don't damage that; but if you think about it for like two seconds, nobody tells material lies expecting to get caught, and the obvious way to avoid being known for dishonesty long-term is to be honest.

As for the St. Petersburg paradox thing: yeah, that's a weird viewpoint and one that seems pretty clearly false (since marginal utility per dollar declines way more slowly at a global/altruistic scale than at an individual/selfish one, but it still does decline, and the billions-of-dollars scale seems about where that would start being noticeable). But I'm not sure that's really an EA thing so much as a personal idiosyncrasy.
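To put a number on that intuition (the curve shape and the $10B knee are both invented for illustration): take a utility function that's nearly linear in dollars at small scales but eventually saturates, and check when a 51%-win double-or-nothing bet on the whole bankroll stops being worth it.

```python
# u(x) = x / (1 + x/K): approximately linear for x << K, saturating near K.
# K = $10B is an invented stand-in for "the scale where another altruistic
# dollar starts mattering noticeably less".

K = 1e10

def u(x):
    return x / (1 + x / K)

def bet_is_worth_it(bankroll, p_win=0.51):
    """Expected utility of staking everything double-or-nothing vs. holding."""
    return p_win * u(2 * bankroll) + (1 - p_win) * u(0) > u(bankroll)

for b in [1e6, 1e8, 1e10]:
    print(f"bankroll ${b:.0e}: take the bet? {bet_is_worth_it(b)}")
# take it at $1e6 and $1e8; decline at $1e10
```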

I don't think I can have an educated opinion on whether the opposition to DeFi was (a) principled advocacy for something he genuinely believed, (b) basic self-interested moves typical of big players in most industries, or (c) nefarious shit that should tank his credibility among honest folk. My money ordinarily would be on (b), but that's just priors.

I think the most likely explanation is that at some point in the past they were a legitimate business that ran out of legitimate funds, probably due to their known penchant for highly leveraged bets. Then they deluded themselves into believing that if they dipped into customer accounts they could gamble their way out, return the customers' money, and have nobody be the wiser. Cut forward some undefined span of time, and the hole had gradually grown to $8 billion and the whole thing collapsed.

I mostly say this because most people aren't sociopaths, and this seems like the most likely route by which it could have happened if Bankman is not a sociopath. If he is a sociopath and planned the elaborate fraud from the start, I guess never mind. Feels less likely, though.

Anyway, I don't think we're looking at anything more or less than a polycule of stim-abusing rationalists with a gambling problem, good PR, and access to several billion dollars with which to gamble.
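As a crude Monte Carlo cartoon of that dynamic (every parameter invented; this illustrates the gamble, not FTX's actual books):

```python
import random

# "Gamble your way out of the hole": stake half the bankroll each round at
# even odds with a slight edge, stopping on covering the hole or on
# effective ruin. Even with an edge, betting this aggressively means the
# typical run ends in ruin rather than quiet repayment.

def gamble_out(bankroll=1e9, hole=8e9, ruin=1e7, p_win=0.52, frac=0.5):
    target = bankroll + hole
    while ruin < bankroll < target:
        stake = frac * bankroll
        bankroll += stake if random.random() < p_win else -stake
    return bankroll >= target

random.seed(0)
trials = 10_000
escaped = sum(gamble_out() for _ in range(trials))
print(f"covered the hole: {escaped / trials:.1%}")  # mostly, you don't
```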

I think that the main lesson here is that you can't trust people just because they use lots of ingroup shibboleths and donate lots of money to charity, even though (to be honest) that would be kinda my first impulse.

On reflection I think EA as a tribal signifier has come to mean a whole bunch of different things to different people, from "we should value the lives of future people more than our own" to "maybe we should think for two seconds about cost efficiency" to "defrauding people can be good, actually" to "just donate to whoever Givewell says." This is unhelpful.

IIRC EY tweeted something to the effect of "go like 75% of the way from deontology to utilitarianism and you're basically in the right place until you've become a god", which sounds about right.

EA does not value ownership rights; if your money could do more good somewhere else it would be positive for it to be taken from you and directed somewhere else.

I think there's this idea that utilitarianism is all like "sure, go ahead, rob people iff you can use that money better" but that's dumb strawman-utilitarianism.

The reason it's dumb is that you have to take into account the second-order effects of whatever it is you're doing, and those second-order effects of dishonest and coercive actions are nearly always profoundly negative, generally resulting in a society where nobody can trust anyone well enough to coordinate (and also a society where nobody would want to live).

There is a reason why nobody on the EA side is defending Bankman.

Why? I feel that is an impulse worth exploring.

This is very disappointing. Polygenic screening is going to need this kind of data linking genes and IQ if it's ever going to work well; it would be ironic and shameful if the NIH, by attempting to hide gene-to-IQ associations, ended up sabotaging the very groups such a censorship regime was meant to protect.

Society is fixed, but biology is mutable, and this is only going to become more true as AI foundation models bring more of biology under our explicit and direct control. If any one group did end up being lower-IQ than others, that is the group that has by far the most to gain from this kind of technology and (by extension) from this kind of research.

Finding links between IQ and genetics is crucial if we ever want polygenic screening for IQ to work well. Shouldn't we want smarter children?
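For concreteness, the computation polygenic screening ultimately runs is just a weighted sum over genotyped variants, which is why the association data matters so much. A minimal sketch, with SNP IDs and effect sizes entirely invented:

```python
# The effect sizes are exactly the gene-to-trait association data at issue:
# without them there is nothing to weight the genotype by. All values below
# are made up for illustration.
effect_sizes = {"rs0001": 0.11, "rs0002": -0.07, "rs0003": 0.05}

def polygenic_score(genotype):
    """genotype maps SNP id -> trait-allele count (0, 1, or 2)."""
    return sum(effect_sizes[snp] * count for snp, count in genotype.items())

embryo_a = {"rs0001": 2, "rs0002": 0, "rs0003": 1}
embryo_b = {"rs0001": 1, "rs0002": 2, "rs0003": 0}
print(round(polygenic_score(embryo_a), 2), round(polygenic_score(embryo_b), 2))
# 0.27 -0.03
```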

I think that's basically reasonable. There is some plot stuff in Terminator that is less realistic or sensible, which I'm not keen on defending, but I feel 100% fidelity to reality is unnecessary for Terminator to be an effective AI x-risk story showcasing the basic problem.

I get the impression that most of the pushback from alignment folks is because (1) they feel Terminator comparisons make the whole enterprise look unserious since Terminator is a mildly silly action franchise, and (2) that the series doesn't do a good job of pointing out why it is that it's really hard to avoid accidentally making Skynet. Like, it's easy to watch that film and think "well obviously if I were programming the AI I would just tell it to value human well-being. Or maybe just not make a military AI that I give all my guns to. Easy-peasy."

I think it's mainly the first one, though. It's already really hard to bridge the inferential distances necessary to convince normal people that AI x-risk is a thing and not a bunch of out-of-touch nerds hyperventilating about absurd hypotheticals; no point in making the whole thing harder on yourself by letting people associate your movement with a fairly-silly action franchise.

For my money, the alignment fable of choice is Mickey Mouse in The Sorcerer's Apprentice. The autonomous brooms neither love you nor hate you. But they intend to deliver the water regardless of its impact on your personal well-being.

Disney's Fantasia: way ahead of its time.

Some entertainment workers deleting tweets and so on under CCP pressure suggests that it's plausible the world's richest man may turn his company into an asset of foreign propaganda?

Let's make this clear: he's the world's richest man for as long as Tesla's doing well.

This is hardly a one-off-- there was a nearly identical incident with an NBA player (see https://time.com/5694150/nba-china-hong-kong/ ). There was also an incident where Disney put out a pro-Dalai Lama movie which China took umbrage at, the result of which was that Disney apologized and promised never to do it again: https://asia.nikkei.com/Opinion/Disney-s-magical-thinking-won-t-keep-politics-away-from-Mulan . I haven't bothered to dredge up further examples, but it seems like there are a lot of them, and the net effect is even greater given that the way to avoid getting embroiled in similar scandals is to never offend China in the first place.

china having a similar level of 'influence' with many other executives and companies in america due to the very deep trade ties between us

They do indeed have that level of financial influence, and it is indeed significant in practice; the fact that China's influence is felt in a huge number of other places in the US economy is not a reason to feel better about China having similar leverage over the owner of Twitter.

EDIT: in past incidents where China has exerted leverage, the response from American politicians has generally been nothing more than worried hand-wringing. I see no particular reason it'll be different for China exerting influence over Twitter, especially if that influence comes in the form of Twitter algorithmically downplaying stuff China might get offended by.

I did a ctrl-F on this thread for the word "china" and nothing came up, so I'll just point out that before Musk took over Twitter, China had no leverage over the platform to censor views it finds objectionable, given that Twitter is already inaccessible in China. But Musk has a lot to lose if China were to pull its support for Tesla, since so much of Tesla's manufacturing capacity is located there.

Which means that if China were to, say, take offense at the views of people who are pro-Taiwan or anti-Xinjiang-concentration-camps and want those views taken off of Twitter, they have a really tempting point of leverage! "That's a nice Tesla business you've got there, Musk, shame if something were to happen to it."

This is definitely the sort of thing that's already happened to other businesses over which China has had leverage-- see https://en.wikipedia.org/wiki/Blitzchung_controversy for when Blizzard banned a player and fired two casters for being vocally pro-Hong Kong on stream, presumably to avoid China financially penalizing Blizzard in retaliation.

There is a very real sense in which Stable Diffusion and its ilk do represent a search process; it's just one over the latent space of images that could be created. The Shutterstock search process is distinct primarily in that it's a much, much more restricted search that encompasses only a curated set of images.

This isn't (just) a "well, technically" kind of language quibble. I'm pointing it out because generative prompt engineering and search prompt engineering are the same kind of activity, distinguished in large part by generative prompts yielding useful results far less frequently, which makes the generative search process far slower.

But this is a temporary (maybe) fact about the current state of the AI tool, not a permanent fact about reality.
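In code, the claimed equivalence is just the same loop with a different backend; `stock_search`, `generate`, and `reviewer` below are hypothetical stand-ins, not any real API:

```python
# Both workflows reduce to: issue a query, get a candidate, apply a human
# (or automated) filter, repeat. What differs today is the hit rate of the
# underlying source, hence the speed of the overall search.

def find_image(prompt, source, is_good, max_tries=50):
    for _ in range(max_tries):
        candidate = source(prompt)
        if is_good(candidate):
            return candidate
    return None  # generative sources burn more tries today, i.e. slower search

# find_image("red bicycle at dusk", stock_search, reviewer)  # curated index
# find_image("red bicycle at dusk", generate, reviewer)      # latent space
```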

Ah-- an unstated but crucial assumption in the post was that you personally were the one who created the image. It's true: AI images grabbed off of a stock website are basically similar to regular stock images in all relevant respects.