
CeePlusPlusCanFightMe

Self-acceptance is bunk. Engineer that shit away.

0 followers   follows 5 users  
joined 2022 September 05 17:01:33 UTC
Verified Email

User ID: 641


I'm not dismissing garbage collection wholesale. I'm dismissing programmers who have known nothing else.

Eh, this basically feels like a box out of the famous XKCD comic.

morally, i feel i should be able to lose weight myself

No! Bad! The decision to take a drug is a practical one with no moral implications. Similar statements include "morally, i feel i should be able to drive a bit longer without stopping at a rest area" or "morally, i feel i should be able to walk to the grocery store rather than drive."

To be fair, I think the only real hate for transracial people comes from the social-justice left; as far as I've heard nobody moderate to conservative has shown the slightest bit of interest. Admittedly this is also because the social-justice left is by far the segment of society most interested in any given person's race.

There's a reason the Rachel Dolezal transracial flareup happened to center on a college instructor! (Because if, say, the head of the National Rifle Association were transracial, nobody would care even a little bit. Why would they?)

Good times.

Fair in general, but he is a central figure in EA specifically, and arguably its founder.

Yeah, fair, I'll cop to him being the founder (or at least popularizer) of EA. Though I disclaim any obligation to defend weird shit he says.

I think one thing that I dislike about the discourse around this is it kinda feels mostly like vibes-- "how much should EA lose status from the FTX implosion"-- with remarkably little in the way of concrete policy changes recommended even from detractors (possible exception: EA orgs sending money they received from FTX to the bankruptcy courts for allocation to victims, which, fair enough.)

On a practical level, current EA "doctrine" or whatever is that you should throw down 10% of your income to do the maximum amount of good you think you can do, which is as far as I can tell basically uncontroversial.

Or to put it another way-- suppose I accepted your position that EA as it currently stands is way too into St. Petersburging everyone off a cliff, and way too into violating deontology in the name of saving lives in the third world. Would you perceive it as a sufficient remedy for EA leaders to disavow those perspectives in favor of prosocial varieties of giving to the third world? If not, what should EAs say or do differently?

There is no problem humans face that cannot be reframed as a programming or automation problem. Need food? Build a robot to grow it for you, and another to deliver it to your house. Need to build a robot? Make a factory that automates robot fabrication. Need to solve X medical issue? Write a program that figures out using simulations or whatever how to synthesize a chemical or machine that fixes it. Given this, the question of "what happens to programmers when computers can write code for arbitrary domains just as well as programmers can" answers itself.

I expect that fully automating coding will be the last job anybody ever does, either because we're all dead or we have realized Fully Automated Luxury Space Communism.

Most of my thoughts on this are driven by the practicalities of things we can do right now; I see no reason, assuming all technological constraints were lifted, why anyone shouldn't be able to do anything they want with their bodies.

Similarly, I feel like the only strong arguments against transitioning genders stem from the fact that our bio-engineering isn't up to snuff.

Why on earth would a deontologist object to throwing someone in prison if they're guilty of the crime and were convicted in a fair trial?

Fair enough! I suppose it depends on whether you view the morally relevant action as "imprisoning someone against their will" (bad) vs "enforcing the law" (good? Depending on whether you view the law itself as a fundamentally consequentialist instrument).

That's like saying that Christians don't actually believe that sinning is bad because even Christians occasionally sin. You can genuinely believe in moral obligations even if the obligations are so steep that (almost) no one fully discharges them.

I think the relevant distinction here is that not only do I not give away all my money, I also don't think anyone else has the obligation to give away all their money. I do not acknowledge this as an action I or anyone else is obligated to perform, and I believe this is shared by most everyone who's not Peter Singer. (Also, taking Peter Singer as the typical utilitarian seems like a poor decision; I have no particular desire to defend his utterances, nor do most people.)

On reflection, I think that actually everyone makes moral decisions based on a system where every action has some (possibly negative) number of Deontology Points and some number (possibly negative) of Consequentialist Points and we weight those in some way and tally them up and if the outcome is positive we do the action.

That's why I would not only steal loaves of bread myself to feed my starving family, but would also endorse others doing the same. Stealing the bread? A little bad, deontology-wise. Family starving? Mega-bad, utility-wise. (You could try to rescue pure-deontology by saying that the morally-relevant action being performed is "letting your family starve" not "stealing a loaf of bread", but I would suggest that this just makes your deontology utilitarianism with extra steps.)
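
A minimal sketch of that tally in Python, with point values and weights that are entirely made up for illustration:

```python
# Toy model of the "Deontology Points / Consequentialist Points" tally described above.
# All numbers here are invented for illustration; nothing is calibrated to anything real.

def should_act(deontology_points: float, consequentialist_points: float,
               w_deont: float = 1.0, w_conseq: float = 1.0) -> bool:
    """Weight the two scores, add them up, and act if the total comes out positive."""
    total = w_deont * deontology_points + w_conseq * consequentialist_points
    return total > 0

# The bread case: stealing is a little bad deontology-wise,
# but preventing your family from starving is a big consequentialist positive.
print(should_act(deontology_points=-1.0, consequentialist_points=10.0))  # True: steal the bread
```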

I can't think of any examples off the top of my head where the opposite tradeoff realistically occurs, negative utility points in exchange for positive deontology points.

Sure, except for when it really matters

I mean... yeah? The lying-to-an-axe-murderer thought experiment is a staple for a reason.

it begins

Though for srs, Replika by all accounts runs off a very small and mediocre language model compared to SOTA. What happens when a company with access to a GPT-4-tier LLM tries their hand at similar is left as an exercise to the reader. Even the biggest Llama variants might suffice to blow past the "I'm talking to a bot" feeling.

(Though i confess to mostly using an opportunity i saw to deliver the "sci-fi scenarios" line. Good every time.)

Fair counterexamples!

umm... there have been tons of shows featuring obese people

Any TV shows you're thinking of specifically?

Mostly the examples that come to mind, for me, come in two categories:

(1) Schlubby Guy Hot Wife dom-coms, which haven't been in vogue for years

(2) Reality TV, where writers aren't creating the characters (and so aren't accountable for the way in which the "characters" behave or look.)

It's possible I just don't watch TV or movies where there are obese characters! But I also haven't heard any specific media called out in this thread as counterexamples for obesity specifically, so.

Are there any charities to which you would endorse sending 10 percent of your income each year?

Is there, do you think, any coherent moral framework you'd endorse where you should donate to the AMF over sending money to friends and family?

I think if it's a binary choice, 50% is exactly right, since if you don't know what a house is, no process of reasoning could get you better than a coin flip as to the right answer. Similarly if you have N different choices that you can't distinguish between in any meaningful way.
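
A quick simulation of that intuition (my own toy example): an uninformed guess among N indistinguishable options lands on the right answer about 1/N of the time, which is 50% in the binary case.

```python
# Toy simulation: guessing among N options with nothing to distinguish them.
import random

def guess_accuracy(n_choices: int, trials: int = 100_000) -> float:
    correct = 0
    for _ in range(trials):
        answer = random.randrange(n_choices)  # the true answer
        guess = random.randrange(n_choices)   # an uninformed guess
        correct += guess == answer
    return correct / trials

print(guess_accuracy(2))  # ~0.5 -- a coin flip
print(guess_accuracy(5))  # ~0.2 -- i.e. 1/N
```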

It's important because I find it unaesthetic to have athletes dying from ODs on PE drugs, and, more crucially, so do the people running the Olympics.

That is an excellent point, right up there with the thing where, due to illegal drugs being illegal, people will get them from street dealers, whose drugs are going to be massively more dangerous than a theoretical legal equivalent.


They might, but how would you convincingly show an image to have been AI-generated?

For (1), what you're saying is certainly true; the better abstractions and better tooling have been accompanied by growth in hardware fundamentals that cannot reasonably be expected to continue.

(2) is where I'm a lot more skeptical. A sufficient-- though certainly not necessary-- condition for a valuable software project is identifying a thing that requires human labor that a computer could, potentially, be doing instead.

The reason I called out robotics specifically is that, yeah, if you think about "software" as just meaning "stuff that runs on a desktop computer", well, there are lots of spheres of human activity that occur away from a computer. But the field of robotics represents the set of things that computers can be made to do in the real world.

That being so, if non-robotics software becomes trivial to write I expect we are in one of four possible worlds:

World one: General-purpose robotics-- for example, building robots that plant and harvest crops-- is possible for (AI-assisted) human programmers to do, but it's intrinsically really hard even with AI support, so human programmers/engineers still have to be employed to do it. This seems like a plausible world that we could exist in, and seems basically similar to our current world except that the programmer-gold-rush is in robotics instead of web apps.

World two: General-purpose robotics is really easy for non-programmers if you just make an AI do the robot programming. That means "programming" stops being especially lucrative as a profession, since programming has been automated away. It also means that every other job has been (or will very soon be) automated away. This is Fully-Automated Luxury Space Communism world, and also seems broadly plausible.

World three: General-purpose robotics is impossible at human or AI levels of cognition, but non-robotics AI-assisted programming is otherwise trivial. I acknowledge this is a world where mass layoffs of programmers would occur and that this would be a problem for us. I also do not think this is a very likely scenario; general-purpose robotics is very hard but I have no specific reason to believe it's impossible, especially if AI software development has advanced to the point where almost all other programming is trivial.

World four: World two, except somebody screwed up the programming on one of their robot-programming AIs such that it murders everyone instead of performing useful labor. This strikes me as another plausible outcome.

Are there possibilities I'm missing that seem to you reasonably likely?

For your point (3), I have no particular expectations or insight one way or another.

Yeah, sorry, went on a bit of a tangent there. Anyway.

I feel a lot of skepticism about bad diet and exercise habits being the primary causal drivers of obesity, since on a personal level I know some people who struggle to lose weight in spite of vigorous and frequent exercise and a diet heavy in foods traditionally considered healthy.

I expect that genetics has a hell of a lot to do with whether somebody becomes fat or not, and that "well you probably have bad diet and exercise habits" is a close-to-hand explanation that is both extremely difficult to falsify and which satisfies our instincts toward the Just World Hypothesis. There might also be chemical contaminants involved.

You're definitely right that diet-and-exercise studies include a huge range of effect sizes. I'm not 100% certain how to interpret this; my suspicion is that there's a hidden intervention sliding scale between "doctor says to you, with gravitas, 'eat healthier'" and "nutritionist locks you in a box and hand-feeds you kale that they calorie-counted themselves." And meta-analyses do a poor job differentiating between these, including the one I linked.

I would expect that more dramatic effects combined with heavier fadeout of results are a natural indicator that a particular study is doing an unsustainably aggressive intervention; the meta-analysis indicated that in both the diet-only and diet-and-exercise groups, everyone regained about half the weight after a year. Which still leaves 14 pounds, and that isn't anything to sneeze at.

You are also right that there are two ways of doing these studies-- "as prescribed" and "intent-to-treat", and as-prescribed results will always show much better effect sizes than intent-to-treat results. In a sense, intent-to-treat isn't measuring the results of the treatment as much as it is measuring the results of prescribing the treatment. And as-prescribed, diet and exercise will always be 100% effective at inducing any amount of weight loss almost by definition. Hard to beat that, really.

But on the other hand... I kinda figure that intent-to-treat is a fairer representation of real life? In the sense that in real life people don't have the option of getting locked in the nutritionist-box indefinitely. And if two treatments are both effective as-prescribed, but the first one has much worse intent-to-treat efficacy, I want the second treatment.
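
To make the as-prescribed vs. intent-to-treat distinction concrete with toy numbers (mine, not from the meta-analysis): if only a fraction of people actually stick to the regimen, the intent-to-treat effect gets diluted by everyone who was prescribed it but didn't follow it.

```python
# Toy illustration of as-prescribed vs. intent-to-treat effect sizes.
# The 28 lb loss and 30% adherence rate are invented numbers, and non-adherents
# are assumed (unrealistically) to lose nothing at all.

def as_prescribed_effect(loss_if_adherent_lb: float) -> float:
    """Average weight loss among the people who actually followed the regimen."""
    return loss_if_adherent_lb

def intent_to_treat_effect(loss_if_adherent_lb: float, adherence_rate: float) -> float:
    """Average weight loss over everyone *prescribed* the regimen, adherent or not."""
    return adherence_rate * loss_if_adherent_lb

print(as_prescribed_effect(28.0))         # 28.0 lb among perfect adherers
print(intent_to_treat_effect(28.0, 0.3))  # 8.4 lb averaged over everyone prescribed it
```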

The value of HBD being true is basically nothing, as far as I'm concerned.

I-- and, I think, a lot of other people here-- just have an intense, instinctive flinch response to people saying things that aren't correct, and when people say obvious nonsense, even if it's the most well-intentioned nonsense in the world, it triggers that flinch response. Obviously I don't say anything about it; I'm not stupid, and I value my social life.

Constrained reproduction is the stupid and unethical way to go about solving dysgenics, though-- it's never gonna happen, and if it did it would get weaponized by the people in power almost immediately against the people out of power. That's aside from any ethical considerations about involuntarily blocking people off from having kids, which are real and important.

My suggestion? Government-subsidized polygenic screening for everyone, optimizing for health and IQ, let's gooooooo

(Never solve with social technology that which you can instead solve with actual technology)

Lab leaks happen already by accident. Why would you believe it's so hard to engineer a lab leak directly, given (1) superintelligence and (2) the piles of money superintelligence can easily earn via hacking/crypto/stock trading/intellectual labor?

EA does not value ownership rights; if your money could do more good somewhere else it would be positive for it to be taken from you and directed somewhere else.

I think there's this idea that utilitarianism is all like "sure, go ahead, rob people iff you can use that money better" but that's dumb strawman-utilitarianism.

The reason it's dumb is because you have to take into account second-order effects in doing whatever it is you're doing, and those second-order effects for dishonest and coercive actions are nearly always profoundly negative, in general resulting in a society where nobody can trust anyone well enough to coordinate (and also resulting in a society where nobody would want to live).

There is a reason why nobody on the EA side is defending Bankman.

Higher prices aren't just about encouraging quantity supplied, they are about reining in quantity demanded.

Higher gas prices mean consumers will attempt to avoid using gas where they can, and less-productive uses of gas in industry will fall by the wayside. This is as true of gas as it is of every product. This is an important function of prices in an economy.

Also worth pointing out: "russia invaded a country so we aren't taking their gas anymore" is not a black swan, being as it is an entirely logical outcome of modern Russian warmongering. "We could hardly have foreseen Russia invading their neighbors again" is unpersuasive.

But it only makes sense for potential suppliers to build ways to take advantage if they will be able to profit from such speculative preparations when prices spike.