CeePlusPlusCanFightMe

Self-acceptance is bunk. Engineer that shit away.

0 followers   follows 5 users   joined 2022 September 05 17:01:33 UTC

Verified Email

User ID: 641

No bio...

I'm not dismissing garbage collection wholesale. I'm dismissing programmers who have known nothing else.

Eh, this basically feels like a box out of the famous XKCD comic.

Why on earth would a deontologist object to throwing someone in prison if they're guilty of the crime and were convicted in a fair trial?

Fair enough! I suppose it depends on whether you view the morally relevant action as "imprisoning someone against their will" (bad) vs "enforcing the law" (good? Depending on whether you view the law itself as a fundamentally consequentialist instrument).

That's like saying that Christians don't actually believe that sinning is bad because even Christians occasionally sin. You can genuinely believe in moral obligations even if the obligations are so steep that (almost) no one fully discharges them.

I think the relevant distinction here is that not only do I not give away all my money, I also don't think anyone else is obligated to give away all their money. I do not acknowledge this as an action I or anyone else is obligated to perform, and I believe this is shared by most everyone who's not Peter Singer. (Also, taking Peter Singer as the typical utilitarian seems like a poor decision; I have no particular desire to defend his utterances, nor do most people.)

On reflection, I think that actually everyone makes moral decisions based on a system where every action has some (possibly negative) number of Deontology Points and some (possibly negative) number of Consequentialist Points; we weight those in some way, tally them up, and if the outcome is positive we do the action.

That's why I would not only steal loaves of bread to feed my starving family myself, but would also endorse others doing so. Stealing the bread? A little bad, deontology-wise. Family starving? Mega-bad, utility-wise. (You could try to rescue pure-deontology by saying that the morally-relevant action being performed is "letting your family starve" not "stealing a loaf of bread" but I would suggest that this just makes your deontology utilitarianism with extra steps.)
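To make the tallying concrete, here's a rough sketch of the bread example; the point values and weights are numbers I just made up for illustration, not anything principled:

```python
# Rough sketch of the "tally the points" model; all numbers are invented for illustration.

def should_do(deontology_points: float, consequence_points: float,
              deontology_weight: float = 1.0, consequence_weight: float = 1.0) -> bool:
    """Do the action iff the weighted tally comes out positive."""
    score = (deontology_weight * deontology_points
             + consequence_weight * consequence_points)
    return score > 0

# Stealing bread to feed a starving family:
# a little bad deontology-wise, hugely good utility-wise (family doesn't starve).
print(should_do(deontology_points=-1, consequence_points=+10))  # True: steal the bread
```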

I can't think of any examples off the top of my head where the opposite tradeoff realistically occurs, negative utility points in exchange for positive deontology points.

Sure, except for when it really matters

I mean... yeah? The lying-to-an-axe-murderer thought experiment is a staple for a reason.

There is no problem humans face that cannot be reframed as a programming or automation problem. Need food? Build a robot to grow it for you, and another to deliver it to your house. Need to build a robot? Make a factory that automates robot fabrication. Need to solve X medical issue? Write a program that figures out using simulations or whatever how to synthesize a chemical or machine that fixes it. Given this, the question of "what happens to programmers when computers can write code for arbitrary domains just as well as programmers can" answers itself.

I expect that fully automating coding will be the last job anybody ever does, either because we're all dead or we have realized Fully Automated Luxury Space Communism.

morally, i feel i should be able to lose weight myself

No! Bad! The decision to take a drug is a practical one with no moral implications. Similar statements include "morally, i feel i should be able to drive a bit longer without stopping at a rest area" or "morally, i feel i should be able to walk to the grocery store rather than drive."

it begins

Though for srs, Replika by all accounts runs off a very small and mediocre language model compared to SOTA. What happens when a company with access to a GPT-4-tier LLM tries their hand at something similar is left as an exercise for the reader. Even the biggest Llama variants might suffice to blow past the "i'm talking to a bot" feeling.

(Though i confess to mostly using an opportunity i saw to deliver the "sci-fi scenarios" line. Good every time.)

Fair counterexamples!

umm... there have been tons of shows featuring obese people

any tv shows you're thinking of specifically?

Mostly the examples that come to mind, for me, come in two categories:

(1) Schlubby Guy Hot Wife dom-coms, which haven't been in vogue for years

(2) Reality TV, where writers aren't creating the characters (and so aren't accountable for the way in which the "characters" behave or look.)

it's possible I just don't watch TV or movies where there are obese characters! But I also haven't heard any specific media called out in this thread as counterexamples for obesity specifically, so.

Good times.

Are there any charities to which you would endorse sending 10 percent of your income each year?

Is there, do you think, any coherent moral framework you'd endorse where you should donate to the AMF over sending money to friends and family?

Fair in general, but he is a central figure in EA specifically, and arguably its founder.

Yeah, fair, I'll cop to him being the founder (or at least popularizer) of EA. Though I disclaim any obligation to defend weird shit he says.

I think one thing that I dislike about the discourse around this is it kinda feels mostly like vibes-- "how much should EA lose status from the FTX implosion"-- with remarkably little in the way of concrete policy changes recommended even from detractors (possible exception: EA orgs sending money they received from FTX to the bankruptcy courts for allocation to victims, which, fair enough.)

On a practical level, current EA "doctrine" or whatever is that you should throw down 10% of your income to do the maximum amount of good you think you can do, which is as far as I can tell basically uncontroversial.

Or to put it another way-- suppose I accepted your position that EA as it currently stands is way too into St. Petersburging everyone off a cliff, and way too into violating deontology in the name of saving lives in the third world. Would you perceive it as a sufficient remedy for EA leaders to disavow those perspectives in favor of prosocial varieties of giving to the third world? If not, what should EAs say or do differently?

I think if it's a binary choice, 50% is exactly right: if you don't know what a house is, no process of reasoning could get you better than a coin flip as to the right answer. Similarly if you have N different choices where you can't distinguish between them in any meaningful way.
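Purely as an illustration of the N-way version (nothing deeper going on than a hit rate of 1/N for an uninformed guess):

```python
# Purely illustrative: an uninformed guess among N indistinguishable options
# is right 1/N of the time, and no relabeling of the options can beat that.
import random

def guess_accuracy(n_choices: int, trials: int = 100_000) -> float:
    correct = 0
    for _ in range(trials):
        answer = random.randrange(n_choices)  # the "true" option
        guess = random.randrange(n_choices)   # an uninformed guess
        correct += guess == answer
    return correct / trials

print(guess_accuracy(2))   # ~0.5
print(guess_accuracy(10))  # ~0.1
```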

It's important because I find it unaesthetic to have athletes dying from ODs on PE drugs, and, more crucially, so do the people running the olympics.

That is an excellent point, right up there with the observation that, because illegal drugs are illegal, people get them from street dealers, whose drugs are going to be massively more dangerous than a theoretical legal equivalent.

They might, but how would you convincingly show an image to have been ai-generated?

For (1), what you're saying is certainly true; the better abstractions and better tooling have been accompanied by growth in hardware fundamentals that cannot reasonably be expected to continue.

(2) is where I'm a lot more skeptical. A sufficient-- though certainly not necessary-- condition for a valuable software project is identifying a thing that requires human labor that a computer could, potentially, be doing instead.

The reason I called out robotics specifically is because, yeah, if you think about "software" as just meaning "stuff that runs on a desktop computer", well, there's lots of spheres of human activity that occur away from a computer. But the field of robotics represents the set of things that computers can be made to do in the real world.

That being so, if non-robotics software becomes trivial to write I expect we are in one of four possible worlds:

World one: General-purpose robotics-- for example, building robots that plant and harvest crops-- is possible for (AI-assisted) human programmers to do, but it's intrinsically really hard even with AI support, so human programmers/engineers still have to be employed to do it. This seems like a plausible world that we could exist in, and seems basically similar to our current world except that the programmer-gold-rush is in robotics instead of web apps.

World two: General-purpose robotics is really easy for non-programmers if you just make an AI do the robot programming. That means "programming" stops being especially lucrative as a profession, since programming has been automated away. It also means that every other job has been (or will very soon be) automated away. This is Fully-Automated Luxury Space Communism world, and also seems broadly plausible.

World three: General-purpose robotics is impossible at human or AI levels of cognition, but non-robotics AI-assisted programming is otherwise trivial. I acknowledge this is a world where mass layoffs of programmers would occur and that this would be a problem for us. I also do not think this is a very likely scenario; general-purpose robotics is very hard but I have no specific reason to believe it's impossible, especially if AI software development has advanced to the point where almost all other programming is trivial.

World four: World two, except somebody screwed up the programming on one of their robot-programming AIs such that it murders everyone instead of performing useful labor. This strikes me as another plausible outcome.

Are there possibilities I'm missing that seem to you reasonably likely?

For your point (3), I have no particular expectations or insight one way or another.

Yeah, sorry, went on a bit of a tangent there. Anyway.

I feel a lot of skepticism about bad diet and exercise habits being the primary causal drivers of obesity, since on a personal level I know some people who struggle to lose weight in spite of vigorous and frequent exercise and a diet heavy in foods traditionally considered healthy.

I expect that genetics has a hell of a lot to do with whether somebody becomes fat or not, and that "well you probably have bad diet and exercise habits" is a close-to-hand explanation that is both extremely difficult to falsify and which satisfies our instincts toward the Just World Hypothesis. There might also be chemical contaminants involved.

You're definitely right that diet-and-exercise studies include a huge range of effect sizes. I'm not 100% certain how to interpret this; my suspicion is that there's a hidden intervention sliding scale between "doctor says to you, with gravitas, 'eat healthier'" and "nutritionist locks you in a box and hand-feeds you kale that they calorie-counted themselves." And meta-analyses do a poor job differentiating between these, including the one I linked.

I would expect that more dramatic effects combined with heavier fadeout of results are a natural indicator that a particular study is doing an unsustainably aggressive intervention; the meta-analysis indicated that in both the diet-only and diet-and-exercise groups, people regained about half the weight after a year. Which still does leave 14 pounds, and that isn't anything to sneeze at.

You are also right that there are two ways of doing these studies-- "as-prescribed" and "intent-to-treat"-- and as-prescribed results will always show much better effect sizes than intent-to-treat results. In a sense, intent-to-treat isn't measuring the results of the treatment as much as it is measuring the results of prescribing the treatment. And as-prescribed, diet and exercise will always be 100% effective at inducing any amount of weight loss almost by definition. Hard to beat that, really.
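To make the gap concrete, here's a toy simulation of a prescribed-diet trial; the adherence rate and the pounds-lost numbers are invented for illustration, not taken from the meta-analysis:

```python
# Toy simulation: "as-prescribed" vs "intent-to-treat" effect estimates.
# The adherence rate and weight-loss numbers are invented for illustration.
import random

random.seed(0)
N = 10_000
ADHERENCE = 0.4          # fraction of people prescribed the diet who actually stick to it
LOSS_IF_ADHERENT = 28.0  # pounds lost by those who follow the prescription
LOSS_IF_NOT = 2.0        # pounds lost by those who don't

adherent = [random.random() < ADHERENCE for _ in range(N)]
loss = [LOSS_IF_ADHERENT if a else LOSS_IF_NOT for a in adherent]

as_prescribed = sum(l for l, a in zip(loss, adherent) if a) / sum(adherent)
intent_to_treat = sum(loss) / N

print(f"as-prescribed:   {as_prescribed:.1f} lbs lost")   # ~28
print(f"intent-to-treat: {intent_to_treat:.1f} lbs lost") # ~12
```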

But on the other hand... I kinda figure that intent-to-treat is a fairer representation of real life? In the sense that in real life people don't have the option of getting locked in the nutritionist-box indefinitely. And if two treatments are both effective as-prescribed, but the first one has much worse intent-to-treat efficacy, I want the second treatment.

Yeah, the legs thing is probably the most invasive of the items on the list, and the one I know least about.

Gotcha. I appreciate this insight into the anti-EA perspective.

There is a very real sense in which Stable Diffusion and its ilk do represent a search process, it's just one over the latent space of images that could be created. The Shutterstock search process is distinct primarily in that it's a much much much more restricted search process that encompasses only a curated set of images.

This isn't (just) a "well technically" kind of language quibble; I'm pointing this out because generative prompt engineering and search prompt engineering are the same kind of activity, distinguished in large part by generative prompts yielding useful results far less frequently, with the search process being far slower as a result.

But this is a temporary (maybe) fact about the current state of the AI tool, not a permanent fact about reality.
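If it helps, here's the schematic I have in mind, treating both workflows as the same retry-until-good loop; the per-query hit rates below are made-up stand-ins, not real APIs or measurements:

```python
# Schematic only: stock-photo search and generative prompting as the same
# "query, inspect, refine, repeat" loop. The hit rates are made-up stand-ins.
import random

def rounds_until_usable(get_candidates, looks_good, max_rounds=10_000):
    """Keep issuing queries until some candidate passes inspection."""
    for round_number in range(1, max_rounds + 1):
        if any(looks_good(c) for c in get_candidates()):
            return round_number
    return max_rounds

# Stand-ins: a curated stock library has a high per-query hit rate; a generative
# model searching an enormous latent space has a much lower one.
stock_query = lambda: [random.random() < 0.30 for _ in range(10)]
latent_query = lambda: [random.random() < 0.02 for _ in range(10)]
is_usable = lambda candidate: candidate  # each candidate is just "did it look good?"

print("stock search rounds: ", rounds_until_usable(stock_query, is_usable))
print("latent search rounds:", rounds_until_usable(latent_query, is_usable))
```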

Ah, an unstated but crucial assumption in the post was that you personally were the one who created the image. It's true, AI images grabbed off of a stock website are basically similar to regular stock images in all relevant respects.

What you're saying checks out. One way of putting it might be that the craft of art is helped by AI-- as an artist I can do more, on an objective level-- but the profession of being an artist is irrecoverably damaged.

Currently you're totally right. But I'll point out that the reason it takes ten minutes is because right now AI art kinda sucks (so it takes a while to get a prompt that looks okay), and the tech only gets better from here on out.