CeePlusPlusCanFightMe

Self-acceptance is bunk. Engineer that shit away.

0 followers   follows 5 users  
joined 2022 September 05 17:01:33 UTC
Verified Email

User ID: 641

But the larger understanding about problems as things that need to and can be resolved internally instead of by repetition is especially important in computer programming.

I agree with this. Most of being good at coding rests on your ability to detect hidden abstractions in the business logic you're writing-- subtle regularities in the domain that can be used to write easier-to-understand and easier-to-modify code.

There's this saying: "Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they'll be obvious." I think that's saying something basically similar, and I think it's true.

But trying to teach how to do that seems basically similar to trying to teach someone generic problem solving, which professional educators have been banging their heads against forever.
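The "tables over flowcharts" saying can be made concrete with a small sketch. Everything here is hypothetical (the shipping rules, rates, and function names are made up for illustration): the same business logic written as branching control flow versus as a data table, where the table version makes the hidden regularity (base rate plus per-kg surcharge) explicit.

```python
# Flowchart-style: the domain regularity is buried in control flow.
def shipping_cost_branchy(region: str, weight_kg: float) -> float:
    if region == "domestic":
        if weight_kg <= 1:
            return 5.00
        return 5.00 + (weight_kg - 1) * 1.50
    elif region == "international":
        if weight_kg <= 1:
            return 15.00
        return 15.00 + (weight_kg - 1) * 4.00
    raise ValueError(f"unknown region: {region}")

# Table-style: the hidden abstraction (base rate + per-kg surcharge)
# is explicit data, so adding a region is a one-line change.
RATES = {
    "domestic": (5.00, 1.50),
    "international": (15.00, 4.00),
}

def shipping_cost_table(region: str, weight_kg: float) -> float:
    base, per_kg = RATES[region]
    return base + max(0.0, weight_kg - 1) * per_kg

# Both encode the same rule; only the table makes the rule visible.
assert shipping_cost_branchy("domestic", 3) == shipping_cost_table("domestic", 3)
```

Once the rates live in a table, the flowchart really is obvious: look up, then apply the surcharge formula.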

Yeah, the legs thing is probably the most invasive of the items on the list, and the one I know least about.

Gotcha. I appreciate this insight into the anti-EA perspective.

There is a very real sense in which Stable Diffusion and its ilk do represent a search process, it's just one over the latent space of images that could be created. The Shutterstock search process is distinct primarily in that it's a much much much more restricted search process that encompasses only a curated set of images.

This isn't (just) a "well technically" kind of language quibble; I'm pointing this out because generative prompt engineering and search prompt engineering are the same kind of activity, distinguished in large part by generative prompts yielding useful results far less frequently, which makes the search process far slower.

But this is a temporary (maybe) fact about the current state of the AI tool, not a permanent fact about reality.

Ah, an unstated but crucial assumption in the post was that you personally were the one who created the image. It's true that AI images grabbed off of a stock website are basically similar to regular stock images in all relevant respects.

What you're saying checks out. One way of putting it might be that the craft of art is helped by AI-- as an artist I can do more, on an objective level-- but the profession of being an artist is irrecoverably damaged.

Currently you're totally right. But I'll point out that the reason it takes ten minutes is because right now AI art kinda sucks (so it takes a while to get a prompt that looks okay), and the tech only gets better from here on out.

It's possible that courts will start demanding a chain of custody for art, but I can't imagine that's terribly likely given the insane logistical challenges involved in enforcement.

There are already cases of people online claiming to have fallen in love with chatbots. Only a matter of time.

I think that (1) AI video looks about a year behind AI art and (2) AI art is about a year from being able to reliably deal with physically complex scenes with many moving parts. So 2 years?

I did the More Leaders Modmod!

The coding was extremely low-quality and the Python was probably buggy as hell. But it was mine.

EDIT: Wait, I think Ashes of Erebus did end up incorporating some of my work! How's that project going, by the by?

I'm kind of underwhelmed by the Huel Black Chocolate flavor. It's... very okay. Any suggestions on things I should add to it for additional flavor?

At the moment I'm just mixing it with whole milk.

Leaving aside whether "passing" as a concept is intrinsically problematic (probably? man, hell if I know) I definitely think there's pretty strong (for me) delineations between degrees-of-passing.

  1. I cannot distinguish this person from a cisgendered man unless they tell me verbally.

  2. I can tell this person is attempting to pass as a woman, but my hindbrain is continuing to helpfully inform me that this person is a man.

  3. I can tell this person is a trans woman based off of specific conscious cues, but she passes successfully enough to where my hindbrain perceives her as a cis woman.

  4. As far as I can tell this person's a cis woman.

"Passing", depending on context, either means (3) or (4). I definitely will accidentally misgender people who fall into categories (1) and (2), since talking with or about them involves constantly overriding my typical social scripts for dealing with people I've internally categorized as one gender or the other. I don't think that my unconscious sense of other peoples' gender actually distinguishes between "cis" and "trans".

As an aside, trans people are definitely susceptible to the Gaudy Graveyard Effect, where the trans movement tends to be publicly identified with people in categories (1) and (2) because that's where all the controversy is centered. Culture war stuff aside, I don't think most people have any visceral problem with trans people in categories (3) or (4).

The process of losing weight is mostly eating a more normal amount of calories and engaging in physical activity

Worth pointing out that diet and exercise alone have extremely poor intent-to-treat efficacy, with most studies measuring losses of between 2% and 4% of body weight. For instance, see https://www.nature.com/articles/0803015 . Medication dramatically improves the amount of weight successfully lost (see also: https://www.nejm.org/doi/full/10.1056/NEJMoa2032183).

Have you considered that physical appearance is one of the most malleable things about a person, particularly for a person with a high income? I have no specific knowledge of what about you is unattractive, but you have the following options open to you:

  1. plastic surgery if it's an unattractive face or jawline or your ears stick out or whatever

  2. weight loss drugs if you're overweight

  3. testosterone replacement therapy + personal training if you have a severe lack of muscle mass. (Girls mostly really like muscle mass.)

  4. that leg-lengthening procedure if your problem is height

  5. wigs or medical hair replacement (dunno the clinical term) if you are balding.

This is an entirely serious comment. Western society has a stigma against trying to change your appearance in these ways, but if your appearance is an impediment to you living your best life, you should change it if you have the money, which it sounds like you will.

Do these have side effects? Yeah, probably. Life is full of tradeoffs. Still, given current medical tech the OP reads a bit like a (more expensive) version of "I am worried that no woman will ever love me because all of my clothes are ugly. Should I resign myself to dying alone, or just really go hard on settling?" My dude! Just buy some new clothes!

Self-acceptance is bunk. Engineer that shit away.

Yup. The primary reason the anti-drug rules are important is that, with them, pros will ride the razor's edge of discoverability; without them, they'll ride the razor's edge of ODing or death.

It asks them to inject themselves and to go on said trips, and they say "okay!"

I think it's more like pointing out that there's no particular reason the EA charities should have been able to spot a fraud when the fraud went unspotted by a huge number of highly motivated traders whose job is, in part, to spot that sort of thing (so that they can either avoid it or make trades based around its existence).

Of course, utilitarians don't believe in honesty -- it's just one more principle to be fed into the fire for instrumental advantage in manufacturing paperclips (er, malaria nets).

There's a bunch of argument about what utilitarianism requires, or what deontology requires, and it seems sort of obvious to me that nobody is actually a utilitarian (as evidenced by people not immediately voluntarily equalizing their wealth), or actually a deontologist (as evidenced by our willingness to do shit like nonconsensually throwing people in prison for the greater good of not living in a crime-ridden hellhole). I mean, really any specific philosophical school of thought will, in the appropriate thought experiment, result in you torturing thousands of puppies or letting the universe be vaporized or whatever. I don't think this says anything particularly deep about those specific philosophies, beyond the fact that it's apparently impossible to explicitly codify human moral intuitions, even though people really, really want to.

That aside, in real life self-described EAs universally seem to advocate for honesty, based on the pretty obvious point that actors' ability to trust one another is key to getting almost anything done ever, and is what stops society from devolving into a Hobbesian war of all-against-all. And yeah, if you're a good enough liar that nobody ever finds out you're dishonest, then I suppose you don't damage that; but think about it for two seconds: nobody tells material lies expecting to get caught, and the obvious way to avoid being known for dishonesty long-term is to be honest.

As for the St. Petersburg paradox thing, yeah, that's a weird viewpoint and one that seems pretty clearly false (since marginal utility per dollar declines far more slowly at a global/altruistic scale than at an individual/selfish one, but it still does decline, and billions of dollars is about the scale where it would start being noticeable). But I'm not sure that's really an EA thing so much as a personal idiosyncrasy.
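The declining-marginal-utility point can be illustrated with a toy calculation. This is a hedged sketch, not a claim about anyone's actual utility function: it assumes log utility (one standard choice) and a hypothetical bankroll, and shows why an expected-dollar maximizer takes a double-or-nothing bet that an expected-utility maximizer declines.

```python
import math

# Hypothetical bankroll; the numbers are for illustration only.
bankroll = 1_000_000.0

# Double-or-nothing coin flip on the whole bankroll. We leave $1 on
# the losing branch so log utility stays defined.
# Expected dollars favor the bet (the St. Petersburg-style failure mode):
expected_money = 0.5 * (2 * bankroll) + 0.5 * 1.0
assert expected_money > bankroll

# Expected log utility: the downside dominates, so the bet is declined.
eu_bet = 0.5 * math.log(2 * bankroll) + 0.5 * math.log(1.0)
eu_keep = math.log(bankroll)
assert eu_bet < eu_keep
```

The disagreement described above is then just about how fast utility curves flatten at the scale of billions of dollars of altruistic spending, not about whether they flatten at all.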

Why? I feel that is an impulse worth exploring.

As an aside, I'm curious how much Shutterstock got paid for the training data they sold to OpenAI.

I expect this to work right up to the point where there's an economic downturn and customers look around for line items they can cut from their budget.

EDIT: Ahh, that was probably not actually right, given that Shutterstock's subscription plan is actually fairly reasonably priced.

For the record I'm definitely not convinced that "80% +- 20% chance" is a coherent thought.

Here's a thought experiment: I give you a coin, which is a typical one and therefore has a 50% chance of landing heads or tails. If I asked you the probability it lands on heads, you'd say 50%, and you'd be right.

Now I give you a different coin. I have told you it is weighted, so that it has an 80% chance of landing on one side and a 20% chance of landing on the other (but I haven't told you whether it favors heads or tails). If I asked you the probability it lands heads when flipped, you should still say 50%.

That's because probabilities are a measure of your own subjective uncertainty about the set of possible outcomes. Probabilities are not a fact about the universe. (This is trivially true because a hypothetical omniscient being would know with 100% certainty the results of every future coinflip, thereby rendering them, by a certain definition, "nonrandom". But they would still be random to humans.)
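The thought experiment above can be sketched numerically. This is a minimal illustration (the 50/50 credence over the two bias hypotheses is the setup's assumption): averaging over which side the coin favors gives exactly 50%, and a simulation where nature hides the bias agrees.

```python
import random

# Two hypotheses about the hidden weighting, held with equal credence.
p_heads_if_favors_heads = 0.8
p_heads_if_favors_tails = 0.2

# Marginal (subjective) probability of heads before learning the bias:
subjective_p_heads = 0.5 * p_heads_if_favors_heads + 0.5 * p_heads_if_favors_tails
assert subjective_p_heads == 0.5

# Simulation: nature secretly picks which side the coin favors,
# then flips it. From our perspective the flip is a 50/50 event.
def flip(rng: random.Random) -> bool:
    bias = rng.choice([0.8, 0.2])  # hidden from the bettor
    return rng.random() < bias     # True = heads

rng = random.Random(0)
n = 100_000
heads = sum(flip(rng) for _ in range(n))
print(heads / n)  # hovers around 0.5
```

The omniscient being in the paragraph above corresponds to an observer who sees `bias` (and the flip outcome) directly; the 50% lives entirely in our ignorance, not in the coin.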

Yeah, I'm concerned about the "destruction of the human species" angle. I've been mulling over whether in surviving timelines TSM is disproportionately likely to get destroyed by China, thereby stalling AI advancement and also plunging the world into a depression since everyone needs their stuff.

Eh, I doubt it's anything that logical. "Pretty sure that X" is, I think, just a colloquialism whose meaning is synonymous with "roughly 80% chance of X", similar to how "I'm basically certain of X" cashes out to "roughly 98% chance of X". Do you think of these statements as being fundamentally different in some way?