@CeePlusPlusCanFightMe's banner

CeePlusPlusCanFightMe

Self-acceptance is bunk. Engineer that shit away.

0 followers   follows 5 users   joined 2022 September 05 17:01:33 UTC

User ID: 641

Verified Email
I think that (1) AI video looks about a year behind AI art, and (2) AI art is about a year from being able to reliably handle physically complex scenes with many moving parts. So, 2 years?

The latter is a more 'rigorous' approach where students are taught the fundamentals such as data types, structures, flow, interpreter vs. compiler, etc. first; then they are made to write programs. These programs are sometimes gamified, but not to the same extent as the former[...] I consider the latter "imparting knowledge" method superior. It's more in line with all the hard sciences I have been taught, and all the good programmers I am aware of claim to have been taught using this method.

I realized as an adult that I do not retain knowledge if I am given that knowledge before I have any way to apply it. I suspect I'm not alone in this; but regardless, I strongly prefer the teaching methodology where you are made acquainted with tools by being given problems that necessitate using those tools. By "tools", here, I mean algorithms and data structures, among other things. (I think this is why, even though I loved my Algorithms and Data Structures courses, I hated Operating Systems and whichever course it was that taught us assembly language. I retained very little from those and do not count them among the good or useful courses I took.)

I'm aware that this "knowledge first, use it later" approach is similar to how the hard sciences are taught; I hated it there as well.

My actual start in programming came from hacking around in the Civilization 4 Python codebase, where I built mods for Fall From Heaven 2 and by necessity had to learn programming syntax-- I was only formally educated in programming later. Contrary to what your argument above would predict, I was by far the strongest coder in my graduating class, and went on to get a job in FAANG (where I was, in my judgment, roughly in the top 20% of the company by programmer strength).

So I don't know everything my "ideal programmer education" would consist of, but I'm pretty sure a big chunk of it would involve writing a self-designed mod for the game Slay the Spire.

Okay, okay, hear me out-- this has a number of advantages:

  1. Slay the Spire is entirely programming-first. There is no "editor" interface of the kind a Unity game would have.

  2. Slay the Spire modding has, as its first step, decompiling the codebase. This gives your student exposure to "the act of having to understand somebody else's extremely nontrivial code".

  3. The codebase is also written using fairly reasonable best practices, particularly for a gaming studio-- it uses polymorphism to deal with all the myriad cards and their effects, which lets you see very intuitively how polymorphism is used in the wild and why it's valuable (see the sketch after this list). (I know that in my own programming education, all of our programs were trivial enough that interfaces and abstract classes seemed weird and pointless, and none of my instructors could give what felt like adequate explanations for their use.)

  4. You can get something pretty cool out the other side-- a game mod! Having something cool and nontrivial that you're in the process of building is worth any number of credit points in inspiring motivation to actually learn programming.

  5. It's Java: a very standard programming language with automated memory management.
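On point 3, here's a minimal sketch of the pattern (a hedged illustration-- the class and method names are mine, not the game's actual decompiled API): every card extends one abstract base class, and the engine only ever talks to that base class, so hundreds of wildly different cards share a single "play a card" code path.

```java
// Sketch of card polymorphism, Slay-the-Spire-style. All names illustrative.
abstract class Card {
    final String name;
    final int cost;

    Card(String name, int cost) {
        this.name = name;
        this.cost = cost;
    }

    // Each concrete card supplies its own effect by overriding this.
    abstract void play(Creature source, Creature target);
}

class Strike extends Card {
    Strike() { super("Strike", 1); }

    @Override
    void play(Creature source, Creature target) {
        target.takeDamage(6); // deal 6 damage
    }
}

class Defend extends Card {
    Defend() { super("Defend", 1); }

    @Override
    void play(Creature source, Creature target) {
        source.gainBlock(5); // gain 5 block
    }
}

class Creature {
    int hp = 70;
    int block = 0;

    void takeDamage(int amount) {
        int absorbed = Math.min(block, amount);
        block -= absorbed;
        hp -= amount - absorbed;
    }

    void gainBlock(int amount) {
        block += amount;
    }
}
```

The engine's card-playing routine just calls card.play(source, target) without caring which concrete card it's holding; adding a new card to a mod means writing one new subclass and touching nothing else. That's the "why polymorphism" lesson that trivial classroom programs never manage to motivate.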

So I think if I were designing a programming practicum, it would feature game modding as a big part of it, with perhaps CodeCombat or a similar coding game in the first couple of weeks to familiarize students with the basic syntax and philosophy of programming in some reasonably entertaining format. And, of course, some problem sets later on that showcase situations where students are given no choice but to use the standard data structures and algorithms.

There is no problem humans face that cannot be reframed as a programming or automation problem. Need food? Build a robot to grow it for you, and another to deliver it to your house. Need to build a robot? Make a factory that automates robot fabrication. Need to solve X medical issue? Write a program that figures out, using simulations or whatever, how to synthesize a chemical or machine that fixes it. Given this, the question of "what happens to programmers when computers can write code for arbitrary domains just as well as programmers can" answers itself.

I expect that fully automating coding will be the last job anybody ever does, either because we're all dead or because we've achieved Fully Automated Luxury Space Communism.

The process of losing weight is mostly eating a more normal amount of calories and engaging in physical activity

Worth pointing out that diet and exercise alone have extremely poor intent-to-treat efficacy: generally between 2% and 4% of body weight lost, as measured by most studies. For instance, see https://www.nature.com/articles/0803015 . Medication dramatically improves the amount of weight successfully lost (see also https://www.nejm.org/doi/full/10.1056/NEJMoa2032183).

I think an underappreciated aspect of this whole situation is that, according to https://seekingalpha.com/symbol/U/income-statement , Unity Technologies is losing a billion dollars a year and is already 3 billion in debt, against a total market cap of 15 billion. This is a company circling the drain; this seems transparently like a Hail Mary play that probably fails but might bring Unity to profitability.

Legally, I have no idea how big a grey area retroactive ToS changes are; the fact that Unity's doing it implies they're not obviously illegal. But I'm also aware that, in kind of a brute legal-realism sense, different domains of law have judges who feel very differently about contracts whose fine print states "also we are allowed to fuck you in arbitrary ways defined by us, no limits, neener neener."

Like: apparently (based on what I've read in Matt Levine articles) corporate debt courts are really, really strict about the letter of the contract. Someone puts "also we can fuck you at any time" in the fine print, and the judge looks at it and is like "well, it's in the contract, guess you shouldn't have signed that one"-- in large part because corporate debt contracts are assumed to have been extremely well-vetted by lawyers on both sides. Everyone is assumed to be extremely savvy. My suspicion (not a lawyer) is that this is less true of consumer-facing EULAs like Unity's. If I have a bunch of reddit posts saying "Unity will never fuck our users who sign this contract," and I have a EULA saying "Unity will never fuck our users. Also Unity, in its sole discretion, reserves the right to amend this contract," and I then do the obvious thing-- amend the contract retroactively to allow user-fucking, and then proceed to fuck our users-- I'm not sure how that would fare in court, but it's not obvious the judge would love me for it.

Of course, there's also the legal-realism idea of "Unity probably just settles out of court with anyone big enough to challenge them, and hoovers up money from indies that can't afford ruinous court fees." Which is of course deeply unethical, and also vibes like eating their seed corn (since who wants to go into business with a company that has, historically, not been willing to honor contracts?).

My expectation is that this ends with Unity sticking to its guns and declaring bankruptcy in a couple years.

There is, I feel, a degree to which cancel culture is just... Twitter culture. Where do mobs find stuff to hate? Twitter. Where do they organize? Twitter. Where are employers nice and easy to contact via, essentially, short-form open letter? Twitter again.

Don't get me wrong, cancel culture can still exist without Twitter, but I expect it to be a far more minor and localized phenomenon.

Anyway, this is a silver lining if shit all goes south and Twitter dies. Though for my part I still get value from Twitter, and I'd be bummed out.

And on reflection, cancel culture is just the dark mirror of legitimate accountability-- MeToo would not have gotten off the ground without Twitter, nor would the protests over various police abuses of power.

A failed Twitter would have lots of cultural consequences.

Is there, do you think, any coherent moral framework you'd endorse under which you should donate to the AMF rather than sending money to friends and family?

I think that's basically reasonable. There is some plot stuff in Terminator that is less realistic or sensible, which I'm not keen on arguing about, but I feel 100% fidelity to reality is unnecessary for Terminator to be an effective AI x-risk story showcasing the basic problem.

I get the impression that most of the pushback from alignment folks is because (1) they feel Terminator comparisons make the whole enterprise look unserious, since Terminator is a mildly silly action franchise, and (2) the series doesn't do a good job of showing why it's really hard to avoid accidentally making Skynet. Like, it's easy to watch that film and think "well, obviously if I were programming the AI I would just tell it to value human well-being. Or maybe just not make a military AI that I give all my guns to. Easy-peasy."

I think it's mainly the first one, though. It's already really hard to bridge the inferential distances necessary to convince normal people that AI x-risk is a thing and not a bunch of out-of-touch nerds hyperventilating about absurd hypotheticals; no point in making the whole thing harder on yourself by letting people associate your movement with a fairly silly action franchise.

For my money, I like Mickey Mouse in The Sorcerer's Apprentice as my alignment fable of choice. The autonomous brooms neither love you nor hate you. But they intend to deliver the water regardless of its impact on your personal well-being.

Disney's Fantasia: way ahead of its time.

I was wondering about that-- img2img is a possibility, but it could also just be successive iteration on prompts until you get something close enough to the original. Especially for some of the more generic images.

The only way to know for sure is to have the proof contain the prompt and random seed.
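A sketch of what that verification could look like (everything here is hypothetical-- generate() stands in for the actual image model-- and the whole scheme only works if generation is bit-for-bit deterministic given the same model version):

```java
import java.security.MessageDigest;
import java.util.HexFormat;

public class ProvenanceCheck {
    // Stand-in for a deterministic image model: the same prompt + seed
    // + model version must reproduce the exact same bytes.
    static byte[] generate(String prompt, long seed) {
        throw new UnsupportedOperationException("hypothetical model call");
    }

    static String sha256(byte[] bytes) throws Exception {
        return HexFormat.of().formatHex(
                MessageDigest.getInstance("SHA-256").digest(bytes));
    }

    // The "proof" is (prompt, seed); verifying means regenerating
    // the image and comparing hashes against the claimed original.
    static boolean verify(byte[] claimedImage, String prompt, long seed)
            throws Exception {
        return sha256(generate(prompt, seed)).equals(sha256(claimedImage));
    }
}
```

In practice this is fragile: floating-point nondeterminism across GPUs and model updates can break bit-exact reproduction, which is part of why proofs like this are hard to make stick.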

Yeah, it's probably fair that your point deserved more care and elaboration than argumentum ad XKCD can provide. Which: sorry about that! I was overly flip.

So!

Fundamentally, software is a rickety tower of abstractions built on abstractions built on abstractions. At the lowest level you've got logic gates; put enough of those (and some other stuff) together in the right configurations and you can make things like arithmetic logic units; put enough pieces of roughly that abstraction layer together and you have yourself a CPU, which with some other bits gets you a computer. Then you have the BIOS, the OS on top of that, the language runtime of whatever you're working in on top of that, and your program running on top of that. Obviously you already know this.

And the reason this basically kinda works is that a long time ago programmers figured out that the way to productivity is to have hardened interfaces to program against; the point of these interfaces is to avoid having to concern yourself with most of the vast underground of abstractions that form a computer. Which means most programmers don't really concern themselves with those details, and honestly it's not clear to me that they should in the typical case.

That's because making maintainable software is about ensuring that you are, at all times, programming at the level of abstraction appropriate to your problem domain, neither going higher (typically resulting in perf issues) nor lower (resulting in bugs and long implementation times as you re-invent the wheel over and over). For every guy who tanks the performance of an app by not respecting the garbage collector, there's another who decides to implement his own JSON parser "for efficiency" and hooks it up to the [redacted] API, resulting in several extremely-difficult-to-debug issues in production that I personally burned several hours fixing, all to shave milliseconds off an hourly batch process's running time. Not that I'm bitter.
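To make the contrast concrete, here's a hedged sketch of the right-abstraction-layer version (the response shape and class names are invented for illustration; the real API was redacted above). One call into a battle-tested library, and every escaping, nesting, and encoding edge case is the library's problem rather than yours:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class BatchStatusParser {
    // Hypothetical response shape, invented for illustration.
    public static class BatchStatus {
        public String jobId;
        public int itemsProcessed;
        public boolean complete;
    }

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Parsing at the appropriate abstraction layer: one library call.
    public static BatchStatus parse(String json) throws Exception {
        return MAPPER.readValue(json, BatchStatus.class);
    }

    public static void main(String[] args) throws Exception {
        BatchStatus s = parse(
                "{\"jobId\":\"abc-123\",\"itemsProcessed\":42,\"complete\":false}");
        System.out.println(s.jobId + ": " + s.itemsProcessed + " items");
    }
}
```

The hand-rolled alternative is hundreds of lines that mostly re-derive what the library already knows, and its bugs live a layer below the problem you're actually being paid to solve.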

So I guess that sort of statement-- "you're only a good programmer if you've used a language with manual memory management"-- feels like unjustified programmer machismo, where someone picks one of those abstraction layers between bare physics and the language runtime more-or-less arbitrarily and says "ah, but only if you deeply understand this specific abstraction layer can you truly be a good programmer."

Admittedly I work in distributed systems, where 99% of things that actually matter for performance occur over the network.

There is a sense in which the job of coding has already been automated away several times. For instance, high-level languages enable a single programmer to accomplish work that would be beyond the grasp of even a dozen assembly-language programmers. (This did, in fact, trash the job market for assembly-language programmers.)

The reason this hasn't resulted in an actual decline in programmer jobs over time is that each time a major tool is invented that makes programming easier (or eliminates the need for it in particular domains), people immediately set their sights on more difficult tasks that were considered impractical or impossible in the previous paradigm.

I don't really see the mechanism by which AI-assisted programming is different in this way. Sure, it means a subset of programming problems will no longer be done by humans. That just means humans will be freed to work on programming and engineering problems that AI can't do, or at least can't do yet; and they'll have the assistance of the AI programmers that automated away their previous jobs.

And if there are no more engineering or programming problems like that, then you now have Automated Luxury Space Communism.

You're definitely right that diet-and-exercise studies include a huge range of effect sizes. I'm not 100% certain how to interpret this; my suspicion is that there's a hidden sliding scale of intervention intensity between "doctor says to you, with gravitas, 'eat healthier'" and "nutritionist locks you in a box and hand-feeds you kale that they calorie-counted themselves," and meta-analyses do a poor job of differentiating between these, including the one I linked.

I would expect that more dramatic effects combined with heavier fadeout of results are a natural indicator that a particular study used an unsustainably aggressive intervention; the meta-analysis indicated that, in both the diet-only and diet-and-exercise groups, people regained about half the weight after a year. Which still leaves 14 pounds, and that isn't anything to sneeze at.

You are also right that there are two ways of doing these studies-- "as prescribed" and "intent-to-treat"-- and as-prescribed results will always show much better effect sizes than intent-to-treat results. In a sense, intent-to-treat isn't measuring the results of the treatment so much as the results of prescribing the treatment. And as prescribed, diet and exercise will always be 100% effective at inducing any amount of weight loss, almost by definition. Hard to beat that, really.
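Toy arithmetic to illustrate the gap (numbers invented): if the regimen produces 10% weight loss in the people who actually stick to it, but only 30% of those prescribed it stick to it and the rest lose nothing, then

$$\text{ITT effect} \approx 0.3 \times 10\% + 0.7 \times 0\% = 3\%,$$

which lands right in the 2-4% intent-to-treat range cited above, even though the treatment "works" for adherents.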

But on the other hand... I kinda figure that intent-to-treat is a fairer representation of real life? In the sense that in real life people don't have the option of getting locked in the nutritionist-box indefinitely. And if two treatments are both effective as-prescribed, but the first one has much worse intent-to-treat efficacy, I want the second treatment.

Most of my thoughts on this are driven by the practicalities of things we can do right now; I see no reason, assuming all technological constraints were lifted, that anyone shouldn't be able to do anything they want with their bodies.

Similarly, I feel like the only strong arguments against transitioning genders stem from the fact that our bio-engineering isn't up to snuff.

https://www.eenewseurope.com/en/openai-backs-norwegian-bipedal-robot-startup-in-23m-round/

Quite aside from the god-inna-box scenario, OpenAI wants to give its AIs robot bodies.

sci-fi scenario

My dude, we are currently in a world where a ton of people have chatbot girlfriends, and AI companies have to work hard to keep their bots from accidentally passing informal Turing tests. You best start believing in sci-fi scenarios, To_Mandalay: you're in one.

The whole vaccine rollout had the theme of "all that is not compulsory is forbidden." That is: adults were banned from taking the vaccines until the FDA had satisfactorily hemmed and hawed over the trials; afterward, vaccines became compulsory for quite a lot of everyday activities. It was a similar (though more dramatic) story with masks-- masks were heavily discouraged by the CDC right up to the point where the CDC began mandating them.

In general, the FDA and CDC are really, really bad at expressing any epistemic attitude other than "utter certainty," even on the frequent occasions when the available info doesn't justify certainty.

For that reason I think it's basically coherent to say that the FDA was simultaneously too restrictive and too pushy about the vaccines.

EDIT: This was also true of boosters! Boosters were forbidden roughly up until the FDA began requiring them for people to count as "fully vaccinated."

Are there any charities to which you would endorse sending 10 percent of your income each year?

Finding links between IQ and genetics is crucial if we ever want polygenic screening for IQ to work well. Shouldn't we want smarter children?

For the record, I'm definitely not convinced that an "80% ± 20% chance" is a coherent thought.

Here's a thought experiment: I give you a coin, which is a typical one and therefore has a 50% chance of landing heads or tails. If I asked you the probability it lands on heads, you'd say 50%, and you'd be right.

Now I give you a different coin. I tell you it is weighted so that it has an 80% chance of landing on one side and a 20% chance of landing on the other (but I don't tell you whether it favors heads or tails). If I asked you the probability it lands heads when flipped, you should still say 50%.
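Spelled out with the law of total probability (from your perspective, heads-favoring and tails-favoring are equally likely):

$$P(\text{heads}) = 0.5 \times 0.8 + 0.5 \times 0.2 = 0.5$$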

That's because probabilities are a measure of your own subjective uncertainty about the set of possible outcomes. Probabilities are not a fact about the universe. (This is trivially true because a hypothetical omniscient being would know with 100% certainty the results of every future coinflip, thereby rendering them, by a certain definition, "nonrandom". But they would still be random to humans.)

Yeah, I'm concerned about the "destruction of the human species" angle. I've been mulling over whether, in surviving timelines, TSMC is disproportionately likely to get destroyed by China, thereby stalling AI advancement and also plunging the world into a depression, since everyone needs their stuff.

Holy shit, I think you could be right. This is exactly the kind of use case NFTs were made for-- ones where you need a foolproof immutable chain of transactions that can never go down.

I did not expect this thread to be the first time I hear of a use case for which NFTs appear to be the best solution.

While I don't disagree with your assessment-- that a lot of these demo images have significant flaws if you look closely-- it seems to me that Imagen is clearly at a place where I would happily use it over stock photos in any context where I might actually want to use stock photos.

They might, but how would you convincingly show an image to have been AI-generated?

Attempting to ban AI art directly seems obviously doomed, due to the impossibility of answering the question "how do you know this piece is AI-generated?" Even avoiding fraud accusations is super easy: I assume that, like two minutes after such a ban gets passed, you'll see a stock photo site hosted in Argentina that's like "yes, all of our art is human-generated, but all the humans are anonymous, 1 shiny dollar per download." Then you would use the image in your own U.S. works and be like "yeah, it came from these Argentina guys, take it up with them."

Banning AI art models seems substantially less doomed in concept, but I suspect that would be vigorously opposed by all the well-moneyed AI giants, given that such a ban would likely make the creation of large-scale multimodal neural networks entirely impossible.

Besides, Disney already has a pretty easy way to deal with copyright violators: sending each one a takedown notice. They already do that today, and it doesn't seem like people are really interested in using Stable Diffusion to make tons of Mickey Mouse media; it's not obvious to me that Disney would want to provoke an expensive and unnecessary legal battle for the sake of marginally reducing the number of takedown notices they have to send.

I did the More Leaders Modmod!

The coding was extremely low-quality and the Python was probably buggy as hell. But it was mine.

EDIT: Wait, I think Ashes of Erebus did end up incorporating some of my work! How's that project going, by the by?

The value of HBD being true is basically nothing, as far as I'm concerned.

I-- and, I think, a lot of other people here-- just have an intense, instinctive flinch response to people saying things that aren't correct. When people say obvious nonsense, even if it's the most well-intentioned nonsense in the world, it triggers that flinch. Obviously I don't say anything about it; I'm not stupid, and I value my social life.

Constrained reproduction is the stupid and unethical way to go about solving dysgenics, though-- it's never gonna happen, and if it did, it would get weaponized almost immediately by the people in power against the people out of power. And that's aside from any ethical considerations about involuntarily blocking people from having kids, which are real and important.

My suggestion? Government-subsidized polygenic screening for everyone, optimizing for health and IQ, let's gooooooo

(Never solve with social technology that which you can instead solve with actual technology)