Porean

3 followers   follows 1 user   joined 2022 September 04 23:18:26 UTC

No bio...

User ID: 266

Training consumes far more matmuls than inference. LLM training operates at batch sizes in the millions -- so if you aren't training a new model, you have enough GPUs lying around to serve millions of customers.
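The claim above can be sanity-checked with a back-of-envelope sketch using the common approximations of ~6ND FLOPs for training and ~2N FLOPs per generated token at inference. The model size, token count, and per-customer usage below are my illustrative assumptions, not figures from the comment:

```python
# Back-of-envelope: training vs. inference compute for an LLM.
# Standard approximations: training ~ 6*N*D FLOPs,
# inference ~ 2*N FLOPs per generated token
# (N = parameter count, D = training tokens).
# All concrete numbers are illustrative assumptions.

N = 70e9   # model parameters (e.g. a ~70B model)
D = 2e12   # training tokens (~2T)

train_flops = 6 * N * D
flops_per_token = 2 * N

# How many generated tokens does one training run "buy" in inference?
tokens_equiv = train_flops / flops_per_token  # simplifies to 3 * D

# If a heavy customer consumes ~100k tokens/day, how many
# customer-days of serving equals one training run?
customer_days = tokens_equiv / 100_000

print(f"training FLOPs:              {train_flops:.2e}")
print(f"inference-equivalent tokens: {tokens_equiv:.2e}")
print(f"customer-days served:        {customer_days:.2e}")
```

Under these assumptions, one training run is compute-equivalent to roughly 6e12 generated tokens, i.e. tens of millions of customer-days of inference, which is the intuition behind "enough GPUs lying around to serve millions of customers."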

Yes indeed, that todo link should've been replaced with a link to a transcript of Emad's recent interview.

I failed to find the transcript in my browser history, so I've relinked the video in its place.

That's actually really cool, wow.

Why do you care about what/who's fault it is? You have goals -- accomplish them or don't.

I don't see this as superstitious/magical. You are basically pressing the "purge all thoughts" button by spamming your brain with a single repeated concept.

I feel we are talking past each other. "In terms of the historical narrative, some artists were inspired by photography and made a cool synthesis of traditional art && the new technology" -- okay. But were there more artists (adjusting for base rate) creating realistic-looking hand-drawn art pieces before or after the proliferation of the camera? Do you agree that the answer is before? Do you grasp the standard concerns shared amongst artists who believe before is the obvious answer?

I can't draw conclusions without knowing what kind of degenerate you are. If you're into hentai, the waifu diffusion model was trained on the 1.4 SD checkpoint && has much room for improvement. If you're a furry, fine-tuned models are currently a WIP and will be available soon. If you're a normal dude, I don't really understand because I honestly think it's good enough at this point.

The only thing I think is really poorly covered at the moment is obscure fetish content. A more complicated mixture of fine-tuning + textual inversion might be needed there, but I do truly believe the needs of >>50% of coomers are satisfiable by machines at this point.

Edit: I am less confident of my conclusion now.

I tend to stare down the same paragraph for two hours and finally squeeze out, word by painful word, something that sounds like the ramblings of a schizophrenic with aphasia

The problem is that you are not writing fast enough. Think about text too slowly and the words will blend together and lose all meaning. Put your brain into Word Salad Generation mode and just dump as you would into a Motte comment; you can edit for style/tone/content once you actually have something to edit.

I've shilled this before, but you should really try The Most Dangerous Writing App to knock out a first draft. As described by Alexey Guzey:

DO ACTUALLY TRY THIS DON’T FLINCH AWAY. This app might seem like the dumbest thing in the world but it DOES REALLY HELP. And if it doesn’t work, you will just lose 5 minutes.

I want to be reminded of who has historically made bad/good takes.

Most of your post is in line with what I believe. The information workers in blue tribe will turn to protectionism as AI-generated content supersedes them. Red tribe blue-collar workers will suffer the least, and the Republicans will have their first and last opportunity to lure techbros away from the progressive sphere of influence.

There is one thing, though.

I simply do not foresee Republicans being likely to make AI regulations (or deregulation) a major policy issue in any near-term election, whilst I absolutely COULD see Democrats doing so.

It only takes one partisan to start a conflict. Republicans might not initially care, but once the Democrats do, I expect it'll be COVID all over again -- a sudden flip and clean split of the issue between parties.

But this is just nitpicking on my part.

But what will the Program be?

Will it be state persecution of racist AI developers to protect disadvantaged minorities? A corporate utopia of AI-driven capitalist monoculture? An anarchist-adjacent future of AI empowered individuals purging the remnants of the old world?

Or maybe just foom and we all die. That's why I think it's worth discussing!

How about the fact that you could make a video about literally anything happening at all? Fake any event you want. Nudes, terrorism, declarations of war... ideally we would learn to just ignore all of the fake content, but if we could do that, why would ads be a problem anymore?

where did you learn that from?

Any independent replications?

Sure.

OCR-VQGAN

Ah, interesting!

Resolve?

Delete (yes -- delete) all distractions. Mute everything. Lock your phone in a safe. Ensure that the only kind of rest you're permitted is passing out on the floor from exhaustion.

Okaay I have no idea what's going on with the comment box. The link I have in there right now when I click the edit button is:

https://streamable.com/e/e/ollvts

but it's getting rendered as

https://streamable.com/e/e/e/ollvts

Roughly speaking, I see your point and agree that it's possible we're just climbing a step further up on an infinite ladder of "things to do with computers".

But I disagree that it's the most likely outcome, because:

  1. I think the continued expansion of the domain space for individual programmers can be partially attributed to Moore's Law. More Is Different; a JavaScript equivalent could've easily been developed in the 80s but simply wasn't, because there wasn't enough computational slack at the time for a sandboxed, garbage-collected, asynchronous scripting language to run complex enterprise graphical applications. Without the regular growth in computational power, I expect innovations to slow.

  2. Cognitive limits. Say a full stack developer gets to finish their work in 10% of the time. Okay, now what? Are they going to spin up a completely different project? Make a fuzzer, a GAN, a SAT solver, all for fun? The future ability of AI tools to spin up entire codebases on demand does not help in the human learning process of figuring out what actually needs to be done. And if someone makes a language model to fix that problem, then domain knowledge becomes irrelevant and everyone (and thus no one) becomes a programmer.

  3. I think, regardless of AI, that the industry is oversaturated and due for mass layoffs. There are currently weak trends pointing in this direction, but I wouldn't blame anyone for continuing to bet on its growth.

If it does then it will be smart enough to self-modify,

This does not work out the way you think it will. A p99-human tier parallelised unaligned coding AI will be able to do the work of any programmer, will be able to take down most online infrastructure by merit of security expertise, but won't be sufficient for a Skynet Uprising, because that AI still needs to solve for the "getting out of the digital box and building a robot army" part.

If the programming AI were a generalised intelligence, then of course we'd all be fucked immediately. But that's not how this works. What we have are massive language models that are pretty good at tackling any kind of request that involves text generation. Solve for forgetfulness in transformer models and you'll only need one dude to maintain that full stack app instead of 50.

My instinct is that this should be smaller and easier than the Stable Diffusion I run on my PC, but maybe I am just super wrong about that?

Super-wrong is correct. Nobody has a consumer-sized solution for that, and if it ever happens it'll be huge news.

Completely true. Current advances do not guarantee the "no more jobs" dystopia many predict. My excitement is likely primarily a result of how much I've involved myself in observing this specific little burst of technological displacement.

We don't have infinite moderators.

You were MetroTrumper? Holy shit.

  1. We aren't important enough. We have about a dozen thousand users who do not much more than words-words-words in a closed community.

  2. We have some pretty good programmers onboard. The codebase is probably not clean right now, but I think it's a matter of time.

This is a great response.