Porean

3 followers   follows 1 user   joined 2022 September 04 23:18:26 UTC

No bio...

User ID: 266


This is a great response.

(your first two links are the same)

I agreed with the gist of the article, but I can't help but wonder if this topic was 99% covered by LW at some point.

Actually, just assume I'm wrong. I don't have the links.

Leave the rest of the internet at the door.

Or could you at least have something more substantial to talk about than, "redditors upvote dumb shit, news at 11"?

What's the point?

No, really -- let's say you win. You've convinced the entirety of the western public that COVID-19 was made in a Chinese biolab. Okay, now what?

I have 180°'d on my opinions, thanks.

I hate how much coverage the AI/rat community is giving to "Loab". It seems abundantly clear to me that it's a social hoax (or at least just a funny art exhibition) rather than a demonstration of anything insightful about the latent space of diffusion models.

Bad idea, there. Knowing their username means you get to filter them from day 1.

Text, audio-vqvae, image-vqvae (possibly video too) tokens in one stream

How do you suppose it reads tiny words with a VQVAE? Even an RQVAE shouldn't have the pixel precision needed to see tiny 5px font letters.
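For a sense of scale, here's a rough back-of-the-envelope (the image size and downsample factors are illustrative assumptions, not taken from any particular model): a VQ-VAE token summarizes a whole patch of pixels, and a 5px glyph fits well inside one patch.

```python
# Back-of-the-envelope: how much image area a single VQ-VAE latent token covers.
# The 256px input and 8x/16x downsample factors are hypothetical but typical choices.
GLYPH_PX = 5  # tiny on-screen font, as in the comment above

for downsample in (8, 16):
    latent_side = 256 // downsample  # tokens per side of the latent grid
    patch_px = downsample            # each token summarizes a patch this wide
    print(f"{downsample}x downsample: {latent_side}x{latent_side} tokens, "
          f"each covering a {patch_px}x{patch_px}px patch "
          f"(a {GLYPH_PX}px glyph occupies well under one token)")
```

So even before quantization error, a 5px letter is sub-token detail; an RQ-VAE's extra codebook depth refines the patch, it doesn't shrink it.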

Training consumes far more matmuls than inference. LLM training operates at batch sizes in the millions -- so if you aren't training a new model, you have enough GPUs lying around to serve millions of customers.
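A crude FLOPs comparison makes the point. This uses the standard approximations (~6·N FLOPs per training token, ~2·N per generated token at inference); the model size, corpus size, and per-customer token counts below are placeholders I picked for illustration, not real deployment figures.

```python
# Crude training-vs-serving compute comparison.
# Approximations: training ~ 6 * params * tokens, inference ~ 2 * params * tokens.
# All concrete numbers are illustrative assumptions.
PARAMS = 100e9            # hypothetical 100B-parameter model
TRAIN_TOKENS = 300e9      # hypothetical training corpus
TOKENS_PER_USER = 10_000  # hypothetical tokens generated per customer

train_flops = 6 * PARAMS * TRAIN_TOKENS
flops_per_user = 2 * PARAMS * TOKENS_PER_USER

print(f"training run:  {train_flops:.2e} FLOPs")
print(f"one customer:  {flops_per_user:.2e} FLOPs")
print(f"customers served for the cost of one training run: "
      f"{train_flops / flops_per_user:,.0f}")
```

With those placeholder numbers the training run buys you compute for tens of millions of customers, which is the point: the fleet that trains the model dwarfs what's needed to serve it.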

Resolve?

Delete (yes -- delete) all distractions. Mute everything. Lock your phone in a safe. Ensure that the only kind of rest you're permitted is passing out on the floor from exhaustion.

Okay, I have no idea what's going on with the comment box. The link I have in there right now when I click the edit button is:

https://streamable.com/e/e/ollvts

but it's getting rendered as

https://streamable.com/e/e/e/ollvts

Roughly speaking, I see your point and agree that it's possible we're just climbing a step further up on an infinite ladder of "things to do with computers".

But I disagree that it's the most likely outcome, because:

  1. I think the continued expansion of the domain space for individual programmers can be partially attributed to Moore's Law. More Is Different; a JavaScript equivalent could've easily been developed in the 80s but simply wasn't, because there wasn't enough computational slack at the time for a sandboxed, garbage-collected, asynchronous scripting language to run complex enterprise graphical applications. Without the regular growth in computational power, I expect innovations to slow.

  2. Cognitive limits. Say a full stack developer gets to finish their work in 10% of the time. Okay, now what? Are they going to spin up a completely different project? Make a fuzzer, a GAN, an SAT solver, all for fun? The future ability of AI tools to spin up entire codebases on demand does not help in the human learning process of figuring out what actually needs to be done. And if someone makes a language model to fix that problem, then domain knowledge becomes irrelevant and everyone (and thus no one) becomes a programmer.

  3. I think, regardless of AI, that the industry is oversaturated and due for mass layoffs. There are currently weak trends pointing in this direction, but I wouldn't blame anyone for continuing to bet on its growth.

If it does then it will be smart enough to self-modify,

This does not work out the way you think it will. A p99-human-tier, parallelised, unaligned coding AI will be able to do the work of any programmer, and will be able to take down most online infrastructure by virtue of its security expertise, but it won't be sufficient for a Skynet Uprising, because that AI still needs to solve for the "getting out of the digital box and building a robot army" part.

If the programming AI was a generalised intelligence, then of course we'd be all fucked immediately. But that's not how this works. What we have are massive language models that are pretty good at tackling any kind of request that involves text generation. Solve for forgetfulness in transformer models and you'll only need one dude to maintain that full stack app instead of 50.

Completely true. Current advances do not guarantee the "no more jobs" dystopia many predict. My excitement is likely primarily a result of how much I've involved myself in observing this specific little burst of technological displacement.

We don't have infinite moderators.

and some modest computation capability (say, a cluster of 3090s or a commitment to spend a moderately large sum on lambda.labs)

This is not sufficient. The rig as described by neonbjb is only 192 GB of VRAM; fine-tuning an LM with 130B params (in the best possible case of GLM-130B; the less said about the shoddy performance of OPT/BLOOM, the better) requires somewhere in the ballpark of ~1.7 TB of VRAM (at least 20+ A100s), and that's at batch size 1 with gradient checkpointing, mixed precision, 8-bit Adam, fused kernels, no KV cache, etc. If you don't have an optimised trainer ready to go (or, god forbid, you're trying distributed training), you should expect double the requirements.
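The arithmetic behind that ballpark, roughly (this uses the usual per-parameter byte counts for mixed-precision Adam as a rule of thumb; it ignores activations, buffers, and framework overhead, which is why the real number lands higher):

```python
# Rough VRAM estimate for full fine-tuning of a 130B-parameter model,
# assuming standard mixed-precision Adam bookkeeping. Figures are approximations.
PARAMS = 130e9

bytes_per_param = {
    "fp16 weights":        2,
    "fp16 gradients":      2,
    "fp32 master weights": 4,
    "Adam moment m":       4,  # ~1 byte with 8-bit Adam
    "Adam moment v":       4,  # ~1 byte with 8-bit Adam
}

total = PARAMS * sum(bytes_per_param.values())
total_8bit = PARAMS * (2 + 2 + 4 + 1 + 1)

print(f"~{total / 1e12:.1f} TB with fp32 Adam states")      # ~2.1 TB
print(f"~{total_8bit / 1e12:.1f} TB with 8-bit Adam")        # ~1.3 TB
print(f"80GB A100s needed (8-bit Adam, before activations): "
      f"{total_8bit / 80e9:.0f}")                            # ~16, so 20+ in practice
```

Optimizer and gradient state alone puts you well past a terabyte; activations and any inefficiency in the trainer push it to the 20+ A100 range.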

The cost of that isn't too bad, of course. Maybe $25 an hour on Lambda Labs; any machine learning engineer can surely afford that. My larger doubt is whether any of this will take place at all.

or?

Yes indeed, that todo link should've been replaced with a link to a transcript of Emad's recent interview.

I failed to find the transcript in my browser history, so I've relinked the video in its place.

That's actually really cool, wow.

I don't see this as superstitious/magical. You are basically pressing the "purge all thoughts" button by spamming your brain with a single repeated concept.

I feel we are talking past each other. "In terms of the historical narrative, some artists were inspired by photography and made a cool synthesis of traditional art && the new technology" -- okay. But were there more artists (adjusting for base rate) creating realistic-looking hand-drawn art pieces before or after the proliferation of the camera? Do you agree that the answer is before? Do you grasp the standard concerns shared among artists who believe before is the obvious answer?

I can't draw conclusions without knowing what kind of degenerate you are. If you're into hentai, the waifu diffusion model was trained on the 1.4 SD checkpoint && has much room for improvement. If you're a furry, fine-tuned models are currently a WIP and will be available soon. If you're a normal dude, I don't really understand because I honestly think it's good enough at this point.

The only thing I think is really poorly covered at the moment is obscure fetish content. A more complicated mixture of fine-tuning + textual inversion might be needed there, but I do truly believe the needs of >>50% of coomers are satisfiable by machines at this point.

Edit: I am less confident of my conclusion now.

I tend to stare down the same paragraph for two hours and finally squeeze out, word by painful word, something that sounds like the ramblings of a schizophrenic with aphasia

The problem is that you are not writing fast enough. Think about text too slowly and the words will blend together and lose all meaning. Put your brain into Word Salad Generation mode and just dump as you would into a Motte comment; you can edit for style/tone/content once you actually have something to edit.

I've shilled this before, but you should really try The Most Dangerous Writing App to knock out a first draft. As described by Alexey Guzey:

DO ACTUALLY TRY THIS DON’T FLINCH AWAY. This app might seem like the dumbest thing in the world but it DOES REALLY HELP. And if it doesn’t work, you will just lose 5 minutes.