
Small-Scale Question Sunday for April 30, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


How are you feeling about "The AI Takeover"? Have you gotten more optimistic, more pessimistic?

I have to be honest that I'm positively gleeful. AI appears to be trending in a more anarchic direction as opposed to the more totalitarian direction that everyone assumed. We now have open source LLMs running on home computers approaching parity with the big corporate datacenter AIs. LLMs appear to be a totally uncontainable technology, with everyone training small LLMs by stealing data from big LLMs. I feel like Christmas has come early.

And I have to laugh that the people who hurt me and held me back in life are the same people, left, right, and center, who are now so terrified that AI will disrupt society. I say let AI disrupt society! If the average US man knew what I know after living in a foreign country, it would totally destroy American society. If you're one of these people who demands that society be morally structured in a specific way (what I might call a moralist), well, how are you going to force people to play your social games in the future? I can just tell my assistant to dream up a world where I don't have to play said games. Uncontainable AIs make uncontainable humans.

A couple months ago I posted a prediction that humans would mostly be socially isolated in the age of AI, due to the forces of moralists making ordinary human interaction impossible. But now I think that the moralists will become totally powerless. (Moralists would be the top of the political compass, sometimes called authoritarians, and the opposite of anarchists.) If nobody has to work, then that takes away the biggest lever of power in most people's lives. And TPTB would have no choice but to either share economic power or kill us. (I think the latter outcome, and the related idea of Big Yud DOOM, are unlikely for other reasons I can get into if you're curious.) If TPTB don't control all the robots, that's an even less governable situation!

Anyway, it's been a crazy couple of months! How have the recent developments in AI changed how you feel about the future? More or less concerned about Big Yud DOOM? Or do you think it's all overhyped?

If the average US man knew what I know after living in a foreign country, it would totally destroy American society.

What do you mean? What do you know?

And TPTB would have no choice but to either share economic power or kill us. (I think the latter outcome, and the related idea of Big Yud DOOM, are unlikely for other reasons I can get into if you're curious.)

I am curious, please tell.

Yea, it's definitely been a crazy couple of months.

We now have open source LLMs running on home computers approaching parity with the big corporate datacenter AIs.

Sort of. I've been playing around with the publicly available LLMs, including the largest one, the 65 billion-parameter Llama model from Meta, and I find it's somewhere around the quality of GPT-3, nowhere near the quality of GPT-4. I'm also running it quantized to 4 bits on my CPU rather than on my GPU, so it's dogshit slow -- a word every ~2 seconds. Just enough to slake my curiosity. To run it at a conversational speed, you need a GPU with 40GB of VRAM, so you're probably looking at dropping $4,500 minimum on just the GPU, and maybe closer to $15,000 -- not exactly available to the masses. Maybe in 4 years. Moore's law is still kicking on the GPU side.
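For a rough sense of where those numbers come from, here's a back-of-the-envelope sketch (weights only; KV cache, activations, and runtime overhead add more on top, so treat the figures as illustrative rather than exact):

```python
# Back-of-the-envelope memory footprint for a 65B-parameter model's weights
# at different quantization levels. Weights only -- KV cache and runtime
# overhead add more, which is why even 4-bit weights (~30 GiB) push you
# toward a ~40GB card for fast inference.

PARAMS = 65e9  # 65 billion parameters

for label, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    gib = PARAMS * bits / 8 / 2**30
    print(f"{label:>5}: ~{gib:.0f} GiB of weights")

# Approximate output:
#  fp16: ~121 GiB of weights
#  int8: ~61 GiB of weights
#  int4: ~30 GiB of weights
```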

I'm not that impressed with any of the LLMs. I know it's a controversial take around here, but I don't think they're doing any reasoning at all. The reasoning is in the humans who wrote their training data, and the LLMs are doing a great job of predicting the text. You'll see it for yourself if you just play around with one enough until you see it screw up in ways that demonstrate it has no idea what it's talking about. Adding parameters and training data helps push back the boundaries of what it can do, but the fundamental issue remains. The lights are on, but nobody's home. There need to be more algorithmic insights before we get AGI.

I do, however, think Big Yud is right that eventually we will get to AGI, and unless we're extraordinarily conscientious, it'll kill us all. His arguments aren't rigorous to the point of mathematical certainty, but it seems like that's the default path unless some big, as-of-yet-unpredicted happening intervenes. The future is full of such things, of course. But I don't share anybody's concern about LLMs or transformers. If anything, all the recent hype about LLMs' shockingly good performance improves humanity's odds. But we're actually going to need worldwide agreements, and the willingness to risk shedding blood by bombing defectors' data centers, if we want humanity to survive our first contact with an alien species.

To run it at a conversational speed, you need a GPU with 40GB of VRAM, so you're probably looking at dropping $4,500 minimum on just the GPU, and maybe closer to $15,000 -- not exactly available to the masses. Maybe in 4 years. Moore's law is still kicking on the GPU side.

I'm curious what the availability of standalone AI processors might do. You can get, today, a Jetson AGX Orin with 64GB of VRAM on a development board for around 2K USD. That's not as fast as an nVidia A100 (I think ~30% of the speed for Int8 perf? supposed to be comparable to a V100), and data transfer rates can be obnoxious since everything has to get shoved through an ARM processor, but for non-video applications, it's really hard to beat the price point or the thermal envelope while still having enough VRAM for the big models.

((At least unless nVidia's consumer modules end up surprising everyone, but I expect they're more likely to boost to 24GB/32GB for the mid- and high-end cards for this generation, rather than 32GB/48GB or higher.))

That doesn't make them a good investment today, since 2K worth of standard GPU will also let you do everything else GPUs are used for, but if a killer app comes about, there are a lot of ways for people to run at smaller scales in home environments.
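To put the price point in perspective, a quick dollars-per-GB comparison using only the figures already mentioned in this thread (illustrative, not real quotes):

```python
# Crude price-per-GB-of-accelerator-memory comparison, using the rough
# figures quoted in this thread (illustrative list prices, not real quotes).
options = {
    "Jetson AGX Orin 64GB dev kit": (2_000, 64),
    "40GB GPU (low-end estimate)": (4_500, 40),
    "40GB GPU (high-end estimate)": (15_000, 40),
}

for name, (usd, gb) in options.items():
    print(f"{name}: ~${usd / gb:,.0f} per GB")

# ~$31/GB for the Orin vs. ~$112-$375/GB for the 40GB cards -- attractive
# for memory-bound, latency-tolerant workloads.
```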

I don't know.

This guy (https://news.ycombinator.com/item?id=35029766) claims to get about 4 words per second out of an A100 running the 65B model. That's a reasonable reading pace. But I'm sure there's going to be all sorts of applications for slower output of these things that no one has yet dreamt of. One thing that makes Llama interesting (in addition to being locally-runnable) is that Meta appears to have teased out more usefulness per parameter -- it's comparable with competing models that have 3-4 times as many parameters. And now there's supposedly a non-public Llama that's 546 billion parameters. (I think all of these parameter numbers are coming from what can fit in a single A100 or a pod of 8x A100s.) Sadly, I think there's already starting to be some significant overlap between the cognitive capabilities of the smartest language models and the least capable deciles of humans. The next ten years are going to be a wild ride for the employment landscape. For reference, vast.ai will rent you an A100 for $1.50/hr.
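Taking that HN throughput and the vast.ai price at face value, the implied rental cost per word is pretty small -- a quick sketch:

```python
# What the quoted throughput implies for rented-GPU inference cost, taking
# the numbers in this comment at face value (4 words/sec on an A100 for the
# 65B model, $1.50/hr rental on vast.ai).
words_per_second = 4
dollars_per_hour = 1.50

words_per_hour = words_per_second * 3600        # 14,400 words per hour
cost_per_1k_words = dollars_per_hour / words_per_hour * 1000

print(f"{words_per_hour:,} words/hr, ~${cost_per_1k_words:.2f} per 1,000 words")
# -> 14,400 words/hr, ~$0.10 per 1,000 words
```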

I'm sorry, I wasn't clear. The aliens we're about to meet are the AGIs.

To be clear, I think there will be UBI, but I think every government will attach strings to it that involve a hybrid of existing incentives, social credit and new incentives.

I don't think it actually changes your point, but I don't think "UBI" is an appropriate term for the type of welfare / economy design you're talking about. The key distinguishing factor between UBI and welfare is that UBI is universal, paid out at the exact same rate to all adults (although I guess I've seen discussions over exactly how to handle payments to people in prison, with some versions stopping UBI payments to them).

I don't think this is an argument against expecting the world to possibly look like this or even an argument against expecting politicians to implement it and call it UBI.


UBI obliterates the 'carrot'.

I assume the idea is that we're predicting a society where there are no jobs (since AI does all of them), so there are no opportunities for income other than welfare? If that's the case, then not having a welfare scheme is sentencing all of the non-owners to death. If it's not the case, then there's still a carrot. Maybe there's some in-between here where somehow neither is the case?