
Small-Scale Question Sunday for April 30, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

To run it at a conversational speed, you need a GPU with 40GB of VRAM, so you're probably looking at dropping $4,500 minimum on just the GPU, and maybe closer to $15,000 -- not exactly available to the masses. Maybe in 4 years. Moore's law is still kicking on the GPU side.
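A back-of-envelope check on that 40GB figure: model weights dominate VRAM, so parameter count times bytes per parameter, plus some headroom for activations and the KV cache, gives a rough floor. A sketch (the quantization choices and the ~20% overhead factor are my assumptions, not exact requirements):

```python
# Rough VRAM floor for serving an LLM: weights + ~20% overhead for
# activations and KV cache (overhead factor is an assumption).
def vram_gb(params_billions: float, bytes_per_param: float,
            overhead: float = 1.2) -> float:
    return params_billions * bytes_per_param * overhead

# A ~30B-param model quantized to int8 (1 byte/param) -> ~36 GB,
# right at the edge of a 40GB card.
print(round(vram_gb(30, 1), 1))   # 36.0

# A 65B model at fp16 (2 bytes/param) -> ~156 GB: multi-GPU territory
# unless you quantize aggressively.
print(round(vram_gb(65, 2), 1))   # 156.0
```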

I'm curious what the availability of standalone AI processors might do. You can get, today, a Jetson AGX Orin with 64GB of VRAM on a development board for around $2,000. It's not as fast as an Nvidia A100 (I think ~30% of the Int8 throughput? it's supposed to be roughly comparable to a V100), and data transfer rates can be obnoxious since everything has to get shoved through an ARM processor, but for non-video applications it's really hard to beat the price point or the thermal envelope while still having enough VRAM for the big models.

(At least unless Nvidia's consumer cards end up surprising everyone, but I expect they're more likely to bump the mid- and high-end cards to 24GB/32GB for this generation, rather than 32GB/48GB or higher.)

That doesn't make them a good investment today, since $2k worth of standard GPU will also let you do everything else GPUs are used for, but if a killer app comes along, there are a lot of ways for people to run these things at smaller scales in home environments.
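One way to think about the speed gap: autoregressive decoding is mostly memory-bandwidth-bound, since each generated token has to read every weight once, so bandwidth divided by model size gives a rough ceiling on tokens per second. A sketch, where the bandwidth figures are approximate published specs (my assumption) and real throughput will be lower:

```python
# Rough decode-speed ceiling for a memory-bandwidth-bound LLM:
# tokens/sec <= memory bandwidth / bytes read per token (~ model size).
def max_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    return bandwidth_gb_s / model_gb

orin_bw = 204.8    # Jetson AGX Orin, ~205 GB/s (approximate spec, assumption)
a100_bw = 1555.0   # A100 40GB, ~1.6 TB/s HBM2 (approximate spec, assumption)
model = 36.0       # ~30B params in int8 plus overhead, in GB

print(round(max_tokens_per_sec(orin_bw, model), 1))   # ~5.7 tok/s
print(round(max_tokens_per_sec(a100_bw, model), 1))   # ~43.2 tok/s
```

So the Orin plausibly lands in the "slow but conversational" range on a model that size, consistent with the ~30% figure above.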

I don't know.

This guy (https://news.ycombinator.com/item?id=35029766) claims to get about 4 words per second out of an A100 running the 65B model. That's a reasonable reading pace. But I'm sure there are going to be all sorts of applications for slower output of these things that no one has yet dreamt of.

One thing that makes Llama interesting (in addition to being locally runnable) is that Meta appears to have teased out more usefulness per parameter: it's comparable with competing models that have 3-4 times as many parameters. And now there's supposedly a non-public Llama that's 546 billion parameters. (I think all of these parameter counts are driven by what can fit in a single A100 or a pod of 8x A100s.)

Sadly, I think there's already starting to be significant overlap between the cognitive capabilities of the smartest language models and the least capable deciles of humans. The next ten years are going to be a wild ride for the employment landscape. For reference, vast.ai will rent you an A100 for $1.50/hr.
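Putting the two figures above together gives a rough cost per word of output (the 4 words/sec and $1.50/hr numbers are the ones quoted in this thread; actual rental rates fluctuate):

```python
# Cost sketch: renting an A100 at $1.50/hr generating ~4 words/sec.
words_per_hour = 4 * 3600                 # 14,400 words/hr
cost_per_million = 1.50 / words_per_hour * 1_000_000

print(words_per_hour)                     # 14400
print(round(cost_per_million, 2))         # ~$104.17 per million words
```

Around a hundred dollars per million words is already well under what you'd pay a human for the same volume, which is what makes the employment-overlap point bite.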