
Small-Scale Question Sunday for January 29, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

LLM inference isn't that much of a problem, because batch size scales rather cheaply, and there are tricks to make it cheaper still at production volume. OpenAI is still burning through millions a month (we aren't sure of the exact figure), but that's probably less than their current training runs cost. This is one of the relevant papers:

We store tensors such as weights and the KV cache in on-device high-bandwidth memory (HBM). While there are other tensors that pass through the HBM, their memory footprint is much smaller, so we focus on just these two largest groups of tensors. ... Even though the computational cost of attention is relatively small, it can still account for a significant fraction of memory capacity and bandwidth costs, since (unlike the weights) the KV cache is unique for each sequence in the batch. ... For a 500B+ model with multihead attention, the attention KV cache grows large: for batch size 512 and context length 2048, the KV cache totals 3TB, which is 3 times the size of the model’s parameters.
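To see where the quoted figures come from, and why batch size is cheap to scale, here's a rough sketch. The shape below (118 layers, 48 heads, head dimension 128) is my assumption, picked only so the arithmetic reproduces the quoted 3 TB; it isn't taken from the paper.

```python
# Back-of-envelope memory math for a hypothetical 500B-parameter decoder with
# standard multi-head attention. Layer/head counts are assumptions chosen to
# reproduce the quoted 3 TB KV cache, not the config of any published model.
PARAMS   = 500e9
N_LAYERS = 118
N_HEADS  = 48
D_HEAD   = 128
BYTES    = 2        # bf16
BATCH    = 512
CONTEXT  = 2048

# Weights: streamed through the chips once per decoding step, shared by the
# whole batch.
weight_bytes = PARAMS * BYTES

# KV cache: two tensors (K and V) per layer per token, and unique to every
# sequence in the batch, so it grows with batch size and context length.
kv_bytes = 2 * N_LAYERS * N_HEADS * D_HEAD * BYTES * CONTEXT * BATCH

print(f"weights:  {weight_bytes / 1e12:.1f} TB")       # ~1.0 TB
print(f"KV cache: {kv_bytes / 1e12:.1f} TB "           # ~3.0 TB
      f"({kv_bytes / weight_bytes:.1f}x the weights)")

# Why batching is cheap: at small batch sizes decoding is memory-bound, so the
# dominant per-step cost is reading the weights, and that read is amortized
# over the batch. Per-sequence weight traffic falls roughly as 1/batch until
# the KV cache or raw compute becomes the bottleneck.
for b in (1, 64, 512):
    print(f"batch {b:>3}: {weight_bytes / b / 1e9:>7.1f} GB of weight reads "
          f"per sequence per step")
```

The weights are a fixed per-step cost shared by the whole batch; the KV cache is what actually grows with batch size and context, which is where the serving tricks focus.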

Also, contra @xanados, I'd say they probably don't run those models in bf16, so make that 180 GB.
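For reference, the arithmetic (assuming a GPT-3-sized 175B-parameter model, which is presumably what's being discussed):

```python
# Parameter memory for a 175B-parameter model at different precisions.
# ~180 GB corresponds to 8-bit weights; bf16 would be roughly double that.
params = 175e9
for name, bytes_per_param in [("fp32", 4), ("bf16", 2), ("int8", 1)]:
    print(f"{name}: {params * bytes_per_param / 1e9:.0f} GB")
# fp32: 700 GB, bf16: 350 GB, int8: 175 GB
```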

@Porean investigated a similar question recently.

Re: your question – I also don't know of any convenient one-click-install local LLM-based applications (at least for natural-language-focused models). There are not-terrible models you can run locally (nothing close to ChatGPT, to be clear), but you'll have to fiddle with the command line for now. There's no rush to develop such apps because, again, models of a practical size won't fit on any consumer hardware. Text is vastly less redundant than images.
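For a sense of what that fiddling looks like, here's a minimal sketch using the HuggingFace transformers library. The model name is just an example (anything your VRAM can hold works), and it assumes PyTorch, transformers, and accelerate are installed, with roughly 16 GB of GPU memory free for fp16 weights.

```python
# Minimal local text generation with HuggingFace transformers.
# EleutherAI/gpt-j-6B is an example; a 6B model in fp16 wants ~12-16 GB of VRAM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # halves memory relative to fp32
    device_map="auto",          # spread layers across available GPUs/CPU (needs accelerate)
)

prompt = "The main bottleneck in running large language models locally is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

None of this is hard, but it's a long way from a one-click install, and anything much bigger simply doesn't fit on consumer hardware.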

I hope people soon realize the risk of offloading their brain extensions to the Microsoft cloud, and learn to pool resources to rent servers for community-owned LLMs economically, or something along those lines.