
Small-Scale Question Sunday for January 29, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


I am not a coder. If something requires me to install dependencies, I can sometimes successfully do that with very detailed instructions, but often I fail even then.

I also do not trust anything I type into an online AI not to be either used against me or filtered in ways I will find annoying.

So I was pleasantly surprised to discover NMKD's Stable Diffusion GUI, which offers a more or less one-click install (unrar, start the .exe, click the "install" button), so I could play around with this image-generating AI stuff everyone has been talking about. It has been interesting and fun, and doesn't require me to send my prompts for use or abuse by distant others.

(I recognize the possibility that the creator is essentially doing this to me anyway--I have no way of checking that on my own--but that is a larger problem and not the point of my simple question.)

Here's my simple question: I would like to have something like NMKD's Stable Diffusion GUI, but for text-based AI instead of art-generating AI. That is--a one-click-install, locally-hosted, no-expertise-required version of ChatGPT or similar. My instinct is that this should be smaller and easier than the Stable Diffusion I run on my PC, but maybe I am just super wrong about that?

Anyway, do any of you know of such a thing?

My instinct is that this should be smaller and easier than the Stable Diffusion I run on my PC, but maybe I am just super wrong about that?

Super-wrong is correct. Nobody has a consumer-sized solution for that, and if it ever happens it'll be huge news.

Just saying:

Installing software dependencies to run it is very simple; you can learn it in 15 minutes.

It depends on the environment, but e.g. for popular JavaScript command-line applications you need to install the JavaScript runtime (Node.js), which also installs npm, the Node package manager that lets you install dependencies.

You git clone a JS repository you find cool.

You run npm install.

And how you run the app varies: it could be an npx command or something like npm run serve, but that detail is described in the README file of the GitHub repository (see the section on how to install/run).

For other programming languages, the steps are very similar and straightforward.
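To make that concrete, here is roughly what the sequence above looks like in a terminal. The repository URL is a placeholder, and the exact run command depends on the project's README:

```sh
# Hypothetical example -- substitute a real repository and check its README.
git clone https://github.com/some-user/some-cool-js-app.git
cd some-cool-js-app

# Install the dependencies listed in the project's package.json
npm install

# Starting the app varies by project; "npm start" or a script named in the
# README's install/run section are the usual candidates.
npm start
```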

Installing software dependencies to run it is very simple; you can learn it in 15 minutes.

I mean, you're not wrong, I've definitely done it several times. It's just that, more often than not I get some error or other that I find myself completely unable to resolve, and after a few hours of troubleshooting I give up. I guess you could say it's not the software dependencies that are the problem per se, it is my inability to troubleshoot them when they don't work the way I expect them to. And being as I make my living in other ways, it has never yet been worth my time to "get good" in this domain.

Usually things are trivial and just work, but not all technological ecosystems are equal; for example, while JavaScript programs generally work fine, Python programs often have dependency issues (too old / out of sync). If the error message is a dependency version conflict then yes, you can't easily solve it by yourself; often the thing to do in those cases is to look at the corresponding GitHub issue or to open one. That way you can offload the troubleshooting onto others or find out that people have already shared a solution.

My instinct is that this should be smaller and easier than the Stable Diffusion I run on my PC, but maybe I am just super wrong about that?

I believe large language models take much more VRAM for generation than image models. For example, the open model BLOOM requires 352GB. So it's not realistic to do it on your local machine at the moment.
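To show where a number like that comes from (my arithmetic, not the original commenter's): BLOOM has about 176 billion parameters, and at 16-bit precision each parameter takes 2 bytes, so the weights alone are on the order of 352 GB before counting activations or the attention cache.

```python
# Back-of-the-envelope weight memory, assuming 16-bit (2 bytes per parameter).
# Real VRAM needs are higher: activations, KV cache, and framework overhead
# are ignored here.
bloom_params = 176e9      # BLOOM parameter count, ~176B
bytes_per_param = 2       # fp16 / bf16

weights_gb = bloom_params * bytes_per_param / 1e9
print(f"BLOOM weights at 16-bit: ~{weights_gb:.0f} GB")   # ~352 GB
```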

The only project I've seen along these lines is https://github.com/LAION-AI/Open-Assistant, but I don't think it's real yet.

Amazing... I had seen a number like that elsewhere but I assumed that was for training models--not for hosting local instances of them. Based on that thread, I have to wonder whether OpenAI made some kind of tremendous breakthrough such that they could host so many conversations to the public, or whether they just happen to have dedicated an ungodly amount of compute to their public demonstrations.

Training consumes far more matmuls than inference. LLM training operates at batch sizes in the millions of tokens -- so if you aren't training a new model, you have enough GPUs lying around to serve millions of customers.

Inference of LLMs isn't that much of a problem, because batch size scales rather cheaply, and there are tricks to scale it even cheaper at production volume. OpenAI is still burning through millions in a month (we aren't positive on the exact figure) but it's probably less expensive than their current training runs. This is one of the relevant papers:

We store tensors such as weights and the KV cache in on-device high-bandwidth memory (HBM). While there are other tensors that pass through the HBM, their memory footprint is much smaller, so we focus on just these two largest groups of tensors. ... Even though the computational cost of attention is relatively small, it can still account for a significant fraction of memory capacity and bandwidth costs, since (unlike the weights) the KV cache is unique for each sequence in the batch. ... For a 500B+ model with multihead attention, the attention KV cache grows large: for batch size 512 and context length 2048, the KV cache totals 3TB, which is 3 times the size of the model’s parameters.

Also, contra @xanados I'd say they probably don't run those models in bf16, so make that 180 GB.
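As a rough sketch of both points (the layer count and width below are made up purely for illustration, not the configuration from the paper, so the result only lands in the same ballpark as the 3 TB quoted above): the KV cache stores a key and a value vector per layer for every token of every sequence in the batch, and the weight footprint scales linearly with bytes per parameter, which is where a figure like 180 GB for a 175B-parameter model at roughly one byte per parameter comes from.

```python
# Rough KV-cache and weight-memory arithmetic. The model dimensions are
# hypothetical, chosen only to show how the terms multiply.
batch_size = 512
seq_len    = 2048
n_layers   = 96         # hypothetical
d_model    = 12288      # hypothetical; keys and values each store d_model values per layer
cache_bytes_per_el = 2  # 16-bit cache

# 2x for keys and values
kv_cache_tb = 2 * n_layers * d_model * cache_bytes_per_el * seq_len * batch_size / 1e12
print(f"KV cache: ~{kv_cache_tb:.1f} TB")                      # ~4.9 TB for this made-up config

# Weights for a 175B-parameter model at different precisions
params = 175e9
print(f"weights @ 2 bytes/param: ~{params * 2 / 1e9:.0f} GB")  # ~350 GB
print(f"weights @ 1 byte/param:  ~{params * 1 / 1e9:.0f} GB")  # ~175 GB, i.e. the ~180 GB above
```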

@Porean investigated a similar question recently.

Re: your question – I also don't know of any convenient one-click-install local LLM-based applications (at least for natural language-focused models). There are not-terrible models you can run locally (nothing close to ChatGPT, to be clear) but you'll have to fiddle in CLI for now. There is no rush to develop such apps, because, again, models of practical size won't fit on any consumer hardware. Text is vastly less redundant than images.

I hope people realize soon the risk of offloading their brain extensions to Microsoft cloud and learn to pool resources to economically rent servers for community-owned LLMs, or something along these lines.

For comparison, Stable Diffusion has 890 million parameters and GPT-3/ChatGPT has 175 billion, so about 200x. I think they probably have really good mechanisms to distribute their queries rather than a breakthrough in efficiency of inference, but I'm not super knowledgeable about this topic.
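Spelling out that ratio with the figures in this thread (simple arithmetic, nothing more):

```python
# Parameter counts as given above; the ratio and fp16 weight sizes follow directly.
sd_params   = 890e6    # Stable Diffusion
gpt3_params = 175e9    # GPT-3 / ChatGPT

print(f"ratio: ~{gpt3_params / sd_params:.0f}x")   # ~197x, i.e. roughly 200x
print(f"fp16 weights: SD ~{sd_params * 2 / 1e9:.1f} GB vs GPT-3 ~{gpt3_params * 2 / 1e9:.0f} GB")
```

At two bytes per parameter that is roughly 1.8 GB of weights versus roughly 350 GB, which is the gap between "fits on a consumer GPU" and "needs a rack of accelerators".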