
Small-Scale Question Sunday for January 29, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


I am not a coder. If something requires me to install dependencies, I can sometimes successfully do that with very detailed instructions, but often I fail even then.

I also do not trust anything I type into an online AI not to be used against me, or filtered in ways I will find annoying.

So I was pleasantly surprised to discover NMKD's Stable Diffusion GUI, which offers a more or less one-click install (unrar, start the .exe, click the "install" button), so I could play around with this image-generating AI stuff everyone has been talking about. It has been interesting and fun, and doesn't require me to send my prompts off for use or abuse by distant others.

(I recognize the possibility that the creator is essentially doing this to me anyway--I have no way of checking that on my own--but that is a larger problem and not the point of my simple question.)

Here's my simple question: I would like to have something like NMKD's Stable Diffusion GUI, but for text-based AI instead of art-generating AI. That is--a one-click-install, locally-hosted, no-expertise-required version of ChatGPT or similar. My instinct is that this should be smaller and easier than the Stable Diffusion I run on my PC, but maybe I am just super wrong about that?

Anyway, do any of you know of such a thing?

"My instinct is that this should be smaller and easier than the Stable Diffusion I run on my PC, but maybe I am just super wrong about that?"

I believe large language models take much more VRAM for inference than image models. For example, the open model BLOOM has 176 billion parameters and requires roughly 352GB just to hold its weights. So it's not realistic to run it on your local machine at the moment.
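That 352GB falls straight out of the parameter count: 176 billion parameters at 2 bytes each (fp16) is 352GB, before activations or any framework overhead. A quick back-of-the-envelope sketch, where the 2-bytes-per-parameter figure is an assumption about fp16 weights:

```python
# Back-of-the-envelope memory needed just to hold a model's weights.
# Assumes fp16 weights (2 bytes per parameter) and ignores activations,
# attention caches, and framework overhead, so real usage is higher.

def weights_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    return num_params * bytes_per_param / 1e9  # decimal gigabytes

print(weights_memory_gb(176e9))  # BLOOM: 176B parameters -> 352.0 GB
```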

The only project I've seen along these lines is https://github.com/LAION-AI/Open-Assistant, but I don't think it's usable yet.

Amazing... I had seen a number like that elsewhere, but I assumed it was for training models--not for hosting local instances of them. Based on that thread, I have to wonder whether OpenAI made some kind of tremendous breakthrough that lets them serve so many conversations to the public, or whether they have simply dedicated an ungodly amount of compute to their public demonstrations.

For comparison, Stable Diffusion has 890 million parameters and GPT-3/ChatGPT has 175 billion, so about 200x as many. I think they probably have very good infrastructure for distributing queries across their hardware rather than a breakthrough in inference efficiency, but I'm not super knowledgeable about this topic.
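If you want to sanity-check that ratio, it's just arithmetic on the parameter counts (again assuming 2 bytes per parameter for the memory figures, which is a simplification):

```python
# Ratio of parameter counts, plus weight memory at 2 bytes per parameter
# (an fp16 assumption; real serving setups vary).

sd_params = 890e6     # Stable Diffusion
gpt3_params = 175e9   # GPT-3 / ChatGPT

print(gpt3_params / sd_params)      # ~196.6, i.e. roughly 200x
print(sd_params * 2 / 1e9, "GB")    # ~1.8 GB of weights: fits a consumer GPU
print(gpt3_params * 2 / 1e9, "GB")  # ~350 GB of weights: needs a GPU cluster
```

That weight-memory gap is the main reason Stable Diffusion runs on a desktop GPU while GPT-3-scale models need a multi-GPU server.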