Take a look at https://ollama.ai/; there is a Docker image.
And there are a few good models (not as good as ChatGPT) you can run, such as openhermes2.5-mistral.
I use it with chatbot-ollama.
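In case it's useful, here's a minimal sketch of talking to that container over ollama's HTTP API. It assumes the defaults (the ollama/ollama image listening on port 11434) and that you've already pulled the model with `ollama pull openhermes2.5-mistral`:

```python
# Minimal sketch: query a local ollama server over its HTTP API.
# Assumes the defaults: the ollama/ollama Docker image listening on
# port 11434, with the model already pulled via
#   ollama pull openhermes2.5-mistral
import json
import urllib.request

payload = json.dumps({
    "model": "openhermes2.5-mistral",
    "prompt": "Explain self-hosting in one sentence.",
    "stream": False,  # ask for one JSON reply instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

This is the same API a front-end like chatbot-ollama connects to.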
ChatGPT is so far ahead and so advanced that:
- No model is even close to its quality
- Even if it were released to the public, you would need such beefy machines to run it that it makes no sense
We have to wait for some kind of breakthrough that would allow running high-quality open source models locally.
Considering the cost of hosting anything, even if it were hosted on a PC at your place, the electricity bill alone would be higher than the ChatGPT API cost.
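As a rough sanity check, with assumed numbers rather than measurements:

```python
# Assumed numbers, not measurements: a 350 W inference box running 24/7
# at a residential rate of $0.15/kWh.
watts = 350
price_per_kwh = 0.15
hours_per_month = 24 * 30

kwh_per_month = watts / 1000 * hours_per_month   # 252 kWh
print(f"~${kwh_per_month * price_per_kwh:.2f}/month in electricity")  # ~$37.80
```

Light or moderate API usage can easily come in under that, before you even count the hardware itself.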
This is the correct answer. Unfortunately it's not the answer many people want to hear, and a lot of people end up going with some grifter scheme that sells snake oil. The gen AI space is currently 99.5% grift and 0.5% legitimate business.
I can run VMware's Open LLaMA 7B v2 Open Instruct on my laptop comfortably (though I have 64GB RAM and 16GB VRAM), and my sense is that it's probably somewhere between GPT-2 and GPT-3 in inference quality. It is, however, very slow. Even with my comparatively strong hardware, it's slow enough that I wouldn't want to use it in an interactive context (though it may be useful for background processing).
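For anyone curious, running it looks roughly like this with Hugging Face transformers. The Hugging Face id VMware/open-llama-7b-v2-open-instruct and the Alpaca-style prompt are assumptions here; check the model card for the exact template:

```python
# Sketch of running the model above with Hugging Face transformers.
# Assumptions: the Hugging Face id VMware/open-llama-7b-v2-open-instruct
# and an Alpaca-style prompt; check the model card for the exact template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VMware/open-llama-7b-v2-open-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 weights: roughly 14 GB for a 7B model
    device_map="auto",          # place layers on GPU, spill the rest to RAM
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSummarize what self-hosting means.\n\n### Response:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The device_map="auto" option (via the accelerate library) is what lets a 7B model run on 16GB of VRAM by offloading overflow layers to system RAM, which is also a big part of why it's slow.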
I do a bunch of AI stuff, but you won't get ChatGPT quality from anything else. It requires a massive amount of storage, memory, and processing hardware: millions of dollars in hardware alone. Not sure what you're trying to do exactly, but attempting to reproduce that model, even in part, is unrealistic.
You could use LocalAI or ollama, but neither is going to work with 300MB of RAM, and each needs a fair amount of compute for response speed to be usable. These models are also not very capable compared to OpenAI's GPTs, but that depends on what your goal is with the models.
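If you do go the LocalAI route, the nice part is that it exposes an OpenAI-compatible API, so existing client code mostly carries over. A minimal sketch, assuming the default port 8080 and a model configured under the name used below:

```python
# Minimal sketch against LocalAI's OpenAI-compatible chat endpoint.
# Assumes the default port 8080 and that a model has been configured
# under the name used below.
import json
import urllib.request

payload = json.dumps({
    "model": "openhermes2.5-mistral",  # whatever name your LocalAI config uses
    "messages": [{"role": "user", "content": "What is self-hosting?"}],
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])
```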