this post was submitted on 14 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.


Today I tried a number of private (local) open-source #GenAI #LLM servers in Docker. I only run LLM servers in Docker; without it, I'm pretty sure my desktop would become an angry bag of snakes in no time (snakes, pythons, geddit? 🐍 😄). For context, I'm evaluating these LLM components to figure out what part they might play in my Backchat plugin project for Backstage from Spotify (https://via.vmw.com/backchat).

Here's what I discovered:

* PrivateGPT has promise. It offers an OpenAI-compatible API server, but right now it's much too hard to configure and run in Docker containers, and you must build those containers yourself. If it did run, it could be awesome, as it offers a Retrieval-Augmented Generation (ingest my docs) pipeline. The project's docs were messy for Docker use. (https://github.com/imartinez/privateGPT)

* OpenVINO Model Server. Offers a pre-built Docker container, but seems more suited to ML than LLM/chat use cases. Also, it doesn't offer an OpenAI-compatible API. Pretty much a non-starter for my use case, but an impressive project. (https://docs.openvino.ai/2023.1/ovms_what_is_openvino_model_server.html)

* Ollama Web UI & Ollama. This server-and-client combination was super easy to get going under Docker. Images are provided, and with a little digging I soon found a `compose` stanza. The chat GUI is really easy to use and has probably the best model-download feature I've ever seen. Just one problem: it doesn't seem to offer OpenAI API compatibility, which limits its usefulness for my use case. (https://github.com/ollama-webui/ollama-webui)
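For reference, the kind of `compose` stanza I mean looks roughly like this. Treat it as a hedged sketch: the image tags, port mappings, and the `OLLAMA_API_BASE_URL` variable are assumptions based on my reading of the project's README at the time, so check the ollama-webui repo for the current file.

```yaml
# Sketch of an Ollama + Ollama Web UI compose file.
# Image names, ports and env vars are assumptions - verify
# against the ollama-webui README before using.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
    ports:
      - "11434:11434"   # Ollama's API port
  ollama-webui:
    image: ghcr.io/ollama-webui/ollama-webui:main
    depends_on:
      - ollama
    ports:
      - "3000:8080"     # browse the chat GUI at http://localhost:3000
    environment:
      - OLLAMA_API_BASE_URL=http://ollama:11434/api
volumes:
  ollama:
```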

In the end I liked Ollama/Ollama Web UI a lot. If OpenAI API compatibility gets added, it could become my go-to, all-round LLM project of choice - but not yet.
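For context, "OpenAI API compatibility" just means a server accepts the same `/v1/chat/completions` request shape that OpenAI's API does, so existing clients work by swapping the base URL. A minimal Python sketch of that request body (the base URL and model name here are illustrative assumptions, not a documented Ollama endpoint):

```python
import json

# Hypothetical local server exposing an OpenAI-compatible API;
# the URL and model name are assumptions for illustration.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(model, messages, temperature=0.7):
    """Assemble an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
    }

body = build_chat_request(
    "llama2",  # hypothetical local model name
    [{"role": "user", "content": "Hello from Backchat!"}],
)
# This body would be POSTed to BASE_URL + "/chat/completions".
print(json.dumps(body, indent=2))
```

Any client that already speaks this shape (like my Backchat plugin) could then target a local server just by changing `BASE_URL`.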

Ollama Web UI in Backstage


Backchat architecture


[โ€“] DreamGenX@alien.top 1 points 1 year ago (3 children)

Curious to hear what other UIs people use and for what purpose / what they like about each (like oobabooga or Kobold).

[โ€“] WolframRavenwolf@alien.top 1 points 1 year ago (2 children)

SillyTavern for text chat. A true power-user LLM frontend, so I always use the same interface no matter which backend I need (e.g. koboldcpp, oobabooga's text-generation-webui, or even ChatGPT/GPT-4).

Going beyond text, I recently started using Voxta together with VaM/Virt-A-Mate. That brings my AI's avatar into the real world, thanks to the Quest 3's augmented/mixed reality features. Here's an example by one of Voxta's devs that showcases what that looks like (without mixed reality, though). Sure, it's just for fun right now, but I see the potential for it to become more than an entertaining novelty.

[โ€“] necile@alien.top 1 points 1 year ago (1 children)

> I recently started using Voxta together with VaM/Virt-A-Mate

oh my god....how are you liking it so far? I might disappear from society for a few months based on your answer..

[โ€“] WolframRavenwolf@alien.top 1 points 1 year ago

It's like VR itself - amazing technology, mind-blowing, but it needs engagement and motivation to be really useful. Text chat is easier, and with my limited time I'm not using this as much as I'd like to. Still, there's a lot of active development and untapped potential, so I'm looking forward to seeing how this evolves.