Curious to hear what other UIs people use and for what purpose / what they like about each (like Oobabooga or Kobold).
LocalLLaMA
SillyTavern for text chat. A true power-user LLM frontend, so I always use the same interface no matter which backend I need (e.g. koboldcpp, oobabooga's text-generation-webui, or even ChatGPT/GPT-4).
Going beyond text, I recently started using Voxta together with VaM/Virt-A-Mate. That brings my AI's avatar into the real world, thanks to the Quest 3's augmented/mixed reality features. Here's an example by one of Voxta's devs that showcases what that looks like (without mixed reality, though). Sure, it's just for fun right now, but I see the potential for it to become more than an entertaining novelty.
I recently started using Voxta together with VaM/Virt-A-Mate
oh my god....how are you liking it so far? I might disappear from society for a few months based on your answer..
It's like VR itself: amazing, mind-blowing technology, but it needs engagement and motivation to be really useful. Text chat is easier, and with my limited time I'm not using this as much as I'd like to. Still, there's a lot of active development and untapped potential, so I'm looking forward to seeing how this evolves.
At our lab, we're using the latest version of the ollama-webui, and it seems to have OpenAI API support already, among many other new features (and an updated UI, which imo is a lot better). You might want to update to the latest version!
Which Dockerfile did you build to get PrivateGPT to work? There are no docs, there are multiple Dockerfiles, and just building them doesn't seem to work.
I have ollama on my Mac (not Docker) and installed the ollama web UI. It works fine, but their instructions for running ollama on a LAN don't work for me. The flags they say to add to the CLI command throw an error (esp. the * part).
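For what it's worth, here's a rough sketch of the LAN setup based on Ollama's documented OLLAMA_HOST and OLLAMA_ORIGINS environment variables (verify against the current Ollama FAQ, since the exact instructions change between versions). One common cause of the * error is the shell expanding an unquoted asterisk:

```shell
# If you run the server directly from a terminal, set the env vars inline.
# Quote the "*" -- an unquoted * may be glob-expanded by the shell, which
# could be the source of the error mentioned above.
OLLAMA_HOST=0.0.0.0 OLLAMA_ORIGINS="*" ollama serve

# If you use the macOS app instead of a terminal server, set the variables
# for launchd and then restart the Ollama app:
launchctl setenv OLLAMA_HOST "0.0.0.0"
launchctl setenv OLLAMA_ORIGINS "*"
```

After that, other machines on the LAN should be able to reach the API at http://<your-mac-ip>:11434 (Ollama's default port).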
You can use ollama with LiteLLM to get an OpenAI-compatible API.
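A minimal sketch of that setup using LiteLLM's proxy mode, assuming a local Ollama model named llama2 (the model name and default port may differ in your install):

```shell
# Install LiteLLM with its proxy extras:
pip install 'litellm[proxy]'

# Start an OpenAI-compatible proxy in front of the local Ollama server:
litellm --model ollama/llama2

# Any OpenAI client can now point at the proxy, e.g. with curl
# (recent LiteLLM versions listen on localhost:4000 by default):
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ollama/llama2",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

This works with any tool that speaks the OpenAI chat completions API by setting its base URL to the proxy address.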