this post was submitted on 17 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

[–] altoidsjedi@alien.top 1 points 10 months ago (1 children)

I find that running an OpenAI-style API endpoint (using llama.cpp directly when I want fine control, or LM Studio when I need something quick and easy) is the best way to go, in combination with a good chat UI designed to interface with OpenAI models.
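For readers who haven't set this up before, here is a minimal sketch of the llama.cpp route (not from the original thread; the model filename, address, port, and context size are placeholders, and the binary name assumes a llama.cpp build from around this period, where the server example compiles to `./server`):

```sh
# Serve a local GGUF model over HTTP with llama.cpp's bundled server.
# Model path, host, port, and context size below are all placeholders.
./server \
  -m models/llama-2-7b-chat.Q4_K_M.gguf \
  --host 127.0.0.1 \
  --port 8080 \
  -c 4096
```

LM Studio reaches the same end state through its GUI: load a model, then start its built-in local server, which listens on port 1234 by default.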

To that end, I redirect Chatbox to my local LLM server, and I LOVE IT. Clean but powerful interface, support for markdown, ability to save different agents for quick recall, and more. Highly, HIGHLY recommend it.

It's open source and available on pretty much every platform, and you can use it to interface with both local LLMs and OpenAI's models.
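As a hedged illustration of what that wiring looks like (the address and route below assume the server sketched above, plus an OpenAI-compatible `/v1/chat/completions` endpoint, which newer llama.cpp builds and LM Studio's local server both expose), you can sanity-check the endpoint with curl before touching Chatbox:

```sh
# Sanity-check the local OpenAI-compatible endpoint before pointing
# a chat UI at it. Address and port are placeholders matching the
# server sketch above.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Say hello in one sentence."}
        ]
      }'
```

Once that returns a completion, pointing Chatbox at the server is just a settings change: override the default OpenAI API host with the local address (the exact field name varies by Chatbox version).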

[–] dr_nick_riveria@alien.top 1 points 9 months ago

What are you using for your local LLM server, and do you have any pointers on how to redirect/point Chatbox to it?