this post was submitted on 29 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

founded 1 year ago

Hi,

I've tried to follow a couple of step-by-step guides from August, but the models they used are now apparently outdated, and when I try to load a model into oobabooga, I get a bunch of errors.

So I've downloaded GGUF models from TheBloke, but I'm still having issues: the models are only good for taking instructions (instruct mode), and the server crashes when I try to use the chat feature.
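One common cause of "instruct works but chat crashes or produces garbage" is a prompt-template mismatch: Llama-2-chat GGUFs expect the `[INST]`/`<<SYS>>` format, and a chat UI configured with the wrong template feeds the model something else. A minimal sketch of that template (the helper function is hypothetical, not from the thread or from oobabooga):

```python
def build_llama2_prompt(system: str, turns: list) -> str:
    """Build a Llama-2-chat prompt.

    `turns` is a list of (user, assistant) pairs; the assistant half of the
    last pair may be None, meaning the model should generate the reply.
    """
    prompt = ""
    for i, (user, assistant) in enumerate(turns):
        # The system prompt is folded into the first user turn.
        if i == 0:
            user = f"<<SYS>>\n{system}\n<</SYS>>\n\n{user}"
        prompt += f"<s>[INST] {user} [/INST]"
        if assistant is not None:
            prompt += f" {assistant} </s>"
    return prompt
```

In oobabooga, selecting the matching instruction template for the model on the chat settings tab (rather than hand-building prompts) accomplishes the same thing.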

Are there any step-by-step guides someone can recommend? I want to set up a UI where I can text chat, and also talk to the AI and have it reply in voice mode...

System is a 7950X3D with a 4090 and 64 GB DDR5.

I've set up Anaconda and Python, and I got Stable Diffusion working previously as well... just not Llama 2.
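For reference, a from-scratch install of oobabooga's text-generation-webui in a fresh conda environment looks roughly like this — a sketch only, and the model filename is a placeholder; check the repo's README for current steps:

```shell
# Assumed setup sketch, not from the thread -- verify against the repo's README.
conda create -n textgen python=3.11 -y
conda activate textgen
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
# Launch, pointing at a GGUF file placed under models/ (placeholder name):
python server.py --model your-model.Q4_K_M.gguf
```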

Cheers in advance

[–] opi098514@alien.top 1 points 11 months ago

For anyone who's interested, here is some code that will do this. As long as you have some knowledge of Python and conda, you should be able to get it to work. Just follow the instructions. Maybe.

https://github.com/opisaac9001/TTS-With-ooba-and-voice