this post was submitted on 09 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


I've done a basic fine-tune using Colab and a very tiny BLOOM model. For some reason I'm struggling to translate that knowledge to other tools.

I've built myself a little ML/AI rig with 2x RTX 3090s, and I'm really enjoying downloading models and playing with them - I've used Ooba, llama.cpp, etc. I've also been able to expose my Ooba interface publicly via the Gradio hosting on HF.

I seem to learn this stuff best when I can walk through a few examples, but many of the instructions I see are a bit vague. I'd just like a guaranteed-to-work walkthrough so I can be sure I understand the process (on a local machine, using the tools I have) and then have a base to expand my experiments from.

Any pointers to good examples like this would be most appreciated.
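In the meantime, it might help to see that every fine-tuning walkthrough (Colab notebook, llama-recipes, whatever) implements the same basic loop. Below is a toy, dependency-free sketch of that shape - the "model" is a single weight, the "dataset" is made-up numbers, and the loss is MSE standing in for the LM cross-entropy loss, so none of the specifics here are a real LLM fine-tune:

```python
# Toy sketch of the fine-tuning loop, with hypothetical data.
# A real run swaps in: a downloaded checkpoint for `w`, tokenized
# prompt/completion pairs for `data`, and an optimizer like AdamW.

# 1. "Dataset": (input, target) pairs; here the target is 2 * input.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

# 2. "Pretrained" starting weight (a real fine-tune loads checkpoint
#    weights instead of a scalar).
w = 0.5

def loss(w, data):
    # Mean squared error over the batch.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# 3. Training loop: forward pass, gradient, optimizer step.
lr = 0.01
history = [loss(w, data)]
for epoch in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # plain SGD step
    history.append(loss(w, data))

# 4. "Save the checkpoint": here just print the final weight.
print(f"final w = {w:.3f}, final loss = {history[-1]:.6f}")
```

The point is only that "fine-tuning" = load weights, loop over batches computing loss and gradients, step the optimizer, save. Every tool you've listed is wrapping exactly this with tokenization and GPU plumbing around it.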



llama-recipes (Meta's official repo of fine-tuning examples for Llama)