LocalLLaMA · posted 25 Nov 2023

Hi!

I’m quite new to LLMs and want to use one to generate training workouts. My idea is to feed it scientific studies and a bunch of example workouts.

Is this what “training a model” is for? Are there any resources where I can start learning how to train one?

Can I use an already fine-tuned model like Mistral, or do I need to train a base model like Llama 2?

Can I train a quantized model, or do I need to use a full-precision one and quantize it after training?

I have 2x 3090s, a 5950X, and 64 GB of RAM, if that matters. If I can load a model for inference, can I also train it? Are the resource requirements the same?

Thanks!

MINIMAN10001@alien.top · 11 months ago

Generally, if what you want is to impart new knowledge, what you want is embeddings.

Assuming it's a large amount of data, you'll want a vector DB.

That is, retrieval-augmented generation (RAG).

This is explained better in this comment from 16 days ago:

https://www.reddit.com/r/LocalLLaMA/comments/17qse19/comment/k8e7fvx/
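
To make the embeddings + RAG idea concrete, here is a minimal sketch in Python. It is not from the linked comment; the embedding model name, the example documents, and the plain-numpy similarity search (standing in for a real vector DB) are assumptions for illustration.

```python
# Minimal RAG sketch (illustrative, not the linked comment's code):
# embed the studies/workouts, retrieve the closest ones for a question,
# and paste them into the prompt for whatever local LLM you run.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# 1. Embed your documents once and keep the vectors.
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly embedding model (assumed choice)
docs = [
    "Study: 3x8-12 reps at ~70% 1RM, 2-3 sessions per week, drives hypertrophy...",
    "Example workout: Day A - squat 5x5, bench 5x5, row 3x8...",
    "Study: ~1.6 g/kg/day of protein maximizes resistance-training gains...",
]
doc_vecs = embedder.encode(docs, normalize_embeddings=True)  # shape: (n_docs, dim)

# 2. At question time, embed the query and take the most similar documents.
def retrieve(question: str, k: int = 2) -> list[str]:
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec            # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]   # indices of the k best matches
    return [docs[i] for i in top]

# 3. Build a prompt containing the retrieved context and send it to the LLM.
question = "Write a 3-day hypertrophy program for a beginner."
context = "\n".join(retrieve(question))
prompt = f"Use the following references:\n{context}\n\nTask: {question}"
print(prompt)  # feed this to Mistral / Llama 2 via llama.cpp, Ollama, etc.
```

For a large corpus you would swap the in-memory numpy array for an actual vector DB (e.g. Chroma or FAISS), which is what the comment above is pointing at.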