This post was submitted on 09 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


Is there a way to get a zero-knowledge model that only knows how to chat, and from there fine-tune it with specialized knowledge? And can this be done on consumer hardware (Mac M1, 16 GB) or free Colab hardware?

I want to do this to prevent the model from hallucinating outside the domain knowledge it is fed... like passing in a textbook so that it only knows how to answer questions from that textbook.

top 3 comments
[–] troposfer@alien.top 1 points 1 year ago

Also, what can you do with the latest M3 Max with 128 GB of RAM? Can anyone put it into context with a comparison?

[–] DaltonSC2@alien.top 1 points 1 year ago

Check out nanoGPT, it's very educational. You can even load GPT-2 weights and fine-tune them if you want: https://github.com/karpathy/nanoGPT (the accompanying YouTube video is also very nice)
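For a rough sense of what that kind of fine-tune looks like on a 16 GB machine, here is a minimal sketch using the Hugging Face `transformers` GPT-2 model rather than nanoGPT itself; the file name `textbook.txt`, the chunking helper, and the hyperparameters are illustrative assumptions, not anything from this thread.

```python
# Minimal sketch: fine-tune the small GPT-2 (124M) on one plain-text file.
# Assumes the `torch` and `transformers` packages; "textbook.txt" is a hypothetical corpus.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

class TextChunks(Dataset):
    """Tokenize one big text file and cut it into fixed-length training chunks."""
    def __init__(self, path, tokenizer, block_size=256):
        ids = tokenizer(open(path).read(), return_tensors="pt").input_ids[0]
        self.blocks = [ids[i:i + block_size]
                       for i in range(0, len(ids) - block_size, block_size)]
    def __len__(self):
        return len(self.blocks)
    def __getitem__(self, i):
        return self.blocks[i]

# Use Apple Silicon's MPS backend if available, otherwise fall back to CPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)  # 124M params fits in 16 GB

loader = DataLoader(TextChunks("textbook.txt", tokenizer), batch_size=4, shuffle=True)
optim = torch.optim.AdamW(model.parameters(), lr=3e-5)

model.train()
for epoch in range(3):
    for batch in loader:
        batch = batch.to(device)
        loss = model(batch, labels=batch).loss  # causal LM loss: predict the next token
        loss.backward()
        optim.step()
        optim.zero_grad()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Keeping the batch size and block size small is what makes this fit on a laptop; nanoGPT's own fine-tuning configs (e.g. starting from `init_from = 'gpt2'`) do essentially the same thing with its train.py script.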

[–] Ok_Post_149@alien.top 1 points 1 year ago

Is home hardware a requirement for this project? I guess I'm a little confused about what that has to do with model hallucinations.