this post was submitted on 10 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.

founded 1 year ago

Can you share your experiences?

[–] __bruce@alien.top 1 points 1 year ago

Jeremy Howard was talking about this in more detail recently https://m.youtube.com/watch?v=5Sze3kHAZqE

[–] eggandbacon_0056@alien.top 1 points 1 year ago

Obviously it is adding knowledge.

The training is done the same way as pretraining, just with adjusted hyperparameters. ...
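To make the "same as pretraining" point concrete: both pretraining and fine-tuning on new documents optimize the same next-token cross-entropy objective. A toy sketch (vocabulary and logits are made up for illustration):

```python
import math

# Toy next-token cross-entropy: the same objective is used for pretraining
# and for fine-tuning on new documents (logits/vocab here are made up).
def cross_entropy(logits, target):
    # log-softmax over the vocabulary, then negative log-likelihood of target
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    log_prob = (logits[target] - m) - math.log(sum(exps))
    return -log_prob

# One training position: logits over a 4-token vocab, true next token is id 2
loss = cross_entropy([1.0, 0.5, 2.0, 0.1], 2)
print(round(loss, 3))
```

Only the data and hyperparameters (learning rate, schedule, epochs) change between the two phases, not the loss being minimized.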

[–] FullOf_Bad_Ideas@alien.top 1 points 1 year ago

It does, but it's not going to have perfect recall of that knowledge. For example, you can have a dataset about a random product, hexyhexy, that the base model doesn't recognize. Let's say hexyhexy is a game and you trained a QLoRA on its documentation and tips & tricks. The end result is that the model now recalls knowledge about that game correctly around 50-70% of the time. It knows something it didn't know before, but you wouldn't make your life depend on it.
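Part of why recall is imperfect is that a (Q)LoRA run only trains small low-rank adapter matrices on top of frozen base weights. A minimal NumPy sketch of the LoRA update (shapes and hyperparameters are illustrative, not from the thread):

```python
import numpy as np

# Frozen base weight, e.g. one attention projection (shapes illustrative)
d_out, d_in, r = 64, 64, 8            # r is the LoRA rank, r << d
W = np.random.randn(d_out, d_in)      # frozen during fine-tuning

# Trainable low-rank factors: only these receive gradients
A = np.random.randn(r, d_in) * 0.01
B = np.zeros((d_out, r))              # B starts at zero, so W' == W at step 0

alpha = 16                            # LoRA scaling hyperparameter
W_eff = W + (alpha / r) * B @ A       # effective weight used at inference

# Far fewer trainable parameters than full fine-tuning:
full_params = W.size                  # 64 * 64 = 4096
lora_params = A.size + B.size         # 8*64 + 64*8 = 1024
print(lora_params / full_params)
```

In real 7B-scale models the adapters are typically well under 1% of the parameters, which limits how much new knowledge the run can bake in.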

[–] kristaller486@alien.top 1 points 1 year ago

Full-weight fine-tuning should add new knowledge. Not LoRA.

Any repo for llama-2 to do this?
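For a sense of why full-weight fine-tuning is the heavier option, here is a rough back-of-envelope memory comparison against LoRA for a 7B model. All numbers are illustrative assumptions, not figures from the thread:

```python
# Back-of-envelope GPU memory: full fine-tune vs. LoRA on a 7B model.
# All numbers are rough assumptions for illustration only.
params = 7e9

# Full fine-tuning with Adam in bf16: weights (2 B) + gradients (2 B)
# + fp32 optimizer moments (8 B) per parameter
full_gb = params * (2 + 2 + 8) / 1e9

# LoRA: base weights frozen (2 B each); assume adapters ~0.5% of params,
# and only the adapters carry gradients and optimizer state
adapter = params * 0.005
lora_gb = (params * 2 + adapter * (2 + 2 + 8)) / 1e9

print(round(full_gb), round(lora_gb))  # roughly 84 vs 14 GB, before activations
```

That gap is the main reason LoRA/QLoRA dominate hobbyist fine-tuning even though full-weight training absorbs new knowledge more reliably.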

[–] liquiddandruff@alien.top 1 points 1 year ago

Yes, people have done experiments fine-tuning on just a dump of the UE5 game engine documentation, and the model was able to recall the changes in the new documentation that weren't in the base model. The repo for this is on GitHub; you should be able to find it.