No
Jeremy Howard talked about this in more detail recently: https://m.youtube.com/watch?v=5Sze3kHAZqE
Obviously it is adding knowledge.
The training is done the same way as pretraining, just with adjusted hyperparameters. ...
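For reference, continued pretraining on raw text looks roughly like this with Hugging Face Transformers. The model id, file name, and hyperparameters below are placeholders for illustration, not anything from this thread:

```python
# Minimal sketch of continued pretraining: same next-token objective as
# pretraining, lower learning rate. All names/values here are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "meta-llama/Llama-2-7b-hf"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Plain-text corpus of whatever documents you want the model to absorb.
dataset = load_dataset("text", data_files={"train": "docs.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,  # much lower than pretraining-from-scratch
    num_train_epochs=1,
    bf16=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```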
It does, but it won't have perfect recall of that knowledge. For example, you can have a dataset about some random product, hexyhexy, that the base model doesn't recognize. Say hexyhexy is a game and you trained a QLoRA on its documentation and tips & tricks. The end result is that the model now recalls knowledge about that game correctly around 50-70% of the time. It knows something it didn't know before, but you wouldn't bet your life on it.
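For anyone curious what that setup looks like, here's a rough QLoRA sketch using peft and bitsandbytes. The base model, rank, and target modules are illustrative guesses, not the commenter's actual config:

```python
# QLoRA sketch: 4-bit quantized frozen base weights, trainable low-rank
# adapters on the attention projections. Hyperparameters are assumptions.
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_name = "meta-llama/Llama-2-7b-hf"  # assumed base model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,  # low-rank update: limited capacity for storing new facts,
    lora_alpha=32,  # which is one reason recall of new knowledge is spotty
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model
```

The low rank is the key point: the adapter only has a tiny fraction of the base model's parameters to store whatever is new in your data, which fits the 50-70% recall the comment describes.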
Full-weight fine-tuning should add new knowledge. Not LoRA.
Any repo for llama-2 to do this?
Wrong.
Yes, people have done experiments fine-tuning on just a dump of the Unreal Engine 5 documentation, and the model was able to recall changes in the new documentation that weren't in the base model. The repo for this is on GitHub; you should be able to find it.
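A simple way to spot-check that kind of recall yourself: prompt the base model and the tuned checkpoint with the same question and compare. The model paths and the probe question below are made up for the sake of the example:

```python
# Compare base vs. fine-tuned answers on a fact that only exists in the
# new documentation. Paths and the question are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

def ask(model_dir: str, prompt: str) -> str:
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(model_dir)
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

prompt = "What changed in Nanite between UE 5.0 and 5.1?"  # hypothetical probe
print("base: ", ask("meta-llama/Llama-2-7b-hf", prompt))
print("tuned:", ask("out", prompt))
```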