[–] peterwu00@alien.top 1 points 10 months ago

I have a question about the Meta license clause below. Does it imply that we can use the open-source Llama 2 model as a foundation model, train it on additional data, and keep the resulting fine-tuned model proprietary?

"v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof)."

 

This question is probably too basic, but how do I load the Llama 2 70B model with 8-bit quantization? I see TheBloke's Llama2_70B_chat_GPTQ, but it only offers 3-bit/4-bit quantization. I have an 80 GB A100 and am trying to load Llama 2 70B with 8-bit quantization. Thanks a lot!
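
For reference, here is roughly what I'm trying to do: a minimal sketch using the transformers bitsandbytes integration rather than a GPTQ checkpoint. The model id `meta-llama/Llama-2-70b-chat-hf` and the prompt are my assumptions, not something taken from the GPTQ repo:

```python
# Minimal sketch, assuming: access to the gated HF repo, and
# transformers + accelerate + bitsandbytes installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-70b-chat-hf"  # assumed full-precision repo, not the GPTQ one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit weights via bitsandbytes
    device_map="auto",            # let accelerate place layers on the GPU
    torch_dtype=torch.float16,    # dtype for the non-quantized modules
)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

My understanding is that 70B parameters at 8 bits is roughly 70 GB of weights, so it should just fit on a single 80 GB A100 with a short context.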