this post was submitted on 31 Oct 2023

LocalLLaMA


Community for discussing Llama, the family of large language models created by Meta AI.


Code

I'm using Mistral-7B to understand how LLM inference works.

Does anyone have ideas for improving this process?

Please don't just recommend changing the number of generated tokens to 1. :)
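For context, here's a minimal sketch of this kind of setup, assuming the Hugging Face transformers stack; the model id and sampling settings are just illustrative, not a definitive recipe:

```python
# Minimal Mistral-7B inference sketch using Hugging Face transformers.
# Assumes a CUDA GPU with enough memory for fp16 weights (~14 GB).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # illustrative; any Mistral-7B variant works

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision to fit on a single GPU
    device_map="auto",           # place layers on available devices
)

prompt = "Explain what a transformer attention head does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=256,      # deliberately more than 1 :)
        do_sample=True,
        temperature=0.7,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```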

Ok_Post_149@alien.top · 1 year ago

I just wrote a tutorial on how to scale Mistral-7B across many GPUs in the cloud; I hope it's useful. I'm not sure whether you're looking to do on-demand inference or batch inference over a large set of inputs.

https://www.reddit.com/r/LocalLLaMA/comments/17k2x62/i_scaled_mistral_7b_to_200_gpus_in_less_than_5/
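On the batch side of that distinction, here's a minimal sketch of local batched inference, assuming plain Hugging Face transformers on one machine; this is not the multi-GPU pipeline from the linked tutorial, and the prompts are just placeholders:

```python
# Minimal batched-inference sketch: run many prompts through Mistral-7B at once.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # Mistral has no pad token by default

model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompts = [
    "Summarize the attention mechanism in one sentence.",
    "What does a KV cache store?",
    "Why does batching improve GPU throughput?",
]

# Left-pad so all sequences end at the same position, as needed for
# decoder-only generation.
batch = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)

with torch.no_grad():
    outputs = model.generate(**batch, max_new_tokens=64)

for text in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(text, "\n---")
```

Batching like this keeps the GPU busy on a pile of inputs; for on-demand serving you'd reach for a server that does continuous batching instead.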