LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.


Following the release of the Dimensity 9300 and Snapdragon 8 Gen 3 (S8G3) phones, I am expecting LLMs running on mobile phones to grow in popularity, since quantized 3B or 7B models can already run on high-end phones released within the last five years. But even though it is possible, there are a few concerns, including power consumption and storage size. I've seen posts about successfully running LLMs on mobile devices, but I seldom see people discussing future trends. What are your thoughts?
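
To put rough numbers on the "can already run" claim, here is a back-of-envelope sketch of the RAM a quantized model needs on-device. The bits-per-weight figure is my own assumption, typical of GGUF-style 4-bit quantization once scales and zero-points are counted; it is not from the post.

```python
# Back-of-envelope RAM estimate for quantized LLM weights on a phone.
# Assumption: ~4.5 effective bits per weight, roughly what 4-bit
# quantization schemes (e.g. Q4_K_M in llama.cpp) average in practice.

def model_ram_gib(n_params_billion: float, bits_per_weight: float = 4.5) -> float:
    """Approximate resident size of the weights in GiB."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 2**30

for size in (3, 7):
    print(f"{size}B @ ~4.5 bpw: {model_ram_gib(size):.1f} GiB")

# 3B @ ~4.5 bpw: 1.6 GiB
# 7B @ ~4.5 bpw: 3.7 GiB
# Both fit within the 8-16 GiB of RAM on recent flagship phones,
# leaving headroom for the KV cache and the OS.
```
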

Maykey@alien.top · 1 point · 11 months ago

My hot take is that local models will become truly feasible on phones (and in general) only once we move past transformers toward architectures that are more FLOP- and memory-efficient (RetNet, S5).
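
For context on why that matters on a phone: a transformer's KV cache grows linearly with the generated sequence, while RetNet's recurrent mode keeps a fixed-size state per head. Below is a minimal single-head NumPy sketch of the contrast; the dimensions and the decay γ are illustrative values of mine, not from the comment.

```python
import numpy as np

d = 64          # head dimension (illustrative)
gamma = 0.97    # RetNet per-head decay (illustrative value)

# Transformer decoding: the KV cache grows by one (k, v) pair per token,
# so memory is O(sequence length) and each step attends over all of it.
k_cache, v_cache = [], []

def attention_step(q, k, v):
    k_cache.append(k); v_cache.append(v)
    K, V = np.stack(k_cache), np.stack(v_cache)
    w = np.exp(q @ K.T / np.sqrt(d))
    return (w / w.sum()) @ V

# RetNet recurrent decoding: one d x d state matrix, updated in place.
# Memory is O(1) in sequence length: S_n = gamma * S_{n-1} + k^T v.
S = np.zeros((d, d))

def retention_step(q, k, v):
    global S
    S = gamma * S + np.outer(k, v)
    return q @ S

for _ in range(1000):
    q, k, v = (np.random.randn(d) for _ in range(3))
    attention_step(q, k, v)   # cache now holds 1000 (k, v) entries
    retention_step(q, k, v)   # state is still a single 64x64 matrix
```

The constant-size state is exactly what makes recurrent and state-space designs attractive for memory-constrained devices: long conversations no longer inflate RAM use or per-token compute the way a growing attention cache does.
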