
https://preview.redd.it/txoqaubzehzb1.png?width=1062&format=png&auto=webp&s=5ce1e0599c1b0430106cd828cad77dc516a42a4a

https://reddit.com/link/17rzqfm/video/fqtexzq5fhzb1/player

https://preview.redd.it/s60h7gh1fhzb1.png?width=1016&format=png&auto=webp&s=23f963f561d4f57c8562924032301ce0256e4249

I heard Apple is working on an on-device Siri powered by LLMs, but these models are memory-intensive, especially given the iPhone's limited RAM. This isn't just an Apple problem: any big tech company that wants to run ML models on device, like Samsung, Google, or Meta, will face the same constraint.

What if models could run directly from storage instead of RAM?

Samsung may be onto something with their MRAM tech: it's non-volatile, power-efficient, and can handle some logic and AI processing in-memory. Imagine your phone running models straight from storage!
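The closest thing in practice today that I know of is memory-mapping weights from flash so the OS only pages in what actually gets touched (llama.cpp does this with its model files). Here's a minimal sketch in Python of that idea, assuming a hypothetical raw float16 weight dump called `weights.bin` (the file name and tensor shape are made up for illustration):

```python
import mmap

import numpy as np

# Hypothetical file: "weights.bin" is assumed to be a raw float16 tensor dump.
with open("weights.bin", "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

# Zero-copy view over the mapping: nothing is actually read from flash yet.
weights = np.frombuffer(mm, dtype=np.float16)

# Touching a slice faults in only the pages it covers, so resident RAM
# stays proportional to the weights actually used, not the full model size.
layer0 = weights[: 4096 * 4096].reshape(4096, 4096)
print(layer0.shape)
```

The catch is that flash bandwidth and latency are far below DRAM, so naive paging is slow; that gap is presumably why research into smarter weight loading and compute-in-storage hardware like MRAM is interesting in the first place.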

I'm not an ML expert, but this hardware evolution is intriguing. Are there other attempts like this?

[–] 2016YamR6@alien.top 1 point 1 year ago

Did they confirm the LLM runs in on-device memory? That wouldn't make much sense to me at all. Siri already takes an input and sends it to the cloud to return a response. Why wouldn't they use the same approach and just connect an LLM in the cloud to process the response, then send it to the phone?