this post was submitted on 12 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

founded 10 months ago

Hi, would a Mac be a good machine for keeping several small models resident in memory, ready for action, like Whisper, an LLM, TTS, and LLaVA, and using them sequentially, or at most two in parallel (with respect to the ~400 GB/s memory bandwidth limitation)? I'd like to build a powerful agent, like a brain with many parts, each responsible for a specific task, roughly along the lines of the sketch below.
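
A minimal sketch of what I mean, assuming openai-whisper and llama-cpp-python (built with Metal support) are installed; the GGUF model path, prompt format, and audio file are placeholders. Both models are loaded once and stay resident, so each request only pays inference cost, not load time; TTS and LLaVA would slot in as further stages.

```python
# Sketch of a sequential multi-model pipeline kept resident in unified memory.
# Assumes: pip install openai-whisper llama-cpp-python (Metal build).
# Model path and audio file below are placeholders, not real artifacts.

import whisper
from llama_cpp import Llama

# Load both "brain parts" once at startup so they stay in memory.
stt = whisper.load_model("base")                    # speech-to-text stage
llm = Llama(
    model_path="./models/llama-13b.Q4_K_M.gguf",    # placeholder GGUF path
    n_gpu_layers=-1,                                # offload all layers to Metal
    n_ctx=4096,
)

def agent_step(audio_path: str) -> str:
    """Run the parts sequentially: transcribe speech, then answer with the LLM."""
    transcript = stt.transcribe(audio_path)["text"]
    prompt = f"User said: {transcript}\nAssistant:"
    out = llm(prompt, max_tokens=256, stop=["User said:"])
    reply = out["choices"][0]["text"].strip()
    # A TTS stage (and a LLaVA vision stage) would be added here as further steps.
    return reply

if __name__ == "__main__":
    print(agent_step("question.wav"))
```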

1 comment
a_beautiful_rhind@alien.top 1 point 10 months ago

I think those run on Metal, so you're good. I haven't seen any performance numbers for the M3 yet, though, and it's a lot of coin to spend on something sight unseen.