this post was submitted on 26 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.

founded 1 year ago

Title says it all. Why spend so much effort fine-tuning and serving models locally when any closed-source model will do the same job for less in the long run? Is it a philosophical argument (as in freedom vs. free beer)? Or are there practical cases where a local model does better?

Where I’m coming from: I need a copilot, primarily for code but maybe for automating personal tasks as well, and I’m wondering whether to put down the $20/mo for GPT-4 or roll my own personal assistant and run it locally (I have an M2 Max, so compute wouldn’t be a huge issue).
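For the "cheaper in the long run" question, a quick break-even sketch helps. The $20/mo figure is from the post; the electricity estimate is an assumption for illustration, not a measured number:

```python
# Rough break-even sketch: GPT-4 subscription vs. running locally.
# SUBSCRIPTION_PER_MONTH is from the post; the power cost is an assumption.
SUBSCRIPTION_PER_MONTH = 20.0      # $/mo for GPT-4 access
LOCAL_POWER_PER_MONTH = 3.0        # assumed marginal electricity on an M2 Max

def months_to_break_even(upfront_local_cost):
    """Months until the subscription outspends a local setup."""
    monthly_savings = SUBSCRIPTION_PER_MONTH - LOCAL_POWER_PER_MONTH
    return upfront_local_cost / monthly_savings

# If you already own the hardware (the poster's case), the upfront cost
# attributable to the assistant is roughly zero, so break-even is immediate.
print(months_to_break_even(0.0))   # 0.0
```

With hardware already owned, the economics mostly reduce to electricity vs. subscription; the real trade-offs are the qualitative ones discussed below.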

you are viewing a single comment's thread
view the rest of the comments
[–] Only-Letterhead-3411@alien.top 1 points 11 months ago (5 children)
  • Local AI belongs to you; GPT-4 doesn't. You are simply buying permission to use it for a limited time, and the AI company can take it away from you at any time, for any reason. You can only lose your local AI if someone physically removes it from your PC and you can no longer download it.
  • GPT-4 is censored and biased. Local AI has uncensored options.
  • AI companies can monitor, log, and use your data to train their AI. With local AI, you own your privacy.
  • GPT-4 requires an internet connection; local AI doesn't.
  • GPT-4 is subscription-based and costs money to use. Local AI is free to use.
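One practical consequence of the offline point: most local servers (llama.cpp's server, Ollama, etc.) expose an OpenAI-compatible endpoint, so switching from the hosted API is mostly a matter of changing the base URL. A minimal sketch, assuming a hypothetical server at `localhost:8080` (the URL and model name are placeholders for your own setup):

```python
import json

# Assumption: a local llama.cpp/Ollama-style server listening here.
LOCAL_BASE_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt, model="local-model"):
    """Build an OpenAI-style chat-completion payload for a local server.

    The request body is identical to what the hosted API expects; only
    the URL you POST it to changes, and no internet access is needed.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Explain mmap in one sentence.")
print(json.dumps(payload))
```

Because the payload shape matches the hosted API, existing tooling built against OpenAI's chat format usually works against a local server unchanged.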
[–] allinasecond@alien.top 1 points 11 months ago (4 children)

Are there any good tutorials on where to start? I'm a FW engineer with an M1 MacBook; I don't know much about AI or LLMs.

[–] jarec707@alien.top 1 points 11 months ago

GPT4All may be the easiest on-ramp for your Mac. 7B models run fine on an 8 GB system, although they take up much of the memory.
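A back-of-envelope check of why a 7B model squeezes into 8 GB, assuming 4-bit quantization (~0.5 bytes per weight, the common setup for local models) and an assumed ~20% overhead for the KV cache and runtime buffers:

```python
# Rough memory estimate for a 4-bit quantized 7B model.
# The 0.5 bytes/weight and 20% overhead figures are assumptions.
PARAMS = 7_000_000_000
BYTES_PER_WEIGHT_Q4 = 0.5
OVERHEAD = 1.2  # assumed extra for KV cache and runtime buffers

weights_gb = PARAMS * BYTES_PER_WEIGHT_Q4 / 1e9
total_gb = weights_gb * OVERHEAD
print(round(weights_gb, 1), round(total_gb, 1))  # 3.5 4.2
```

Roughly 4–5 GB of a unified 8 GB pool is consistent with the comment above: it fits, but it is indeed "much of the memory" once the OS and other apps are counted.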
