this post was submitted on 26 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


Title says it all. Why spend so much effort finetuning and serving models locally when any closed-source model will do the same for cheaper in the long run? Is it a philosophical argument (as in "free as in freedom" vs. "free as in beer")? Or are there practical cases where a local model does better?

Where I'm coming from: I need a copilot, primarily for code but maybe for automating personal tasks as well, and I'm wondering whether to put down the $20/mo for GPT-4 or roll my own personal assistant and run it locally (I have an M2 Max, so compute wouldn't be a huge issue).
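For what it's worth, the "cheaper in the long run" part can be roughed out with a back-of-the-envelope calculation. A minimal sketch, where every number (power draw, daily usage, electricity price) is an illustrative assumption rather than a measured figure, and the hardware (the M2 Max) is treated as already paid for:

```python
# Back-of-the-envelope: monthly cost of local inference vs. a $20/mo
# subscription. All constants below are illustrative assumptions.

SUBSCRIPTION_PER_MONTH = 20.0  # e.g. GPT-4 access

POWER_DRAW_WATTS = 60.0  # assumed average draw while inferencing
HOURS_PER_DAY = 2.0      # assumed daily usage
PRICE_PER_KWH = 0.30     # assumed electricity price in USD

def local_cost_per_month() -> float:
    """Electricity-only cost of running the model locally for a month."""
    kwh_per_month = (POWER_DRAW_WATTS / 1000.0) * HOURS_PER_DAY * 30
    return kwh_per_month * PRICE_PER_KWH

def monthly_savings() -> float:
    """How much the local setup saves versus the subscription."""
    return SUBSCRIPTION_PER_MONTH - local_cost_per_month()

if __name__ == "__main__":
    print(f"local: ${local_cost_per_month():.2f}/mo, "
          f"saving ${monthly_savings():.2f}/mo vs. subscription")
```

Under these assumptions the electricity comes to about a dollar a month, so the real trade-off isn't operating cost but model quality and your own time, which is what the comments below argue about.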

[–] freehuntx@alien.top 11 months ago

Why buy a car when there is uber?

[–] oppenbhaimer@alien.top 11 months ago

The alternative here isn't Uber; it's a fast public transportation system. In my experience, local LLMs still don't hold a candle to GPT-4's performance, no matter what the benchmarks say.

[–] a_beautiful_rhind@alien.top 11 months ago

I have decent public transportation in my city. It still takes 2 hours to get somewhere, and it won't drop me at the door on my schedule.

Autonomy counts for something. Best case is always "get both".