this post was submitted on 26 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

Title says it all. Why spend so much effort finetuning and serving models locally when any closed-source model will do the same for cheaper in the long run? Is it a philosophical argument (as in freedom vs. free beer)? Or are there practical cases where a local model does better?

Where I’m coming from is needing a copilot, primarily for code but maybe for automating personal tasks as well, and wondering whether to put down the $20/mo for GPT-4 or roll my own personal assistant and run it locally (I have an M2 Max, so compute wouldn’t be a huge issue).

[–] freehuntx@alien.top 1 points 9 months ago (1 children)

Why buy a car when there is Uber?

[–] oppenbhaimer@alien.top 1 points 9 months ago (1 children)

The alternative here isn’t Uber. It’s a fast public transportation system. Local LLMs still don’t hold a candle to GPT-4’s performance in my experience, no matter what the benchmarks say.

[–] a_beautiful_rhind@alien.top 1 points 9 months ago

I have decent public transportation in my city. It still takes 2 hours to get somewhere. Won't drop me to the door on my schedule.

Autonomy counts for something. Best case is always "get both".

[–] mulletarian@alien.top 1 points 9 months ago (1 children)

Why eat out when you can have a home-cooked meal

[–] Mastershima@alien.top 1 points 9 months ago

I don’t think we’re at a good home-cooked meal yet. I think we’re at “Mom: we have AI at home, you don’t need that.”

[–] Only-Letterhead-3411@alien.top 1 points 9 months ago (1 children)
  • Local AI belongs to you; GPT-4 doesn't. You are simply buying permission to use it for a limited time, and the AI company can take it away from you at any time, for any reason. You can only lose your local AI if someone physically removes it from your PC and you can no longer download it.
  • GPT-4 is censored and biased. Local AI has uncensored options.
  • AI companies can monitor, log, and use your data to train their AI. With local AI, you own your privacy.
  • GPT-4 requires an internet connection; local AI doesn't.
  • GPT-4 is subscription-based and costs money to use. Local AI is free to use.
[–] allinasecond@alien.top 1 points 9 months ago (4 children)

Are there any good tutorials on where to start? I'm a FW engineer with an M1 MacBook, and I don't know much about AI or LLMs.

[–] nitrodudeIX@alien.top 1 points 9 months ago

Look up ollama.ai as a starting point...

[–] sarl__cagan@alien.top 1 points 9 months ago

If you are cool just using the command line, ollama is great and easy to use.

Otherwise, you could download the LM Studio app on Mac, then download a model using the search feature, and start chatting. Models from TheBloke are good. You will probably need to try a few models (GGUF format, most likely). Mistral 7B or llama2 7B is a good starting place IMO.
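
For those starting with ollama, here is a minimal sketch of querying the local server from Python. It assumes ollama is installed, its server is running on the default port (11434), and a model such as Mistral 7B has already been pulled with `ollama pull mistral`:

```python
# Minimal sketch: ask a locally served model a question via ollama's REST API.
# Assumes the ollama server is running and `ollama pull mistral` has been done.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",  # any model you have pulled locally
        "prompt": "Explain what a GGUF file is in one sentence.",
        "stream": False,     # return a single JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```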

[–] ThisGonBHard@alien.top 1 points 9 months ago

https://github.com/oobabooga/text-generation-webui

How much ram do you have? It matters a lot.

For a BIG simplification, think of the largest model you can run (in billions of parameters; 13B means 13 billion, for example) as 50-60% of your RAM in GB.

If you have 16 GB, you can run a 7B model for example.

If you have 128GB, you can run a 70B model.
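
That rule of thumb is easy to turn into a quick estimate. A sketch, treating the 50-60% figure above as a ballpark assumption rather than an exact requirement:

```python
# Back-of-envelope: largest quantized model (in billions of parameters)
# a machine can run, using the ~50-60% of RAM rule of thumb from above.
def max_model_size_billions(ram_gb: float, fraction: float = 0.55) -> float:
    """Billions of parameters ~ fraction of RAM in GB (ballpark only)."""
    return ram_gb * fraction

for ram in (8, 16, 32, 64, 128):
    print(f"{ram:>3} GB RAM -> ~{max_model_size_billions(ram):.0f}B parameters")
# 16 GB -> ~9B (a 7B model fits), 128 GB -> ~70B
```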

[–] jarec707@alien.top 1 points 9 months ago

GPT4All may be the easiest on-ramp for your Mac. 7B models run fine on an 8GB system, although they take up much of the memory.

[–] wiesel26@alien.top 1 points 9 months ago

Control. You can have the control, or you can let someone else have the control. Open source LLMs give the masses another option, an option they don't have to pay for. Your question is like asking why you don't use Microsoft 365 instead of OpenOffice.

[–] Wonderful_Ad_5134@alien.top 1 points 9 months ago

Local models aren't censored lol

[–] tu9jn@alien.top 1 points 9 months ago

You won't get banned from a local model for asking the wrong questions, and GPT-4 has hourly limits as well.

If you already have the hardware why not try it? It's literally free.

[–] edwios@alien.top 1 points 9 months ago (1 children)

No, nothing I am working on or will be working on will go anywhere I don't control. Period. Besides, it'll get banned immediately anyway, so why bother lol

[–] InitialCreature@alien.top 1 points 9 months ago

this guy builds fun stuff

[–] son_et_lumiere@alien.top 1 points 9 months ago

Once you get into the automation aspect, you're going to need to hit the OAI API, and that's an additional cost per 1k tokens beyond the $20 per month. That'll start to add up fast when you're passing a lot of data back and forth often.
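
The arithmetic is easy to sanity-check. A quick sketch with illustrative numbers; the per-token prices and usage figures here are assumptions for the example, not quoted OpenAI rates:

```python
# Back-of-envelope API cost estimate for an automation workload.
# All prices and volumes below are illustrative assumptions.
PRICE_PER_1K_INPUT = 0.03   # USD per 1k prompt tokens (hypothetical)
PRICE_PER_1K_OUTPUT = 0.06  # USD per 1k completion tokens (hypothetical)

requests_per_day = 500  # e.g. a heavily used copilot-style assistant
input_tokens = 2_000    # prompt + context per request
output_tokens = 500     # completion per request

daily = requests_per_day * (
    input_tokens / 1000 * PRICE_PER_1K_INPUT
    + output_tokens / 1000 * PRICE_PER_1K_OUTPUT
)
print(f"~${daily:.2f}/day, ~${daily * 30:,.0f}/month")  # ~$45/day, ~$1,350/month
```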

[–] ccbadd@alien.top 1 points 9 months ago

For me it's just censorship and privacy. Maybe api costs once we get more apps will be an issue too.

[–] superbottom85@alien.top 1 points 9 months ago

Because hourly limits make GPT-4 unusable.

[–] Monkey_1505@alien.top 1 points 9 months ago

Why do people brew their own beer, or grow their own weed?

It's because they want to be more connected to the process, in control of it, and cut out the middleman. Also, local models probably won't destroy civilization.

[–] nazihater3000@alien.top 1 points 9 months ago

Try writing Churchill/Hitler slash fiction with GPT-4.

[–] geekcko@alien.top 1 points 9 months ago

Because most jobs won't let you use anything that isn't self-hosted by you or your company.

[–] Bright-Question-6485@alien.top 1 points 9 months ago

Maybe I missed it, but the most important argument might have slipped by, which is quite simply this: GPT-4 looks and feels good, but if you have a clear task (anything, literally; examples are data-structuring pipelines, information extraction, repairing broken data models), then a fine-tuned llama model will make GPT-4 look like a toddler. It's crazy, and if you don't believe me I can only recommend that everyone give it a try and benchmark the results. It is that much of a difference. Plus, it allows you to iron out the gaps in GPT-4's understanding. There are clear limits to where prompt engineering can take you.

To be clear, I am really saying that there are things GPT-4 just cannot do where a fine-tuned llama gets the job done; a sketch of what such task-specific training data looks like follows below.
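
As an illustration of what feeds such a fine-tune, here is a sketch of instruction-style training records for an extraction task. The Alpaca-style field names and the sample invoice are assumptions for the example; adapt them to whatever your training tool expects:

```python
# Sketch: build a JSONL file of instruction-style records for fine-tuning
# a local model on a narrow extraction task. Field names are illustrative.
import json

records = [
    {
        "instruction": "Extract the invoice number and total as JSON.",
        "input": "Invoice #A-1042, issued 2023-11-02, total due: $318.40",
        "output": '{"invoice_number": "A-1042", "total": 318.40}',
    },
    # ...hundreds to thousands of examples like this, covering edge cases
]

with open("extraction_train.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```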

[–] kivathewolf@alien.top 1 points 9 months ago

I like the analogy that Andrej Karpathy posted on X sometime back. LLM OS

Think of an LLM as an OS. There are closed-source OSes like Windows and macOS, and then there are open source OSes based on Linux. Each has its place. For most regular consumers, Windows and macOS are sufficient. However, Linux has its place in all kinds of applications (from the Mars rover to your Raspberry Pi home automation project). LLMs may evolve in a similar fashion. For highly specific use cases, it may be better to use a small LLM fine-tuned for your application. In cases where data sovereignty is important, it's not possible to use OpenAI's tools. And if you have an application that needs an AI service where no internet connection is available, local models are the only way to go.

It’s also important to understand that when you use GPT-4, you aren’t using a bare LLM but a full solution: the LLM, RAG, classic software functions (math), internet browsing, and maybe even other “expert LLMs”. When you download a model from Hugging Face and run it, you are using just one piece of the puzzle. So yes, your results will not be comparable to GPT-4. What open source gives you is the ability to build a system like GPT-4, but you need to do the work to get it there.
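
To make one of those puzzle pieces concrete, here is a minimal sketch of the RAG step: retrieve the most relevant documents, then build a grounded prompt for the local model. Real systems use embedding search; the naive keyword-overlap retriever and toy documents here are assumptions for illustration:

```python
# Sketch of a RAG step: pick the docs that share the most words with the
# query, then assemble a prompt that grounds the model in that context.
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q_words = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))[:k]

docs = [
    "llama.cpp runs quantized GGUF models on CPUs and Apple Silicon.",
    "GPT-4 is accessed through OpenAI's hosted API.",
    "Fine-tuning adapts a base model to a narrow task.",
]

query = "How do I run a quantized model on Apple Silicon?"
context = "\n".join(retrieve(query, docs))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt would then go to your local model
```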

[–] Independent_Key1940@alien.top 1 points 9 months ago

It's not just philosophical. When you have a technology that holds the power to change the world, it should either be destroyed or put into everyone's hands, so that people can adapt and be at ease with it. Otherwise the person inventing the technology will rule the world. Or, in today's world, they will influence politics, have support from powerful people, attract wealth, and make mistakes that could destroy the world.

So it's not just about morals, it's about survival.

[–] Mission_Revolution94@alien.top 1 points 9 months ago

Because they are run by the borg (Microsoft).

Never think that ease is the only reason to do something: privacy, security, and overall control of your own domain are very good reasons.

Another great reason: local never says no.

[–] jpalmerzxcv@alien.top 1 points 9 months ago

Data collection. You're sending all of your queries to the GPT4 server, to people you don't know. Who knows what they're doing with it?

[–] ThisGonBHard@alien.top 1 points 9 months ago

"closed-source model"

You gave your own answer:

Not monitored

Not controlled

Uncensored

Private

Anonymous

Flexible

[–] frozen_tuna@alien.top 1 points 9 months ago

I use it for development. All the things mentioned are nice, but there's no way I could afford to do development using a paid service. I pass/generate way too many tokens and my company hasn't really sponsored my work yet.

Having ChatGPT write a pirate poem hardly costs a thing. Getting an LLM to summarize a bunch of search results, read an email inbox flagging certain scenarios, or parse through a codebase looking for specific features gets very, very expensive fast.

[–] ekowmorfdlrowehtevas@alien.top 1 points 9 months ago

"Those who would give up privacy to purchase a temporarily better large language model interface, deserve neither" - Benjamin Franklin

[–] RadiantQualia@alien.top 1 points 9 months ago

GPT-4 is much much better for most normal use cases. Hopefully that changes one day, but OpenAI’s lead might just keep getting bigger.

[–] naoyao@alien.top 1 points 9 months ago

I held out against ChatGPT for a long time because I wasn't confident about OpenAI's handling of my personal information. I started using Llama just a couple of weeks ago, and whilst I'm happy that it can be run locally, I'm still looking forward to truly open source LLMs, because Llama isn't actually open source.

[–] Nonetendo65@alien.top 1 points 9 months ago

GPT-4 is plagued with outages. I've found the API unreliable to use in a production setting. Perhaps this will improve with time :)