this post was submitted on 01 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


I’m fascinated by the whole ecosystem popping up around Llama and local LLMs. I’m also curious what everyone here is up to with the models they are running.

Why are you interested in running local models? What are you doing with them?

Secondly, how are you running your models? Are you actually running them on local hardware, or on a cloud service?

ChangeIsHard_@alien.top · 10 months ago

First, on principle: I don't want OpenAI to use my inputs for RLHF and then replace me.

Second, I want much more freedom to experiment, and I can't have that with a cloud API where I have to constantly worry about how many tokens I consume, which translates to $$$.
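The token-cost worry is easy to make concrete. A minimal back-of-the-envelope sketch, using purely illustrative per-token prices (not any provider's actual rates), shows how quickly experimentation adds up on a metered API:

```python
# Hypothetical sketch: estimating cloud-API spend from token counts.
# The prices below are illustrative assumptions, not real provider rates.
PROMPT_PRICE_PER_1K = 0.01      # assumed $ per 1K prompt tokens
COMPLETION_PRICE_PER_1K = 0.03  # assumed $ per 1K completion tokens

def api_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated dollar cost of a single request at the assumed rates."""
    return (prompt_tokens / 1000 * PROMPT_PRICE_PER_1K
            + completion_tokens / 1000 * COMPLETION_PRICE_PER_1K)

# e.g. a batch of 1,000 experiments at ~2K prompt / 500 completion tokens each
total = sum(api_cost(2_000, 500) for _ in range(1_000))
print(f"${total:.2f}")  # → $35.00 at the assumed rates
```

On local hardware that same batch of runs costs only electricity and time, which is exactly the freedom being described here.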