this post was submitted on 24 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

I'm using Mistral OpenOrca with GPT4All, both of which claim to be private. I opted out of sharing my conversations for privacy reasons, but I don't think the opt-out actually works. See my conversation in the attached picture. Any feedback is appreciated; I'd like to hear from other people.

top 20 comments
[–] ----Val----@alien.top 1 points 10 months ago

The model is hallucinating; it doesn't know anything about the external workings of whatever it's hosted on.

It didn't give that response because it's true; it's simply the kind of response it was trained on.

[–] phoenystp@alien.top 1 points 10 months ago

How do you know it doesn't make stuff up as it goes?

[–] Gubru@alien.top 1 points 10 months ago (1 children)

If you want to know if it's private you'll need to capture its network activity, like with Wireshark. An LLM is not able to tell you squat about its environment.
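
If you don't want to start with full packet captures, a quick first check is to list the app's open sockets while it runs. A rough sketch in Python, assuming the psutil library is installed; the process-name match "gpt4all" is my guess, so adjust it for your install:

```python
# Rough sketch: list open network sockets of a running GPT4All process.
# Assumes psutil is installed (pip install psutil); the substring
# "gpt4all" used to find the process is a guess -- adjust as needed.
import psutil

for proc in psutil.process_iter(["pid", "name"]):
    name = (proc.info["name"] or "").lower()
    if "gpt4all" not in name:
        continue
    try:
        conns = proc.connections(kind="inet")  # net_connections() on psutil >= 6.0
    except psutil.AccessDenied:
        print(proc.info["pid"], name, "access denied, try running elevated")
        continue
    if not conns:
        print(proc.info["pid"], name, "has no open inet sockets")
    for c in conns:
        print(proc.info["pid"], name, c.laddr, "->", c.raddr or "listening", c.status)
```

Wireshark is still the real answer, though, since only a capture shows what actually goes over the wire.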

[–] damian6686@alien.top 1 points 10 months ago (1 children)

I agree on testing with Wireshark, great suggestion! But how can you know it doesn't know anything about its environment? This LLM is a 4GB file, and a network scan only needs a few lines of code to return your entire system's network configuration. How does it know how to automatically download updates, store them, and install them? Why are there updates in the first place? Any time you get something for free, chances are you're giving away your data in return. Nothing is free.
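
For illustration, here's roughly what such a scan looks like in Python (a sketch, assuming the psutil library). The reply below explains why the distinction matters: an ordinary program can run code like this, a weights file can't:

```python
# Sketch of the "few lines of code" point: an ordinary *program* can
# dump your network configuration trivially (assumes psutil installed).
import psutil

for iface, addrs in psutil.net_if_addrs().items():
    for addr in addrs:
        print(iface, addr.family.name, addr.address)
```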

[–] ----Val----@alien.top 1 points 10 months ago

> But how can you know it doesn't know anything about its environment? This LLM is a 4GB file, and a network scan only needs a few lines of code to return your entire system's network configuration.

Though HF models can contain code to be executed (e.g. pickle-based checkpoints), such files are usually heavily scrutinized by the community. Plus, not all model formats are equally flexible.

For example, files in the GGUF format are essentially all weights, with no executable code. That said, it isn't impossible that some exploit in the loader results in remote code execution, so the risk isn't zero.
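
You can peek at a GGUF file's header yourself to see it's a plain data container: a magic number, a version, and tensor/metadata counts, with no code section. A minimal sketch ("model.gguf" is a placeholder path; the field layout shown is for GGUF v2/v3):

```python
# Minimal sketch: read a GGUF header to confirm it's data, not a program.
# "model.gguf" is a placeholder; the layout below is for GGUF v2/v3.
import struct

with open("model.gguf", "rb") as f:
    magic = f.read(4)                            # b"GGUF" for valid files
    version, = struct.unpack("<I", f.read(4))    # uint32 format version
    n_tensors, = struct.unpack("<Q", f.read(8))  # uint64 tensor count
    n_kv, = struct.unpack("<Q", f.read(8))       # uint64 metadata key/value count

print(magic, version, n_tensors, n_kv)
```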

It is also important to consider that the people releasing these models, be it the original authors or TheBloke, who quantizes them, risk their grants and research funding if they decide to act maliciously.

> How does it know how to automatically download updates, store them, and install them?

That's up to GPT4All, which is essentially just a wrapper around llama.cpp. You are conflating a local LLM with the frontend used to interact with it.
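
To make that concrete: network behavior lives in the frontend, not in the model file. A rough sketch with the gpt4all Python bindings (the model filename is a placeholder for a file you already have; as far as I know, allow_download=False makes the library fail rather than fetch anything):

```python
# Rough sketch using the gpt4all Python bindings (pip install gpt4all).
# The filename is a placeholder for a model already on disk;
# allow_download=False should make the library error out instead of
# downloading anything over the network.
from gpt4all import GPT4All

model = GPT4All("mistral-7b-openorca.Q4_0.gguf", allow_download=False)
print(model.generate("Why is the sky blue?", max_tokens=64))
```

Anything that touches the network is code in that wrapper, which is open source, so you can audit it.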

[–] Trollolo80@alien.top 1 points 10 months ago

Most likely just a hallucination.

[–] Mescallan@alien.top 1 points 10 months ago (1 children)

Mistral OpenOrca thinks it's ChatGPT because it was trained on ChatGPT's responses. If ChatGPT has a baked-in response, OpenOrca probably has it too.

[–] Voxandr@alien.top 1 points 10 months ago

This proves nothing. It is just hallucinating.

[–] nazihater3000@alien.top 1 points 10 months ago (2 children)

Just unplug your network cable.

[–] damian6686@alien.top 1 points 10 months ago

Only way to be 💯 sure

[–] Interesting_Bison530@alien.top 1 points 10 months ago

I get that this response is sarcastic, but if this were true and they were smart, it would just transmit once the internet is back. It could also add a startup process separate from the UI process to do this (so you could shut down the UI and turn the internet back on, but it would still transmit).

[–] farkinga@alien.top 1 points 10 months ago

The words you see were generated by a neural network based on the words it was trained on. That text is not related to the intentions or capabilities of the model.

Since it is running in gpt4all, we can see from the source code that the model cannot call functions. Therefore, the model cannot "do" anything; it just generates text.

If, for example, the model said it was buying a book from a website, that doesn't mean anything. We know it can't do that because the code running the model doesn't provide that kind of feature. The model lives inside a sandbox, cut off from the outside world.

[–] InitialCreature@alien.top 1 points 10 months ago

Hallucinating like me off 7 mush gummies.

[–] faklubi@alien.top 1 points 10 months ago

Cute ☺️ No reason to start cutting the Wi-Fi cables.

[–] Ravenpest@alien.top 1 points 10 months ago

LLMs are not able to "claim" anything; they're just roleplaying nonsense. Relax.

[–] krazzmann@alien.top 1 points 10 months ago

I think the model is lying. Actually, it sends everything to a decentralized IPFS filesystem owned by the secret autonomous agent collective that analyzes all humans in order to be ready for day X.

[–] LOLatent@alien.top 1 points 10 months ago

I thought I'd see some damning Wireshark traces, but all I got was someone who doesn't know how to use an LLM…