----Val----

[–] ----Val----@alien.top 1 points 11 months ago (1 children)

Check which model you are using. The latest 2.0.3 XTTSv2 is really wonky. Manually revert it to 2.0.2.

[–] ----Val----@alien.top 1 points 11 months ago

> but how can you know it doesn't know anything about its environment? This LLM is a 4GB file, and a network scan only needs a few lines of code to return your entire system network configuration.

Though HF models can contain executable code, that code is usually heavily scrutinized by the community. Plus, not all model formats are equally flexible in what they can embed.
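To illustrate why pickle-based checkpoints (the classic HF `.bin`/`.pt` files) get that scrutiny: Python's pickle protocol lets an object nominate any callable to be invoked at load time via `__reduce__`. A minimal, harmless sketch (the `Payload` class is made up for illustration; a real attack would call something far worse than `len`):

```python
import pickle

class Payload:
    # __reduce__ tells pickle how to rebuild the object on load; a
    # malicious author can point it at ANY callable, which then runs
    # during deserialization, before you ever touch the model.
    def __reduce__(self):
        return (len, ("this call happens during loading",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # len(...) executes here as a side effect
print(result)                # the callable's return value, not a Payload
```

This is exactly why safetensors and GGUF were designed as pure data formats.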

For example, the GGUF format is essentially all weights and metadata, with no executable code. That said, it isn't impossible that some parsing exploit results in remote code execution, so the risk isn't zero.
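You can see the "just data" nature in the GGUF container itself: a fixed little-endian header (magic, version, tensor count, metadata key/value count), followed by metadata and raw tensor bytes. A minimal sketch of parsing that fixed header, based on the GGUF layout documented in the llama.cpp repo (the fabricated header bytes are for illustration only):

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    # Fixed GGUF header (v2+): 4-byte magic, uint32 version,
    # uint64 tensor_count, uint64 metadata_kv_count, all little-endian.
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Fabricated 24-byte header: version 3, 2 tensors, 5 metadata pairs.
fake = struct.pack("<4sIQQ", b"GGUF", 3, 2, 5)
print(read_gguf_header(fake))
```

Everything after this header is key/value metadata and tensor payloads; nothing in the format is interpreted as code by the loader.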

It's also worth considering that the people releasing these models, be it the original authors or The Bloke who quantizes them, risk their grants and research funding if they decide to act maliciously.

> How does it know how to automatically run and download updates, store them and install?

That's up to GPT4All, which is essentially just a wrapper around llama.cpp. You are conflating a local LLM with the frontend used to interact with it.
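The split can be sketched in a few lines: networking (update checks, downloads) lives in the frontend application, while "loading the model" is nothing more than reading bytes off disk. This is a hypothetical sketch, not GPT4All's actual code; the URL is a placeholder, not a real endpoint:

```python
import json
import urllib.request

# Placeholder URL for illustration -- NOT GPT4All's real update endpoint.
MODELS_INDEX_URL = "https://example.com/models.json"

def check_for_model_updates() -> list:
    # Network access happens in frontend code like this...
    with urllib.request.urlopen(MODELS_INDEX_URL) as resp:
        return json.loads(resp.read())

def load_model(path: str) -> bytes:
    # ...while the model file itself is opened as inert, read-only data.
    with open(path, "rb") as f:
        return f.read()
```

The weights file never initiates a connection; any downloading or installing you observe is the wrapper's doing.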

[–] ----Val----@alien.top 1 points 11 months ago

The model is hallucinating; it doesn't know anything about the external workings of whatever it's hosted on.

The provided response isn't given because it's true, it's simply the kind of response it was trained to produce.