this post was submitted on 24 Jun 2024
-16 points (38.2% liked)

I have experience running servers, but I'd like to know whether it's feasible. I just need a private LLM roughly comparable to GPT-3.5.

[–] TheBigBrother@lemmy.world 0 points 4 months ago* (last edited 4 months ago) (2 children)

I was talking about that with a friend a few days ago, and they ran an experiment: they had the AI correct only the punctuation errors in a text document (no word changes at all, which you could easily make manually), and the anti-AI system still flagged it as 99% AI-generated. I can't explain that. Maybe the original text was AI-generated too, IDK, or maybe there's a watermark somewhere, some pattern or something.

Edit: your point would be that there is no way to fool the anti-AI systems by running a private LLM?

[–] entropicdrift@lemmy.sdf.org 7 points 4 months ago* (last edited 4 months ago) (2 children)

Just that they're no easier to use to fool an anti-AI system than ChatGPT, Gemini, Bing, or Claude. Those AI detectors also give false positives on work written by humans; they're unreliable in the first place.

Basically, they're "boring text detectors" more than anything else.

[–] TheBigBrother@lemmy.world 0 points 4 months ago

I have a friend who runs a homework-on-demand business, and he uses AI to do the work. He recently had a job sent back because AI-generated content was detected in it. He used to employ real people for the work, but they sometimes used AI too. Anyway, he knows I'm a "hacker" LMAO, so he asked me if I knew any way to fool the anti-AI systems. My idea was to run a private LLM and train it on genuinely human-written content, like ebooks on the subject of each assignment. Do you think that method could fool these things?

[–] al4s@feddit.de 3 points 4 months ago (2 children)

LLMs work by always predicting the most likely next token, and LLM detection works by checking how often the most likely next token was actually the one chosen. You can tell the LLM to pick less likely tokens more often (turn up the temperature parameter), but then you just get gibberish out. So no, there isn't.
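The detection idea above can be sketched as a toy score. This is just an illustration, not a real detector: the `next_token_probs` function here is a hard-coded stand-in for an actual language model, and real detectors use more refined statistics (e.g. perplexity and burstiness) rather than a plain top-1 hit rate.

```python
# Toy sketch of likelihood-based AI-text detection.
# Assumption: a real detector would query an actual language model;
# this stand-in returns fixed probabilities for demonstration only.
def next_token_probs(context):
    """Hypothetical next-token distribution (stand-in for a real LLM)."""
    return {"the": 0.6, "a": 0.3, "x": 0.1}

def top1_rate(tokens):
    """Fraction of tokens that were the model's single most likely choice.

    Low-temperature AI output tends to score high (it usually picks the
    top token); human text tends to score lower and more unevenly.
    """
    hits = 0
    for i, tok in enumerate(tokens):
        probs = next_token_probs(tokens[:i])
        if tok == max(probs, key=probs.get):
            hits += 1
    return hits / len(tokens)

print(top1_rate(["the", "the", "a", "the"]))  # 0.75
```

Raising the temperature lowers this score by making rarer tokens more likely, which is exactly why the text degrades into gibberish before it reliably beats the detector.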

[–] TheBigBrother@lemmy.world 0 points 4 months ago

What about training the AI on human-generated content, for example e-books?