this post was submitted on 21 Feb 2024
288 points (95.0% liked)
Technology
Here’s an idea: let’s all panic and make wild-ass assumptions with zero data, lmao.
The article doesn’t even state what their settings were, nor try to recreate anything.
The whole fucking article is he-said-she-said bullshit.
If I set top_p to 0.2, I too can make the model say wild, psychotic shit.
If I crank the temperature up, I too can make the model seem delusional but still understandable.
With a system-level prompt, I can make the model act and speak however I want (for the most part).
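To make the point concrete: here’s a toy, self-contained sketch of how temperature and top_p (nucleus) sampling reshape a model’s next-token distribution. The logits are made up for illustration; this isn’t any vendor’s actual implementation, just the standard math those settings apply.

```python
import math

def softmax(logits, temperature=1.0):
    # Higher temperature flattens the distribution (weirder, more "delusional"
    # samples); lower temperature sharpens it toward the top token.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, top_p):
    # Nucleus sampling: keep the smallest set of tokens whose cumulative
    # probability reaches top_p, then renormalize. A very low top_p (e.g. 0.2)
    # can leave only one or two candidates, making output rigid and strange.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

# Hypothetical next-token logits for four candidate tokens.
logits = [4.0, 3.5, 2.0, 0.5]
print(top_p_filter(softmax(logits, temperature=1.0), top_p=0.2))  # tiny nucleus
print(softmax(logits, temperature=2.0))  # noticeably flatter distribution
```

Point being: the same model can look sane or unhinged depending entirely on these knobs, which is why an article that never reports the settings proves nothing.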
More bullshit articles designed to keep regular people away from newly formed power. Not gonna let these people try and scare y’all away. Stay curious.
Where did that come from?
AI bros need to tell themselves that everyone is in a delusional panic about "AI" to justify their shilling for them.
Literally the top comment for me (maybe not for you, depending on which instance you’re registered with, since some instances block others) says this is because they’re training their models on user input, lmfao.
But go off with your douchey assumptions.
🤡
Bear in mind that, depending on your instance, you won’t see the same comments others do.
With that said, top comment here for me is talking about how this was because they’re training their models on user input.
As if the leaders in fucking AI development don’t know what they’re doing, especially with a concept that’s covered in every intro-level AI course in college. 🙄
Then again, not everyone went to college, I guess, and some would rather make armchair assumptions and pray at the altar of Google, despite complaining about how AI is ruining everything, and despite Google being one of the first to do shit like this with their search engine for “better results.” (Not directed at you, of course. Thanks for being respectful and just asking a simple question rather than making assumptions.)
I mean, OpenAI themselves acknowledged there was an issue and said they were working on it:
“We are investigating reports of unexpected responses from ChatGPT,” one update read, before another soon after announced that the “issue has been identified.” “We’re continuing to monitor the situation,” the latest update read.