this post was submitted on 30 Apr 2024
837 points (98.5% liked)
Technology
Oh it’s valid whether you want to believe it or not.
I think "valid" just isn't the word you're looking for here. Validity requires verification, and since your point was verifiably false, "valid" isn't what you were going for. "Scary hypothesis," "nightmare fuel," or any term that doesn't have to describe something actually possible in order to trigger a fear response would fit better.
It is becoming easier to spot A.I. posters. They'll have a coherent argument, yet will constantly misspell words that a person of their supposed intelligence should know. It'll look and sound about right, but not 100%. I've read that traffic is about 50% bots; it's starting to add up.
Are you ok? You've doubled down on nonsense. Seriously, take a breath. Look into some treatment for anxiety.
The whole danger is that AI text generation doesn't misspell words, and comes across as highly confident.
There's actual research out there on spotting AI-generated text. Most of it is based on tone, the frequency of specific phrases, and sentence structure.
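To illustrate the phrase-frequency idea, here's a toy sketch in Python. The marker phrases and the scoring formula are my own illustrative assumptions, not taken from any actual detector:

```python
# Toy "AI-ness" heuristic: count assumed marker phrases per 100 words.
# MARKER_PHRASES is a hypothetical list for illustration only.
MARKER_PHRASES = [
    "as an ai language model",
    "it's important to note",
    "in conclusion",
    "delve into",
]

def marker_score(text: str) -> float:
    """Return marker-phrase hits per 100 words (a crude frequency signal)."""
    lowered = text.lower()
    words = len(lowered.split())
    if words == 0:
        return 0.0
    hits = sum(lowered.count(phrase) for phrase in MARKER_PHRASES)
    return 100.0 * hits / words

sample = "It's important to note that we should delve into this topic."
print(round(marker_score(sample), 2))
```

Real detectors combine many more signals (perplexity, sentence-length variance, stylometry) and are still unreliable; this just shows the shape of a frequency-based check.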
If you're conflating this with the fact that spam emails and scam comments are often misspelled: that's done deliberately, partly to dodge word filters and partly to ensure that the people who fall for them are inattentive enough to overlook other warning signs, making them easy marks. If a post isn't trying to get you to take an action and isn't part of a coordinated push to manufacture consent, the chance it's AI is low.
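The word-filter dodge is easy to see with a toy example. The blocklist below is hypothetical, but it shows why a deliberate misspelling slips past naive substring filters:

```python
# Naive spam word filter of the kind scammers evade with deliberate misspellings.
# BLOCKLIST is a made-up example, not any real provider's list.
BLOCKLIST = {"free money", "winner", "prize"}

def is_spam(text: str) -> bool:
    """Flag text containing any blocklisted phrase (case-insensitive)."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

print(is_spam("Claim your FREE MONEY now"))  # caught by the filter
print(is_spam("Claim your fr3e m0ney now"))  # misspelling slips past
```

Real filters use fuzzy matching and character normalization for exactly this reason, but the arms race is the same: every filter rule invites a new misspelling.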
Also, the statistic about internet traffic you're thinking of is about bots in general. That's largely scripts and web scrapers, not automated posters making arguments multiple levels down in an incredibly quiet thread on low-user-count social media like Lemmy.