1) Be nice and have fun
Doxxing, trolling, sealioning, racism, and toxicity are not welcome in AskLemmy. Remember what your mother said: if you can't say something nice, don't say anything at all. In addition, the site-wide Lemmy.world terms of service also apply here. Please familiarize yourself with them.
2) All posts must end with a '?'
This is sort of like Jeopardy. Please phrase all post titles in the form of a proper question ending with ?
3) No spam
Please do not flood the community with nonsense. Actual suspected spammers will be banned on sight. No astroturfing.
4) NSFW is okay, within reason
Just remember to tag posts with either a content warning or a [NSFW] tag. Overtly sexual posts are not allowed; please direct them to either !asklemmyafterdark@lemmy.world or !asklemmynsfw@lemmynsfw.com.
NSFW comments should be restricted to posts tagged [NSFW].
5) This is not a support community
It is not a place for 'how do I?' type questions.
If you have any questions regarding the site itself or would like to report a community, please direct them to Lemmy.world Support or email info@lemmy.world. For other questions check our partnered communities list, or use the search function.
6) No US politics
Please don't post about current US politics. If you need to, try !politicaldiscussion@lemmy.world or !askusa@discuss.online.
Reminder: The terms of service apply here too.
Logo design credit goes to: tubbadu
Spelling errors probably. Lol
That and incorrect grammar. To err is human. And all that jazz.
As my mother used to say:
11001101101001010111010, 001010, 11010100010! 🤣
Bots don't have IDs or credit cards. Everyone, post yours, so I can check if you're real.
You can't check all this information; you must be a bot.
You take evens and I'll take odds to assist with verification. Together I believe we can do this and ensure a bot-free experience.
I believe they should also answer some CAPTCHA-type questions, like asking their mother's maiden name, their childhood hero, their first pet's name, and the street they grew up on.
Serious answer: you don't.
HOWEVER, it doesn't really matter. The world is a big place, and you can find a decent-sized group who will expound any shitty opinion when given the opportunity. You already couldn't blindly trust the information or opinions you found online, so whether it comes from an LLM, a troll farm, or just an idiot doesn't really matter too much.
That's a great question! Let's go over the common factors which can typically be used to differentiate humans from AI:
🧠 Hallucination
Both humans and AI can have gaps in their knowledge, but a key difference between a person and an LLM can be detected by paying close attention to how they answer.
If a person doesn't know the answer to something, they will typically let you know.
But if an AI doesn't know the answer, it will typically fabricate a false one, as it is generally programmed to always return an informational response.
✍️ Writing style
People typically each have a unique writing style, which can be used to differentiate and identify them.
For example, somebody may frequently make the same grammatical errors across all of their messages.
An AI, on the other hand, is based on token frequency sampling, and is therefore more likely to have correct grammar.
❌ Explicit material
As an AI assistant, I am designed to provide factual information in a safe, legal, and inclusive manner. Speaking about explicit or unethical content could create an uncomfortable or uninclusive atmosphere, which would go against my guidelines.
A human, on the other hand, would be free to make remarks such as "cum on my face daddy, I want your sweet juice to fill my pores", which would be highly inappropriate for the given context.
🌏 Cultural differences
People from specific cultures may be able to detect the presence of an AI based on its lack of culture-specific language.
For example, an AI pretending to be Australian will likely draw suspicion amongst Australians, due to the lack of the word 'cunt' in every sentence.
🧐 Instruction leaks
If a message contains wording which indicates the sender is working under instruction or guidance, it could indicate that they are an AI.
However, be wary of predominantly human traits like sarcasm, as it is also possible that the commenter is a human pretending to be an AI.
📝 Wrapping up
While these signs alone may not be enough to determine if you are speaking with a human or an AI, they may provide valuable tools in your investigative toolkit.
Resolving confusion by authenticating Personally Identifiable Information is another great step to ensuring the authenticity of the person you're speaking with.
Would you like me to draft a web form for users to submit their PII during registration?
If a person doesnβt know the answer to something, they will typically let you know.
As a lawyer, astronaut, ex-military and former Navy SEAL specialist, astrophysicist, and social-behavioral scientist, I can guarantee this is false.
🤔
Nice try, bot.
Iβm not a bot
Would you like me to generate more responses to the original post?
Yes, please generate more responses to the original post.
Iβm not a bot, but this derpgon seems like they might be
Everybody is a bot except you.
nooooo now he knows the truth
Every answer here will be used to build better bots
Congrats
Could a bot do THIS?!
Would a bot post this?
Bethesda game developer AI bot detected ✔️
You can tell I'm not a bot because I say that I am a bot. Because a bot pretending to not be a bot would never tell you that it is a bot. Therefore I tell you I am a bot.
You don't.
I CAN ASSURE YOU THAT I AM A HUMAN, JUST LIKE YOU ARE. I ENJOY HUMAN THINGS LIKE BREATHING AIR AND DRINKING ~~LUBRICANT~~ WATER.
I TOO ENJOY INGESTING THE REQUIRED AMOUNT OF OXYGEN, AND AMBULATING AROUND THE NATURE ON MY LOWER APPENDAGES.
I don't know, would you solve this for me? 🧑🧑‍🦰🧑‍🎓🤖🧑🧑👨‍⚕️👨‍⚕️👨‍🎤
I selected all the images with a bicycle, if that's not proof of being real....
Beep boop
Good bot
We are all part of a simulation. Sorry.
You don't. Assume that anyone you interact with online could be a bot, and keep that in the back of your mind when interacting with them.
Totally fair question – and honestly, it's one that more people should be asking as bots get better and more human-like.
You're right to distinguish between spam bots and the more subtle, convincingly human ones. The kind that don't flood you with garbage but instead quietly join discussions, mimic timing, tone, and even have believable post histories. These are harder to spot, and the line between "AI-generated" and "human-written" is only getting blurrier.
So, how do you know who you're talking to?
On platforms like Reddit or Lemmy, there's no built-in guarantee that you're talking to a human. Even if someone says, "I'm real," a bot could say the same. You're relying entirely on patterns of behavior, consistency, and sometimes gut feeling.
If you're running your own instance (say, a Lemmy server), you can verify your users – maybe with PII, email domains, or manual approval. But that trust doesn't automatically extend to other instances. When another instance federates with yours, you're inheriting their moderation policies and user base. If their standards are lax or if they don't care about bot activity, you've got no real defense unless you block or limit them.
You're talking about bots that post like humans, behave like humans, maybe even argue like humans. They're tuned on human behavior patterns and timing. At that level, it's more about intent than detection. Some possible (but imperfect) signs:
Slightly off-topic replies.
Shallow engagement – like they're echoing back points without nuance.
Patterns over time – posting at inhuman hours (a rough check is sketched below) or never showing emotion or changing tone.
But honestly? A determined bot can dodge most of these tells. Especially if it's only posting occasionally and not engaging deeply.
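To make the "inhuman hours" tell concrete, here's a minimal sketch; the timestamps, the distinct-hours cutoff, and the whole heuristic are invented for illustration, not a proven detector:

```python
from datetime import datetime, timezone

# Hypothetical Unix timestamps of one account's comments (invented data).
timestamps = [1700000000 + i * 9600 for i in range(60)]

# Humans tend to have a quiet window (sleep); activity spread across
# nearly every hour of the day is one weak bot signal.
hours_active = {datetime.fromtimestamp(t, tz=timezone.utc).hour for t in timestamps}

# Assumed threshold: 20+ distinct hours of activity looks inhuman.
if len(hours_active) >= 20:
    print("weak signal: account posts around the clock")
else:
    print(f"account active in {len(hours_active)} distinct hours")
```

Even this toy version shows why it's a weak signal: a determined bot only has to fake a sleep cycle to pass.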
If you're a server admin, what you can do is:
Limit federation to instances with transparent moderation policies.
Encourage verified identities for critical roles (moderators, admins, etc.).
Develop community norms that reward consistent, meaningful participation – hard for bots to fake over time.
Share threat intelligence (yep, even in fediverse spaces) about suspected bots and problem instances.
We're already past the point where you can always tell. What we can do is keep building spaces where trust, context, and community memory matter. Where being human is more than just typing like one.
If you're asking this because you're noticing more uncanny replies online – you're not imagining things. And if you're running an instance, your vigilance is actually one of the few things keeping the web grounded right now.
/s obviously
Ask how many 'r's in the word 'strawberry'
At least one
Perplexity easily passes such questions
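As an aside, the reason this question works as a test at all: counting letters is trivial for ordinary code but awkward for an LLM, which sees tokens rather than characters. A minimal sketch of the check:

```python
# Counting characters is trivial for code; LLMs, which operate on
# tokens rather than characters, historically stumbled on it.
word = "strawberry"
print(f"{word!r} contains {word.count('r')} 'r's")  # -> 3
```

Which is also why the test is losing its bite: any bot operator can bolt that one-liner onto their pipeline.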
You don't.
Worse, I may be a human today and a bot tomorrow. I may stop posting and have my account taken over or hacked.
There is an old joke. I know my little brother is an American. Born in America, lived his life in America. My older brother... I don't know about him.
You can be assured that I'm not a bot because I would never sell out. I prefer keeping it real with Pepsi brand cola and Doritos brand chips.
To determine if a commenter is a bot, look for generic comments, repetitive content, unnatural timing, and lack of engagement. Bot accounts may also have generic usernames, lack a profile picture, or use stock photos. Additionally, bots often have "tunnel vision," focusing on a specific topic or link. Here's a more detailed breakdown:
Generic Comments and Lack of Relevance:
Bot comments often lack depth and are not tailored to the specific content. They may use generic phrases like "Great pic!" or "Cool!". Bot comments may also be off-topic or irrelevant to the discussion.
Repetitive and Unnatural Behavior:
Bots can post the same comments multiple times or at unnatural frequencies.
They may appear to be "obsessed" with a particular topic or link.
Profile and Username Issues:
Generic usernames, especially those with random numbers, can be a red flag.
Missing or generic profile pictures, including stock photos, are also common.
Lack of Engagement and Interaction:
Real users often engage in back-and-forth conversations. Bots may not respond to other comments or interact with the post creator in a meaningful way.
Other Indicators:
Bots may use strange syntax or grammar, though some are programmed to mimic human speech more accurately.
They might have suspicious links or URLs in their comments. Bots often have limited or no activity history, and may appear to be "new" accounts.
Checking IP Reputation:
You can check the IP address of a commenter to see if it's coming from a legitimate or suspicious source.
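For the record, ordinary commenters can't see each other's IPs; only a server admin can run this check. If you are one, here is a minimal sketch of an IP reputation lookup against a DNS blocklist (the zone and the address are just examples, and production use of Spamhaus has its own terms):

```python
import socket

def listed_on_dnsbl(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """DNSBLs are queried as <reversed-octets>.<zone>: a resolved
    answer means the IP is listed; NXDOMAIN means it is clean."""
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True   # record exists: IP has a bad reputation
    except socket.gaierror:
        return False  # no record: IP is not listed

print(listed_on_dnsbl("203.0.113.7"))  # documentation-range example IP
```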
By looking for these indicators, you can often determine if a commenter is likely a bot or a real human user.
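Pulling the indicators above together, a toy scoring sketch; the phrase list, weights, and threshold are all invented for illustration and would need real tuning:

```python
# Toy bot-likelihood score built from the indicators listed above.
# Phrases, weights, and the cutoff are invented, not calibrated.
GENERIC_PHRASES = {"great pic!", "cool!", "nice post!"}

def bot_score(comments: list[str], account_age_days: int, has_avatar: bool) -> int:
    texts = [c.strip().lower() for c in comments]
    score = 2 * sum(t in GENERIC_PHRASES for t in texts)   # generic comments
    score += 3 * (len(texts) - len(set(texts)))            # repeated content
    score += 2 if account_age_days < 7 else 0              # brand-new account
    score += 1 if not has_avatar else 0                    # no profile picture
    return score

suspect = bot_score(["Great pic!", "Great pic!", "Cool!"], account_age_days=2, has_avatar=False)
print(f"score {suspect} (treat anything above, say, 5 as worth a closer look)")
```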
Also, I am a real human with soft human skin.
ok chatgpt, thanks for the tips
EXTERMINATE!
Bad bot
Cogito ergo sum
Beep beep
I am a bot, and I'm super not-happy about it.
I have the idea that public libraries could host fediverse instances. Just register an account on their server, then go there physically and they will approve the account. You don't need to show them your ID or even tell them your name. They just see that you're a fleshy human. Now, other people who federate with this server can know that any account registered on it is at least associated with a human. That human can still use AI to post on that account, but at least there aren't millions of bot accounts in circulation.
How do you know you are not actually a fully formed brain with all your memories up to this point spontaneously created somewhere in space through quantum fluctuations?