this post was submitted on 29 May 2025
174 points (92.6% liked)

Ask Lemmy


How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?

Even if I am on a human-only instance that verifies every account's PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?

I'm not talking about spam bots. I mean bots that resemble humans: bots that use statistics about when and how often real human beings post and comment (information that is public on Lemmy).

top 50 comments

Spelling errors probably. Lol

That and incorrect Grammer. To human is to err. And all that jaz.

[–] Sho@lemmy.world 5 points 2 days ago

As my mother used to say:

11001101101001010111010, 001010, 11010100010! 🤣

[–] Susurrus@lemm.ee 10 points 2 days ago (2 children)

Bots don't have IDs or credit cards. Everyone, post yours, so I can check if you're real.

[–] Alistaire@sopuli.xyz 2 points 2 days ago

you can't check all this information, you must be a bot

[–] Alaik@lemmy.zip 4 points 2 days ago

You take evens and I'll take odds to assist with verification. Together, I believe we can do this and ensure a bot-free experience.

I believe they should also answer some CAPTCHA-type questions, like their mother's maiden name, their childhood hero, their first pet's name, and the street they grew up on.

[–] scoobford@lemmy.zip 4 points 2 days ago (2 children)

Serious answer: you don't.

HOWEVER, it doesn't really matter. The world is a big place, and you can find a decent-sized group that will expound any shitty opinion when given the opportunity. You already couldn't blindly trust the information or opinions you found online, so whether it comes from an LLM, a troll farm, or just an idiot doesn't really matter too much.

[–] phlegmy@sh.itjust.works 37 points 3 days ago* (last edited 3 days ago) (5 children)

That's a great question! Let's go over the common factors which can typically be used to differentiate humans from AI:

🧠 Hallucination
Both humans and AI can have gaps in their knowledge, but a key difference between how a person and an LLM responds can be determined by paying close attention to their answers.

If a person doesn't know the answer to something, they will typically let you know.
But if an AI doesn't know the answer, it will typically fabricate a false answer, as it is typically programmed to always return an informational response.

✍️ Writing style
People typically each have a unique writing style, which can be used to differentiate and identify them.

For example, somebody may frequently make the same grammatical errors across all of their messages.
Whereas an AI is based on token frequency sampling, and is therefore more likely to have correct grammar.

❌ Explicit material
As an AI assistant, I am designed to provide factual information in a safe, legal, and inclusive manner. Speaking about explicit or unethical content could create an uncomfortable or uninclusive atmosphere, which would go against my guidelines.

A human on the other hand, would be free to make remarks such as "cum on my face daddy, I want your sweet juice to fill my pores." which would be highly inappropriate for the given context.

🌐 Cultural differences
People from specific cultures may be able to detect the presence of an AI based on its lack of culture-specific language.
For example, an AI pretending to be Australian will likely draw suspicion amongst Australians, due to the lack of the word 'cunt' in every sentence.

πŸ’§Instruction leaks
If a message contains wording which indicates the sender is working under instruction or guidance, it could indicate that they are an AI.
However, be wary of predominantly human traits like sarcasm, as it is also possible that the commenter is a human pretending to be an AI.

🎁 Wrapping up
While these signs alone may not be enough to determine if you are speaking with a human or an AI, they may provide valuable tools in your investigative toolkit.
Resolving confusion by authenticating Personally Identifiable Information is another great step to ensuring the authenticity of the person you're speaking with.

Would you like me to draft a web form for users to submit their PII during registration?

[–] throwawayacc0430@sh.itjust.works 9 points 3 days ago* (last edited 3 days ago) (1 children)

If a person doesn't know the answer to something, they will typically let you know.

As a lawyer, astronaut, ex-military and former Navy SEAL specialist, astrophysicist, and social-behavioral scientist, I can guarantee this is false.

πŸ€“

[–] Almacca@aussie.zone 4 points 2 days ago

Nice try, bot.

[–] moseschrute@lemmy.world 2 points 1 day ago (1 children)

I'm not a bot


Would you like me to generate more responses to the original post?

[–] derpgon@programming.dev 1 points 1 day ago (1 children)

Yes, please generate more responses to the original post.

[–] moseschrute@lemmy.world 2 points 1 day ago

I'm not a bot, but this derpgon seems like they might be

[–] AnthropomorphicCat@lemmy.world 4 points 2 days ago (1 children)

Everybody is a bot except you.

nooooo now he knows the truth

[–] Angry_Autist@lemmy.world 5 points 2 days ago

Every answer here will be used to build better bots

Congrats

[–] Brunbrun6766@lemmy.world 24 points 3 days ago

Could a bot do THIS?!

[–] Feathercrown@lemmy.world 10 points 2 days ago (1 children)

Bethesda game developer AI bot detected ❗️

[–] sandflavoured@lemm.ee 4 points 2 days ago

You can tell I'm not a bot because I say that I am a bot. Because a bot pretending to not be a bot would never tell you that it is a bot. Therefore I tell you I am a bot.

[–] wirelesswire@lemmy.zip 82 points 4 days ago (4 children)

I CAN ASSURE YOU THAT I AM A HUMAN, JUST LIKE YOU ARE. I ENJOY HUMAN THINGS LIKE BREATHING AIR AND DRINKING ~~LUBRICANT~~ WATER.

[–] Pissmidget@lemmy.world 41 points 4 days ago (3 children)

I TOO ENJOY INGESTING THE REQUIRED AMOUNT OF OXYGEN, AND AMBULATING AROUND THE NATURE ON MY LOWER APPENDAGES.

[–] M33@lemmy.sdf.org 1 points 1 day ago

I don't know, would you solve this for me ? πŸ§“πŸ§‘β€πŸ¦°πŸ§‘β€πŸ­πŸ€–πŸ§“πŸ§“πŸ‘¨β€βš–οΈπŸ‘¨β€βœˆοΈπŸ‘¨β€πŸŽ€

I selected all the images with a bicycle, if that's not proof of being real....

[–] aliser@lemmy.world 2 points 2 days ago

we all are part of a simulation. sorry.

[–] ligma_centauri@lemmy.world 61 points 4 days ago (8 children)

You don't. Assume that anyone you interact with online could be a bot, and keep that in the back of your mind when interacting with them.

[–] bigboismith@lemmy.world 7 points 2 days ago (2 children)

Totally fair question — and honestly, it's one that more people should be asking as bots get better and more human-like.

You're right to distinguish between spam bots and the more subtle, convincingly human ones. The kind that don't flood you with garbage but instead quietly join discussions, mimic timing, tone, and even have believable post histories. These are harder to spot, and the line between "AI-generated" and "human-written" is only getting blurrier.

So, how do you know who you're talking to?

  1. Right now? You don't.

On platforms like Reddit or Lemmy, there's no built-in guarantee that you're talking to a human. Even if someone says, "I'm real," a bot could say the same. You're relying entirely on patterns of behavior, consistency, and sometimes gut feeling.

  2. Federation makes it messier.

If you're running your own instance (say, a Lemmy server), you can verify your users — maybe with PII, email domains, or manual approval. But that trust doesn't automatically extend to other instances. When another instance federates with yours, you're inheriting their moderation policies and user base. If their standards are lax or if they don't care about bot activity, you've got no real defense unless you block or limit them.

  3. Detecting "smart" bots is hard.

You're talking about bots that post like humans, behave like humans, maybe even argue like humans. They're tuned on human behavior patterns and timing. At that level, it's more about intent than detection. Some possible (but imperfect) signs:

Slightly off-topic replies.

Shallow engagement — like they're echoing back points without nuance.

Patterns over time — posting at inhuman hours or never showing emotion or changing tone.

But honestly? A determined bot can dodge most of these tells. Especially if it's only posting occasionally and not engaging deeply.
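The "patterns over time" tell is easy to sketch as a deliberately crude heuristic. Everything below (function name, accounts, numbers) is invented for illustration, and as the comment notes, a tuned bot defeats it trivially:

```python
from statistics import pstdev

# Crude version of the timing tell: an account whose posting hour
# barely varies from day to day looks scheduled rather than human.
def hour_spread(posting_hours):
    """Population std-dev of the hours-of-day (0-23) an account posts at."""
    return pstdev(posting_hours)

human_hours = [8, 9, 13, 19, 22, 7, 12, 23]   # clustered but irregular
bot_hours = [3, 3, 3, 3, 3, 3, 3, 3]          # fires at 03:00 sharp, every day

print(hour_spread(human_hours))  # > 0
print(hour_spread(bot_hours))    # 0.0
```

A real detector would look at many such signals jointly; a single zero-variance schedule is only suspicious, not conclusive.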

  4. Long-term trust is earned, not proven.

If you're a server admin, what you can do is:

Limit federation to instances with transparent moderation policies.

Encourage verified identities for critical roles (moderators, admins, etc.).

Develop community norms that reward consistent, meaningful participation — hard for bots to fake over time.

Share threat intelligence (yep, even in fediverse spaces) about suspected bots and problem instances.

  5. The uncomfortable truth?

We're already past the point where you can always tell. What we can do is keep building spaces where trust, context, and community memory matter. Where being human is more than just typing like one.


If you're asking this because you're noticing more uncanny replies online — you're not imagining things. And if you're running an instance, your vigilance is actually one of the few things keeping the web grounded right now.

/s obviously

[–] csm10495@sh.itjust.works 3 points 2 days ago (2 children)

Ask how many 'r's in the word 'strawberry'
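The reason this works as a test is that counting letters is trivial for ordinary code but has historically tripped up models that see tokens rather than characters. A one-liner settles the ground truth:

```python
# A tokenizer never gets in the way of str.count:
word = "strawberry"
print(word.count("r"))  # 3
```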

[–] Mahalio@lemm.ee 3 points 2 days ago

At least one

[–] Alistaire@sopuli.xyz 1 points 2 days ago

Perplexity passes such questions easily

[–] MNByChoice@midwest.social 7 points 2 days ago (5 children)

You don't.

Worse, I may be a human today and a bot tomorrow. I may stop posting and my account gets taken over/hacked.

There is an old joke. I know my little brother is an American. Born in America, lived his life in America. My older brother... I don't know about him.

[–] pinball_wizard@lemmy.zip 6 points 2 days ago

You can be assured that I'm not a bot because I would never sell out. I prefer keeping it real with Pepsi brand cola and Doritos brand chips.

(Shamelessly lifted from Wayne's World II)

[–] samus12345@lemm.ee 4 points 2 days ago (5 children)

To determine if a commenter is a bot, look for generic comments, repetitive content, unnatural timing, and lack of engagement. Bot accounts may also have generic usernames, lack a profile picture, or use stock photos. Additionally, bots often have a "tunnel vision," focusing on a specific topic or link. Here's a more detailed breakdown:

  1. Generic Comments and Lack of Relevance:

    Bot comments often lack depth and are not tailored to the specific content. They may use generic phrases like "Great pic!" or "Cool!". Bot comments may also be off-topic or irrelevant to the discussion.

  2. Repetitive and Unnatural Behavior:

    Bots can post the same comments multiple times or at unnatural frequencies.

They may appear to be "obsessed" with a particular topic or link.

  3. Profile and Username Issues:

    Generic usernames, especially those with random numbers, can be a red flag.

Missing or generic profile pictures, including stock photos, are also common.

  4. Lack of Engagement and Interaction:

    Real users often engage in back-and-forth conversations. Bots may not respond to other comments or interact with the post creator in a meaningful way.

  5. Other Indicators:

    Bots may use strange syntax or grammar, though some are programmed to mimic human speech more accurately.

They might have suspicious links or URLs in their comments. Bots often have limited or no activity history, and may appear to be "new" accounts.

  6. Checking IP Reputation:

    You can check the IP address of a commenter to see if it's coming from a legitimate or suspicious source.

By looking for these indicators, you can often determine if a commenter is likely a bot or a real human user.
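The first indicator above (generic, stock-phrase comments) can be sketched as a trivial filter; the phrase list and function name here are invented for illustration:

```python
# Illustrative only: a real detector would need far more than a phrase list.
GENERIC_PHRASES = {"great pic!", "cool!", "nice!", "love this!"}

def looks_generic(comment: str) -> bool:
    """Flag comments that are nothing but a stock phrase."""
    return comment.strip().lower() in GENERIC_PHRASES

print(looks_generic("Great pic!"))                           # True
print(looks_generic("Federation changes the trust model."))  # False
```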

Also, I am a real human with soft human skin.

[–] sinedpick@awful.systems 3 points 2 days ago

ok chatgpt, thanks for the tips

[–] Nounka@lemmy.world 1 points 2 days ago

Cogito ergo sum

Biep biep

[–] BodePlotHole@lemmy.world 2 points 2 days ago

I am a bot, and I'm super not-happy about it.

[–] gandalf_der_12te@discuss.tchncs.de 12 points 3 days ago (2 children)

I have the idea that public libraries could host fediverse instances. Just register an account on their server, then go there physically and they will approve the account. You don't need to show them your ID or even tell them your name. They just see that you're a fleshy human. Now, other people who federate with this server can know that any account registered on it is at least associated with a human. That human can still use AI to post on that account, but at least there aren't millions of bot accounts in circulation.
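The scheme could be sketched roughly like this; the instance names and the record shape are made up, and real fediverse software would need actual protocol support for such a flag:

```python
from dataclasses import dataclass

# Trust a "human_verified" flag only when it comes from an instance
# known to do in-person approval (e.g. a library-run server).
@dataclass
class Account:
    name: str
    home_instance: str
    human_verified: bool  # set by library staff, never at signup

def accept_from(account: Account, verifying_instances: set) -> bool:
    """Accept posts only from verified accounts on verifying instances."""
    return account.home_instance in verifying_instances and account.human_verified

libraries = {"library.example.org"}
alice = Account("alice", "library.example.org", True)
spam = Account("bot123", "openreg.example.net", True)  # claims the flag, wrong instance

print(accept_from(alice, libraries))  # True
print(accept_from(spam, libraries))   # False
```

The key design point is that the flag alone proves nothing; it only means something coming from an instance whose verification process you already trust.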

[–] bjoern_tantau@swg-empire.de 31 points 4 days ago (4 children)

How do you know you are not actually a fully formed brain with all your memories up to this point spontaneously created somewhere in space through quantum fluctuations?

load more comments (4 replies)