this post was submitted on 05 May 2026
678 points (99.1% liked)
Technology
Ah yes, “synthetic users.” This is being pushed at my job as well. We’re supposed to use AI to design the next feature for our website, then ask AI “users” what they think of it.
That’s not our entire vetting process - it’s supposed to replace someone just writing down an idea and saying “I think this is good.” And I agree that just firing from the hip like that is dumb. We want our product managers to do more research into their ideas before they get greenlit to be built.
The question is whether AI “synthetic users” add anything of value. The team that put this tool into service noted it has a “positivity bias,” aka “you’re absolutely right!” So we feed it an idea we think is good, and it says oh yes it’s very good.
It’s read every customer email we’ve ever received and every user research report ever conducted by our human UX researchers. But it’s still just not that useful. I think AI is very useful for summarization, searching, and collation of information, but this goes beyond that, asking AI to imagine it is a person and then come up with things to say about an entirely novel concept. And AI is not good at that.
You might as well just put all those emails into a hat and pull out random ones. Or maybe categorize them first and pick from the hat your feature falls under.
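For what it's worth, the "hat" idea is just plain random sampling versus stratified-by-category sampling, which is trivial to do for real. A minimal sketch (the email data and function names are made up for illustration):

```python
import random
from collections import defaultdict

# Hypothetical feedback items as (category, text) pairs -- illustrative data only.
emails = [
    ("checkout", "The cart loses items when I go back."),
    ("checkout", "Payment page times out on mobile."),
    ("search", "Filters reset every time I change pages."),
    ("search", "No way to sort by newest."),
    ("account", "Password reset email never arrives."),
]

def pull_from_hat(items, k=2, seed=None):
    """Plain hat draw: a uniform random sample of all feedback."""
    rng = random.Random(seed)
    return rng.sample(items, k)

def pull_from_category_hat(items, category, k=2, seed=None):
    """Categorize first, then draw only from the hat the feature falls under."""
    hats = defaultdict(list)
    for cat, text in items:
        hats[cat].append(text)
    rng = random.Random(seed)
    pool = hats[category]
    return rng.sample(pool, min(k, len(pool)))

print(pull_from_category_hat(emails, "search", k=2, seed=42))
```

Unlike the LLM, this gives you actual verbatim user complaints, and the sampling is reproducible if you fix the seed.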
Try this: ask the AI how useful it is to ask an AI for "synthetic user feedback," and it will probably tell you itself why this particular task is a bad fit for an LLM. I tried it with Haiku; you might need a follow-up question pointing out that real user experience and implementation specifics matter but won't be in the context window. After that it will give an in-depth explanation of why this approach is a waste of resources. (Using an AI to summarize the problem areas users most want addressed can work; it just can't tell you how you did.)