Imagine an actor who never ages, never walks off set or demands a higher salary.
That’s the promise behind Tilly Norwood, a fully AI-generated “actress” currently being courted by Hollywood’s top talent agencies. Her synthetic presence has ignited a media firestorm, denounced as an existential threat to human performers by some and hailed as a breakthrough in digital creativity by others.
But beneath the headlines lies a deeper tension. The binaries used to debate Norwood — human versus machine, threat versus opportunity, good versus bad — flatten complex questions of art, justice and creative power into soundbites.
The question isn’t whether the future will be synthetic; it already is. Our challenge now is to ensure that it is also meaningfully human.
All agree Tilly isn’t human
Ironically, at the centre of this polarizing debate is a rare moment of agreement: all sides acknowledge that Tilly is not human.
Her creator, Eline Van der Velden, the CEO of AI production company Particle6, insists that Norwood was never meant to replace a real actor. Critics agree, albeit in protest. SAG-AFTRA, the union representing actors in the U.S., responded with a statement:
“It’s a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation. It has no life experience to draw from, no emotion, and from what we’ve seen, audiences aren’t interested in watching computer-generated content untethered from the human experience.”
Their position is rooted in recent history: in 2023, actors went on strike in part over the use of AI, and the resulting agreement secured protections around consent and compensation.
If both sides insist Tilly isn't human, then the controversy isn't just about what Tilly is; it's about what she represents.
I really don't know what we can do with AI today. I do know that what we can do now seemed a distant dream not long ago. It's moving fast, and I can't imagine how far along it'll be in a year, or even five.
... you should probably check before you go selling the what-ifs.
Diffusion is a denoising algorithm. It's just powerful enough that "noise" can mean all the parts that don't look like Shrek eating ramen. Show it a blank page and it'll squint until it sees that. It's pretty good at finding Shrek. It's so-so at finding "eating." You're better off starting from a rough approximation, like video of a guy eating ramen. And it probably doesn't hurt if he's painted green.
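To make that concrete, here is a toy sketch of a DDPM-style denoising loop. Everything in it is illustrative rather than any real model: `toy_denoiser` is a hypothetical stand-in for a trained network that predicts noise, and the schedule values are arbitrary. The second half shows the "start from a rough approximation" trick: instead of starting from pure noise, you noise a real frame only partway and let the model denoise from there.

```python
import numpy as np

# Toy, illustrative diffusion sketch (DDPM-style reverse process).
# `toy_denoiser` is a hypothetical placeholder, NOT a real image model.

T = 1000                                    # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)          # arbitrary noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_denoiser(x, t, prompt):
    # A real model is a neural net trained to predict the noise in x at
    # step t, conditioned on the prompt. Returning zeros keeps the loop
    # runnable while doing nothing useful.
    return np.zeros_like(x)

def denoise(x, t_start, prompt, rng):
    # Reverse process: walk from step t_start down to 0, subtracting a
    # little predicted noise at each step.
    for t in range(t_start, 0, -1):
        eps = toy_denoiser(x, t, prompt)
        coef = betas[t - 1] / np.sqrt(1.0 - alpha_bars[t - 1])
        x = (x - coef * eps) / np.sqrt(alphas[t - 1])
        if t > 1:  # inject a bit of fresh noise, except at the final step
            x = x + np.sqrt(betas[t - 1]) * rng.standard_normal(x.shape)
    return x

rng = np.random.default_rng(0)

# Text-to-image: start from pure noise (the "blank page") and squint.
x_T = rng.standard_normal((64, 64, 3))
img = denoise(x_T, T, "Shrek eating ramen", rng)

# Image-to-image: start from a rough approximation instead. Noise a real
# frame only partway (t_start < T), then denoise from there.
frame = np.zeros((64, 64, 3))               # stand-in for "a guy eating ramen"
t_start = int(0.6 * T)                      # how much of the frame gets re-imagined
noisy = (np.sqrt(alpha_bars[t_start - 1]) * frame
         + np.sqrt(1.0 - alpha_bars[t_start - 1])
         * rng.standard_normal(frame.shape))
img2 = denoise(noisy, t_start, "Shrek eating ramen", rng)
```

The point of the second half is the point of the paragraph above it: the closer the starting point is to the target, the less squinting the model has to do.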