this post was submitted on 11 Feb 2024
643 points (97.9% liked)

Technology


The White House wants to 'cryptographically verify' videos of Joe Biden so viewers don't mistake them for AI deepfakes

Biden's AI advisor Ben Buchanan said a method of clearly verifying White House releases is "in the works."

[–] Darkassassin07@lemmy.ca 4 points 9 months ago (5 children)

I'm more interested in how exactly you'd implement something like this.

It's not like videos viewed on TikTok display a hash for the file you're viewing, and users wouldn't look at that data anyway, especially those who would be swayed by a deepfake...

[–] cley_faye@lemmy.world 4 points 9 months ago

Like you said, the issue is verification by the end user. It is trivial to provide a digitally signed (and timestamped) file, and just as trivial to provide trusted tools to verify those files. What is immensely difficult is providing a solution users will actually care about; which is why, more often than not, the request companies in the data-authenticity business hear is "can we show a green check on screen? That would be perfect!"

And we end up with something that nobody checks beyond the "it's probably ok" stage. If the goal is to teach the masses to vet their sources, either they have a miracle solution or it just won't work. (And all of that assumes people actually care about checking the authenticity of the stuff they see, which is not the norm as it is…)
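The sign-then-verify flow the comment calls "trivial" can be sketched in a few lines of Python's standard library. Note the caveats: this sketch uses a symmetric HMAC purely to stay self-contained, whereas a real release pipeline would use an asymmetric signature (e.g., Ed25519) so the public can verify with only the public key; the key and message values below are illustrative, not anything from the article.

```python
import hashlib
import hmac


def sign(key: bytes, data: bytes) -> str:
    """Produce a hex tag binding `data` to the holder of `key`."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()


def verify(key: bytes, data: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(key, data), tag)


# Illustrative only: a real deployment would sign the video file's bytes.
release = b"official-video-bytes"
tag = sign(b"press-office-key", release)
print(verify(b"press-office-key", release, tag))          # authentic file
print(verify(b"press-office-key", release + b"x", tag))   # tampered file
```

The hard part, as the comment says, is not this code; it is getting viewers (and platforms) to run the check and care about the result.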

[–] autotldr@lemmings.world 3 points 9 months ago

This is the best summary I could come up with:


The White House is increasingly aware that the American public needs a way to tell that statements from President Joe Biden and related information are real in the new age of easy-to-use generative AI.

Big Tech players such as Meta, Google, Microsoft, and a range of startups have raced to release consumer-friendly AI tools, leading to a new wave of deepfakes — last month, an AI-generated robocall attempted to undermine voting efforts related to the 2024 presidential election using Biden's voice.

Yet, there is no end in sight for more sophisticated new generative-AI tools that make it easy for people with little to no technical know-how to create fake images, videos, and calls that seem authentic.

Ben Buchanan, Biden's Special Advisor for Artificial Intelligence, told Business Insider that the White House is working on a way to verify all of its official communications due to the rise in fake generative-AI content.

While last year's executive order on AI created an AI Safety Institute at the Department of Commerce tasked with creating standards for watermarking content to show provenance, the effort to verify White House communications is separate.

Ultimately, the goal is to ensure that anyone who sees a video of Biden released by the White House can immediately tell it is authentic and unaltered by a third party.


The original article contains 367 words, the summary contains 218 words. Saved 41%. I'm a bot and I'm open source!

[–] npaladin2000@lemmy.world 2 points 9 months ago

If the White House actually makes the deep fakes, do they count as "fakes?"
