this post was submitted on 24 May 2024
289 points (97.7% liked)

Shall we trust an LLM to write legal definitions (deepfake, in this case)? It seems the state rep. was unable to proofread the model output, as he was "really struggling with the technical aspects of how to define what a deepfake was."

[–] j4k3@lemmy.world 6 points 5 months ago (1 children)

🙊 and the groupthink nonsense continues...

Y'all know those grammar checking thingies? Yeah, same basic thing. You know when you're stuck writing something and your wording isn't quite what you'd like? Maybe you ask another person for ideas; same thing.

Is it smart to ask AI to write something outright? About as smart as asking a random person on the street to do the same. Is it smart to use proprietary AI that has ulterior political motives? Things might leak, like this, by proxy. Is it smart for people to ask others to proofread their work? Does it matter if that person is a grammar checker that suggests alternate wording and has most accessible human-written language at its disposal?
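The grammar-checker analogy above can be made concrete. At its simplest, a wording-suggestion tool compares what you typed against a corpus of human-written language and proposes the closest matches. A minimal sketch of that idea, using Python's standard-library `difflib` against a tiny hypothetical vocabulary (a real checker or LLM draws on vastly more data and context):

```python
import difflib

# Tiny hypothetical vocabulary standing in for a tool's knowledge
# of human-written language.
VOCABULARY = ["definition", "deepfake", "synthetic", "fabricated", "legislation"]

def suggest_wording(word, vocabulary=VOCABULARY, n=3):
    """Return up to n close matches for a possibly-garbled word,
    ranked by similarity ratio (cutoff filters weak matches)."""
    return difflib.get_close_matches(word, vocabulary, n=n, cutoff=0.6)

print(suggest_wording("deffinition"))
```

This is only the fuzzy-matching kernel of the idea; the point of the analogy is that a suggestion tool, whether `difflib` or an LLM, proposes wording and the human still decides whether it matches their intent.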

[–] bolexforsoup@lemmy.blahaj.zone 9 points 5 months ago* (last edited 5 months ago) (1 children)
[–] j4k3@lemmy.world 3 points 5 months ago

I don't see any issue whatsoever in what he did. The model can draw meaning across all human language in a way humans are not even capable of doing. I could go as far as creating a training corpus based on all written works of the country's founding members and generate a nearly perfect simulacrum that includes much of their personality and politics.

The AI is not really the issue here. The issue is how well the person uses the tools available. If he asked it for writing advice on word specificity, it shouldn't matter, so long as the person proofreads the output and it follows their intent. If a politician's significant other writes a sentence of a speech, does it matter? None of them write their own sophist campaign nonsense or their legislative works.