this post was submitted on 24 Aug 2023
127 points (94.4% liked)

Canada


Until AI is allowed to vote perhaps they sit the fuck down.

top 27 comments
[–] treadful@lemmy.zip 42 points 1 year ago (1 children)

Political ads that contain any outright lies should be illegal.

[–] avidamoeba@lemmy.ca 9 points 1 year ago (2 children)

Isn't this already the case?

[–] treadful@lemmy.zip 9 points 1 year ago (1 children)

Sorry, I apparently got lost and didn't realize I was in a Canadian community. I have no idea what your laws are about it. In the US it sure isn't illegal.

What, seriously? Damn.

[–] corsicanguppy@lemmy.ca 5 points 1 year ago

You'd think.

[–] Cruxifux@lemmy.world 31 points 1 year ago (1 children)

Honestly ads should just be illegal.

[–] ttmrichter@lemmy.world 5 points 1 year ago (2 children)

Ads should be explicitly opt-in. Not even "implicit" opting-in or opt-out. You should specifically say you want to see ads before seeing them. And if that means no more billboards, ads in bus shelters, ads on radio, etc. then I'm all for it.

[–] Cruxifux@lemmy.world 1 points 1 year ago

And violations of that law should be punishable by death.

[–] maxprime@lemmy.ml 1 points 1 year ago (2 children)

Don’t get me wrong I like the vision but who pays for server costs?

[–] Tired8281@lemmy.ca 1 points 1 year ago

Perhaps we need a lot fewer servers? Servers that are useful will probably be fine, such as this one we're talking on, that I'm sure gets enough donations to be fine. Servers that spew unwanted ads, well, no great loss.

[–] ttmrichter@lemmy.world 1 points 1 year ago

Users? If you feel the server is worth something, you give them a little something.

[–] FlexibleToast@lemmy.world 12 points 1 year ago (1 children)

They aren't creating themselves. Someone is using the AI as a tool to create the ads. AI is just a tool like any other.

[–] mp3@lemmy.ca 4 points 1 year ago (2 children)

The issue is how these ad creators can operate in the shadows.

[–] GBU_28@lemm.ee 5 points 1 year ago

How wasn't that the case before?

[–] FlexibleToast@lemmy.world 2 points 1 year ago

They don't need AI for that.

[–] ininewcrow@lemmy.ca 10 points 1 year ago

This only works if social media is regulated like every other communications platform before it.

Newspapers were the only source of news 100 years ago ... they were regulated and kept in line to prevent outright lies from being spread or disinformation from being promoted

Radio came after newspapers ... they were regulated and kept in line to prevent outright lies from being spread or disinformation from being promoted

Network TV came after ... they were regulated and kept in line to prevent outright lies from being spread or disinformation from being promoted

Today, everyone uses the internet and specifically social media as their preferred source for news ... social media is not regulated to the same degree as the media platforms before it, and now the world is awash with disinformation, misinformation, non-information, lies and mistruths ... and everyone wonders why that happened.

Regulate All Social Media

[–] MacroCyclo@lemmy.ca 2 points 1 year ago

Where are you seeing them?

[–] Dearche@lemmy.ca 1 points 1 year ago (1 children)

AI generated content will become ubiquitous no matter what. It's only a matter of time, so rather than just banning it and pretending that makes the issue go away, it's better to legislate so that it is less likely to become an issue.

Things like limiting what sorts of ads you're allowed to make, how much money each ad can be worth for election purposes, and rules and regulations on the contents of the ads (like significant penalties for misleading claims or outright lies).

In fact, since AI in theory should be much better at fact checking than humans, the standards of information quality should be much higher and enforced by law, on penalty of a significant fine against either the party's or individual's coffers, or against their campaign funds, depending on when it's done.

I think AI generated content is fine as long as it's not spreading misinformation. Mostly because there's no stopping it. If you ban it, people will find a way around it, so just regulate it to make it as beneficial as possible.

[–] Voroxpete@sh.itjust.works 11 points 1 year ago (1 children)

In fact, since AI in theory should be much better at fact checking than humans, the standards of information quality should be much higher

What we're all calling "AI" right now has basically zero ability to fact check.

Large Language Models are essentially just a form of autocomplete. They predict valid outputs based on statistical analysis of their training data. This makes them quite good at passing the Turing test (i.e., convincing the average user that they have something approximating intelligence), but what they completely lack is the ability to evaluate sources for reliability. That's why it's so easy to deliberately trick them into repeating false information.
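The "autocomplete" point can be made concrete with a toy sketch (this is a deliberately tiny illustration, not how production LLMs are built): a bigram model that predicts the next word purely from how often words follow each other in its training text. Whichever continuation is most frequent wins, regardless of which one is true.

```python
from collections import Counter, defaultdict

# Toy "language model": predict the next word purely from co-occurrence
# counts in the training text. No notion of truth is involved anywhere.
training_text = (
    "the earth is round the earth is round the earth is flat"
).split()

# Count which word follows each word in the training data.
following = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

# "round" wins 2-to-1 over "flat" -- by frequency, not by fact checking.
print(predict_next("is"))
```

If the training data had said "flat" more often, the model would happily predict "flat" instead; frequency in the corpus is the only signal it has.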

Real fact checking is a lot more than just googling something and finding a source that agrees with you. I can find sources claiming that the Earth is flat, aliens rule the world and Hillary Clinton is a baby eating lizard person. But none of those sources are in any way credible. However, explaining why they're not credible is a much more difficult question. Media literacy is a complex skill, one that involves evaluating a huge number of different criteria, using a large number of different metrics, and it often involves making difficult judgement calls. Even people who are good at media literacy can be fooled, or just get it wrong. The entire study of history is basically about evaluating sources, and there are often serious disagreements over the veracity of a piece of information. Good journalists have to be very careful about exactly how they frame information to disambiguate the exact degree of confidence they have in it (i.e., I can say with absolute certainty that this person told me this thing, but I can't say with absolute certainty that what they told me is true)... And that's the good journalists. There are a LOT of bad journalists out there.

It's possible that some hypothetical future generation of AI will be better at fact checking than humans, but that's not what we have today. The only way to get modern LLMs to produce factual information is to be extremely careful about what data they are fed; and even then, they will often just make shit up out of whole cloth from that data. Any output has to be verified by a human operator to avoid situations like Microsoft recommending the Ottawa food bank as a must-see tourist attraction.

[–] Dearche@lemmy.ca 1 points 1 year ago (1 children)

No, I know that modern AI has no real ability to fact check, but the reason is because they've never been built that way, nor do they have the resources to do it properly. They have no way to know what is a reliable source, nor how to interpret the data in a meaningful way if it needs to be used in an abstract manner.

But I do believe that modern AI technology should be able to do so if given the resources. Create an AI that only references from a list of credible sources, and is able to compare them to what is said elsewhere.

I'm no AI specialist or anything, so maybe I'm completely wrong and such a method wouldn't work. But at the very least, I haven't even heard of any real attempt at making a fact checking AI yet. All the existing ones are shit and only adapt normal language learning models to reference other internet sources regardless of their legitimacy.

[–] Voroxpete@sh.itjust.works 1 points 1 year ago

The problem is that for any of what you're describing to work, AI has to be capable of comprehension and interpretation, neither of which are capabilities that LLMs have. This would be a quantum leap forward in terms of AI technology.

That's the key thing that has to be understood about "AI": it fundamentally does not understand any of the words that it's saying. It's engaged in nothing more than extremely complex mimicry. Even a dog has more comprehension of human language than an LLM, and you wouldn't trust a dog to fact check political ads. Remember, even when working from accurate training data, LLMs will still cheerfully invent entirely fictitious data that just happens to fit the pattern of the training data, because that's all they are: pattern matchers.

If I present an AI with the statements "Mike Harris sold our LTC care system to corporate profiteers" and "Mike Harris sold your grandma's house to corporate profiteers" it has no way of accurately determining if the latter statement is true or false, because it fits the pattern of the first statement. A human can instantly distinguish between the concept of a long term care home and a person's privately owned house. An AI doesn't know what a person is, what a long term care home is, what ownership is, what the difference between private and public ownership is, what a house is and how that's different from a long term care home even though both are referred to as homes, what it means to sell something, what profiteering is, and whether or not that term accurately describes the actions taken by the corporations that bought most of Ontario's LTC system. And then you have to get into the complex legalities of whether or not you're allowed to use the term "profiteers" in a political ad... It's a nightmare of complexity.

If there's a way to get to what you're describing, from where we are now, no one has come up with it yet and the first company that does will be rich beyond their wildest dreams. We're just not even remotely close to that kind of technology.

[–] Tired8281@lemmy.ca 0 points 1 year ago

lol that would just move them offshore, and the only thing worse than AI generated political ads is foreign AI generated political ads.

[–] NotAPenguin@kbin.social -2 points 1 year ago

It's not like cameras and photoshop can vote either