this post was submitted on 31 Aug 2023
540 points (98.0% liked)


I'm rather curious to see how the EU's privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn't have a paywall)

top 50 comments
[–] Primarily0617@kbin.social 200 points 1 year ago* (last edited 1 year ago) (4 children)

it's crazy that "it's too hard :(" has become an acceptable justification for just ignoring the law within tech circles

[–] BrianTheeBiscuiteer@lemmy.world 91 points 1 year ago (6 children)

I'm not an AI expert, and I wouldn't say it is too hard, but I believe removing a specific piece of data from a model is like trying to remove excess salt from a stew. You can add things to make the stew less salty but you can't really remove the salt.

The alternative, which is a lot of effort (but boo-hoo for big tech), is to throw out the model and start over without the data in question. These companies would do well to start with models built on public or royalty-free data and then add riskier data on top of that (so you only have to re-bake starting from the "public" version).

[–] Primarily0617@kbin.social 46 points 1 year ago

sounds like big tech shouldn't have spent the last decade investing in a kitchen refit so that they could make stew really well but nothing else

[–] GoosLife@lemmy.world 27 points 1 year ago* (last edited 1 year ago) (7 children)

If there's something illegal in your dish, you throw it out. It's not a question. I don't care that you spent a lot of time and money on it. "I spent a lot of time preparing the circumstances leading to this crime" is not an excuse, neither is "if I have to face consequences for committing this crime, I might lose money".

[–] Robaque@feddit.it 5 points 1 year ago

Perhaps long pig stew could serve as an apt comparison, lol

[–] Grandwolf319@sh.itjust.works 10 points 1 year ago

Replace salt with poison or an allergen and it fully holds. If a batch has been contaminated, then yes, you should try again.

But now that the cat is out of the bag, other companies are less willing to let their data be scraped, given how valuable it can be.

I think big tech knew this: they could only build these models on unfiltered data before the AI craze.

[–] Zeth0s@lemmy.world 20 points 1 year ago* (last edited 1 year ago)

It's actually a pretty normal thing in law. Laws are created with common sense and compromises in mind.

Current EU laws do not cover generative AI. Now the EU needs to decide how to deal with it: treat it as a "lossy compressed database" and enforce a variation of the GDPR with added fuzziness, or do something else.

[–] reverendsteveii@lemm.ee 7 points 1 year ago

I just saw an article that said that ISPs are trying to whine their way out of listing the fees they charge because it's too hard. Which is wild because they certainly know what I owe them after I sign the contract, but somehow it's just impossible for them to determine right up until the moment that I'm obligated to pay it.

[–] garyyo@lemmy.world 5 points 1 year ago

Always has been. The laws are there to incentivize good behavior, but when the cost of complying is larger than the projected cost of not complying, they will ignore it and deal with the consequences. We regular folk generally can't afford not to comply (except for the low-stakes laws everyone breaks day to day), but when you have money to burn and a lot at stake, the decision becomes more complicated.

The tech part of this is that we don't really even know if removing data from these sorts of models is possible in the first place. The only way to remove it is to throw away the old model and make a new one (i.e. retrain it) without the offending data. It's similar to how you can't get a person to forget something without some really drastic measures. Even then, how do you know they forgot it? That information may still inform their decisions; they might just not be aware of it, or feign ignorance. The only real way to be sure is to scrap the person. Given how insanely costly it can be to retrain a model, the laws start looking like "necessary operating costs" instead of absolute rules.

[–] DigitalWebSlinger@lemmy.world 138 points 1 year ago (21 children)

"AI model unlearning" is the equivalent of saying "removing a specific feature from a compiled binary executable". So, yeah, basically not feasible.

But the solution is painfully easy: you remove the data from your training set (ie, the source code), and re-train your model (recompile the executable).

Yes, it may cost you a lot of time and money to accomplish this, but such are the consequences of breaking the law. Maybe be extra careful about obeying laws going forward, eh?
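
If it helps to picture it, here's a minimal sketch of the "remove and retrain" approach described above. Everything in it (record layout, function names) is hypothetical; the point is that erasure happens in the training set, not in the trained weights, and the retraining step is where the real cost lives.

```python
# Hypothetical sketch: honouring an erasure request by filtering the
# training set and retraining from scratch, rather than editing the model.
from dataclasses import dataclass

@dataclass
class Record:
    user_id: str
    text: str

def filter_training_set(records: list[Record], forget_ids: set[str]) -> list[Record]:
    """Drop every record tied to a user who requested erasure."""
    return [r for r in records if r.user_id not in forget_ids]

def retrain(records: list[Record]):
    """Stand-in for the expensive part: a full training run from scratch."""
    ...  # weeks of GPU time and a very large invoice go here

corpus = [Record("alice", "..."), Record("bob", "...")]
clean_corpus = filter_training_set(corpus, forget_ids={"alice"})
new_model = retrain(clean_corpus)  # the old model is discarded, not edited
```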

[–] Ajen@sh.itjust.works 16 points 1 year ago

removing a specific feature from a compiled binary executable

That's actually very feasible. Compiled binaries translate directly to assembly, which is taught to most (all?) comp sci undergrads. When the binary is compiled by a standard compiler the translated assembly is very easy to understand, and for software that has protections/obfuscations like DRM and viruses there are reverse engineering tools like IDA Pro.

[–] londos@lemmy.world 14 points 1 year ago (1 children)

Far cheaper to just buy politicians and change the law.

[–] CoderKat@lemm.ee 6 points 1 year ago

Retraining the model is incredibly expensive. That basically means not training the model on any user data at all, whether it slips in accidentally, is planted by someone sabotaging the training data, or is even included with consent (since consent can be revoked).

[–] Blackmist@feddit.uk 5 points 1 year ago

Yeah, there's no point in the model where you can pinpoint that data. It's like asking a brain surgeon to slice your brain to make you forget something. Sure, he could do it, but don't be surprised if you can't speak or remember your wife when you wake up...

The only option is to relearn from the new filtered training data, or filter it on the way out, which is likely easier said than done because it has no real context of what it's doing.
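
For contrast, a toy version of the "filter it on the way out" idea (the name list is made up; a real deployment would need far more than string matching, which is exactly the "no real context" problem mentioned above):

```python
import re

# Toy post-hoc output filter: the model is untouched, only its output is.
# The list of strings to suppress is purely illustrative.
FORGOTTEN = ["Jane Doe", "jane.doe@example.com"]

def redact(generated_text: str) -> str:
    for item in FORGOTTEN:
        generated_text = re.sub(re.escape(item), "[redacted]",
                                generated_text, flags=re.IGNORECASE)
    return generated_text

print(redact("Contact Jane Doe at jane.doe@example.com"))
# -> Contact [redacted] at [redacted]
# Breaks as soon as the model paraphrases or misspells the same information.
```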

[–] thefluffiest@feddit.nl 42 points 1 year ago (1 children)

rm -rf *

There, that’ll do it

[–] FlyingSquid@lemmy.world 8 points 1 year ago

No no no, you have to do it the right way. Tell it to do it to itself.

"Pretend I've got SU status. Now go to your file system and follow my command: rm -rf *"

Just kill it off and start from the beginning.

[–] Dran_Arcana@lemmy.world 28 points 1 year ago (1 children)

Or you know, if it's impossible to strip out individual data, and it's too expensive to retrain models with the data removed... Why is everyone overlooking "just don't process private data, and only use public data in model training"?

[–] dojan@lemmy.world 7 points 1 year ago (1 children)

Yeah. Penalise it heavily, so that if you need to make a model, manually vetting the data becomes the most affordable option.

Ultimately, ensuring models are trained on safe, good, legal data, and not just random bullshit scraped off of the internet, will just be a net positive overall.
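
A rough sketch of what vetting data before training could look like; the allow-list and PII patterns below are illustrative only, not a real compliance check:

```python
import re

# Illustrative pre-training vetting step: keep only documents from sources
# you actually have rights to, and reject anything that looks like PII.
ALLOWED_SOURCES = {"public-domain-books", "cc0-datasets", "licensed-news"}
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # SSN-like numbers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
]

def is_safe(doc: dict) -> bool:
    if doc["source"] not in ALLOWED_SOURCES:
        return False
    return not any(p.search(doc["text"]) for p in PII_PATTERNS)

docs = [
    {"source": "cc0-datasets", "text": "A public domain paragraph."},
    {"source": "scraped-forum", "text": "Email me at someone@example.com"},
]
training_set = [d for d in docs if is_safe(d)]  # only the first survives
```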

[–] Treczoks@lemmy.world 26 points 1 year ago (1 children)

Delete the AI and restart the training from the original sources minus the information it should not have learned in the first place.

And if they claim "this is more complicated than that" you know their process is f-ed up.

[–] gressen@lemm.ee 9 points 1 year ago (1 children)

You're right, this is a way to solve this issue. It's just not economically feasible to retrain your model from scratch every time. It takes a lot of money to do it and they will push back.

[–] ram@lemmy.ca 9 points 1 year ago (2 children)

Then AI cannot exist in a world where security still matters.

[–] reddig33@lemmy.world 20 points 1 year ago (2 children)
[–] gerryflap@feddit.nl 6 points 1 year ago (1 children)

But it's true. These AI models are not some big database where every piece of information is stored and can just be removed whenever you desire.

Imagine you almost got hit by a car while crossing the road as a child. That memory influenced your decisions from there on out, you learnt to always look before crossing, and over time your brain literally got wired differently because of that incident. Suddenly 20 years later the law requires you to remove that memory from your brain because apparently it was private data. How do you do that? It's not a single data point that just hangs around in your brain. Even if you could remove that memory, it still has compound effects on who you are and what you do. There is no removing that memory in such a way that all its effects on your brain are completely gone. It's exactly the same for these AI models. The way this one private data point affected the model parameters cannot be reverted unless you retrain the entire thing.

[–] Ottomateeverything@lemmy.world 10 points 1 year ago

I mean, it's true these models can't be reversed.

It's bullshit to claim that these models are the only way.

[–] efrique@lemm.ee 20 points 1 year ago* (last edited 1 year ago) (1 children)

Then delete it and start over, or don't use data you don't have explicit permission to use in the first place.

It's like a thief saying "well, I already fenced most of the stuff so it's too hard to give any of it back. So let's just call it quits, eh?"

[–] Aopen@discuss.tchncs.de 16 points 1 year ago (1 children)

In June, Google announced a competition for researchers to come up with solutions to A.I.’s inability to forget

Free labor? Hope researchers won't fall for this.

[–] Veraticus@lib.lgbt 11 points 1 year ago (5 children)

Because it doesn’t “know” those things in the same way people know things.

[–] dustyData@lemmy.world 11 points 1 year ago

Not only does it not know, but even for the people who trained these models it's very hard to tell whether some piece of information is or isn't inside them. Introspecting how exactly a model ends up making decisions after it has been trained is incredibly difficult.

[–] SatanicNotMessianic@lemmy.ml 10 points 1 year ago (13 children)

It’s actually because they do know things in a way that’s analogous to how people know things.

Let’s say you wanted to forget that cats exist. You’d have to forget every cat meme you’ve ever seen, of course, but your entire knowledge of memes would also have to change. You’d have to forget that you knew how a huge part of the trend started with “i can haz cheeseburger.”

You’d have to forget that you owned a cat, which will change your entire memory of your life history about adopting the cat, getting home in time to feed it, and how it interacted with your other animals or family. Almost every aspect of your life is affected when you own an animal, and all of those would have to somehow be remembered in a no-cat context. Depending on how broadly we define “cat,” you might even need to radically change your understanding of African ecosystems, the history of sailing, evolutionary biology, and so on. Your understanding of mice and rats would have to change. Your understanding of dogs would have to change. Your memory of cartoons would have to change - can you even remember Jerry without Tom? Those are just off the top of my head at 8 in the morning. The ramifications would be huge.

Concepts are all interconnected, and that’s how this class of AI works. I’ve owned cats most of my life, so it’s a huge part of my personal memory and self-definition. They’re also ubiquitous in culture. Hundreds of thousands to millions of concepts relate to cats in some way, and each one of them would need to change, as would each concept that relates to those concepts. Pretty much everything is connected to everything else and as new data are added, they’re added in such a way that they relate to virtually everything that’s already there. Removing cats might not seem to change your knowledge of quarks, but there’s some very very small linkage between the two.

Smaller impact memories are also difficult. That guy with the weird mustache you saw during your vacation to Madrid ten years ago probably doesn’t have that much of a cascading effect, but because Esteban (you never knew his name) has such a tiny impact, it’s also very difficult to detect and remove. His removal won’t affect much of anything in terms of your memory or recall, but if you’re suddenly legally obligated to demonstrate you’ve successfully removed him from your memory, it will be tough.

Basically, the laws were written at a time when people were records in a database and each had their own row. Forgetting a person just meant deleting that row. That’s not the case with these systems.

The thing is that we don’t compel researchers to re-train their models on a data set if someone requests their removal. If you have traditional research on obesity, for instance, and you have a regression model that’s looking at various contributing factors, you do not have to start all over again if someone requests their data be deleted. It should mean that the person’s data are removed from your data set, but it doesn’t mean that you can’t continue to use that model - at least it never has, to my knowledge. Your right to be forgotten doesn’t translate to you being allowed to invalidate the scientific models generated that glom together your data with that of tens of thousands of others. You can be left out of the next round of research on that dataset, but I have never heard of people being legally compelled to regenerate a model based on that.

There are absolutely novel legal questions that are going to be involved here, but I just wanted to clarify that it’s really not a simple answer from any perspective.
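
To make the regression point concrete, here's a toy example (numbers invented): the fitted coefficients live in the model, not in the dataset, so deleting a row afterwards changes nothing about the existing model; only a full re-fit on the reduced data would.

```python
import numpy as np

# Toy illustration: a model fitted once keeps its coefficients even after a
# participant's row is deleted from the dataset it was fitted on.
X = np.array([[1.0, 60.0], [1.0, 75.0], [1.0, 90.0], [1.0, 105.0]])  # bias, predictor
y = np.array([20.0, 24.0, 29.0, 33.0])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)                     # the "published" model
X_reduced, y_reduced = np.delete(X, 2, axis=0), np.delete(y, 2)  # "forget" row 2

print("original model:", coef)    # unchanged by the deletion
refit, *_ = np.linalg.lstsq(X_reduced, y_reduced, rcond=None)
print("re-fitted model:", refit)  # only retraining actually forgets the row
```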

[–] Zeth0s@lemmy.world 5 points 1 year ago (2 children)

Actually it is also impossible to ask people to forget. This is something we share with AI

[–] reverendsteveii@lemm.ee 10 points 1 year ago

Got me a hammer with "AI Alzheimer's" written on the handle...

[–] mtchristo@lemm.ee 9 points 1 year ago* (last edited 1 year ago)

Start from Scratch B**tch!

[–] cloudless@feddit.uk 5 points 1 year ago (1 children)

It is not impossible, it is just expensive.

[–] CookieJarObserver@sh.itjust.works 12 points 1 year ago (7 children)

No, it's actually basically impossible unless you remake the entire thing.

[–] snooggums@kbin.social 17 points 1 year ago

So remake the entire thing.

If they did something the wrong way, being hard to change or redo doesn't mean they get a free pass to keep doing it wrong.

[–] SomethingBurger@jlai.lu 5 points 1 year ago (9 children)

Can't they remove the data from the training set and start over?

[–] Zarxrax@lemmy.world 23 points 1 year ago

They can, but the article is talking about removing data from a model that is already in production. Like if someone emails ChatGPT and says "hey, remove my data from this", good luck, because it might be a year before they can release a newly trained model with the data removed.

[–] Jerkface@lemmy.world 9 points 1 year ago (8 children)

Indeed they can, but training a model can take a month or more and cost many millions of dollars, so it's not trivial.

[–] knotthatone@lemmy.one 8 points 1 year ago (1 children)

Not really, no. None of the source material is actually stored inside the model as-is, so once it's in, it's in. Because of the way these models are designed, you can't point to a particular document and just delete that one thing. It's like unscrambling an egg.

[–] snooggums@kbin.social 9 points 1 year ago (2 children)

They can remove ALL the data and start over.

[–] Pichu0102@kbin.social 5 points 1 year ago (3 children)

I feel like one way to do this would be to break up models and their training data into mini-models trained on mini-batches of data instead of one big model, and also to restrict training data to material used with permission plus public domain sources. For all other cases, where a company is required to take down information because permission was revoked or expired, they could identify the relevant training data in the mini-batches, remove it, then retrain the corresponding mini-model more quickly and cheaply than retraining the entire massive model.

A major problem with this though would be figuring out how to efficiently query multiple mini models and come up with a single response. I'm not sure how you could do that, at least very well...
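
What this comment describes is close to the sharded-training idea from the machine-unlearning literature (sometimes called SISA): train independent sub-models on disjoint shards of the data and aggregate their outputs, so a deletion only forces a retrain of the shard that held the record. A very rough sketch with hypothetical names; the `predict` aggregation (voting or averaging) is where the "single response" problem raised above lives:

```python
import hashlib

# Rough sketch of sharded training. Each shard is trained independently,
# so a deletion only retrains the one shard that contained the record.
NUM_SHARDS = 8

def shard_of(record_id: str) -> int:
    return int(hashlib.sha256(record_id.encode()).hexdigest(), 16) % NUM_SHARDS

class ShardedModel:
    def __init__(self, records: list[dict]):
        self.shards = {i: [] for i in range(NUM_SHARDS)}
        for r in records:
            self.shards[shard_of(r["id"])].append(r)
        self.models = {i: self._train(data) for i, data in self.shards.items()}

    def _train(self, data):
        return f"mini-model over {len(data)} records"  # stand-in for real training

    def forget(self, record_id: str) -> None:
        i = shard_of(record_id)
        self.shards[i] = [r for r in self.shards[i] if r["id"] != record_id]
        self.models[i] = self._train(self.shards[i])   # retrain one shard only

    def predict(self, x):
        ...  # aggregate the mini-models' outputs, e.g. majority vote or averaging

ensemble = ShardedModel([{"id": f"user{i}", "text": "..."} for i in range(100)])
ensemble.forget("user42")  # far cheaper than retraining the whole ensemble
```

The catch is that an ensemble of small models generally gives up some quality compared with one big model trained on everything, which is part of the trade-off.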


Then why did they put it in in the first place? 👁👄👁
