this post was submitted on 21 Sep 2023
193 points (92.5% liked)

[–] lily33@lemm.ee 98 points 1 year ago* (last edited 1 year ago) (30 children)

> competition too intense

> dangerous technology should not be open source

So, the actionable suggestions from this article are: reduce competition and ban open source.

I guess what it's really about is using fear to make sure AI remains in the hands of a few...

[–] thehatfox@lemmy.world 34 points 1 year ago (1 children)

Yes, this is the setup for regulatory capture before the regulation has even been conceived. The likes of OpenAI would like nothing more than to be legally declared the only stewards of this "dangerous" technology. The constant doom-laden hype that people keep falling for is all part of the plan.

[–] lily33@lemm.ee 5 points 1 year ago* (last edited 1 year ago) (1 children)

I think putting "dangerous" in scare quotes is a bit disingenuous, because there is real potential for danger in the future. But what this article seems to want is totally not the way to manage that.

[–] foggy@lemmy.world 17 points 1 year ago (1 children)

It would be an obvious attempt at pulling up the ladder if we were to see regulation on AI before we saw regulation on data collection by social media companies. We have already seen that weaponized. Why would we regulate something before it gets weaponized when other recent tech, still unregulated, is already being weaponized?

[–] Touching_Grass@lemmy.world 6 points 1 year ago* (last edited 1 year ago) (1 children)

I saw a post the other day about how people crowdsourced scraping grocery store prices. Using that data, they could present a good case for price fixing and collusion. Web scraping is already pretty taboo, and this AI fear mongering will be the thing used to make it illegal.

[–] foggy@lemmy.world 8 points 1 year ago* (last edited 1 year ago) (1 children)

It won't be illegal, because there is repeated court precedent establishing that it is categorically legal.

https://techcrunch.com/2022/04/18/web-scraping-legal-court/

[–] Touching_Grass@lemmy.world 1 points 1 year ago (1 children)

So ChatGPT can scrape data?

[–] foggy@lemmy.world 6 points 1 year ago* (last edited 1 year ago)

Yes.

It's not unlike recording someone in public. Anything publicly available on the internet is legal for you to access and download. There is no expectation of privacy for that data.
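
For illustration, a minimal sketch in Python of that kind of access, assuming a hypothetical public page and made-up CSS selectors (a real scraper should still respect a site's robots.txt and terms of service):

```python
# Minimal sketch: fetching and parsing a publicly accessible page.
# The URL and CSS selectors below are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com/products")  # public page, no login required
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
for item in soup.select(".product"):  # hypothetical selector
    name = item.select_one(".name").get_text(strip=True)
    price = item.select_one(".price").get_text(strip=True)
    print(name, price)
```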

[–] Heresy_generator@kbin.social 9 points 1 year ago* (last edited 1 year ago)

It's also about distraction. The main point of the letter and the campaign behind it is sleight of hand: to get the media obsessing over hypothetical concerns about hypothetical future AIs rather than talking about the actual concerns around current LLMs. They don't want the media talking about the danger of deepfaked videos, floods of generated disinformation, floods of generated scams, deepfaked audio scams, and on and on. So they dangle Skynet in front of them and watch the majority of the media gladly obsess over our Terminator-themed future, because that's more exciting and generates more clicks than talking about things like the flood of fake news that is going to dominate every democratic election in the world from now on. These LLM creators would much rather see regulation of future products they have no idea how to build (and, even better, maybe that regulation can even entrench their own position) than regulation of what they're currently, actually doing.

[–] Touching_Grass@lemmy.world 5 points 1 year ago* (last edited 1 year ago)

I'm going to need a legal framework to be able to DMCA any comments I see online, in case they were created with an AI trained on Sarah Silverman's books.

[–] Hanabie@sh.itjust.works 1 points 1 year ago

That's exactly what it is.

[–] Steeve@lemmy.ca 61 points 1 year ago (1 children)

“Dangerous technology should not be open source, regardless of whether it is bio-weapons or software,” Tegmark said.

What a stupid alarmist take. The safest way for technology to operate is when people can see how it works, allowing experts who don't have a financial interest in its success to scrutinize it openly. And it's not like this is some magical technology that only massive corporations have access to in the first place; it's built on top of open research.

Home Depot sells all the ingredients you need to make a substantial bomb. Should we ban fertilizer and pressure cookers for non-industrial use?

[–] tryptaminev@feddit.de 10 points 1 year ago (1 children)

Many countries only sell fertilizers with a concentration of ammonium nitrate below explosive levels.

[–] Steeve@lemmy.ca 14 points 1 year ago (2 children)

How about bleach and ammonia? I can buy those ingredients at any convenience store near me and throw together some mustard gas, right? Point is, if we banned everything that has any potential to do harm, we wouldn't even be left with rocks and sticks. Regulate, sure, but taking technology out of the hands of regular people and handing it to a select few corporations is a recipe for inequality and disaster.

[–] tryptaminev@feddit.de 2 points 1 year ago

You wouldn't make mustard gas. You'd make chlorine gas, which is also very nasty but still a far cry from mustard gas. The extent to which risky chemicals have been banned, reduced in concentration, or made subject to extensive monitoring of sales and use is quite substantial.

But here is a huge difference with AI tools: anyone could create these tools themselves. It is information. Unlike information on how to build a nuke, it is easier to use this information for negative purposes, but the extent of the harm is much smaller. A deepfake itself cannot kill people; a homemade pipe bomb can. Meanwhile, the cat is already out of the bag for ML. The tools are there, many people have copies of the code, and it can be replicated countless times, whereas the clandestine bomb builder needs to procure another batch of chemicals and hardware.

[–] mojo@lemm.ee 11 points 1 year ago* (last edited 1 year ago) (1 children)

Anyone against FOSS adoption of LLMs is straight-up a capitalist fascist.

They love the AI ethics issue, it's so vague and morally superior that they can use it to shut down anything they like.

> The letter warned of an “out-of-control race” to develop minds that no one could “understand, predict, or reliably control”

And this is why people who don't understand that LLMs are essentially big hallucinating math machines should have no voice in things they fundamentally do not understand.
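
To make that concrete, here is a toy sketch of what "big math machine" means, with a made-up vocabulary and random stand-in weights rather than a real model: a pile of matrix multiplications ending in a probability distribution that gets sampled, whether or not the result is true:

```python
# Toy sketch: at its core, an LLM turns an internal state into a
# probability distribution over the next token via matrix math,
# then samples from it. Vocabulary and weights here are made up.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]

hidden = rng.normal(size=8)            # stand-in for the model's internal state
W = rng.normal(size=(8, len(vocab)))   # stand-in for learned weights

logits = hidden @ W                            # matrix multiply -> a score per token
probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> probabilities

print(rng.choice(vocab, p=probs))  # sampled token: plausible, not "true"
```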

[–] 5BC2E7@lemmy.world 1 points 1 year ago (6 children)

You might be able to assert they are full of shit after hearing their arguments. But accusing them of being fascist just for not agreeing with you is extremely intolerant and authoritarian, aka fascist.

[–] mojo@lemm.ee 6 points 1 year ago (1 children)

Being anti-FOSS is being anti-freedom, full stop.

[–] 5BC2E7@lemmy.world 2 points 1 year ago (1 children)

The thing is that you don't want to become the thing you are fighting. You can be right in every case, as long as it's on a case-by-case basis. It would be different if you explained why the arguments are bad-faith arguments or why they are fascists; that would also be perfectly fine.

[–] mojo@lemm.ee 1 points 1 year ago (1 children)

There are things that are just true like that: racism and slavery don't have a case-by-case basis where they're bad. That's getting to be an extreme comparison here; I'm just saying absolute statements can be true like that. When is FOSS not about freedom?

[–] 5BC2E7@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

This seems a bit more complicated than the examples you share, where things are more evident. Even if they are wrong, they can be wrong for reasons other than being fascists.

Edit: to show some nuance, would people not be against open software that is purposefully crafted for a nefarious purpose, be it ransomware or software for DIY automated blinding laser weapons? I know the UN would probably not like the second example, regardless of it being FOSS.

[–] jcg@halubilo.social 3 points 1 year ago

> regardless of it being FOSS

Exactly, it's not about it being FOSS. It's about the nature of the software itself. Being against that software doesn't make you anti-FOSS. Additionally, open sourcing your malware is actually helpful for people trying to combat it.

[–] autotldr@lemmings.world 5 points 1 year ago

This is the best summary I could come up with:


The scientist behind a landmark letter calling for a pause in developing powerful artificial intelligence systems has said tech executives did not halt their work because they are locked in a “race to the bottom”.

Max Tegmark, a co-founder of the Future of Life Institute, organised an open letter in March calling for a six-month pause in developing giant AI systems.

Despite support from more than 30,000 signatories, including Elon Musk and the Apple co-founder Steve Wozniak, the document failed to secure a hiatus in developing the most ambitious systems.

“I felt there was a lot of pent-up anxiety around going full steam ahead with AI, that people around the world were afraid of expressing for fear of coming across as scare-mongering luddites.

“So you’re getting people like [letter signatory] Yuval Noah Harari saying it, you’ve started to get politicians asking tough questions,” said Tegmark, whose thinktank researches existential threats and potential benefits from cutting-edge technology.

Mark Zuckerberg’s Meta recently released an open-source large language model, called Llama 2, and was warned by one UK expert that such a move was akin to “giving people a template to build a nuclear bomb”.


The original article contains 695 words, the summary contains 192 words. Saved 72%. I'm a bot and I'm open source!
