this post was submitted on 13 Mar 2024
232 points (93.6% liked)

Technology


From the article: "In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use."

top 49 comments
[–] Boozilla@lemmy.world 67 points 8 months ago (2 children)

To steal an old trope: the tech bros have circled the globe eight times while the government is still putting its boots on. If there's money to be made via automation, there's no stopping it (unless we get the guillotines out of mothballs).

[–] ObviouslyNotBanana@lemmy.world 16 points 8 months ago (1 children)

Someone will try to sell AI guillotines

[–] anarchrist@lemmy.dbzer0.com 8 points 8 months ago (1 children)

I literally just saw they're testing AI robots in Gaza

[–] pdxfed@lemmy.world 3 points 8 months ago

Read about drone warfare in Ukraine and how AI drone swarm warfare is just a matter of months away if it's not already being done.

[–] Neato@ttrpg.network -1 points 8 months ago

The problem with any regulation is that it's going to have unforeseen knock-on effects. It might cripple an otherwise benign use. This can be mitigated by trying to draft smart bills initially by coordinating with leaders in the field who aren't corporate backers. And then being able and willing to amend laws as these effects take shape.

Unfortunately this is not how the US congress functions right now and for the foreseeable future. Therefore regulation will likely be sparse and when it is heavy handed, unlikely to be amended unless the knock-on effects are massively bad.

[–] notannpc@lemmy.world 28 points 8 months ago (2 children)

There’s a better chance of AI becoming sentient and stopping itself from being harmful than there is that people do the right thing.

[–] Silentiea@lemm.ee 7 points 8 months ago

Humans. Can't trust 'em for shit.

[–] blazeknave@lemmy.world -1 points 8 months ago

Found Ultron everyone!

[–] aberrate_junior_beatnik@midwest.social 28 points 8 months ago* (last edited 8 months ago)

Those who do not learn from history are doomed to repeat it. Those who do learn from history are doomed to watch as those who did not learn repeat it.

[–] Dasnap@lemmy.world 25 points 8 months ago (1 children)

Let's make different, more disastrous mistakes!

[–] isthingoneventhis@lemmy.world 5 points 8 months ago

Climb in back and we'll be off!

[–] TheFriar@lemm.ee 25 points 8 months ago* (last edited 8 months ago) (1 children)

See, this is the exact shit I mean when I scaremonger about AI. Especially in this community, I have been called a few names and likened to some stupid anti-tech movements.

But it’s not the tech itself. It’s the world and the companies this tech is being born into and giving power to. This is not the early 80s, where we generally had some sort of capitalist equilibrium going on—as much as an exploitative system can have equilibrium, that is. It’s a post-Reagan 2024, and this system is so out of balance that this is like a nuke being dropped in a war between a modernized society and an uncontacted Amazon tribe.

Your cars, your phones, your entire online ecosystem, all the smart devices, the cops, the federal govt…they’re all working in a system of surveillance and—honestly, for lack of a better word, though it’s been co-opted and bastardized by conspiracy nut jobs—mind control that we barely understand, let alone have any control over. But that’s exactly what this is, they are astroturfing public opinion, pushing ideas in an almost streamlined fashion, and getting us evermore addicted to these means of coercion.

And this is all before we even discuss the general balance of “consumer/producer.” Assuming we are even willing to make capitalism come close to functioning (which is a fool’s errand at this point), we need to completely upend the current imbalance in what we accept as a suitable give/take. They are taking more while giving us less, and it’s only getting worse. Now they’re taking way, way more than ever and we get, what, new social media sites in return? Nah. We are no longer consumers, we are products. We are the piece in the puzzle with the least agency.

And AI is only exacerbating those problems. And that’s all before we get into discussing the massive environmental concerns! We are barreling toward destruction, and we are sinking more computers, server farms, time, and infrastructure into completely backwards energy consumption.

Don’t use their AI, refuse to use social media, jailbreak and privatize your phones, refuse to buy any surveillance-nightmare cars, use a Tor browser…do everything we can to wrench just a little of our own data and agency back from them. Because getting all giddy over some shitty chatbot and refusing to heed the warnings of the Reddit and Google CEOs literally spelling out how they are harvesting and profiling our data is just beyond stupid, and gives them the idea that, as the Google CEO said, “well, if you keep using it, it’s your own fault.” Or something to that effect. But at some point, these evil cartoon villains may just be right. We keep following along. At what point do we build parallel systems to evade their reach?

[–] Grimy@lemmy.world 6 points 8 months ago (1 children)

The problem is that fearmongering only serves them. Their plan is to stop individuals from having access to it, not companies, and they will do so either by convincing the population that AI is theft, thereby driving up the price so only a handful of companies can afford to train models, or by convincing the population that it’s too dangerous for the common man to have free access to it.

[–] TheFriar@lemm.ee 2 points 8 months ago

But individuals can use copyrighted artwork for their own personal use, businesses can’t. I don’t think there’s any attempt to get people to stop using it. As stated in the article (maybe not this one, but a recent one either way, I don’t remember at this point), AI chat bots are a GREAT way to extract obscene amounts of data from people. It’s one of the main draws to it for business, and one of its main uses for future profitability.

“Fear monger” might have been the wrong term to use, it’s got a negative connotation. But I do believe it’s a dangerous, dangerous step in the current trend of surveillance capitalism. Like I said, it’s akin to the A-bomb in the war over our privacy and data.

[–] CriticalMiss@lemmy.world 22 points 8 months ago (1 children)

Hello, I’m from the future. We didn’t handle it responsibly.

[–] ArmoredThirteen@lemmy.ml 6 points 8 months ago (1 children)

How far in the future? Because if you're 100+ years from now, I'd just be happy the planet held out that long, and well enough to support a time-travel-capable society, too.

[–] erwan@lemmy.ml 6 points 8 months ago (1 children)

There will still be humans in 100 years. The planet is fine, we're just making it harder for us and many other species to survive.

How many humans there will be, and how they will live is a different question.

[–] blazeknave@lemmy.world 1 points 8 months ago (1 children)

But a time travel capable society per the above comment?

[–] Silentiea@lemm.ee 1 points 8 months ago

I mean, until it happens, it's just fiction. Maybe time travel wasn't possible at all until we nuked ourselves back to the stone age fighting over what was left after climate change.

[–] CrabAndBroom@lemmy.ml 15 points 8 months ago (2 children)

Or... hear me out.... we use AI to make social media even more insufferable than it was before.

[–] SandbagTiara2816@lemmy.dbzer0.com 3 points 8 months ago (1 children)
[–] CrabAndBroom@lemmy.ml 0 points 8 months ago

Unlimited scams and Jesus stuff for everyone!

[–] BakerBagel@midwest.social 3 points 8 months ago

So excited to have an LLM make posts on Twitter for me, so that all my bot followers can think I'm funny.

[–] wise_pancake@lemmy.ca 11 points 8 months ago (1 children)

We have two ethics systems:

The first we apply to healthcare and government, and it's best summed up by Michael Scott: "Don't ever, for any reason, do anything to anyone, for any reason, ever, no matter what. No matter... where. Or who, or who you are with, or, or where you are going, or... or where you've been... ever. For any reason, whatsoever."

The second is applied to private industry, and it's best summed up by "innocent, until proven guilty".

And that's why we let private industry roll along with whatever it wants until we can definitively prove harm, yet society found it unreasonable to ask people to vaccinate over a minute chance of rare side effects less bad than the disease the vaccine was for.

[–] blazeknave@lemmy.world 2 points 8 months ago

Citizens United

[–] HubertManne@kbin.social 11 points 8 months ago

Most of social media's ills come from the algorithms, which is basically what AI is, soooo....

[–] ech@lemm.ee 8 points 8 months ago

Way too late. All of the harmful parts of social media are exploited and promoted by corporate interests, and LLMs are shaping up the same way. Users have already shown they have no interest in policing themselves, so unless something is done to drastically restrain corporations, there's little that can or will be done to keep the new thing from being even worse than the old thing.

[–] Darkassassin07@lemmy.ca 8 points 8 months ago

Bahahahahahha, NEVER gonna happen.

Humanity learning from its mistakes, I mean.

[–] dan1101@lemm.ee 8 points 8 months ago (2 children)

I saw an IBM commercial that depicted an AI personal assistant informing a user that their credit card had been used for a fraudulent charge. It asked the user if a particular charge was legit and they said no. Then the AI informed the user that the transaction had been canceled and a new card had been issued.

I have a couple problems with this. First, what if the AI was hallucinating parts of this interaction? Secondly, at some point the user's AI will be interfacing with the bank's AI, then we will effectively be subservient to a bunch of algorithms automatically running the world and we will basically be children with our AI parents taking care of us.

[–] MajorHavoc@programming.dev 5 points 8 months ago* (last edited 8 months ago) (1 children)

then we will effectively be subservient to a bunch of algorithms automatically running the world and we will basically be children with our AI parents taking care of us.

That's the plan.

Shareholders think they'll be excluded because they can call and reach a human.

But it will soon be impossible to be a shareholder over every AI that could possibly fuck you. And we will undoubtedly turn over things to AI that we should have kept control of, to the point of being unable to even help our poor shareholders.

I expect that everyone will need at least a little prompt engineering in their life before this mess is under control.

So there's that to look forward to.

The good news is AI are just computers wearing fancy pants and can, and will, be unplugged when we learn - the hard way - what uses AI is no good for. I'm sure that'll be big "who could possibly have seen this coming?!" news, too.

[–] lurch@sh.itjust.works 2 points 8 months ago

(I'm a shareholder and I don't think this.)

[–] BagelEmbezzler@lemmy.world 2 points 8 months ago

Not to mention how voice assistants can just mishear you. Told google once to put dental floss on my shopping list and it said "got it, I added applesauce." Good try I guess. Pretty trivial this time, but they expect me to trust that for tasks with financial stakes?

[–] _sideffect@lemmy.world 7 points 8 months ago

Lmao right; you know this so-called "AI" is going to be used and abused for every ounce of gains possible

[–] Alexstarfire@lemmy.world 7 points 8 months ago

Ohh we will. And include all new mistakes too. Gotta cover our bases.

[–] Breve@pawb.social 6 points 8 months ago

Hahah but really AI is already being used to amplify and exploit all the problems of social media to new levels. It was nice while it lasted, but we can't stuff this all back in Pandora's box.

[–] TigrisMorte@kbin.social 5 points 8 months ago

You mean the part where we assume rich people have our best interests at heart and are trying to help?

[–] Even_Adder@lemmy.dbzer0.com 5 points 8 months ago

Meanwhile, reports commissioned by the State Department suggest making it illegal to publish model weights, so corporations can keep their monopoly on a public technology.

[–] gapbetweenus@feddit.de 5 points 8 months ago

We haven't figured out how to deal with social media at all. We will make even worse mistakes with AI, that's just how we are as humans.

[–] autotldr@lemmings.world 3 points 8 months ago

This is the best summary I could come up with:


When major brands like Uber and Procter & Gamble recently slashed their digital ad spending by the hundreds of millions, they proclaimed that it made no dent at all in their sales.

As Jonathan Swift once wrote, “Falsehood flies, and the Truth comes limping after it.” Academics seem to have proved this in the case of social media; people are more likely to share false information—perhaps because it seems more novel and surprising.

The incentives in the tech sector are so spectacularly, blindingly powerful that they have enabled six megacorporations (Amazon, Apple, Google, Facebook parent Meta, Microsoft, and Nvidia) to command a trillion dollars each of market value—or more.

The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

In addition to strengthening and enforcing antitrust law, we can introduce regulation that supports competition-enabling standards specific to the technology sector, such as data portability and device interoperability.

And with a looming presidential election, conflict spreading alarmingly across Asia and Europe, and a global climate crisis, it’s easy to imagine that we won’t get our arms around AI any faster than we have (not) with social media.


The original article contains 2,953 words, the summary contains 211 words. Saved 93%. I'm a bot and I'm open source!

[–] Moobythegoldensock@lemm.ee 3 points 8 months ago

“Advertising, surveillance, virality, lock-in, monopolization”

Of course advertising will be used for all those things and probably already is.

[–] LainTrain@lemmy.dbzer0.com 1 points 8 months ago* (last edited 8 months ago) (5 children)

Social media is just another scapegoat like Russian bots.

The truth is much worse: most people are, and have always been, awful, bloodthirsty, ghoulish pieces of shit. They were so before social media; you just know it now.

[–] DaMonsterKnees@lemmy.world 18 points 8 months ago (2 children)

No friend, no, I'm sorry, but the whole world just wouldn't work if that were actually the case. Humanity is inherently altruistic. The issue is that people struggle to be that and survive. We just have to ramp down the me-first attitude and push more for society. The EU is starting to make those in-roads, so stay positive!

[–] BearOfaTime@lemm.ee 3 points 8 months ago* (last edited 8 months ago)

I'd say it's that we all have these elements within us.

We're all born as selfish idiots, how can we be otherwise? We're helpless at birth, thrust from perfect comfort and safety into discomfort, utterly ignorant and wholly dependent, with no knowledge there are others, who are just as dependent when they're born.

There's the variability in personality, but by and large we have to learn to see others as the same as ourselves.

So while we may not all actively try to be assholes, it takes conscious effort to be better than our base nature.

And, I tend to think we all get to be assholes now and again. We all have moments we can look back on and say "oh, yea, I was the asshole that time".

Social media just reflects humanity, though the algorithms are certainly designed to increase engagement via the simplest mechanisms - emotional engagement. And which are the easiest to target? Yep - the most basic, they have the broadest appeal, because we all share those base emotions.

Another way to look at this: if we didn't share these base emotions, would the algorithms have any effect?

[–] fishos@lemmy.world 0 points 8 months ago* (last edited 8 months ago)

World hunger is literally a problem of corruption. The vast majority of problems are "we could solve this, but it costs money and we'd rather have another mega yacht." If humans were truly altruistic, homelessness and hunger wouldn't be issues at all. Are we savages? Maybe not. But overall altruistic? Bullshit.

There's a reason we idolize heroes instead of treating them as mundane. They are exceptional, not the norm.

[–] Neato@ttrpg.network 3 points 8 months ago

most people are, and have always been awful, bloodthirsty ghoulish pieces of shit

Most people are empathetic and decent. This sounds like apologia or projection. Evil people think everyone else is just as evil and that's how they rationalize it.

[–] AtHeartEngineer@lemmy.world 3 points 8 months ago

So many people on Lemmy are pessimistic as shit, makes it hard to read the comments sometimes

[–] lurch@sh.itjust.works 2 points 8 months ago

It is one side of us humans. You don't become top of the food chain by petting the lions.

However, the other side is: We can team up and watch each others backs.

[–] pennomi@lemmy.world 1 points 8 months ago

This article mostly ignores the one thing that actually makes social media harmful in a way that is unique to social media: addictiveness, and the ultimate mental health decline that comes from scrolling through a feed and getting tiny dopamine hits all day.

This is distinct from “virality,” since it is more related to the idea that we’ve optimized for “engagement.” It is inevitable that AI will be tuned for that very soon, and we will find that AI is addictively engaging a lot sooner than it is reliably correct.