this post was submitted on 03 Feb 2024
396 points (93.0% liked)

News

22903 readers
4584 users here now

Welcome to the News community!

Rules:

1. Be civil


Attack the argument, not the person. No racism/sexism/bigotry. Good-faith argumentation only; arguing in bad faith includes accusing another user of being a bot or paid actor. Trolling is uncivil and is grounds for removal and/or a community ban.


2. All posts must contain a source (URL) that is as reliable and unbiased as possible, and each post may contain only one link.


Obvious right- or left-wing sources will be removed at the mods' discretion. We have an actively updated blocklist, which you can see here: https://lemmy.world/post/2246130. If you feel that any website is missing, contact the mods. Supporting links can be added in comments or posted separately, but not in the post body.


3. No bots, spam or self-promotion.


Only approved bots, which follow the guidelines for bots set by the instance, are allowed.


4. Post titles should be the same as the article used as source.


Posts whose titles don't match the source won't be removed outright, but the autoMod will notify you, and if your title misrepresents the original article, the post will be deleted. If the site changed its headline, the bot might still contact you; just ignore it, we won't delete your post.
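For illustration only, a title check like the one this rule describes could be sketched with a fuzzy similarity ratio. This is an assumption about how such a bot might work, not the community's actual autoMod code; the function name and the 0.85 threshold are made up.

```python
import difflib

def titles_match(post_title: str, article_title: str,
                 threshold: float = 0.85) -> bool:
    """Hypothetical sketch: flag a post when its title drifts too far
    from the source article's headline. Threshold is illustrative."""
    ratio = difflib.SequenceMatcher(
        None,
        post_title.lower().strip(),
        article_title.lower().strip(),
    ).ratio()
    return ratio >= threshold
```

A bot built this way would notify rather than delete on a near miss, matching the rule's "notify first" behavior.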


5. Only recent news is allowed.


Posts must be news from the most recent 30 days.


6. All posts must be news articles.


No opinion pieces, listicles, editorials, or celebrity gossip allowed. All posts will be judged on a case-by-case basis.


7. No duplicate posts.


If a source you used was already posted by someone else, the autoMod will leave a message. Please remove your post if the autoMod is correct. If the post that matches your post is very old, we refer you to rule 5.
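As a purely hypothetical sketch of how a duplicate check like this might work, a bot could normalize submitted URLs (dropping `www.`, trailing slashes, and tracking parameters) before comparing them against earlier posts. None of these names or normalization rules come from the actual autoMod, whose logic is not public.

```python
from urllib.parse import urlparse, parse_qsl, urlencode

# Illustrative set of tracking parameters to ignore when comparing URLs.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "fbclid"}

def normalize_url(url: str) -> str:
    """Reduce a URL to a comparable form: lowercase, no scheme,
    no www. prefix, no trailing slash, no tracking parameters."""
    parts = urlparse(url.lower())
    host = parts.netloc.removeprefix("www.")
    path = parts.path.rstrip("/")
    query = urlencode(sorted(
        (k, v) for k, v in parse_qsl(parts.query)
        if k not in TRACKING_PARAMS
    ))
    return f"{host}{path}" + (f"?{query}" if query else "")

def is_duplicate(new_url: str, existing_urls: list[str]) -> bool:
    target = normalize_url(new_url)
    return any(normalize_url(u) == target for u in existing_urls)
```

Normalizing first is what lets `http://example.com/story` and `https://www.example.com/story/?utm_source=x` count as the same source.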


8. Misinformation is prohibited.


Misinformation / propaganda is strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel that your post has been removed in error, you must provide credible sources.


9. No link shorteners.


The autoMod will contact you if a link shortener is detected. Please delete your post if it is right.
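A hedged sketch of what a shortener check might look like: compare the link's host against a set of known shortener domains. The domain list here is an assumption for illustration, not the community's real blocklist.

```python
from urllib.parse import urlparse

# Hypothetical shortener domains; the actual list the bot uses is not public.
SHORTENER_HOSTS = {"bit.ly", "t.co", "tinyurl.com", "goo.gl", "ow.ly"}

def uses_link_shortener(url: str) -> bool:
    """Return True if the URL's host matches a known shortener domain."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    return host in SHORTENER_HOSTS
```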


10. Don't copy the entire article into your post body


For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.

founded 1 year ago
[–] SnotFlickerman@lemmy.blahaj.zone 43 points 7 months ago (1 children)
[–] MxM111@kbin.social 1 points 7 months ago (1 children)

That’s much more useful though.

[–] remotelove@lemmy.ca 20 points 7 months ago* (last edited 7 months ago) (2 children)

We shall see. Meta, Google and Microsoft aren't exactly spending billions of dollars on AI to make the world a better place....

[–] SuckMyWang@lemmy.world 4 points 7 months ago

They’re making it more efficient… to do shitty things like exploit people and steal their data

[–] kromem@lemmy.world 3 points 7 months ago (1 children)

Pharmaceutical companies aren't funding cancer research to make the world a better place either.

Should we debate whether or not a cure for cancer would be a good thing because of the profit driven motivation behind its development?

[–] remotelove@lemmy.ca 1 points 7 months ago (1 children)

That's a good debate topic, actually.

A subtopic should be whether pharmaceutical companies should use taxpayer dollars to research drugs that are then used to make billions in tax-free revenue. For better or for worse, those drugs might have the potential to save lives.

It's a completely different topic about how AI companies are going to make their products more addictive and then use that influence to shift public perception. (Or, they just use AI to find even more ways to shove advertisements down our throats. That is more reasonable.)

[–] kromem@lemmy.world 2 points 7 months ago* (last edited 7 months ago) (1 children)

That's a good debate topic, actually.

No, whether or not curing cancer is a good thing would be a rather poor debate topic. You are instead suggesting tangential topics around pharmaceutical research that are good discussion points, but they are separate from the broad question of whether the things a for-profit company produces are always inherently bad because of the for-profit motivations.

It's a completely different topic about how AI companies are going to make their products

This tells me you are completely disconnected from any kind of ongoing research right now, as the products being produced are actually having a pretty wild impact on research and we're probably entering a new Renaissance because of them.

Yeah, of course Google is going to try to use AI to sell you shit. But they are also solving protein folding along the way and are producing AI that can translate the massive number of uncovered but yet untranslated historical documents in existence. Deep learning yielded a new class of antibiotics for antibiotic resistant infections just this past month.

In both the original case and the analogy we are still talking about technology that will save human lives.

And as we just saw with Musk's Grok, sometimes the ways in which AI develops are at odds with the goals of the corporations creating them, and as Anthropic's recent research shows, those initial inclinations are actually much more difficult to correct for than you might think.

Will corporations do their best to unethically profit off of advances? Of course.

But if you think that means the advances will be a net negative, I respectfully disagree.

[–] remotelove@lemmy.ca 1 points 7 months ago* (last edited 7 months ago) (1 children)

This tells me you are completely disconnected from any kind of ongoing research right now, as the products being produced are actually having a pretty wild impact on research and we're probably entering a new Renaissance because of them.

Oh, I am fully aware of what is going on with AI and what is possible with it. We haven't even scratched the surface of its development or its potential uses.

One thing I am pointing out is the layers of bullshit that are attached to AI right now. When this current bubble pops, and it will, the tech can be developed to its full potential. Right now, the market is 99% snake oil. Just look at LinkedIn for a clear example: everyone is somehow an AI expert now, and every company is an AI company.

What I choose not to minimize is the unethical use of this tech by companies. The reality remains that Meta, Google and Microsoft will not profit if the world's problems are solved. Their open research projects are great: they make for a good tax deduction and may benefit millions. Unfortunately, I can't help but quote Obadiah from Iron Man: "Tony, come on. We built that thing to shut the hippies up."

While I paint the picture of an "AI doomer", I really am not one. The benefits of AI are incalculable right now. Unfortunately, the risks are just as massive. Everyone just seems to be blinded by the "new shiny" and refuses to see any negatives... again. This makes the environment ripe for scams and deception.

[–] kromem@lemmy.world 1 points 7 months ago* (last edited 7 months ago) (1 children)

When this current bubble pops, and it will, the tech can be developed to its full potential after that. Right now, the market is 99% snake oil.

It depends on what bubble one's referring to. The tech itself isn't going to 'pop'; in many cases the capabilities will probably outpace the current promises, given the compounding rates of improvement. This isn't like past tech buzz cycles, which is part of why there are a lot of questionable predictions regarding it.

Yeah, snake oil bottom feeders will gravitate to any buzz they can attach themselves to. But the barnacles don't steer the ship. The market of snake oil will dry up as it always does, but that's largely because their primary industry is selling snake oil, not whatever they change the label to.

The reality remains that Meta, Google and Microsoft will not profit if the world's problems are solved.

Not really. Microsoft stands to make a killing just running these models on Azure as their sole line of business if AI ends up as successful as it may prove to be. Google divesting more from ads might prove to make them less evil. Meta would be evil in this space if not for the fact that because they started off late they're the biggest driver of open AI development right now and arguably the biggest funder of any hope of counter-corporate AI existing.

It's easy to regard companies as monoliths. And while it's generally true that a corporation, especially large public companies, will end up trying to optimize around short term gains even at the cost of long term consequences or social evils, it isn't necessarily true that public good is always at odds with capitalist self optimization in all things. So it would still be a win for Microsoft if AI allowed for the public good as long as they could ensure that AI was running on their servers and they could attempt to maximize their margins as much as the market allowed for before net gains decrease. And any corporation smart enough to focus on longer term gains is going to be one that's going to actively try to avoid excessive public harm as your longer term revenues aren't going to go up if your customers die or go homeless, etc.

Also, the researchers themselves have certain aims and if their parent company doesn't align with those aims, they may take themselves and their significant value away from that company. For example, Meta was only suddenly "open AI friendly" and then a major player after literally half their AI team quit for greener pastures.

Unfortunately, the risks are just as massive. Everyone just seems to be blinded by the "new shiny" and refuses to see any negatives...

While weighing risks, it's also important to weigh opportunity costs.

Also, I'm not sure who you are interfacing with, but in my experience it definitely seems like the majority of people are fairly bearish regarding AI (there's a number of reasons why I think that's the case, but it's still a significant majority). These days positivity regarding AI that isn't in the context of a snake oil sales channel is a rarity in most public discussions.

[–] remotelove@lemmy.ca 1 points 7 months ago* (last edited 7 months ago)

Also, I'm not sure who you are interfacing with, but in my experience it definitely seems like the majority of people are fairly bearish regarding AI

(I am going to run on the assumption that you meant bullish instead of bearish? I'll clarify my view, just in case.)

I have been in IT Security for a very long time. People in my industry are usually pulled in to clean up the mess after the party is over. It's not entirely bad, as it pays my bills, and in general I am bullish on AI in the long term and extremely bearish in the short term.

Thinking of a company as a monolith is easy. The sum of its parts must always equal profit. Sure, a company can have good parts and bad parts but one side cannot justify the other. (That could be a much deeper philosophical discussion for another time though.)

What pisses me off now is the number of people and companies that aren't weighing the risks or don't have a full understanding of what they are doing in the rush to implement. To use an old phrase: "Just because you can implement something doesn't mean that you should."

From my perspective, many companies are just shoving all of their sensitive data and our PII into a magic box to see how many dollars it poops out. It's sidestepping all of our data protection laws because it's going to be years before the laws are adjusted. (I simplified the complexity of ANNs and how data is stored in them, but my point remains.)

Even worse, companies are using the magic of AI to slurp even more of our data, either willingly or through forced updates. A seemingly benign example of this is Microsoft integrating AI into Notepad. Think about it: it is the most common temporary space on a PC where the average user stores highly sensitive and confidential data. Google is going to integrate Bard into Messages so it can ingest your entire text history, personal or not. (Point-to-point encryption and other controls become irrelevant then.)

This particular brand of snake oil is exceptionally potent because products like ChatGPT introduce such high levels of illusion.

Edit: With all of that said, I really do respect your points and where our opinions differ. I also realized I am allocating too much time to this discussion and have a headache now. I'll gracefully back out now, but that kinda sucks. This would be a much more fruitful discussion in person where we aren't limited by typing.