this post was submitted on 02 May 2026
222 points (98.3% liked)

Technology

top 28 comments
[–] 4am@lemmy.zip 38 points 9 hours ago

Because they’re blatantly using it to try and enslave us?

Like, not even metaphorically.

[–] jtrek@startrek.website 65 points 10 hours ago (1 children)

I'm so tired of every job posting frothing at the mouth over AI. "We're AI native", "we want employees who are excited about AI tools", "agentic workflows".

Just fuck off.

Even if all of this stuff were a real productivity increase, who is keeping that extra production? Not the workers!

[–] vala@lemmy.dbzer0.com 4 points 1 hour ago

They always want you to be excited about things that don't benefit you in any way.

[–] SoloPhoenyx@lemmy.world 29 points 10 hours ago (1 children)

Fun fact: Data centers in the US are completely vulnerable to UAS attack vectors.

[–] sanitation@lemmy.radio 6 points 10 hours ago (1 children)

What is UAS? Drones? Yeah, but I don't think Iran can reach them. Otherwise, yeah.

[–] SoloPhoenyx@lemmy.world 9 points 10 hours ago (1 children)

I'm not talking about Iran.

[–] deliriousdreams@fedia.io 29 points 11 hours ago (2 children)

People find AI irritating because of its flaws and failure to deliver. They are also angry about big tech suggesting that AI will force real humans out of human spaces: the arts, media, research, science, the workforce, etc.

The "anxiety" is mostly fear of exactly what's being promised, to the detriment of the people expected to fund it. Anyone who's got eyes and ears knows that the venture capital well will run dry eventually.

There is no return on investment for the vast majority of regular everyday humans living in this world at this time. Not where AI is concerned. It isn't hard to follow what is being marketed to its conclusion. Tech oligarchs have been saying the quiet part out loud since the beginning.

AI will replace workers. AI will replace people who make art and music, and write things. AI will replace.

They even tell us they know it's a flawed replacement that they can't make better. And they pretty much tell us that they haven't found a way to monetize it sustainably, which basically means that one way or another they will be looking for people to pay more for it.

People have started thinking about what that means, and naturally they don't like it. Tech bros are selling this dream of replacing us, but we won't have any money to pay more for a product that doesn't produce anything worthwhile for the cost. Especially not if you're replacing us and there is no safety net.

[–] Zink@programming.dev 5 points 7 hours ago

AI will replace workers. AI will replace people who make art and music, and write things.

This part made me think how I've commented recently that AI does the thing it was designed to do, but that the thing it was designed to do is generate something you could believe somebody wrote on the internet.

That doesn't mean the answer is correct, of course. It's often confidently wrong, just like real people online!

But when it comes to artistic expression, there is no clear right or wrong. Music, art, and the written word are some of the most human things we have, but you are absolutely right that they will be replaced. If a marketing director can pay Google a few dollars to generate a hundred concept drawings so they can do "I'll know it when I see it" design, that's a human artist job they won't budget for.

[–] Glitchvid@lemmy.world 22 points 11 hours ago (1 children)

There's often a tacit acknowledgment of the poor quality of AI output, but they do not care; the strategy is to flood the zone with so much garbage as to make quality irrelevant. It's a grift-conomy mindset: the focus is on "velocity" and "productivity" to the detriment of all else.

[–] mojofrododojo@lemmy.world 6 points 7 hours ago

We're living in a Gish gallop society - politics, AI - it's all overloading the polity with so many outrageous events that people can't react to the last one, much less the outrage from 4 days ago... and unfortunately it's working.

I don't know any solutions - damn near anything you do will be labelled insurrection and treason. JFC, they're suing the SPLC, for supporting white supremacist orgs, for paying... informants.

Ultra fucking stupid, but sadly effective, because most of America wants to stay out of politics and not confront the difficult shit ahead.

[–] Flower@sh.itjust.works 60 points 14 hours ago

That's what people see

[–] zd9@lemmy.world 43 points 13 hours ago (9 children)

It's not AI that's the problem. AI is an amazingly powerful tool (I'm an AI researcher).

The problem is that it's in the hands of psychotic technofascist greedy subhumans that want to destroy basically all of society so their stock can go up 0.001%. If we can cut out the source of the cancer, the body can begin to heal itself.

[–] bluegreenpurplepink@lemmy.world 3 points 4 hours ago

I want to agree with you, but AI is just another psychopath in a world where we don't need any more psychopaths.

[–] RiverRabbits@lemmy.blahaj.zone 0 points 2 hours ago

- is an AI researcher
- immediately uses Nazi lingo after introducing themselves

you can't be more obvious than this about the ideology of AI💀

[–] mojofrododojo@lemmy.world 7 points 7 hours ago (1 children)

The problem is that it’s in the hands of psychotic technofascist greedy subhumans

Gee, maybe people like you shouldn't have put those tools into the shitbags' hands?

I remember a decade ago there were multiple movements to rein in AI before it became uncontrollable, and any chance of that is long fuckin gone. We're gonna barrel forward heedless of the danger, because fuck you, that guy wants profits and doesn't care about humanity.

and people like you made the tools and gave it to 'em.

[–] badgermurphy@lemmy.world 1 points 6 hours ago

That seems terribly extreme. It's not like it's a bomb that is obviously for blowing people up. Someone made something with some cool applications, then some guys with many times more money and resources than anyone should be allowed to have took the idea and ran with it toward a bunch of psychotic ends.

The problem isn't that people can use good things for bad purposes, nor is it the people that make or improve those things. The root cause is that western society is currently structured in a way that ends up rewarding certain types of madness, and the reward structure is set up such that individuals can get a vast undue amount of influence and power. Under these conditions, it is natural that even a tiny number of such individuals can overtake the system like a single cancer cell can eventually kill someone. All of these alarming things going on for over 60 years are symptoms of that societal illness. Please don't blame scientists for sciencing.

[–] its_kim_love@lemmy.blahaj.zone 38 points 12 hours ago (1 children)

Right! If you don't count the mass surveillance boost, the autonomous killing machines they're trying to make, the environmental impact, the pillaging of our individual experiences, and the destruction of all our shared spaces online, AI is a pretty cool tool.

[–] OpenStars@piefed.social 12 points 10 hours ago

Narrator: actually, no it was not.

e.g. it still spreads misinformation.

[–] theparadox@lemmy.world 25 points 13 hours ago* (last edited 13 hours ago) (1 children)

I was excited about the idea of purpose-built systems trained on specific datasets to help find complex patterns to diagnose diseases or suggest potential molecules for specific purposes.

Then the LLM shit started and everyone started fantasizing about intelligent "AI" just because it was able to reproduce patterns of language that seem relevant to a given input. Some of those funding it kept chasing that dream and are convinced that, if they just throw more compute at the problem, they can evolve the renaissance AGI that can do anything. Then they can fire every worker and be bazillionaires with robot slaves and never have to work another day of their lives... and fuck everyone and everything else.

It's amazing what we can ruin when we let greed and selfishness drive our society.

[–] roux2scour@jlai.lu 4 points 9 hours ago (1 children)

At 1 million I could already stop working and live a decent life :/ I really don't get why, past 1 billion, they keep searching for more.

[–] theparadox@lemmy.world 1 points 9 hours ago* (last edited 9 hours ago)

Maybe it's because I've only ever had at most a comfortable income but I truly don't understand the mentality of needing so much money.

I don't get paid as much as my peers but I make enough to be comfortable. I am my own department and, aside from emergencies and other high priority situations, I manage myself and choose what to work on when. I have a decent work life balance. Because I make enough to be comfortable (in large part because my landlord promised not to raise our rent - early in the COVID lockdown - if we were "good tenants" and has managed to keep true to her word) I don't feel the need for more. That balance is worth not making the 20% more a year I might get somewhere else because I can't guarantee I won't have a shitty boss that doesn't let me have that work/life balance.

[–] Buffalox@lemmy.world 13 points 12 hours ago* (last edited 12 hours ago)

The lack of regulation of AI is absolutely a serious problem; there are so many problems your comment isn't even funny.
Problems with people using it for health advice.
Problems with teens using it instead of friends.
Problems with AI giving absurdly incorrect advice to people in general, but also to professionals like managers and CEOs.
Problems with the data centers that host these AI systems requiring enormous amounts of power. So much that researchers have shown these data centers are drying up vast areas around them.

The techno-fascists are in all sorts of businesses; that's not unique to AI. The problem is that with AI the techno-fascists aren't regulated in any way.
Neither in how their data centers impact the environment and the electric grid, nor in how AI has actual bad effects on their customers, because there is no regulation on the use or supply of AI services.

[–] ace_garp@lemmy.world 6 points 13 hours ago

Indeed.

To cut off their data and revenue streams, stick to open-source, locally run models and chatbots.

[–] ag10n@lemmy.world 4 points 13 hours ago

It’s amazing how open source has benefitted the individual. The monopolization of compute is still a barrier we’ll have to crash through

[–] FaceDeer@fedia.io 1 points 13 hours ago

It's always been that way, it's just that until now the general public could say "well at least they pay me."

So ironically this rise in anxiety is itself being driven by self-interest. People were fine with those people being in charge as long as they got a comfortable lifestyle out of it. A pattern seen throughout history.

[–] Buffalox@lemmy.world 4 points 12 hours ago* (last edited 12 hours ago)

For whom is it growing? Is it the same people that use AI to "discuss" personal problems because the AI is always nice to them? (Yes, this is really a thing, especially with young people.)
Or people who use AI to be "creative"?
Or the people that use AI to seek health advice?

There are many good reasons to worry about AI, but my guess is that most of the people who worry do it for the wrong reasons.
Apart from the bad advice, the annoying AI customer service, the possible loss of jobs, and the potential danger to humanity because leaders trust the AI, there may be a much closer and more imminent danger.

The movie "Good Luck, Have Fun, Don't Die" seemed a bit stupid when I first saw it, but goddamn, the movie has a point; that's how it's actually turning out for some people. They choose to live with an AI-generated fantasy, created specifically to make them feel good!! A fantasy where they are always right, and are amazing artists, and where the AI is a better "friend" than actual friends.
I predict that AI will be worse than any cult at taking away family and friends.

https://www.imdb.com/title/tt1341338/

[–] Elilol@fedinsfw.app 2 points 13 hours ago

Yeah, in the people that need it to be useful for SOMETHING!