Yes, I know about the exploitation that happened during early industrialization, and it was horrible. But if people had just rejected and banned factories back then, we'd still be living in feudalism.
I know that I don't want to work a job that could easily be automated but intentionally isn't, just so I can "have a purpose".
What would happen if AI were to automate all jobs? In the most extreme case, where literally everyone lost their job, nobody would be able to buy anything, but also no company would be able to sell products and make a profit. Then either capitalism would collapse - or, more likely, it would adapt by implementing some mechanism such as UBI. Of course, the real effect of AI will not be quite that extreme, but it may well destabilize things.
That said, if you want to change the system, periods of instability are exactly when that can be done. So I'm not going to try to stop progress and cling to the status quo out of fear of what those changes might be - I'd rather join a movement that tries to shape them.
Maybe. But generally on Lemmy I see sooo many articles along the lines of "oh no, AI bad", and no good suggestions for what regulations we should actually want.
Movements that shape changes can also work through resistance or popular pressure. There is no lack of well-reasoned articles about the issues with AI and how they should be addressed, or even how they should have been addressed before AI engineers charged ahead, not even asking for forgiveness after also not asking for permission. The thing is that AI proponents and the companies embracing them don't care to listen, and governments are infamously slow to act.
For all that is said of "progress", a word with a misleading connotation, once again this technology puts wealthy people, who can build data centers for it, at an advantage over regular people, who at best can only use lesser versions of it, if even that; they might instead just receive the end result of whatever the technology owners want to offer. As the article itself mentions, it has immense potential for advertising, scams and political propaganda. I haven't seen AI proponents offer meaningful rebuttals to that.
At this point I'm bracing for the dystopian horrors that will come before it all comes to a head, and who knows how it might turn out this time around.
You won't get a direct rebuttal because, obviously, an AI can be used to write ads, scams and political propaganda.
But every day millions of people are cut by knives. It hurts. A lot. Sometimes the injuries are fatal. Does that mean knives are evil and ruining the world? I'd argue not. I love my kitchen knives and couldn't imagine doing without them.
I'd also argue LLMs can be used to fact-check and uncover scams, political propaganda, etc., and can lower the cost of content production to the point where you don't need awful advertisements to cover the production costs.
This knife argument is overused as an excuse to take no precautions about anything whatsoever. The tech industry could stand to be more responsible about what it makes rather than shrugging it off until aging politicians realize it needs to be addressed.
Using LLMs to fact-check is a flawed proposition, because ultimately what they provide are language patterns, not verified information. Never mind their many documented mistakes; it's very easy for them to give incorrect answers that are simply widely repeated misconceptions. You may not blame the LLM for that, you can chalk it up to generalized ignorance, but it still ends up falling short for this use case.
But as much as I dislike ads, that last point is part of the problem: humans losing their livelihood. So, going back to a previous point, how does the lowered ad budget help anyone but executives and investors? The former ad workers are freed up to do what? Because the ones focused on art or writing would only have a harder time making a career out of that now.