this post was submitted on 27 Apr 2026
1138 points (98.6% liked)
This isn't an AI story, it's a "completely fucking idiotic sysadmins exist" story.
Treat an AI like the idiot intern without any references you just hired. Gave the idiot intern permission to delete your production database? That's entirely on you, zero sympathy. (Actually, give any developer that power? You get what you deserve.)
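A minimal sketch of what "don't give the intern delete rights" looks like in practice (all names here are hypothetical, not from the article): route anything an agent or junior wants to run through a gate that denies destructive SQL by default unless a human explicitly approves it.

```python
# Hypothetical sketch: never hand an agent (or an intern) a connection
# that can destroy production. Deny destructive statements by default
# and require a human approval hook to let one through.
DESTRUCTIVE = {"DROP", "DELETE", "TRUNCATE", "ALTER"}

def guarded_execute(sql: str, approve) -> str:
    """Run read-only SQL freely; destructive SQL only with approval."""
    first_word = sql.strip().split()[0].upper()
    if first_word in DESTRUCTIVE and not approve(sql):
        raise PermissionError(f"blocked destructive statement: {first_word}")
    return f"executed: {sql}"  # stand-in for a real database call

# Deny-by-default hook: a human, not the agent, has to say yes.
print(guarded_execute("SELECT * FROM users", approve=lambda s: False))
```

The point of the design is that the default answer is "no": the agent can read all day, but nothing irreversible happens without a person in the loop.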
It could be a moronic sysadmin, it could just as easily be a moronic exec pushing staff to implement this crap right now and damn the consequences.
⤴️ #MyLastJob
I mean that's kinda the whole point.
Companies are looking at AI to replace people. Either it's ready or it's not.
If you need to treat it like it's an intern, then it's not worth the expense. Anyone hiring interns to be productive doesn't understand why you hire an intern.
As if a $90/month intern wasn't a good deal lol
You don't hire interns for productivity. If your intern program is any good, it's a time/resource sink. However, it's a good recruiting pipeline and provides young people an opportunity to get real-world experience.
Right now it's somewhere between a smart intern and a smart recent grad. A lot depends on what Skills.md and frameworks your org has set up.
I actually think it's better than that and when you set up multiple pipelines that interact and cross check it starts to ramp up. Definitely true Lemmy has its head in the sand about it though.
This. Yes it seems wasteful or whatever but you need bots with prompts that review the work, kick it back to the coder bot to re-do, yadda. But at the end of the day you have a thing that Fixes Your Bugs and Implements Basic Features For You.
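The review-and-kick-back loop described above can be sketched in a few lines. Here generate and review are hypothetical stand-ins for LLM calls, not any real API:

```python
# Illustrative "coder bot + reviewer bot" loop (hypothetical stand-ins,
# not a real API): the reviewer kicks work back with feedback until it
# passes, or we give up and escalate to a human.
def refine(task, generate, review, max_rounds=3):
    draft = generate(task, feedback=None)
    for _ in range(max_rounds):
        ok, feedback = review(task, draft)
        if ok:
            return draft  # reviewer accepted the work
        draft = generate(task, feedback=feedback)  # redo with notes
    raise RuntimeError("no draft passed review; escalate to a human")

# Toy demo: the "coder" only gets it right after seeing feedback.
drafts = iter(["buggy", "buggy", "fixed"])
result = refine(
    "fix the bug",
    generate=lambda task, feedback: next(drafts),
    review=lambda task, draft: (draft == "fixed", "still buggy"),
)
print(result)  # fixed
```

The bounded round count matters: without it, two bots can ping-pong forever, which is where the "wasteful" complaint comes from.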
Is it really fixing if it's only short-term with mounting technical debt?
No it's not. You're giving it way too much credit.
Gogo gadget inefficient hallucinating predictive text generator grift
People don't wanna hear that around here. But I agree, with the right instructions it's better than a junior Dev. Loads faster, and mistakes can be fixed faster, and if you update the prompts then it learns better from mistakes too.
People don't want to hear it anywhere because you're lauding the benefits of a parasitic technology which is inherently hostile towards workers.
And if you're getting paid for it, it makes you a parasite too, or at least more complicit than the average person.
Maybe your position would be better served by not lashing out at people as if they're your enemy.
Multiple things can be true at the same time. Statements about the technical capability of a technology don't detract from the negative impacts on the world. Those are two different topics.
Fossil fuels have incredibly massive, civilization-scale problems that are actively harming the modern world AND ALSO have enabled industrialization, pulling billions out of poverty.
AI is objectively capable at some tasks AND ALSO is being used to disrupt the labor market and causing other harmful effects in society.
The world isn't black and white
OMG adult balanced take with no detectable outrage
I'll see you in Sort By: Controversial
Black and white, no, but things can be evaluated on their net impact. And in that evaluation, AI is shit.
I understand the arguments, today isn't my first day on the Internets.
The comment you responded to was part of a conversation about the technical capabilities, and about how the truth on that topic doesn't matter to some people, because they can only view AI in a two-dimensional, black-or-white, net-good-or-net-bad way.
Then you showed up like a caricature of the type of irrationality that they were discussing.
I even explained the very obvious context that you breezed right past, and yet you're still grinding that same talking point without a moment of self-reflection.
Viewing things for their net impact is not "irrational" just because you don't like the conclusions reached.
woosh.gif
It didn't go over my head. You're just wrong.
I honestly think it's very cool for prototyping ideas at this point. It's also parasitic, though maybe for different reasons: it gives people the power (which they unfortunately use way too much) to imitate an art, but in a non-arty, imperfect way that doesn't comprehend the details of that art, resulting in slop. For software that can go very wrong, as we see here. This is also a reason I mostly quit open source: now everyone can code a bad version of a library, and it has sucked the art out of good open source. It's also increasingly difficult to judge code quality, because the wording and "look" are so polished. Previously you could check a code-base, review it somewhat, and know how good the quality was; now it's more like "is this slop or not?" (in which case I give it a wide berth, because reviewing is often not worth it).
At some point though, I think this automation of work is inevitable, we need to think about a society that can peacefully exist without having the requirement to work to exist. I actually think this could easily be utopian, everyone can focus on what they actually think is fulfilling life.
Though it's sad and concerning that technology is developing faster than society can adapt, which is why I'm mostly with you: people (and representatives like politicians) just aren't "programmed" for these fast-paced changes, or for steering the technology so that the future turns out more utopian than the dystopia it's currently heading towards...
Every commercial use of AI negatively impacts the environment in order to further the interests of capital and is therefore inherently immoral.
If we were in a nuclear fusion or otherwise all-renewable-energy-with-plenty-of-excess world, then I'd be more aligned with your mindset and agree that only uses which bastardize art / etc are immoral.
Is it okay for Skrillex to make loops? For Vanilla Ice or MC Hammer to sample?
The fact is, it can be a very useful technology when deployed sensibly. Yes, it's going to inflict massive harm on society in multiple ways - but just dismissing it as shit is putting your head in the sand. We need to be figuring out how to ensure that the harm it does is minimised and ideally that it's used in ways that benefit us all. Fuck knows how though.
But it's not just going to go away, no matter how much we might want it to.
It destroys the environment inherently by virtue of its operation (in the context of our current energy infrastructure). I do not care how "useful" it is to you or any corporation if it takes even a single living organism off of this earth.
I dismiss it as shit and I don't need your approval to do so. Medical and scientific applications are acceptable. Nothing else, no exceptions.
“Treat an AI like an idiot intern without any references you just hired.”
Instead of this, treat AI like some dude off the street who you didn’t hire and leave it out of your life. It’s shitty, it’s wasteful, and it’s subsidized by everyone to get a few tech bros rich.
Like seriously, it’s just theft of people’s work it “trained on”, powered by energy companies that charge us more to power it, at the cost of poisoning our water supplies, to ultimately try and steal our salaries one day.
It’s absolutely parasitic software at every level.
Nah, I think I'm going to keep using it
Hah, you just wrote a punchline similar to a presentation I've been giving at conferences.
My company is in the process of pivoting hard to Claude after 50yrs of doing virtually everything themselves and rolling their own versions of already-existing software, and this is almost verbatim how I've described to others what it feels like to use it.
It feels like cajoling an intern to understand a job for which they have some average skill but zero motivation, and they only want to do the bare minimum, so you spend all the time you could be doing your job holding their hand through basic tasks.
It's fucking annoying.
negl sounds like you need to spend some time writing good documentation. May as well do it in the form of Skills files so humans and bots both are more quickly able to be useful in your org.
Fun fact: giving developers access to production deployments violates FedRAMP and like half a dozen other compliance regimes (SOC 2, IRAP, ISMAP, G-Cloud, BSI C5, ...)
But it doesn't mean it isn't incredibly common. Especially with "DevOps" where the developers are pushed to handle literally every aspect.
IMO DevOps was always a stupid idea. Impedance mismatch.
Developers who are really good at designing complex enterprise-level shit need days-to-weeks of uninterrupted time to think and experiment. Please, skip the daily stand-up until they've figured out how to fix it.
Coders who are good at fixing bugs or adding a new menu item need a few hours or a day uninterrupted. Daily stand-up, should have closed yesterday's ticket or have hit a real roadblock with it.
Ops IT people are fixing like 4 fires at the literal same time, they are lucky to get minutes of uninterrupted thinking time. It's about managing rate of tickets per day, and in contrast going full CAPA when there's a significant outage.
Just... totally different workflows, personalities, and management
I totally agree. I think it stems from Ops people that are angry at developers for building bad software. Theoretically making devs responsible for their deployments would make them care more about the quality, but really it just splits their focus and now they make bad software and provide poor ops.
Agreed about salty ops people. That said, it's important even for fancy-schmancy Architect-level engineers to be assigned really annoying bugs in the codebase they helped to shape.
I was once the intern who did relatively stupid things with one very big consequence.
My biggest fuckup was unplugging a 10BASE2 (edit: I originally wrote 10-base-T) coax wire from the loop so I could plug in a newly built computer. Everyone at the time (including me) knew that an unterminated 10BASE2 network would crash Win 3.11, so the accepted process was to tell the entire network you were about to disconnect a cable so they could save their work and be ready to drop to DOS. I spaced that step in my haste to test a newly built computer and ruined a day's worth of work by the sales guy.
Ultimately, I was the one who fucked up and did know better. That's AI. However, it only had consequences because Win 3.11 networking code was fucking awful and because the sales guy didn't save his work frequently. If the same person in this story had asked Claude whether it was a good idea to have the backup and production databases on the same volume, the AI would have said No. If the person had asked Claude whether it was a good idea to delete a database without any confirmation dialogue, the AI would have said No. AI did it anyway. That's what makes this an AI story.
Was their database environment stupid? Yes. Did the sysadmin fuck up by not treating AI like an intern? Yes. Did the AI do something it knew it shouldn't do? Also yes. This is both an AI story and stupid sysadmin story.
I witnessed a sysadmin, on a production database, type a SQL DELETE FROM query that was being read to him over a call. He ran the command before writing the WHERE clause.
Luckily, they had backups.
"OOPS!? What do you mean "oops"?" was a meme around the office for years.
An extremely enthusiastic intern that, if presented with a question/problem/prompt they don't know the solution for will just overconfidently pull something out of their ass and run with it.
It's both.
Problem is execs and stupid software devs wanna give these things full rein on systems because of "performance gainz"
It’s a collective stupidity that’s impossible to break because it’s hooked into the highest decision makers.
These things are bought specifically because they are trying to replace the sysadmins... Along with everyone else.
Any business who uses AI in that manner will fail like all of the dot com companies who went all-in on the Internet when it first achieved a bit of popularity.
AI is, at best, a tool that professionals may be able to use in some situations. Any company dumb enough to believe the hype generated by the chatbot companies is probably making other, similarly dumb, decisions in other areas.
Things like giving way too much access to a worker, not having a tested disaster recovery plan, and not having anyone who understands the technologies that their business depends on.
This company was heading towards disaster due to poor decision making, it just happened to be AI related but it could have also been an undetected cyberattack, 0-day exploits pushed to the client app, destructive ex-employee, etc.
This is a cautionary tale about bad management