Yeah but not until the next financial quarter so it's all good right?
As long as you have a Golden Parachute in your contract!
Wait... Why do none of us have Golden Parachutes...?
You guys have parachutes?
I do! But it's just an anvil with some string attached
Ooooh, that sounds awkward - let me take that golden anvil off your hands before it hurts you.
There's a BlackBerry docu-drama streaming on Netflix now - (Jay Baruchel - Hiccup from How to Train Your Dragon / Dave from the 2010 Sorcerer's Apprentice - has a leading role, it fits him well...) Real-life tales of golden parachutes, compressed decision making, consequences...
Oh no, I'm terrified to lose my "learning velocity."
holy corporate word salad
Maybe enough that corporate types will find it compelling?
Yeah, the big thing is that management has no sense of how little coding you actually do in a software engineering role. You spend so much more time understanding requirements, understanding how you can resolve roadblocks within your organization, and understanding what the hell the previously written code does.
In particular, the last part is something that will most definitely take longer for vibecoded programs.
The code is often needlessly complex, because:
- folks throw in additional features with no restraint,
- the AI will gladly generate a second implementation for stuff you already solved in the codebase, and
- AI-generated code tends to just be noisy, because you need rigorous logical reasoning to find the most minimal solution.
But you also just don't have human beings who made all the detail decisions and can tell you why they're important. In vibecoded code, all of these detail decisions are accidental and only 'proven' insofar as the accidental state the code happens to be in doesn't explode in reality. If you need to tweak anything about it, you're completely blind as to what's actually important and what's just in there because the AI figured it's the most likely thing to autocomplete.
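To illustrate the duplicate-implementation point with a made-up sketch (hypothetical file and function names, not from any real codebase): the helper already exists, and the generated code quietly reinvents it one module over, so the same behaviour is now maintained twice.

```typescript
// utils.ts -- the helper that already exists in the codebase
export function normalizeEmail(address: string): string {
  // Lowercase and trim so comparisons behave the same everywhere
  return address.trim().toLowerCase();
}

// signup.ts -- what the model happily generates later, never having "seen" utils.ts
export function cleanEmailInput(rawEmail: string): string {
  // Same behaviour under a different name: a second implementation to review,
  // test and keep in sync from now on
  return rawEmail.trim().toLowerCase();
}
```

Multiply that by every small utility in a codebase and you get exactly the noise described above.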
Fun story from this week: we had a chore for the frontend to refresh to a new version of the UI framework. Fairly simple task, so off to a junior developer. Within a couple hours there was a merge request ready to go. OK, a fairly normal amount of time to change the version and at least do a sniff test and find nothing changed, so I go in assuming I'll look at a few version bumps, maybe one or two tweaks... I see the junior dev was proposing over 1,000 lines of code to be added... WTF...
I crack it open and there was just a firehose of CSS rules, all marked '!important'. Looking at one example, it repeated the same selector with the same exact bunch of rules 5 times in a row. It was like it found every possible derived CSS class and tag combination and defined !important rules for most everything about it.
So I find out that the junior dev asked it to do the rebase and it did what he expected: just changed the version and went. He tried it, and due to a framework change one element was misaligned by a little bit. So he gave that feedback to the LLM and tried again... and it failed, and he tried again and it failed, and after 5 rounds it finally got the element aligned and he hit 'merge request'. For fun I opened up his proposed change, and so much of it was dodgy CSS-wise because it screwed with so much other stuff, but the junior dev had only concerned himself with the one page as it opened.
So I said screw it, I'll do it myself, and added the single rule that was needed to adapt to the framework change, making it overall about a 5-line change including versioning and such.
Depressingly, I suspect an executive would consider me far less productive because I only did 5 lines of change and the junior dev would have done thousands...
Yup. AI might, and I mean might, be a force multiplier for senior developers, but this whole "everyone can just let AI code" thing is bullshit that will lead to a giant mess of unmanageable code when the developers you're maturing don't understand the underlying code or good software architecture.
Oh and also, I'm right there with you on the "fuck that whole ELOC bullshit as a metric" front.
In my experience, the bigger the codebase gets, the more confounded the LLM gets at trying to make coherent changes. So LLM projects start on shaky ground and just get worse, because they can't maintain the stuff they themselves generated.
I've seen what LLMs can do, and it is certainly interesting and can do some stuff, but the vast majority of my experience is someone who had not coded before "vibing" themselves into a corner and demanding help to dig them out. A bit irritating, because before, we could reasonably prioritize requests since management understood that making something from nothing was real work; now management says "they aren't asking you to make something, just help them fix something that already exists, should be easy!"
On the ELOC metric, for a long time I pointed out how disastrous I must be because my contribution to a project I was on was about -10,000 lines of code by the time I went to something else.
Depressingly, I suspect an executive would consider me far less productive because I only did 5 lines of change and the junior dev would have done thousands...
Probably the very same execs that use phrases like "Do more with less!" and then completely miss the point when true efficiency stares them in the face.
Depressingly, I suspect an executive would consider me far less productive because I only did 5 lines of change and the junior dev would have done thousands...
The first question I ask about any analytics request is what you're trying to do with the results: what business question you want to answer. The second is how the analytics question relates to that business question.
It's very easy to ask a question, look for a way to measure the answer, find something you can measure and start looking for the best way to ask for that measure. It implicitly assumes that "I need to measure an answer -> I can measure this -> this is the answer", but as you point out, this isn't a valid implication.
My worst enemy is the sentiment "If you can't measure it, you can't manage it." Curse every MBA that repeats it like a mantra, in the name of the Stocks and the Shareholder Value and the Holy KPI.
Yes, some measures can be valuable indicators, if contextualised correctly, but not everything that has to be managed can be measured effectively. To grasp for measures anyway twists your sight away from the actual, non-measured facts.
That last sentence hits me where it hurts.
I agree wholeheartedly with this article; however, it's giving vibe-authored
The core problem was not simply the technology itself. It was the organizational inability to integrate AI into real workflows, learn from deployment and distinguish between a demo that worked and a system that delivered.
Yeah, it has that phrasing sometimes.
The missing Oxford comma... *twitch, twitch*
Good article. Any company doing any of those examples deserves to die.
The companies that will pull ahead in the next 24 months are not the ones that adopt fastest. They are the ones whose judgment systems are mature enough that adoption does not break them.
Yeah, judgement doesn't seem to be a high priority for the a.i.-addled mind.
At first I thought vibe coding was just coding stuff for fun, using whatever comes to your mind. Then I learned that it's mostly just letting AI code for you and copy-pasting the code.
Now I wonder if there are some cases of real vibe coding like my first assumption.
Yeah, vibe coding is such a fun term, too bad it's used for this purpose.
The danger here is that many people think that software is all about having code that seems to work when you try it. Those people have never been able to get past "Hello, World" in X for Dummies, so they don't realize all the practical realities of software distribution that are very much more nuanced and complicated than just writing the code. They get their hands on some working code and wheeeee!!! Ship it!!!!
A while back I compared LLMs to lightsabers - and pointed out how many amputees are found in the Galaxy far far away that has lightsabers.
Not copy-paste. Let the AI do it for you... else how would we get these entertaining stories of idiots letting AI delete their production database?
Move fast, break things LOL
The term you're looking for is "cowboy coding."
I resemble that remark - rode herd on a whole passel o' C back in the early '90s.
There are a lot of folks saying that Bluesky's recent outages were due to the vast amounts of vibe coding in their systems. It was down for days.
This article is good, a rare exception in the current discourse around LLMs.
The real vibe coding is a bottle of beer and a lotta fuck it
There's code over 20 years old still in use that I had written using that approach.
No one had the cultural standing to say this looks great, and we are not putting it into production.
Can someone in your organization look at a slick prototype and say “no” without career risk? If the answer is no, vibe coding becomes a one-way ratchet.
This is definitely the feeling at my company. "How fast is AI letting you ship?" is the only question management & executives are asking.
the resulting ambiguity will be filled by whoever moves fastest, which is rarely whoever should be deciding.
There's capitalism!
Can someone in your organization look at a slick prototype and say “no” without career risk? If the answer is no
You have toxic leadership and we have just handed them a mini-gatling-gun with which to shoot everyone's feet off.
This is definitely the feeling at my company. "How fast is AI letting you ship?" is the only question management & executives are asking.
Slightly faster, but with a way higher upkeep cost. And it might delete your company's database or cause a customer-data leak.
These golden eggs are being laid so slowly! There's got to be a faster way!
Slightly faster, but with a way higher upkeep cost. And it might delete your company's database or cause a customer-data leak.
You had me at "slightly faster", no need to keep selling it. Sold!
- The Average CEO, this year.
We inadvertently let two juniors vibe code a project. They used their own Cursor subs and did not tell anyone. They are no longer with us. Now there is this project with no documentation; they removed the LLM's comments and then ran a linter. I checked the git history. Nobody can make sense of the code base. We can't even empirically show whether it's cheaper to salvage it or to start over, so we just stare at it every day. Nobody can add any features. When you debug, you end up in abstraction hell, working through the most buggy nonsense code.
They are no longer with us.
Hey, I'm annoyed by slop coding work as much as the next guy, but murder seems a bit much as a reaction...
Not to the shareholders.
I only want to punch the one guy in the throat. I am not that violent.
It's going to get even more dangerous over time because the LLMs are coming out of the uncanny valley but still have subtle problems, they are just getting harder to see. I just did a bunch of AI training at my company and on the one hand, some of it was already out of date, and on the other hand, the language used to describe it gave it a lot more credit than it deserved.
People are already thinking it can do things it really can't. Like think or analyze.
I've been in this cycle since the first time I interacted with an LLM or AI coding system: at first it looks impressive and I'm not sure what its limits are, and then I slam into a wall that makes me realize in horror that its capabilities are far less than they seemed at first. Then improvements come out and I repeat the whole process, because the previous wall seems to be dealt with and it becomes hard to argue with the people who are gung-ho for AI.
So Ian Malcolm was right the whole time