This study is over 6 months old, why is Fortune.com only writing about it now?
A version of this story originally published on Fortune.com on July 20, 2025.
Nevermind, I guess...
Writing code with an AI as an experienced software developer is like writing code by instructing a junior developer.
... That keeps making the same mistakes over and over again because it never actually learns from what you try to teach it.
Yep, the junior is capable of learning.
My job believes the solution to this is a 7,000-line agents.md file.
Without the payoff of the next generation of developers learning.
Management: "Treat it like a junior dev"
... So where are we going to get senior devs if we're not training juniors?
Apparently some people would love to manage a fleet of virtual junior devs instead of coding themselves. I really don't see the appeal.
I think the appeal is that they already tried to learn to code and failed.
Folks I know who are really excited about vibe coding are the ones who are tired of not having access to a programmer.
In some of their cases, vibe coding is a good enough answer. In other cases, it is not.
Their workplaces get to find out later which cases were which.
Very true. I've been saying this for years. However, the flip side is that you get the best results from AI by treating it as a junior developer as well. When you do, you can in fact have a fleet of virtual junior developers working for you as a senior.
However, and I tell this to the junior I work with: you are responsible for the code you put into production, regardless of whether you wrote it yourself or used AI. You must review what it creates, because you're signing off on it.
That in turn means you may not save as much time as you think, because you have to review everything, and you have to make sure you understand everything.
But understanding will get progressively harder the more code is written by other people or AI. It's best to try to stay current with the code base as it develops.
Unfortunately this cautious approach does not align with the profit motives of those trying to replace us with AI, so I remain cynical about the future.
Usually, having to wrangle a junior developer takes a senior more time than doing the junior's job themselves. The problem grows the more juniors they're responsible for, so having LLMs simulate a fleet of junior developers will be a massive time sink and not faster than doing everything themselves. With real juniors, though, this can still be worthwhile, as eventually they'll learn, require much less supervision, and become a net positive. LLMs do not learn once they're deployed, though, so the only way they get better is if a cleverer model is created that can simulate a mid-level developer, and so far the diminishing returns of progressively larger and larger models make it seem pretty likely that something based on LLMs won't be enough.
No. Experienced devs knew it would make tasks take longer, because we have common sense and technical knowledge.
I don't blame randos for buying into the hype; what do they know? But by now we're seeing that they have caught on to the scam.
People assumed X, but in one experiment the result was Y.
And in how many experiments was the result in fact X, if it was just one in which it was Y?
I don't actually disagree with the article, I'm just pointing out the title is meaningless.
The real slowdown comes afterwards, when you realize you don't understand your own codebase because you relied too much on AI. Understanding it well enough requires discipline, which is lacking in the current IT world anyway. Either you rely entirely on AI, or you monitor its every action, in which case you may be better off writing the code yourself. I don't think this hybrid approach will pan out particularly well.
Yeah, it's interesting how strangely development is presented, as if programming were only about writing code. They still do that when they tout AI coding capabilities.
I'm not against AI; it's amazing how quickly you can build something. But only something small and limited that one person can build. The whole human experience is missing: laziness, boredom, communication and the issues that come with it... everything it takes to actually build a good product that's more than a simple app.
I assumed nothing, and evaluated it like I would any other tool. It's OK for throwaway scripts, but if the script does anything non-trivial that could affect anything external, the time spent making sure nothing goes awfully wrong is at least as much as the time saved generating the script, at least in my domain.
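For instance (a hypothetical sketch of my own, not anything from the study): for a generated cleanup script, the "making sure nothing goes awfully wrong" step usually means adding the guardrails the model didn't, like a dry-run default.

```python
# Hypothetical sketch: an AI-generated "cleanup" script that deletes stale
# log files. The dry-run default and the age check are the parts you end up
# adding and verifying yourself before letting it touch anything real.
import argparse
import time
from pathlib import Path

def find_stale(root: Path, max_age_days: int) -> list[Path]:
    cutoff = time.time() - max_age_days * 86400
    return [p for p in root.rglob("*.log") if p.is_file() and p.stat().st_mtime < cutoff]

def main() -> None:
    parser = argparse.ArgumentParser(description="Delete stale .log files")
    parser.add_argument("root", type=Path)
    parser.add_argument("--max-age-days", type=int, default=30)
    parser.add_argument("--delete", action="store_true",
                        help="actually delete; the default is a dry run")
    args = parser.parse_args()

    for path in find_stale(args.root, args.max_age_days):
        if args.delete:
            path.unlink()
            print(f"deleted {path}")
        else:
            print(f"would delete {path}")

if __name__ == "__main__":
    main()
```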
When writing code, I don't let AI do the heavy lifting. Instead, I use it to push back the fog of war on tech I'm trying to master. At the same time, I keep the dialogue to a space where I can verify what it's giving me.
- Never ask leading questions. Every token you add to the conversation matters, so phrase your query in a way that forces the AI to connect the dots for you.
- Don't ask for deep reasoning and inference. It's not built for this, and it will bullshit/hallucinate if you push it to do so.
- Ask for live hyperlinks so it's easier to fact-check.
- Ask for code samples, algorithms, or snippets to do discrete tasks that you can easily follow (see the sketch after this list).
- Ask for A/B comparisons between one stack you know by heart, and the other you're exploring.
- It will screw this up, eventually. Report hallucinations back to the conversation.
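To illustrate the "discrete tasks" point (my own hypothetical sketch, not the commenter's code): the sweet spot is a snippet small enough that you can read it, check it against the docs, and run it once to confirm.

```python
# Example of a discrete, easily verifiable ask: group a list of records by a key.
from collections import defaultdict

def group_by(records: list[dict], key: str) -> dict:
    groups = defaultdict(list)
    for record in records:
        groups[record[key]].append(record)
    return dict(groups)

events = [
    {"service": "auth", "ms": 12},
    {"service": "billing", "ms": 48},
    {"service": "auth", "ms": 7},
]
print(group_by(events, "service"))
# {'auth': [{'service': 'auth', 'ms': 12}, {'service': 'auth', 'ms': 7}],
#  'billing': [{'service': 'billing', 'ms': 48}]}
```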
About 20% of the time, it'll suggest things that are entirely plausible and probably should exist, but don't. Some platforms and APIs really do have barn-door-sized holes in them and it's staggering how rapidly AI reports a false positive in these spaces. It's almost as if the whole ML training stratagem assumes a kind of uniformity across the training set, on all axes, that leads to this flavor of hallucination. In any event, it's been helpful to know this is where it's most likely to trip up.
Edit: an example of one such API hole is when I asked ChatGPT for information about doing specific things in Datastar. This is kind of a curveball, since there's not a huge amount online about it. It first hallucinated an attribute namespace prefix of data-star-, which is incorrect (it uses data- instead). It also dreamed up a JavaScript-callable API parked on a non-existent Datastar object. Both of those concepts conform strongly to the broader world of browser-extending APIs, would be incredibly useful, and are things you might expect to be there in the first place.
My problem with this, if I understand correctly, is I can usually do all of this faster without having to lead a LLM around by the nose and try to coerce it into being helpful.
That said, search engines do suck ass these days (thanks LLMs)
That's been my biggest problem with the current state of affairs. It's now easier to research newer tech through an LLM than it is to play search-result whack-a-mole, on the off chance that what you need is on a forum that's not Discord. At least an AI can mostly make sense of vendor docs and extrapolate a bit from there. That said, I don't like it.
Not surprised.
In my last job, my boss used more and more AI. As a senior dev, I was very used to his coding patterns; I knew the code he wrote and could generally follow what he made. The more he used AI, the less understandable and the more confusing and buggy the code became.
Eventually, the CEO of the company abused the "gains" of the AI "productivity" to push for more features with tighter deadlines. This meant the technical debt kept growing, and I got assigned to fixing the messes the AI was shitting all over the code base.
In the end? We had several critical security vulnerabilities and a code base that even I couldn't understand. It was dogshit. AI will only ever be used to "increase productivity" and profit while ignoring the chilling effects: lower quality code, buggy software and dogshit working conditions.
Enduring 3 months of this severely burnt me out; I had to quit. The rabid profit incentive needs to go to fucking hell. God, I despise tech bros.
And this gets worse over time because you still have to maintain it.
And as the cherry on top - https://www.techradar.com/pro/nearly-half-of-all-code-generated-by-ai-found-to-contain-security-flaws-even-big-llms-affected
Someone on Mastodon was saying that whether you consider AI coding an advantage completely depends on whether you think of prompting the AI and verifying its output as “work.” If that’s work to you, the AI offers no benefit. If it’s not, then you may think you’ve freed up a bunch of time and energy.
The problem for me, then, is that I enjoy writing code. I do not enjoy telling other people what to do or reviewing their code. So AI is a valueless proposition to me because I like my job and am good at it.
Here's the full paper for the study this article is about: Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity (PDF).
Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%--AI tooling slowed developers down.
The gap between what developers predicted and what actually happened is insane.
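To put rough numbers on that gap (back-of-the-envelope arithmetic using only the figures quoted above):

```python
# Normalizing the no-AI completion time to 1.0 and applying the quoted figures.
baseline = 1.0
forecast = baseline * (1 - 0.24)   # devs predicted 24% faster        -> 0.76x
perceived = baseline * (1 - 0.20)  # afterwards they felt 20% faster  -> 0.80x
measured = baseline * (1 + 0.19)   # the study measured 19% slower    -> 1.19x

print(round(measured / forecast, 2))   # 1.57: tasks took ~57% longer than predicted
print(round(measured / perceived, 2))  # 1.49: and ~49% longer than devs believed afterwards
```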
Thank you, the article is shit
I get the agenda of the study, and I also agree with it, but the study itself is really low effort.
Obviously, an experienced developer who is working on a highly specialized project, already has all the needed context, and has no experience with using AI will beat a clueless AI.
What would the results look like if the software developer had experience with AI and were to start on a new project without any existing context? A lot different, I would imagine.

AI is also not only for code generation. After a year of working as a software developer, I could no longer gain much experience from my senior colleagues (which says much more about them than about me or AI), and I was kind of forced to look for sparring elsewhere. I feel like I have been speedrunning my experience and career by using AI. I have never used code generation that much; instead I've used it to learn about things I don't know I don't know about. That has been an accelerator.
Today, I'm using code generation much more: when starting a new project, when I need to prototype something, for completing mundane tasks on existing projects, for writing non-critical Python scripts and useful bash scripts, for spinning up internal UI projects, etc.
Sometimes I naturally waste time, as it takes time for an AI to produce code and then more time to review that code, but in general I feel my productivity has improved by using AI.
Yeah, I think it's weird how people need to think in such a binary manner. AI sucks in almost every way, and it can also save you time as a quick autocomplete in an IDE. You'd have to be an idiot to have it write big blocks of code you don't understand; that's on you if you do it. If you want to use it to improve productivity, just let it write a few lines here and there that would otherwise cost you several seconds each. When it comes to refactoring, I've found GitHub Copilot helps a lot, because what I'm doing is changing from one common pattern to another, probably even more common, pattern. It's predictable, so it usually gets it fairly right.
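For example (a hypothetical illustration, not the commenter's code), the kind of mechanical pattern-to-pattern change where completion tends to get it right:

```python
# Before: a very common accumulate-with-a-loop pattern.
def active_emails_before(users: list[dict]) -> list[str]:
    result = []
    for user in users:
        if user["active"]:
            result.append(user["email"].lower())
    return result

# After: the same logic as an (even more common) list comprehension.
def active_emails_after(users: list[dict]) -> list[str]:
    return [user["email"].lower() for user in users if user["active"]]
```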
If it were really artificially intelligent, you could just describe a program and in seconds get a nearly bug-free, production-ready app. That's a LONG way off, if it ever happens. People treating LLMs like they are actually AI is the issue. Stop misusing the tool.
Use judgement, people.