AI chatbots were tasked to run a tech company. They built software in under seven minutes — for less than $1.
(www.businessinsider.com)
Management and sound technical specifications? That sounds to me like you've never actually worked in a real software company.
You just said what the main problem is: ChatGPT is not perfect. Code that isn't perfect (compiles and has consistent logic) is worthless. If you need a developer to look it over, you've already lost; it would be faster to have that developer write the code themselves.
Have you ever gotten a pull request with 10k lines of code? An AI could spit out that much code in an instant; no developer would be able to debug or properly review the mess. They'd just click "Approve" and throw whatever the AI decided to spit out onto the giant garbage heap.
If there's a bug down the line (assuming you even get the whole thing to run), good luck finding it when no one on your development team wrote the code in the first place.
Worked at quite a few. Once you get out of college and start engaging with companies beyond "Ugh, how dare they want me to waste my precious time by talking to people" you start to learn the value of a strong management team.
And, more importantly, where those jira tickets come from.
A bog-standard development flow is: "all pull requests are linked to a documented issue/ticket; all pull requests require tests to pass, code coverage not to decrease, and approval by a code owner."
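That whole gate boils down to a single predicate. Here's a minimal Python sketch, with made-up field names rather than any real forge's API; note that nothing in it cares who (or what) authored the diff:

```python
# Minimal sketch of the merge gate as a predicate. The field names are
# illustrative; no real forge (GitHub, GitLab, ...) exposes exactly this API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MergeRequest:
    linked_issue: Optional[str]  # every MR must trace back to a documented ticket
    tests_passed: bool           # full CI suite is green
    coverage_delta: float        # new coverage minus old, in percentage points
    codeowner_approved: bool     # sign-off from whoever owns the touched files

def can_merge(mr: MergeRequest) -> bool:
    """Ticket linked, tests green, coverage not decreased, code owner approved."""
    return (
        mr.linked_issue is not None
        and mr.tests_passed
        and mr.coverage_delta >= 0.0
        and mr.codeowner_approved
    )
```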
How does that work in reality?
Issues/tickets (just going to say issues from here on out) are created by a combination of customer feedback, identified issues by the development team, and directives from on high (which is generally related to the overall roadmap). One or more developers work on a merge request, the person who best understands the appropriate code looks it over, it is tested, and it is merged in. After enough of those cycles happen, a release is prepared and a manager signs off on it.
How does that map to an "AI" based workflow?
Issues are still created by a combination of customer feedback, issues identified by the development team, and directives from on high (which is generally related to the overall roadmap). Because once you get past Google Bard, LLMs can provide feedback and uncertainty measurements, and regression testing and nightly performance testing can highlight deficiencies. The issue is put into a template that includes all existing constraints, and the LLM generates a solution. Someone who understands the code checks that it looks sane, it is tested, and it is merged in. After enough of those cycles happen, a release is prepared and a manager signs off on it.
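That loop is easy to sketch. Everything below is hypothetical scaffolding (generate_patch, human_looks_sane, and run_tests are invented names, not a real API); the point is that the LLM only replaces the "write the patch" step, while every gate around it stays where it was:

```python
# Hypothetical LLM-in-the-loop flow; none of these callables are a real API.
from dataclasses import dataclass
from typing import Callable, List

ISSUE_TEMPLATE = """\
Issue: {title}
Constraints (all must hold):
{constraints}
Produce a patch that resolves the issue without violating any constraint.
"""

@dataclass
class Outcome:
    merged: bool
    reason: str

def run_ai_ticket(
    title: str,
    constraints: List[str],
    generate_patch: Callable[[str], str],     # the LLM call
    human_looks_sane: Callable[[str], bool],  # reviewer who understands the code
    run_tests: Callable[[str], bool],         # the existing CI suite
) -> Outcome:
    prompt = ISSUE_TEMPLATE.format(
        title=title,
        constraints="\n".join(f"- {c}" for c in constraints),
    )
    patch = generate_patch(prompt)
    if not human_looks_sane(patch):
        return Outcome(False, "failed the human sanity check")
    if not run_tests(patch):
        return Outcome(False, "tests failed")
    return Outcome(True, "merged")
```

Swap generate_patch for "assign to Stan" and it is the exact same flow.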
And then it becomes a question of what level you start requiring humans. Because when I do a code review prior to a Release? I am relying VERY heavily on my team to have been doing their due diligence. I skim through the MRs and look for a few hot spots but it is mostly "Well, Fred and Nancy said this was good and it passes all the tests so..."
I VEHEMENTLY disagree with this. If you don't have developers looking over your code then you are not a software engineer. And if it takes them the same amount of time to review code as it does to write it? You aren't working on interesting problems and are wasting vast amounts of money.
I can farm out a general task of "improve our code coverage" to an intern. They can spend a few days (or even weeks) doing that, and I can review their MRs in a few minutes. If something looks weird, I leave a comment and wait for them to get back to me. All the time I am working on much more interesting problems... or doing the same for my SSEs.
Once you stop worshiping the ground that "developers" walk on (which mostly comes from time and experience) you start to realize how many people spend most of their lives just filling out tickets with no understanding of "Why". And how much work your management is putting in so that you don't throw a temper tantrum or break the code base. Which... maps pretty well to an LLM.
Just to make it clear. I am not saying that all developers should strive to be managers. I actively disagree with that.
But if you aren't interested in how management works? Whether it is because you want a heads-up when crunch is coming, or want to understand the big picture, or just want to figure out when it is time to get going? Then you aren't growing as a developer and are not an engineer. You are a monkey with a typewriter in the basement.
You misunderstood, I never said management is worthless. The product managers know what customers want. The product owners keep 8 out of 10 dumb ideas away from the development team. And management again leans on the development team to find out what is actually technically possible and in what time frame.
If management just threw every customer wish into a magic black box to get code out, even if that code was perfect, you wouldn't have a product. You'd have a pile of steaming crap.
I've done plenty of code reviews; they only work on small, human-readable increments. As the saying goes: a code review of 100 lines might take an hour, a code review of 10,000 lines takes thirty minutes.
An AI would spit out so much code, with so much missing context for the developer, that it would be impossible to review properly.
Again: No
If it takes you the same amount of time to review 10k lines as it does to write 10k lines? Either you are bad at your job or you aren't working on a meaningful problem. One of the most valuable things an engineer can learn is to ask questions. If this MR is hard to parse? Leave a comment and make the developer improve the documentation or restructure a function or two. And you can do that with LLMs.
And, again, there is no difference between assigning "Implement Feature X" ticket to Stan versus StanAI. If StanAI is writing 500x the amount of code that Stan would? StanAI sucks and needs to be retrained.
And, as it stands? Using tools like Copilot or even ChatGPT, "StanAI" tends to write more concise AND more readable code. In large part because its training data is weighted toward code that has already gone through code review, was accepted, and may even be part of the production stack on half the planet.
You really don't get the issue. Give real developers pull requests with 10, 100, 1,000, and 10,000 lines of changed code. I promise you, 100%, that the quality of the latter two reviews will be abysmal. No matter how good you are as a developer, even if you are the best of the best, after a few hundred lines of unfamiliar code you'll overlook obvious issues.
And let's be honest: most developers will try to get it done quickly, read over it, hit the approve button, and go back to their own work. That is how it works in the real world.
A small pull request with 10 or at most 100 lines will get a lot more scrutiny where developers actually have the mental capacity to think and reason about the code and its context.
If you let an AI write a full system, or even a full module, in one go, you'll get large pull requests. Too large to do a meaningful review. It's as if I threw you a 2,000-line pull request right now for software you're not familiar with. How well do you think you'd do?
And you know what you say if someone is submitting 10k SLOC in a pull request?
"Hey Fred, document the hell out of this and split it into multiple MRs".
And if there is no way to accomplish that ticket without it being a 10k SLOC MR? Then it was a bad ticket and whoever made it failed.
Nothing you have described doesn't apply to humans too. If anything, StanAI is less likely to throw a temper tantrum if I leave a comment on his MR.
Hmm. If only there was a way to conserve that "mental capacity" by offloading the more banal tasks. Hmmm
Horribly. I would also make it a point to never again use any software you are responsible for, if you think asking someone who doesn't understand the code base to review the MR is an acceptable process.
Either you have no idea what you are talking about or you are a genuinely horrible manager who has been entirely dependent on having a few "rock star developers" to do your job for you. So... yeah.
You can't have your cake and eat it too. The entire point of AI would be to offload the development work. You write a specification, throw it into the magic AI box, and get a working code base out.
Why the hell would you invest ten times the organizational work to break every feature down into small, human-sized parts? The AI doesn't need bite-sized tickets like humans do; you can throw a complex 100-page specification at it and get working code out an hour later. But in that case you'll get 100k lines of code out at once.
You're treating the AI like a junior developer: give it tiny tickets to work on, then let a human review the work. The human will do badly because they have no context (they'd have to read the entire specification first, then read the pull request, then try to reason about code that a machine wrote). Reviewing code is always more difficult than writing it; the writing part is easy.
Again. If you are not already breaking down every feature into human-sized parts, you are a horrible manager. And you seem hellbent on a specific use case that you would never use in reality because... Frankenstein Complex?
And you continue to assume that the only people who can review a pull request are outside hires with no knowledge of the codebase or problem at all. Which... again, please never work on anything useful.
I'll say this: If you actively sabotage your employees, they will fail. It doesn't matter if that is Stan on the third floor or StanAI in the server room.