this post was submitted on 27 Jan 2024
147 points (98.0% liked)


Interesting to see the benefits and drawbacks called out.

[–] uthredii@programming.dev 46 points 9 months ago (2 children)

In this regard, AI-generated code resembles an itinerant contributor, prone to violate the DRY-ness [don't repeat yourself] of the repos visited.

So I guess previously people might first look inside their repos for examples of the code they want to write; if they find an example, they might import it instead of copying and pasting.

When using LLM-generated code, they (and the LLM) won't be checking the repo for existing code, so it ends up being a copy-pasta soup.
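A tiny illustration of what I mean (the helper and module names here are made up, not from any real codebase):

```python
import re

# An existing helper that already lives somewhere in the repo.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(address: str) -> bool:
    """Shared validation helper, reused wherever an email check is needed."""
    return bool(EMAIL_RE.match(address))

# DRY: new code imports and calls the existing helper.
def register_user(email: str) -> None:
    if not is_valid_email(email):
        raise ValueError(f"invalid email: {email}")

# Copy-pasta soup: generated code that never looked at the repo re-implements
# the same rule inline, so it now lives in two places and can drift apart.
def register_user_generated(email: str) -> None:
    if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", email):
        raise ValueError(f"invalid email: {email}")

if __name__ == "__main__":
    register_user("alice@example.com")
    register_user_generated("bob@example.com")
```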

[–] sentient_loom@sh.itjust.works 24 points 9 months ago (2 children)

If you use AI to generate code, that should always be the first draft. You still have to edit it to make sure it's good.

[–] walter_wiggles@lemmy.nz 17 points 9 months ago (3 children)

I totally agree, but I don't hear any discussion about how to incentivize developers to do it.

If AI makes creating new code disproportionately easy, then I think DRY and refactoring will fall by the wayside.

[–] sentient_loom@sh.itjust.works 5 points 9 months ago (1 children)

How do we currently incentivize developers to keep it DRY? Code review still exists.

[–] MNByChoice@midwest.social 9 points 9 months ago* (last edited 9 months ago) (1 children)

Code review still exists.

For now, code reviews are done by competent people. What about once "AI makes creating new code disproportionately easy"?

Edit: Is it clear that the quote and the text before and after it are all one thought? I am hopeful, but not convinced.

[–] sentient_loom@sh.itjust.works 5 points 9 months ago* (last edited 9 months ago) (1 children)

Yes your message is clear.

To answer your original question, I have no idea what it will look like when software writes and reviews itself. It seems obvious that human understanding of a code base will quickly disappear if this is the process, and at a certain point it will go beyond the capacity of human refactoring.

My first thought is that a code base will eventually become incoherent and irredeemably buggy. But somebody (probably not an AI, at first) will teach ChatGPT to refactor coherently.

But the concept of coherence here becomes a major philosophical problem, and it's very difficult to imagine how to make it practical in the long run.

I think for now the practical necessity is to put extra emphasis on human peer review and refactoring. I personally haven't used AI to write code yet.

My dark side would love to see some greedy corporations wreck their codebases by over-relying on AI to replace their coders. Then debugging becomes a nightmare, because nobody actually wrote the code, and they end up spending more time fixing bugs than they would have spent writing it in the first place.

Edit: missing word

[–] peopleproblems@lemmy.world 5 points 9 months ago (1 children)

And, while some of us may be out of a job temporarily, historically, when companies make these big brain decisions, we end up getting to come back and charge 4x what we used to get paid to get it working again.

When I found out that one of the contractors I worked with was not one of the cheap ones, but had instead been rehired after retiring at a 400% bump, I decided that maybe I needed to understand the business needs better.

[–] sentient_loom@sh.itjust.works 2 points 9 months ago

Wow that's a huge pay bump lol. Maybe I should also start studying those business needs more.

[–] tatterdemalion@programming.dev 4 points 9 months ago

Because it will lead to an incomprehensible mess. Ever heard the quote, "Programs must be written for people to read, and only incidentally for machines to execute"? This is well-trodden ground in science fiction. If you have AI writing code that's so lacking in abstraction (because machines need less of it to understand), then humans will become useless at maintaining it. Obviously this is a problem, because it centralizes the responsibility of maintenance onto machines that depend on this very code to operate.

[–] Honytawk@lemmy.zip 1 points 9 months ago (1 children)

If the AI is creating the code, it's up to the AI to keep the code DRY.

[–] sentient_loom@sh.itjust.works 2 points 9 months ago

Well that means it's up to us to make it recognize non-DRY code and teach it to refactor while remaining coherent forever and ever, or else we'll have to parachute into lands of alien code and try to figure out something nobody wrote and nobody understands.

[–] hikaru755@feddit.de 8 points 9 months ago

Yeah, but by generating with AI you're incentivized to skip that initial research into your own codebase, which leads you to miss opportunities for consolidation or reuse.

[–] GammaGames@beehaw.org 3 points 9 months ago* (last edited 9 months ago) (2 children)

Makes sense, even if it’s not good practice.

It is really useful for hobby projects! I needed a recursive function to find a path between two nodes in a graph, and it wrote me something that worked with my data in a few seconds. Saved a bit of time.
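Something along these lines, a minimal sketch assuming the graph is a plain adjacency-list dict (the names are made up, not the actual generated code):

```python
def find_path(graph, start, goal, visited=None):
    """Recursive depth-first search: returns one path from start to goal,
    or None if no path exists."""
    if visited is None:
        visited = set()
    if start == goal:
        return [start]
    visited.add(start)
    for neighbor in graph.get(start, []):
        if neighbor not in visited:
            rest = find_path(graph, neighbor, goal, visited)
            if rest is not None:
                return [start] + rest
    return None

if __name__ == "__main__":
    graph = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
    print(find_path(graph, "a", "d"))  # ['a', 'b', 'd']
```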

[–] MonkderZweite@feddit.ch 9 points 9 months ago (1 children)

saved a bit of time.

For now.

[–] GammaGames@beehaw.org -1 points 9 months ago (1 children)
[–] MonkderZweite@feddit.ch 8 points 9 months ago* (last edited 9 months ago) (2 children)

Best practices exist to save you time and nerves in debugging.

[–] magic_lobster_party@kbin.social 2 points 9 months ago (1 children)

Good practices don’t matter much for small hobby projects.

[–] MonkderZweite@feddit.ch 5 points 9 months ago

Training bad habits... vs. fun, I get it.

[–] GammaGames@beehaw.org 2 points 9 months ago

Meh, I knew that my graph would never have loops and would only ever have one path from A to B, so it did it well enough. Pretty easy to test!

[–] silasmariner@programming.dev 5 points 9 months ago (2 children)

I find that code way too fun to write to let someone or something else do it for me 😂

[–] redcalcium@lemmy.institute 4 points 9 months ago

The actual time spent micromanaging the AI until it produces the perfect code may or may not exceed the time it takes to write the code yourself.

[–] GammaGames@beehaw.org 1 points 9 months ago

That’s fair, and if I were getting paid for it I’d do the same! But it’s for a project that’s essentially a pomodoro timer 😆 so it’s harder to justify the time

[–] MaoZedongers@lemmy.today 19 points 9 months ago

what a shocker

[–] chepox@sopuli.xyz 16 points 9 months ago (3 children)

I use it mostly as a help menu: details of functions and parameter settings, and also for fixing errors. I don't use it to generate code for me, though.

[–] conciselyverbose@kbin.social 23 points 9 months ago* (last edited 9 months ago) (1 children)

Using it to generate code isn't inherently bad (outside of copyright concerns), especially in languages with a stupid amount of boilerplate.

But the problem is that people are lazy. They don't bother understanding the output, making sure it does what they want it to, etc. It's not that different from people copy-pasting code from reference material. Part of the beauty of software development is that you don't have to solve every problem someone else has already solved. But you do need to know what your code is doing and why.

Copilot is a shortcut to code that "works" with less requirement to know what's happening.

[–] evatronic@lemm.ee 9 points 9 months ago (1 children)

I thought we solved the boilerplate issue with templates and snippets like 30 years ago.

[–] lemmyvore@feddit.nl 5 points 9 months ago

Not only that, but we solved it in a deterministic manner. The way LLMs go about it, picking something they think sort of maybe looks like the right thing, is more bother than it's worth.

[–] Landless2029@lemmy.world 6 points 9 months ago* (last edited 9 months ago) (1 children)

It's awesome for debugging for me.

Also helped me a few times with recursive logic.

As with any AI solution, it's "garbage in, garbage out."

Write your code normally, then ask it to generate comments, add logging, or suggest improvements.

You have to already know how to code so you know what to ignore.

[–] bruhduh@lemmy.world 5 points 9 months ago

Exactly. AI only speeds up your coding; quality still depends on you.

[–] linearchaos@lemmy.world 3 points 9 months ago

If I don't use Copilot to give me a piece of best-practice code, I'm probably going to go and find it with a search engine.

Obviously I'm not going to do it for every little thing, but if I'm going to implement an A* somewhere, which I screw with what, once every 5 years? I'm going to go and look at how someone else did it and probably take their exact implementation and make minor modifications.

I'm not an absolute copy and paste fiend but I don't have the time to reinvent the wheel every time I want to do something. For the most part it's faster to go and grab crowd vetted code from someone that it is to go back through my own stuff and source my own implementation in the last project. Hell, and a lot of cases there might even be a better implementation than I used the last time I borrowed it from someone else.