LMs give the appearance of understanding, but as soon as you try to use them for anything you're actually knowledgeable in, the facade crumbles.
Even for repetitive tasks, you have to do a lot of manual checking to make sure they didn't start hallucinating halfway through.
I'm a programmer as well. When ChatGPT & Co initially came out, I was pretty excited tbh and attempted to integrate them into my workflow, which kinda worked-ish? But a lot of that was me being amazed by the novelty and forgiving of the shortcomings.
Didn't take me long to phase them out again though. (And no, it's not the models I used; I've tried again now and then with the new, supposedly perfect-for-programming models, same results.) The only edge case where they're genuinely useful (to me at least) is simple tasks I have some general knowledge of (enough to double-check the LM's work) but no interest in learning beyond what I already know. Which does occur here and there, but rarely.
For everything else programming-related, it's flat out shit. I do not believe they are a time saver for even moderately difficult programs. By the time you've run around in enough circles, explaining "now, this does not do what you say it does", "that's the same wrong answer you gave me two responses ago", "you have hallucinated that function", and found out the framework in use dropped that general structure in version 5, you may as well do it yourself, and actually learn how to do it at the same time.
For work, I eventually found that it took me longer to describe the business logic (and do the above dance) than to just... do the work. I also have more confidence in code I wrote myself, and I understand it completely.
In terms of programming aids, a linter, a formatter, and an LSP are, IMHO, a million times more useful than any LM.