this post was submitted on 15 Apr 2026
-48 points (23.3% liked)

[–] brynden_rivers_esq@lemmy.ca 17 points 5 days ago (2 children)

I'm not a coder, so I can't speak to the quality of code generated by these models. I am a lawyer, and every time I see stuff that lay people think is impressive in my field, I can't help but guffaw and think "none of this is going to function, and no one will know for years. We're so fucked...and then one day we'll have to clean all this up and it's gonna be so much work." I kind of assume it'll be similar for code? It'll obviously be somewhat better, because there is a lot of testing you can actually do, whereas in law "testing" takes many years...and by the time you find out something doesn't work, the burden of having done it wrong all this time, thinking it was right, is catastrophic (which is why lawyers are so conservative about language they "know works").

I can see how little features can get added and these tools can deliver on those projects fast...but like...can they do bigger things with consistency? Can they, like...set things up well? I'm not saying it's impossible, but...I guess I'm thinking about Go. It took a long time for neural networks to get good at 19 x 19. They got good at 9 x 9 pretty fast. But as the game gets more complicated, it's way WAY harder to do good long-term strategy. And the machines got there, no doubt. But the entire universe of Go is a 19 x 19 grid, on which each space is black, white, or empty. How much more complicated is a language? Even a programming language? Infinitely more complex, of course!

So I worry that we're going to have individual features that work well, but systems that cannot function...looking like the, uhhh...Weasley house in Harry Potter...but without the magic to hold it up lol.

[–] phutatorius@lemmy.zip 4 points 4 days ago* (last edited 4 days ago)

> I kind of assume it'll be similar for code?

Yes.

> Like…it'll obviously be somewhat better because there is a lot of testing you can actually do

If your code can cost someone their life savings, or get them maimed or killed, there's even more testing to do when using an LLM, since there's no demonstrable rationale behind the code it recommends.
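To make that concrete, here's a minimal sketch (the function and its behavior are hypothetical, not from any real codebase) of the kind of edge-case testing you'd want before trusting generated code in a money-handling path:

```python
from decimal import Decimal

def withdraw(balance: Decimal, amount: Decimal) -> Decimal:
    """Hypothetical LLM-suggested helper: deduct a withdrawal from a balance."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# The cases an LLM can't argue its way out of: rounding, zero, and overdraft.
assert withdraw(Decimal("100.00"), Decimal("99.99")) == Decimal("0.01")
try:
    withdraw(Decimal("10.00"), Decimal("10.01"))
    raise AssertionError("overdraft should have been rejected")
except ValueError:
    pass  # correctly refused
```

Using Decimal rather than float matters here: binary floats can't represent amounts like 0.01 exactly, which is exactly the kind of detail generated code often gets wrong.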

I've been coding for a very long time. Now I'm mainly in software tech management, but I still code (proofs of concept, new visualizations, that sort of thing). In the field I'm in, we've put in a lot of effort to assess the value of large language models (LLMs) to assist in our coding. We're in a highly technical field. Because our use cases are not common, and some of our requirements are extreme, there are no good code examples to train an LLM on. Consequently, we have found that the LLM's recommendations in those cases are worthless time-wasting crap.

If you're doing something in a well-known language, in a well-known framework, with non-safety-critical requirements and with volumes, response times and reliability within moderate bounds, the training set will be much bigger and you'll probably have better luck with LLMs. But that means you could also just do a web search or look on something like StackOverflow.

We do have active machine learning (ML) efforts underway, and some of those look very promising for certain tricky problems within our domain. But ML is a whole different kettle of fish than LLMs.

Your observations on Go are about the size of the game's state space: each of the 19 × 19 points can be black, white, or empty, with further constraints reflecting which combinations are legal and the transition rules from one position to the next. Exponential growth like that gets big really damn fast. Some problems are intrinsically intractable, and AI won't help with those, though quantum computing might in at least some cases.
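A back-of-the-envelope sketch of that growth, treating each board point as independently empty, black, or white (a loose upper bound that ignores capture and legality rules):

```python
# Loose upper bound on Go board configurations: 3 states per point,
# ignoring which positions are actually legal under the rules.
small = 3 ** (9 * 9)     # 9x9 board
full = 3 ** (19 * 19)    # 19x19 board

print(len(str(small)))   # 39 digits
print(len(str(full)))    # 173 digits
```

Going from a 9x9 to a 19x19 board doesn't quadruple the state space; it raises the exponent from 81 to 361, which is why long-term strategy got so much harder to learn.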