Aux

joined 7 months ago
[–] Aux@feddit.uk -3 points 1 month ago

Ahaha! Oh my... American prudes and their perverted views...

[–] Aux@feddit.uk -5 points 1 month ago (2 children)

They are not children, that's the thing.

[–] Aux@feddit.uk 1 points 1 month ago

Did you know that many states in the US have more gun deaths per capita than Ukraine during an active war?

[–] Aux@feddit.uk 0 points 1 month ago

And look at the kind of mess they ended up in!

[–] Aux@feddit.uk 14 points 1 month ago (3 children)

If Harry wasn't a self-entitled prick, there would be no books.

[–] Aux@feddit.uk -3 points 1 month ago

But people can sue you for that, AI can't.

[–] Aux@feddit.uk 3 points 1 month ago (5 children)

They are extremely useful for software development. My personal choice is a locally running qwen3, used through the AI Assistant in JetBrains IDEs (in offline mode). Here is what qwen3 is really good at:

  • Writing unit tests. The result is not necessarily perfect, but it handles test setup and descriptions really well, and these two take the most time. Fixing some broken asserts takes a minute or two.
  • Writing good commit messages based on actual code changes. It's good practice to make atomic commits while working on a task, and coming up with commit messages every 10-30 minutes gets depressing after a while.
  • Generating boilerplate code. You should definitely use templates and code generators, but it's not always possible. Well, Qwen is always there to help!
  • Inline documentation. It usually generates decent XML doc comments based on your function/method code. It's a really helpful starting point for library developers.
  • Auto-complete on steroids. It can complete not only the next "word" but a whole line, or even multiple lines, of code based on your existing code base. It's especially helpful when doing data transformations.
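
The commit-message point above can be sketched outside the IDE too. This is a minimal sketch, not the JetBrains AI Assistant's actual mechanism: it assumes qwen3 is served locally through an OpenAI-compatible endpoint (the URL and model tag below are assumptions matching Ollama's defaults; adapt them to your setup).

```python
import json

ENDPOINT = "http://localhost:11434/v1/chat/completions"  # assumed local server
MODEL = "qwen3"  # assumed model tag

def build_commit_request(diff_text: str) -> dict:
    """Build an OpenAI-style chat payload asking for an atomic commit message."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system",
             "content": "Write a one-line commit message summarising this diff."},
            {"role": "user", "content": diff_text},
        ],
    }

if __name__ == "__main__":
    # Example staged diff; in practice you'd feed in `git diff --staged` output.
    diff = "--- a/util.py\n+++ b/util.py\n+def add(a, b):\n+    return a + b\n"
    payload = build_commit_request(diff)
    # When the local server is running, send it with e.g.
    # requests.post(ENDPOINT, json=payload); here we just show the payload.
    print(json.dumps(payload, indent=2))
```

The point of keeping the diff in the prompt is that the message is grounded in the actual change, not in whatever you half-remember doing half an hour ago.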

What it is not good at:

  • Doing the programming for you. If you ask an LLM to create code from scratch, it's no different from copy-pasting random bullshit from Stack Overflow.
  • Working on slow machines - a good LLM requires at least a high-end desktop GPU like an RTX 5080/5090. If you don't have such a GPU, you'll have to rely on a cloud-based solution, which can cost a lot and raises plenty of questions about privacy, security and compliance.

An LLM is a tool in your arsenal, just like IDEs, CI/CD, test runners, etc., and you need to learn how to use all of these tools effectively. LLMs are really good at detecting patterns, so if you feed them some code and ask them to do something new based on the patterns in it, you'll get great results. But if you ask for random shit, you'll get random shit.

[–] Aux@feddit.uk 3 points 1 month ago (2 children)

Being right just half the time is much better than most people can do.

[–] Aux@feddit.uk 1 points 1 month ago

What do you mean "build"? It's part of the development process.
