There's a proliferation of dynamically and/or softly typed languages. There are very few, if any, truly untyped languages. (POSIX shells come close, though internally they have at least two types, strings and string-arrays, even if the array type isn't directly usable without non-POSIX features.)
Yes. Types are good. Numeric operations have specific hardware behavior that depends on whether you're using floating-point or not. Having exclusively floating-point semantics is wildly wrong for a programming language.
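A minimal sketch of the kind of thing I mean, in TypeScript/JavaScript terms (the specific literals are just illustrative):

    // Every number is an IEEE 754 double, so "integer" arithmetic silently
    // loses precision once values pass 2^53.
    const big = 9007199254740993;            // 2^53 + 1
    console.log(big === 9007199254740992);   // true: the literal can't be represented exactly

    // And the usual binary-fraction rounding applies to decimals, too.
    console.log(0.1 + 0.2 === 0.3);          // false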
I think you're misunderstanding that paragraph. It's specifically explaining how LLMs are not like humans, and one way is that you can't "nurture growth" in them the way you can for a human. That's not analogous to refining your nvim config and habits.
That's just not terribly meaningful, though. Was JavaScript the "best tool" for client-side logic from the death of Flash until the advent of TypeScript? No, it was the only tool.
Even in the original comic, that would have been appropriate, I think.
At one point the user linked to a rust-lang forum thread from 2016-2019 as evidence that Jai has "some of the tools to make the code language agnostic" or something like that. The thread started with a discussion of array-of-struct vs struct-of-array data layouts, which of course has nothing to do with making code "language agnostic." The user also mentioned the coding influencer lunduke multiple times. So I think they are simply misinformed on a lot of points, and I doubt they're in the closed beta for Jai.
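(For anyone who hasn't run into the distinction that thread was actually about, here's a rough TypeScript-flavored sketch; the field names are made up for illustration:)

    // Array-of-structs: each record's fields live together, one object per element.
    type ParticleAoS = { x: number; y: number; mass: number };
    const particlesAoS: ParticleAoS[] = [
      { x: 0, y: 0, mass: 1 },
      { x: 1, y: 2, mass: 3 },
    ];

    // Struct-of-arrays: one flat array per field, which tends to be friendlier
    // to caches and SIMD when a loop only touches one or two fields.
    const particlesSoA = {
      x: new Float64Array([0, 1]),
      y: new Float64Array([0, 2]),
      mass: new Float64Array([1, 3]),
    };

It's a data-layout question, not a portability or "language agnostic" question.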
(I read some of the comments simply because I had the same question you did. And, as it happens, the last post in the forum thread I mentioned was written by me, which was a funny surprise.)
Exactly: those are tight feedback loops. Agents are also capable of reading docs and source code before generating new function calls, so they benefit from both of the solutions I said people benefit from.
As an even more obvious example: students who put wrong answers on tests are "hallucinating" by the definition we apply to LLMs.
making the same mistakes
This is key, and I feel like a lot of people arguing about "hallucinations" don't recognize it. Human memory is extremely fallible; we "hallucinate" wrong information all the time. If you've ever forgotten the name of a method, or whether that method even exists in the API you're using, and started typing it out to see if your autocompleter recognizes it, you've just "hallucinated" in the same way an LLM would. The solution isn't to require programmers to have perfect memory, but to have easily-searchable reference information (e.g. the ability to actually read or search through a class's method signatures) and tight feedback loops (e.g. the autocompleter and other LSP/IDE features).
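To make the analogy concrete, here's a tiny TypeScript sketch, where containsValue is a deliberately invented method name:

    const names = ["ada", "grace", "barbara"];

    // A "hallucinated" method: the compiler / editor LSP flags it immediately,
    // which is exactly the tight feedback loop described above.
    // names.containsValue("ada");
    //       ^ error TS2339: Property 'containsValue' does not exist on type 'string[]'.

    // The method that actually exists:
    console.log(names.includes("ada")); // true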
This seems like it doesn't really answer OP's question, which is specifically about the practical uses or misuses of LLMs, not about whether the "I" in "AI" really stands for "intelligence."
One list, two list, red list, blue list
(I genuinely thought that was where you were going with that for a line or two)
I have managed to mostly avoid needing to code in either language, but my strong inclination is to agree that they are indeed hacks.