jasory

joined 2 years ago
[–] jasory@programming.dev 2 points 4 days ago

Pivoting away from number theory and trying to learn molecular dynamics. I still have a large number theory research library that I can work on, but now I need to find a research topic in MD.

[–] jasory@programming.dev 1 points 4 days ago (1 children)

Sure, but is there an actual reason to be switching?

Uutils doesn't seem to be an evolution of coreutils, but a functional clone. What advantage do we get from that?

Note: I have pull requests against uutils so I'm by no means anti-Rust or against the project. But I personally would not replace coreutils with it.

[–] jasory@programming.dev 2 points 4 days ago (1 children)

Why at the same time? Can't it be done over a week?

[–] jasory@programming.dev 13 points 4 days ago

Or maybe the resignation was a wake-up call that resulted in systemic changes.

I have no idea what the actual case is, but there are often multiple possible causes.

[–] jasory@programming.dev 1 points 4 days ago

The thing about all these conspiracy theories about false-flag killings is: why would you choose such an incredibly risky method, one where the target could just as easily be killed by accident if you got the windage wrong?

The reality is that it's vastly more likely they were trying to kill ICE employees, but since they were blindly firing at a van, they only hit detainees.

[–] jasory@programming.dev -5 points 1 week ago (2 children)

Or get shot to death, when an anti-ICE activist fires on your transport.

[–] jasory@programming.dev 6 points 1 week ago

Israel was going for a death toll; they said so explicitly themselves.

AI "errors" had nothing to do with the outcome in Gaza. The IDF would have used another sloppier metric for targeting, they flat out don't care as long as they still get money and US troops defending them.

[–] jasory@programming.dev 1 points 1 week ago

Physics modeling is arguably the most important task of computers. It was the original impetus for building them: artillery calculations in WW2.

All engineering modeling uses physics modeling, almost always via linear algebra (which involves large summations). Nuclear medicine is physics, weather forecasting is physics, molecular dynamics and computational chemistry are physics.

Physics modeling is the backbone of modern technology; that's why so much research has gone into doing it efficiently and accurately.
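
As a rough illustration of why "linear algebra" means "large summations": in a dense matrix-vector product every output entry is itself a long running sum, so even a modest n x n model step performs on the order of n^2 additions. A minimal sketch (the names here are made up for the example):

```rust
/// Toy dense matrix-vector product: each output entry y[i] is a sum of n
/// products, so one n x n multiply already performs roughly n^2 additions.
fn matvec(a: &[Vec<f64>], x: &[f64]) -> Vec<f64> {
    a.iter()
        .map(|row| row.iter().zip(x).map(|(aij, xj)| aij * xj).sum())
        .collect()
}

fn main() {
    let a = vec![vec![2.0, 0.0], vec![1.0, 3.0]];
    let x = vec![1.0, 4.0];
    println!("{:?}", matvec(&a, &x)); // [2.0, 13.0]
}
```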

[–] jasory@programming.dev 2 points 1 week ago

Their articles aren't that deep and they mostly focus on similar topics.

I think it's perfectly possible for someone to have a backlog of work/experience that they are just now writing about.

If it were AI spam, I would expect many disparate topics, each covered at a depth slightly greater than a typical blog post but clearly not expert-level. The user page shows the latter, but not the former.

However, the Rubik's cube article does seem abnormal. The phrasing and superficiality make it seem computer-generated; a real Rubik's aficionado would have spent some time on how they cube.

Of course I say this as someone much more into mathematics than "normal" software engineering. So maybe their writing on those topics is abnormal.

[–] jasory@programming.dev 4 points 1 week ago (4 children)

You can use Kahan summation to mitigate floating point errors. A mere 100 thousand floating point operations is a non-issue.

As a heads up, computational physics and mathematics tackle problems trillions of times larger than any financial computation; that's where tons of algorithms for handling floating point error have been developed. In fact, essentially any large-scale computation specifically accounts for it.
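
For reference, here's a minimal sketch of Kahan (compensated) summation in Rust, assuming plain f64 inputs; it isn't tied to any particular crate:

```rust
/// Kahan (compensated) summation: carries the rounding error from each
/// addition forward so it isn't silently dropped.
fn kahan_sum(values: &[f64]) -> f64 {
    let mut sum = 0.0_f64;
    let mut compensation = 0.0_f64; // running estimate of the lost low-order bits
    for &x in values {
        let y = x - compensation;     // re-inject the error from the previous step
        let t = sum + y;              // big + small: low-order bits of y may be lost here
        compensation = (t - sum) - y; // recover exactly what was lost
        sum = t;
    }
    sum
}

fn main() {
    // 100,000 additions of 0.1: naive summation drifts, the compensated sum stays tight.
    let values = vec![0.1_f64; 100_000];
    let naive: f64 = values.iter().sum();
    println!("naive: {naive}, kahan: {}", kahan_sum(&values));
}
```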

[–] jasory@programming.dev 2 points 1 month ago

I'm not a software dev but rather a mathematical researcher. I see zero use for it myself, or for designing any advanced or critical systems. LLM coding is like relying on Stack Overflow: if you want to solve a novel or sophisticated problem, relying on it is the wrong approach.

[–] jasory@programming.dev 2 points 2 months ago (1 children)

The commenter more or less admitted that they have no way of knowing that the algorithm is actually correct.

In your first analogy it would be like if text predictors pulled words from a thesaurus instead of a list of common words.

19
submitted 8 months ago* (last edited 8 months ago) by jasory@programming.dev to c/rust@programming.dev
 

I wrote up a port of GNU factor that has a slightly nicer UI than the original and runs in approximately 1/3rd the time for 128-bit integers, on average. This is just a preliminary release; I plan to implement elliptic curve arithmetic and extend it to 192-bit integers to cover all the small integers that CADO-NFS doesn't support.

The factorization algorithm lives in a separate crate that exposes a C API, since fast factorization algorithms are hard to come by.
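
For anyone curious what sits inside a tool like this, here is a toy sketch of Pollard's rho, one of the classic methods a factoring tool typically combines with trial division and ECM. This is only an illustration for small composites, not the crate's actual implementation:

```rust
/// Greatest common divisor by the Euclidean algorithm.
fn gcd(mut a: u64, mut b: u64) -> u64 {
    while b != 0 {
        let t = a % b;
        a = b;
        b = t;
    }
    a
}

/// Pollard's rho: returns a non-trivial factor of n.
/// (Assumes n is composite; a real tool runs a primality test first.)
fn pollard_rho(n: u64) -> u64 {
    if n % 2 == 0 {
        return 2;
    }
    let mut c = 1u64;
    loop {
        // x -> x^2 + c mod n, computed in u128 to avoid overflow
        let f = move |x: u64| ((x as u128 * x as u128 + c as u128) % n as u128) as u64;
        let (mut x, mut y, mut d) = (2u64, 2u64, 1u64);
        while d == 1 {
            x = f(x);    // tortoise
            y = f(f(y)); // hare
            d = gcd(x.abs_diff(y), n);
        }
        if d != n {
            return d; // found a non-trivial factor
        }
        c += 1; // the cycle collapsed onto n itself; retry with a new polynomial
    }
}

fn main() {
    let n: u64 = 10_967_535_067; // 104_723 * 104_729
    let d = pollard_rho(n);
    println!("{n} = {d} * {}", n / d);
}
```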
