arendjr

joined 8 months ago
[–] arendjr@programming.dev 5 points 4 months ago (1 children)

Yeah, sorting is definitely a common use case, but note it didn’t improve every sorting use case either. Anyway, even if I’m a bit skeptical, I trust that the Rust team doesn’t take these decisions lightly.

But the thing that led to my original question was: if the compiler itself uses the std sorting internally, there’s additional reason to hope it might have transitive performance benefits. So even if compiling the Rust compiler with this PR was actually slower, compiling again with the resulting compiler could be faster, since that compiler itself benefits from the faster sorting. So yeah, fingers crossed 🤞

[–] arendjr@programming.dev 7 points 4 months ago (3 children)

Yeah, it was the first line of the linked PR:

This PR replaces the sort implementations with tailor-made ones that strike a balance of run-time, compile-time and binary-size, yielding run-time and compile-time improvements.

It was also repeated a few paragraphs later that the motivation for the changes was both runtime and compile-time improvements. So I’m a little bit bummed to hear the compile-time impact apparently wasn’t as good as the authors hoped. I’m not even sure I fully endorse the tradeoff, because it seems the gains, while major, only affect very select use cases, while the regressions affect everyone and hurt in an area that is already perceived as a pain point. But oh well, the total regression is still minor, so I guess we’ll live with it.

[–] arendjr@programming.dev 5 points 4 months ago (5 children)

The post mentioned that the introduction of these new algorithms brings compile-time improvements too, so how should I see this? I assumed it meant that compiling applications that use sorting would speed up, but that seems like a meaningless improvement if overall compilation times have regressed. Or do you mean compiling the compiler has become slower?

[–] arendjr@programming.dev 8 points 4 months ago (7 children)

Does the Rust compiler use the std sort algorithms, or does it already use specialized ones? If the former, it would be a great side effect if the compiler itself receives additional speedups because of this.
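
For concreteness, the std sorts I mean are the ones behind the standard slice methods; as far as I understand the PR, it replaced the implementations underneath both of these (the snippet is just illustrative):

fn main() {
    let mut v = vec![3, 1, 2];
    v.sort();          // stable sort; per the linked analysis this is now driftsort
    v.sort_unstable(); // unstable sort; its implementation was replaced too
    assert_eq!(v, [1, 2, 3]);
}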

[–] arendjr@programming.dev 7 points 4 months ago* (last edited 4 months ago)

From what I understand from skimming the stable sort analysis (https://github.com/Voultapher/sort-research-rs/blob/main/writeup/driftsort_introduction/text.md), it lost out against driftsort.

[–] arendjr@programming.dev 1 points 5 months ago

For a little bit I thought this library might be a subtle joke, seeing the #define _SHITPRESS_H at the start. That, combined with compress() and decompress() taking no arguments and returning no value, made me think we were being played. Not to mention the library appears to be plain C rather than C++… surely the author should know the difference?

Then I saw how the interface actually works:

// interface for the library user, implement these in your program:
unsigned int SPR_in(); // Return next byte from input or value > 255 on EOF.
void SPR_out(unsigned char); // Output byte.

This seems extremely poorly thought out. Calling into global functions for input and output means that your library will be a pain to use in any program that has to (de)compress anything more than a single input.
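
For contrast, a context-passing interface sidesteps this. Here’s a minimal sketch of that style (in Rust rather than C, with made-up names, purely to show the shape; this is not what the library actually offers):

use std::io::{Read, Result, Write};

// The caller hands the streams in, instead of the library calling back
// into global SPR_in()/SPR_out() functions.
fn compress(input: &mut impl Read, output: &mut impl Write) -> Result<()> {
    let mut buf = [0u8; 4096];
    loop {
        let n = input.read(&mut buf)?;
        if n == 0 {
            break; // ordinary EOF instead of a "value > 255" sentinel
        }
        // Real compression would happen here; this sketch just copies bytes.
        output.write_all(&buf[..n])?;
    }
    Ok(())
}

fn main() -> Result<()> {
    // Two independent streams, no shared global state to juggle.
    let (mut out1, mut out2) = (Vec::new(), Vec::new());
    compress(&mut &b"first input"[..], &mut out1)?;
    compress(&mut &b"second input"[..], &mut out2)?;
    Ok(())
}

With globals, running those two compressions means swapping out whatever state sits behind SPR_in()/SPR_out() between calls.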

[–] arendjr@programming.dev 18 points 5 months ago (1 children)

The System76 scheduler helps to tune for better desktop responsiveness under high load: https://github.com/pop-os/system76-scheduler. If you use Pop!_OS, I think this may be set up out of the box.

[–] arendjr@programming.dev 3 points 6 months ago

I greatly fear refactoring in Rust. Making a single conceptual change can require a huge number of code changes in practice, especially if it’s a change to ownership.

I think this is a fair criticism, but I think you and the poster you responded to are talking about different things. Usually when people use the term “fearless” in relation to Rust, it means the language provides a high level of confidence that what you’re delivering is free of mistakes. This certainly applies to refactoring too: once you’re done and things compile again, you can be reasonably confident things work as before (assuming you didn’t make other changes the type system can’t catch for you).

The kind of fear you are describing sounds more like a discouragement because of the amount of work ahead of you. That’s fair, because Rust does sometimes make things harder. I just think many Rust developers will disagree with you, not because you’re wrong, but because they may not feel the same type of discouragement, possibly because they’ve learned to appreciate the rewards more.
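
To make the ownership point concrete, here’s a deliberately tiny, made-up example. Switching a struct from borrowing to owning its data removes a lifetime parameter, and that one conceptual change forces edits to every signature and caller that mentioned it:

// Before: the struct borrows, so the lifetime parameter spreads into
// every type, impl, and function signature that touches Config.
struct Config<'a> {
    name: &'a str,
}

fn describe(config: &Config<'_>) -> String {
    format!("config: {}", config.name)
}

// After: the struct owns its data. The lifetime is gone, but every
// place that spelled out Config<'a> has to change along with it.
struct OwnedConfig {
    name: String,
}

fn describe_owned(config: &OwnedConfig) -> String {
    format!("config: {}", config.name)
}

fn main() {
    let name = String::from("example");
    println!("{}", describe(&Config { name: &name }));
    println!("{}", describe_owned(&OwnedConfig { name }));
}

Multiply that over a real codebase and you get the “huge number of code changes” being described.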

[–] arendjr@programming.dev 3 points 7 months ago (2 children)

It's a fair concern, but if there's any consolation, we're experiencing quite the influx of new contributors lately, and the maintainer team has grown every month for the last couple of months. There's a lot of steam coming our way, so I don't think we'll run out just yet :)

[–] arendjr@programming.dev 3 points 7 months ago (4 children)

Surely it would be easier to leave that to the TypeScript devs and just focus on linting and formatting, no?

Almost nobody uses the TypeScript compiler for transpilation. I think most people nowadays use either Esbuild or SWC for that. The advantage that Biome has is that we already have the parsing and the serialization infrastructure and can add features like that with relative ease. For users that means fewer dependencies, less configuration, and less room for error.

[–] arendjr@programming.dev 9 points 7 months ago (1 children)

A couple of weeks ago there was a post here calling for more content to be posted in this sub, so I figured you might appreciate the content. As a project, Biome is also helping a lot of web developers become interested in Rust, since many of our contributors make their first Rust contributions there.
