arendjr

joined 2 years ago
[–] arendjr@programming.dev 2 points 3 months ago* (last edited 3 months ago) (2 children)

You know, as a full-time Linux user, I think I’d rather have game developers continue to create Windows executables.

Unlike most software, games have a tendency to be released, then supported for one or two years, and then abandoned. But meanwhile, operating systems and libraries move on.

If you have a native Linux build of a game from 10 years ago, good luck trying to run it on your modern system. With Windows builds, using Wine or Proton, you actually have better chances running games from 10 or even 20 years ago.

Meanwhile, thanks to Valve’s efforts, Windows builds have an incentive to target Vulkan, and they’re getting tested on Linux. That’s what we should focus on IMO, because those things make games better supported on Linux. Which platform the binary is compiled for is an implementation detail… and Win32 is actually the more stable target.

[–] arendjr@programming.dev 4 points 4 months ago (1 children)

tsc is (very) slow and there are also no convenient ways to interact with it from Rust.

So it saves a lot of development and CI time to roll our own. The downside is that our inference still isn’t as good as tsc’s, of course, but we’re hopeful the community can help us get very close at least.

[–] arendjr@programming.dev 4 points 4 months ago (2 children)

Heh, I agree with everything you said, but I’m afraid such a framework is impossible to create, let alone implement. It’s impossible to foresee the infinite possibilities for people to screw themselves through bad decisions, so all you’d create is a lot of bureaucracy to still end up in the same place.

[–] arendjr@programming.dev 3 points 4 months ago (1 children)

That’s still a very major achievement! Do I understand correctly this means all target architectures supported by GCC are now unlocked for Rust too?

[–] arendjr@programming.dev 6 points 4 months ago

It’s that the compiler doesn’t help you with preventing race conditions. This makes some problems so hard to solve in C that C programmers simply stay away from attempting them, because they fear the complexity involved.

It’s a variation of the same theme: Maybe a C programmer could do it too, given infinite time and skill. But in practice it’s often not feasible.
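
To give a rough idea of what that compiler help looks like on the Rust side (a minimal sketch of my own, not something from the discussion): sharing mutable state across threads only compiles once it sits behind a synchronization primitive such as `Arc<Mutex<_>>`.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared, mutable state has to be wrapped so the compiler can prove
    // it is safe to touch from multiple threads.
    let data = Arc::new(Mutex::new(Vec::new()));

    let handles: Vec<_> = (0..4)
        .map(|i| {
            let data = Arc::clone(&data);
            thread::spawn(move || {
                // The lock is the only way in; an unsynchronized write
                // simply wouldn't compile.
                data.lock().unwrap().push(i);
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    println!("{:?}", data.lock().unwrap());
}
```

Drop the `Mutex` or the `Arc` and the compiler rejects the program instead of letting the race ship.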

[–] arendjr@programming.dev 6 points 4 months ago

Which one should I pick then, that is both as fast as the std solutions in the other languages and as reusable for arbitrary use cases?

Because it sounds like your initial pick made you lose the machine efficiency argument, and you can’t have it both ways.

[–] arendjr@programming.dev 5 points 4 months ago (3 children)

I’m not saying you can’t, but it’s a lot more work to use such solutions, to say nothing of their quality compared to the std solutions in other languages.

And it’s also just one example. If we bring multi-threading into it, we’re opening another can of worms where C doesn’t particularly shine.

[–] arendjr@programming.dev 6 points 4 months ago (2 children)

Well, let’s be real: many C programs don’t want to rely on Glib, and licensing (as the other reply mentioned) is only one reason. Glib is not exactly known for high performance, and is significantly slower than the alternatives supported by the other languages I mentioned.

[–] arendjr@programming.dev 29 points 4 months ago (15 children)

I would argue that because C is so hard to program in, even the claim to machine efficiency is arguable. Yes, if you have infinite time for implementation, then C is among the most efficient, but then the same applies to C++, Rust and Zig too, because with infinite time any artificial hurdle can be cleared by the programmer.

In practice, however, programmers have limited time. That means they need to use the tools of the language to save themselves time. Languages with higher levels of abstraction make it easier, not harder, to reach high performance, assuming the abstractions don’t introduce too much overhead. C++, Rust and Zig all fall into this category.

An example is the situation where you need a hash map or B-tree map to implement efficient lookups. The languages with higher abstraction give you reusable, high-performance options. The C programmer will need to either roll his own, which may not be an option if time is limited, or choose a lower-performance alternative.
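
For the record, this is the kind of thing I mean by reusable, high-performance options; in Rust, for example, both live in the standard library (the sketch below is just illustrative):

```rust
use std::collections::{BTreeMap, HashMap};

fn main() {
    // Hash map: amortized O(1) lookups, ready to use out of the box.
    let mut scores: HashMap<&str, u32> = HashMap::new();
    scores.insert("alice", 10);
    scores.insert("bob", 7);
    assert_eq!(scores.get("alice"), Some(&10));

    // B-tree map: ordered keys with O(log n) lookups, also in std.
    let ordered: BTreeMap<&str, u32> = scores.into_iter().collect();
    assert_eq!(ordered.keys().next(), Some(&"alice"));
}
```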

[–] arendjr@programming.dev 1 points 4 months ago (1 children)

Of course, but it needn’t be black and white. You can also diversify, make yourself less reliant on a single platform. And by doing so, enable your audience to follow you elsewhere. Or diversify into different activities altogether. And when it’s no longer half your income on the line, then switch.

But doing nothing and saying, “but half my income!”? That’s not only a choice, but also complacency.

[–] arendjr@programming.dev 7 points 4 months ago (3 children)

Great points, except:

> People can’t leave for anything smaller.

They can and some do. It’s still a choice.

[–] arendjr@programming.dev 1 points 4 months ago

Ah yes, then we are in agreement. I thought we were talking about unintentionally arriving at the same implementation after looking at the original, which is where the discussion started.

 

I just had a random thought: a common pattern in Rust is to do things such as:

let vec_a: Vec<String> = /* ... */;
let vec_b: Vec<String> = vec_a.into_iter().filter(some_filter).collect();

Usually, we need to be aware of the fact that Iterator::collect() allocates for the container we are collecting into. But in the snippet above, we've consumed a container of the same type. And since Rust has full ownership of the vector, in theory the memory allocated by vec_a could be reused to store the collected results of vec_b, meaning everything could be done in-place and no additional allocation is necessary.

It's a highly specific optimization though, so I wonder if such a thing has been implemented in the Rust compiler. Anybody who has an idea about this?

247 points, submitted 2 years ago* (last edited 2 years ago) by arendjr@programming.dev to c/rust@programming.dev
 

Slide with text: “Rust teams at Google are as productive as ones using Go, and more than twice as productive as teams using C++.”

In small print it says the data is collected over 2022 and 2023.

 

I have a fun one, where the compiler says I have an unused lifetime parameter, except it’s clearly used. It almost feels like a compiler bug, though I’m probably overlooking something. Who can see the mistake?

main.rs

trait Context<'a> {
    fn name(&'a self) -> &'a str;
}

type Func<'a, C: Context<'a>> = dyn Fn(C);

pub struct BuiltInFunction<'a, C: Context<'a>> {
    pub(crate) func: Box<Func<'a, C>>,
}
error[E0392]: parameter `'a` is never used
 --> src/main.rs:7:28
  |
7 | pub struct BuiltInFunction<'a, C: Context<'a>> {
  |                            ^^ unused parameter
  |
  = help: consider removing `'a`, referring to it in a field, or using a marker such as `PhantomData`

For more information about this error, try `rustc --explain E0392`.
error: could not compile `lifetime-test` (bin "lifetime-test") due to 1 previous error
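
For what it’s worth, the workaround the help text hints at would look roughly like this (my own sketch; the point is that a lifetime appearing only in a generic bound doesn’t count as “used”, so a marker field has to reference `'a` directly):

```rust
use std::marker::PhantomData;

trait Context<'a> {
    fn name(&'a self) -> &'a str;
}

type Func<'a, C: Context<'a>> = dyn Fn(C);

pub struct BuiltInFunction<'a, C: Context<'a>> {
    pub(crate) func: Box<Func<'a, C>>,
    // Referencing `'a` in a field is what satisfies E0392.
    _marker: PhantomData<&'a ()>,
}

fn main() {}
```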
 

As part of my Sudoku Pi project, I’ve been experimenting with improving the Bevy UI experience. I’ve collected most of my thoughts on this topic in this post.

 

I wrote a post about how our Operational Transformation (OT) algorithm works at Fiberplane. OT is an algorithm that enables real-time collaboration, and I designed and built our implementation. So if you have any questions, I'd be happy to answer them!
