arendjr

joined 2 years ago
[–] arendjr@programming.dev 1 points 3 weeks ago (1 children)

Saying that people are corruptible doesn’t imply they are corrupt. Thankfully we live finite lives and plenty of us can make it to the end before we corrupt ourselves.

Given the right luck they could only mirror the elite, not change their structure.

This is quite literally pretending the Age of Enlightenment never existed. We can change structures and have throughout history.

[–] arendjr@programming.dev 2 points 1 month ago (1 children)

5% at the end of the decade is quite a pessimistic take 😉

Looking at the graph, 1% was crossed mid/late 2021, while 2% was crossed mid 2024, almost 3 years later. Now 3% has been crossed a little more than a year after that. Next year we're likely to cross 4%, and 5% should follow no later than 2027, even if growth doesn't speed up much further.

[–] arendjr@programming.dev 2 points 1 month ago (1 children)

I’m not sure I agree with the “no one claimed” part, because I think the proof is specifically targeting the claim that it is more likely than not that we are living in a simulation due to the “ease of scaling” if simulated realities are a thing. Which I think is one of the core premises of simulation theory.

In any case, I don’t think the reasoning only applies to “full scale” simulations. After all, let’s follow the thought experiment and presume that quantum mechanics is indeed the result of some kind of “lazy evaluation” optimisation within a simulation. Unless you want to argue solipsism in addition to simulation theory, the simulation is still generating perceptions for every single conscious actor within it, and it therefore still needs to implement some kind of “theory of everything” to ensure all perceptions across actors are generated consistently.

And ultimately, we still end up with the requirement that there is some kind of “higher order” universe whose existence is fundamentally unknowable and beyond our understanding. Presuming that such a universe exists and manages our universe seems to me to be a masked belief in creationism and therefore God, while trying very hard to avoid such words.

The irony is that the thought experiment started with “pesky weird behaviours” that we can’t explain. Making the assumption that our “parent universe” is somehow easier to explain is really just wishful thinking that’s as rational as wishing a God to be responsible for it all.

I’ll be straight here: I’m a deist, I do think that given sufficient thought on these matters, we must ultimately admit there is a deity, a higher power that we cannot understand. We may as well call it God, because even though it’s not a religious idea of God, it is fundamentally beyond our capacity to understand. I just think simulation theory is a bit of a roundabout way to get there as there are easier ways to reach the same conclusion :)

[–] arendjr@programming.dev 4 points 1 month ago

The original quote is a horrible take, trying to make people with suicidal thoughts feel guilty, as if they don’t feel shit enough already.

The thing is, suicide is only an out if the pain is beyond a single person’s capacity to bear. Yes, the act may inflict pain upon others, but generally not to the same degree, or most suicides would set off a chain reaction. So chill out and don’t try to blame those who are already feeling down.

Of course none of this is an endorsement of suicide. If you’re having these thoughts, please find help. Here in the Netherlands you can call 113, and other countries might have their own support lines. Hell, shoot me a DM if you feel the need.

I had a friend commit suicide many years ago. It’s not an experience you wish upon anyone, and I’m talking about the experience of both the suicidal and the survivors. But I do believe a large part of the pain can be prevented by sharing your thoughts, so that you can get out of the negative spiral you may be reinforcing on yourself. So talk to someone. Anyone.

[–] arendjr@programming.dev 8 points 1 month ago* (last edited 1 month ago) (4 children)

It’s possible yes, but the nice thing is that we know we are not merely talking about “advanced people with vastly superior technology” here. The proof implies that technology within our own universe would never be able to simulate our own universe, no matter how advanced or superior.

So if our universe is a “simulation” at least it wouldn’t be an algorithmic one that fits our understanding. Indeed we still cannot rule out that our universe exists within another, but such a universe would need a higher order reality with truths that are fundamentally beyond our understanding. Sure, you could call it a “simulation” still, but if it doesn’t fit our understanding of a simulation it might as well be called “God” or “spirituality”, because the truth is, we wouldn’t understand a thing of it, and we might as well acknowledge that.

[–] arendjr@programming.dev 1 points 1 month ago (1 children)

Okay, I’ll spend one last reply on this, because I don’t appreciate getting a strawman assigned to me. I didn’t say “every character’s expressed desires being instantly granted” is the main thing making fiction interesting. I said that seeing actions play out that you normally don’t see is what makes it interesting. That’s quite a different thing.

And no, I still don’t think it’s a major plot point. It’s a plot point, yes, but the movies also left it out without real impact to the plot. That’s not a major plot point to me.

[–] arendjr@programming.dev 0 points 1 month ago* (last edited 1 month ago) (3 children)

that’s a reason to have Beverly suggest it. Not a reason to have it actually happen.

Sorry, but that's just silly. If it were brought up as a suggestion that didn't happen, that would be even weirder than it actually happening. As a writer, you don't go around finding reasons to block your characters' ideas: that's a horribly anti-climactic thing to do, and it teases your readers for no purpose. Worst of all, you don't get to see how the action pans out if it does happen, which is the primary thing that makes fiction interesting to begin with.

And no, not every action needs foreshadowing either. In the grand scheme of things, this whole scene that people fuss about isn't a major plot point in the book. I read the book twice (though even the second time was a while ago), and I had pretty much forgotten about it, until I saw people complaining about it. But it still seems as if you think King has some moral obligation to guard and guide the actions of his characters. He doesn't, and thankfully he doesn't, because his books are more interesting for it.

[–] arendjr@programming.dev 2 points 1 month ago* (last edited 1 month ago) (5 children)

As a writer, I disagree. Writers often write thinking from the perspective of their characters. If something makes sense from the character’s perspective, they’ll write it. It’s not an endorsement by the writer, it just makes for a natural and believable progression and that’s why the book is better for it.

I can bet you King never decided that he should include such a scene because it would make the book better. He did it because he was writing from her perspective, and it popped into his mind as something that made sense for her to do.

It’s not a fantasy, not an endorsement, and not a post-rationalisation either. And knowing his writing style, upon reflection he probably felt it belonged for shock value alone. Writers do have a knack for pushing boundaries, and he’s certainly got a taste for it.

[–] arendjr@programming.dev 4 points 1 month ago

I think there is an objective good. That goodness is Life itself. So long as we treat all Life with respect and try to live a life of balance, that makes us good. You are right though that this is still a very simplified view, and what it means to "live in balance" can depend on the situation or environment. But it's a starting point at least.

As for forgiveness, it's a choice. If someone makes an honest mistake, it should be easy to forgive them, as whatever harm they caused was not intentional. But if someone makes a wilful mistake, it will be harder to forgive them. And yet, because forgiveness is a choice, we can look at the reasons why someone acted in a manner that was harmful, and still decide to forgive them, especially if they repent.

As for consequences, those are results of our actions, whether intentional or unintentional. They are not strictly related to the concept of forgiveness, but generally speaking, we find it easy to forgive someone if their actions are harmless, or if the consequences don't affect us personally. But if someone's actions do affect us, we find it harder to forgive, regardless of whether something was an honest mistake or not. But the key to forgiveness, in my opinion, is that we need to look beyond the consequences and look beyond how we were personally affected. Forgiveness is a choice, and that choice is easier to make if our emotion is not muddied by consequence.

[–] arendjr@programming.dev 1 points 1 month ago

Silver actually interacts horribly with some foods and ruins their flavour. There’s a reason why silver cups often have gold plating on the inside: it keeps the silver from ruining the taste of the wine.

I’d stick with the steel any time.

[–] arendjr@programming.dev 8 points 3 months ago

I dunno, I have a Framework laptop and had a keyboard issue with it. It still worked, but one of the keys didn’t register well. So they sent me a new keyboard and I sent them back the old one after I’d swapped it. Not a single day was I without my laptop, which is more than you can say for most other laptop brands and the support you get (or don’t) with them. No buyer’s remorse here.

[–] arendjr@programming.dev 7 points 3 months ago

Captchas are getting out of hand.

 

I just had a random thought: a common pattern in Rust is to do things such as:

let vec_a: Vec<String> = /* ... */;
let vec_b: Vec<String> = vec_a.into_iter().filter(some_filter).collect();

Usually, we need to be aware of the fact that Iterator::collect() allocates for the container we are collecting into. But in the snippet above, we've consumed a container of the same type. And since Rust has full ownership of the vector, in theory the memory allocated by vec_a could be reused to store the collected results of vec_b, meaning everything could be done in-place and no additional allocation is necessary.

It's a highly specific optimization though, so I wonder if such a thing has been implemented in the Rust standard library or compiler. Does anybody have an idea about this?
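As a sketch of how one might probe this empirically: compare the buffer pointer before and after the collect. Whether the allocation is actually reused is an implementation detail of the standard library (its in-place iteration specialization), not a language guarantee, so treat the pointer comparison as diagnostic only. All names below are illustrative:

```rust
// Filters a vector in a consuming pipeline; because `into_iter()` takes
// ownership, the standard library is free to reuse the buffer when
// collecting back into a `Vec<String>`.
fn filter_consuming(vec_a: Vec<String>) -> Vec<String> {
    vec_a.into_iter().filter(|s| !s.starts_with('b')).collect()
}

fn main() {
    let vec_a: Vec<String> = ["apple", "banana", "cherry"]
        .into_iter()
        .map(String::from)
        .collect();
    let ptr_before = vec_a.as_ptr();

    let vec_b = filter_consuming(vec_a);

    // Whether this prints `true` depends on the standard library's
    // in-place specialization kicking in; it is not guaranteed.
    println!("allocation reused: {}", ptr_before == vec_b.as_ptr());
    println!("{vec_b:?}");
}
```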

247
submitted 2 years ago* (last edited 2 years ago) by arendjr@programming.dev to c/rust@programming.dev
 

Slide with text: “Rust teams at Google are as productive as ones using Go, and more than twice as productive as teams using C++.”

In small print it says the data is collected over 2022 and 2023.

 

I have a fun one, where the compiler says I have an unused lifetime parameter, except it's clearly used. It feels almost like a compiler bug, though I'm probably overlooking something? Who can see the mistake?

main.rs

trait Context<'a> {
    fn name(&'a self) -> &'a str;
}

type Func<'a, C: Context<'a>> = dyn Fn(C);

pub struct BuiltInFunction<'a, C: Context<'a>> {
    pub(crate) func: Box<Func<'a, C>>,
}
error[E0392]: parameter `'a` is never used
 --> src/main.rs:7:28
  |
7 | pub struct BuiltInFunction<'a, C: Context<'a>> {
  |                            ^^ unused parameter
  |
  = help: consider removing `'a`, referring to it in a field, or using a marker such as `PhantomData`

For more information about this error, try `rustc --explain E0392`.
error: could not compile `lifetime-test` (bin "lifetime-test") due to 1 previous error
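For anyone who wants to check their answer: a lifetime that appears only in bounds doesn't count as "used", and the type alias expands before the struct is checked, so `'a` vanishes from the field type entirely. A minimal sketch of the usual `PhantomData` workaround (with `MyContext` as an illustrative stand-in):

```rust
use std::marker::PhantomData;

trait Context<'a> {
    fn name(&'a self) -> &'a str;
}

// Bounds on type-alias parameters are not enforced; the alias simply
// expands to `dyn Fn(C)`, in which `'a` no longer appears.
type Func<'a, C: Context<'a>> = dyn Fn(C);

pub struct BuiltInFunction<'a, C: Context<'a>> {
    pub(crate) func: Box<Func<'a, C>>,
    // `'a` otherwise only appears in a bound, which doesn't count as a
    // "use"; anchoring it in a zero-sized field satisfies E0392.
    _marker: PhantomData<&'a ()>,
}

struct MyContext {
    name: String,
}

impl<'a> Context<'a> for MyContext {
    fn name(&'a self) -> &'a str {
        &self.name
    }
}

fn main() {
    let f: BuiltInFunction<'_, MyContext> = BuiltInFunction {
        func: Box::new(|ctx: MyContext| println!("hello, {}", ctx.name())),
        _marker: PhantomData,
    };
    (f.func)(MyContext { name: "world".into() });
}
```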
 

As part of my Sudoku Pi project, I’ve been experimenting with improving the Bevy UI experience. I’ve collected most of my thoughts on this topic in this post.

 

I wrote a post about how our Operational Transformation (OT) algorithm works at Fiberplane. OT is an algorithm that enables real-time collaboration, and I designed and built our implementation. So if you have any questions, I'd be happy to answer them!
