sleep_deprived

joined 1 year ago
[–] sleep_deprived@lemmy.world 3 points 2 weeks ago

Reminds me of The Gourmet in Skyrim

[–] sleep_deprived@lemmy.world 21 points 1 month ago (8 children)

If we stop doing business with SpaceX, we immediately demolish most of our capability to reach space - including access to the ISS, at least until Starliner quits failing. Perhaps instead of treating this as a matter of the free market, we should recognize it for what it is - a matter of supreme economic and military importance - and force the Nazi fucker out.

[–] sleep_deprived@lemmy.world 2 points 1 month ago (1 children)

I'd be interested in setting up the highest quality models to run locally, and I don't have the budget for a GPU with anywhere near enough VRAM - but my main server PC has a 7900X, and I could afford to upgrade its RAM. Is it possible, and if so how difficult, to get this stuff running on the CPU? Inference speed isn't a sticking point as long as it's not unusably slow, but I do have access to an OpenAI subscription, so there wouldn't be much point in running lower quality models except as a toy.

[–] sleep_deprived@lemmy.world 5 points 1 month ago

Bevy, cause I'm a sucker for Rust

[–] sleep_deprived@lemmy.world 8 points 2 months ago

Well, they said .NET Framework, and I also wouldn't be surprised if they've more or less wrapped that up - .NET Framework specifically means the old implementation of the CLR, and it's been pretty much superseded by an implementation just called .NET, formerly known as .NET Core (definitely not confusing at all, thanks Microsoft). .NET Framework was only ever written for Windows, hence the need for Mono/Xamarin on other platforms. In contrast, .NET is cross-platform by default.

[–] sleep_deprived@lemmy.world 4 points 4 months ago

I've found it depends a lot on the game. In CP2077, DLSS+frame gen looks great to me with full raytracing enabled. But in The Witcher 3, I found frame gen to cause a lot of artifacts, and in PvP games I wouldn't use regular DLSS/FSR. In general I've found the quality preset in DLSS to be mostly indistinguishable from native on 3440x1440, and I'm excited to try FSR 3 when I get the chance.

[–] sleep_deprived@lemmy.world 2 points 4 months ago

This is seriously wonderful news. DLSS was just head and shoulders above FSR 2 in my experience, so if this comes close it's a huge deal. DLSS is (hopefully was) Nvidia's biggest advantage over AMD in my opinion.

[–] sleep_deprived@lemmy.world 32 points 5 months ago (13 children)

This is a use-after-free, which should be impossible in safe Rust thanks to the borrow checker. The only ways for it to happen are incorrect unsafe code (still possible, but a dramatically reduced code surface to worry about) or a compiler bug. To allocate heap space in safe Rust, you have to use types provided by the language like Box, Rc, Vec, etc. To free that space (in Rust terminology, dropping it by calling drop() or letting it go out of scope) you must own it, and no borrows may be active (i.e. no references to it may exist). Once the variable is dropped it's dead, so accessing it is a compile error, and the compiler/std handles actually freeing the memory.

There are some extra semantics to some of that, but that's pretty much it. These kinds of memory bugs are basically Rust's raison d'être - it's been carefully designed to make most memory bugs impossible without using unsafe. If you'd like more information I'd be happy to provide!
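As a minimal sketch of those rules (the function name here is made up, purely for illustration): dropping a heap allocation is only legal once you own it and no borrows remain, and any use after the drop is rejected at compile time rather than becoming a use-after-free.

```rust
// Hypothetical example demonstrating ownership and drop semantics.
fn sum_then_drop() -> i32 {
    let v = vec![1, 2, 3]; // Vec allocates on the heap; `v` owns that allocation
    let total: i32 = v.iter().sum(); // immutable borrow of `v` ends after this line
    drop(v); // legal: we own `v` and no borrows are active; memory is freed here
    // let x = v[0]; // uncommenting this is a compile error: use of moved value `v`
    total
}

fn main() {
    println!("{}", sum_then_drop()); // prints 6
}
```

Uncommenting that last access doesn't produce a runtime crash or silent corruption like it would in C - the program simply doesn't compile.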

[–] sleep_deprived@lemmy.world 38 points 7 months ago (5 children)

I'm only an armchair physicist, but I believe this isn't possible due to relativity. At the very least, there are cases where two observers can disagree on whether two events occurred simultaneously. Besides all the other relativity weirdness, that alone seems to preclude a truly universal time standard. I would love for someone smarter than me to explain more and/or correct me though!
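To make the simultaneity point concrete (this is the standard special-relativity calculation, nothing specific to this thread): under a Lorentz boost with speed $v$ along $x$, time transforms as

$$t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$

so two events that are simultaneous in one frame ($\Delta t = 0$) but spatially separated ($\Delta x \neq 0$) are separated in time in the boosted frame:

$$\Delta t' = -\frac{\gamma v \,\Delta x}{c^2} \neq 0.$$

Any "universal clock" would have to privilege one frame's notion of "now" over every other observer's.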

[–] sleep_deprived@lemmy.world 6 points 9 months ago (3 children)

The issue is that, in the function passed to reduce, you're adding each object directly to the accumulator rather than to its intended parent. These are the problem lines:

if (index == array.length - 1) {
	accumulator[val] = value;
} else if (!accumulator.hasOwnProperty(val)) {
	accumulator[val] = {}; // update the accumulator object
}

There's no pretty way (that I can think of at least) to do what you want using methods like reduce in vanilla JS, so I'd suggest using a for loop instead - especially if you're new to programming. Something along these lines (not written to be actual code, just to give you an idea):

let curr = settings;
const split = url.split("/");
for (let i = 0; i < split.length; i++) {
    const val = split[i];
    if (i !== split.length - 1) {
        // only create a new level if one doesn't already exist
        if (!curr.hasOwnProperty(val)) {
            curr[val] = {};
        }
        curr = curr[val]; // descend one level
    } else {
        curr[val] = value; // last segment: assign the value here
    }
}

The important part is that every time we move one level deeper in the URL, we update curr so that we keep our place instead of always adding to the top level.

[–] sleep_deprived@lemmy.world 1 points 9 months ago (1 children)

The GPU I used is actually a 1080, with a (rapidly declining in usefulness) Intel i5-4690K. But I suppose laptop vs desktop can certainly make all the difference. What I really want is GPU virtualization, which I've heard AMD supports, but I'm not about to buy a new GPU when what I've got works fine.
