Traister101

joined 11 months ago
[–] Traister101@lemmy.today 8 points 1 week ago (3 children)

"Significantly"? Going by the comparison Sony felt was large enough to brag about, there's hardly a noticeable difference.

[–] Traister101@lemmy.today 5 points 3 weeks ago

There are a couple of projects with native blockchain art, but as you might expect it's low-resolution pixel art, since the blockchain is prohibitively expensive to use as storage.

[–] Traister101@lemmy.today 3 points 1 month ago (1 children)

Brave is forked from Chromium, so hypothetically they could keep maintaining V2, but they'd need their own extension store as they currently rely on Google's.

[–] Traister101@lemmy.today 65 points 1 month ago (2 children)

Yup. The moron even admitted it too!

[–] Traister101@lemmy.today 3 points 1 month ago* (last edited 1 month ago)

Elon pretended to lean left. He never was and never has been left-leaning. He's been the same old guy this entire time; it's just getting more and more difficult to pretend otherwise.

[–] Traister101@lemmy.today -2 points 2 months ago (2 children)

All cars are bad. The car you already own is less bad than a brand-new car.

[–] Traister101@lemmy.today 4 points 2 months ago (1 children)

Counterintuitive, but more instructions are usually better. They enable you (but let's be honest, the compiler) to be much more specific, which usually has performance benefits at little if any cost in binary size. Take for example SIMD: hyper-specific instructions that perform the same math operation on large chunks of data at once. They're extremely specialized, but when properly utilized they bring huge performance improvements.

[–] Traister101@lemmy.today 1 points 2 months ago (1 children)

I take it you haven't had to go through an AI chatbot for support before, huh

[–] Traister101@lemmy.today 4 points 2 months ago (3 children)

We do know; we created them. The AI everyone is currently freaking out about does a single thing: predict text. You can think of LLMs as a hyper-advanced autocorrect. The main thing that's exciting is that they produce text that looks as if a human wrote it. That's all. They don't have any memory or any persistence whatsoever. That's why we have to feed in a bunch of the previous text (the context) in a "conversation" in order for it to work as convincingly as it does. It cannot and does not remember what you say.

[–] Traister101@lemmy.today 7 points 2 months ago (1 children)

Yup, libraries should usually let the consumer choose what to do with an error, not crash the program without giving them a choice in the matter. The only real exception is performance-critical low-level code such as the core of a graphics or audio driver. Though in those cases crashing often isn't an option either; you just power through and hope things aren't too screwed up.
