agressivelyPassive

joined 1 year ago
[–] agressivelyPassive@feddit.de 33 points 7 months ago (6 children)

If you actually want to use your machine, keeping it from nuking itself shouldn't be a hobby in its own right. I need a reliable platform to work on, not a minefield on a fault line.

[–] agressivelyPassive@feddit.de 15 points 7 months ago (4 children)

But you don't. And neither do 99.99% of users.

[–] agressivelyPassive@feddit.de 9 points 7 months ago

Power grids need to be upgraded anyway, simply because of EVs and heat pumps. All the energy that was formerly distributed via gas stations and gas pipes now has to be pushed through the grid.

Apart from that, local storage is actually pretty great for handling fluctuations, since it's essentially a smoothing capacitor.
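As a toy illustration (all numbers invented), here's the smoothing effect in a few lines of Python: the battery soaks up the difference between a spiky local load and a flat draw from the grid.

```python
# The grid only ever sees the flat average; the battery absorbs the spikes.
loads_kwh = [2.0, 8.0, 1.0, 9.0, 3.0, 7.0]   # fluctuating household load per interval
grid_draw = sum(loads_kwh) / len(loads_kwh)  # flat 5 kWh per interval from the grid
battery = 10.0                               # state of charge in kWh

for load in loads_kwh:
    battery += grid_draw - load              # surplus charges, deficit discharges
    print(f"load={load:.0f} kWh  grid={grid_draw:.0f} kWh  battery={battery:.1f} kWh")
```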

Cities are a problem, but that's what large-scale production like wind is for. Obviously, large storage facilities are also needed, but hydro power, hydrogen, and a few local batteries could easily supplement the grid.

[–] agressivelyPassive@feddit.de 18 points 7 months ago

I'm very skeptical that AI will deliver large-scale economic value.

The current boom is essentially fueled by free money. VCs pump billions into start-ups, and more established companies get billions in subsidies or get their customers to pay outrageous amounts based on promises. Still, I have yet to see a single AI product that is worth the hassle. The results are either not that good or way too expensive, and if you couldn't rely on open models paid for with VC money, you wouldn't be able to get anything off the ground.

[–] agressivelyPassive@feddit.de 8 points 7 months ago (2 children)

Nobody fired workers because of AI; that's just the narrative so they don't have to say "we're running out of money".

[–] agressivelyPassive@feddit.de 3 points 7 months ago

> Will increasing the model actually help? Right now we're dealing with LLMs that literally have the entire internet as a model. It is difficult to increase that.
>
> Making a better way to process said model would be a much more substantive achievement. So that when particular details are needed it's not just random chance that it gets it right.

Where exactly did you write anything about interpretation? Getting "details right" by processing faster? I would hardly call that "interpretation"; that's just being wrong faster.

[–] agressivelyPassive@feddit.de 8 points 7 months ago (2 children)

It is far off. It's like claiming you know all of physics because you skimmed a textbook once.

Interpretation is also a problem that can be solved; current models already understand quite a lot of nuance, subtext, and implicit context.

But you're moving the goalposts here. We started at "don't get better, at a plateau" and now you're aiming for perfection.

[–] agressivelyPassive@feddit.de 9 points 7 months ago (4 children)

That is literally a complete misinterpretation of how models work.

You don't "have the Internet as a model"; you train a model using large amounts of data. That does not mean that the model contains any of the actual data. State-of-the-art models are somewhere in the billions of parameters. If you have, say, 50B parameters, each stored as a 64-bit/8-byte double (which is way, way more precision than needed), you get something like 400 GB of data. That's a lot, but the Internet is slightly larger than that.
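If you want to check the arithmetic yourself, a quick sketch in Python (using round decimal gigabytes):

```python
# Weights only: 50 billion parameters stored as 64-bit doubles.
params = 50e9        # 50B parameters
bytes_each = 8       # fp64 -- far more precision than models actually need
print(f"{params * bytes_each / 1e9:.0f} GB")  # -> 400 GB
```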

[–] agressivelyPassive@feddit.de 3 points 7 months ago (6 children)

Maybe that's because we've reached the limits of what current model architectures can achieve on current GPU architectures.

To create significantly better models without a fundamentally new approach, you have to increase the model size. And if the accelerators available to you only offer, say, 24 GB, you can't grow indefinitely. At least not within a reasonable timeframe.
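Rough upper bounds for a single 24 GB card (a sketch assuming decimal gigabytes and weights only; activations, KV cache, and framework overhead make it worse in practice):

```python
# Maximum parameter count that fits in 24 GB of VRAM, weights only.
vram_bytes = 24e9
for precision, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    print(f"{precision}: ~{vram_bytes / bytes_per_param / 1e9:.0f}B parameters")
```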

[–] agressivelyPassive@feddit.de 1 points 7 months ago (1 children)

I still need a cable modem. And as far as I know, none of the ones that can be used with my provider support any other OS.

[–] agressivelyPassive@feddit.de 16 points 7 months ago (1 children)

I thought that was obvious enough.

[–] agressivelyPassive@feddit.de 5 points 7 months ago (1 children)

Jellyfin on a NAS plus a cheap little box attached to the TV should be fine.

An old RPi 3 could be enough. The only complication might be transcoding: if the player can't handle the format, the server has to transcode, which could be taxing on the NAS.
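If you want to check up front whether a file would direct-play, something like this works. A sketch using ffprobe (ships with ffmpeg); `movie.mkv` and the H.264/AAC whitelist are just examples of what a weak client might direct-play.

```python
import json
import subprocess

def stream_codecs(path):
    """Map codec_type -> codec_name for the file's streams, via ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_streams", "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return {s["codec_type"]: s["codec_name"] for s in json.loads(out)["streams"]}

codecs = stream_codecs("movie.mkv")  # hypothetical file
# Example: a client that can only direct-play H.264 video with AAC audio.
if codecs.get("video") == "h264" and codecs.get("audio") == "aac":
    print("direct play OK")
else:
    print("the server would have to transcode")
```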
