GissaMittJobb

joined 2 years ago
[–] GissaMittJobb@lemmy.ml 14 points 3 months ago

A tech chop shop: they buy tech companies, lay off most of the staff, and then hike prices as much as they possibly can.

[–] GissaMittJobb@lemmy.ml 3 points 3 months ago

I knew I felt completely done with the town I grew up in by the time I graduated high school. Through some funny circumstances, I then spent an additional 3 years there attending university.

Then I moved to the biggest city in our country in search of better job prospects. It was mostly about the availability of jobs at that point; I didn't have a specific desire to move to this particular city for any other reason, since I didn't really know what it would be like to live in a different city. I figured I could always move back - or to another place - if things didn't work out.

I have never looked back, as I learned that I really enjoy living in larger cities over smaller, more car-dependent ones. I miss nothing from my old city, except maybe proximity to my parents, which was never something I valued particularly highly anyway.

[–] GissaMittJobb@lemmy.ml 2 points 3 months ago

So quite literally worse than a coin flip, then.

Streaming lossless audio uses three to six times as much data and takes more processing to play back, so there's always a penalty involved. We didn't invent lossy codecs for no reason.
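
A rough back-of-envelope with assumed bitrates (~1000 kbps for CD-quality FLAC versus ~256 kbps for a typical lossy stream - my numbers, not anything from the thread) shows where a multiple in that range comes from:

```python
# Back-of-envelope data usage, lossless vs lossy streaming.
# Assumed bitrates: ~1000 kbps for CD-quality FLAC, ~256 kbps for a typical lossy stream.
flac_kbps = 1000
lossy_kbps = 256

ratio = flac_kbps / lossy_kbps                      # ~3.9x more data for lossless
mb_per_hour_flac = flac_kbps * 3600 / 8 / 1000      # kilobits/s -> megabytes/hour (~450 MB)
mb_per_hour_lossy = lossy_kbps * 3600 / 8 / 1000    # ~115 MB

print(f"lossless uses ~{ratio:.1f}x the data")
print(f"per hour: {mb_per_hour_flac:.0f} MB lossless vs {mb_per_hour_lossy:.0f} MB lossy")
```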

[–] GissaMittJobb@lemmy.ml 6 points 3 months ago

Based on the vibes of every internet comment section, literally everyone and their mother wants to stream lossless.

You're right about the audible benefits, however.

[–] GissaMittJobb@lemmy.ml 24 points 3 months ago (6 children)

I have no idea what they think this will accomplish, to be honest. It has about as much legal value as posting on Facebook that you don't allow them to use your photos.

[–] GissaMittJobb@lemmy.ml 9 points 3 months ago (1 children)

Well, he has terminal cancer, so his spewing of hatred will soon come to a permanent end.

[–] GissaMittJobb@lemmy.ml 2 points 3 months ago

I recommend the original Swedish movie; it's quite good.

[–] GissaMittJobb@lemmy.ml 1 points 3 months ago (1 children)

> No, the old model does not have the training data. It only has "model weights". You can conceptualize those as the abstract rules that the old model learned when it read the training data. By design, they are not supposed to memorize their training data.

I expressed myself poorly; this is what I meant - it has the "essence" of the training data, but of course not the verbatim training data.

> To outperform the old model, the new model needs more than what the old model learned. It needs primary sources, i.e. the training data itself. Which is going to be deleted.

I wonder how valuable the old training data is to the process in relative terms, compared to just the new training data. I can't answer that, but it would be interesting to know.
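
To illustrate the weights-vs-data point above, here's a minimal PyTorch sketch with a toy model (not any particular LLM): a saved checkpoint is just a set of named weight tensors, with none of the training text inside it.

```python
# Minimal sketch: a model checkpoint holds weight tensors, not the training data.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
torch.save(model.state_dict(), "checkpoint.pt")   # this file is what gets kept/shipped

state = torch.load("checkpoint.pt")
for name, tensor in state.items():
    print(name, tuple(tensor.shape))              # e.g. '0.weight' (32, 16) - only numbers, no text
```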

[–] GissaMittJobb@lemmy.ml 2 points 3 months ago (3 children)

I think we're in agreement with each other? The old model has the old training data baked in, and then you train a new one on top of that model with new training data, right?

[–] GissaMittJobb@lemmy.ml 3 points 3 months ago (6 children)

I guess it depends on how important the old data is when building new models on top of old ones, which I fully admit I don't know the answer to. As I understand it, though, new models are not trained fully from scratch, but are instead a continuation of the older model, trained with new techniques/new data.
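
As a hedged illustration of that continuation idea - using Hugging Face transformers with GPT-2 purely as a stand-in, not whatever pipeline the labs actually run - continuing a model means loading the old weights and training further on new data; the old corpus itself is never loaded:

```python
# Hedged sketch of continued training: start from an existing checkpoint, train on new data.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")     # old weights as the starting point
tokenizer = AutoTokenizer.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

batch = tokenizer("some brand-new training text", return_tensors="pt")
outputs = model(**batch, labels=batch["input_ids"])      # standard causal-LM loss on the new data
outputs.loss.backward()
optimizer.step()                                          # one step of continued training
```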

To speculate, I guess not having the older data present in the new training stages might make the attributes of that data less pronounced in the new output model.

Maybe they could cheat the system by trying to distill that data out of the older models and feed it back into the training data (rough sketch of the idea below), but I guess the risk of model collapse is not insignificant there.

Again, limited understanding here - take everything I speculate with a grain of salt.
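
For what it's worth, here's a toy PyTorch sketch of the distillation idea above (a made-up teacher/student pair, nothing resembling a real lab pipeline): the new "student" model is trained to match the old "teacher" model's outputs, so some of what the old model learned carries over without touching the original data.

```python
# Toy knowledge-distillation sketch: train a new "student" model to imitate an old "teacher".
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(8, 4)    # stand-in for the old model (kept frozen)
student = nn.Linear(8, 4)    # stand-in for the new model
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for _ in range(200):
    x = torch.randn(32, 8)                              # fresh inputs, not the old training data
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x), dim=-1)   # the teacher's output distribution
    student_log_probs = F.log_softmax(student(x), dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                    # student drifts toward the teacher's behaviour
```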

[–] GissaMittJobb@lemmy.ml 3 points 3 months ago (8 children)

They essentially still have the information in the weights, so I guess they won't fret too much over losing the original training data.
