ArmoredCavalry

joined 1 year ago
[–] ArmoredCavalry@lemmy.world 6 points 5 months ago (4 children)

Many are, but as far as I know, no hosting provider has ever tried something like what was claimed (which is why it made such news).

It seems like many people didn't even verify whether that portion of the ToS was new (by checking the web archive), or wait for Vultr's response, before closing their accounts.

Even after the official response, it feels like people stuck to their original assumptions and felt justified moving services?

Companies, and specifically the people in them, make mistakes. What matters is their reaction. I'm scratching my head to think what Vultr could do better in this case (other than creating a time machine to avoid the initial screw up).

[–] ArmoredCavalry@lemmy.world 58 points 5 months ago* (last edited 5 months ago) (13 children)

Vultr posted their response to the concerns here - https://www.vultr.com/news/a-note-about-vultrs-terms-of-service/

The portion of the ToS that people were worried about had been in place for years and had nothing to do with server intellectual property. They are removing it to avoid future confusion.

I don't disagree that it was poorly worded, but the amount of people jumping to the worst possible conclusions on this is concerning. What happened to Hanlon's Razor?

[–] ArmoredCavalry@lemmy.world 1 points 7 months ago

Weird! For reference, one VM I run it on only has 1 GB of memory, and Netdata uses 100-200 MB. Could be something going on with UnRAID though. Definitely some sort of bug I'd think, since normally resource usage should be very low across the board.

[–] ArmoredCavalry@lemmy.world 1 points 7 months ago (2 children)

That's strange, I've run it fine on some very underpowered hardware. Are you adding a specific monitoring integration with it, or just using the out-of-the-box settings?

[–] ArmoredCavalry@lemmy.world 3 points 7 months ago* (last edited 7 months ago)

As others stated, you can run and access the interface locally (or set up your own reverse proxy) for free. Their Cloud dashboard is also free for up to 5 nodes. They recently added a flat-rate "Homelab" plan as well, if you want to remove the limit. It's all quite usable for $0 otherwise though!
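For anyone curious about the reverse-proxy route: a minimal sketch of an nginx site config fronting a local Netdata instance (Netdata listens on port 19999 by default; the hostname and certificate paths here are placeholders, adjust to your setup):

```nginx
# Hypothetical example: expose a local Netdata dashboard at
# https://monitor.example.com via nginx. All names are placeholders.
server {
    listen 443 ssl;
    server_name monitor.example.com;

    ssl_certificate     /etc/ssl/certs/monitor.example.com.pem;
    ssl_certificate_key /etc/ssl/private/monitor.example.com.key;

    location / {
        # Netdata's default local port
        proxy_pass http://127.0.0.1:19999;
        proxy_set_header Host $host;
        # The dashboard streams live updates; keep connections persistent
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```

You'd likely also want some form of authentication (basic auth or similar) in front of it if it's reachable from the internet.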

[–] ArmoredCavalry@lemmy.world 20 points 7 months ago (11 children)

I'm a huge fan of Netdata, very configurable and monitors just about anything you could want. Great interface and alerts too - https://www.netdata.cloud/

[–] ArmoredCavalry@lemmy.world 11 points 11 months ago

Worth noting that this should not affect you if you are only using tunnels (no DNS entries / open ports).

[–] ArmoredCavalry@lemmy.world 6 points 1 year ago (1 children)

Netdata is fantastic, but not sure I'd call the UI mobile friendly (unless I'm missing something? 😂) To me, that's really one of the only weak points with it.

[–] ArmoredCavalry@lemmy.world 1 points 1 year ago

Thanks for the suggestion, I'll add that to my list as well!

[–] ArmoredCavalry@lemmy.world 12 points 1 year ago (3 children)

Started reading Parable of the Sower by Octavia E. Butler. I really like the style of writing, so much detail into the main character's mind.

It is also impressive just how relevant the topics are today, for a book written back in 1993 (climate change, wealth disparity, etc.). It's really fascinating (scary?) to see what the author thought the U.S. would look like in 2024 and onwards.

[–] ArmoredCavalry@lemmy.world 2 points 1 year ago (1 children)

That isn't how it works today. I'm talking about sometime in the distant (or near) future. Surely at some point AI will have capabilities on par with at least a low-level hacker.

Or, if you still think that's a stretch, just imagine all the ways perfectly legitimate software can cost companies money. Not through malicious design, but just by mistakes.

52
submitted 1 year ago* (last edited 1 year ago) by ArmoredCavalry@lemmy.world to c/technology@lemmy.world

When the idea of self-driving cars first started becoming mainstream, I remember a lot of debate about liability. If an accident occurs, who would be at fault? I think a lot of those questions are still unanswered.

Fast forward and now we have software like ChatGPT. I assume they'll only become more capable (and connected) over time.

Which makes it strange I haven't really heard any similar discussion around liability. What happens when it makes mistakes or causes damage?

Maybe in people's minds it doesn't matter, because AI is either something that helps with homework questions, or something that's taking over humanity. Reality is probably in between those two, with much more mundane mistakes or damages done.

What happens when the first ransomware is deployed by AI, on behalf of a user who just wanted tips on how to make more side income?
