this post was submitted on 01 May 2024
88 points (100.0% liked)

Technology

you are viewing a single comment's thread
[–] casmael@lemm.ee 38 points 6 months ago (30 children)

Why anyone would develop this technology, I simply don’t understand. All involved should be sent to jail. What the fuck.

[–] some_guy@lemmy.sdf.org 18 points 6 months ago (5 children)

They mentioned one potential use that I thought had value and that I hadn't considered. For video conferencing, this could transmit facial data instead of video and greatly reduce the bandwidth needed, by rendering people's faces locally. I don't think that outweighs the massive harms this technology will unleash. But at least there was some use that would be legit and beneficial.
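The bandwidth argument above can be sketched with rough numbers. Everything below is an illustrative assumption, not a measurement: suppose the sender transmits only a set of facial keypoints per frame (68 landmarks is a common count in face-tracking work) and the receiver renders the face locally, versus streaming a typical 720p video call.

```python
# Back-of-envelope comparison: keypoint stream vs. a 720p video call.
# All numbers are illustrative assumptions, not measurements.

VIDEO_CALL_BPS = 1_500_000   # assumed ~1.5 Mbit/s for a 720p call

NUM_KEYPOINTS = 68           # a common facial-landmark count
FLOATS_PER_KEYPOINT = 3      # x, y, z coordinates
BYTES_PER_FLOAT = 4          # uncompressed 32-bit floats
FPS = 30                     # frames per second

# Bits per second needed to send raw keypoints every frame.
keypoint_bps = NUM_KEYPOINTS * FLOATS_PER_KEYPOINT * BYTES_PER_FLOAT * 8 * FPS

ratio = VIDEO_CALL_BPS / keypoint_bps
print(f"keypoint stream: {keypoint_bps:,} bit/s")   # 195,840 bit/s
print(f"video call is roughly {ratio:.1f}x larger")
```

Even with these naive, uncompressed assumptions the keypoint stream is several times smaller than the video stream, and real systems would quantize or compress the keypoints further — which is the trade-off the comment is describing: network bandwidth is exchanged for local compute.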

I'm someone who has a moral compass, and I don't like that scammers will abuse this shit, so I hate it. But there's no keeping it locked away. It's here to stay. I hate the future / now.

[–] flora_explora@beehaw.org 3 points 6 months ago (3 children)

Wouldn't you then have to run the AI locally on a machine (which probably requires a lot of power and memory) or use it via the cloud (which depends on bandwidth, just like a video call)? I don't really see where this technology could actually be useful. Sure, if it were only a minor computation, like when you take a picture/video with any modern smartphone. But computing an entire face and voice seems much more complicated than that, and not really feasible for the usual home device.

[–] Markaos@lemmy.one 3 points 6 months ago (1 children)

Yeah, it's not practical right now, but in 10 years? Who knows, we might finally have some built-in AI accelerator capable of running big neural networks on consumer CPUs by then (we do have AI accelerators in a large chunk of current CPUs, but they're not up to the task yet). The system memory should also go up now that memory-hungry AI is inching closer to mainstream use.

Sure, Internet bandwidth will also increase, meaning this compression will be less important, but on the other hand, it's not like we stopped improving video codecs after H.264 because it was good enough - there are better codecs now even though we have the resources to handle bigger H.264 videos.

The technology doesn't have to be useful right now - for example, neural networks capable of learning have been studied since the 1940s, even though there was no way to run them for decades, and it took even longer before they could be run at a useful scale. But now that we have the technology to do so, they enjoy rapid progress building on top of that original foundation.

[–] flora_explora@beehaw.org 2 points 6 months ago

Fair point, I agree.
