this post was submitted on 30 Mar 2025
34 points (90.5% liked)

Asklemmy

I can't see it happening tbh, but the US government discussed putting restrictions on AI development, and I think OpenAI or some other companies actually asked them to do so!? And there were shorts/reels of high-profile developers hyping up the fact that "we don't know what we're doing", and one of them quit his job. So why all that hype? Is the "Matrix" route actually a possible future?

[–] CanadaPlus@lemmy.sdf.org 2 points 1 day ago* (last edited 15 hours ago)

Months ago I would have said "yes, it's possible". Now it's become pretty clear that LLMs are a dead end. They're trained to simulate the internet and can't do other things with any reliability.

It's still possible with whatever the future approach to making computers smarter turns out to be, though. Natural intelligence exists, and we're made out of the same stuff as everything else, so artificial intelligence must also be possible. And, without the limits of recent evolution, an AGI could probably be made far better than us.

Don't forget that pandemics used to be a goofy sci-fi trope, too.