this post was submitted on 04 May 2025
100 points (79.8% liked)

Technology

[–] Buffalox@lemmy.world 7 points 2 months ago* (last edited 2 months ago) (19 children)

I find it funny that in the year 2000, while studying philosophy at the University of Copenhagen, I predicted strong AI around 2035. This was based on calculations of computational power, plus an estimate that software development would trail the hardware a bit.
At the time I had already been interested in AI development and matters of consciousness for many years, and I was a decent programmer; I had written self-modifying code as far back as 1982. So I made this prediction at a time when AI wasn't a very popular topic, in the middle of a decades-long, largely futile desert walk without much progress.
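To give a feel for the kind of back-of-the-envelope calculation I mean, here is a sketch with made-up illustrative numbers (assumptions for illustration, not my actual figures from 2000):

```python
import math

# Back-of-the-envelope extrapolation with illustrative assumptions:
# compute doubles roughly every two years (Moore's law), and "brain-scale"
# compute is taken from a commonly cited rough estimate.
start_year = 2000
start_flops = 1e12      # ~1 TFLOPS, roughly top-end hardware around 2000 (assumption)
brain_flops = 1e16      # rough brain-equivalent estimate (assumption)
doubling_years = 2.0    # assumed doubling period

doublings = math.log2(brain_flops / start_flops)
hardware_year = start_year + doublings * doubling_years
print(f"doublings needed: {doublings:.1f}")        # ~13.3
print(f"hardware milestone: ~{hardware_year:.0f}") # ~2027
# Add some years for the software to catch up, and you land in the 2030s.
```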

And for about 15 years, very little continued to happen. It was pretty obvious that the approach behind, for instance, Deep Blue wasn't the way forward. But that seemed to be the norm for a long time.
Now, though, it looks to me like the understanding of how to build a strong AI is much, much closer, as I expected. We might actually be halfway there!
I think we are pretty close to having the necessary computational power in AI-specific datacenter clusters, but the software isn't quite there yet.

I'm honestly not that interested in the current level of AI. Although LLMs can yield very impressive results at times, they are also flawed, and I see them as somewhat transitional.
For instance, partially self-driving cars are kind of irrelevant IMO. But truly self-driving cars will make all the difference in usefulness, and will be a cool achievement for the current level of AI when we get there.

So current-level AI can be useful, but when we achieve strong AI, it will make all the difference!

Edit PS:
Obviously my prediction relied on the assumption that brains and consciousness are natural phenomena that don't require a god, an assumption I personally consider a fact.

[–] drspod@lemmy.ml 1 points 2 months ago (1 children)

It was pretty obvious that the approach behind, for instance, Deep Blue wasn't the way forward.

That's a weird example to pick. What exactly about Deep Blue do you think wasn't the way forward?

[–] Buffalox@lemmy.world 3 points 2 months ago* (last edited 2 months ago)

Deep Blue was mostly based on raw computational power, with very little ability to judge whether a move was "good" without calculating the possibilities that followed it.
As I understand it, it treated chess purely as a "mathematical" problem, and it was incapable of judging strategic positions except when it had "seen" a position before and already calculated the possible outcomes.
In short, there was very little intelligence; it was based on memory and massive calculation power alone. Those are indeed aspects of intelligence, but only at a very low level.
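To illustrate what I mean by "massive calculation power": the core of that style of engine is exhaustive game-tree search plus a handcrafted scoring function. Here is a toy minimax sketch (a hypothetical stand-in, nothing like Deep Blue's actual code, which used alpha-beta pruning on custom chess hardware):

```python
# Toy brute-force game-tree search (illustrative only).
def minimax(node, depth, maximizing, children, evaluate):
    """Exhaustively look `depth` plies ahead; judge leaves with a static heuristic."""
    kids = children(node)
    if depth == 0 or not kids:
        # All the "judgment" lives in this handcrafted evaluation function.
        return evaluate(node)
    scores = [minimax(k, depth - 1, not maximizing, children, evaluate) for k in kids]
    # Each side assumes the opponent also picks the best move it can see.
    return max(scores) if maximizing else min(scores)

# Hypothetical stand-in for a game: positions are integers, a "move" adds
# 1 or 2, and the static evaluation is just the number itself.
best = minimax(
    0, 3, True,
    children=lambda n: [n + 1, n + 2] if n < 5 else [],
    evaluate=lambda n: n,
)
print(best)  # pure lookahead and arithmetic, no strategic understanding
```

Scale that search up by orders of magnitude and hand-tune the evaluation, and you get very strong chess, but the "judgment" never goes beyond that static function.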
