this post was submitted on 15 Feb 2024
151 points (93.6% liked)

“In 10 years, computers will be doing this a million times faster.” The head of Nvidia does not believe that there is a need to invest trillions of dollars in the production of chips for AI

Despite the fact that Nvidia is now almost the main beneficiary of the growing interest in AI, the head of the company, Jensen Huang, does not believe that…

[–] eleitl@lemmy.ml 10 points 9 months ago (5 children)

So a Cerebras wafer will be 10^6 times faster for the same computation, at the same price, in just 10 years? Not likely, with Moore scaling having ended years ago and neural hardware architectures having matured. Sure, you can go analog, but that's not the same computation. And without true 3D integration, that's the end of the line.

[–] Buffalox@lemmy.world 8 points 9 months ago* (last edited 9 months ago) (1 children)

It requires a 4× speed increase every year (10^6 over 10 years works out to roughly 4× per year). Process scaling can't provide even close to half of that, maybe 25% per year, plus another 25% from design improvements; and as for increasing die sizes, they are already close to the end. So the only way to get from ~1.56× to 4× per year is multi-chip designs, meaning roughly 2.5× more chips every year. The multi-chip package in 10 years would have to contain almost 10,000 chips, all of them bleeding edge!

The H200 is estimated to cost $40K; the chip 10 years out would be more like $40 million. Or maybe more like impossible to achieve.
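Back-of-the-envelope, for anyone who wants to check the compounding (the 25% + 25% annual gains are my assumptions from above, not anyone's roadmap):

```python
# Sketch of the compounding argument above; all inputs are assumptions.

target = 1e6   # "a million times faster"
years = 10

per_year_needed = target ** (1 / years)             # ~3.98x needed every year
per_year_one_chip = 1.25 * 1.25                     # ~1.56x from process + design
gap_per_year = per_year_needed / per_year_one_chip  # ~2.55x must come from more chips

chips_in_10_years = gap_per_year ** years           # ~11,600 bleeding-edge dies per package

print(f"needed per year:      {per_year_needed:.2f}x")
print(f"one chip per year:    {per_year_one_chip:.2f}x")
print(f"extra chips per year: {gap_per_year:.2f}x")
print(f"chips after 10 years: {chips_in_10_years:,.0f}")
```

So the compounding alone lands in the ~10,000-chip ballpark.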

[–] agent_flounder@lemmy.world 2 points 9 months ago (1 children)

If chips = CPUs here, then I imagine that will hit a limit too (Amdahl's law).

[–] Buffalox@lemmy.world 4 points 9 months ago (1 children)

A chip is also called a die; it's the piece cut out of the wafer, which is then mounted in a chip package.
Since traditionally there was always one chip per package, the two words were used almost synonymously.
In this case it's basically GPU chips, which AFAIK AMD has already figured out how to use in multi-chip packages, meaning one package contains multiple chips that work "almost" as well as a single chip of similar size.

The advantages of multi-chip packages are obvious: production costs are way lower, because smaller dies mean a lower percentage of flawed dies, and they allow better binning of higher-end parts.
They also allow far more complex package designs than would be possible with monolithic chips. This is why AMD has been taking server market share from Intel: Intel has not been able to match the multi-chip design AMD introduced with Epyc in 2016/17, which originally was four Ryzen chiplets/chips/dies packaged together as one big 32-core server chip, while the biggest chip Intel could make was 28 cores.
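To put illustrative numbers on the yield point, here's a sketch using the classic Poisson yield model; the defect density and die sizes are made-up assumptions, not foundry figures:

```python
import math

D0 = 0.1  # assumed defect density, defects per cm^2 (illustrative)

def yield_fraction(die_area_cm2: float, defect_density: float = D0) -> float:
    """Poisson yield model: expected fraction of defect-free dies."""
    return math.exp(-defect_density * die_area_cm2)

print(f"600 mm^2 monolithic die: {yield_fraction(6.00):.1%}")  # ~54.9% good
print(f"75 mm^2 chiplet:         {yield_fraction(0.75):.1%}")  # ~92.8% good
# Eight good 75 mm^2 chiplets cover the same silicon as the big die,
# but far less of the wafer ends up in the trash, and flawed chiplets
# can often still be binned into lower-end parts.
```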

But packaging almost 10,000 GPU chips together is a completely different matter, and I don't think that will be feasible within 10 years.

Amdahl's law, however, is part obvious and part bullshit. Almost everything your mind can do semi-efficiently can be multithreaded; very few things can't.
Amdahl's law is basically irrelevant with regard to AI, since AI involves a lot of pattern recognition, and pattern recognition is perfect for multithreading.
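For reference, Amdahl's law itself is one line; the parallel fractions below are just illustrative:

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Amdahl's law: overall speedup is capped by the serial fraction."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# If 99% of the work parallelizes (plausible for pattern recognition),
# massive parallelism still pays off; at 90% it barely does.
print(f"{amdahl_speedup(0.99, 10_000):.0f}x")  # ~99x
print(f"{amdahl_speedup(0.90, 10_000):.0f}x")  # ~10x
```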

[–] TheGrandNagus@lemmy.world 3 points 9 months ago

And to add: current TSMC nodes have a reticle limit of 858 mm², i.e. that's the largest chip you can make on their wafers. In the real world you stay slightly below that.

Future nodes are reducing this to the 350-450 mm² range.

High-end GPUs/HPC cards will basically have to go multi-die, even in a fantasy world of 100% perfect yields.
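Rough numbers, using a common first-order dies-per-wafer approximation (it ignores scribe lines and die aspect ratio, so treat the results as ballpark):

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Gross dies per wafer: wafer area over die area, minus an edge-loss term."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(dies_per_wafer(858))  # ~59 reticle-limit dies per 300 mm wafer
print(dies_per_wafer(400))  # ~143 dies at a ~400 mm^2 future reticle limit
```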

[–] Pistcow@lemm.ee 6 points 9 months ago

Then stop making new chips every year that offer a 5-7% performance improvement at a 20% price increase.

[–] DNU@lemmy.world 2 points 9 months ago (1 children)

So, for those of us who are a bit more tech-illiterate: their claim is BS?

[–] DNU@lemmy.world 2 points 9 months ago

I mean, 1,000,000× is a big claim anyway.

[–] someacnt_@lemmy.world 1 points 9 months ago

Yeah, really. Semiconductor progress has begun stagnating recently due to fundamental limits. I'm gonna call bull on this one; I think they are rather forecasting plunging demand.

[–] givesomefucks@lemmy.world -3 points 9 months ago (3 children)

It depends what you call AI.

True artificial intelligence likely requires quantum computing, because there's some quantum stuff happening in our brains. Probably the smartest living human (Sir Roger Penrose) thinks that's where the secret to consciousness is hiding, having spent the last couple of decades investigating it after helping Hawking finish up Einstein's work.

If you just mean a chatbot that can pass the Turing test, then yeah, we can just wait a decade instead of developing special tech for AI.

I mean, if we really develop artificial intelligence before we understand our own consciousness, we're probably fucked anyways.

It'd be like somehow inventing a nuclear bomb before understanding what radiation was. We'd have no idea what we're creating or what the consequences of flipping the switch would be.

[–] General_Effort@lemmy.world 5 points 9 months ago

Roger Penrose is a mathematician who made important contributions to theoretical physics in the 1960s, for which he received a Nobel Prize. In later decades he published speculative books on consciousness, quantum physics, and neurobiology. These ideas have been out there for about 30 years now but have not managed to convince scientists in general; rather, they are generally considered implausible or outright contradicted by the evidence. Simply put: it's wrong.

The idea that quantum physics plays a direct role in brain function is very much on the fringes of science.

No offense meant. I know these ideas are very important to many spiritual people, but I felt the casual reader should know that it is not important in science.

[–] NOSin@lemmy.world 3 points 9 months ago (2 children)

Do you know if there is, or if there are plans for, a "new" Turing test?

[–] jackalope@lemmy.ml 6 points 9 months ago

The Turing test is a rhetorical tool Turing used to outline his logical positivist beliefs. Turing didn't believe in its use as an actual test; it's not a discrete test, it's a test over hypothetically infinite time.

[–] 9488fcea02a9@sh.itjust.works 3 points 9 months ago

Yes. There is a newer test called Voight-Kampff that can test advanced AI.

[–] GnomeKat@lemmy.blahaj.zone -2 points 9 months ago (3 children)

Can we stop with this "not real AI" meme... it's a painfully dull response at this point. Why does the goalpost have legs? Just because Penrose thinks quantum mumbo-jumbo is needed doesn't mean he's right; machine learning is completely outside his field of expertise.

[–] givesomefucks@lemmy.world 5 points 9 months ago (2 children)

Mate, I was using chatbots on AIM 24 years ago...

It wasn't AI then, it's not AI now.

The only reason to get super excited about current chatbots is if you think they came out of nowhere, rather than being the product of decades of slow progression. Unless you don't know that history, there's no reason to expect a sudden huge jump to actual AI.

People aren't changing definitions on you...

Well, some people are, it's just the ones telling you chatbots are AI.

They're just lying to generate hype to get investor money. You're a bystander that fell for it.

[–] GnomeKat@lemmy.blahaj.zone 3 points 9 months ago

I'm sure using AIM made you an AI expert too.

[–] frezik@midwest.social 0 points 9 months ago* (last edited 9 months ago)

It doesn't have to be a full human-level intelligence to advance the field of AI.

[–] TropicalDingdong@lemmy.world 2 points 9 months ago (1 children)

I completely agree on the idiotic consensus around the no-true-AI meme.

The goalposts are practically mounted on wheels, they're having to move them so fast. Machine learning plus complexity seems to be enough.

I think ChatGPT represents a "Deep Blue" moment for AI: finally, something fairly general that is at least somewhat competitive with humans. Hell, ChatGPT can probably play chess better than the average human too.

But what we're waiting for is the "AlphaGo" moment of AI, the moment when the unconquerable is toppled. I expect it to happen in 2-3 years. I think we've got almost all we need on the theoretical side, and the rest will be engineering.

I expect AI to be largely independent, to have agency indistinguishable from a human's, but to be better, faster, and broader in scope than most humans. It will still get beaten by the best of the best humans. It will still make weird, sideways mistakes that wouldn't seem like obvious mistakes for a human to make. But it will be generally better than most humans at most tasks.

[–] General_Effort@lemmy.world 1 points 9 months ago (1 children)

Deep Blue and AlphaGo were AI, though?

[–] TropicalDingdong@lemmy.world 3 points 9 months ago

Sure, but in the context of the time, the narrative was that AI would never beat humans at chess. The assumption was that you would have to encode all winning positions, and that there were just too many possible positions for that to work.

That narrative and those assumptions were wrong. It turns out a computer system can start out not even knowing the rules of chess, learn them, and then learn to play better than any human ever can.

Then the naysayers came up with a bunch of new qualifications for what "real" AI would be, because they had made the wrong assumptions in the first place. The same thing is happening right now in the current conversation around AI.

My point is that there has been substantial goalpost-moving around this domain historically, and the naysayers have been consistently demonstrated to be wrong. It's fun and trendy to be a naysayer; it makes you seem smarter than you are. But we've failed to come up with even a basic definition of 'intelligence' that is useful for informing debate, let alone a useful definition of what is or is not 'artificial intelligence'.

I think we'll have systems that are indistinguishable from, if not significantly better than, humans at most tasks in 2-3 years. Either you won't be able to tell it wasn't done by a human, or you'll be able to tell only because it's so much better than what you would expect a human to be capable of; it will seem 'superhuman' in this regard. Likewise, I think we'll solve the agency problem as well, at least as seen from the outside. I don't think you'll be able to tell any difference between a machine system and a human operating behind a digital screen in 2-3 years.

What is intelligence? What makes some intelligence artificial? Does that divide even make sense? The whole concept is predicated on the assumption that there is something particularly special about whatever it is that humans possess, and when I see people moving goalposts, it strikes me that they are mostly working to protect "whatever" it is humans have as something special or divine.

Realistically, we're about to get passed on the track. And then we're going to get lapped before the naysayers have even noticed we're no longer in the lead. It's an intentional blind spot.

[–] match@pawb.social -1 points 9 months ago (2 children)

it's nice when words have meaning tho

[–] General_Effort@lemmy.world 4 points 9 months ago (1 children)

Yes. The term AI was coined 70 years ago and specifically includes neural nets. LLMs are definitely AI. I don't know what definition people use when they say it's not.

[–] match@pawb.social 2 points 9 months ago (1 children)

Sure, but 60 years ago they coined "machine learning" when it became clear that more work was going to be needed to emulate intelligence.

[–] General_Effort@lemmy.world 3 points 9 months ago (1 children)

That's wrong. Machine learning is considered part of AI. AI is not necessarily about learning; e.g., game AI typically doesn't learn or improve.

emulate intelligence

Feel free to define intelligence and/or emulated intelligence.

[–] match@pawb.social 2 points 9 months ago (1 children)

We're probably talking at cross purposes here. When people say it's not real AI, they usually mean it's not artificial general intelligence, or in many cases not intelligent in the ways suited to the problem being addressed (e.g. ChatGPT used out of the box as a customer service rep).

[–] General_Effort@lemmy.world 2 points 9 months ago

As you said, it's nice when words have meanings.

People who say it's not real AI simply don't know what the word has meant for decades. I think people want to say that it is not an actual person or something like that. Which, of course, it isn't. I have to say, with 8 billion people on the planet, making artificial people would be the greatest waste of human effort I can imagine.

[–] jackalope@lemmy.ml 1 points 9 months ago

AI is a field. Using it in an appeal to "true AI" is meaningless.