this post was submitted on 15 Aug 2023
135 points (79.5% liked)

Technology

75018 readers
4408 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago

Top physicist says chatbots are just ‘glorified tape recorders’::Leading theoretical physicist Michio Kaku predicts quantum computers are far more important for solving mankind’s problems.

top 50 comments
[–] jaden@partizle.com 62 points 2 years ago (17 children)

A physicist is not gonna know a lot more about language models than your average college grad.

[–] ComradeKhoumrag@infosec.pub 1 points 2 years ago (1 children)

I disagree; physics is the foundational science of all sciences. It is the science with the strongest emphasis on understanding math well enough to derive the equations that actually take form in the real world.

[–] jaden@partizle.com 5 points 2 years ago

Therefore, if you know physics, you know everything.

[–] demesisx@infosec.pub 49 points 2 years ago (3 children)

Yes. Glorified tape recorders that can provide assistance and instruction in certain domains that is very useful beyond what a simple tape recorder could ever provide.

[–] TropicalDingdong@lemmy.world 24 points 2 years ago (2 children)

Yes. Glorified tape recorders that can provide assistance and instruction in certain domains that is very useful beyond what a simple tape recorder could ever provide.

I think a good analogue is the invention of the typewriter or the digital calculator. It's not like it's something that hadn't been conceived of, or that we didn't have correlatives for. Is it revolutionary? Yes, the world will change (has changed) because of it. But the really big deal is that this puts a big bright signpost of how things will go far off into the future. The typewriter led to the digital typewriter. The digital typewriter showed the demand for personal business machines like the first Apples.

It's not just about where we're at (and to be clear, I am firmly in the 'this changed the world' camp. I realize not everyone holds that view; but as a daily user/builder, it's my strong opinion that the world changed with the release of ChatGPT, even if you can't tell yet). The broader point is about where we're going.

The dismissiveness I've seen around this tech is, frankly, hilarious. I get that it's sporting to be a curmudgeon, but to dismiss this technology is to completely miss what will be one of the most influential human technologies ever invented. Is this general intelligence? No, but to keep pretending it has to be AGI or nothing is to miss the entire damn point. And this goalpost shifting is how the frog gets slowly boiled.

[–] deranger@sh.itjust.works 4 points 2 years ago (1 children)

I reckon it’s somewhere in between. I really don’t think it’s going to be the revolution they pitched, or some feared. It’s also not going to be completely dismissed.

I was very excited when I started to play with various AI tools, then about two weeks in I realized how limited they are and how much human input and editing they need to produce a good output. There's a ton of hype, and it's had little impact on the regular person's life.

Biggest application of AI I’ve seen to date? Making presidents talk about weed, etc.

[–] TropicalDingdong@lemmy.world 3 points 2 years ago (2 children)

I reckon it’s somewhere in between. I really don’t think it’s going to be the revolution they pitched, or some feared. It’s also not going to be completely dismissed.

Do you use it regularly or develop ML/AI applications?

[–] ekky43@lemmy.dbzer0.com 6 points 2 years ago* (last edited 2 years ago)

Yes. I wrote my master's in engineering about ML/AI (before ChatGPT and YOLO became popular and viable), and am also currently working with multi-object detection and tracking using ML/AI.

It's not gonna be like the invention of the modern computer, but it's probably gonna reach about the same level as Google, or the electric typewriter.

[–] deranger@sh.itjust.works 3 points 2 years ago

I use some image generation tools and LLMs.

I think it's a safe bet to estimate it will work out to be somewhere in the middle of the two extremes. I don't think AI/ML is going to be worthless, but I also don't think it's going to do all these terrible things people are afraid of. It will find its applications and be another tool we have.

[–] Meowoem@sh.itjust.works 2 points 2 years ago (1 children)

I find it fascinating how differently sections of society see the tech; a lot of people seem to think that because it can't do everything, it can do nothing. I've been fascinated by AI for decades, so to have finally cracked language comprehension feels like huge news, because it opens so many other doors - especially in human usability of new tools.

We're going to see a huge shift in how we use technology. I don't think it will be long before we're used to telling the computer what we want it to do: organising pictures, sorting inventory in a game, finding products in shops... Being able to actually tell it 'I want a plug for my bath' and not be offered electrical plugs, even being told 'there are three main types, as seen here; you will need to know the size of your plug hole to ensure the correct fit'.

As the technology refines, we'll see it get increasingly reliable for things like legal and medical knowledge; even if it's just referring people to doctors, it could save a huge number of lives.

It's absolutely going to have as much effect on our lives as the internet's development did, but I think a lot of people forget how significant that really was.

[–] TropicalDingdong@lemmy.world 2 points 2 years ago

I agree. I also think about being able to send a request like: "Here is an example of a Bluetooth driver for Linux. It isn't working anymore because of a kernel update (available here). Please update the driver and test it. You have access to the port, and there is a Bluetooth device available for connection. Please develop a solution, test it, write a unit test, and make commits along the way (with comments please). Also, if you have any issues, email me at example@example.com and I'll hop back online to help you. Otherwise, keep working until you are finished and have a working driver."

Are we there yet? No, I've tried some of the recursive implementations and I've yet to have them generate something completely functional. But there is a clear path from the current technology to that implementation. It's inevitable.

[–] whatisallthis@lemm.ee 3 points 2 years ago (1 children)

Well it’s like a super tape recorder that can play back anything anyone has ever said on the internet.

[–] kinsnik@lemmy.world 2 points 2 years ago

yeah, a "tape recorder" that adapts to what you ask... if there was a tape recorder before where I could put in the docs I've written and get recommendations on how to improve my style and organization, I missed it

[–] trekky0623@startrek.website 40 points 2 years ago* (last edited 2 years ago) (1 children)
[–] a_spooky_specter@lemmy.world 18 points 2 years ago

He's not even a top physicist, just well known.

[–] Goodman@discuss.tchncs.de 25 points 2 years ago (1 children)

I wouldn't call this guy a top physicist... I mean, he can say what he wants, but you shouldn't be listening to him. I also love that he immediately starts shilling his quantum computing book right after his statements about AI. And mind you, this guy has some real garbage takes when it comes to quantum computers. Here is a fun review if you are interested: https://scottaaronson.blog/?p=7321.

The bottom line is: you shouldn't trust this guy on anything he says, except maybe string theory, which is actually his specialty. I wish news outlets would stop having this guy on; he is such a fucking grifter.

[–] hoodlem@hoodlem.me 8 points 2 years ago* (last edited 2 years ago)

I wouldn't call this guy a top physicist... I mean he can say what he wants but you shouldn't be listening to him.

Yeah, I don't see how he has any time to be a "top physicist" when it seems he spends all his time as a commenter on TV shows that are tangentially related to his field. On top of that, LLMs are not even tangentially related.

[–] PixelProf@lemmy.ca 17 points 2 years ago* (last edited 2 years ago) (1 children)

I understand that he's placing these relative to quantum computing, and that he is specifically a scientist who is deeply invested in that realm; it just seems too reductionist from a software perspective. Ultimately, yeah - we are indeed limited by the architecture of our physical computing paradigm, but that doesn't discount the incredible advancements we've made in the space.

Maybe I'm being too hyperbolic over this small article, but does this mean any advancement in CS research is basically just a glorified (insert elementary mechanical thing here) because it uses bits and von Neumann architecture?

I used to adore Kaku when I was young, but as I got into academics, saw how attached he was to string theory long after its expiry date, and saw how popular he got on pretty wild and speculative fiction, I struggle to take him too seriously in this realm.

In my experience, which comes from years in labs working on creative computation, AI, and NLP, these large language models are impressive and revolutionary, but quite frankly, for dumb reasons. The transformer was a great advancement, but seemingly only because we piled obscene amounts of data on it, previously unspeculated-of amounts. Now we can train smaller bots off the data from these bigger ones, which is neat, but it's still that mass of data.

To the general public: Yes, LLMs are overblown. To someone who spent years researching creativity-assistance AI and NLP: These are freaking awesome, and I'm amazed at the capabilities we have now in creating code that can do qualitative analysis and natural language interfacing, but the model is unsustainable unless techniques like Orca come along and shrink down the data requirements. That said, I'm running pretty competent language and image models on a relatively cheap consumer video card with 12GB of VRAM, so we're progressing fast.

Edit to Add: And I do agree that we're going to see wild stuff with quantum computing one day, but that can't discount the excellent research being done by folks working with existing hardware, and it's upsetting to hear a scientist balk at a field like that. And I recognize I led this by speaking down on string theory, but string theory pop science (including Dr. Kaku) caused havoc in people taking physics seriously.

[–] Goodman@discuss.tchncs.de 12 points 2 years ago* (last edited 2 years ago) (1 children)

He is trying to sell his book on quantum computers, which is probably why he brought it up in the first place.

[–] PixelProf@lemmy.ca 7 points 2 years ago

Oh for sure. And it's a great realm to research, but pretty dirty to rip apart another field to bolster your own. Then again, string theorist...

[–] A2PKXG@feddit.de 16 points 2 years ago

Just set your expectations right, and chatbots are great. They aren't intelligent. They're pretty dumb. But they can say stuff about a huge variety of domains.

[–] ClemaX@lemm.ee 14 points 2 years ago (1 children)

Well, one could argue that our brain is a glorified tape recorder.

[–] LapGoat@pawb.social 6 points 2 years ago

behold! a tape recorder.

holds up a plucked chicken

[–] Feathercrown@lemmy.world 9 points 2 years ago

He's a physicist. That doesn't make him wise, especially in topics that he doesn't study. This shouldn't even be an article.

[–] eestileib@sh.itjust.works 5 points 2 years ago

Kaku is a quack.

[–] Bishma@discuss.tchncs.de 4 points 2 years ago (2 children)

I call them glorified spreadsheets, but I see the correlation to recorders. LLMs, like most "AIs" before them, are just new ways to do line-of-best-fit analysis.
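The "line of best fit" framing can be made literal: strip a neural network down to a single linear unit trained with squared-error loss, and what you get is exactly an ordinary fitted line. A toy sketch (the data here is made up for illustration):

```python
import numpy as np

# Toy data along y = 2x + 1 with a little noise
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 0.1, x.size)

# A single linear "neuron" trained by least squares is literally a line of best fit
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)  # close to the true 2 and 1
```

Everything beyond this, in the spreadsheet view, is stacking and bending very many such fits at once.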

[–] feedum_sneedson@lemmy.world 7 points 2 years ago (1 children)

That's fine. Glorify those spreadsheets. It's a pretty major thing to have cracked.

[–] Bishma@discuss.tchncs.de 2 points 2 years ago

It is. The tokenization and intent processing are the things that impress me most. I've been joking since the '90s that the most impressive technological innovation shown on Star Trek: TNG was computers that understand the intent of instructions. Now we have that... mostly.
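For a rough sense of what the tokenization half involves: modern models split text into subword pieces before doing anything else. The vocabulary below is made up for illustration (real tokenizers such as BPE learn theirs from data), but the greedy longest-match idea can be sketched as:

```python
# Hypothetical toy subword vocabulary; real ones have tens of thousands of entries
vocab = {"un": 0, "break": 1, "able": 2, "re": 3, "cord": 4, "er": 5}

def tokenize(word, vocab):
    """Greedily take the longest vocab piece matching at each position."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):  # longest candidate first
            piece = word[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            raise ValueError(f"cannot tokenize {word[i:]!r}")
    return tokens

print(tokenize("unbreakable", vocab))  # ['un', 'break', 'able']
print(tokenize("recorder", vocab))     # ['re', 'cord', 'er']
```

The intent-understanding half is what the model does with the resulting token IDs, and that part has no comparably tidy sketch.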

[–] Prager_U@lemmy.world 1 points 2 years ago (2 children)

To counter the grandiose claims that present-day LLMs are almost AGI, people go too far in the opposite direction. Dismissing it as being only "line of best fit analysis" fails to recognize the power, significance, and difficulty of extracting meaningful insights and capabilities from data.

Aside from the fact that many modern theories in human cognitive science are actually deeply related to statistical analysis and machine learning (such as embodied cognition, Bayesian predictive coding, and connectionism), referring to it as a "line" of best fit is disingenuous because it downplays the important fact that the relationships found in these data are not lines, but rather highly non-linear high-dimensional manifolds. The development of techniques to efficiently discover these relationships in giant datasets is genuinely a HUGE achievement in humanity's mastery of the sciences, as they've allowed us to create programs for things it would be impossible to write out explicitly as a classical program. In particular, our current ability to create classifiers and generators for unstructured data like images would have been unimaginable a couple of decades ago, yet we've already begun to take it for granted.
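The "not actually a line" point is easy to make concrete with XOR, the textbook function no linear fit can capture. In this numpy sketch the nonlinear feature x1*x2 is hand-picked for illustration, standing in for the features a network would learn:

```python
import numpy as np

# XOR truth table
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Best purely linear fit (with bias): predicts 0.5 everywhere -- useless
A = np.hstack([X, np.ones((4, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print(A @ w)  # ~[0.5, 0.5, 0.5, 0.5]

# Add one nonlinear feature (x1 * x2) and the fit becomes exact
A2 = np.hstack([A, (X[:, 0] * X[:, 1])[:, None]])
w2, *_ = np.linalg.lstsq(A2, y, rcond=None)
print(np.round(A2 @ w2, 6))  # [0, 1, 1, 0]
```

The hard part that deep learning solved is discovering useful nonlinear features like this automatically, in millions of dimensions, rather than by hand.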

So while it's important to temper expectations that we are a long way from ever seeing anything resembling AGI as it's typically conceived of, oversimplifying all neural models as being "just" line fitting blinds you to the true power and generality that such a framework of manifold learning through optimization represents - as it relates to information theory, energy and entropy in the brain, engineering applications, and the nature of knowledge itself.

[–] FlyingSquid@lemmy.world 2 points 2 years ago

More people need to learn about Racter. This is nothing new.

[–] akd@lemm.ee 2 points 2 years ago (1 children)
[–] Feathercrown@lemmy.world 2 points 2 years ago

Thanks for the good article link

[–] NeoNachtwaechter@lemmy.world 1 points 2 years ago

That's an incredibly cool explanation.
