this post was submitted on 03 Feb 2024
83 points (100.0% liked)
Technology
you are viewing a single comment's thread
People now: "ChatGPT isn't real AI because it says dumb shit all the time." People then: "Prolog is AI because it can solve logic problems."
Something about moving goalposts, or something.
They are two different parts of the same problem. Prolog can solve logical problems through symbolic reasoning. ChatGPT cannot solve logical problems, but it can approximate human language to an astonishing degree. If we ever create an AI (or what we now call an AGI), it will include elements of both approaches.
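Prolog's style of symbolic deduction can be sketched in a few lines of Python (a toy forward-chaining rule, not Prolog's actual resolution engine; the facts and relation names are invented for illustration). The point is that every conclusion follows mechanically from explicit symbols, so it is traceable in a way an LLM's output is not:

```python
# Facts are explicit symbolic tuples: (relation, subject, object).
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def grandparents(facts):
    """Derive ("grandparent", X, Z) whenever ("parent", X, Y)
    and ("parent", Y, Z) are both present, by exhaustively
    matching pairs of facts (forward chaining)."""
    derived = set()
    for (r1, x, y1) in facts:
        for (r2, y2, z) in facts:
            if r1 == "parent" and r2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

print(grandparents(facts))  # {("grandparent", "tom", "ann")}
```

In Prolog this is one rule, `grandparent(X, Z) :- parent(X, Y), parent(Y, Z).`, and the engine does the matching for you.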
In “Computing Machinery and Intelligence”, Turing made some really interesting observations about AI ("thinking machines" and "learning machines", as they were called then). The paper demonstrates stunning foresight:
You can view ChatGPT and Prolog as two ends of the spectrum Turing is describing here. Prolog is "thinking rationally": it is predictable and logical. ChatGPT is "acting humanly": it is an unpredictable, "undisciplined" model, but it does exhibit very human-like behaviours. We are "quite ignorant of what is going on inside". Neither approach alone is enough to achieve AGI, but they are such fundamentally different approaches that it is difficult to conceive of them working together except through some intermediary like Subsumption Architecture.
This is what I expect too. And hope: LLMs are far too unpredictable to control important things on their own.
I often say LLMs are doing for natural language what early computation did for mathematics. There are still plenty of mathy jobs computers can't do, but the really repetitive ones are gone and somewhat forgotten; nobody thinks of "computer" as a job title anymore.