this post was submitted on 23 Nov 2023

Machine Learning


According to one of the sources, long-time executive Mira Murati told employees on Wednesday that a letter about the AI breakthrough called Q* (pronounced Q-Star), precipitated the board's actions.

The maker of ChatGPT had made progress on Q*, which some internally believe could be a breakthrough in the startup's search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans.

https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/

[–] AdWestern1314@alien.top 1 points 1 year ago

I am not saying AGI is impossible, but the arguments that we are close to achieving it sound more like wishful thinking.

A couple of questions/comments I have:

  1. People assume that progress is exponential, or at least linear, but that is not necessarily true: it depends on what is possible with the resources we have and on the limitations of the physical world we live in.
  2. GPT-4 has the appearance of being intelligent rather than being intelligent. How will we be able to tell the difference? What will prevent us from being fooled in a similar way with future systems?
  3. Isn’t there an issue with using benchmarks that have been around for a while to measure the performance of AI systems? Are we not, perhaps unconsciously, improving the scores on these tests rather than improving the systems themselves?
  4. Without understanding our own intelligence (or lack thereof), how are we going to understand AI?
  5. What is the goal of AI?