I know very little about ML (essentially nothing; my background is in economics and a bit of statistics), but isn't AGI still miles away from these models?
Like, my understanding of LLMs is that they essentially "predict" the most likely next word in response to a prompt, then predict another word based on everything written so far, and so on. Actual human-level intelligence seems to me to be a degree of complexity higher.
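For what it's worth, the "predict a word, append it, repeat" loop described above can be sketched in a few lines of Python. The table-lookup "model" here is a made-up toy, nothing like a real transformer; it only shows the shape of the autoregressive loop, where each new word is chosen from the context generated so far:

```python
import random

# Toy "model": a fixed table of possible follow-up words.
# A real LLM would score every token in its vocabulary instead.
NEXT_WORDS = {
    "the": ["cat", "dog"],
    "cat": ["sat", "ran"],
    "dog": ["barked"],
    "sat": ["down"],
}

def generate(prompt_words, max_new=5, seed=0):
    """Autoregressive loop: predict one word, append it,
    and feed the growing sequence back in as context."""
    rng = random.Random(seed)
    words = list(prompt_words)
    for _ in range(max_new):
        candidates = NEXT_WORDS.get(words[-1])
        if not candidates:
            break  # no known continuation, stop generating
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate(["the"]))
```

Whether stacking enough of this kind of next-word prediction adds up to human-level intelligence is exactly the open question.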
I dunno. I'm genuinely asking, dude.