I don't need a theory for this; you're being highly reductive by focusing on a few features of human communication.
What research? These bots aren't that complicated beyond an optimisation algorithm. Regardless of the tasks you give them, they can't evolve beyond what they are.
There's no way these chatbots are capable of evolving into Ultron. That's like saying a toaster is capable of nuclear fusion.
Sounds like a great car! It does seem like something's wrong with the battery so a replacement is in order.
From the replies I've been getting, I think so.
My mum's 2019 Toyota Yaris has to have its engine run every few days or the battery dies from just sitting on the driveway. It could just be a faulty battery, but considering the car isn't even that old and has barely done 30k miles, it's not doing so great. I discovered yesterday that my EV charges better after I've driven it around and the battery's warmed up a bit. The car goes a bit haywire when you cold start it, so it seems like it needs some prep time before a drive.
Yeah, but the difference is we still choose our words. We can still alter sentences on the fly. I can think of a sentence and understand that verbs go after the subject, but I still have the cognition to alter the sentence to have the effect I want. The thing lacking in LLMs is intent, and I'm yet to see anyone tell me why a generative model decides to draw hands with more than six fingers. As humans we know hands generally have five fingers, and that some people don't, so unless we specifically wanted to draw a person with a different number of fingers, we'd draw five. A generative art model can't stop itself from drawing extra fingers because all it understands is that "finger + finger = hand"; it has no concept of when to stop.
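To make concrete what I mean by "no concept of when to stop", here's a toy sketch in Python. It's not how any real image model actually works; it's a made-up greedy generator where each step only looks at local continuation probabilities, so nothing in it represents the intent "a hand has five fingers".

```python
import random

# Toy "generator": each step samples the next part of the hand purely from
# local continuation probabilities. Nothing here encodes the global intent
# "a hand should end up with five fingers" (all numbers are made up).
CONTINUATIONS = {
    "palm":   [("finger", 0.9), ("stop", 0.1)],
    "finger": [("finger", 0.7), ("stop", 0.3)],  # locally, another finger always looks likely
}

def draw_hand(seed=None):
    rng = random.Random(seed)
    parts = ["palm"]
    while parts[-1] != "stop":
        options, weights = zip(*CONTINUATIONS[parts[-1]])
        parts.append(rng.choices(options, weights=weights)[0])
    return parts[:-1]  # drop the "stop" marker

for i in range(5):
    print(f"hand {i}: {len(draw_hand(seed=i)) - 1} fingers")
```

Sometimes it comes out with three fingers, sometimes eight, because the only thing steering it is local probability, not a plan for the whole hand.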
A comedian isn't forming a sentence based on which word is most likely to appear after the previous one. This is such a bullshit argument that reduces human competency to "monkey see thing to draw thing" and completely overlooks the craft and intent behind creative works. Do you know why ChatGPT uses certain words over others? Probability. It decided as a result of its training that one word would appear after the previous in certain contexts. It absolutely doesn't take into account things like "maybe this word would be better here because the sound and syllables maintain the flow of the sentence".
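For anyone who wants "probability" spelled out, here's a toy sketch in Python. The bigram counts are made up and this isn't ChatGPT's actual model (that's a neural network over tokens, not a lookup table), but the shape of the decision is the same: pick a likely next word given what came before, with nothing anywhere scoring rhythm, syllables, or where the sentence is heading.

```python
# Toy next-word picker: choose whichever word most often followed the previous
# word in some (made-up) training counts. There is no notion of rhythm,
# syllable count, or the joke the sentence is building towards.
BIGRAM_COUNTS = {
    "the": {"cat": 40, "comedian": 5, "punchline": 2},
    "cat": {"sat": 30, "slept": 10},
    "sat": {"on": 50},
    "on":  {"the": 60},
}

def next_word(previous):
    followers = BIGRAM_COUNTS.get(previous)
    if not followers:
        return None
    # Greedy: always the most frequent continuation, nothing else considered.
    return max(followers, key=followers.get)

def generate(start, max_words=8):
    words = [start]
    while len(words) < max_words:
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # "the cat sat on the cat sat on"
```

Real models sample from a distribution rather than always taking the top word, but the point stands: the choice is frequency-driven, not craft-driven.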
Baffling takes from people who don't know what they're talking about.
The problem isn't the misinformation itself, it's the rate at which misinformation is produced. Generative models lower the barrier to entry so anyone in their living room somewhere can make deepfakes of your favourite politician. The blame isn't on AI for creating misinformation, it's for making the situation worse.
Good news is that the Earth will stabilise once we're not around to fuck it up.
On the point of being a brain surgeon, it's not like the NHS is in great shape to be affording one of those! I wonder how that could have happened.
I've done this dance already and I'm tired of their watered-down attempts at bringing human complexity down to a level that makes their chatbots seem smart.