So, if we put AI in an echo chamber it gets dumber? Wow it really does think like humans.
Most human-like thing about it yet, in fact.
The AI reaffirming its incorrect data by telling itself that it is correct.
So what I'm hearing is that if we don't like the direction AI is taking us, we should be littering the internet with as much AI text and art as we can while pretending it's not AI.
Separately, with how popular AI is obviously poised to become, does this mean we'll stagnate culturally? With AI making it extremely difficult for artists, authors, and other creatives to monetize their work, since the bot will always replicate it quicker, cheaper, and in higher quantity than they can, these things will become much less human-generated. If AI cannot get past this, we'll just be stuck here, with little cultural evolution.
Synthetic diamonds would have replaced natural diamonds already if we were as rational as you suggest.
Feels like AI creators can only get away with using pre-2022 data for so long. At some point the information will be outdated and they'll have to train on newer data, and it'll be interesting to see if this is a problem that can be solved without harming the dataset's quality.
My guess is they'd need to have an AI that tries to find blatantly AI-generated data and take it out of the dataset. It won't be 100% accurate, but it'll be better than nothing.
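A minimal sketch of that filtering step, assuming some detector exists (the `ai_likelihood` function below is a made-up placeholder, not any real library call), might look like:

```python
# Rough sketch: drop documents a (hypothetical) detector flags as likely AI-generated.
# `ai_likelihood` is a stand-in for whatever real classifier you'd actually use
# (a trained detector, watermark check, perplexity test, etc.).

def ai_likelihood(text: str) -> float:
    # Placeholder: a real detector would return a probability in [0, 1] here.
    return 0.0

def filter_corpus(documents: list[str], threshold: float = 0.9) -> list[str]:
    """Keep only documents the detector considers probably human-written."""
    return [doc for doc in documents if ai_likelihood(doc) < threshold]

corpus = ["some scraped web page ...", "another document ..."]
clean_corpus = filter_corpus(corpus)
```

The threshold is the "not 100% accurate" part: set it strict and you throw away real human writing, set it loose and synthetic text slips back into the training set.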
I'm surprised these models don't have something like a "ground truth layer" by now.
Given that ChatGPT, for example, is completely unspecialized, I would have expected there to be a way to hand-encode axiomatic knowledge, like specialized domain knowledge or even just basic math. Even tiered data (i.e. more/less trusted sources) seems not to be part of the design.
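To illustrate what I mean by tiered data, here's a toy sketch where more trusted sources simply get sampled more often during training; the tier names and weights are made up for illustration, not anyone's actual pipeline:

```python
import random

# Toy illustration of tiered training data: more trusted sources are
# sampled more often. Tiers and weights are invented for this example.
tiers = {
    "curated_reference": {"weight": 5.0, "docs": ["encyclopedia entry ..."]},
    "reviewed_forums":   {"weight": 2.0, "docs": ["upvoted answer ..."]},
    "raw_web_scrape":    {"weight": 1.0, "docs": ["random blog post ..."]},
}

def sample_document(tiers: dict) -> str:
    names = list(tiers)
    weights = [tiers[n]["weight"] for n in names]
    tier = random.choices(names, weights=weights, k=1)[0]
    return random.choice(tiers[tier]["docs"])

print(sample_document(tiers))
```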
Because it's not designed to be a knowledge base, it's designed to imitate human communication. It's the same reason why ChatGPT can't do maths - it doesn't "know" anything, it just predicts the most likely word/bit-of-a-word to come next. ChatGPT being as good as it is at, say, writing code given a natural language prompt is sort of just a happy accident, but people now expect that to be its primary function.
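Stripped of everything else, that "predict the most likely next token" loop is basically this (a toy sketch with a hard-coded probability table standing in for the neural network, nothing like a real implementation):

```python
# Toy sketch of greedy next-token generation. `next_token_probs` is a
# stand-in for the actual model; here it's just a tiny lookup table.
def next_token_probs(context: list[str]) -> dict[str, float]:
    table = {
        ("the",): {"cat": 0.6, "dog": 0.4},
        ("the", "cat"): {"sat": 0.9, "ran": 0.1},
    }
    return table.get(tuple(context), {"<end>": 1.0})

def generate(context: list[str], max_tokens: int = 10) -> list[str]:
    for _ in range(max_tokens):
        probs = next_token_probs(context)
        best = max(probs, key=probs.get)  # always take the most likely token
        if best == "<end>":
            break
        context.append(best)
    return context

print(generate(["the"]))  # -> ['the', 'cat', 'sat']
```

There's no step anywhere in that loop where the model checks a fact against ground truth; plausibility is the only criterion.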
I think this is something that's easier said than done. Maybe at our current level, but as these AIs get more advanced... what is truth? Sure, mathematics seems like an easy target, until we consider that one of the best use cases for AI could be theory. An AI could have a fresh take on our interpretation of mathematics, where these base-level assumptions would actually be a hindrance.
I mean, let's be honest here: AI will not primarily be used to discover new truths about the universe, but to order butter at the right time. Or to write basic essays, write code, and explain known things.
That kind of knowledge could easily be categorized.
Interesting article. I find it hard to believe the major AIs of today will collapse for such a reason; this gives me "year 2000 collapse" vibes. I'm by no means trashing the article, just saying I'm skeptical we'll ever reach such a point. The article itself already mentions awareness of the feedback loop among devs, as well as two possible ways to counteract it.