1980: TVs will fry your brain
1990: Videogames will fry your brain
2000: Computers will fry your brain
2010: Smartphones will fry your brain
2020: AI will fry your brain
Any takes for the 2030s?
Well looking around at where we are today, maybe TVs did fry our brains.
I mean, looking at our current dystopian reality, I feel you just made a really good point about tech growing to a point where it fully captures you away from reality, and indeed fries your brain by convincing you that fantasies are real.
MAGA is a great example of people with brains so fried they think a pedophile ex-conman with 34 felonies, who killed millions of Americans through a poor pandemic response, is somehow helping them by destroying USAID, DEI, healthcare, and Social Security.
Their brains are gonzo, fried through the constant, deliberate exploitation of all the tech you just mentioned combined.
AI will absolutely make it worse.
Climate change.
Literally.
Neural implants? Only this time they're really going to fry your brain.
2030: Cyborg w/AI will fry your brain. Literally though.
And before that books and comics. But LLMs are different: they pretend to be your friend but actually just encourage whatever you come up with. You can easily fry people's brains by being their sycophant, now everyone can subscribe to one.
2030: Critical thought will fry your brain
Should we trust a researcher whose brain got fried? Did they remember to do the old double-blind setup before the frying of the brains occurred?
I fucking hate this AI shit, but I'll admit I end up using Gemini (knowing it's wrong sometimes). It's like how I'd use Google, but for more complex asks instead of simple search queries. I couldn't imagine using it beyond that, other than a follow-up or two.
It's just a chatbot that has access to info, who goes onto their cable companies website and befriends the chatbot?
I have found Google search to be getting progressively worse, whereas I can type out a question to Gemini that will return better results than Google search. It's annoying that Google search has gotten so bad, and DuckDuckGo will return you something interesting but not relevant. So Gemini is my Google search nowadays.
It very well may be intentional: to drive people away from traditional search and into Gemini.
I've used GPT a couple of times when I had been searching the web and forums for well over an hour and found nothing relevant enough to work. The issue got solved in 5-10 minutes.
They enshittified the search so now using the chatbot is more useful. The search just returns slop and even fake slop forums.
Pretty much. Can't find useful info without having to put in a lot of extra work that I wouldn't have a decade ago.
Fuck though, I love being able to ask it for part numbers and info. Much less hassle to ask it than to use the shitty corpo parts catalogues' search features, especially when there are weird naming schemes and a lack of descriptions. Clicking through 50 parts trying to find the right one sucks.
It's more that SEO is so well known at this point that you can whip up whatever AI-generated garbage you want to be ranked high on search engines in seconds. For now the AIs are just better at "wading" through the trash, since they somewhat curate the data they're training on. Once all they can train on is slop, you'd better hope you still have some encyclopedias and textbooks lying around.
I mean, I have been using DDG for years now. I just could not find the right answer for my specific issue on my specific Linux distro, and AI was sadly just faster.
According to a new study by researchers at Carnegie Mellon, MIT, Oxford, and UCLA,
Study should be solid I guess.
participants who were given AI assistants (in this case, a chatbot powered by OpenAI’s GPT-5 model) would have the aid pulled from them without warning during the test
Wow, interesting idea. 👍
where they had their assistant removed, the AI group saw the solve rate fall off a cliff. They had a solve rate about 20% lower
And even worse IMO:
They also had nearly double the skip rate, meaning they simply chose not to solve the questions.
This seems very alarming IMO, because it indicates they lost some of their ability to think constructively about how to actually solve a problem!
I know there have always been some who cried wolf every time new technology has become available, like calculators and computers. Even dictionaries were once claimed to be harmful!
But maybe this time there is a real danger, because AI takes away a lot of the need to actually think creatively and constructively. And that's an ability we must not lose.
The last paragraph of the article is even worse, as it mentions two studies that show these effects are also long-term!
When driving somewhere, if I set out with the mindset that I can’t rely on gps I can usually wing it and figure out where to go when a hiccup occurs. If I don’t, then I have a lot of trouble getting into that path finding mode when needed… similar to this maybe?
Yeah exactly, because although it's possible to do more with technology sometimes, you're actively de-skilling at the same time. When we invented the written word yes it legitimately made everything better, but also we lost oral traditions and the capacity to memorize large volumes of storytelling, songs, and histories. Now you can burn the books, and the knowledge dies. It's a real risk.
Everything is like this. Every technology has a cost beyond its price, and making a decision of whether to use it or not will always be in error unless you think about what you're losing in the process.
Changing the terms of the test in the middle of it, without warning, is disruptive. I’m not convinced it “fried their brains.” The same would happen with a calculator suddenly removed during the middle of an exam.
When I use AI for my personal coding projects, I've found that if the task is unsolvable by the AI model, I'm not able to sit down and do it myself until the next day. It's like I've got to reset my brain.
If I want to save time and use AI for a specific part of the code, it probably saves me five hours of work. But then I spend five hours yelling at the AI to try to get it to actually solve it. The next day I'll just fix it myself in two hours.
But what you're describing is not that uncommon, even without AI: oftentimes, when trying to solve a complex problem and being unsuccessful, you have to reset your brain by doing something fundamentally different or getting a good night of sleep, and after that you solve the problem easily.
Maybe what you're experiencing is not AI-related at all.
You're probably right, but I think it's made worse by AI. Jumping into the code after three hours of Claude doing the dirty work feels like an impossibility.
i think reading the title of this post hurt my brain. like what are we doing here? making medical claims using sensationalist and meaningless language... seems unhelpful
AI is like a dog looking at itself in a mirror.
Some dogs are smart, and understand that this is a tool and that it is there to help you see things better.... Some dogs are fucking morons and think their reflection is another dog, and they wanna fuck and fight....
There are a ton of good use cases for ai, and none of them include coquettish sexbots or drawings of me as a Simpson or a Ghibli sketch.
The test seems kind of dogshit: you could make the same argument against any tool; calculators or even abacuses would have the same effect.
I'm made to use it for work, and it does speed up some tasks. However, for some stuff it ends up being like the experiment, where not doing the work the first time means the whole process takes longer in the end.
To add to this, we already know that context switching causes a loss in performance.
A person who's thinking about how to solve a problem one way and then has to suddenly think about solving it in another way will perform worse.
The Neuroscience Behind the Pain
Context switching isn’t just annoying — it’s neurologically expensive. When you shift from debugging a race condition to answering emails, your brain doesn’t simply “change tabs.” It goes through a complex process:
- Memory consolidation: Storing your current mental model
- Attention disengagement: Breaking focus from the current task
- Cognitive reloading: Building a new mental model for the next task
- Re-engagement: Getting back into flow
Research from Carnegie Mellon shows that even brief interruptions can increase task completion time by up to 23%. For complex cognitive work like programming, this cost multiplies dramatically.
Here's another article from CMU discussing the same thing: https://www.sei.cmu.edu/blog/addressing-the-detrimental-effects-of-context-switching-with-devops/
What this study shows is that a person who is faced with an unexpected context switch performs worse on a task than a person who has spent the last 12 questions performing the task the same way.
This exact problem would happen if you replaced AI with a calculator, or made a person swap from using paper to doing mental math. The problem here is context switching, not AI.
The way to ensure that the problem is AI and not the context switch would be to continue the test and see if the first group reverts back to baseline after 12 questions. Twelve questions is how long the control group had to become acclimated to the task before their last context switch at the start of the test.
Also, of note: this is a paper on arXiv. It is not published, so it has not gone through a peer-review process, which would certainly catch the failure to set a proper control group.
I think the key point is that you're not outsourcing critical thinking to LLMs, but are instead using them as a tool to do grunt work that you could've done yourself, but that the LLM can pump out faster. This means constantly being critical of everything it does: asking questions, asking for links to credible sources, asking it to provide info to help evaluate the pros and cons of multiple approaches, with you making the decisions and learning along the way. Overall, any work an LLM produces that will have your name on it should be work you entirely understand and agree with. For coding, I find agent markdown files to be especially helpful for making sure the LLM follows my desired practices without me constantly making it refactor.
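As an illustration of the kind of agent markdown file described above, here is a minimal sketch. The filename and every rule in it are hypothetical examples, not taken from the thread or any specific tool's documentation:

```markdown
<!-- AGENTS.md (hypothetical) — project-level instructions a coding agent reads before making changes -->
# Conventions for this repo

- Prefer small, pure functions; avoid functions longer than ~40 lines.
- Every public function gets a docstring and at least one unit test.
- Never add a new dependency without flagging it for human review.
- When a requirement is ambiguous, ask instead of guessing.
- Explain the trade-offs of an approach before implementing it.
```

The point of a file like this is that the rules travel with the repo, so you state your practices once instead of re-litigating them every session.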
Largely, my assumption at this point is that LLMs may not always be around, so I definitely don’t want to be left holding the bag with a bunch of slop I can’t manage on my own. I think I’ll feel better when I can run open weight models on my own hardware that are fully competitive with cloud models. With models like Qwen 3.6 27B, it seems we are getting closer to that.
A study already came out showing that high schoolers are graduating unable to read or write; functionally illiterate.
I really do see the issue with AI. I see people around me outsource their thinking to it too much. Literally, as if they are happy that a machine can make their life choices for them. This is extremely worrying. It's about how people use it.
Those are important studies, but nothing shocking. The conclusion to draw from them is the same one we've drawn from all technologies that have improved our lives to some degree: without them, we tend to either be incompetent, because losing access to them isn't worth planning for, or we are demotivated, because why would we deprive ourselves of technology that makes our work so much less exhausting?
It doesn't necessarily remove our capacity to think (and the article falsely generalises to critical thinking), it shifts what kind of thinking we do.
If AI is as good or better than I am at writing code, then I'll switch my brain to only do the orchestrating and architecture rather than the writing code part. And yes, if you remove AI, then the switch will cause me to perform less than I used to before AI, but not permanently, only until I get used to it again.
If an AI is better than a doctor at finding cancer indicators, then the doctor will focus their mind on finding solutions only rather than splitting it on both the detection and solution.
This is not new, not bad, and I'll even go to the extent of saying it's a great use of AI: humans evolved for specialization. The less varied the tasks are, the better we are at the subset we specialize in. That's what has driven our rapid technological and societal advances in the past millennia.
But, AI has many issues and many detrimental applications as well, so don't see this comment as a full endorsement of AI.