This is one I'd say is comparing apples to oranges. Each is good and bad in different ways. The unfortunate thing is that our current government wants the UK to be more like the US, which will be a net negative for everyone in the UK. For example, they've been gutting the NHS for years to pave the way for a privatised hellscape.
Phanatik
"Puddle-deep analyses" are all that's required with LLMs because they're not complicated. We've been living with the same tech for years through machine learning algorithms of regression models except no one was stupid enough to use the internet as their personal training model until OpenAI. ChatGPT is very good at imitating intelligence but that is not the same as actually being intelligent.
OpenAI, and the rest of the industry by extension, have done a wonderful job with their marketing by lowering the standards for what constitutes an AI.
I'm sure governments will treat this with as much urgency as they have for the past two decades or so. Whatever they're planning, it's not happening fast enough, and with the biggest offenders not even approaching the table to discuss a solution, I'm not optimistic 2040 is feasible.
It's almost like the incessant marketing of standard optimisation algorithms as artificial intelligence has diluted the tech industry with meaningless buzzwords.
That's only *checks notes* 10 years after we're all fucked.
As much as people like to crow on and on about having choices, they don't actually like making choices. It's been the same with Linux: there's so much you can configure and change that you have to be proactive with your choices. It's why people gravitate to Apple's simple interface, where every decision has been made for you and wanting to change how the system works is met with "go fuck yourself".
I'm a data analyst moving into data science. I have been ranting about this since the beginning, but everyone's too obsessed with the new shiny thing.
It's just an optimisation algorithm that's learned that certain words have different probabilities of occurring depending on context.
I had to remind my friend, "when it tells you something, it has no idea what it's just told you", because that's all it really does; it spits out text based on a guessed context but has no comprehension of that context.
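To make the "words have different probabilities depending on context" point concrete, here's a toy sketch of the idea using a bigram model. This is a deliberately tiny illustration (a real LLM conditions on far longer context with a neural network, not raw counts), and the corpus here is a made-up example:

```python
import random
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for the training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: an estimate of P(next word | current word).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word_probs(word):
    """Probability of each word occurring right after `word` in the corpus."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# "the" is followed by "cat" half the time, "mat" and "fish" a quarter each.
print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}

def generate(word, n=5):
    """Spit out text by repeatedly sampling a guess for the next word."""
    out = [word]
    for _ in range(n):
        counts = follows[out[-1]]
        if not counts:
            break
        out.append(random.choices(list(counts), weights=counts.values())[0])
    return " ".join(out)

print(generate("the"))
```

The generator never "knows" what it just said; it only keeps picking a statistically plausible next word, which is the point being made above, just at a vastly smaller scale.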
How are people okay with this sort of treatment of children? They're kept in cells and punished for acting out when their needs aren't being met. They're fucking children, you can't do this to them and expect them to turn out healthy. It's so sadistic and cruel and it makes me want to hurt the monsters running this facility.
Yeah, I'm sure Microsoft is happy with the theft of copyrighted works and people's personal information.
It would help with this sentiment if the current rail network was cheaper to use.
Did you finish BG3? In-game playtime for me is 65ish hours and I'm only at the start of Act 2.
There's a difference between using ChatGPT to help you write a paper and having ChatGPT write the paper for you. The latter constitutes plagiarism, which schools and universities are strongly against.
The problem is being able to differentiate between a paper that's been written by a human (which may or may not be written with ChatGPT's assistance) and a paper entirely written by ChatGPT and presented as a student's own work.
I want to strongly stress that the latter situation is plagiarism. The argument doesn't even involve the plagiarism that ChatGPT itself commits. The definition of plagiarism is simple: ChatGPT wrote the paper, you the student did not, and you are presenting ChatGPT's paper as your own; ergo, plagiarism.