this post was submitted on 15 Oct 2023
956 points (97.1% liked)

Technology


Google has plunged the internet into a “spiral of decline”, the co-founder of the company’s artificial intelligence (AI) lab has claimed.

Mustafa Suleyman, the British entrepreneur who co-founded DeepMind, said: “The business model that Google had broke the internet.”

He said search results had become plagued with “clickbait” to keep people “addicted and absorbed on the page as long as possible”.

Information online is “buried at the bottom of a lot of verbiage and guff”, Mr Suleyman argued, so websites can “sell more adverts”, fuelled by Google’s technology.

[–] squaresinger@feddit.de 197 points 1 year ago (9 children)

The part about Google isn't wrong.

But the second half of the article, where he says AI chatbots will replace Google search because they give more accurate information, simply isn't true.

[–] Enkers@sh.itjust.works 68 points 1 year ago* (last edited 1 year ago) (1 children)

I'd say they at least give more immediately useful info. With Google, I've got to scroll past 5-8 sponsored results, and the next top results are AI-generated garbage anyway.

Even though I think he's mostly right, the AI techbro game plan is obvious: position yourself as a better alternative to Google search, burn money by the barrelful to capture the market, then begin enshittification.

In fact, enshittification has already begun; responses are comparatively expensive to generate. The more users they onboard, the more they have to scale back the quality of those responses.

[–] nilloc@discuss.tchncs.de 2 points 1 year ago

ChatGPT is already getting worse at code commenting and programming.

The problem is that enshittification is basically a requirement in a capitalist economy.

[–] sab@kbin.social 22 points 1 year ago (3 children)

Even if AI magically got to the point of providing accurate and good results, I would still profoundly object to using it.

First, it's a waste of resources. The climate impact of AI is reason enough to leave it dead until we live in a world with limitless energy and water.

Second, I don't trust a computer to select my sources for me. Sometimes you might have to go through a few pages, but with traditional search engines you are at least presented with a variety of sources, and you can use your god-given ability of critical thinking.

[–] Aceticon@lemmy.world 2 points 1 year ago

That's true of LLMs, which are what's necessary for chat AI (the first "L" quite literally stands for Large).

Remove the machinery needed to process natural human language, and those things tend to be way smaller, especially if they're just trained on the user's own actions.

[–] QuaternionsRock@lemmy.world 2 points 1 year ago (1 children)

I don't trust a computer to select my sources for me.

I’m not sure what you think modern search engines do, but this is pretty much it. Hell, all of the popular ones have been using AI signals for years.

You can request as many sources from an AI as you would get from Google.

[–] sab@kbin.social 2 points 1 year ago* (last edited 1 year ago)

Of course there are always challenges, especially with how results are ranked. I have been extremely dissatisfied with the development of search engines for years now. I find DuckDuckGo to be at least less bad than Google. Currently I'm checking out Kagi, which at least lets me rank sources myself. Still on the fence though - it does seem to flirt more with AI than with transparency, which has me worried.

But absolutely, it's not that I think the current state of search engines is great either - it just seems to me everything is getting worse and the Internet has entered a death spiral between AI and the enshittification of social media.

Then again, maybe I just reached that age where you start hating everything.

[–] RatherBeMTB@sh.itjust.works -5 points 1 year ago

Climate change has become the new CP: the go-to argument to condone the stupidest reasoning. Just like blocking torrent sites to prevent CP, now let's block AI to prevent climate change.

[–] ribboo@lemm.ee 21 points 1 year ago (1 children)

I mean, most top search results are AI-generated bullshit nowadays anyway. Adding "reddit" to a search is basically the only decent way to get a proper answer. But those answers are not much more reliable than ChatGPT's - you have to apply the same sort of skepticism and fact-checking regardless.

Google has really gotten horrible over the years.

[–] SmashingSquid@notyour.rodeo 4 points 1 year ago

Most of the results after the first page on Google are usually the same as the usable results, just mirrored on some shady site full of ads and malware.

[–] twinnie@feddit.uk 10 points 1 year ago (3 children)

I already go to ChatGPT more than Google. If you pay for it then the latest version can access the internet and if it doesn’t know the answer to something it’ll search the internet for you. Sometimes I come across a large clickbait page and I just give ChatGPT the link and tell it to get the information from it for me.

[–] Baines@lemmy.world 22 points 1 year ago (1 children)

give it time, algos will fuck those results as well

[–] Semi-Hemi-Demigod@kbin.social 2 points 1 year ago

They'll need to make money with a cheap cost-per-sale, so they'll put ads on the site. Then they'll put promoted content in the AI chat, but it's okay because they'll say it's promoted. Eventually it won't even say it's promoted, and it will just be all ads, just like every other tech company.

Why? Because monetization leads directly to enshittification, because the users stop being the customers.

[–] kubica@kbin.social 22 points 1 year ago* (last edited 1 year ago) (1 children)

When I tried it, it was never able to give me the sources for what it said. And it has given me way too many made-up answers to just trust it without reasons. Having to search for sources after it says something has made me skip the middleman (machine).

[–] Zeth0s@lemmy.world -4 points 1 year ago (1 children)

You probably tried the free version. Check out perplexity.ai to see how the paid version of ChatGPT works. Every source is referenced and linked.

This guy is not talking about the current free version of ChatGPT. He's talking about the much better tools that will be available in the next few years.

[–] squaresinger@feddit.de 10 points 1 year ago (1 children)

Yeah, because people selling AI products have a great track record of predicting how their products will develop. That's why Teslas don't have steering wheels any more, because Full Self-Driving has been driving people incident-free from New York to California since 2017.

The thing with AI development is that it rapidly gets to 50% of the desired solution, but then gets stuck there, never becoming consistently good enough that you can actually rely on it.

[–] Zeth0s@lemmy.world -5 points 1 year ago* (last edited 1 year ago) (1 children)

I don't really understand what that means. If the product is unreliable, people won't use it, and everything will stay as it is now. It's not a big issue. And it is already pretty reliable for many use cases.

Realistically, the real future problem will be monetization (which is what's causing Google's issues), not features.

[–] Phanatik@kbin.social 6 points 1 year ago (1 children)

Well, here's the thing. How often are you willing to dismiss the misses because of the hits? Your measure of unreliability is now subject to bias because you're no longer assessing the bot's answers objectively.

[–] Zeth0s@lemmy.world -5 points 1 year ago* (last edited 1 year ago) (1 children)

I don't expect it to be 100% correct. I have realistic expectations built on experience. No source is 100% reliable. A friend is maybe 50% reliable, an expert 95%. A random web page, probably 40%... I don't know.

I built up my strategies for handling uncertainty by applying critical thinking. It's not much different from the past. In my experience, ChatGPT 4 is currently more reliable than a random web page on the first page of a Google search, unless I specifically search for a trustworthy source such as the NHS or the Guardian.

The main problem is the drop in quality of search engines. For instance, I often start with ChatGPT 4 without plugins to focus my research. Once I understand what I should look for, I use search engines for focused searches on official websites or documentation pages.

[–] squaresinger@feddit.de 2 points 1 year ago (2 children)

The issue with reliability is a completely different one between web search and AI.

If you search something on Google, there are quite a few ways you can judge the quality of the answer with "metadata" around it. If you find a scientific paper, it's probably more reliable than a post on a parents forum. If the source is a quality newspaper or Wikipedia, that's also more on the reliable side, but some conspiracy theorist website is not. And if the source is some kind of forum or Q&A site, wrong answers often have comments under them that correct the error.

Also, you can follow multiple links and take a wider sample on the topic that way.

With AI that's not possible. Whether it is wrong or correct, the AI will give you an answer in the exact same format, with the same self-confident tone. You basically need to know the correct answer to know whether the answer is correct.

Sure, you can re-roll and ask it again, but that doesn't make the result more likely to be correct.

For example, I asked ChatGPT which Harry Potter chapter is the longest. It happily gave me a chapter, but it wasn't the longest. So I asked again and again and again, and each time it gave me a new wrong answer, every time with made-up word counts.
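For the record, that kind of question is trivially answerable with a few lines of deterministic code rather than a language model. A rough sketch (the sample text and the "CHAPTER ..." heading format are hypothetical; assuming you have a plain-text copy of the book):

```python
import re

def longest_chapter(text: str) -> tuple[str, int]:
    """Split a book's plain text on chapter headings and return the
    longest chapter's title and its word count."""
    # Hypothetical heading format: "CHAPTER ONE", "CHAPTER TWO", ...
    parts = re.split(r"(CHAPTER [A-Z-]+)", text)
    chapters = list(zip(parts[1::2], parts[2::2]))  # (title, body) pairs
    title, body = max(chapters, key=lambda c: len(c[1].split()))
    return title, len(body.split())

sample = (
    "CHAPTER ONE short text here. "
    "CHAPTER TWO this one has quite a few more words in it overall."
)
print(longest_chapter(sample))  # → ('CHAPTER TWO', 11)
```

Unlike the chatbot, this gives the same verifiable answer every time you run it.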

[–] Zeth0s@lemmy.world -1 points 1 year ago* (last edited 1 year ago)

This is why I keep suggesting people give perplexity.ai a try, to understand how these tools will work in the near future. And why I don't understand why I'm being downvoted for it.

The current "free" ChatGPT was created as a proof of concept, not as a finished, complete solution to humanity's problems. What we have now is a showcase of LLMs: a way for OpenAI to improve the product via human feedback, and for everyone else a chance to enjoy what is already, with all its limitations, an extremely useful tool.

But this kind of LLM is intended to be a building block of future solutions - to enable interactivity, summarization, and analysis features within larger products with larger and more refined feature sets.

If you don't have the paid version of ChatGPT, again, try perplexity.ai with the Copilot feature to see a (still imperfect, under-development) proof of concept of the near future of AI-assisted research.

And more tools will come that will make it easier to navigate the huge amount of information that is the main problem of the modern internet.

As for your specific case: GPT-3.5 has poor logical and mathematical capabilities; GPT-4 is much better at that. But still, using a language model for math is almost never a good choice. What you'd need is an LLM able to pull information from the internet and to call out to some math tool, such as Python or MATLAB. These options are currently available in ChatGPT with plugins, but they are suboptimal. In the future you'll have better products able to combine an LLM, focused internet search, and math.

We should focus on the future, not the present, when discussing AI. LLM-based products are in their infancy.
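The "LLM plus math tool" idea is basically function calling: the model emits an arithmetic expression, and a deterministic evaluator computes it instead of the model guessing. A minimal, hypothetical sketch of such a tool (not OpenAI's actual plugin API):

```python
import ast
import operator

# Safe arithmetic evaluator: the kind of external "math tool" an LLM
# could delegate to instead of doing arithmetic in-model.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate a plain arithmetic expression without exec/eval risks."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

# An LLM tool-call loop would hand expressions like this to the tool:
print(safe_eval("3 * (17 + 5) - 2 ** 4"))  # → 50
```

The point is that the answer comes from the interpreter, so it's exact, while the LLM only has to produce the expression.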

[–] Dave@lemmy.nz 7 points 1 year ago

ChatGPT powers Bing Chat, which can access the internet and find answers for you, no purchase necessary (if you're not on Edge, you might need to install a browser extension to access it, as they're still trying to push Edge).

[–] yoz@aussie.zone 4 points 1 year ago (1 children)

It's already happening at my work. Many are using Bing AI instead of Google.

[–] DudeDudenson@lemmings.world 3 points 1 year ago

Don't worry they'll start monetizing LLMs and injecting ads into them soon enough and we'll be back to square one

[–] lloram239@feddit.de 3 points 1 year ago* (last edited 1 year ago) (1 children)

because they give more accurate information, that simply is not true.

From my experience with BingChat, it's completely true. BingChat will search with Bing and summarize the results, providing sources and all. And the results are complete garbage most of the time, since search results are filled with garbage.

Meanwhile, if you ask ChatGPT, which doesn't have internet access, you get a far more sophisticated and correct answer. You can also ask follow-up questions.

Web search is an absolutely terrible place for accurate information. ChatGPT, in contrast, consumes all the information out there, which makes it much harder for incorrect information to slip in, since information needs to be replicated frequently to stick around. It can and often is still wrong, of course, but it is far better than any single website you'll find.

And of course these are still very early days for LLMs. GPT was never built with correctness in mind; it was built to autocomplete text, and everything else was patchwork after the fact. The future of search is AI, no doubt about that.

[–] sndrtj@feddit.nl 12 points 1 year ago (1 children)

ChatGPT flat-out hallucinates quite frequently in my experience. It never says "I don't know" / "that is impossible" / "no one knows" to queries that simply don't have an answer. Instead, it opts to give a plausible-sounding but completely made-up answer.

A good AI system wouldn't do this. It would be honest and return nothing when the information simply doesn't exist. However, that is quite hard for LLMs, as they are essentially glorified next-word predictors. The cost metric isn't accuracy of information; it's plausible-sounding conversation.
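The "glorified next-word predictor" point can be made concrete with a toy bigram model (the corpus here is made up): it continues any prompt with the statistically most plausible next word, with no notion of whether the continuation is true.

```python
from collections import defaultdict

# Toy bigram "language model": it always emits the word that most often
# followed the previous word in its training text. Plausibility, not truth.
corpus = ("the capital of france is paris . "
          "the capital of atlantis is unknown .").split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def continue_text(word, n=3):
    out = []
    for _ in range(n):
        followers = counts[word]
        if not followers:
            break
        word = max(followers, key=followers.get)  # greedy decoding
        out.append(word)
    return out

# It "answers" about Atlantis with the same confidence as about France:
print(continue_text("atlantis"))  # → ['is', 'paris', '.']
```

"Atlantis is Paris" is fluent nonsense: nothing in the mechanism checks facts, it only chases the most probable continuation. Real LLMs are vastly more sophisticated, but the training objective is the same shape.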

[–] pascal@lemm.ee 3 points 1 year ago (1 children)

Ask ChatGPT to "tell me the biography of the famous painter sndrtj" to see how good the bot is at hallucinating an incredibly realistic story that never happened.

[–] Takumidesh@lemmy.world 4 points 1 year ago (1 children)
[–] pascal@lemm.ee 2 points 1 year ago

Oh, they fixed that! But I see you're using v4.

[–] Aceticon@lemmy.world 0 points 1 year ago

I suspect that client-side AI might actually be the kind of thing that filters the crap out of search results and actually gets you what you want.

That would only be chat AI if it turns out natural-language queries are better at determining what the user is looking for than hand-crafted traditional query strings.

I'm thinking each person could train their AI on which results they chose from unfiltered queries, with some kind of user-provided suitability feedback to account for clickbait (i.e. somebody selecting a result because it looks good, but it turns out it's not).
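A minimal sketch of that idea: a client-side reranker that learns per-domain preferences from the user's own click feedback (the domain names and scoring are hypothetical, just to show the shape):

```python
from collections import Counter

class PersonalReranker:
    """Client-side reranker: boosts domains the user found useful,
    demotes domains the user flagged as clickbait."""

    def __init__(self):
        self.score = Counter()  # per-domain preference, default 0

    def record_click(self, domain: str, satisfied: bool):
        # User feedback: +1 for a genuinely useful result, -1 for clickbait.
        self.score[domain] += 1 if satisfied else -1

    def rerank(self, results):
        # Stable sort: ties keep the search engine's original order.
        return sorted(results, key=lambda d: -self.score[d])

r = PersonalReranker()
r.record_click("docs.example.org", satisfied=True)
r.record_click("clickbait.example.com", satisfied=False)
print(r.rerank(["clickbait.example.com", "news.example.net", "docs.example.org"]))
# → ['docs.example.org', 'news.example.net', 'clickbait.example.com']
```

A real version would need features beyond the domain (query terms, content signals), but the feedback loop is the same: unfiltered results in, user choices as training signal, personalized ranking out.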

[–] Zeth0s@lemmy.world -1 points 1 year ago* (last edited 1 year ago) (1 children)

If you aren't paying for ChatGPT, take a look at perplexity.ai - it's free.

You'll see that sources are referenced and linked.

Don't judge based on the free version of ChatGPT.

Edit. Why the hell are you guys downvoting a legit suggestion of a new technology in the technology community? What do you expect to find here? Comments on steam engines?

[–] tetris11@kbin.social 1 points 1 year ago (2 children)

Wow, it's really good. Who knew that asking a bot to provide references would immediately improve the quality of the answers?

[–] Zeth0s@lemmy.world -4 points 1 year ago* (last edited 1 year ago)

If you try "copilot" option, you get the full experience. It's pretty neat because it allows for brainstorming.

It is still a very "preliminary version" experience (it often gets stuck on a small bunch of websites), because the whole thing is just a few months old. But it has a lot of potential.

[–] cybersandwich@lemmy.world -1 points 1 year ago (1 children)

I dunno. There have been quite a few times where I am trying to do something on my computer and I could either spend 5 minutes searching, refining, digging through the results...or I can ask chatgpt and have a workable answer in 5 seconds. And that answer is precisely tailored to my specifics. I don't have to assume/research how to modify a similar answer to fit my situation.

Obviously it's dependent on the types of information you need, but for coding, bash scripting, Linux cli, or anything of that nature LLMs have been great and much better than Google searches.

[–] Excrubulent@slrpnk.net 14 points 1 year ago* (last edited 1 year ago) (2 children)

Okay, but the problem is that LLMs don't just lack fidelity - they can't have it. They are analogous to the language-planning centre of your brain, which has to be filtered through your conscious mind to check whether it's talking complete crap.

People don't realise this and think the bot is giving them real information, but it's actually just giving them spookily realistic word salad, which is a big problem.

Of course you could fix this by adding some kind of context engine that truly grasps the deeper and wider meaning of your query. The problem is that if you do that, you've basically created an AGI. That is, first of all, extremely difficult and likely far in the future, and second, it has ethical implications that go beyond how effective a search engine it is.

[–] cybersandwich@lemmy.world 2 points 1 year ago

Did you read my last little bit there? I said it depends on the information you're looking for. I can paste error output from my terminal into Google and try to find an answer, or I can paste it into ChatGPT and be, at the very least, pointed in the right direction almost immediately, or even given the answer right away, versus getting a Stack Overflow link and parsing the responses and comments and following secondary and tertiary links.

I absolutely understand the stochastic-parrot conundrum with LLMs. They have significant drawbacks and are far from perfect, but then neither are Google search results. There is still a level of skepticism you have to apply.

One of the biggest mistakes people make is the idea that LLMs and web search are a zero-sum affair. They don't replace each other; they complement each other. IMO, Google is messing up with its "AI" integration into Google Search - it sets the expectation that they're equivalent functions.

[–] Touching_Grass@lemmy.world -3 points 1 year ago (1 children)

I don't need perfect. I need good enough

[–] Excrubulent@slrpnk.net 7 points 1 year ago* (last edited 1 year ago) (1 children)

Sure but if that becomes the norm then a huge segment of the population will believe the first thing the bot tells them. You might be okay, but we're talking about an entire society filtering its knowledge through an incredibly effective misinformation engine that will lie rather than say "I don't know", because that simple phrase requires a level of self-awareness that eludes a lot of actual people, much less a chatbot.

[–] Touching_Grass@lemmy.world -1 points 1 year ago (1 children)

That's already a problem. The thing I think about is what will serve me better: Google or chat AI. The risk of bad information exists with both, but an AI-based search engine will be much better at finding context, returning results geared toward my goals, and I suspect less prone to fuckery, because an AI must be trained as a whole.

[–] Excrubulent@slrpnk.net 3 points 1 year ago* (last edited 1 year ago) (1 children)

Except we already know that LLMs lie and people in general are not aware of this. Children are using these. When you as a person have to sift through results you get a sense of what information is out there, how sparse it is, etc. When a chatbot word-vomits the first thing it can think of to satisfy your answer, you get none of that, and perhaps you should be aware of that yourself. You don't really seem to be, it's like you think the saved time is more important than context, which apparently I have to remind you - the bot doesn't know context.

When you say:

an AI based search engine is something that will be much better at finding context

It makes me think that you really don't understand how these bots work, and that's the real danger.

We're talking in this thread about this wider systemic issue, not just what suits you personally regardless of how much it gaslights you, but if that's all you care about then you do you I guess ¯\_(ツ)_/¯

[–] Touching_Grass@lemmy.world 0 points 1 year ago* (last edited 1 year ago)

"Lie" is a weird way to describe it. They give you an answer based on probabilities; when they're off base, it's called hallucinating. It's not lying, it's just lacking the data to give an accurate and correct answer, which will get better with more training and data. Everything else we have so far gets worse - Google isn't what it was 15 years ago.

I use ChatGPT every day to find answers over Google. It's better in almost every single way to get information from, and I can only imagine what it'll be capable of once it can interface with crawlers.

The language you're using to speak on this issue makes it seem like there's a personal vendetta against LLMs. Why people get so mad at a new tool is always fascinating.