this post was submitted on 04 Oct 2024
640 points (97.8% liked)

Technology

[–] Rogers@lemmy.ml 27 points 2 months ago* (last edited 2 months ago) (1 children)

I'd agree with the first part, but to say all AI is snake oil is just untrue and out of touch. There are a lot of companies that slap "AI" on literally anything, and I can see how that looks like snake oil.

But genuinely innovative AI, everything from protein folding to robotics, is here to stay, for good or bad. It's already too valuable for governments to ignore. And AI is improving at a rate that I think most people are underestimating (faster than Moore's law).

[–] kaffiene@lemmy.world 6 points 2 months ago (1 children)

I think part of the difficulty with these discussions is that people mean all sorts of different things by "AI". Much of the current usage treats AI = LLMs, which changes the debate quite a lot.

[–] Rogers@lemmy.ml 1 points 2 months ago (1 children)

No doubt LLMs are not the end-all be-all. That said, especially after seeing what the next-gen 'thinking' models like o1 from ~~ClosedAI~~ OpenAI can do, even LLMs are going to get absurdly good. And they're getting faster and cheaper at a rate beyond my most optimistic guess two years ago; hell, even six months ago.

Even if all progress stopped tomorrow on the software side, the gains from purpose-built silicon alone would make them cheaper and faster still. And that purpose-built hardware is coming very soon.

Open models are about 4-6 months behind in quality, but probably a lot closer (if not ahead) for small ~7B models that can be run locally on low- to mid-range consumer hardware.

[–] kaffiene@lemmy.world 5 points 2 months ago (1 children)

I don't doubt they'll get faster. What I wonder is whether they'll ever stop being so inaccurate. I feel like that's a structural feature of the model.

[–] keegomatic@lemmy.world 1 points 2 months ago (1 children)

May I ask how you’ve used LLMs so far? Because I hear that type of complaint from a lot of people who have tried to use them mainly to get answers to things, or maybe more broadly to replace their search engine, which is not what they’re best suited for, in my opinion.

[–] Lux18@lemmy.world 3 points 2 months ago (1 children)

What are they best suited for?

[–] keegomatic@lemmy.world 1 points 2 months ago

Personally, I’ve found that LLMs are best as discussion partners, to put it in the broadest terms possible. They do well for things you would use a human discussion partner for IRL.

  • “I’ve written this thing. Criticize it as if you were the recipient/judge of that thing. How could it be improved?” (Then address its criticisms in your thing… it’s surprisingly good at revealing ways to make your “thing” better, in my experience)
  • “I have this personal problem.” (Tell it to keep responses short. Have a natural conversation with it. This is best done spoken out loud if you are using ChatGPT; prevents you from overthinking responses, and forces you to keep the conversation moving. Takes fifteen minutes or more but you will end up with some good advice related to your situation nearly every time. I’ve used this to work out several things internally much better than just thinking on my own. A therapist would be better, but this is surprisingly good.)
  • I’ve also had it be useful for various reasons to tell it to play a character as I describe, and then speak to the character in a pretend scenario to work out something related. Use your imagination for how this might be helpful to you. In this case, tell it to not ask you so many questions, and to only ask questions when the character would truly want to ask a question. Helps keep it more normal; otherwise (in the case of ChatGPT which I’m most familiar with) it will always end every response with a question. Often that’s useful, like in the previous example, but in this case it is not.
  • etc.

For anything but criticism of something written, I find that the “spoken conversation” features are most useful. I use it a lot in the car during my commute.

For what it’s worth, in case this makes it sound like I’m a writer and my examples are only writing-related, I’m actually not a writer. I’m a software engineer. The first example can apply to writing an application or a proposal or whatever. The second is basically just therapy. The third is more abstract, and often about indirect self-improvement. There are plenty more things a discussion partner is good for, though. I’m sure anyone reading can come up with a few themselves.