this post was submitted on 07 Apr 2024
339 points (93.1% liked)

[–] misspacific@lemmy.blahaj.zone 17 points 7 months ago (26 children)

exactly, this will eliminate some jobs, but anyone who's asked an LLM to fix code longer than 400 lines knows it often hurts more than it helps.

which is why it is best used as a tool to debug code, or write boilerplate functions.

[–] Drewelite@lemmynsfw.com -3 points 7 months ago (10 children)

But the fact that this tech really kicked off just three years ago and is already threatening so many jobs is pretty telling. Not only will LLMs continue to get better, but they're a big step towards AGI, and that's always been an existential crisis we knew was coming. This is the time to start adapting, quickly.

[–] hark@lemmy.world 8 points 7 months ago (5 children)

They didn't just appear out of nowhere; they're the result of decades of research and development. You're also assuming that additional progress is guaranteed. AI has hit walls and dead ends in the past, and there's no reason to assume we're not hitting a local maximum again right now.

[–] Drewelite@lemmynsfw.com -4 points 7 months ago* (last edited 7 months ago) (1 children)

And there's no reason to believe that it is. I know there's been speculation about model collapse and limits of available training data. But there's also been advancements like training data efficiency and autonomous agents. Your response seems to ignore the massive amounts of progress we've seen in the space.

Also, the computer, the internet, and the smartphone were based on decades of research and development. That didn't stop them from taking off and changing everything.

The fact that you're saying AI hit walls in the past and now we're here, is a pretty good indication that progress is guaranteed.

[–] hark@lemmy.world 5 points 7 months ago (2 children)

You said there's no reason and then listed potential reasons right after. Yes, there has been progress, and no one is arguing against that, but there are two big issues:

  1. What exists is being overhyped as far more capable than it really is.
  2. How much room there is to grow with current techniques is still unknown.

The computer, the internet, and the smartphone are all largely deterministic, with actions producing direct, known outcomes. AI as we know it is based on highly complex statistical models and relies heavily on the data it is trained on. It has far more ways to go wrong, which makes it unsuitable for critical applications (just look at the disasters when it's used as a customer service representative). That's not even getting into the legal issues that have yet to be answered. Just look at the CTO of OpenAI squirming on the question of what Sora was trained on (timestamped).

Being able to overcome walls in the past doesn't guarantee overcoming walls in the present. That's like saying being able to jump over a hurdle is the same as leaping over a skyscraper. There's also the question of timing: it took decades for those previous walls to be overcome. The impact on the workforce is largely overstated and is being used as an excuse for cost cutting. It's just like the articles about automation after the Great Recession. I'm still waiting on robots that can flip burgers (article from 2012).


[–] Drewelite@lemmynsfw.com -4 points 7 months ago* (last edited 7 months ago) (1 children)

I listed the reasons people usually cite and explained why I don't think they're good grounds to assume there won't be progress. I agree it's over-hyped today, because people are excited about the obvious potential tomorrow. But I think it's foolish to hide behind that as if it's proof there's no potential.

Let's say you're right and we hit a wall, with no progress on AI for 50 years. There's nothing magical about the human brain's ability to make logical decisions from observation and learning. It's going to happen eventually. And our current economic system, which ties a person's value to their labor, will be in deep shit when it does. It could take a century to make the appropriate changes here. We're already way behind, even with a setback to AI.

I think it's funny when people complain about AI learning from copyrighted works. AI's express goal is to be similar to a human consciousness. Have you ever talked to a human who's never watched a TV show, or a movie, or read a book from this century? An AI that's not aware of those things would be like a useless alien to us.

If people just want to use legal hangups to stop AI, fair play. But that plan is doomed; infinite brainpower is just too valuable. Copyright isn't there to protect the little guy; that was the original 28-year law. Its current form was lobbied for by corporations to stifle competition. And they'll dismantle it (or ignore it) in a heartbeat once it suits them.

[–] hark@lemmy.world 4 points 7 months ago

The topic at hand is this survey, which claims significant impacts to the workforce within five years, and that's what I'm speaking to. As for copyright, these models are straight-up not possible without that data, and the link can be clearly demonstrated; the companies have their training data, which they may have to expose in a court case. Forget about the little guy: the large corporations who own the data will not be happy letting anyone build this lucrative AI without getting paid for it. There will be legal fights, and it's a potential complication in rolling this stuff out, so it should be considered.
