this post was submitted on 22 May 2024
348 points (91.4% liked)

Technology
[–] funkless_eck@sh.itjust.works 11 points 6 months ago (2 children)

AI absolutely will not design machines.

It may be used, within strict parameters, to speed up the theoretical testing of bearings or hinges or alloys or whatever, predicting which ones would perform best under stress testing - prior to actual testing, to eliminate low-hanging fruit. But it absolutely will not generate a new idea for a machine, because it can't generate new ideas.

[–] lanolinoil@lemmy.world 4 points 6 months ago (1 children)

The Model T will absolutely not replace horse-drawn carts -- Maybe for some small group of people, or a family on vacation, but we've been using carts for war logistics for thousands of years. You think some shaped metal put together is going to replace thousands of men and horses? lol yeah right

[–] funkless_eck@sh.itjust.works 2 points 6 months ago (2 children)

apples and oranges.

You're comparing two products with the same value prop: transporting people and goods more effectively than carrying/walking.

In terms of mining, a drilling machine is more effective than a pickaxe. But we're comparing current drilling machines to potential drilling machines, so the actual comparison would be:

  • is an AI-designed drilling machine likely to be more productive (for any given definition of productivity) than a human-designed one?

Well, we know from experience that when (loosely defined) "AI" is used in, e.g., pharma research, it reaps some benefits - but it does not wholesale replace the drug approval process, and it's still a tool used by - as I originally said - human beings who impose strict parameters on both input and output as part of a larger product and method.

Back to your example: could a series of algorithmic steps - without any human intervention - provide a better car than any modern car designers? As it stands, no, nor is it on the horizon. Can it be used to spin through 4 million slight variations in hood ornaments and return the top 250 in terms of wind resistance? Maybe, and only if a human operator sets up the experiment correctly.
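The hood-ornament experiment described above is just a brute-force parameter sweep. A minimal sketch, where `drag_score` is a made-up stand-in for a real aerodynamic simulation and the parameter names are purely hypothetical:

```python
import heapq
import itertools

def drag_score(height, width, curvature):
    # Placeholder for a real aerodynamic simulation (e.g. CFD):
    # a made-up quadratic penalty standing in for wind resistance.
    return (height - 0.5) ** 2 + (width - 0.3) ** 2 + 0.1 * curvature

def top_k_designs(k=250, steps=40):
    # Enumerate a grid of hypothetical ornament parameters and
    # keep the k designs with the lowest simulated drag.
    grid = [i / steps for i in range(steps)]
    candidates = itertools.product(grid, grid, grid)
    return heapq.nsmallest(k, candidates, key=lambda p: drag_score(*p))
```

Note that a human still decides the parameter ranges, the scoring function, and what "best" means - the search itself is mechanical.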

[–] lanolinoil@lemmy.world 7 points 6 months ago* (last edited 6 months ago) (1 children)

No, the thing I'm comparing is our inability to discern where a new technology will lead and our history of smirking at things like books, cars, the internet and email, AI, etc.

The first steam engines pulling coal out of the ground were so inefficient they wouldn't make sense for any use case other than working to get the fuel that powers them. You could definitely smirk and laugh about engines vs 10k men and be totally right in that moment, and people were.

The more history you learn, though, the more you realize this is not only hubristic, it's also futile: how we feel about the proliferation of a technology has never had an impact on that technology's proliferation.

And, to be clear, I'm not saying no humans will work or have anything to do -- I'm saying significantly MORE humans will have nothing to do. Sure you still need all kinds of people even if the robots design and build themselves mostly, but it would be an order of magnitude less than the people needed otherwise.

[–] Beetlejuice001@lemmy.wtf 1 points 6 months ago

Maybe I’m pessimistic but all I see is every call center representative disappearing and that’ll be it

[–] sailingbythelee@lemmy.world 5 points 6 months ago

I agree that AI is just a tool, and it excels in areas where an algorithmic approach can yield good results. A human still has to give it the goal and the parameters.

What's fascinating about AI, though, is how far we can push the algorithmic approach in the real world. Fighter pilots will say that a machine can never replace a highly-trained human pilot, and it is true that humans do some things better right now. However, AI opens up new tactics. For example, it is virtually certain that AI-controlled drone swarms will become a favored tactic in many circumstances where we currently use human pilots. We still need a human in the loop to set the goal and the parameters. However, even much of that may become automated and abstracted as humans come to rely on AI for target search and acquisition. The pace of battle will also accelerate and the electronic warfare environment will become more saturated, meaning that we will probably also have to turn over a significant amount of decision-making to semi-autonomous AI that humans do not directly control at all times.

In other words, I think that the line between dumb tool and autonomous machine is very blurry, but the trend is toward more autonomous AI combined with robotics. In the car design example you give, I think that eventually AI will be able to design a better car on its own using an algorithmic approach. Once it can test 4 million hood ornament variations, it can also model body aerodynamics, fuel efficiency, and any other trait that we tell it is desirable. A sufficiently powerful AI will be able to take those initial parameters and automate the process of optimizing them until it eventually spits out an objectively better design. Yes, a human is in the loop initially to design the experiment and provide parameters, but AI uses the output of each experiment to train itself and automate the design of the next experiment, and the next, ad infinitum. Right now we are in the very early stages of AI, and each AI experiment is discrete. We still have to check its output to make sure it is sensible and combine it with other output or tools to yield useable results. We are the mind guiding our discrete AI tools. But over a few more decades, a slow transition to more autonomy is inevitable.
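The optimize-test-retrain loop described above can be sketched as a toy hill-climbing search. The `fitness` function here is a made-up stand-in for whatever battery of simulations would actually score a design:

```python
import random

def fitness(params):
    # Stand-in for "objectively better design": higher is better.
    # A real system would run simulations of aerodynamics,
    # fuel efficiency, and any other trait we declare desirable.
    return -sum((p - 0.7) ** 2 for p in params)

def optimize(n_params=3, generations=200, step=0.05, seed=0):
    # Minimal (1+1) evolutionary loop: mutate the current design,
    # keep the mutant only if the "experiment" scores it better.
    rng = random.Random(seed)
    current = [rng.random() for _ in range(n_params)]
    for _ in range(generations):
        mutant = [p + rng.gauss(0, step) for p in current]
        if fitness(mutant) > fitness(current):
            current = mutant
    return current
```

Each iteration uses the output of the last experiment to seed the next, which is the "ad infinitum" automation being described - though the goal function itself is still supplied from outside the loop.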

A few decades ago, if you had asked which tasks an AI would NOT be able to perform well in the future, the answers almost certainly would have been human creative endeavors like writing, painting, and music. And yet, those are the very areas where AI is making incredible progress. Already, AI can draw better, write better, and compose better music than the vast, vast majority of people, and we are just at the beginning of this revolution.

[–] essteeyou@lemmy.world 1 points 6 months ago (1 children)

It can solve existing problems in new ways, which might be handy.

[–] funkless_eck@sh.itjust.works 4 points 6 months ago (1 children)

can

might

sure. But, like I said, those are subject to a lot of caveats - that humans have to set the experiments up to ask the right questions to get those answers.

[–] essteeyou@lemmy.world 1 points 6 months ago (2 children)

That's how it currently is, but I'd be astounded if it didn't progress quickly from now.

[–] FiniteBanjo@lemmy.today 2 points 6 months ago (1 children)

OpenAI themselves have made it very clear that scaling up their models has diminishing returns, and that they're incapable of moving forward without entirely new models being invented by humans. A short while ago they proclaimed that they could possibly make an AGI if they got several trillion USD in investment.

[–] essteeyou@lemmy.world 1 points 6 months ago (1 children)

5 years ago I don't think most people thought ChatGPT was possible, or StableDiffusion/MidJourney/etc.

We're in an era of insane technological advancement, and I don't think it'll slow down.

[–] FiniteBanjo@lemmy.today 3 points 6 months ago* (last edited 6 months ago) (2 children)

Okay, but the people who made the advancements are telling you it has already slowed down. Why don't you understand that? A flawed chatbot and some art-theft machines that can't draw hands aren't exactly world-changing either, tbh.

[–] essteeyou@lemmy.world 0 points 6 months ago (1 children)

There are other people in the world. Some of them are inventing completely new ways of doing things, and one of those ways could lead to a major breakthrough. I'm not saying a GPT LLM is going to solve the problem, I'm saying AI will.

[–] leftzero@lemmynsfw.com 0 points 6 months ago (1 children)

Some of them are inventing completely new ways of doing things

No, they're not. All the money is now on the LLM autocomplete chatbots.

Real progress on AI won't resume until after the LLM bubble has burst. (And even then investors will probably be wary of putting money in AI for a few decades, because LLMs are being marketed as AI despite having little to do with it.)

It's quite depressing, really.

[–] essteeyou@lemmy.world 1 points 6 months ago

Who was making this "real progress on AI" that you mention? Why did they stop that when an LLM became popular?

[–] lanolinoil@lemmy.world -1 points 6 months ago (1 children)

This is such a rich-country-centric view that I can't stand. LLMs have already given the world maybe its greatest gift ever -- access to a teacher.

Think of the 800 million poor children in the world and their access to a Khan Academy-level teacher on any subject imaginable, with just a cellphone or computer. How could that not have value? Is pearl-clutching about drawing skills becoming devalued really all you can think about?

[–] FiniteBanjo@lemmy.today 2 points 6 months ago* (last edited 6 months ago) (1 children)

Anything you learn from an LLM has a margin of error that makes it dangerous and harmful. It hallucinates documentation and fake facts like an asylum inmate. And it's so expensive compared to just having real teachers that it's all pointless. We've got humans, we don't need more humans, adding labor doesn't solve the problem with education.

[–] lanolinoil@lemmy.world 0 points 6 months ago (1 children)

bro, I was taught by a textbook in the US in the '00s that the Statue of Liberty was painted green.

No math teacher I ever had actually knew the level of math they were teaching.

Humans hallucinate all the time. Almost 1 billion children don't even have access to a human teacher - hence the boon to humanity.

[–] FiniteBanjo@lemmy.today 3 points 6 months ago* (last edited 6 months ago) (1 children)

Those textbooks and the people who regurgitate their contents are the training data for the LLM. Any statement you make about human incompetence is multiplied by an LLM. If they don't have access to a human teacher then they probably don't have PCs and AI subscriptions, either.

[–] lanolinoil@lemmy.world 0 points 6 months ago

yeah, but whatever that stats thing is - as N increases, alpha/beta error goes away
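The statistical intuition being gestured at - that error rates shrink as N grows - can be sketched with a toy majority-vote simulation. The `bias` value and the voting model are made up purely for illustration:

```python
import random

def error_rate(n_samples, n_trials=2000, bias=0.55, seed=0):
    # Estimate how often a majority vote of n noisy sources,
    # each right with probability `bias`, gets the answer wrong.
    rng = random.Random(seed)
    wrong = 0
    for _ in range(n_trials):
        correct_votes = sum(rng.random() < bias for _ in range(n_samples))
        if correct_votes * 2 <= n_samples:  # majority got it wrong (or tied)
            wrong += 1
    return wrong / n_trials
```

With each source only slightly better than a coin flip, the aggregate error still falls steadily as the number of samples grows - the Condorcet jury theorem in miniature.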

[–] funkless_eck@sh.itjust.works 1 points 6 months ago* (last edited 6 months ago) (1 children)

I would be extremely surprised if before 2100 we see AI that has no human operator and no data-scientist team, even at a 3rd-party distributor - and where those things are neither a lie nor a weaselly marketing stunt ("technically the operators are contractors and not employed by the company," etc.).

We invented the printing press 584 years ago, it still requires a team of human operators.

[–] essteeyou@lemmy.world 1 points 6 months ago (1 children)

A printing press is not a technology with intelligence. It's like saying we still have to manually operate knives... of course we do.

[–] funkless_eck@sh.itjust.works 0 points 6 months ago* (last edited 6 months ago) (1 children)

the comment I originally replied to claimed AI will design the autonomous machines.

It will not. It will facilitate some of the research done by humans to aid in designing deliberately human-operated machinery.

To my knowledge the only autonomous machine that exists is a Roomba, which moves blindly around until it physically strikes an object, rotates by a random angle, and continues in a new direction until it hits something else.

Even then, it is controlled with an app and on more expensive models, some boundary setting.

It is extremely generous to call that "autonomy."
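The bump-and-turn behavior described above can be sketched in a few lines; everything here (room size, step length, the coarse coverage grid) is an invented simplification:

```python
import math
import random

def random_bounce_path(steps=1000, size=10.0, seed=0):
    # Simulate the bump-and-turn strategy: drive straight until
    # hitting a wall of a square room, then turn to a random new
    # heading and continue.
    rng = random.Random(seed)
    x = y = size / 2
    heading = rng.uniform(0, 2 * math.pi)
    visited = set()
    for _ in range(steps):
        nx = x + 0.1 * math.cos(heading)
        ny = y + 0.1 * math.sin(heading)
        if 0 <= nx <= size and 0 <= ny <= size:
            x, y = nx, ny
            visited.add((round(x), round(y)))  # coarse coverage grid
        else:
            heading = rng.uniform(0, 2 * math.pi)  # "bump": new direction
    return visited
```

The point stands: there is no model of the room, no plan, no goal - just a random walk that happens to cover floor over time.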

[–] essteeyou@lemmy.world 1 points 6 months ago (1 children)

I was in a self-driving taxi yesterday. It didn't need to bump into things to figure out where it was.

[–] funkless_eck@sh.itjust.works 1 points 6 months ago

Fair, I thought they all got recalled, but I guess they're back. I'd also counter that Waymo is extremely limited in where it can operate - roughly 10 miles max - which, relevant to my original point, was entirely hand-mapped and calibrated by human operators, and the rides are monitored and directed by a control center responding in real time to the car's feedback.

Like my printing press example - it still takes a large human team to operate the "self"-driving car.