this post was submitted on 22 Sep 2024
73 points (90.1% liked)

Socialism

[–] frightful_hobgoblin@lemmy.ml -2 points 1 day ago* (last edited 1 day ago) (3 children)

You're against computers being able to understand language, video, and images?

[–] Barabas@hexbear.net 49 points 1 day ago (1 children)

They don’t understand, though. A lot of AI evangelists seem to smooth over that detail: it’s an LLM, not anything that “understands” language, video, or images.

There are uses for these kinds of models, like semi-automating the analysis of large pools of data, but even in a socialist society, allocating resources to it the way it is currently done would be completely unsustainable.

[–] frightful_hobgoblin@lemmy.ml 5 points 1 day ago (3 children)

They don’t understand, though. A lot of AI evangelists seem to smooth over that detail: it’s an LLM, not anything that “understands” language, video, or images.

We're into the Chinese Room problem. "Understand" is not a well-defined or measurable thing. I don't see how it could be measured except by looking at inputs and outputs.

[–] Barabas@hexbear.net 31 points 1 day ago (1 children)

Does this mean that my TI-84 calculator was actually an AI, since it could solve equations I put into it? Or Wolfram Alpha? Or a speed camera? These are all able to read external inputs and produce an output. Where do you draw the line? Because the current technology is nowhere near where I draw mine.

We are currently ruining the biosphere so that some people might earn a lot of money by being able to lay off workers. If you remove this integral part of what “AI” is, and all the other negative externalities, of course it will look better, but not all of the externalities are tied to the capitalist mode of production. Economies and resource allocation would still be a thing without capitalism; it isn’t like everything magically becomes good.

[–] Infamousblt@hexbear.net 12 points 1 day ago

A choose-your-own-adventure novel is an AI, because you feed it a set of inputs (page numbers) and it feeds you a set of outputs (a dynamic story).

[–] space_comrade@hexbear.net 21 points 1 day ago (1 children)

"Understand" is not a well-defined or measurable thing.

So why attribute it to an LLM in the first place, then? All of the LLMs are just floating-point numbers being multiplied and added inside a digital computer; the onus is on the AI bros to show what kind of floating-point multiplication is real "understanding".
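As a minimal sketch of that claim (toy shapes, not any real model's weights), a single dense layer of a neural network reduces to floating-point multiplies and adds:

```python
import numpy as np

# Toy illustration: one dense layer of a neural network is nothing but
# floating-point multiplication and addition. Shapes are invented for
# the example; real LLM layers are vastly larger.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))  # weight matrix
b = rng.standard_normal(4)       # bias vector
x = rng.standard_normal(8)       # input activations

h = W @ x + b                    # the entire operation: multiply, add
print(h)                         # four floats; stack thousands of such
                                 # layers and you have an LLM forward pass
```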

[–] frightful_hobgoblin@lemmy.ml 2 points 1 day ago* (last edited 1 day ago) (2 children)

But it's inherently impossible to "show" anything except inputs and outputs (including for a biological system).

What are you using the word "real" to mean, and is it aloof from the measurable behaviour of the system?

You seem to be using a mental model that there's

  • A: the measurable inputs & outputs of the system

  • B: the "real understanding", which is separate

How can you prove B exists if it's not measurable? You say there is an "onus" to do so. I don't agree that such an onus exists.

This is exactly the Chinese Room thought experiment. 'Understand' is usually understood in a functionalist way.

[–] anarchoilluminati@hexbear.net 11 points 23 hours ago

But, ironically, the Chinese Room Argument you're bringing up supports what others are saying: that LLMs do not 'understand' anything.

It seems to me like you are giving 'understanding' a functionalist meaning, so that you can say input/output is equivalent to understanding and that the measurable process in itself therefore shows 'understanding'. But that's not what Searle, and seemingly the others here, mean by 'understanding'. As Searle argues, what is in question is not purely the syntactic manipulation but the semantic. In other words, these LLMs do not "know" the information they provide; they are just repeating it based on the input/output process with which they were programmed. LLMs do not project or internalize any meaning into the input/output process. If they had some reflexive consciousness and any 'understanding', then they could critically approach the meaning of the information in order to assess its validity against facts, rather than just naïvely proclaiming that cockroaches got their name because they like to crawl into penises at night. Do you believe LLMs are conscious?

[–] space_comrade@hexbear.net 1 points 20 hours ago* (last edited 20 hours ago)

How can you prove B exists if it's not measurable?

Because I've felt it; I've felt how understanding feels. Ultimately, understanding is a conscious experience within a mind. You cannot define understanding without referencing conscious experience; you cannot possibly define it only in terms of behavior or function. So either you concede that every floating-point multiplication in a digital chip "feels like something" at some level, or you show which specific kind of floating-point multiplication does.

[–] booty@hexbear.net 6 points 22 hours ago (1 children)

I don't see how it could be measured except by looking at inputs and outputs.

Okay, then consider that when you input something into an LLM and regenerate the response a few times, it can come up with outputs of completely opposite (and equally incorrect) meaning, proving that it does not have any functional understanding of anything and instead simply outputs random noise that sometimes looks similar to what someone who did understand the content would output.
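The mechanism behind that behaviour is stochastic decoding: the model produces a probability distribution over next tokens and the sampler draws from it, so each regeneration can take a different path. A toy sketch (invented logits and vocabulary, not taken from any real model):

```python
import numpy as np

# Toy next-token sampler: identical input, different outputs across
# regenerations, because decoding draws at random from the distribution.
vocab = ["yes", "no", "maybe"]
logits = np.array([1.2, 1.1, 0.3])    # made-up scores for the example
temperature = 1.0

probs = np.exp(logits / temperature)  # softmax over the logits
probs /= probs.sum()

rng = np.random.default_rng()
for _ in range(5):
    print(rng.choice(vocab, p=probs)) # "yes" and "no" can both appear
```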

[–] frightful_hobgoblin@lemmy.ml 1 points 22 hours ago (1 children)

Right. Like if I were talking to someone in total delirium and their responses were random and not a good fit for the question.

LLMs are not like that.

[–] booty@hexbear.net 2 points 21 hours ago (1 children)

You don't seem to have read my comment. Please address what I said.

[–] frightful_hobgoblin@lemmy.ml 1 points 21 hours ago (1 children)

when you input something into an LLM and regenerate the response a few times, it can come up with outputs of completely opposite (and equally incorrect) meaning

Can you paste an example of this error?

[–] booty@hexbear.net 1 points 20 hours ago* (last edited 20 hours ago)

Have you ever used an LLM?

Here's a screenshot I took after spending literally 10 minutes with ChatGPT as it very confidently stated incorrect answers to a simple question over and over. (from this thread) Not only is it completely incapable of coming up with a very simple correct answer to a very simple question, it is completely incapable of responding coherently to the fact that none of its answers are correct. Humans don't behave this way. Nothing that understands what is being said would respond this way. It responds this way because it has no understanding of the meaning of anything being said. It is responding based on the statistical likelihood of words and phrases following one another, like a Markov chain but slightly more advanced.
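For readers unfamiliar with the comparison, here is a deliberately crude sketch of a first-order Markov chain over words; real LLMs condition on far longer contexts with learned weights rather than raw counts, but the spirit (next-item statistics with no semantics) is the same:

```python
import random
from collections import defaultdict

# First-order Markov chain: pick each next word purely from counts of
# what followed the current word in the training text. No meaning, no
# understanding; only co-occurrence statistics.
text = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(list)
for current, nxt in zip(text, text[1:]):
    follows[current].append(nxt)

word = "the"
output = [word]
for _ in range(8):
    options = follows[word]
    if not options:            # dead end: the corpus's final word
        break
    word = random.choice(options)
    output.append(word)
print(" ".join(output))        # fluent-ish, meaningless
```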

[–] bravesilvernest@lemmy.ml 25 points 1 day ago (1 children)

I'm against the current iteration of the buzzword, which involves a bunch of money being wasted on something that also burns a ton of energy to get things only somewhat correct, rather than having those resources go toward actual needs affecting humanity.

[–] frightful_hobgoblin@lemmy.ml 0 points 1 day ago (1 children)

Fusion's close to a core need of humanity.

[–] vovchik_ilich@hexbear.net 22 points 1 day ago (1 children)

Wait, you think fusion will be developed thanks to AI?

[–] frightful_hobgoblin@lemmy.ml -3 points 1 day ago (1 children)
[–] vovchik_ilich@hexbear.net 23 points 1 day ago (1 children)

No, I haven't seen any major technological breakthroughs coming from language models, other than language models themselves. Have you?

[–] frightful_hobgoblin@lemmy.ml 1 points 1 day ago (1 children)

No. You want to suddenly change the subject to language models?

[–] vovchik_ilich@hexbear.net 19 points 1 day ago (2 children)

What other type of current AI claims problem-solving capabilities?

[–] DarkenLM@kbin.earth 5 points 1 day ago (1 children)

The entire field of Machine Learning, which has existed for decades, long before LLMs were even a theory?

[–] frightful_hobgoblin@lemmy.ml 0 points 1 day ago* (last edited 1 day ago)

This thread is funny. A few users are like "😡😡😡I hate everything about AI😡😡😡" and also "😲😲😲AI is used for technical research??? 😲😲😲 This is news to me! 😲😲😲"

Talk about no-investigation-no-right-to-speak. How can you have an opinion on a field without even knowing roughly what the field is?

[–] frightful_hobgoblin@lemmy.ml 0 points 1 day ago (1 children)
[–] vovchik_ilich@hexbear.net 14 points 1 day ago (1 children)
[–] frightful_hobgoblin@lemmy.ml -1 points 1 day ago (1 children)
[–] vovchik_ilich@hexbear.net 14 points 1 day ago (1 children)

So, are there any technological achievements from any AI models that show a trend toward increasingly solving scientific and technical problems?

[–] frightful_hobgoblin@lemmy.ml 0 points 1 day ago (2 children)

Yes. I mean, this is absolute basics.

[–] TheDoctor@hexbear.net 15 points 1 day ago (1 children)

I think you’re going to need to link to some proof or example. You’re clearly using a definition of AI that’s broader than the colloquial definition everyone’s assuming you’re using.

[–] frightful_hobgoblin@lemmy.ml 7 points 1 day ago* (last edited 1 day ago)

Here is the latest issue of Nature Machine Intelligence, to give you a basic idea of the sort of research that constitutes the AI field: https://www.nature.com/natmachintell/current-issue

Topics in Frontiers In Artificial Intelligence: https://www.frontiersin.org/journals/artificial-intelligence/research-topics

Foundations and Trends in Machine Learning: https://www.nowpublishers.com/MAL

[–] vovchik_ilich@hexbear.net 10 points 1 day ago (1 children)
[–] frightful_hobgoblin@lemmy.ml 6 points 1 day ago (1 children)
[–] vovchik_ilich@hexbear.net 8 points 1 day ago (2 children)

The very first link shows that this is an incremental benefit that has been accruing since 2010. Computational tools are useful, but you're mostly providing links to algorithms/learning models that sort pictures for medical purposes and diagnosis (useful and cool), and saying that somehow this means fusion will be solved by AI.

[–] frightful_hobgoblin@lemmy.ml 2 points 1 day ago* (last edited 1 day ago) (1 children)

I'm mostly answering the question I was asked: what are some examples of technical research in the field.

How can we solve plasma control without AI? And why exclude that tool?

[–] vovchik_ilich@hexbear.net 7 points 1 day ago (1 children)

I'm not saying we should exclude any tools. I'm just skeptical of the trend of calling everything AI, attributing all computational advances to AI, and jumping on the bandwagon of businesses trying to oversell any and all computing as AI.

[–] frightful_hobgoblin@lemmy.ml 2 points 1 day ago* (last edited 1 day ago) (2 children)

That's just cosmetic stuff. Why care about what words people use?

[–] GalaxyBrain@hexbear.net 7 points 1 day ago

Because the words people use are very, very important.

[–] Alaskaball@hexbear.net 5 points 23 hours ago* (last edited 23 hours ago) (1 children)

Because that's how you end up with dipshits calling federal funding of the CIA socialism.

Socialism is when the government does stuff. If it does a lot of stuff, that's communism.

[–] frightful_hobgoblin@lemmy.ml 2 points 23 hours ago (1 children)

That's the least plausible slippery-slope argument I have heard this month.

[–] Alaskaball@hexbear.net 4 points 23 hours ago (1 children)

And yet I can go to some TYT video or a DSA meeting and hear some dipshit lib say IRL that socialism is when the government does stuff.

Hell, I can go find a few coworkers who say that too, and then immediately follow it up by calling Kamala a communist and Biden a Maoist.

But I suppose that's A-okay with you since

That's just cosmetic stuff. Why care about what words people use?

[–] frightful_hobgoblin@lemmy.ml 2 points 22 hours ago (1 children)

As you're trying to draw a link between [using neural nets to research plasma control for fusion] and [Biden is a Maoist], I have no reason to take you seriously.

[–] Alaskaball@hexbear.net 4 points 21 hours ago

You're advocating for the dilution of linguistic terminology and making it so you can smear people who hate dogshit stolen art as people who hate medical science.

The only person who shouldn't be taken seriously is you.

[–] frightful_hobgoblin@lemmy.ml 1 points 1 day ago

Like if I go to the Journal of Fusion Energy (https://link.springer.com/journal/10894), the latest article is titled 'Artificial Neural Network-Based Tomography Reconstruction of Plasma Radiation Distribution at GOLEM Tokamak' and the 4th-latest is 'Deep Learning Based Surrogate Model for Fast Soft X-ray (SXR) Tomography on HL-2A Tokamak'. I am sorry if that upsets you, but that's the way the field is.
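For context, a 'surrogate model' in titles like these is a network trained to cheaply approximate an expensive physics computation. A toy sketch of the general idea, with synthetic data and a stand-in function (purely illustrative; not the method of the cited papers):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy surrogate model: learn a cheap approximation of an "expensive"
# computation. The expensive step is faked here with a closed form.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(2000, 3))    # stand-in for diagnostics
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]   # stand-in for slow physics

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
net.fit(X[:1500], y[:1500])               # train on most of the data
print("held-out R^2:", net.score(X[1500:], y[1500:]))
```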

[–] Infamousblt@hexbear.net 11 points 1 day ago (1 children)

I am against the marketing buzz that is pretending (lying) that computers can understand language, video, and images, yes.

I am not against actual AI, but it does not exist yet.

[–] frightful_hobgoblin@lemmy.ml -1 points 1 day ago (1 children)

They can functionally understand a good portion of it.

e.g. I can input a meme plus the words "explain this meme" and it will output an explanation.
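As a concrete illustration, a vision-capable model can be queried for exactly this through the OpenAI Python client; the model name and image URL below are placeholders, and the exact request shape may vary by SDK version:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical example: send an image plus a text prompt to a
# vision-capable chat model and print the returned explanation.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "explain this meme"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/meme.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```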