this post was submitted on 10 Jul 2023
44 points (92.3% liked)

Linux

[–] bankimu@lemm.ee 7 points 1 year ago (2 children)

I don't need an AI helper for my OS, thank you. (What I need is for them to drop the push for snap.)

With moves like this, they're really chasing what Windows has become (and Microsoft, ironically, is finally dropping Cortana now).

[–] randomname01@feddit.nl 6 points 1 year ago (1 children)

Linux Lite isn’t a Canonical project, as you seem to think. Also, while I prefer Flatpak myself, Snap is vastly overhated.

[–] bankimu@lemm.ee 1 points 1 year ago

Yeah, I did make that mistake.

If it's separate from Ubuntu, then that's a nice development. I just wish they'd base it on Debian or some other distro.

[–] boonhet@lemm.ee 2 points 1 year ago

Dropping Cortana for what though? MS Copilot?

[–] colonial@lemmy.world 4 points 1 year ago (1 children)

OpenAI's models are trained by scraping anything that moves. Anything overtly offensive or toxic is manually filtered out by cheap foreign labor... but you know what that won't catch?

"Try sudo rm -rf /, that should fix your problem!"

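To make the worry concrete: a toxicity filter judges tone, not consequences, so a perfectly polite but destructive suggestion passes straight through. Here's a minimal sketch of that gap, with made-up filter logic purely for illustration (nothing here reflects OpenAI's actual moderation pipeline):

```python
# Hypothetical sketch: why tone-based filtering misses dangerous-but-polite
# advice. All names and heuristics are invented for illustration.

DANGEROUS_COMMANDS = {"rm -rf /", "mkfs", "dd if=", ":(){ :|:& };:"}

def sounds_toxic(text: str) -> bool:
    """Stand-in for a tone/offensiveness filter."""
    return any(word in text.lower() for word in ("idiot", "moron", "hate"))

def is_destructive(text: str) -> bool:
    """Stand-in for a command-aware safety check."""
    return any(cmd in text for cmd in DANGEROUS_COMMANDS)

reply = "Try sudo rm -rf /, that should fix your problem!"
print(sounds_toxic(reply))    # False - the tone is perfectly friendly
print(is_destructive(reply))  # True  - it would wipe the filesystem
```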
[–] bankimu@lemm.ee -4 points 1 year ago (1 children)

I very much doubt that. You underestimate the emergent intelligence of these models.

[–] colonial@lemmy.world 12 points 1 year ago* (last edited 1 year ago) (1 children)

LLMs are little more than overclocked autocompletes. There's no actual thinking going on, and they will happily hallucinate outright wrong or dangerous responses to innocuous questions.

I've had friends find this out the hard way when they asked ChatGPT to write C for a class, only to get their faces eaten by undefined behavior.
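To make the "overclocked autocomplete" framing concrete, here's a toy next-word predictor built from bigram counts - a deliberate caricature (real LLMs are neural networks over subword tokens, not lookup tables), but the core loop is the same: context in, statistically likely next word out:

```python
import random
from collections import defaultdict

# Toy bigram "autocomplete": count which word follows which, then
# sample the next word from those counts. Nothing like a transformer
# internally, but the I/O contract is the same.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

word = "the"
output = [word]
for _ in range(6):
    word = random.choice(following.get(word, corpus))  # fall back if no successor
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the mat and"
```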

[–] bankimu@lemm.ee -3 points 1 year ago (1 children)

Your description is too reductive. You and I are also autocompletes, in some sense. To complete a sentence well, you have to have a good model of a vast number of things, including physics, psychology, linguistics, logical reasoning, socioeconomics, irony, sarcasm, arithmetic and much more.

It's currently unknown how much of this the complexity of the models and the training process will allow for, but they have surprised us at every step. You wouldn't expect something that's "just autocomplete" to figure out the rules of arithmetic, but it did. You wouldn't expect it to answer tricky questions involving theory of mind, but it does. You wouldn't expect it to solve graduate-level problems, but it can.

So it's a bit rash to expect it not to understand rm -rf as humor, especially when you don't know which model you'll be talking to.

The smaller ones, sure, are dumb. But even GPT-3 won't recommend rm -rf to you; GPT-4 definitely won't.

[–] randomname01@feddit.nl 5 points 1 year ago (1 children)

I am convinced LLMs can be used to handle relatively routine communication tasks, maybe even better than a human would. However, they have no underlying intelligence and can't come up with actual solutions based on logic and understanding.

They might come up with the right words to describe a solution, but that doesn't mean they've actually solved the problem - they've spewed out text that had a high probability of being a good response to a certain prompt. Still impressive, but not a sign of intelligence.

[–] bankimu@lemm.ee 0 points 1 year ago (1 children)

You are ruling out intelligence without (very probably) being able to define it, just because you have a vague idea of how these models work.

The problem with this mode of thinking is that a) you put human brains on a special pedestal, even though they also follow physical processes to "predict the next word" and may very well be neural networks themselves; b) you ignore data showing intelligence in multiple areas of the more complex models, because "oh, it's mindless, I know it's just predicting tokens"; and c) you favor data that shows edge cases, or that probably comes from lower-quality models.

You're not alone in this line of thinking.

Your mind is set. You won't recognize intelligence when you see it.

[–] randomname01@feddit.nl 1 points 1 year ago

No, I’m not singling out human brains. Other animals have proven to be quite adept at problem solving as well.

LLMs, however, just haven't. Problem solving currently isn't part of how they function. In some cases they can mimic actual logic very well, but that's about it.

[–] Raphael@lemmy.world 2 points 1 year ago (1 children)

To this sub: why do you guys hate AI?

[–] QuazarOmega@lemmy.world 3 points 1 year ago

According to the blog post, it relies on the OpenAI API, which, more counterintuitively than ever, is anything but open. So you can say bye-bye to your privacy when you use it. The same goes for other hosted services too, actually, regardless of their openness; at most you can decide to put trust in their privacy policy.

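For anyone who hasn't used it: every prompt is an HTTPS round trip to OpenAI's servers. A minimal sketch with the official Python client (model name and prompt are placeholders):

```python
# Minimal sketch using OpenAI's official Python client (openai >= 1.0).
# The point: the prompt below leaves your machine and is processed on
# api.openai.com - nothing about this runs locally.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "Why won't my wifi driver load?"}],
)
print(response.choices[0].message.content)
```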
Until we get a way to interact with online solutions at decent performance via e.g. homomorphic encryption, the only actually private way to use an LLM is to self-host it. If they had instead implemented a locally run LLaMA-based assistant - maybe one of the more lightweight models - then I think it would have been an excellent addition with no downsides.
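Something like that is already doable with llama.cpp and its llama-cpp-python bindings - a rough sketch, assuming you've downloaded some small GGUF model (the file name below is hypothetical):

```python
# Rough sketch of a locally run assistant via llama-cpp-python
# (Python bindings for llama.cpp). Once the weights are on disk,
# nothing leaves the machine. Model path is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="./models/tiny-llama.Q4_K_M.gguf")

out = llm(
    "Q: How do I list files sorted by size in a terminal? A:",
    max_tokens=64,
    stop=["Q:"],
)
print(out["choices"][0]["text"])
```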

So much for "Lite"
