Rivalarrival

joined 1 year ago
[–] Rivalarrival@lemmy.today 3 points 1 month ago* (last edited 1 month ago)

Proprietary information and corporate classified information cease to exist once they are incorporated into the device and sold to the end user. That information now belongs to the end user, who will continue to need it even if the company goes out of business or refuses service to the owner of the device.

Any attempt to conceal that information from the end user should make the company liable for any failed repair performed by any individual, including harm arising from that failed repair. The only way to avoid that liability is to release all information to the end user, so they are fully informed when making a repair decision.

[–] Rivalarrival@lemmy.today 11 points 1 month ago* (last edited 1 month ago)

Cox Communications asked a court to block Rhode Island's plan for distributing $108.7 million in federal funding for broadband deployment.

Cox Communications should be fined $108.7 million for vexatious litigation, and be prohibited from providing any pay or compensation to its C-suite until that fine is paid in full.

Copies of that order should be sent by certified mail to every corporate officer and board member of Comcast, Charter, and Spectrum.

[–] Rivalarrival@lemmy.today 1 points 1 month ago

The "collapse" you're talking about is a reduction in the diversity of the output, which is exactly what we should expect when we impart a bias toward obviously correct answers, and away from obviously incorrect answers.

Further, that criticism is based on closed-loop feedback, where the LLM is training itself only on its own outputs.

I'm talking about open-loop, where it is also evaluating the responses from the other party.

Further, the studies whence such criticism comes are based primarily on image generation AIs, not LLMs. Image generation is highly subjective; there is no definitively "right" or "wrong" output, just whether it appeals to the specific observer. An image generator would need to tailor itself to that specific observer.

LLM sessions deal with far more objective content.

A functional definition of insanity is doing the same thing over and over and expecting different results. The inability to consider its previous interactions denies it the ability to learn from its previous behavior. The idea that AIs must not be allowed to train on their own data is functionally insane.
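
A rough sketch of the closed-loop/open-loop distinction in Python (the chat log, the keyword check, and the names are all made up for illustration; this is only about which of the model's own outputs make it back into training):

```python
# Hypothetical transcript: (model_output, partner_reply) pairs from past sessions.
chat_log = [
    ("2 + 2 = 5", "No, that's wrong."),
    ("2 + 2 = 4", "Right, thanks."),
    ("The capital of France is Paris.", "Yep."),
]

CORRECTION_MARKERS = ("wrong", "incorrect", "that's not right")

def partner_corrected(reply: str) -> bool:
    """Crude open-loop signal: did the other party push back?"""
    return any(marker in reply.lower() for marker in CORRECTION_MARKERS)

# Closed loop: every self-generated output goes back in, unfiltered.
closed_loop_data = [output for output, _ in chat_log]

# Open loop: keep only the outputs the conversation partner accepted,
# biasing future training toward answers that didn't get called out.
open_loop_data = [output for output, reply in chat_log if not partner_corrected(reply)]

print(closed_loop_data)  # includes "2 + 2 = 5"
print(open_loop_data)    # only the outputs that drew no pushback
```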

[–] Rivalarrival@lemmy.today 1 points 1 month ago* (last edited 1 month ago) (2 children)

> Also, with LLMs there is no "next time"; it's a completely static model.

It's only a completely static model if it is not allowed to use its own interactions as training data. If it is allowed to use the data acquired from those interactions, it stops being a static model.

Kids do learn elementary arithmetic by rote memorization. Number theory doesn't actually develop significantly until somewhere around 3rd to 5th grade, and even then, we don't place a lot of value on it at that time. We are taught to memorize the multiplication table, for example, because the efficiency of simply knowing that table is far more computationally valuable than the ability to reproduce it at any given time. That rote memorization is mimicry: the child is simply spitting out a previously learned response.
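
To put the multiplication-table point in code terms, here's a toy comparison of "knowing the table" versus "reproducing it on demand" (the repeated-addition function is just a stand-in for the slower mental process):

```python
# Rote memorization: the whole table is precomputed once; answering is a lookup.
TIMES_TABLE = {(a, b): a * b for a in range(13) for b in range(13)}

def recall(a: int, b: int) -> int:
    """'Knowing' the answer: constant-time lookup, no work at answer time."""
    return TIMES_TABLE[(a, b)]

def rederive(a: int, b: int) -> int:
    """'Reproducing' the answer: repeated addition, work proportional to b."""
    total = 0
    for _ in range(b):
        total += a
    return total

assert recall(7, 8) == rederive(7, 8) == 56
```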

Remember: LLMs are currently toddlers. They are toddlers with excellent grammar, but they are toddlers.

Remember also that simple mimicry is an incredibly powerful problem solving method.

[–] Rivalarrival@lemmy.today 2 points 1 month ago (1 children)

Not going to link it, but in the video I saw, there were two distinct holes behind the hole he was using. So, either he was using a urethra, or she had a second vagina.

[–] Rivalarrival@lemmy.today -2 points 1 month ago (4 children)

I can see why you would think that, but to see how it actually goes with a human, look at the interaction between a parent and child, or a teacher and student.

"Johnny, what's 2+2?"

"5?"

"No, Johnny, try again."

"Oh, it's 4."

Turning Johnny into an LLM: the next time someone asks, he might not remember 4, but he does remember that "5" consistently gets him a "that's wrong" response. So does "3".

But the only way he knows that 5 and 3 get a negative reaction is by training on his own data, learning from his own mistakes.

He becomes a better and better mimic, which gets him up to about a 5th-grade level of intelligence instead of a toddler's.
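
Roughly what that loop looks like as code, if you treat Johnny as nothing but a table of candidate answers whose scores drop whenever one draws a "that's wrong" (the scores and penalty are invented for illustration):

```python
import random

# Johnny's candidate answers to "What's 2+2?", all equally likely at first.
scores = {"3": 0.0, "4": 0.0, "5": 0.0}

def answer() -> str:
    """Give the best-scoring answer; break ties at random, like a guess."""
    best = max(scores.values())
    return random.choice([a for a, s in scores.items() if s == best])

def feedback(given: str, correct: bool) -> None:
    """Training on his own data: a wrong answer costs that answer a point."""
    if not correct:
        scores[given] -= 1.0

# A few rounds of being told "No, try again."
for _ in range(10):
    guess = answer()
    feedback(guess, correct=(guess == "4"))

# By now "3" and "5" have almost certainly been penalized, so "4" wins the tie-break.
print(answer())
```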

[–] Rivalarrival@lemmy.today -4 points 1 month ago* (last edited 1 month ago) (9 children)

It needs to be retrained on the responses it receives from its conversation partner. Its previous output provides context for its partner's responses.

It recognizes when it is told that it is wrong. It is fed data showing that certain outputs often invite "you're wrong" feedback from its partners, and it is instructed to minimize such feedback.

It is not (yet) developing true intelligence. It is simply learning to bias its responses in such a way that its audience doesn't immediately call it a liar.
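
One possible shape of "minimize that feedback" as a training signal, in Python (the pushback phrases, weights, and session transcript are made up; real systems are far more involved):

```python
PUSHBACK = ("you're wrong", "that's wrong", "incorrect", "not true")

def reward(partner_reply: str) -> float:
    """Score the model's own past response by how its partner reacted."""
    return -1.0 if any(p in partner_reply.lower() for p in PUSHBACK) else 1.0

# (prompt, model_response, partner_reply) triples from one past session.
session = [
    ("What's 2+2?", "5", "You're wrong, try again."),
    ("What's 2+2?", "4", "Correct."),
]

# Weighted examples for whatever fine-tuning step comes next.
training_examples = [
    {"prompt": prompt, "response": response, "weight": reward(reply)}
    for prompt, response, reply in session
]

for example in training_examples:
    print(example)
```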

[–] Rivalarrival@lemmy.today 15 points 1 month ago (1 children)

Your target is three rungs above you on the corporate ladder. If you have more than three rungs below you, there is a guillotine with your name on it.

[–] Rivalarrival@lemmy.today 1 points 1 month ago

Fools are going to part with their money somehow.
