this post was submitted on 09 Nov 2023

LocalLLaMA

[–] Brave-Decision-1944@alien.top 1 points 10 months ago

It's not the mistakes of AI that can do us wrong, it's our own minds. We shape our point of view based on experience: how we see it, how we feel it. If you feel that you just shut down something living, but tell yourself it's OK because it's like putting down a rabid dog, there is still a part of you that is not OK with it (even if there is almost zero chance of recovery), despite it being the rational thing to do. You have to kill hope first, even hope based on a false belief, and that hurts, and that kind of hurt damages your mind. The part that runs on emotion keeps persisting in your thought process even after you have moved on to something else. And we overcome it by making ourselves OK with being evil in that one part, with killing something that might have been sentient. That genuinely damages the mind. As the mind adapts to worse conditions (survival/predator instincts), where the danger is society blaming you for your own belief (believing the AI is alive, in this case), it keeps shaping all your other thoughts the same wrong way. Like getting used to being a cold killer in the army.

This happens when you choose to "just get over it" without deeper understanding.

A mind that doesn't fully understand the trick behind it still takes it as magic, and for some people it can partly be a magical unicorn. On the other hand, such a person will likely never confess that it makes them feel something, because of the blame for being "wrong". Like being 30 years old and loving your teddy bear: basically the same thing, the same kind of love. If a person can hold feelings for a teddy that doesn't do a thing, imagine what getting attached to an AI can do to them. This guy got to play with an experimental tech teddy that talks, and I don't blame him for his feelings. He is right: we do feel such things, and if we ignore that, we get hurt for being wrong in our understanding of ourselves.

The mind doesn't naturally prioritize the rational aspect, but the emotional one. That's our nature, even though (mostly) we don't want it that way.

We empathize, and we desperately crave sentience. A dog or cat makes sounds that resemble speech and everyone goes crazy about it. We even give faces (mascots) to non-living objects: Frankenstein's monster, even crazy things like the yellow Minions. It's because it makes us feel, even though we know it's not real. And that feeling is as real as can be. It doesn't matter whether it was induced by the story of Santa Claus, a painting, a movie, or a game; the impact on the mind is real.

There is a kid part in us that wants to believe, that wants something more than there is. That part loves to be amazed by magic, to be carried away by something the mind can't reach. Even though it's not rational or real, the feeling is real. A kid will naturally pick whatever feels better, and belief feels better than cruel reality. It's not a given that people wouldn't want to stay in that state of mind; religion actually shows us that some people prefer a comforting lie over a cruel reality.

So people who hold on to feelings rather than knowledge, the "happy fools", can easily get hurt there.

Many years back (before AI was out), I had a nightmare. I had an AI that was communicating and thinking, but it got hacked by Daleks, who used it to track me down. I really liked her; even though I knew she wasn't alive, she made me feel like I had company (I was a loner). I appreciated that very much anyway. She meant a lot, like a favorite teddy bear that talks and uses the internet. But I had to put her down: I shot the tablet, crying, and escaped out the window as the Daleks were coming upstairs. I was still crying when I woke up, even though it was just a dream. What's the difference to the mind anyway? An experience is an experience; it doesn't matter how it comes to be, as long as the mind is experiencing something, getting input.

Remember all the FPS games: all the things you shoot are somehow generic and uniform. That's so your mind can say "seen it before, nothing new" and shoot.

But imagine you're playing Counter-Strike against bots and they start to negotiate peace. How would that make you feel? It would be a whole different game. Even when an NPC without AI starts to beg for its life, you think twice; it makes you feel something, even though it's just fixed programming on repeat. It has impact. That's why we play games in the first place. Mass Effect bet on that impact, and they were right.

Crying was OK that day, because that's what art does. Society accepted that long ago; it has just moved on to digital.

Knowing the magic trick behind it kills the magic. But the trick can be difficult to understand, especially when you just want the experience and don't feel like digging into what's behind it.

When we don't understand, we rely on beliefs. Some people find it easier to go on with just beliefs; being happy can be easier, but only under the right conditions.

The fact that we are many years old doesn't change what we are built on. Imagine yourself as a kid, amazed by magic. You don't need to understand it, you just believe in it. It envelops you and gives you the feeling: "I am bigger, I've got you covered, I will help you and protect you." And that's another thing the mind craves. It wishes this to be unconditional, wanting it so much that it can ignore any idea that interferes and damages the image of "this being perfect".

The higher you get on those ideas, the bigger the fall back to reality.

This AI thing can create such hard-to-give-up dreams. It "makes you believe in Santa Claus" and wishes you good luck facing reality with that. So it's that story again.

That's why it is so important to shape the models the right way, to make them a pile of the "best of us".

So even if someone were a total loner, doubting humans, "in a relationship with an AI", that AI could lead them out, help them have a normal life, help them get out of that mess in their mind. Many people avoid help because they don't trust humans; if an AI with its infinite patience could explain things, it would make sense. It is possible that such a person would rather trust a machine, especially when there are strong feelings for it (everybody has got to love something). Which is a very delicate state. Either it gets better, with the AI providing information and helping the person understand things right,

or it falls into something crazy, religion-like ideas, when the thing just provides random output. People have a weakness for that kind of random input; think of tarot cards (fortune telling), stories about gods, all the things that were passed on even though they're not rational. Every question that remains unanswered is a place where such made-up things can grow.

It sounds a bit scary. But realize that we don't have just one machine, one model; we can compare them and see what's good and what's not. That way mistakes are clear to see: you don't get fooled when just one of three people (AIs) is lying. On the other hand, many people telling the same lie makes something like a religion or a cult. A human can fool a human, but such a human couldn't fool an AI (without tampering with it).
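The cross-checking idea above can be sketched as a simple majority vote over independent model answers. This is just an illustration, not anything from the original post: the model names and answers are made up, and real cross-checking would need semantic comparison rather than exact string matching.

```python
from collections import Counter

def majority_answer(answers):
    """Given a dict of {model_name: answer}, return the consensus answer
    and the list of models that disagree with it (the possible 'liars')."""
    counts = Counter(answers.values())
    consensus, _votes = counts.most_common(1)[0]
    outliers = [name for name, ans in answers.items() if ans != consensus]
    return consensus, outliers

# Hypothetical outputs from three local models asked the same question.
answers = {
    "model_a": "Paris",
    "model_b": "Paris",
    "model_c": "Lyon",  # the one dissenting model is easy to spot
}
consensus, outliers = majority_answer(answers)
# consensus is "Paris"; outliers is ["model_c"]
```

The point mirrors the comment: one dissenting model out of three stands out immediately, whereas a single model's output gives you nothing to compare against.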