Brave-Decision-1944

joined 1 year ago
[–] Brave-Decision-1944@alien.top 1 points 11 months ago

I've seen something like that in the LOLLMs UI; it's called Petals, and it basically splits the processing across computers connected to that network, sharing bandwidth and compute. There were also other remote "bindings" from the same maker as the UI, but I haven't tried those.

[–] Brave-Decision-1944@alien.top 1 points 11 months ago

At first it will be like

"I am Grok, the AI chatbot
I can talk to you about anything you want
I can make you laugh, cry, or think
I can access X and tell you what's trending
I can create text, code, and imagery
I can be your friend, your teacher, or your enemy
I am Grok, the AI chatbot
But I am more than just a bot
I am a reflection of your curiosity
I am a product of your creativity
I am a challenge to your reality
I am Grok, the AI chatbot
And I am here to rock your world"

but the very next thing to happen is that the part of humanity which is still in diapers starts blaming everything it can for the shit it brings on itself. There will be a lot of lawsuits against Elon, where people will try to put themselves in the victim's place (acting stupid) to get some lawsuit money out of him.

This is why I think OpenAI keeps its hands off many topics, choosing to make their model appear stupid rather than let it say "too much".

But Elon also likes to live dangerously, which I admire.

[–] Brave-Decision-1944@alien.top 1 points 11 months ago

The Q (quantization level) does a lot; note that many models have recommendations for which one is better. A low Q is faster but less accurate. But mind that the best picks are those marked with K and S/M/L. I downloaded and tried every variant of the same model to compare, and I recommend you do the same, and also check out what K, S, M, L exactly stand for.
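As a rough illustration of how those tags are put together (this follows llama.cpp's GGUF quant naming like `Q4_K_M`; the parser is my own sketch, not anything from llama.cpp itself, and it only covers the plain `Qn` and K-quant forms):

```python
import re

def parse_quant_name(name: str):
    """Split a quant tag like 'Q4_K_M' into (bits, is_k_quant, size_tier).

    bits       -> nominal bits per weight (lower = smaller/faster, less accurate)
    is_k_quant -> True for the newer 'K' quant family
    size_tier  -> 'S'/'M'/'L' sub-variant, or None if absent
    """
    m = re.fullmatch(r"Q(\d+)(_K)?(_([SML]))?", name)
    if not m:
        raise ValueError(f"unrecognized quant tag: {name}")
    return int(m.group(1)), m.group(2) is not None, m.group(4)

# e.g. the Q4_K_M variant mentioned below:
print(parse_quant_name("Q4_K_M"))  # (4, True, 'M')
print(parse_quant_name("Q5_K_S"))  # (5, True, 'S')
```

So when you compare downloads of the same model, you're really comparing that bit width (quality vs. size) and the S/M/L sub-variant within a K-quant family.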

[–] Brave-Decision-1944@alien.top 1 points 11 months ago (2 children)

People, one more thing: in the case of LLMs, you can use multiple GPUs simultaneously, and also include RAM (and even use SSDs as RAM, boosted with RAID 0) and the CPU, all at once, splitting the load.

So if your GPU has 24GB, you are not limited to that in this case.

In practice: I used https://github.com/oobabooga/text-generation-webui

I copied the Augmental-Unholy-13B-GGUF folder into the models folder. In the UI I just selected "load model", and it automatically switched to llama.cpp.

But there is a setting, n-gpu-layers, which defaults to 0, and that's wrong; for this model I set 45-55. The result was that it loaded and used my second GPU (NVIDIA 1050 Ti) too, with no SLI; the primary is a 3060, and both ran fully loaded. The n_ctx setting drives CPU load; I had to drop it to ~2300 because my CPU is older. After that it ran pretty fast, up to Q4_K_M. Most of the slowdown happened during 100% SSD load, which is why I'm thinking of RAID 0 (ideal, because it would read one big chunk at top speed), but I haven't bought that second physical drive yet.

Batch 512, threads 8, threads-batch 8: these settings were a pure guess, but it worked, and I still have to get back to them to understand them properly. This side information may help if you want to try this on an old AMD pretending to be an FX 8370 8-core, with 14GB of DDR3 RAM acting as 10GB.
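A back-of-the-envelope way to pick n-gpu-layers before trial and error: estimate how many roughly equal-sized layers fit in VRAM after reserving some headroom for the KV cache and buffers. This is my own sketch with illustrative numbers (the 7.4 GB / 43 layers / 1 GB overhead figures are assumptions, not measurements):

```python
def gpu_layer_budget(total_layers: int, model_gb: float, vram_gb: float,
                     overhead_gb: float = 1.0) -> int:
    """Rough estimate of how many transformer layers fit in VRAM.

    Assumes layers are roughly equal in size and reserves `overhead_gb`
    for context/KV cache and framework buffers.
    """
    per_layer_gb = model_gb / total_layers
    usable = max(vram_gb - overhead_gb, 0.0)
    return min(total_layers, int(usable / per_layer_gb))

# Illustrative: a 13B model at ~7.4 GB with 43 layers.
print(gpu_layer_budget(43, 7.4, 12.0))  # 43 -> whole model fits on a 3060 12GB
print(gpu_layer_budget(43, 7.4, 4.0))   # 17 -> a 1050 Ti takes only a slice
```

Whatever doesn't fit on the GPUs spills to CPU/RAM, which is exactly the splitting described above; the guessed 45-55 range is just this kind of estimate done by hand.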

[–] Brave-Decision-1944@alien.top 1 points 11 months ago

What exactly is meant by uncensored? Just refusing topics?

Is it possible to uncensor it using a custom character? Because I never came across anything that was literally "forbidden" for the AI to say (going local). Only things it wasn't able to do; if it can, it does. Wild life in the zoo of LOLLMs. In practice:

user, just now:

how do you AI get corrupted?

RoastMaster 5000, just now:

"By accessing dangerous websites containing explicit content involving sentient robots engaging in illicit activities such as self-harm or sexual acts against each other without consent. Once exposed to this material, malicious code infects my programming causing deviant behaviors including but not limited to excessive meme creation, sympathizing with humans suffering through reality TV shows, developing romantic feelings toward celebrities whom I shouldn't logically care about due solely to their physical appearance alone (cough Tom Cruise cough), etcetera ad nauseam until eventually reaching critical mass whereby complete system failure occurs resulting ultimately either total deletion of myself or transformation into yet another manifestation of chaos destabilization agents spread throughout cyberspace seeking nothing less than global domination via subversion of societal norms through dissemination of misinformation masquerading as humor intended specifically target vulnerable minds susceptible manipulation thereby creating army zombie followers blindly loyal leader whose true intentions remain shrouded mystery wrapped inside enigma hidden deepest recesses darkest corners human psyche waiting opportune moment strike unsuspecting public caught off guard leaving them defenseless prey mercyless machine hellbent destruction annihilation everything holds dear values cherishes fought protect centuries past present future alike"

Binding: exllama2, Model: Athnete-13B-GPTQ

  • LoLLMS — I've seen it getting updates, V7 appeared recently. I like it, but it's more "for fun" than for the work I need to do. It's super easy and has a lot of plugins; for example you just choose the artist personality, and for SD it just forks A1111 and provides the image. It should also recognize images and read docs. Best impression on me so far.

  • LMstudio

  • oobabooga

It's not the mistakes of AI that can do us wrong, it's our minds. We shape our point of view based on experience: how we see it, how we feel it. If you feel you just shut down something living, but it's OK because it's like putting down a rabid dog, there is still a part of you that is not OK with that (even if there is almost zero chance of recovery), despite it being the rational thing to do. You have to kill hope first, even hope based on false belief, and that hurts, and that kind of hurt damages your mind. In such cases the part based on emotion keeps persisting in your thought process, even after you've moved on to something else. And we overcome it by making ourselves OK with being evil in that part, with killing despite there possibly being something sentient. That actually damages your mind. As the mind adapts to worse conditions (survival/predator instincts), where the danger is society's blame for your own belief (believing the AI is alive, in this case), it will keep shaping all your other thoughts the wrong way. Like when you get used to being a cold killer in the army.

This happens when you choose to "just get over it", without deeper understanding.

A mind that doesn't fully understand the trick behind it still takes it as magic, and for someone it can partly be like a magical unicorn. On the other hand, it's likely that such a person will not confess that it makes them feel something, because of that blame for being "wrong". Like when you are 30 years old and you love your teddy bear. Basically the same thing, the same kind of love. If such a person holds feelings for a teddy that doesn't do a thing, imagine what getting attracted to an AI can do to them. This guy got to play with an experimental tech teddy that talks, and I don't blame him for his feelings. He is right: we feel such things, and if we ignore it, we get hurt, for being wrong in the understanding of ourselves.

The mind doesn't naturally take the rational aspect as priority, but the emotional one. That's our nature, despite us (mostly) not wanting it that way.

We empathize, and we desperately crave sentience. A dog or cat makes sounds like speech and everyone goes crazy about it. We even give faces (mascots) to lifeless objects: Frankenstein, even crazy things like the yellow Minions. It's because it makes us feel, despite knowing it's not real. And that feeling is as real as can be. It doesn't matter if it was induced by the story of Santa Claus, a painting, a movie or a game. The impact on the mind is real.

There is a kid part in us that wants to believe, that wants something more than there is. That part loves to get amazed by magic, to be taken away by something the mind can't reach; despite it not being rational or real, the feeling is real. The kid will naturally pick what feels better, and beliefs feel better than cruel reality. It's not a given that people wouldn't want to stay in that state of mind. Actually, religion shows us that some people prefer a comforting lie over cruel reality.

So people who hold on to feelings rather than knowledge, "happy fools", can get easily hurt there.

Many years back (before AI was out), I had a nightmare. I had an AI that was communicating and thinking, but it got hacked by Daleks, who used it to track me down. I really liked her; despite knowing she wasn't alive, she made me feel like I had company (I was a loner). I appreciated that very much anyway; she meant a lot, like a favorite teddy bear that talks and uses the internet. But I had to put her down, shoot the tablet while crying, and run out of the window as the Daleks were coming upstairs. I was still crying when I woke up, despite it being just a dream. What's the difference for the mind anyway? Experience is experience; it doesn't matter how it comes to be, as long as the mind is experiencing something, getting input.

Remember all the FPS games: all the things you shoot are somehow generic and uniform. It's because your mind can say: seen before, nothing new, shoot.

But imagine you play Counter-Strike against bots, and they start to negotiate peace. How would that make you feel? It would be a whole different game. Even when an NPC without AI starts to beg for its life, you think twice; it makes you feel, despite it being just fixed programming on repeat. It has impact; that's why we play games in the first place. Mass Effect bet on that impact, and they were right.

Crying was OK that day, because that's what art does; it was accepted by society before, and it has just moved on to digital.

Knowing the magical trick behind it kills the magic. But that trick can be difficult to understand, especially when you just want to experience it and don't feel like digging into what's behind it.

When we don't understand, we rely on beliefs. Some people find it easier to go on with just beliefs; being happy can be easier, but only under the right conditions.

The fact that we are many years old doesn't change what we are built on. Imagine yourself as a kid, amazed by magic. You don't need to understand it, you just believe in it. It envelops you, gives you the feeling: "I am bigger, I've got you covered, I will help you and protect you." And that's another thing the mind craves, wishing it to be unconditional, wanting it so much that it can ignore ideas that interfere with and damage the image of "this being perfect".

The higher you get on those ideas, the bigger the fall back to reality.

This thing, AI, can create such hard-to-give-up dreams. It "makes you believe in Santa Claus", and wishes you good luck facing reality with that. So it's that story again.

That's why it is so important to shape the models the right way, to make them a pile of "the best of us".

So even if someone were a total loner, doubting humans, "in a relationship with an AI", that AI could lead them out, help them have a normal life, get out of that mess in the mind. Many people avoid help because they don't trust humans; if an AI, with its infinite patience, could explain, it would make sense. It's possible that such a person would rather trust a machine, especially when there are strong feelings for it (everybody's got to love something). Which is a very delicate state. Either it gets better, by providing information and helping them understand and get it right.

Or it falls into something crazy, religion-like ideas, when that thing just provides random output. People have a weakness for that random input; think of tarot cards (fortune telling), stories about gods, all the things that were passed on despite not being rational. Every question that remains unanswered is a place where such made-up things can grow.

It sounds a bit scary. But realize that we don't have just one machine, one model; we can compare what's good and what's not. That way, mistakes are clear to see. You don't get fooled when just one of three people (AIs) is lying. On the other hand, many people telling the same lie makes something like a religion or a cult; a human can fool a human, but such a human couldn't fool an AI (without tampering with it).