DarkThoughts

joined 1 year ago
[–] DarkThoughts@kbin.social -2 points 9 months ago (5 children)

You need a little gpu server farm for proper models & context sizes though. Single consumer gpus don't have enough vram for that.

[–] DarkThoughts@kbin.social 1 points 9 months ago

See my other reply for some basic info & pointers.

[–] DarkThoughts@kbin.social 16 points 9 months ago* (last edited 9 months ago) (8 children)

The bots (which is what the actual girlfriends or whatever other characters are) aren't the problem. You can find them on chub.ai, for example, or write them yourself fairly easily. The issue is the software, and even more so the hardware. You need something like the mentioned KoboldCpp or oobabooga, and then you'd also need a trained LLM model, which you can get on huggingface.co and load within Kobold or oobabooga — and that's already where it gets complicated.

You also need to understand how context sizes work, because the models need a lot, and I mean A LOT, of VRAM to handle them properly. Basically, the more VRAM you have, the better their contextual understanding — their memory — is. Otherwise you end up with a bot that can only contextualize the last couple of messages.

With paid services like novelai.net, your bots basically run on big server farms with lots of GPUs that pool their VRAM and processing power, giving you "decent" context sizes (imo the greatest weak point of LLMs, and one deeply rooted in how they work) and decent speed. NovelAI also supports front-ends like SillyTavern, which is great for local bot management and settings, regardless of whether you self-host or use a paid service (NOT EVERY PAID SERVICE HAS AN API FOR THIS! OpenAI's ChatGPT technically does too, but they do not allow NSFW content and can ban you for it if caught).
There's a bunch of "free" online services too, like janitorai.com, but most of them have slow speeds, and the chat degrades significantly after just a few messages because they have low context sizes. The better / paid models suffer from this degradation too, but it's slower and less noticeable, at least at first. You can still use those to get an idea of how LLMs work, though.
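To give a rough feel for why context size eats VRAM so quickly: every token in the context keeps a key and a value vector per layer in the so-called KV cache, on top of the model weights themselves. A back-of-the-envelope sketch, using hypothetical numbers for a typical 7B-class model (32 layers, 32 KV heads, head dimension 128, fp16):

```shell
# Rough KV-cache size for a hypothetical 7B-class model.
# These parameters are illustrative, not from any specific model card.
layers=32; kv_heads=32; head_dim=128; ctx=8192; bytes_per_val=2

# Both keys AND values are cached, hence the leading factor of 2:
kv_bytes=$(( 2 * layers * ctx * kv_heads * head_dim * bytes_per_val ))
echo "KV cache at ${ctx} tokens: $(( kv_bytes / 1024 / 1024 )) MiB"
```

That works out to about 4 GiB for the cache alone at an 8k context — before the several GiB the model weights themselves need — which is why consumer cards run out of VRAM so fast as you crank the context up.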

Edit: Should technically be self-explanatory / common sense, but I would advise not to share ANY personal information through online service chats that could identify you as a person!

[–] DarkThoughts@kbin.social 4 points 9 months ago (1 children)

I tried oobabooga and it basically always crashes when I try to generate anything, no matter which model I try. But honestly, as far as I can tell, all the good models require absurd amounts of VRAM, much more than consumer cards have, so you'd need at least a small GPU server farm to reliably host them locally yourself. Unless, of course, you're fine with practically nonexistent context sizes.

[–] DarkThoughts@kbin.social 3 points 9 months ago (1 children)

"AI" is funny anyway because you're basically gaslighting them the whole time to have them behave as they're supposed to.

[–] DarkThoughts@kbin.social 1 points 10 months ago* (last edited 10 months ago)

Seems to be librist that's the issue here. The nobara-amdgpu-config package issue also errors out on the first command though, skipping a package.

Problem: cannot install the best update candidate for package pipewire-codec-aptx-0.3.69-1.fc38.x86_64

  • nothing provides pipewire >= 1.0.1 needed by pipewire-codec-aptx-1.0.1-1.fc38.x86_64 from rpmfusion-free-updates

This also happened on the following nobara-sync command. And the second command gives:

error: package nobara-amdgpu-config is not installed

Went through with it anyway but I feel that's potentially one of those things that eventually causes issues further down the line until the system doesn't boot anymore...
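For anyone hitting the same errors, a couple of diagnosis commands might narrow down where the breakage is (a sketch assuming a Fedora/Nobara system; guarded so it degrades gracefully elsewhere, and `dnf repoquery` / `rpm -q` don't need root):

```shell
# Check which package, if any, actually provides the capability
# the updater is asking for:
if command -v dnf >/dev/null 2>&1; then
  dnf repoquery --whatprovides 'pipewire >= 1.0.1'
else
  echo "dnf not found (not a Fedora-based system)"
fi

# Confirm whether nobara-amdgpu-config was ever installed at all:
if command -v rpm >/dev/null 2>&1; then
  rpm -q nobara-amdgpu-config || echo "nobara-amdgpu-config is not installed"
else
  echo "rpm not found"
fi
```

If the repoquery comes back empty, the rpmfusion package was built against a pipewire version that isn't in the enabled repos yet, which would explain the "nothing provides" error.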

[–] DarkThoughts@kbin.social 1 points 10 months ago (2 children)

I already fail at the first one:

sudo dnf remove -y blender discord telegram-desktop oneapi-level-zero librist libndi-sdk firefox onlyoffice-desktopeditors

Which errors out:

Error:
Problem: The operation would result in removing the following protected packages: plasma-desktop
(try to add '--skip-broken' to skip uninstallable packages)

[–] DarkThoughts@kbin.social 1 points 10 months ago (1 children)

Using a Chrome user agent in FF can result in broken video / audio playback on various sites.

[–] DarkThoughts@kbin.social 3 points 10 months ago

“You’re just a tiny minority, most people like the change”

They did the same shit with their redesign and its idiotic floating tabs. They look ugly and they even take up way more space, while displaying less information, for literally no reason. They argued they needed this change for future FF features, which, several years later, have yet to appear. Here's a quote from "Paul", one of their moderators, from almost 3 years ago:

Hi,

We bring a modernized and differentiated look to tabs since Firefox 89 in order to create a signature Firefox look and experience. This major redesign will help us enable more use cases and features in the future.

https://support.mozilla.org/en-US/questions/1338169

I love Firefox and will continue to use it, but its decline is a mixture of Google's aggressive embrace, extend, and extinguish approach and straight up continued mismanagement of the Mozilla Corporation.

[–] DarkThoughts@kbin.social 27 points 10 months ago (4 children)

I'm surprised how many people use bleach / bleach-based products. You really don't need such aggressive stuff most of the time. Regular cleaners work just fine. Or is that a US thing, where bleach is in every cleaning product or something?

[–] DarkThoughts@kbin.social 1 points 10 months ago* (last edited 10 months ago) (1 children)

That link does not work on kbin, as it links to "https://kbin.social/c/kde@lemmy.kde.social" instead of "https://kbin.social/m/kde@lemmy.kde.social".

[–] DarkThoughts@kbin.social 10 points 10 months ago

If you mean low-poly, low-texture-resolution 3D games, then I would say BallisticNG, which is directly inspired by Wipeout on the original PSX. Also, while not quite the same, Valheim has a similar pixelated texture style with low-poly models (by today's standards at least).
