Mikina

joined 2 years ago
[–] Mikina@programming.dev 12 points 11 months ago* (last edited 11 months ago)

I'd also add that IMO it's heavily caused by the misalignment of social network personalization algorithms. It's very probable that someone developed an ML algorithm during the early years of FB/YT/Google (not an LLM, just some kind of feedback-driven ML) that takes the data they have about you as input and selects what posts to show you next to maximize the time you spend scrolling in the app.
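Just to illustrate the kind of feedback loop I mean, here's a minimal hypothetical sketch - a toy epsilon-greedy bandit where the only reward signal is watch time. All the names and numbers here are made up; a real system would use far richer user features and models, but the objective is the same single number:

```python
import random

# Hypothetical content categories; the only reward is seconds of watch time.
categories = ["news", "memes", "outrage", "hobby"]
value = {c: 0.0 for c in categories}   # running estimate of watch time
count = {c: 0 for c in categories}

def pick_next_content(epsilon=0.1):
    """Epsilon-greedy: mostly exploit whatever glues this user to the
    screen, occasionally explore something new."""
    if random.random() < epsilon:
        return random.choice(categories)
    return max(categories, key=lambda c: value[c])

def record_feedback(category, seconds_watched):
    """Update the estimate from observed behavior; no human reviews this."""
    count[category] += 1
    value[category] += (seconds_watched - value[category]) / count[category]

def simulated_watch_time(category):
    # Toy user model (pure assumption): this user lingers on outrage content.
    base = {"news": 20, "memes": 35, "outrage": 60, "hobby": 30}
    return max(0.0, random.gauss(base[category], 10))

# After enough sessions the feed converges on whatever maximizes screen time,
# with zero regard for what that content does to the user.
for _ in range(10_000):
    c = pick_next_content()
    record_feedback(c, simulated_watch_time(c))
print(max(categories, key=lambda c: value[c]))  # most likely: "outrage"
```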

They have an unimaginable amount of data (literally billions of active users), and it could've been running and getting better for the last decade.

The algorithm gets better and better at gluing you to the screen, at manipulating and changing people. My theory is that one of the best ways to keep someone glued to a social network is radicalization and an introduction to a conspiracy theory. It removes you from the "normal" people around you IRL, because you're now weird; you feel smart because you've "figured out the truth"; you don't spend time with the people around you or read "traditional" media, because they're lying and don't get you; and the only safe space you have left is the echo chamber on the social network. That sounds like a pretty good recipe for keeping people interacting on the platform, and there's not really a way to prevent it, assuming it's an ML algorithm driving it. No one knows how it works, and it optimizes for a single goal - maximize app time at all costs.

Just look at how good some ML models are at the "text -> image" task. Now imagine one with billions of people and a decade to experiment on, with the task "person -> next content to show". It's horrifying to think about what it could manipulate you into, and it can be even better at its task than the image models, because it has had exponentially more data and room to experiment in real time on real people.

Also - there's no way to fight back. Even if you know about it, there are tens of thousands of people like you who are also "immune" to this approach. But the ML algorithm gets to experiment on them too, and if there's a way to manipulate even them, it will figure it out, because it knows which approaches won't work on people like you. The only way to prevent this is to not use anything with a personalized feed - no Google search, no FB wall, no YT recommendations, etc. Radicalization probably isn't the outcome in every case, because the goal is to keep you in the app, not to radicalize you. For now, at least. Thankfully, the people managing the biggest social networks are reasonable people who are just running a business, and they have no reason to change the goal of the algorithm into anything other than screen time, right?

[–] Mikina@programming.dev 2 points 11 months ago

Thank you, it was an interesting read.

Unfortunately, as I was looking more into it, I stumbled upon a paper that points out some key problems with the proof. I haven't dug into it further, and tbh my expertise in formal math ends at vague memories from my CS degree almost 10 years ago, but the points do seem to make sense.

https://arxiv.org/html/2411.06498v1

[–] Mikina@programming.dev 183 points 11 months ago (73 children)

Lol. We're as far away from getting to AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can't even get its facts straight without bullshitting.
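To make the "statistical text prediction" point concrete, here's a toy sketch of the generation loop. A real LLM uses a transformer over subword tokens instead of bigram counts, but the loop has the same shape - score candidates, sample one, append, repeat - and nowhere in it is there a fact lookup:

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat saw the rat".split()

# "Training": count which token follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token(prev):
    """Sample the next token proportionally to how often it followed prev."""
    candidates = following[prev]
    if not candidates:
        return random.choice(corpus)  # fall back to anything ever seen
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights)[0]

# Generation: pure statistics over past text, no notion of truth anywhere.
out = ["the"]
for _ in range(8):
    out.append(next_token(out[-1]))
print(" ".join(out))
```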

If we ever get it, it won't be through LLMs.

I hope someone finally proves mathematically that it's impossible with current algorithms, so we can be done with this bullshitting.

[–] Mikina@programming.dev -1 points 11 months ago

Kagi can do this by default, without you having to hope that a random extension doesn't one day decide to update itself into an infostealer. In general, apart from the few super popular ones, installing a random extension to do some niche thing you need is a pretty big risk.

I speak from experience: a few years ago, the developer of Nano Defender, which was at the time better at avoiding anti-adblock scripts, decided to sell/hand off the extension to someone who turned it into a cookie/info stealer. It got through as an automatic update and started wreaking havoc on everything I was logged into. Since then, I avoid extensions as much as possible.

As for Reddit having the answers - nah. In the year since I stopped using it, I've never had an issue finding what I need without Reddit, and in the few cases where I caved and resorted to turning off my VPN to look at a thread, it was a mix of adverts masquerading as comments to push a product, deleted or edited relics of the exodus, and straight-up wrong suggestions; in general, it didn't help me at all.

[–] Mikina@programming.dev 2 points 11 months ago

I had issues setting up Fedora on NVIDIA for gaming (skill issue, probably), but switching to Nobara fixed all of them, and I've been single-booting for almost a year since then.

[–] Mikina@programming.dev 6 points 11 months ago

In my experience, all the Linux-for-mobile distros I've tried on my PinePhone were a really bad experience, with a lot of issues. But the option is there, and while it wasn't reliable enough to use as a daily phone, I still carry it in my bag with a dock and Kali, which sometimes comes in handy during pentesting.

[–] Mikina@programming.dev 1 points 11 months ago

I don't think I've ever met a cheater in any of those games (it was more than 15 years ago). And if I did, since it was one of the more active servers, there was usually an admin available. I don't remember it being an issue.

[–] Mikina@programming.dev 2 points 11 months ago* (last edited 11 months ago) (2 children)

I spent a large part of my childhood, around ages 9-12, playing SW: Jedi Outcast and Jedi Academy multiplayer. I hung out on a JAVA server with people I met there and was part of a clan with regular practices that attended tournaments, but most of the fun was just chilling on the server, exploring the plethora of custom maps filled with secrets, and having a great time.

The experience is something I can't imagine in this day and age, especially because matchmaking killed this kind of friendship between random players, along with most of the social aspects of games. The Free For All servers were mostly about just chilling, with combat only happening in agreed-upon duels that had their own unwritten rules/etiquette that everyone respected. The community was amazing.

[–] Mikina@programming.dev 18 points 11 months ago* (last edited 11 months ago)

From what I remember from college, I think what you're talking about is mostly intrinsic vs. extrinsic motivation, on which there's a lot of research. Just adding this in case someone wants to look into it more and is looking for keywords.

It's one of those things worth knowing about, because you can work with it to motivate yourself better, and it's one of the more important topics in game design. So, in general, a useful piece of psychology knowledge.

[–] Mikina@programming.dev 3 points 11 months ago

It was only two years, and it was basically half normal computer science classes and half working with engines: making a game with classmates and industry mentors throughout the year, and learning about rendering and AI behaviors (the video game kind, not LLMs). The graphics part was about shaders, lighting, post-processing, global illumination, renderers, and math, not modeling. It was mostly technical, but we had some game design classes.

[–] Mikina@programming.dev 6 points 11 months ago* (last edited 11 months ago) (1 children)

Getting AI to not bullshit will require an entirely different set of algorithms than LLMs, or ML in general. ML by design approximates answers, and you don't use it for anything that's deterministic and has a single correct answer. So, in that regard, we're basically at square 0.

You can keep slapping checks on top of the random text prediction it gives you, but if you have a way of checking whether something is really true for every case imaginable, then you can probably just use that to generate the reply instead, and that checker can't be something that's also ML-based/random.
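Here's a tiny made-up sketch of why that "generate, then check" pattern is circular. All the names are hypothetical; the point is just that a verifier that can decide correctness for every case already contains the answer, so the probabilistic generator adds nothing:

```python
import random

# Hypothetical deterministic knowledge base the verifier relies on.
facts = {"capital of France": "Paris", "boiling point of water (C)": "100"}

def llm_guess(question):
    """Stand-in for a stochastic text predictor; may confidently bullshit."""
    return random.choice(list(facts.values()) + ["Berlin", "42"])

def verify(question, answer):
    """A checker that works 'for every case imaginable' needs ground truth."""
    return facts.get(question) == answer

def generate_and_check(question, attempts=10):
    """The 'slap checks on top' approach: sample until the verifier agrees."""
    for _ in range(attempts):
        answer = llm_guess(question)
        if verify(question, answer):
            return answer
    return None

def answer_directly(question):
    """But the verifier's ground truth can just produce the answer itself."""
    return facts.get(question)

print(generate_and_check("capital of France"))  # "Paris", eventually
print(answer_directly("capital of France"))     # "Paris", immediately
```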

[–] Mikina@programming.dev 5 points 11 months ago (1 children)

This is my favorite sentence from his replies.

I've learned today that you are sensitive to ensuring human readability over any concerns in regard to AI consumption
