Gaywallet

joined 3 years ago
[–] Gaywallet@beehaw.org 4 points 1 week ago* (last edited 1 week ago)

This solves nothing; the exact same people will just move to another company.

The only way to effectively stop this kind of behavior is with regulation. The following types of regulation can help curb this behavior:

  1. Steep financial penalties that are actually enforced. These need to be pegged directly to total value or profitability over a given time frame. A fixed dollar amount will quickly be outpaced by consolidation, and gigantic companies can effectively ignore it - even a 100 million dollar fine is a rounding error for companies the size of Amazon, Nvidia, and so forth. The EU has been good at architecting this kind of legislation.
  2. Strong rewards for whistleblowing on criminal behavior. Note that this is distinct from prosecuting the individuals responsible, because that is very difficult to prove in court: with basic information-warfare tactics, people can be glass-cliffed, made into patsies, or otherwise scrubbed from any record of their involvement, requiring extremely in-depth investigations to untangle.
  3. Strong criminal prosecution for repeat offenders, plus funding for real investigations of any company found liable for penalties or suspected of bad behavior. Some people hop from company to company doing the same thing over and over again. When we focus on the companies rather than the people behind the bad behavior, those people get a slap on the wrist at most and continue to damage society. We need to more aggressively profile and prosecute individuals with a track record of malicious behavior. As already mentioned, this is unfortunately the hardest of the three to legislate and enforce, since what counts as "malicious" behavior is debatable and difficult to quantify.
[–] Gaywallet@beehaw.org 31 points 1 month ago (1 children)

A bit strongly negative on phone use in general, but I'd say it's a good overview of why most major social media websites (where algorithmic recommendations are a heavy-handed component of one's feed) are bad, and why the traditional approach (time-based sorting with subscriptions or feed curation) is better, though not perfect.

[–] Gaywallet@beehaw.org 3 points 1 month ago

Well... yes and no. Violence can have both positive and negative effects on a movement; it really depends on what kind of violence, who is committing it (racism, sexism, etc. all come into play here), and what kind of resistance they are met with. Here are two great reviews which outline what the literature has to say on this.

[–] Gaywallet@beehaw.org 8 points 1 month ago* (last edited 1 month ago) (1 children)

It could be the person was already in a problematic situation with family and friends, and they just need to blame someone or something and don’t want to admit the real problems. Kind of what often happened back in the day with videogames getting blamed for killing humans.

This is not a fair analogy for what is going on here. Blaming video games harkens back to the times when music or other countercultural media was blamed for behavior. We have a lot of literature showing that passive consumption of media doesn't actually affect people in the ways it was being blamed for. From the beginning, this argument lacked any logical or hypothetical framework - it was based entirely on the moral judgments of certain individuals in society who simply "believed" these media were the cause.

AI, on the other hand, interacts back with you and can amplify psychosis. These are early days, and most of what we have is theoretical in nature, based on case studies, or simply clinical hypothesis [1, 2, 3]. However, there is a clear difference in the medium itself: the chatbot can interact with the user dynamically, and it is programmed in a manner that reinforces certain thoughts and feelings. The chatbot is also human-seeming enough for a person to anthropomorphize it and treat it like an individual for the purposes of therapy or an attempt at emotional closeness. While video games do involve human interaction, and a piece of media could be designed to be psychologically difficult to deal with, that would be specific to that piece of media and not the medium as a whole. The issue with chatbots (the LLM subset of AI) is pervasive across all chatbots because of how they are designed and the populace they serve.

we could end up in a society where everyone undermines real problems in physical world and blames Ai to sideload the question

This is a valid point to bring up, but I think it is shortsighted in a broader context such as public health. We could say the same about addictive behaviors and personalities, for example, and absolve casinos of any blame for designing a system that takes advantage of these individuals and sends them spiraling into gambling addiction. Or we can recognize that this is a contributing and amplifying factor, by paying close attention to what is happening to individuals in a broad sense and by smartly applying theory and hypothesis.

I think it's completely fair to say that this kid likely had many contributing factors to his depression and his final decision. But there is a clear hypothetical framework, with circumstantial evidence and strong theoretical support, suggesting that AI is exacerbating the problem and should also be considered a contributing factor. That suggests regulation may be helpful, or at the very least increased public awareness that this particular technology has the potential to harm certain individuals.

[–] Gaywallet@beehaw.org 4 points 1 month ago

Great article. I laugh at the folks who think this dude bought into the fantasy that some people have turned into what best represents a spirituality - as if they've never seen folks who go a little too hard on any one part of their life. Sure, gooning as a term has long since entered the cultural zeitgeist and has been used, both ironically and not, to refer simply to excessive masturbation. But there is a loneliness epidemic out there, and some folks have turned to gooning as an extreme kink or an outlet for a need for human connection and healing, going 24/7 like many dom/sub relationships, CNC, ferality, etc. Discounting that shows either a lack of exposure to the vastness of this damaged world or an attempt to poke fun at the author for seriously studying a cultural phenomenon. Either way, this is a fascinating look into a weird niche subculture and a really well written article. Thank you for sharing.

[–] Gaywallet@beehaw.org 4 points 2 months ago* (last edited 2 months ago) (1 children)

All it means is that the government cannot arrest you for saying something

Actually, it means a lot more than that.

It means you're entitled to a platform - that you can say things into a microphone to a large crowd gathered for any reason on federal land that's open to individuals... including to talk about how other humans are deserving of hate. We don't owe them a space to spread hate speech. We can do better.

[–] Gaywallet@beehaw.org 4 points 2 months ago

like explicitly excluding yelling fire in a theater sent us down a worse path, I'm sure

[–] Gaywallet@beehaw.org 16 points 2 months ago (5 children)

Absolutely nothing about this is surprising to me in the least. What is surprising, however, is how much people recognize this is a serious problem that seems to keep getting worse, and yet still insist that free speech is more important. We've placed restrictions on yelling fire in a theater when there is none, because doing so causes harm to society. Why, similarly, can we not place restrictions on obviously hateful and intolerant speech? Certainly those who have larger platforms and more opportunity to sow this intolerance and erode democracy should face more scrutiny, no?

[–] Gaywallet@beehaw.org 12 points 2 months ago

Unfortunately the world has become so divorced from reality it no longer matters whether something is true. It only matters whether you can convince someone it looks or feels true. Management wants subtle changes made by a hallucination engine because it doesn't matter if they fail, they still get their golden parachute and move on to another company they get to ruin 🤷‍♀️

[–] Gaywallet@beehaw.org 4 points 2 months ago

Yea, fair - there is definitely the sportswashing angle on this, but they are absolutely leveraging debt for this purchase, which they will put on the company. Their deck also talks a ton about AI, which is where the AI/stripping angle comes from. Whether they can just ignore the debt because of oil money is, I suppose, another question entirely.

[–] Gaywallet@beehaw.org 4 points 2 months ago (3 children)

Actually, it's pretty clear they are planning to completely gut this company. They're taking on debt to finance this deal, which they will put on the company. Their pitch is to eliminate jobs with AI (which they probably know won't work), which means they'll cut most of the staff and "replace" it with AI - likely through contracts with companies they own, so they can keep leeching off whatever income comes in from game sales. The company will continue to churn out trash and make some money by repeating last year's sports game this year, but now with AI coding, until it eventually declares bankruptcy and is either auctioned off and stripped for what's left of its parts or simply shutters forever.
