Oh dear. Goonswarm Games are shutting down after Running With Scissors cancelled POSTAL: Bullet Paradise due to the use of generative AI. It's a bit of a saga, this one.

I covered the initial announcement, along with a follow-up update in which Running With Scissors attempted to defend the developer. The backlash only continued, and eventually RWS cancelled the game, as per the statement GamingOnLinux was sent on December 5th by Vince Desi, founder of Running With Scissors.

rayquetzalcoatl@lemmy.world 7 points 3 days ago (last edited 3 days ago)

I feel we're moving away from what I understood your point to be: the public has to get comfortable with Generative AI being used in media and products.

I don't agree with that point.

NotANumber@lemmy.dbzer0.com 0 points 3 days ago

If the public can't tell the difference, how can they be comfortable or uncomfortable with something they don't even know is there?

You will also increasingly find that more and more things are made using AI, to the point that you can't really avoid it unless you only consume older media. This is especially true for anything involving programming, such as video games, since almost all future programming projects will involve AI; it's simply much faster and more effective to code with AI than without it. Since you don't have access to the source code of most games, there really isn't any way to tell. So even if you try to avoid games with AI-generated assets, you won't be able to avoid games with AI-generated code.

Basically, I am trying to say that trying to boycott AI entirely is futile and that hating it changes nothing. It's the modern equivalent of being a Luddite.

rayquetzalcoatl@lemmy.world 5 points 2 days ago (last edited 2 days ago)

I can't taste the difference between eggs from battery farmed hens and free range ones. I still try to avoid battery farmed eggs.

You can call me a Luddite, and you're totally welcome to your opinions about the ethics of Generative AI usage in media, art, and products that you buy or enjoy. You don't need me to tell you that.

I am also welcome to my opinions about the ethics of Generative AI usage.

I, personally, do not like it. I, personally, will not knowingly purchase art, media, or products that use it. It's that simple.

I have to take the word of the company or creator I'm buying a product from if they say they haven't used AI, but I can take that word in context. Do I feel like this company has a reputation for lying to customers? Do I feel like I can see signs of Generative AI? Do I, ultimately, trust that word?

I don't have to buy big mainstream video games if I want to avoid Generative AI. As it happens, I recently sold my PS5 and all my games. I have a PS2 and a small collection of older games. I rarely use it.

I stand by my beliefs. ✌️

NotANumber@lemmy.dbzer0.com 1 point 1 day ago

Okay, now you are the one losing track of the point. Moral arguments didn't work in the war on drugs, and there was much more agreement there than there is here. I never actually made an argument about morals; I am saying that whatever you believe is basically irrelevant, as you won't stop this from happening, and you can't even stop consuming AI-generated things yourself all of the time. I am not saying you are morally incorrect, I am saying your actions and arguments are futile. The Luddites were probably correct on some level; that did not change the outcome of their movement.

That being said I am interested in what moral objections you have. I am not a big believer in morality myself, but it's nonetheless interesting to hear what the objections specifically are. I understand some of the ones surrounding climate issues or job losses; I would still be interested in learning if there are other reasons people are unhappy.

rayquetzalcoatl@lemmy.world 1 point 1 day ago

I was responding to your comment about consumers not being able to tell if Generative AI has been used. I can't tell the difference between battery farmed eggs and free range ones, but I still avoid eggs that aren't free range. That doesn't seem like I've lost track of the point, to me.

I then expanded what I was saying, by explaining that I don't agree with battery farming, which is why I avoid eggs produced that way, despite not being able to actually tell if an egg is free range or battery farmed. This relates to the point about Generative AI use because I also don't agree with using Generative AI in art and media, and might or might not be able to tell if it's used.

To respond to your latest comment about my actions and beliefs being irrelevant because my not supporting something I disagree with won't stop it from happening: you're right. Hens are still battery farmed. As much as I wish it wouldn't happen, it does. Does that make sticking by my own morals and feelings futile or irrelevant? I don't think so.

As for the objections I have to Generative AI, there are a few: I think it's inauthentic. I want to experience art that was made with intention. I want to see what people are capable of making, and how people tell stories. We're a storytelling species, and I think that's really important for us. I feel like I've been lied to when art is generated by an unthinking machine and then presented as though it was made by a human.

For me, art is a connection between me and the artist. If somebody writes a sad song, and plays it, then I get to experience and understand their feelings in that moment. It's a communication. I feel something, and they've given that to me. If a chatbot did it... Well nobody communicated anything. It's a lie. It basically catfished my emotions.

There are other objections, too: the plagiarism of actual human work without recompense; the fact that these chatbots are making people mentally unstable; the fact that they only exist to enrich the already wealthy; the fact that all of this is being sold to us as some way to remove effort from our lives, even the fun parts of our lives. I think effort and hard work are their own reward a lot of the time, and I hate to see laziness championed, because it leads to uninteresting and lame shit.

Sorry, that was a long one, and I'll cut it off here before it gets any longer 😂

NotANumber@lemmy.dbzer0.com 1 point 18 hours ago (last edited 18 hours ago)

Most of the time, when people talk about plagiarism in relation to AI, it's not actually plagiarism, unless you are referring to people using image-editing models to remix someone else's work, but you could say the same about Photoshop or making a collage. These claims mostly come from misunderstanding how the models work, and there is a reason you don't see technical people or machine learning engineers arguing this.

I do agree, though, that there is an issue with people becoming mentally unstable as a result of using LLMs or VLMs. There is a specific model family that caused this, primarily due to alignment issues: GPT-4o. To some extent other models also contributed, but GPT-4o is the primary reason. OpenAI and others have tried to fix this, but the community surrounding ChatGPT has been very resistant, to the point that GPT-4o was fully removed from ChatGPT, people demanded it be returned to them, and unfortunately they got their wish. It seems people had become emotionally attached to the model. I think in this case the people and community surrounding the models are their own worst enemy. There are some interesting benchmarks on LessWrong by AI safety experts showing that some models are much better at detecting and handling psychosis than others; I believe Claude and Kimi models performed the best.

As for authenticity and intentionality: I think you might have a point for some use cases. It's also important to bear in mind that image and video generation are only one small subset of AI, and even they have some good uses. In particular, they can be used to tell stories written and voiced by human beings. Here I am referring to things like Gossip Goblin, which uses AI-generated video, but all the stories being told are written by humans. The GenAI here is being used instead of manually doing animation and special effects. One of the biggest uses of AI is in programming. This is used in everything from the latest Windows and Linux OSes to video games and websites. I don't really see how using AI for writing code removes intentionality from the process of making a game or other interactive media experience.

Edit: also you might want to read this: https://www.lesswrong.com/posts/6ZnznCaTcbGYsCmqu/the-rise-of-parasitic-ai

rayquetzalcoatl@lemmy.world 0 points 18 hours ago

Re your last point: I'm a full-time web developer, and while I'm not building entire games or whatever, I am currently working on a fairly complex and involved data migration project. My boss has demanded I do the whole thing with AI and not write the code myself.

Thus far, it's been incredibly frustrating to get it to do what I need it to do without having the chatbot change tonnes of tiny things or assume and hallucinate stuff that simply shouldn't be there. Beyond those time-wasting frustrations, the fact that I'm not getting hands-on means my mental model of how the data is translating from one system to another is muddy. It's not as clear as it would be were I building this thing myself. Specifically, because I'm not building it myself, I'm not running into edge cases personally and unpicking the knots of the current system.
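
To make that concrete, here's a tiny, made-up sketch (the schema and field names are invented, not from my actual project): when I write the mapping by hand, the source-to-target translation and its edge cases are right there in front of me, which is exactly the mental model that goes muddy when the code is generated for me.

```typescript
// Purely hypothetical example (invented schema): a hand-written mapping keeps
// the source-to-target translation, and its edge cases, visible in one place.

// Assumed shape of a record in the old system
interface LegacyUser {
  user_name: string;
  signup_ts: number; // unix seconds
  plan_code: string; // e.g. "PRO", "FREE", or legacy junk
}

// Assumed shape of a record in the new system
interface MigratedUser {
  username: string;
  signedUpAt: Date;
  plan: "free" | "pro";
}

function migrateUser(src: LegacyUser): MigratedUser {
  return {
    username: src.user_name,
    signedUpAt: new Date(src.signup_ts * 1000),
    // The kind of edge case you only notice by doing it yourself:
    // unknown plan codes get an explicit decision, not a silent default.
    plan: src.plan_code === "PRO" ? "pro" : "free",
    // A generated mapping, by contrast, might confidently add a
    // plausible-looking field like `region` that neither schema defines.
  };
}
```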

There's no intentionality in what chatbots generate, by definition. They have no intention, they're not alive, they can't think. They don't understand things.

I'm sorry, but I'm sort of done with this topic. I don't like Generative AI: I think it's disingenuous, lazy, furthering the commodification of art and creativity, and damaging our ability to think critically. However, I do understand that some people have found it helpful in some contexts, and other people like to play with it. Thanks for the chat. 👍