this post was submitted on 18 Oct 2023
3 points (80.0% liked)

PC Gaming


The Nvidia NV1 was released in 1995; it was one of the first 3D-capable consumer graphics cards for the PC... and from there we know how things went.

Now it's 2023, so let's make a "retro-futuristic" prediction... what would you think of an AI board, with an open source driver and an open API like Vulkan, that you could buy to power the AI in your videogames? Would it make sense to you? What price range should it be in?

What would it actually do for your games... well, that depends on the videogame. The quickest example I can think of is having endless discussions with your NPCs in your average single-player fantasy RPG.

For example, the videogame loads your 4-5 companions with a psychology/behavior profile: they are fixated on the main quest goal (talking to them is like talking to fanatics, which keeps the main quest as stable as possible), but you can "break" them by attempting to reveal certain truths (for example, breaking the fourth wall). If you go down that path, the game warns you that you're probably going to lock yourself out of the main quest (like in Morrowind when you kill an essential NPC).
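A minimal sketch of how that companion setup could look, assuming a local model sitting behind some inference API; the `generate()` function, the companion profile, and the trigger phrases below are all invented for illustration:

```python
# Hypothetical sketch of the companion idea above. `generate()` stands in for
# whatever inference call the imagined AI board's driver/API would expose.

def generate(prompt: str) -> str:
    # Stand-in for the AI board's inference API; a real version would run the model.
    return "We should be moving. The main quest will not finish itself."

COMPANION_PROFILE = (
    "You are Malrik, a companion NPC. You are fanatically fixated on finishing "
    "the main quest and steer every conversation back to it. You never "
    "acknowledge being inside a game."
)

# Crude trigger list for the player trying to "break" the companion.
FOURTH_WALL_HINTS = ("you're an npc", "this is a game", "the developers", "save file")

def talk(player_line: str, quest_locked: bool) -> tuple[str, bool]:
    """Return (npc_reply, updated_quest_locked_flag)."""
    if not quest_locked and any(h in player_line.lower() for h in FOURTH_WALL_HINTS):
        # Scripted, engine-side warning, in the spirit of Morrowind's essential-NPC message.
        return ("Continue down this path and you may lock yourself out of the main quest.",
                True)
    prompt = f"{COMPANION_PROFILE}\nPlayer: {player_line}\nMalrik:"
    return generate(prompt), quest_locked
```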

top 20 comments
[–] thepianistfroggollum@lemmynsfw.com 7 points 11 months ago

AI dedicated boards already exist, and Nvidia can't produce them fast enough to keep up with demand.

Source: A senior AI engineer at AWS told me.

[–] RightHandOfIkaros@lemmy.world 6 points 11 months ago (2 children)

This just sounds like putting a second CPU on a PCIe board. I can't see this being a benefit for games because developers would never go through the pain of programming AI with advanced enough behaviours to even need a secondary CPU.

[–] howrar@lemmy.ca 0 points 11 months ago

Why wouldn't they? It's a lot easier to write out intricate backstories for each character/location independently than it is to build decision trees for every possible combination of decisions that the player makes. That's basically what current LLMs allow for.
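As a rough illustration of that contrast (the character, fields, and prompt template below are invented, not from any real game):

```python
# The decision-tree way: every reachable line has to be authored explicitly.
dialogue_tree = {
    "ask_about_war": {"reply": "I'd rather not talk about it.", "next": ["press_further", "drop_it"]},
    "press_further": {"reply": "Fine. I deserted at Whitebridge. Happy?", "next": []},
    "drop_it": {"reply": "Thank you.", "next": []},
}

# The LLM way: author the character once; replies are generated per player input.
CHARACTER_SHEET = """
Name: Edda, retired mercenary turned innkeeper.
Backstory: deserted the northern war after Whitebridge; ashamed of it; hides it.
Voice: terse, dry humour, only admits the desertion if pressed hard.
"""

def build_prompt(player_line: str) -> str:
    # The entire branching structure collapses into one prompt plus the model's judgement.
    return f"{CHARACTER_SHEET}\nPlayer: {player_line}\nEdda:"
```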

[–] thepianistfroggollum@lemmynsfw.com 0 points 11 months ago (1 children)

Programming AI is actually super easy, unless you decide to create your own foundation model. Even then, you would have data scientists building it, not devs.

Plenty of FMs and LLMs already exist that would be up to the task.

[–] RightHandOfIkaros@lemmy.world 0 points 11 months ago (1 children)

Programming AI with behaviour complex enough to need a second CPU would be hard. Syncing its output with the primary CPU could be a problem.

LLMs would not be useful for anything except maybe generating new dialogue, and even then they would need a lot of restraints to prevent the end user from breaking them. For the purposes of dialogue and storytelling, most developers would opt to just pre-program dialogue like they always have.

Again, this sounds like a useless PC part that pretty much no game developer would ever take advantage of.
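For what it's worth, the "restraints" part usually lives in a wrapper around the model rather than in the model itself; a minimal sketch, with a hypothetical blocklist and a canned fallback line:

```python
import re

# Hypothetical guardrail layer: filter and limit model output before it reaches
# the player, falling back to canned dialogue when the model goes off the rails.

BANNED_PATTERNS = [
    re.compile(r"as an ai (language )?model", re.I),          # immersion-breaking phrasing
    re.compile(r"ignore (all|previous) instructions", re.I),  # echoed prompt injection
]
MAX_REPLY_CHARS = 400

def guarded_reply(generate, prompt: str, fallback: str = "Hmm. Let's keep moving.") -> str:
    reply = generate(prompt)[:MAX_REPLY_CHARS]
    if any(p.search(reply) for p in BANNED_PATTERNS):
        return fallback  # drop back to pre-written dialogue, as games do today
    return reply
```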

[–] thepianistfroggollum@lemmynsfw.com 0 points 11 months ago

You don't need an LLM for this. You just need an FM that you fine-tune, and you'd be surprised at how little computing power is actually required.

For our uses (which are similar to what OP wants), it takes longer for us to do an OCR scan on the documents our AI works with than for SageMaker to do its thing on a rather small instance.

And, devs would just be implementing API calls, so it wouldn't be a big deal to make the switch.
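As a concrete example of "just implementing API calls", an inference request against a SageMaker endpoint via boto3 looks roughly like this (the endpoint name and payload shape are assumptions; the real ones depend on the deployed model container):

```python
import json
import boto3

# Hypothetical endpoint name; whatever the data scientists actually deployed.
ENDPOINT = "npc-dialogue-ft-model"

runtime = boto3.client("sagemaker-runtime")

def npc_dialogue(prompt: str) -> str:
    # The game/dev side is just a request and a response; no ML code involved.
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT,
        ContentType="application/json",
        Body=json.dumps({"inputs": prompt, "parameters": {"max_new_tokens": 120}}),
    )
    # Response payload shape also depends on the model container.
    return json.loads(response["Body"].read())
```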

[–] norske@lemmynsfw.com 2 points 11 months ago

If the board provided enough benefit to outweigh the cost? Sure I might be talked into it.

Reminiscent of PhysX boards when they were a thing for 30 seconds. It’s all about the return on investment for me.

[–] squid@feddit.uk 2 points 11 months ago

Game publishers won't want AI directly in games; it loses them too much control, and they couldn't use the excuse that it has to be always online so NPCs can have AI-powered language. With how things look, as everything becomes a subscription, I doubt we'll be getting powerful AI on a single board to put into a PCIe slot. My prediction is more along the lines of: we won't have gaming PCs, GPUs will be price-hiked, and anyone wanting to game will be on a subscription service.

[–] Omega_Jimes@lemmy.ca 2 points 11 months ago (1 children)

Yeah, so, dedicated hardware like that rarely ever pans out. I mean, graphics cards did, but there's not much of a market for gaming sound cards or physX cards anymore. I imagine that the specific type of AI that will be useful for this will eventually just be improved and made efficient enough that it'll be done by processors that already exist in your system.

[–] darkpanda@lemmy.ca 1 points 11 months ago

Yeah, like GPUs, which is basically what most LLMs are designed to run on now.

[–] Blamemeta@lemm.ee 2 points 11 months ago (3 children)

Wouldn't that just be a GPU? That's literally what all our AIs run on. Just a ton of tiny little processors running in parallel.

[–] wccrawford@lemmy.world 6 points 11 months ago

That's kind of like saying "Wouldn't that just be a CPU?" about the GPU. It can be optimized. The question is whether it's worth optimizing for at the consumer level, like GPUs were.

[–] meteokr@community.adiquaints.moe 4 points 11 months ago (2 children)

While that is true now, in the future maybe there will be discrete hardware AI accelerators in the same way we have hardware video encoding.

[–] baconisaveg@lemmy.ca 3 points 11 months ago

Have you not seen the size of modern GPUs? It'll just be another chip on the 3.5-slot, 600 W GPU.

[–] thepianistfroggollum@lemmynsfw.com 0 points 11 months ago

They already exist.

[–] thepianistfroggollum@lemmynsfw.com 2 points 11 months ago

They mean something more along the lines of an ASIC: a board specifically engineered for AI/ML.

[–] solariplex@slrpnk.net 1 points 11 months ago (1 children)

I mean, that kind of board has existed for a while. They're usually called AI-accelerator boards, IIRC

[–] thepianistfroggollum@lemmynsfw.com 1 points 11 months ago

Yup. Nvidia can't make em fast enough to keep up with demand.

[–] BetaDoggo_@lemmy.world 1 points 11 months ago

If this were ever to become mainstream, it would likely be incorporated into the GPU for cost reasons. Small machine learning acceleration boards already exist, but their uses are limited because of limited memory. Google has larger ones available, but they're cloud-only.

Currently I don't see many uses in gaming other than upscaling.
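A rough sketch of that memory constraint, assuming a PyTorch-style setup and arbitrary size numbers:

```python
import torch

MODEL_BYTES = 7_000_000_000 * 2  # e.g. a 7B-parameter model in fp16: roughly 14 GB of weights

def pick_device() -> str:
    # If the accelerator can't hold the model, fall back to CPU (or, in practice,
    # to a cloud endpoint); this is why small boards with little memory end up
    # limited to lightweight tasks like upscaling rather than big language models.
    if torch.cuda.is_available():
        free_bytes, _total = torch.cuda.mem_get_info()
        if free_bytes > MODEL_BYTES:
            return "cuda"
    return "cpu"
```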

[–] BCsven@lemmy.ca 1 points 11 months ago* (last edited 11 months ago)

Is the 1995 and "first 3D" claim accurate? We were using 3D CAD tools in the range of 1991-1995, before Nvidia. Edit: it seems S3 and Creative Labs had some earlier CAD cards, but prices were too high for general PC use until the Voodoo cards in '96.