this post was submitted on 08 Jan 2026
1061 points (99.3% liked)

Technology

[–] Sam_Bass@lemmy.world 39 points 2 days ago* (last edited 2 days ago) (1 children)

Doesn't confuse me; it just pisses me off by trying to do things I don't need or want done. It creates problems to find solutions to.

[–] Gsus4@mander.xyz 4 points 2 days ago (2 children)

Can the NPU at least stand in as a GPU in case you need it?

[–] UsoSaito@feddit.uk 3 points 1 day ago (1 children)

No, as it doesn't process graphics; it's solely for running the computations behind "AI stuff".

[–] Gsus4@mander.xyz 0 points 1 day ago* (last edited 1 day ago) (1 children)

GPUs aren't just for graphics. They speed up vector operations, including those used in "AI stuff". I'd just never heard of NPUs before, so I imagine they may be hardwired for the graph architecture of neural nets instead of general linear algebra; maybe that's why they can't be used as GPUs.
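
To make that concrete (a minimal pure-Python sketch of my own, not from the thread): the same matrix-vector product that does a graphics transform also drives a neural-net layer, which is why GPUs accelerate both.

```python
# Illustrative sketch (mine, not from the thread): the same matrix-vector
# product that rotates a point in graphics also drives a neural-net layer.

def matvec(M, v):
    """Plain matrix-vector product: the core 'vector operation' GPUs speed up."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Graphics: rotate the point (1, 0) by 90 degrees counterclockwise.
rot90 = [[0, -1], [1, 0]]
print(matvec(rot90, [1, 0]))        # [0, 1]

# "AI stuff": one dense layer is the same operation with learned weights
# (hypothetical numbers, chosen to be exact in binary so the output is clean).
weights = [[0.5, 0.25], [0.125, 1.0]]
print(matvec(weights, [1.0, 2.0]))  # [1.0, 2.125]
```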

[–] JATtho@lemmy.world 2 points 1 day ago* (last edited 1 day ago) (1 children)

Initially, x86 CPUs didn't have an FPU. It cost extra and was delivered as a separate chip.

Later, the GPU came along as essentially an overgrown SIMD FPU.

An NPU is a specialized GPU that operates on low-precision floating-point numbers and mostly does matrix-multiply-and-add operations.
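
As a rough illustration (my own sketch, not from the comment): the multiply-and-add primitive looks like this, with low precision only crudely simulated by rounding each product.

```python
# Rough sketch (mine, not from the comment): an NPU's core primitive is a
# multiply-accumulate (MAC) over low-precision values. Real NPUs use
# float16/int8 hardware; rounding each product here merely stands in for that.

def mac_row(weights, activations):
    """Accumulate rounded products, mimicking a low-precision MAC unit."""
    acc = 0.0                        # accumulators are usually wider precision
    for w, x in zip(weights, activations):
        acc += round(w * x, 2)       # crude stand-in for float16 rounding
    return acc

W = [[0.5, 0.25], [1.0, -0.5]]       # hypothetical 2x2 weight matrix
x = [2.0, 4.0]                       # input activations
print([mac_row(row, x) for row in W])  # [2.0, 0.0]
```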

There is zero actual neural processing going on here. That would mean a chip operating on bursts of encoded analog signals, within a power budget of about 20 W, able to adjust itself on the fly online, without a few datacenters spending an exceeding amount of energy to update the model's weights.

[–] UsoSaito@feddit.uk 1 points 9 hours ago

What I meant is that NPUs do those calculations far more efficiently than a GPU, though.

[–] Sam_Bass@lemmy.world 4 points 2 days ago

Nope. Don't need it