Additional_Code@alien.top
joined 11 months ago
A100 inference is much slower than expected with small batch size
in c/localllama@poweruser.forum

Additional_Code@alien.top · 1 point · 11 months ago

In my personal experience, inference speed ranks RTX 3090 > A100 > A6000.
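Anecdotal rankings like this are easy to sanity-check with a small timing harness. A minimal sketch is below; `measure_tokens_per_sec` and `fake_generate` are hypothetical names, and in a real comparison `generate_fn` would wrap the model's decode call (e.g. a `model.generate()` invocation) on each GPU under test, with identical batch size and token count.

```python
import time

def measure_tokens_per_sec(generate_fn, n_tokens, n_runs=3):
    """Time a token-generation callable and report tokens/sec.

    generate_fn(n_tokens) is assumed to produce n_tokens tokens.
    The best (minimum) wall time over n_runs is used, which reduces
    noise from warm-up and background load.
    """
    best = float("inf")
    for _ in range(n_runs):
        start = time.perf_counter()
        generate_fn(n_tokens)
        best = min(best, time.perf_counter() - start)
    return n_tokens / best

# Stub standing in for a real model's decode loop, so the harness
# can be exercised without a GPU or model weights.
def fake_generate(n_tokens):
    for _ in range(n_tokens):
        pass

throughput = measure_tokens_per_sec(fake_generate, n_tokens=512)
```

Running the same harness with the same prompt, batch size, and token budget on each card is what makes a 3090-vs-A100-vs-A6000 comparison meaningful; small batch sizes in particular tend to be latency-bound rather than compute-bound, which is consistent with the thread's observation.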