Ill_Initiative_8793@alien.top
joined 1 year ago
Running large models below requirements?
in c/localllama@poweruser.forum
Ill_Initiative_8793@alien.top
1 point
11 months ago
Use llama.cpp and offload some of the layers to VRAM; depending on the quantization, you may be able to run a 70B model.
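A minimal sketch of what that looks like with llama.cpp's CLI, assuming you have a quantized GGUF file of the model; the file name and the layer count here are hypothetical, so raise or lower -ngl until the offloaded layers fit your VRAM:

```
# -ngl / --n-gpu-layers sets how many transformer layers go to VRAM;
# the remaining layers run on the CPU from system RAM.
# 40 is a placeholder; tune it to your GPU's memory.
./main -m models/llama-2-70b.Q4_K_M.gguf -ngl 40 -c 2048 \
  -p "Hello"
```

Partial offload is what makes running below the usual requirements possible: whatever does not fit in VRAM is computed on the CPU, trading speed for the ability to load the model at all.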