What does nvcc --version return?
I just went through this exact issue building llama.cpp inside an Ubuntu Docker container. If you don't have nvcc installed, it will compile without error but won't include CUDA support, regardless of what options you set. Check to make sure nvcc is installed on the machine.
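A quick way to check (the apt package here is Ubuntu's stock one, which can lag behind NVIDIA's current CUDA release, so treat it as one option):

which nvcc || echo "nvcc not found"
nvcc --version
# Ubuntu's distro package is one way to get nvcc:
sudo apt install nvidia-cuda-toolkit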
Yup, that was part of it :) it's working now, thank you
You may have to tell it to build with cuBLAS AND force the reinstall:
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir
otherwise the build step may decide it's already done, and the install may just reinstall the cached non-GPU version.
Also, no idea about WSL, I've only tried this on actual Linux installs.
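Once the reinstall finishes, a rough way to sanity-check that the wheel really has GPU support (the model path is just a placeholder) is to load a model with verbose logging and look for BLAS = 1 and the "offloaded ... layers to GPU" lines in the startup output:

python -c "from llama_cpp import Llama; Llama(model_path='model.gguf', n_gpu_layers=-1, verbose=True)"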
I did this :) I should have specified, but when reinstalling I set both flags as environment variables again.
I think you don't have CUDA properly set up. Use pip install --verbose
to see the compilation messages when it's trying to build llama.cpp with CUDA. You might need to manually set the CUDA_HOME environment variable.
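Something along these lines, assuming the usual toolkit location (adjust /usr/local/cuda to wherever your toolkit actually lives):

export CUDA_HOME=/usr/local/cuda
export PATH="$CUDA_HOME/bin:$PATH"
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install --verbose --force-reinstall --no-cache-dir llama-cpp-python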
Okay, it's working now. I needed to install nvcc separately and change the CUDA_HOME environment variable. Also, to install nvcc I needed to get the symlinks working manually, but with 15 minutes of Google searching I got it to work :D thank you all :)
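For anyone hitting the same thing later, the symlink part usually boils down to something like this; the versioned directory is a guess, so check what's actually under /usr/local first:

# hypothetical paths, check which cuda-X.Y directory you actually have
sudo ln -s /usr/local/cuda-12.2 /usr/local/cuda
export CUDA_HOME=/usr/local/cuda
export PATH="$CUDA_HOME/bin:$PATH"
nvcc --version  # should print the toolkit version once the symlink is in place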