this post was submitted on 23 Nov 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
Right. Kind of feels like Intel are leaving money on the table by not writing software for this lol
They did. That's why software built on PyTorch, like FastChat and Stable Diffusion, works very well with Intel Arc. But llama.cpp doesn't use PyTorch.
Here's the base of their software: an API that they are pushing as a standard, since it supports NVIDIA and AMD as well.
https://www.oneapi.io/
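To make the PyTorch point concrete, here's a minimal sketch of how a PyTorch workload ends up on an Arc GPU. It assumes Intel's `intel_extension_for_pytorch` package is installed (it registers the `"xpu"` device backend); the device check and CPU fallback are defensive additions so the snippet also runs without an Arc card.

```python
# Hedged sketch: running a PyTorch op on Intel Arc via the XPU backend.
# Assumes intel_extension_for_pytorch (IPEX) is installed; falls back to CPU otherwise.
import torch

try:
    import intel_extension_for_pytorch  # noqa: F401  (registers the "xpu" device)
    device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"
except ImportError:
    device = "cpu"

# Any ordinary PyTorch code then runs on the Arc GPU when device == "xpu".
x = torch.randn(4, 4, device=device)
y = x @ x.T
print(device, y.shape)
```

This is why FastChat and Stable Diffusion pick up Arc support with little or no code change: they just move tensors to whatever device PyTorch reports, while llama.cpp ships its own kernels and has to target each backend by hand.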
Also, Intel maintains its own package of LLM software.
https://github.com/intel-analytics/BigDL