
Amazon has the Acer A770 on sale for $250. That's a lot of compute with 16GB of VRAM for $250; there is no better value. It does have its challenges. Some things, like MLC Chat, run with no fuss, just like on any other card. Other things need some effort, like Oob, FastChat and BigDL. But support for it is getting better every day. At this price, I'm tempted to get another. I have seen some reports of multi-GPU setups running with the A770.

It also comes with Assassin's Creed Mirage for those people who still use their GPUs to game.

https://www.amazon.com/dp/B0BHKNK84Y

[–] CasimirsBlake@alien.top 1 points 10 months ago (1 children)

But how well does it work with ooba nowadays? And how about running two?

[–] fallingdowndizzyvr@alien.top 1 points 10 months ago (1 children)

Intel GPUs are an option in the 1-click installer. So, ideally, it's the same as installing an Nvidia or AMD GPU. Ideally. We aren't there yet. You can monitor the issue here. But the fact that they have a pinned Intel discussion to go with the pinned Mac and AMD discussions speaks, I think, to their commitment.

https://github.com/oobabooga/text-generation-webui/issues/1575
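For reference, the installer's GPU prompt looks roughly like this (menu wording from memory, so treat it as approximate):

```bash
# Run the one-click start script from the repo root; it asks which GPU you have,
# and Intel Arc (IPEX) is one of the choices.
./start_linux.sh
# What is your GPU?
# A) NVIDIA
# B) AMD
# C) Apple M Series
# D) Intel Arc (IPEX)
# N) None (I want to run models in CPU mode)
```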

FastChat is supposed to support multiple Arcs, but since I only have one I can't confirm that.

"The most notable options are to adjust the max gpu memory (for A750 --max-gpu-memory 7Gib) and the number of GPUs (for multiple GPUs --num-gpus 2). "

https://github.com/itlackey/ipex-arc-fastchat/blob/main/README.md
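Putting those flags together, a dual-A770 launch would presumably look something like this (the model path and per-card memory cap are just placeholders, and since I can't test multi-card myself, treat it as a sketch):

```bash
# Hypothetical two-Arc FastChat launch based on the flags in that README.
# --device xpu targets Intel GPUs through IPEX.
python3 -m fastchat.serve.cli \
  --model-path lmsys/vicuna-7b-v1.5 \
  --device xpu \
  --num-gpus 2 \
  --max-gpu-memory 14GiB
```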

[–] CheatCodesOfLife@alien.top 1 points 10 months ago (1 children)

Have you tried it though? I've been trying for a few months / updates and it doesn't work.

[–] fallingdowndizzyvr@alien.top 1 points 10 months ago (1 children)

With Oob, I've tried but haven't been successful. Then again, I wasn't successful getting it to work with my 2070 months ago either. I gave up on it, switched to llama.cpp and didn't look back. Until now. So yes, I have tried getting it to work with the A770, but as pointed out in the discussion, there's that issue. I haven't tried the workaround posted a couple of days ago, though.

[–] CheatCodesOfLife@alien.top 1 points 10 months ago (1 children)

I'm finding I prefer llama.cpp now as well (the last few days), though for work I usually use Oob + GPTQ.

If you have it handy, could you post the compile command you used to get llama.cpp built for the A770?

[–] fallingdowndizzyvr@alien.top 1 points 10 months ago (1 children)

It's just the normal OpenCL and Vulkan compile flags, so "make LLAMA_CLBLAST=1" and "make LLAMA_VULKAN=1". You will have to download the Vulkan PR for Vulkan. But as I said, it's painfully slow. Like slower than the CPU. So not worth it. Both are equally slow, so there seems to be something in common that is not A770 friendly. Although I haven't tried Vulkan in a couple of weeks, so that might be better now. I even tried giving it the Intel-specific OpenCL option that allows allocations bigger than 4GB, but that didn't make any difference at all.
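For reference, the builds were just the following (the Vulkan one assumes you've checked out that PR branch, and the model path is a placeholder):

```bash
# OpenCL backend via CLBlast (needs CLBlast and the OpenCL headers installed)
make LLAMA_CLBLAST=1

# Vulkan backend, which at this point only exists on the unmerged PR branch
make LLAMA_VULKAN=1

# Then offload layers to the A770 at run time:
./main -m models/some-model.gguf -ngl 99 -p "Hello"
```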

[–] CheatCodesOfLife@alien.top 1 points 10 months ago (1 children)

Right. Kind of feels like Intel are leaving money on the table by not writing software for this lol

[–] fallingdowndizzyvr@alien.top 1 points 10 months ago

They did. That's why software that uses PyTorch, like FastChat and SD, works very well with Intel Arc. But llama.cpp doesn't use PyTorch.

Here's the base of their software: an API that they are pushing as a standard, since it supports Nvidia and AMD as well.

https://www.oneapi.io/

Also, Intel has their own package of LLM software.

https://github.com/intel-analytics/BigDL
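Getting BigDL-LLM going on Arc was roughly this, if I remember their README right (the wheel index URL is Intel's IPEX repo, so double-check it against their docs):

```bash
# The oneAPI runtime needs to be sourced into the environment first
source /opt/intel/oneapi/setvars.sh

# Install the XPU (Intel GPU) build of BigDL-LLM
pip install --pre --upgrade "bigdl-llm[xpu]" -f https://developer.intel.com/ipex-whl-stable-xpu
```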

[–] AnomalyNexus@alien.top 1 points 10 months ago (1 children)

multi-GPU

That's the question, I guess. If you can get, say, 5 of these for the price of a 4090, then that may look interesting. Though that's a hell of a lot of overhead & hassle on power and PCIe slots etc.

[–] JFHermes@alien.top 1 points 10 months ago (2 children)

3090s draw 350W as per Google. The Arc A770 draws 225W.

I guess people here normally go with a dual 3090 setup. That's 700 watts for 48GB of VRAM, which comes out to 14.58 watts per gigabyte of VRAM.

Assuming you manage to cool them properly, you could probably run 4 A770s for 64GB of VRAM, which sounds pretty nice. That watt-to-VRAM ratio is 14.06, which is actually better than the Nvidia cards. Also noteworthy: the power connectors are 1x 8-pin & 1x 6-pin, if I'm reading correctly. So you would have to be careful what mobo and PSU you went with, but I'm pretty sure that's doable.
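Spelling out the arithmetic:

```
dual 3090: (2 x 350 W) / 48 GB = 700 / 48 ≈ 14.58 W/GB
quad A770: (4 x 225 W) / 64 GB = 900 / 64 ≈ 14.06 W/GB
```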

It all comes down to driver support, but it could be a really nice rig that would allow you to run more complex models on the cheap. Nvidia obviously has the advantage there, and that would affect the processing speed of the A770. I would say, however, that the addition of cheap VRAM would probably be worth the extra processing time, so long as things actually worked.

[–] CheatCodesOfLife@alien.top 1 points 10 months ago

My 3090s don't draw 350W for inference; more like ~200W tops.

I did manage to draw 350W from one by running whisper to subtitle something though.

[–] AnomalyNexus@alien.top 1 points 10 months ago

There is also the issue of PCIe slots. I'm currently running a second card in an x4 slot and it's noticeably slower. Getting four full-speed x16 slots is going to take some pretty specialised equipment. All the crypto rigs use slow slots, to my knowledge, since it doesn't matter there.

It is good to see more competitive cards in this space though. Dual A770s could be very accessible.

[–] a_beautiful_rhind@alien.top 1 points 10 months ago

The Arcs have support in PyTorch from Intel. I've seen GPTQ work with them. Not sure how good the speed is.

[–] UndoubtedlyAColor@alien.top 1 points 10 months ago (1 children)

Not sure of the performance of that card, but you can get pre-owned 24GB Nvidia Tesla cards for less.

[–] fallingdowndizzyvr@alien.top 1 points 10 months ago (1 children)

Used is not the same as new. Also, a P40 can't be used as a regular graphics card; it has no video out. These can.

[–] UndoubtedlyAColor@alien.top 1 points 10 months ago

You can use the P40 for graphics by routing its output through an integrated GPU, but that's not a good solution.

The A770 seems like a way better option with that much more performance.

Those extra 8GB on the P40 are nice, though.

[–] knvn8@alien.top 1 points 10 months ago

I'd like to see what people can do with these cards. Wonder if the 560GB/s memory bandwidth might be a more important bottleneck than VRAM.
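Back-of-the-envelope, token generation is memory-bandwidth-bound, so bandwidth sets a hard ceiling on tokens per second. For example (model size is illustrative):

```
13B model at 4-bit ≈ 8 GB of weights read per token
560 GB/s / 8 GB ≈ 70 tokens/s theoretical ceiling (real-world lands well below that)
```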

[–] r3tardslayer@alien.top 1 points 10 months ago

Is a multi-GPU setup good with the A770? Thinking about buying 4 of these bad boys and selling my RTX 4090.

[–] Dankmre@alien.top 1 points 10 months ago

I remain skeptical of Intel Arc compatibility. It is a good deal, but that's probably why.

For instance, would koboldcpp let me offload to both an Intel and an Nvidia card?

[–] No_Baseball_7130@alien.top 1 points 10 months ago (1 children)

P100s are also an okay-ish choice for super-budget builds (SXM is only $50 but PCIe is ~$150), but they don't output video. They have higher memory bandwidth since they use HBM instead of GDDR, and they're a lot faster than the P40 at 19.05 TFLOPS for FP16.

[–] fallingdowndizzyvr@alien.top 1 points 10 months ago (1 children)

The Mi25 is even faster at 24 TFLOPS for FP16. It's only $70-$90, so you can get two for the price of one P100. And you can activate the mini-DP port on it for video with a BIOS flash, so you can use it to game with.

[–] No_Baseball_7130@alien.top 1 points 10 months ago

The Mi25 is also an option, but a lot of programs are much better optimized for CUDA devices.