WitchSayo
[–] WitchSayo@alien.top 1 points 1 year ago (1 children)

Using a PCIe bifurcation splitter, I split the single PCIe 4.0 x16 slot into two PCIe 4.0 x8 slots, and all GPUs connect through PCIe extension (riser) cables.
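
If you want to sanity-check that each card actually negotiates x8 after the split, here is a quick sketch using the pynvml bindings (assuming the nvidia-ml-py package is installed; that part is my assumption, not something from my setup description):

```python
# Sketch: report each GPU's negotiated PCIe link after bifurcation.
# Requires the nvidia-ml-py package (import name: pynvml).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(handle)
        width = pynvml.nvmlDeviceGetCurrPcieLinkWidth(handle)
        # With an x16 slot bifurcated into two x8 slots, each card should report x8.
        print(f"GPU {i} ({name}): PCIe Gen{gen} x{width}")
finally:
    pynvml.nvmlShutdown()
```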

[–] WitchSayo@alien.top 1 points 1 year ago (3 children)

Emmm, I'm not sure; all I can do is plug them into the motherboard.

Do you have a link to the post please? I'd like to check it out.

 

Currently I have 2x 4090 and I'm looking to upgrade my machine. For well-known reasons, 4090 and 3090 prices are insanely high right now, so I'm considering another option: a modded 3080 with 20GB of VRAM.

My aim is to fine-tune a 34B model with QLoRA. From what I've seen, QLoRA fine-tuning of a 34B model on a single card needs about 24GB of VRAM, and 2x 4090 costs roughly the same as 8x 3080 20GB. So which is the better choice for a multi-card setup?

2x 4090, or 8x 3080 20GB?
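
For reference, the single-card QLoRA setup I have in mind looks roughly like this; the model name (codellama/CodeLlama-34b-hf), rank, and target modules are just illustrative assumptions, not a fixed recipe:

```python
# Minimal QLoRA sketch: 4-bit NF4 base model + LoRA adapters via peft.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "codellama/CodeLlama-34b-hf"  # any 34B causal LM; placeholder choice

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spreads layers across the available GPUs
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,                      # rank; ablations in the LoRA paper show little gain past 8
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```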

[–] WitchSayo@alien.top 1 points 1 year ago

There are ablations in the original LoRA paper showing that the gains are very small once the rank goes above 8.

https://preview.redd.it/ii53qcx8031c1.png?width=1080&format=png&auto=webp&s=821bac1232255bf791120afde7d9e9f3506a89f5
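
If you want to see what a higher rank actually costs, here's a rough sketch comparing trainable-parameter counts at different ranks (a tiny model is used purely for illustration; the trend is what matters):

```python
# Sketch: compare trainable-parameter counts at different LoRA ranks.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

for rank in (4, 8, 16, 64):
    base = AutoModelForCausalLM.from_pretrained("gpt2")
    cfg = LoraConfig(
        r=rank,
        lora_alpha=2 * rank,
        target_modules=["c_attn"],  # GPT-2's fused attention projection
        task_type="CAUSAL_LM",
    )
    peft_model = get_peft_model(base, cfg)
    # Trainable parameters grow linearly with rank, while the LoRA paper's
    # ablations show downstream quality is roughly flat past r=8.
    peft_model.print_trainable_parameters()
```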

[–] WitchSayo@alien.top 1 points 1 year ago

You can merge LoRA A into the base model, and then fine-tune B and C on the merged model.
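
With peft it would look roughly like this; the model name and adapter paths are placeholders:

```python
# Sketch: merge LoRA A into the base weights, then start LoRA B on the merged model.
from transformers import AutoModelForCausalLM
from peft import PeftModel, LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("base-model")      # placeholder name
with_a = PeftModel.from_pretrained(base, "path/to/lora-A")     # placeholder path
merged = with_a.merge_and_unload()   # folds A's low-rank deltas into the base weights
merged.save_pretrained("merged-base-plus-A")

# LoRA B (and later C) is then trained against the merged checkpoint.
lora_b = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model_for_b = get_peft_model(merged, lora_b)
model_for_b.print_trainable_parameters()
```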