Be careful with your motherboard choice if you're running 2 video cards. Many boards are really only designed to support one video card at PCIe x8 or x16; the second slot often drops to something much slower.
synn89
I dug into this a lot back when I was building 2 AI servers for home use, for both inference and training. Dual 4090s are the best you can get for speed at a reasonable price. But for the best "bang for your buck" you can't beat used 3090s. You can pick them up reliably for $750-800 each on eBay.
I went with dual 3090s using this build: https://pcpartpicker.com/list/V276JM
I also went with NVLink, which was a waste of money. It doesn't really speed things up, since the board can already run both cards at PCIe x8.
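For a rough sense of the bandwidth those lane counts imply, here's a back-of-the-envelope sketch. The raw rates (16 GT/s per lane for PCIe 4.0, 128b/130b encoding for Gen 3 and later) come from the PCIe spec; everything else is just arithmetic:

```python
def pcie_gb_s(gt_per_lane, lanes):
    """Approximate one-direction PCIe throughput in GB/s.

    Gen 3/4/5 use 128b/130b encoding, so usable bytes per
    second are 128/130 of the raw line rate divided by 8 bits.
    """
    return gt_per_lane * (128 / 130) / 8 * lanes

print(pcie_gb_s(16, 8))   # PCIe 4.0 x8  -> ~15.75 GB/s
print(pcie_gb_s(16, 16))  # PCIe 4.0 x16 -> ~31.5 GB/s
```

So dropping from x16 to x8 halves the card-to-card transfer rate, but for inference workloads that link is mostly idle anyway, which is why the NVLink bridge didn't buy me anything noticeable.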
But a single 3090 is a great card you can do a lot with. If that's too much money, go with a 3060 12GB card. The server-oriented stuff is a waste for home use. Nvidia 30xx and 40xx series consumer cards will just blow them away in a home environment.
Building a system that supports two 24GB cards doesn't have to cost a lot. Finding a board that can do dual PCIe x8 and a case/PSU that can handle 2 GPUs isn't very hard. The problem I see past that is that you're running into much more exotic/expensive hardware. AMD Threadripper comes to mind, which is a big price jump.
Given that the market of people who can afford that is much smaller than for dual-card setups, I don't feel like we'll see the lion's share of open source happening at that level. People tend to tinker on things that are likely to get used by a lot of people.
I don't really see this changing much until AMD/Intel come out with graphics cards that bust the consumer-card 24GB barrier to compete with Nvidia head on in the AI market. Right now Nvidia won't do that, so as not to compete with their premium-priced server cards.