That depends. When you say you're building out a new server, are we talking a proper 1U or 2U Dell, HPE, etc. type server? If so you'll have to contend with the GPU footprint; for example, my 1U servers can only take up to two half-height, half-length GPUs, and those can only be powered by the PCIe slot, so I'm limited to 75W per card.
In my 2U servers I can get the "GPU enablement kit", which is essentially smaller-form-factor heatsinks for the CPUs plus long 8-pin power cables running from the mobo to the PCIe riser, which opens up many more options. But there are still problems to address: heat, power draw (CPUs are limited to 130W TDP, I believe), the server firmware complaining about the GPU and forcing the system fans to run at an obnoxious level, etc.
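On that fan issue: for iDRAC 7/8-era Dells (R630/R730 and similar) there's a widely circulated raw IPMI command that disables the "third-party PCIe card" cooling response, which is usually what pegs the fans when an unrecognized GPU shows up. Here's a rough sketch of sending it from a script; the iDRAC address and credentials are placeholders, the raw bytes are the community-known ones for 13th-gen boxes, and newer iDRAC9 firmware is known to have removed some of these knobs, so verify against your generation before trusting it:

```python
# Sketch: disable the "third-party PCIe card" fan response on an iDRAC 7/8 Dell
# so an unrecognized GPU doesn't send the system fans to full tilt.
# Assumes ipmitool is installed and IPMI-over-LAN is enabled on the iDRAC;
# host and credentials below are placeholders.
import subprocess

IDRAC_HOST = "192.168.1.120"   # placeholder iDRAC IP
IDRAC_USER = "root"            # placeholder credentials
IDRAC_PASS = "calvin"

def ipmi_raw(*raw_bytes: str) -> str:
    """Send a raw IPMI command to the iDRAC over lanplus and return stdout."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", IDRAC_HOST,
           "-U", IDRAC_USER, "-P", IDRAC_PASS, "raw", *raw_bytes]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Community-documented bytes for 13th-gen Dells: the 0x01 near the end disables
# the third-party-card cooling response; 0x00 in that position re-enables it.
ipmi_raw("0x30", "0xce", "0x00", "0x16", "0x05", "0x00", "0x00",
         "0x00", "0x05", "0x00", "0x01", "0x00", "0x00")
```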
If you are homebrewing a 3U, a tower, or using consumer parts, then things change quite a bit.
I have that exact same proc in one of my nodes:
Dell R630
2x E5-2680v4
128GB RAM
8x spinning SAS drives
1x Quadro K1200
dual SFP+ NIC with SFP+ modules
Power consumption: 168W (it was 108-124W before the K1200 GPU).
This weekend I'm hoping to pull those 8 disks and swap in 4x SAS SSDs and get my power consumption back closer to 100W. Then I'm going to throw in one or two T1000 8GB GPUs, so I'll probably be back up to 175-200W, but that's damn impressive for the compute capacity of that box.
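If you want to sanity-check those estimates, the back-of-the-envelope math works out; the per-drive figures below are ballpark assumptions (roughly 9W per spinning SAS drive, 3W per SAS SSD), not measurements, and 50W is the T1000's rated board power, so idle draw lands well under the worst case:

```python
# Rough power-budget math for the R630 changes.
# Per-device figures are ballpark assumptions, not measurements.
current_draw = 168          # measured wall draw with 8x SAS HDD + K1200 (W)
hdd_w, ssd_w = 9, 3         # assumed average draw per SAS HDD / SAS SSD (W)
t1000_max = 50              # T1000 8GB rated board power, slot-only (W)

after_disk_swap = current_draw - 8 * hdd_w + 4 * ssd_w
worst_case = after_disk_swap + 2 * t1000_max   # both T1000s fully loaded

print(f"after swapping to 4x SSD: ~{after_disk_swap}W")    # ~108W
print(f"with 2x T1000 at full load: ~{worst_case}W")       # ~208W peak
```

That puts the disk swap right around the hoped-for 100W, and with the T1000s mostly idling, the 175-200W range is a reasonable middle ground.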