[–] camixx@alien.top 1 points 11 months ago

I ran an iperf3 test and noticed that my network connection was maxing out at about 6 Gbit/sec (~750 MB/sec). So I swapped NICs on the client machine, using the onboard 10 Gbit NIC (Marvell AQtion) instead of the Cisco/Intel X710-DA, and now I'm getting much faster speeds, which points to the X710-DA as the problem. At this point I'm wondering whether it's the X710-DA driver, the SFP+ module, or the card itself, but at least I now know what was causing the bottleneck.

Before (Cisco/Intel X710-DA):

[SUM]   0.00-10.00  sec  7.07 GBytes  6.07 Gbits/sec                  sender
[SUM]   0.00-10.00  sec  7.07 GBytes  6.07 Gbits/sec                  receiver

After NIC Switch (Marvell AQtion):

[SUM]   0.00-10.00  sec  11.0 GBytes  9.48 Gbits/sec                  sender

[SUM]   0.00-10.00  sec  11.0 GBytes  9.48 Gbits/sec                  receiver
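For reference, iperf3 only prints [SUM] rows when it runs multiple parallel streams, so the test above was presumably run with something along these lines (the stream count is a guess; only the 10-second duration is confirmed by the output):

# on the NAS (server side)
iperf3 -s

# on the client: parallel streams for 10 seconds (4 streams assumed)
iperf3 -c <NAS-IP> -P 4 -t 10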


I'm trying to figure out what's causing the bottleneck on my NAS. By my calculations (rough math below the spec list), I should easily be able to hit over 1000 MB/sec sustained reads, but I don't know why I'm not. Any troubleshooting advice would be appreciated.

System:

OS: TrueNAS-SCALE-22.12.2

CPU: Threadripper 1950X

RAM: 128 GB DDR4

Network: 10 Gbit (Intel X710-DA2 -> Mikrotik CRS317-1G-16S+RM -> AQUANTIA AQC107)

Controller: Adaptec PMC ASR-72405

Drives: 8 Seagate Exos X20

ZFS Pool Config: 2 VDevs (4 drives each) in RAIDZ1
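For reference, the rough math behind the 1000 MB/sec expectation, assuming ~270 MB/s sustained per Exos X20 (a datasheet-class figure, not measured on this pool):

2 vdevs x 4-drive RAIDZ1 -> 6 data drives total
6 x ~270 MB/s -> ~1,600 MB/s sequential read from the pool
10 GbE line rate -> ~1,250 MB/s raw, roughly 1,100 MB/s realistic over SMB

Both the drives and the network should comfortably clear 1000 MB/sec.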

SMB Share Benchmark