(this is a post to gather my thoughts, work out what I want and what’s out there and capture the context of several browser tabs that have been open for months :-)
I’ll clean it up and give it more structure in the future… possibly)
I have been toying with the idea of building myself an all-flash NAS. My storage needs are relatively modest. I currently have a TrueNAS Scale server (Dell T20) running 2 x 500 GB SATA SSDs in a mirror and 3 x 4 TB drives in a RAID-Z1, booting from a small 120 GB SATA SSD. That’s six drives in total, so indeed I am running an HBA for the extra ports. That’s the current state of affairs; the whole thing is approached as a fun learning experience with no timelines. I can reduce the number of SATA ports I need to five (boot + two mirrors): I want redundancy and I want two different pools. Adding a single SATA port would already make it possible to remove the HBA, as I can reduce the RAID-Z1 to a mirror.
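To sanity-check the port count (and what a two-mirror layout would cost in drives), here is a quick back-of-the-envelope sketch. The 2 x 8 TB mirror at the end is just an example size I picked to roughly match today’s usable RAID-Z1 space; nothing is decided yet.

```python
# Rough port-count and capacity check, in raw vendor TB (ignoring ZFS overhead).

def mirror_usable(drives_tb):
    # a mirror's usable space is (at most) its smallest member
    return min(drives_tb)

def raidz1_usable(drives_tb):
    # RAID-Z1 gives up roughly one drive's worth of space to parity
    return sum(drives_tb) - max(drives_tb)

current_ports = {
    "boot (120 GB SSD)": 1,
    "mirror, 2 x 500 GB": 2,
    "RAID-Z1, 3 x 4 TB": 3,
}
planned_ports = {
    "boot": 1,
    "mirror #1": 2,
    "mirror #2": 2,
}

print("SATA ports today:  ", sum(current_ports.values()))   # 6 -> hence the HBA
print("SATA ports planned:", sum(planned_ports.values()))   # 5 -> one extra onboard port is enough
print("RAID-Z1 usable now:    ~%d TB" % raidz1_usable([4, 4, 4]))  # ~8 TB
print("2 x 8 TB mirror gives: ~%d TB" % mirror_usable([8, 8]))     # ~8 TB (example size)
```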
I am not really looking for performance; I’d rather reduce power draw (currently around 50 W idle) and go smaller. To be a bit more future proof I am thinking about going for NVMe SSDs, as they provide a lot more performance for the same price (or lower) than SATA SSDs, but the lack of slots and lanes on reasonably priced, low-power motherboards is not helping.
I found Brian Moses’ DIY NAS: 2023 Edition and was initially very interested, until I found out that its NVMe slots only get a single lane each (PCIe 3.0 x1, ~0.985 GB/s). That is faster than SATA3 at 6 Gbit/s (600 MB/s effective), but not by much. Still, it would let PCIe 3.0 NVMe SSDs run at a decent speed now while keeping the option to reuse them at full speed in a better solution later.
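For reference, a rough comparison of the theoretical ceilings involved (line rate times encoding efficiency; real drives and filesystems land below these numbers):

```python
# Theoretical link ceilings: line rate * encoding efficiency * lanes / 8 -> bytes/s.
GIGA = 1e9

links = {
    # name: (line rate in Gbit/s or GT/s per lane, encoding efficiency, lanes)
    "SATA3":       (6,  8 / 10,    1),
    "PCIe 3.0 x1": (8,  128 / 130, 1),
    "PCIe 3.0 x4": (8,  128 / 130, 4),
    "PCIe 4.0 x4": (16, 128 / 130, 4),
}

for name, (rate, efficiency, lanes) in links.items():
    gb_per_s = rate * GIGA * efficiency * lanes / 8 / GIGA
    print(f"{name:12s} ~{gb_per_s:5.2f} GB/s")

# SATA3        ~ 0.60 GB/s
# PCIe 3.0 x1  ~ 0.98 GB/s   <- the single-lane M.2 slots mentioned above
# PCIe 3.0 x4  ~ 3.94 GB/s   <- a PCIe 3.0 NVMe drive in a full-width slot
# PCIe 4.0 x4  ~ 7.88 GB/s
```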
An alternative is to use PCIe cards that hold one or more NVMe slots and expand that way. The same challenges apply here: the lanes available in the system and the ability of the motherboard to split an x8 or x16 slot into multiple parts like x8/x4/x4 or x4/x4/x4/x4, a feature called bifurcation (https://shuttletitan.com/miscellaneous/pcie-bifurcation-what-is-it-how-to-enable-optimal-configurations-and-use-cases-for-nvme-sdds-gpus/). Bifurcation looks like a decent sidestep while I wait for better solutions in the near future, but it is a mid-to-high-end feature AND the CPU and chipset must have enough lanes available to make full use of the NVMe SSDs.
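A small sketch of what the common bifurcation modes would mean per drive, assuming PCIe 3.0 and the ~0.985 GB/s per lane from above:

```python
# Per-drive lanes and bandwidth when one physical slot is bifurcated,
# assuming PCIe 3.0 (~0.985 GB/s effective per lane).
PER_LANE_GBS = 8 * (128 / 130) / 8

modes = {
    "x16 split as x8/x4/x4":    [8, 4, 4],
    "x16 split as x4/x4/x4/x4": [4, 4, 4, 4],
    "x8 split as x4/x4":        [4, 4],
}

for mode, lanes in modes.items():
    per_drive = ", ".join(f"x{l} (~{l * PER_LANE_GBS:.1f} GB/s)" for l in lanes)
    print(f"{mode:26s} -> {len(lanes)} SSDs: {per_drive}")
```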
There is a way to avoid that: use an x8 or x16 slot and a card with a PCIe switch on it, but such cards are not cheap and probably not low-power either (I still need to investigate).
The current server, a Dell T20, has the following expansion slots (rough bandwidth numbers are in the sketch after the list):
- (Slot 1) One full-height, half-length x16 PCIe Gen3 card slot connected to processor
- (Slot 2) One full-height, half-length x1 PCIe Gen2 card slot connected to Platform Controller Hub (PCH)
- (Slot 3) One full-height, half-length PCI 32/33 card slot connected to PCIe and PCI Bridge
- (Slot 4) One full-height, half-length x16 (x4) PCIe Gen2 card slot connected to PCH
Source
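Putting rough numbers on those slots (Slot 3 is legacy PCI and irrelevant for NVMe, so I left it out; whether the T20 BIOS can actually bifurcate Slot 1 is something I still need to check):

```python
# Theoretical ceilings of the T20 slots, to see which are worth an NVMe adapter.
def pcie_gb_per_s(gen, lanes):
    # (line rate in GT/s, encoding efficiency) per PCIe generation
    rate, eff = {2: (5, 8 / 10), 3: (8, 128 / 130)}[gen]
    return rate * eff * lanes / 8

slots = {
    "Slot 1: x16 PCIe Gen3 (CPU)":          pcie_gb_per_s(3, 16),
    "Slot 2: x1 PCIe Gen2 (PCH)":           pcie_gb_per_s(2, 1),
    "Slot 4: x16 slot, x4 PCIe Gen2 (PCH)": pcie_gb_per_s(2, 4),
}

for slot, gbs in slots.items():
    print(f"{slot:40s} ~{gbs:5.2f} GB/s")
# Only Slot 1 could feed several NVMe SSDs at full speed; Slot 4 tops out
# around 2 GB/s shared, and Slot 2 at ~0.5 GB/s is below even SATA3's ceiling.
```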
Links with more info/background
- 7 watts idle on Intel 12th/13th gen: the foundation for building a low power server/NAS (not so much for the components as for the details about power states in various configurations)
- RaidSonic PCIe to NVMe
- Dual M.2 PCIe adapter for SATA or PCIe NVMe SSD with advanced heat sink solution: M.2 NVMe (M key) and SATA (B key), 22110/2280/2260/2242/2230, on a PCIe x4 host controller expansion card
- CPU comparison
- Alternative mobo: NAS motherboard with J6413/J6412 (2 x Intel i226-V + 1 x RTL8125BG 2.5G LAN, 2 x NVMe, 6 x SATA 3.0, 2 x DDR4, 1 x PCIe, Mini-ITX soft router mainboard)
- Mostly-flash build with lots of background (TrueNAS forums) - AMD Ryzen with ECC and 6x M.2 NVMe build
- Over the top 50 TB NVMe build idea - Reddit thread
Update Sept 2023
STH review of the WD Blue SN580
It doesn’t run too hot, and it’s PCIe 4.0.