A Variant of the EC-P3X4 That Can Better Utilize the Abundant PCIe 5.0 Bandwidth


  • The 4-Drive M.2 NVMe SSD to PCIe 3.0 x4 Adapter Card is nice, but it funnels 16 PCIe 3.0 downstream lanes into 4 PCIe 3.0 upstream lanes. In other words, four drives that can each move roughly 4 GB/s must share a single ~4 GB/s uplink, a 4:1 oversubscription. That would be fine for an array of SSDs (or other M.2 form-factor add-in cards) that run at PCIe 2.0 or use less than half a lane's worth of PCIe 3.0 bandwidth, but the vast majority of SSDs are already capable of saturating PCIe 3.0 × 4, and the motherboards they are attached to are capable of PCIe 4.0 or 5.0. Thus, the adapter card’s design bottlenecks what could otherwise be a much more efficient use of the PCIe bandwidth the platform already has on tap.

    Working out the theoretical efficiencies of each hypothetical adapter pair with different sets of SSDs:

                        4 × PCIe 4.0 Upstream   8 × PCIe 4.0 Upstream   4 × PCIe 5.0 Upstream   8 × PCIe 5.0 Upstream
    4 × PCIe 3.0 SSDs            50%                    100%                    100%                     50%
    8 × PCIe 3.0 SSDs            25%                     50%                     50%                    100%
    4 × PCIe 4.0 SSDs            25%                     50%                     50%                    100%
    8 × PCIe 4.0 SSDs           12.5%                    25%                     25%                     50%
    4 × PCIe 5.0 SSDs           12.5%                    25%                     25%                     50%

    Most SSDs rarely saturate the PCIe lanes they operate over, and in practice most only ever use a little more than half of their possible bandwidth, so 50% efficiency could be taken as a sweet-spot target.
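    For anyone who wants to double-check those numbers, here is a quick back-of-the-envelope script of my own (just a sketch, nothing official). It assumes roughly 1/2/4 GB/s of usable bandwidth per lane for PCIe 3.0/4.0/5.0, that every SSD sits on an x4 downstream link, and it defines efficiency as the smaller of the upstream and aggregate SSD bandwidth divided by the larger:

      # Rough sanity check of the efficiency table above.
      # Assumptions: ~1 GB/s usable per PCIe 3.0 lane, doubling each generation,
      # and every SSD occupying a full x4 downstream link.
      GBPS_PER_LANE = {"3.0": 1.0, "4.0": 2.0, "5.0": 4.0}  # approximate

      def link_bw(gen, lanes):
          """Approximate usable bandwidth of a PCIe link in GB/s."""
          return GBPS_PER_LANE[gen] * lanes

      def efficiency(up_gen, up_lanes, ssd_gen, ssd_count):
          """Smaller of upstream vs. aggregate SSD bandwidth, over the larger."""
          up = link_bw(up_gen, up_lanes)
          down = link_bw(ssd_gen, 4) * ssd_count  # each SSD on an x4 downstream link
          return min(up, down) / max(up, down)

      upstreams = [("4.0", 4), ("4.0", 8), ("5.0", 4), ("5.0", 8)]
      ssd_sets = [("3.0", 4), ("3.0", 8), ("4.0", 4), ("4.0", 8), ("5.0", 4)]

      for ssd_gen, count in ssd_sets:
          cells = [f"{efficiency(g, l, ssd_gen, count):.1%}" for g, l in upstreams]
          print(f"{count} x PCIe {ssd_gen} SSDs:  " + "  ".join(cells))

    It reproduces the table row by row, and the per-lane numbers are easy to tweak if you want to explore other combinations, such as 2-lane SSDs.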

    Motherboards with PCIe 5.0 × 4 slots hardly exist; I believe only a handful have shipped, so a 4 × PCIe 5.0 upstream card would be a mismatch of capabilities for most users. If not 8 × PCIe 4.0, then an 8-lane-upstream/16-lane-downstream PCIe 5.0 configuration would be the most favorable choice, since there aren’t any PCIe 5.0 switches on the market with fewer than 24 lanes (8 upstream plus 16 downstream lanes fill a 24-lane switch exactly). An 8 × PCIe 5.0 adapter card that could accommodate 4 PCIe 5.0 SSDs would also be able to target a possibly burgeoning market of 2-lane PCIe 5.0 SSDs such as the SAMSUNG 990 EVO.

    I’d understand if a PCIe 5.0 variant posed insurmountable challenges to bring to market. After all, even the likes of Microchip and Broadcom haven’t shipped such adapters, despite designing the very PCIe 5.0 switch chips that would be needed to make them possible!



  • @Kevin Li We considered other implementations, including faster links to each drive and a wider upstream. It comes down to price and demand, and we're in the business of consumers rather than servers. The primary goal was to allow the addition of more storage while maintaining the advantages of PCIe and NVMe, at the potential cost of some bandwidth.


  • Looks like someone’s already shipping such a switch AIC, but with a PCIe 5.0 × 16 host connection. It’s wider than I need, but it comfortably covers my use case.

    @Sabrent: we're in the business of consumers rather than servers

    There is a wide gulf between consumers and servers that includes enthusiasts and prosumers. But given the cost of that new PCIe 5.0 switch AIC, it’s understandable that you’d take a wait-and-see approach. Their marketing promotes it for AI/ML, HPC, telecommunications, fintech, and cloud computing, which collectively sits a notch above prosumer and tends closer to server. An AI/ML workstation is pretty much the only thing on that list that isn’t a server, but I don’t know how big of a market that is…


  • @Kevin Li Correct, the cost of higher-end switches and the low demand for that level of performance in a consumer product have us experimenting with something more reasonable in the EC-P3X4. It is possible we will update the line with a faster AIC down the road if the demand is there. We do have some interest in the enterprise/server market but don't want to jump the gun.

