
PCIe Cards for Your Home Lab: HBAs, NICs, and Expansion

Hardware 2026-02-09 pcie hardware hba network-card

PCIe slots are the expansion ports of your homelab. They're how you add fast networking, connect a shelf of hard drives, or bolt on NVMe storage that your motherboard doesn't natively support. But PCIe has enough generations, lane widths, and compatibility quirks that it's easy to buy the wrong card or leave performance on the table.

This guide covers the PCIe cards that matter most for homelabs: HBA controllers for storage, 10GbE network cards, NVMe expansion cards, and SAS expanders. We'll go through what the specs actually mean, which specific cards are worth buying, and the compatibility gotchas that catch people.

PCIe Fundamentals

Generations and Bandwidth

Each PCIe generation doubles the per-lane bandwidth of the previous one:

Generation   Per lane / x1 (each direction)   x4         x8         x16
PCIe 2.0     500 MB/s                         2 GB/s     4 GB/s     8 GB/s
PCIe 3.0     ~1 GB/s                          ~4 GB/s    ~8 GB/s    ~16 GB/s
PCIe 4.0     ~2 GB/s                          ~8 GB/s    ~16 GB/s   ~32 GB/s
PCIe 5.0     ~4 GB/s                          ~16 GB/s   ~32 GB/s   ~64 GB/s

These are theoretical maximums. Real-world throughput typically lands at 85-95% of the listed numbers once protocol overhead is accounted for.

Lanes

A PCIe slot's physical size tells you the maximum number of lanes it supports: slots come in x1, x4, x8, and x16 lengths, and a longer slot means more lanes.

Here's the thing that trips people up: a slot's physical size doesn't always match its electrical wiring. A motherboard might have a physical x16 slot that's only wired for x4 electrical lanes. The card will still work — PCIe cards and slots are compatible in both directions — but it'll run at x4 speeds. Check your motherboard manual, or ask the card itself from a running system, as shown below.
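A quick check from Linux (the 03:00.0 address is just an example; substitute whatever address the first command reports for your card):

# Find the card's PCI address
lspci | grep -i -e LSI -e Mellanox

# LnkCap is what the card supports, LnkSta is what it actually negotiated
# (replace 03:00.0 with your card's address)
sudo lspci -vv -s 03:00.0 | grep -E 'LnkCap:|LnkSta:'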

Backward Compatibility

PCIe is forward and backward compatible: a newer card works in an older slot and an older card works in a newer slot; the link simply negotiates down to the slower generation and the narrower width.

This compatibility is why used PCIe cards are such a good deal for homelabs. A PCIe 2.0 x8 HBA in a PCIe 3.0 x8 slot still has 4 GB/s of bandwidth — more than enough for 8 spinning hard drives.

HBA Cards: Connecting Storage

An HBA (Host Bus Adapter) connects SATA or SAS drives to your server. If you're building a NAS or storage server with more than 6 drives, you probably need one.

The King: LSI 9211-8i

The LSI SAS 9211-8i is the most recommended HBA in the homelab community, and for good reason: it costs $15-30 used, its two SFF-8087 ports connect eight SATA or SAS drives, it flashes easily to IT mode, and driver support has been in mainline Linux and every major OS for years.

IT mode (Initiator Target mode) is what you want for ZFS, TrueNAS, or any software-defined storage. Unlike IR mode (RAID mode), IT mode doesn't hide drives behind a RAID controller. The OS sees each drive individually, which is exactly what ZFS needs.
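A quick sanity check once your HBA is in IT mode: each disk should show up as its own block device with its real model and serial number (lsscsi is a separate package on most distros):

# Every drive behind an IT-mode HBA appears individually
lsblk -o NAME,MODEL,SIZE,SERIAL

# lsscsi maps each drive back to the controller it hangs off
lsscsi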

Flashing to IT Mode

Many 9211-8i cards ship in IR (RAID) mode. Flashing to IT mode:

# Download the firmware and tools from Broadcom's support site
# You need: sas2flash utility and the IT mode firmware (2118it.bin)

# Boot from a FreeDOS USB or use a Linux live environment
# Check current firmware
sas2flash -list

# Flash to IT mode
sas2flash -o -f 2118it.bin -b mptsas2.rom

# Verify
sas2flash -list

The flashing process is well-documented but slightly nerve-wracking the first time. Follow a guide specific to your card revision. The Broadcom (formerly Avago, formerly LSI) firmware page is the authoritative source, though navigating their website is an adventure.

Cables

The 9211-8i uses Mini-SAS SFF-8087 connectors. You'll need breakout cables: SFF-8087 to 4x SATA forward breakout cables if you're wiring directly to drives (two cables cover all 8 ports), or SFF-8087 to SFF-8087 cables if your chassis has a backplane. Make sure you get forward breakout, not reverse breakout; reverse cables are for connecting motherboard SATA ports to a backplane and won't work from an HBA.

Buy cables that are long enough. In a rackmount chassis, you need at least 0.5m cables. In a tower case, 0.75m is safer.

Other HBA Options

LSI 9207-8i: The PCIe 3.0 successor to the 9211-8i. Same 8-port capability, double the bus bandwidth. Useful if you're saturating the 9211's bandwidth with SSDs, but for spinning drives, the 9211 is plenty. $25-40 used.

LSI 9300-8i / 9305-8i: 12 Gb/s SAS3 cards (PCIe 3.0 x8). Needed if you're using SAS3 drives or want maximum SSD throughput. $40-70 used.

LSI 9400-8i: SAS3/NVMe tri-mode card (PCIe 3.1 x8). Can connect both SAS drives and NVMe drives through the same card. Premium pricing, usually $100+.

For most homelabs running spinning disks, the 9211-8i is still the right answer. It's cheap, well-supported, and the bandwidth is not the bottleneck when your drives max out at 200 MB/s each.

10GbE Network Cards

If you're moving large files between machines — VM images, backup repos, media libraries — gigabit Ethernet becomes the bottleneck fast. A 1 TB file takes about 15 minutes over 10GbE versus 2.5 hours over 1GbE.
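The back-of-envelope math behind those numbers, using rough real-world throughput (about 1.1 GB/s for 10GbE, about 112 MB/s for 1GbE):

# 1 TB ≈ 1,000,000 MB
echo "10GbE: $(( 1000000 / 1100 / 60 )) minutes"   # ~15 minutes at ~1.1 GB/s
echo "1GbE:  $(( 1000000 / 112 / 60 )) minutes"    # ~148 minutes, about 2.5 hours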

Mellanox ConnectX-3

The Mellanox ConnectX-3 is to 10GbE what the LSI 9211 is to HBAs: the default homelab recommendation.

# Verify the card is detected
lspci | grep Mellanox

# Check link status
ip link show
# or
ethtool enp3s0

The ConnectX-3 uses the mlx4_en driver, which is in the mainline Linux kernel. No driver installation needed on any modern Linux distribution.
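To confirm the in-kernel driver is the one actually bound to the card (the interface name is an example):

# Should report driver: mlx4_en for a ConnectX-3
ethtool -i enp3s0

# Or check from the PCI side
lspci -k | grep -A 3 -i mellanox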

SFP+ vs RJ45

Mellanox cards (and most affordable 10GbE NICs) use SFP+ ports, not RJ45. This means you need either: a DAC (direct attach copper) cable between the two SFP+ ports, SFP+ optical transceivers plus fiber for longer runs, or a 10GBASE-T SFP+ module if you're stuck with existing copper cabling (these run hot and can cost more than the used NIC itself).

For a typical homelab where the NAS and hypervisor are in the same rack or room, a DAC cable is the easiest and cheapest option.

Other 10GbE Options

Intel X520-DA2: Dual-port SFP+ 10GbE, PCIe 2.0 x8. Very well supported in Linux and VMware. $20-35 used. Slightly older than ConnectX-3 but rock-solid.

Intel X540-T2: Dual-port RJ45 10GbE, PCIe 2.1 x8. The go-to if you want to use regular Ethernet cables. $30-50 used. Runs warmer than SFP+ cards.

Mellanox ConnectX-4 Lx: The successor to the ConnectX-3. 25GbE capable, PCIe 3.0 x8. Prices are dropping ($30-50 used) and it's a good upgrade path if you want headroom.

Switch Considerations

A 10GbE NIC is only useful if there's something on the other end. Options: a switch with SFP+ ports (small fanless units or used enterprise gear both work), or skip the switch entirely and run a DAC straight between two machines as a point-to-point link on its own subnet, as sketched below.
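A minimal point-to-point sketch, assuming the 10GbE interface shows up as enp3s0 on both machines (yours will differ, and plain ip commands don't persist across reboots; use your distro's network config for anything permanent):

# Machine A
sudo ip addr add 10.10.10.1/24 dev enp3s0
sudo ip link set enp3s0 up

# Machine B
sudo ip addr add 10.10.10.2/24 dev enp3s0
sudo ip link set enp3s0 up

# From machine A: confirm the link, then measure it
ping -c 3 10.10.10.2
iperf3 -c 10.10.10.2        # run iperf3 -s on machine B first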

NVMe Expansion Cards

Most consumer motherboards have 1-2 M.2 NVMe slots. If you need more NVMe storage — for a caching tier, fast VM storage, or just because your motherboard ran out of slots — NVMe expansion cards are the answer.

Single NVMe Adapters

A basic M.2 to PCIe adapter costs $8-15 and drops an NVMe drive into a PCIe x4 slot.

The drive gets its full x4 bandwidth since there's no multiplexing. This is the simplest and cheapest way to add an NVMe drive.

Quad NVMe Cards

Cards like the ASUS Hyper M.2 or generic quad-NVMe adapters put 4x M.2 slots on a single PCIe x16 card. Each drive gets x4 lanes for a total of x16.

But here's the catch: this only works if your motherboard and CPU support PCIe bifurcation.

PCIe Bifurcation

Bifurcation is the ability to split a single PCIe slot's lanes into multiple independent connections. An x16 slot can be split into: x8/x8, x8/x4/x4, or x4/x4/x4/x4.

Quad NVMe cards need x4/x4/x4/x4 bifurcation. Without it, only the first NVMe slot works.

Checking Bifurcation Support

Bifurcation is configured in BIOS/UEFI. Look for settings like "PCIe Bifurcation", "PCIe Slot Configuration", or an option to switch the slot from x16 to x4x4x4x4, often buried under the advanced chipset or CPU configuration menus.

Server motherboards (Supermicro, ASRock Rack) almost always support bifurcation. Consumer motherboards are hit or miss. Many don't expose the setting at all, even if the CPU supports it.
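Once the setting is enabled and the card is populated, counting NVMe controllers is the quickest way to confirm bifurcation actually took effect (onboard M.2 drives count too, so subtract those):

# Should increase by four after adding a populated quad card
lspci -nn | grep -ci 'non-volatile memory controller'

# With nvme-cli installed, list the drives themselves
sudo nvme list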

If your motherboard doesn't support bifurcation but you need multiple NVMe drives, use a card with a PLX/Broadcom PCIe switch chip (like the HighPoint SSD7540). These cards handle the lane splitting in hardware, so bifurcation support isn't needed. But they cost significantly more: switch-based cards start around $100-200, and high-end models like the SSD7540 go well beyond that.

SAS Expanders

When 8 drive ports aren't enough, a SAS expander multiplexes more drives through a single HBA connection. Think of it as a network switch but for storage.

An HP SAS Expander (part number 468406-B21) gives you 24 SAS/SATA ports from a single SFF-8087 connection to your HBA. They're about $20-30 used.

The trade-off is bandwidth. All 24 drives share the single SFF-8087 uplink to the HBA (four SAS lanes), and that uplink, not the PCIe slot, is the real bottleneck. For spinning drives at ~200 MB/s each, the math usually works out fine: you'd need a large share of the drives streaming sequentially at full speed at the same time to saturate the uplink, which doesn't happen in practice with mixed workloads. For SSDs, don't use an expander; the bandwidth sharing becomes a real bottleneck.
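Rough numbers, assuming a 6 Gb/s SAS2 uplink (about 550 MB/s usable per lane) and spinning drives that top out around 200 MB/s sequential:

echo "24 drives, all streaming: $(( 24 * 200 )) MB/s"   # 4800 MB/s worst case, rarely happens
echo "one SFF-8087 uplink:      $(( 4 * 550 )) MB/s"    # ~2200 MB/s across 4 SAS2 lanes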

Cabling Topology

HBA (9211-8i)
  ├── SFF-8087 ─── SAS Expander (24 ports)
  │                 ├── 4x SATA breakout ── Drives 1-4
  │                 ├── 4x SATA breakout ── Drives 5-8
  │                 ├── 4x SATA breakout ── Drives 9-12
  │                 └── ... up to 24 drives
  └── SFF-8087 ─── 4x SATA breakout ── Drives 25-28 (direct)

You can mix direct and expander connections on the same HBA.

Compatibility Gotchas

Power Delivery

Some older or budget motherboards don't deliver enough power through the PCIe slot for hungry cards. Symptoms: the card isn't detected, the system won't boot, or it crashes under load.

Most HBAs and NICs draw 10-20W and are fine. Quad NVMe cards with four drives can draw 30-40W — check your motherboard specs.

IOMMU Groups (For Virtualization)

If you're passing PCIe cards through to VMs (GPU passthrough, HBA passthrough to a TrueNAS VM), the card needs to be in its own IOMMU group. Consumer motherboards often lump multiple devices into the same IOMMU group.

# List IOMMU groups
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
    echo "IOMMU Group ${g##*/}:"
    for d in $g/devices/*; do
        echo -e "\t$(lspci -nns ${d##*/})"
    done
done

If your HBA shares an IOMMU group with another device, you either pass both through or neither. The ACS override patch can split groups, but it's a security trade-off.

Server motherboards (Supermicro X11/X12/X13 series) almost always have clean IOMMU separation. This is another reason homelabbers gravitate toward server hardware.
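For HBA passthrough specifically, a common approach is binding the card to vfio-pci at boot so the host never claims it. A minimal sketch, assuming a Debian-based hypervisor; the 1000:0072 vendor:device ID is only an example (a SAS2008-based card), so use whatever lspci -nn reports for yours:

# Find the HBA's vendor:device ID — the [xxxx:xxxx] pair at the end of the line
lspci -nn | grep -i -e LSI -e 'Serial Attached SCSI'

# /etc/modprobe.d/vfio.conf
# (1000:0072 is an example ID — substitute your own)
options vfio-pci ids=1000:0072

# Rebuild the initramfs and reboot for the binding to take effect
sudo update-initramfs -u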

BIOS/UEFI Boot

If you want to boot from a drive connected to an HBA, the HBA needs an option ROM. The 9211-8i in IT mode supports this with the mptsas2.rom boot ROM flashed alongside the firmware. Not all HBAs support boot — check before buying if this matters to you.

Thermal Throttling

Cards in the bottom PCIe slot of a rackmount chassis often run hot because airflow is restricted. HBAs with passive heatsinks can thermal throttle if they don't get enough airflow. If you notice intermittent drive disconnects, check the card's temperature:

# For LSI HBAs (not every controller exposes a temperature sensor)
sas2ircu LIST
sas2ircu 0 DISPLAY | grep -i temp

# General PCIe device temperatures
sensors

Add a small fan pointed at the card if temps are consistently above 70°C.

Buying Guide

For a typical homelab storage server on a budget:

Component         Recommendation                    Used Price
HBA               LSI 9211-8i (IT mode)             $15-30
Breakout cables   SFF-8087 to 4x SATA (x2)          $10-15 each
10GbE NIC         Mellanox ConnectX-3 (dual SFP+)   $15-30
DAC cable         10GbE SFP+ DAC, 1-3m              $10-15
NVMe adapter      M.2 to PCIe x4 (single)           $8-15

Total: roughly $70-120 for an HBA, 10GbE networking, and an extra NVMe slot. All used, all well-supported in Linux, all running reliably in thousands of homelabs.

The used enterprise hardware market is one of the best things about the homelab hobby. Cards that cost $500+ new are $20 used because enterprises refresh on 3-5 year cycles and the old hardware works perfectly fine. Take advantage of it.