
NFS vs SMB vs iSCSI: Choosing the Right Network Storage Protocol


Your NAS has storage. Your servers need storage. Something has to bridge that gap. The three protocols you'll encounter in every home lab are NFS, SMB (also called CIFS), and iSCSI. They all move data across the network, but they do it in fundamentally different ways, and picking the wrong one for your workload leads to frustrating performance problems or unnecessary complexity.

This guide covers what each protocol actually does, when to use it, how to set it up on Linux, and the trade-offs you need to understand.

The Quick Comparison

Before diving deep, here's the high-level picture:

| Feature | NFS | SMB/CIFS | iSCSI |
|---|---|---|---|
| Type | File-level | File-level | Block-level |
| Best for | Linux-to-Linux sharing | Windows clients, mixed environments | VM disk storage, databases |
| Setup complexity | Low | Medium | Medium-High |
| Performance | Good | Good (SMB3) | Best (raw block) |
| Permissions | Unix (uid/gid) | ACLs (user/password) | N/A (handled by the filesystem on top) |
| Encryption | Optional (Kerberos, krb5p) | Built-in (SMB3) | Optional (IPsec or VPN) |
| Proxmox support | Yes (shared storage) | Yes (CIFS storage type, uncommon for VM disks) | Yes (shared storage) |
| Multi-client access | Yes | Yes | Dangerous without clustering |

The fundamental difference: NFS and SMB share files — you mount a remote directory and access files by name. iSCSI shares block devices — you mount a remote disk and put your own filesystem on it. This distinction matters more than anything else in this guide.

NFS: Network File System

NFS is the Unix/Linux standard for network file sharing. It's been around since 1984 (Sun Microsystems), and it remains the go-to protocol for Linux-to-Linux file sharing. It's simple, fast, and does exactly what you'd expect.

NFS v3 vs v4

Two versions matter for home labs:

NFS v3 (1995):

  - Stateless, with mounting and locking handled by separate helper services (rpcbind, mountd, lockd, statd) on their own ports
  - Authentication is AUTH_SYS: the server trusts the client's IP plus whatever uid/gid the client reports
  - Still the compatibility baseline for older NAS appliances and embedded devices

NFS v4 (2003, with v4.1 and v4.2 updates):

  - Single well-known port (2049/tcp), which makes firewalling simple
  - Stateful protocol with locking built in
  - Optional strong security via Kerberos (sec=krb5, krb5i, krb5p)
  - v4.1 added pNFS for parallel access; v4.2 added server-side copy and sparse-file support

Use NFS v4 unless you have a reason not to. It's simpler (one port), more secure (supports Kerberos), and performs as well or better than v3. The main reason to use v3 is compatibility with older systems or devices that don't support v4.
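
To confirm which version a client actually negotiated, or to force one, check from the client side (the server address here reuses the example from the setup steps below):

# Show mounted NFS filesystems with their effective options (look for vers=)
nfsstat -m

# Force v3 explicitly for a legacy device
sudo mount -t nfs -o vers=3 192.168.1.100:/srv/nfs/shared /mnt/shared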

Setting Up an NFS Server (Linux)

# Install NFS server
# Debian/Ubuntu
sudo apt install nfs-kernel-server

# Fedora/RHEL
sudo dnf install nfs-utils

Create the export directory and set permissions:

sudo mkdir -p /srv/nfs/shared
sudo chown nobody:nogroup /srv/nfs/shared
sudo chmod 775 /srv/nfs/shared

Configure exports in /etc/exports:

# /etc/exports
# Format: directory  client(options)

# Share with entire subnet, read-write
/srv/nfs/shared  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

# Share with specific host, read-only
/srv/nfs/media   192.168.1.50(ro,sync,no_subtree_check)

# Share with multiple subnets
/srv/nfs/backups 192.168.1.0/24(rw,sync,no_subtree_check) 10.0.0.0/24(rw,sync,no_subtree_check)

Key options explained:

  - rw / ro: read-write or read-only access for the listed clients
  - sync: the server commits writes to disk before replying; safer than async, at some cost in write latency
  - no_subtree_check: skips a per-request subtree verification; recommended, since the check causes subtle problems when files are renamed
  - no_root_squash: lets root on the client act as root on the export; the default (root_squash) maps root to nobody. Only use this for trusted hosts that genuinely need it

Apply the configuration:

sudo exportfs -ra
sudo systemctl enable --now nfs-server

# Verify exports
showmount -e localhost
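
If the server runs a host firewall, NFS v4 only needs 2049/tcp open to clients. A minimal sketch assuming ufw is your firewall (match the subnet to your exports):

sudo ufw allow from 192.168.1.0/24 to any port 2049 proto tcp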

Mounting NFS on a Client

# One-time mount
sudo mount -t nfs4 192.168.1.100:/srv/nfs/shared /mnt/shared

# Persistent mount via /etc/fstab
# server:/export  mountpoint  type  options  dump  pass
192.168.1.100:/srv/nfs/shared  /mnt/shared  nfs4  defaults,_netdev  0  0

In fstab, _netdev marks the entry as a network filesystem, so the system waits for the network before mounting it (and unmounts it before taking the network down at shutdown). Without it, boot can stall on a mount attempt the network isn't ready for.
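
If the server might be unreachable at boot, a systemd automount sidesteps the problem entirely by mounting on first access instead. A sketch using standard systemd fstab options, same example share:

# /etc/fstab: mount lazily on first access, fail fast if the server is down
192.168.1.100:/srv/nfs/shared  /mnt/shared  nfs4  _netdev,noauto,x-systemd.automount,x-systemd.mount-timeout=10  0  0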

NFS Performance Tuning

# Mount with performance options:
#   rsize/wsize: read/write request size in bytes (1 MiB is the usual maximum)
#   hard: retry forever instead of returning I/O errors ("soft" risks silent data corruption)
#   timeo: retransmission timeout in tenths of a second
# (The old "intr" option is ignored by modern kernels and can be dropped.)
sudo mount -t nfs4 192.168.1.100:/srv/nfs/shared /mnt/shared \
  -o rsize=1048576,wsize=1048576,hard,timeo=600

# In fstab
192.168.1.100:/srv/nfs/shared  /mnt/shared  nfs4  rsize=1048576,wsize=1048576,hard,_netdev  0  0
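
To sanity-check whether tuning changed anything, a crude sequential test from the client is enough to spot order-of-magnitude problems (dd is not a proper benchmark, but it's everywhere):

# Write 1 GiB to the share, bypassing the client page cache
dd if=/dev/zero of=/mnt/shared/ddtest bs=1M count=1024 oflag=direct
rm /mnt/shared/ddtest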

SMB/CIFS: Server Message Block

SMB is the Windows file sharing protocol, but it's everywhere now. If you have Windows machines, macOS computers, smart TVs looking for media, or basically any non-Linux device that needs file access, SMB is what you need.

A Brief History

SMB started at IBM in the mid-1980s and was adopted and extended by Microsoft as the native Windows file sharing protocol. "CIFS" was the name of Microsoft's mid-1990s SMB dialect, which is why the Linux mount type is still called cifs even though nobody should run actual CIFS/SMB1 anymore: it's slow and insecure. SMB2 (2006) reworked the protocol's chattiness, and SMB3 (2012) added encryption and multichannel. The configuration below enforces SMB3 as the minimum.

Setting Up Samba (Linux SMB Server)

# Install Samba
# Debian/Ubuntu
sudo apt install samba

# Fedora/RHEL
sudo dnf install samba

Configure /etc/samba/smb.conf:

[global]
   workgroup = WORKGROUP
   server string = Homelab NAS
   security = user
   map to guest = Bad User   # failed logins become guest; required for the guest-accessible [media] share below

   # Require SMB3 minimum (disable insecure old versions)
   server min protocol = SMB3_00

   # Performance: modern Samba defaults are already sensible; sendfile and
   # async I/O are the options still worth setting. (The old "read raw" /
   # "write raw" options apply only to SMB1 and do nothing here.)
   socket options = TCP_NODELAY
   use sendfile = yes
   aio read size = 16384
   aio write size = 16384

[shared]
   path = /srv/samba/shared
   browsable = yes
   writable = yes
   valid users = @labusers
   create mask = 0664
   directory mask = 0775

[media]
   path = /srv/samba/media
   browsable = yes
   read only = yes
   guest ok = yes
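
The share paths must exist before anyone can connect. Create them to match the config above (run the chgrp after creating the labusers group in the next step):

sudo mkdir -p /srv/samba/shared /srv/samba/media
sudo chgrp labusers /srv/samba/shared
sudo chmod 2775 /srv/samba/shared    # setgid keeps new files owned by the group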

Create a Samba user. Samba keeps its own password database, but every Samba account must also exist as a system user:

# Create system user (no shell, can't log in)
sudo useradd -M -s /usr/sbin/nologin labuser

# Set Samba password
sudo smbpasswd -a labuser
sudo smbpasswd -e labuser

# Create the group
sudo groupadd labusers
sudo usermod -aG labusers labuser

Start and enable:

sudo systemctl enable --now smbd nmbd
# On Fedora/RHEL: smb nmb

# Test config
testparm

Mounting SMB on Linux

# One-time mount
sudo mount -t cifs //192.168.1.100/shared /mnt/shared \
  -o username=labuser,password=secret,vers=3.0,uid=1000,gid=1000

# Better: use a credentials file
sudo mount -t cifs //192.168.1.100/shared /mnt/shared \
  -o credentials=/root/.smbcredentials,vers=3.0,uid=1000,gid=1000

The credentials file (/root/.smbcredentials, mode 600):

username=labuser
password=secret
domain=WORKGROUP

For /etc/fstab:

//192.168.1.100/shared  /mnt/shared  cifs  credentials=/root/.smbcredentials,vers=3.0,uid=1000,gid=1000,_netdev  0  0

SMB Multichannel

SMB3 multichannel can aggregate multiple network interfaces for higher throughput. If your server and client both have two NICs:

# In smb.conf [global]
server multi channel support = yes

Windows clients negotiate multichannel automatically once the server advertises it. This can double your throughput with two 1 Gbps links, or provide failover if one link goes down.
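
The Linux kernel CIFS client doesn't use multichannel unless asked. On reasonably recent kernels (roughly 5.8 and newer; behavior varies by version, so treat this as a sketch) it's requested at mount time:

sudo mount -t cifs //192.168.1.100/shared /mnt/shared \
  -o credentials=/root/.smbcredentials,vers=3.1.1,multichannel,max_channels=2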

iSCSI: Block-Level Network Storage

iSCSI is different from NFS and SMB in a fundamental way: it doesn't share files. It shares raw block devices over the network. The client (called the "initiator") sees the remote disk as a local block device — like a virtual hard drive. You format it with whatever filesystem you want and use it like a local disk.

When iSCSI Makes Sense

  - VM disk storage for hypervisors (Proxmox, ESXi): low-latency block access that supports live migration
  - Databases and other workloads that want something indistinguishable from a local disk
  - When you need a filesystem or feature your NAS doesn't offer, since you format the LUN yourself
  - Practicing for enterprise environments, where iSCSI and multipathing are everywhere

When iSCSI Does NOT Make Sense

  - Sharing files between machines: a normal filesystem on an iSCSI LUN can be safely mounted by only one initiator at a time, and concurrent mounts will corrupt it unless you run a cluster filesystem
  - Simple document or media sharing, where NFS or SMB is far less work
  - Anywhere the extra complexity isn't buying you a performance or feature win

Setting Up an iSCSI Target (Server)

Using targetcli on Linux:

# Install
sudo apt install targetcli-fb    # Debian/Ubuntu
sudo dnf install targetcli       # Fedora/RHEL

# Launch the interactive shell
sudo targetcli

Inside targetcli:

# Create a block device or file-backed store
/backstores/block create disk0 /dev/sdb
# Or file-backed (easier to start with):
/backstores/fileio create disk0 /srv/iscsi/disk0.img 100G

# Create an iSCSI target
/iscsi create iqn.2026-02.sh.homelab:storage.target0

# Create a LUN (Logical Unit Number)
/iscsi/iqn.2026-02.sh.homelab:storage.target0/tpg1/luns create /backstores/fileio/disk0

# Create an ACL for the initiator
/iscsi/iqn.2026-02.sh.homelab:storage.target0/tpg1/acls create iqn.2026-02.sh.homelab:client1

# Set the portal (listen address). Recent targetcli versions auto-create
# a 0.0.0.0:3260 portal with the target; delete it first if you want to
# listen on a single address:
# /iscsi/iqn.2026-02.sh.homelab:storage.target0/tpg1/portals delete 0.0.0.0 3260
/iscsi/iqn.2026-02.sh.homelab:storage.target0/tpg1/portals create 192.168.1.100

# Save and exit
saveconfig
exit

Enable the target service:

sudo systemctl enable --now target    # or rtslib-fb-targetctl on some distros
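
Initiators also need to reach 3260/tcp. On Fedora/RHEL, firewalld ships a service definition for this (with ufw, allow 3260/tcp from your storage subnet instead):

sudo firewall-cmd --permanent --add-service=iscsi-target
sudo firewall-cmd --reload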

Setting Up an iSCSI Initiator (Client)

# Install
sudo apt install open-iscsi          # Debian/Ubuntu
sudo dnf install iscsi-initiator-utils  # Fedora/RHEL

# Set your initiator name
echo "InitiatorName=iqn.2026-02.sh.homelab:client1" | sudo tee /etc/iscsi/initiatorname.iscsi

# Discover targets
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.100

# Login to the target
sudo iscsiadm -m node -T iqn.2026-02.sh.homelab:storage.target0 -p 192.168.1.100 --login

# The new block device appears (check dmesg or lsblk)
lsblk
# You'll see a new disk, e.g., /dev/sdc

# Format and mount like any local disk
sudo mkfs.ext4 /dev/sdc
sudo mount /dev/sdc /mnt/iscsi

To make it persistent across reboots:

sudo iscsiadm -m node -T iqn.2026-02.sh.homelab:storage.target0 -p 192.168.1.100 --op update -n node.startup -v automatic
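
If the filesystem goes in fstab, reference it by UUID (the /dev/sdX name can change between boots) and mark it _netdev so mounting waits for the network and the iSCSI session:

# Find the UUID
sudo blkid /dev/sdc

# /etc/fstab (substitute the UUID blkid printed)
UUID=replace-with-actual-uuid  /mnt/iscsi  ext4  defaults,_netdev  0  2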

iSCSI Multipath

In production environments, you run multiple network paths to your iSCSI target for redundancy and performance. This is called multipathing. If one network link goes down, I/O continues on the other.

# Install multipath tools
sudo apt install multipath-tools

# Configure /etc/multipath.conf
defaults {
    user_friendly_names yes
    find_multipaths     yes
    polling_interval    5
    path_grouping_policy  failover
    failback           immediate
}

# Restart and check
sudo systemctl enable --now multipathd
sudo multipath -ll

For a home lab, multipath is usually overkill unless you're practicing for enterprise environments or genuinely need the redundancy.

Performance Comparison

Real-world performance depends on your network, hardware, and workload. But here are some general expectations on a 10 Gbps network with SSDs:

| Metric | NFS v4 | SMB3 | iSCSI |
|---|---|---|---|
| Sequential read | 800-1000 MB/s | 700-900 MB/s | 900-1100 MB/s |
| Sequential write | 600-800 MB/s | 500-700 MB/s | 800-1000 MB/s |
| Random 4K IOPS (read) | 20,000-40,000 | 15,000-30,000 | 30,000-60,000 |
| Latency overhead | Low | Medium | Lowest |
| CPU overhead | Low | Medium | Low |

The numbers above are rough and depend heavily on configuration. Key observations:

  - iSCSI leads on latency and random I/O because no file-protocol layer sits between the client and the blocks
  - NFS v4 is close behind and has the lowest overhead for Linux-to-Linux traffic
  - SMB3's extra protocol overhead shows up in small random I/O far more than in large sequential transfers
  - On a 1 Gbps network (~110 MB/s usable), all three saturate the link, and sequential throughput differences disappear
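
To measure your own setup instead of trusting a table, fio is the standard tool. A sketch for the random 4K read case (point --directory at your mounted share, or --filename at a raw iSCSI device you can scribble on):

fio --name=randread --directory=/mnt/shared --ioengine=libaio \
    --rw=randread --bs=4k --direct=1 --size=1g --numjobs=4 \
    --iodepth=32 --runtime=30 --time_based --group_reporting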

Which Protocol for Which Use Case

VM Storage (Proxmox/ESXi)

Best: iSCSI or NFS

Proxmox supports both NFS and iSCSI as shared storage for VM disks. iSCSI gives slightly better performance. NFS is simpler to set up and manage. Both support live migration.

Proxmox can also use SMB/CIFS as a storage backend (the cifs storage type supports VM images), but it's the least common choice for VM disks; NFS and iSCSI are the conventional options.

# Proxmox: Add NFS storage via CLI
pvesm add nfs nas-nfs --server 192.168.1.100 --export /srv/nfs/vms --content images,iso,vztmpl

# Proxmox: Add iSCSI storage via CLI
pvesm add iscsi nas-iscsi --portal 192.168.1.100 --target iqn.2026-02.sh.homelab:storage.target0
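
Note that pvesm add iscsi only attaches the LUN. In practice most Proxmox setups either enable "use LUNs directly" (one LUN per VM disk) or create an LVM volume group on the LUN and add it as LVM storage, so multiple VM disks can be carved out of one big LUN.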

File Sharing (Documents, Media)

Best: SMB (mixed environment) or NFS (Linux-only)

If you have Windows machines, macOS, smart TVs, or other consumer devices: use SMB. Everything speaks SMB.

If your entire environment is Linux: NFS is simpler, faster, and has less overhead.

Backups

Best: NFS

Backup tools like Proxmox Backup Server, Borg, and Restic work well with NFS mounts. NFS is simple, reliable, and doesn't add unnecessary protocol overhead.

Databases

Best: iSCSI or local storage

Databases want low-latency block access. iSCSI gives them what looks like a local disk. NFS works for databases too but adds a layer of indirection. For serious database workloads, local storage (or Ceph RBD) is better than any network protocol.

Common Troubleshooting

NFS

# Mount hangs
# Check: Is the NFS server running?
systemctl status nfs-server

# Check: Can you reach port 2049?
nc -zv 192.168.1.100 2049

# Check: Are exports correct?
showmount -e 192.168.1.100

# Permission denied
# Check: Does the client IP match the export rule?
# Check: Are uid/gid mappings correct?
# Check: Is root_squash causing issues?

# Stale file handle
# Remount:
sudo umount -f /mnt/shared
sudo mount -a

SMB

# Can't connect
# Check: Is Samba running?
systemctl status smbd

# Check: Can you see the shares?
smbclient -L //192.168.1.100 -U labuser

# Check: Is the server listening on 445/tcp (and is the firewall open)?
sudo ss -tlnp | grep 445

# Permission denied
# Check: Is the Samba password set?
sudo pdbedit -L

# Slow performance
# Check: Are you using SMB3?
smbstatus --shares
# Ensure server min protocol = SMB3_00 in smb.conf

iSCSI

# No targets found during discovery
# Check: Is the target service running?
systemctl status target

# Check: Can you reach port 3260?
nc -zv 192.168.1.100 3260

# Check: Is the portal configured correctly?
sudo targetcli ls

# Disk not appearing after login
# Check: Is the ACL configured for your initiator name?
# Check: dmesg for errors
dmesg | tail -20

# Session dropped
# Check: network connectivity
iscsiadm -m session -P 3

Authentication and Security

NFS

By default, NFS v3 authenticates by IP address only, trusting whatever uid/gid the client claims: anyone who can spoof or reach an allowed IP can access the share. NFS v4 with Kerberos (sec=krb5p) provides real authentication and encryption, but setting up Kerberos is significant effort for a home lab.

For most home labs, IP-based access control on a trusted LAN is sufficient. If you need more, NFS v4 + Kerberos or using a VPN (WireGuard/Tailscale) to restrict network access is the way to go.
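
For reference, a Kerberos-protected export only changes the sec option. A sketch, assuming a working Kerberos realm (which is the hard part):

# /etc/exports: require Kerberos authentication, integrity, and encryption
/srv/nfs/secure  192.168.1.0/24(rw,sync,no_subtree_check,sec=krb5p)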

SMB

SMB3 includes built-in encryption. Enable it:

# In smb.conf [global]
smb encrypt = required

Authentication is user/password based through Samba's own user database. For more sophisticated setups, you can integrate with LDAP or Active Directory.

iSCSI

iSCSI supports CHAP (Challenge-Handshake Authentication Protocol) for basic authentication. It's not encryption — it just verifies identity.

# In targetcli, set CHAP credentials
/iscsi/iqn.../tpg1 set attribute authentication=1
/iscsi/iqn.../tpg1/acls/iqn... set auth userid=myuser
/iscsi/iqn.../tpg1/acls/iqn... set auth password=mysecretpassword
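
The initiator side needs matching credentials. With open-iscsi they go in /etc/iscsi/iscsid.conf (these keys are present in the stock file, commented out; restart iscsid after editing):

# /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = myuser
node.session.auth.password = mysecretpassword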

For encryption, wrap iSCSI in IPsec or run it over a VPN. On a trusted home LAN, most people skip encryption for iSCSI.

Making the Decision

If you're standing up a new home lab NAS and aren't sure which protocol to use:

  1. Start with NFS if your clients are all Linux. It's the simplest, fastest, and most predictable.
  2. Add SMB when you need Windows, macOS, or media device access. You can run both NFS and SMB on the same server sharing the same data.
  3. Use iSCSI when you need block-level storage — primarily for hypervisor VM storage or databases.

Most mature home labs end up running multiple protocols. Your TrueNAS or Linux NAS serves NFS to Proxmox for VM storage, SMB to your desktop for file sharing, and maybe iSCSI for a specific database VM. They coexist just fine.

The important thing is matching the protocol to the workload instead of trying to use one protocol for everything. NFS for Linux servers, SMB for Windows and media, iSCSI for block storage. Simple rules, reliable results.