NFS is the Network File System, originally developed by Sun Microsystems in 1984. It has been ported to almost every operating system imaginable, but it is most commonly used on BSD and Linux systems.
This guide assumes installation of both the client and server on Ubuntu, and all shares will be protected with standard UNIX user permissions.
Install the NFS kernel daemon by running:
# apt-get install nfs-kernel-server
Install the NFS common package by running:
# apt-get install nfs-common
Shares, also known as exports, are listed in
/etc/exports. An example is shown below.
/tank/storage 192.168.30.101(rw,async,no_subtree_check)
/tank/backup 192.168.30.101(rw,async,no_subtree_check)
The first column is the absolute path on the server to be shared.
The second column is the IP address or hostname that is allowed to connect.
* can be used as a wildcard.
The options listed in parentheses apply to that client. Additional columns can be added to allow multiple clients, each with its own set of options.
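For example, the following hypothetical line exports the same directory read-write to one host and read-only to the rest of its subnet:

```
/tank/storage 192.168.30.101(rw,async,no_subtree_check) 192.168.30.0/24(ro,async,no_subtree_check)
```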
The options used here are as follows:
rw - allow both reading and writing to the share.
async - allow asynchronous writes. Use the slower but safer
sync option to force synchronous writes.
no_subtree_check - do not verify on every request that the file being accessed actually lies within the exported subtree. Skipping this check improves performance. Use
subtree_check when exporting only a subdirectory of a volume and access outside the export must be prevented.
Once the exports are set, you can reload them by running:
# exportfs -a
# exportfs -r
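The active exports can then be inspected. exportfs -v lists them verbosely on the server, and showmount (part of nfs-common) queries a server from a client; the address below assumes the example server:

```
# exportfs -v
# showmount -e 192.168.30.100
```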
It is good to test that mounting is possible before making anything permanent. To do this, create the destination mountpoint and make sure it has the correct permissions for the share being bound to it. Then run:
# mount -t nfs 192.168.30.100:/tank/storage /media/mark/storage
assuming that the server IP is
192.168.30.100, the export is
/tank/storage, and it is being mounted to
/media/mark/storage.
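If the mount succeeds, it can be verified and then unmounted again before anything is made permanent:

```
# mount | grep nfs
# df -h /media/mark/storage
# umount /media/mark/storage
```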
If all the exports work correctly, then they can be mounted at boot time by adding them to
/etc/fstab. For example:
192.168.30.100:/tank/storage /media/mark/storage nfs _netdev,auto,nfsvers=4,hard,intr,rsize=65536,wsize=65536,noatime 0 0
192.168.30.100:/tank/backup /media/mark/backup nfs _netdev,auto,nfsvers=4,hard,intr,rsize=65536,wsize=65536,noatime 0 0
The options used here are as follows:
_netdev - indicate that this mount requires the network and to not try to mount it until the network has an established link.
auto - indicate that this is to be mounted at boot automatically.
nfsvers=4 - specify NFS version 4. Most servers will also accept version 3, though at reduced performance.
hard - try to perform I/O operations indefinitely and not give up if the server crashes. Processes waiting on files will hang and continue when the server is back up.
intr - allow processes hung waiting on a file from a crashed server to be interrupted by interrupt or termination signals. Note that since Linux kernel 2.6.25 this option is deprecated and ignored; a hung NFS operation can only be interrupted by SIGKILL.
rsize - the maximum size, in bytes, of a block of data read from the server in a single request. Must be a multiple of 1024. Try different values to tune performance.
wsize - the maximum size, in bytes, of a block of data written to the server in a single request. Must be a multiple of 1024. Try different values to tune performance.
noatime - do not update the access time of files. This is usually a pointless operation and just degrades performance.
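When experimenting with rsize and wsize values, a quick sanity check that a candidate is a multiple of 1024 can be sketched in shell (a hypothetical helper for tuning, not part of any NFS tooling):

```shell
#!/bin/sh
# check_size: print "<value> ok" if the given value is a multiple of 1024,
# otherwise warn that it is not (hypothetical helper for rsize/wsize tuning)
check_size() {
    if [ $(( $1 % 1024 )) -eq 0 ]; then
        echo "$1 ok"
    else
        echo "$1 not a multiple of 1024"
    fi
}

check_size 65536   # -> 65536 ok
check_size 65000   # -> 65000 not a multiple of 1024
```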
Even when using the
_netdev option in
/etc/fstab, Upstart and other init systems will mount the shares in parallel and will not ensure that they are mounted before continuing the boot process and loading lightdm or another display manager. If directories such as
/home, or anything depended on by
~/.config/user-dirs.dirs, are on a share, they will fail to load on login unless the user waits at the login prompt for the mounts to silently complete. Instead, the system should wait for the mounts to complete before allowing the user to log in.
A simple approach to implementing this is to add a condition to lightdm's startup script. Add the mountpoints to
/etc/init/lightdm.conf as follows:
start on ((filesystem and runlevel [!06] and started dbus and plymouth-ready and mounted MOUNTPOINT=/media/mark/storage and mounted MOUNTPOINT=/media/mark/backup) or runlevel PREVLEVEL=S)
RDMA and InfiniBand
InfiniBand, often referred to as IB, is a hardware networking standard used in distributed high-performance computing. InfiniBand supports an extension known as IPoIB, or Internet Protocol over InfiniBand, which allows NFS (or any other TCP/UDP protocol) to operate at high speed over an IB link. This is suboptimal, however, as IB operates very differently from other interconnects such as Ethernet, so IPoIB cannot realize the full speed of the link. InfiniBand also supports RDMA, or Remote Direct Memory Access, which allows the network adapter to perform many operations with minimal CPU and OS involvement. NFSv4 can use RDMA as its transport, which is much preferred over IPoIB.
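Assuming the client kernel has the NFS/RDMA modules available and the server exports over RDMA, a share can be mounted over RDMA with the proto mount option (20049 is the standard NFS/RDMA port; the addresses reuse the examples above):

```
# mount -t nfs -o proto=rdma,port=20049 192.168.30.100:/tank/storage /media/mark/storage
```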