NFS Setup


Overview

The Network File System (NFS) is a file and directory sharing mechanism native to UNIX and Linux. It requires a server and a client configuration. The server presents a named directory to be shared out, with specific permissions as to which clients can access it (by IP address) and what capabilities they can use (such as read-write or read-only). The configuration can use both TCP and UDP at the same time, depending on the needs of the specific implementation. The process utilizes the Remote Procedure Call (RPC) infrastructure to facilitate communication and operation.

Within the modern world of NFS, there are two major levels of functionality in use today - NFSv3 (version 3) and NFSv4 (version 4). While there are many differences between the two, some of the more important ones are:

                 NFSv3                                           NFSv4
 Transports      TCP and UDP                                     TCP
 Authentication  IP only                                         Kerberos support (optional)
 Protocols       Stateless; several protocols required           Stateful; single protocol within the stack,
                 such as MOUNT, LOCK, STATUS                     with security auditing
 Locking         NLM and the lock protocol                       Lease-based locking within the protocol
 Security        Traditional UNIX permissions, no ACL support    Kerberos and ACL support
 Communication   One RPC call per operation                      Several operations supported by one RPC call


Both NFSv3 and NFSv4 services are handled by the same software package installations as outlined in this article; a system can use NFSv3 only, NFSv4 only, or a mixture of the two at the same time depending on the configuration and implementation. Older versions such as NFSv2 are considered obsolete and are not covered herein; NFSv3 support has been stable for over a decade.

NFSv4: The use of idmapd with sec=sys (system level, not Kerberos) may not produce the permission results expected. NFSv4 uses UTF-8 string principals between the client and server; when using Kerberos (sec=krb*) it is sufficient to use the same user names and NFSv4 domain on the client and server even when the UIDs differ. However, with AUTH_SYS (sec=sys) the RPC requests use the UIDs and GIDs from the client host. In practice this means the UIDs/GIDs still have to be manually aligned among all hosts in the NFS environment, just like traditional NFSv3 - idmapd alone does not translate mismatched IDs. For detailed information read this thread on the linux-nfsv4 mailing list. Red Hat has a comprehensive article detailing this subject: https://access.redhat.com/solutions/386333
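
As a quick sanity check when using sec=sys, confirm that the relevant accounts resolve to the same UID and GID on every host in the NFS environment; the user name test below is purely an illustrative example:

# Run on the NFS server and on each client - the UID/GID values must match
getent passwd test
id test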


TCP Wrappers

The TCP Wrappers concept and software package was invented back in 1990, before the modern ecosystem of pf (BSD), ipchains (older Linux) and iptables (modern Linux) existed. In general, a software author could link their application against a shared library (libwrap.so) that provided a common mechanism for network-level access control (ACLs) without each author having to implement it independently. On modern Linux installs, the use of iptables instead of TCP Wrappers is standard and preferred.

The design has two main configuration files of note:

  • /etc/hosts.allow
  • /etc/hosts.deny

Examine both of these files for any lines that begin with keywords like portmap, nfs, or rpc and comment them out. Unless a very specific need and use case requires it, using TCP Wrappers with NFS is to be avoided in favor of iptables. No software restarts are required after editing these files; the changes are dynamic. The NFS subsystem is still linked to this library so it cannot be uninstalled - instead, ensure the configuration ignores it. A quick check of both files is shown below.
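
A simple grep highlights any lines that reference the NFS-related daemons; the exact daemon names present will vary by distribution, so the pattern here is a broad example:

# Review any matches that are not already commented out - those lines can affect NFS
egrep -i "portmap|rpcbind|mountd|statd|lockd|rpc|nfs" /etc/hosts.allow /etc/hosts.deny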

A common question is "How do I know if an application uses TCP Wrappers?" - the binary itself will be dynamically linked against the libwrap.so library; ldd can be used to check, for example:

[root@nfs-server ~]# ldd /sbin/rpcbind | grep libwrap
    libwrap.so.0 => /lib64/libwrap.so.0 (0x00007fbee9b8e000)

Because it links against libwrap.so as provided by the tcp_wrappers package, we know rpcbind can be affected by the /etc/hosts.allow and /etc/hosts.deny configuration.


Firewalls and IPtables

Standard network communication is required between the server and client; the exact ports and protocols are discussed below. In this document we will use the private network 192.168.5.0/24 between the servers, and have added a generic iptables rule to open all communication carte blanche for this subnet:

-A INPUT -s 192.168.5.0/24 -j ACCEPT

The exact configuration and what may need to be changed to allow communication could involve IPtables, firewalld, UFW, firewall ACLs and so forth - the exact environment will dictate how the network security needs to be adjusted to allow cross-server communication. If using the standard /etc/sysconfig/nfs pre-defined ports for RHEL, this set of iptables rules should suffice - the Debian / Ubuntu sections will also use these same ports for compatibility:

-A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -p udp -m udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 662 -j ACCEPT
-A INPUT -p udp -m udp --dport 662 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 892 -j ACCEPT
-A INPUT -p udp -m udp --dport 892 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
-A INPUT -p udp -m udp --dport 2049 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 32803 -j ACCEPT
-A INPUT -p udp -m udp --dport 32769 -j ACCEPT


Client systemd NFS Mounts

With the advent of systemd, the use of a service like netfs is no longer required or present - the mounts are still configured in /etc/fstab, but how they work is completely different mechanically. The new systemd methodology of mounting filesystems at boot time uses the generator infrastructure on RHEL/CentOS 7, Debian 8, Arch and others.

Upon boot, systemd-fstab-generator examines the configured mounts and writes each one as a systemd Unit in the runtime directory /run/systemd/generator, which is then "started" as if it were a traditional systemd unit file. These Unit files have the normal dependency chains listed for that particular mount point; using the example mount point in this wiki from fstab:

/etc/fstab
192.168.5.1:/data  /data  nfs  vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,noatime  0  0

...the generator creates this file on boot, data.mount:

/run/systemd/generator/data.mount
# Automatically generated by systemd-fstab-generator

[Unit]
SourcePath=/etc/fstab
Before=remote-fs.target

[Mount]
What=192.168.5.1:/data
Where=/data
Type=nfs
Options=vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,noatime

These units are then pulled in by the remote-fs.target Unit, which is Wanted by the multi-user.target Unit. This methodology ensures that the networking layer is up, the rpcbind/nfs-lock services are running, and the local filesystem is ready before the mount is performed, in a fully automated fashion. No longer does a tech need to remember to enable a service like netfs to get NFS shares mounted at boot.
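
To see what the generator produced for a given mount, the standard systemctl tooling can be used; these commands are generic and should behave the same on any systemd-based distribution:

# List mount units currently known to systemd
systemctl list-units --type=mount

# Show the generated unit file for the /data mount point
systemctl cat data.mount

# After editing /etc/fstab, regenerate the units without rebooting
systemctl daemon-reload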

The use of Generators goes beyond just NFS mounts - please see the official documentation for a full overview.


Client noatime NFS Mounts

There tends to be a misconception about the use of noatime with NFS client mounts; in short, using noatime on the client mount has no real effect on atime behavior. The NFS server should mount its source data directory using noatime instead, then export that to the clients. Red Hat has written an Access article detailing the process that happens under the covers:

An excerpt from the article:

"... Because of this caching behavior, the Linux NFS client does not support generic atime-related mount options. See mount(8) for details on these options. In particular, the atime/noatime, diratime/nodiratime, relatime/norelatime, and strictatime/nostrictatime mount options have no effect on NFS mounts."

However, specifying noatime in the client mount options will still reduce the amount of traffic sent to the server, even though the atime data itself is conceptually inaccurate. Mounting with noatime allows the client to make full use of its local cache for GETATTR NFS calls, at the expense of possibly having outdated attribute information from the server. Without noatime on the client mount, the client will essentially bypass its cache to ensure its local view of the attributes is up to date, which may not matter in practical use.

Therefore the best solution is to ensure that both the server and the client mount with noatime to increase performance on both ends of the connection: the server mounts the real filesystem with noatime first and exports the share, then the client mounts with noatime in /etc/fstab. The noatime option is not used in the server's /etc/exports options at all; it is an invalid setting in that location.
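
A minimal sketch of the end-to-end setup, assuming the server's /data lives on a hypothetical ext4 device /dev/sdb1 (adjust to the real device and filesystem in use):

# NFS server /etc/fstab - mount the real filesystem with noatime
/dev/sdb1  /data  ext4  defaults,noatime  0  2

# NFS server /etc/exports - no noatime here, it is not a valid export option
/data 192.168.5.0/24(rw,no_root_squash)

# NFS client /etc/fstab - noatime again to make full use of the attribute cache
192.168.5.1:/data  /data  nfs  vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,noatime  0  0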


NFSv3 Server

RHEL / CentOS

Install the required software packages for the specific release of RHEL / CentOS; basically, rpcbind replaced portmap from RHEL5 to RHEL6:

# RHEL5 / CentOS5
yum -y install portmap nfs-utils

# RHEL6 / CentOS6 / RHEL7 / CentOS7
yum -y install rpcbind nfs-utils


By default the NFSv3 methodology will assign random ephemeral ports for each one of the required daemons upon start; this needs to be changed so that the NFS server is firewall/VLAN ACL friendly and always uses static ports on every start. Red Hat provides a configuration file ready to use.

The default configuration only enables 8 NFS threads, a holdover from two decades ago. Set RPCNFSDCOUNT to a value that the specific server can handle based on its available resources. In practice, 32 or 64 threads works for most servers; by the time they are exhausted, disk I/O is usually the bottleneck anyway. This can also be changed at runtime, as shown below.
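
For example, on a running server the thread count can be inspected and raised on the fly; persist the change in the configuration file so it survives a restart (the /proc path assumes the nfsd filesystem is mounted, which it is on a running NFS server):

# Current number of nfsd threads
cat /proc/fs/nfsd/threads

# Raise to 64 immediately without restarting the nfs service
rpc.nfsd 64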

Edit /etc/sysconfig/nfs and uncomment the lines that have _PORT in them to use the predefined static ports, and set the threads. For RHEL5 and RHEL6 all of the ports are pre-defined for NFSv3; for RHEL7 some of the configuration must be added manually.

## RHEL5 / CentOS5 / RHEL6 / CentOS6
# egrep "(PORT|COUNT)" /etc/sysconfig/nfs
RPCNFSDCOUNT=64
RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
STATD_OUTGOING_PORT=2020

## RHEL7 / CentOS7
# egrep -v "(^(#|$)|=\"\")" /etc/sysconfig/nfs
RPCRQUOTADOPTS="-p 875"
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
RPCNFSDCOUNT=64
RPCMOUNTDOPTS="-p 892"
STATDARG="-p 662 -o 2020"
GSS_USE_PROXY="no"


Next, define the specific directory to be shared, along with its permissions and capabilities, in /etc/exports. Be careful and understand that a space between the IP/name and the opening parenthesis is usually incorrect – the parsing of this file treats the space as a field separator for the entire permissions object! In this example, we share out the /data directory to all servers in the subnet 192.168.5.x:

/etc/exports
/data 192.168.5.0/24(rw,no_root_squash)

Notice the format is '(what) (who)(how)' in nature, with no space between the who and the how; an entry without a who (or with a *) applies global permissions! Only one what is allowed per line, but several who(how) combinations can be used, like so:

/etc/exports
Correct, no space between /who/ and (how):

 /data 192.168.1.4(rw) 192.168.1.9(rw,no_root_squash) *(ro)
 /opt 10.11.12.0/24(rw) 10.11.13.0/24(rw,no_root_squash)

Incorrect, spaces between /who/ and (how):

 /data 192.168.1.4 (rw) 192.168.1.9 (rw,no_root_squash) * (ro)
 /opt 10.11.12.0/24 (rw) 10.11.13.0/24 (rw,no_root_squash)


Start the portmap/rpcbind, nfslock, and nfs services. Enable them to start at boot.

# RHEL5 / CentOS5
service portmap start; chkconfig portmap on
service nfslock start; chkconfig nfslock on
service nfs start; chkconfig nfs on

# RHEL6 / CentOS6
service rpcbind start; chkconfig rpcbind on
service nfslock start; chkconfig nfslock on
service nfs start; chkconfig nfs on

# RHEL7 / CentOS7
systemctl start rpcbind nfs-lock nfs-server
systemctl enable rpcbind nfs-lock nfs-server


Finally, check the server configuration locally - rpcinfo is used to check the protocols, daemons and ports, showmount is used to check the exported share.

# rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd

# showmount -e
Export list for nfs-server.local:
/data 192.168.5.0/24


Debian / Ubuntu

Install the required software packages:

apt-get update
apt-get install rpcbind nfs-common nfs-kernel-server

IMMEDIATELY stop the auto-started services so that we can unload the lockd kernel module! In order to set static ports the module has to be unloaded, which can be troublesome if it's not done right away – if you do not perform these actions now, you might have to reboot later which is undesirable.

# Debian 7
service nfs-kernel-server stop
service nfs-common stop
service rpcbind stop
modprobe -r nfsd nfs lockd

# Ubuntu 14
service nfs-kernel-server stop
service statd stop
service idmapd stop
service rpcbind stop
modprobe -r nfsd nfs lockd
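
Before continuing, it is worth confirming the modules were actually unloaded - if lsmod still shows lockd, the static port settings below will not take effect until it is removed (or the host rebooted):

# No output means the modules are unloaded and configuration can proceed
lsmod | egrep "nfsd|lockd"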


By default the NFSv3 methodology will assign random ephemeral ports for each one of the required daemons upon start; this needs to be changed so that the NFS server is firewall/VLAN ACL friendly and always uses static ports on every start. The ports are configured in two different files, differently than on Red Hat; they can be set to the same static port numbers, which is recommended for maximum compatibility.

First, edit /etc/default/nfs-common to define that you want to run STATD and set the ports; notice on Ubuntu that we need to disable rpc.idmapd using an Upstart override:

## Debian 7
# egrep -v "^(#|$)" /etc/default/nfs-common
NEED_STATD=yes
STATDOPTS="-p 662 -o 2020"
NEED_IDMAPD=no
NEED_GSSD=no

## Ubuntu 14
# egrep -v "^(#|$)" /etc/default/nfs-common
NEED_STATD=yes
STATDOPTS="-p 662 -o 2020"
NEED_GSSD=no

## Ubuntu 14 Only
echo "manual" > /etc/init/idmapd.override

The default configuration only enables 8 NFS threads, a holdover from two decades ago. Set RPCNFSDCOUNT to a value that the specific server can handle based on its available resources. In practice, 32 or 64 threads works for most servers; by the time they are exhausted, disk I/O is usually the bottleneck anyway. This can also be changed at runtime.

Second, edit /etc/default/nfs-kernel-server to set the RPCNFSDCOUNT higher than the default 8 threads and set the MOUNTD ports:

# egrep -v "^(#|$)" /etc/default/nfs-kernel-server 
RPCNFSDCOUNT=64
RPCNFSDPRIORITY=0
RPCMOUNTDOPTS="--manage-gids -p 892"
NEED_SVCGSSD=no
RPCSVCGSSDOPTS=

Last, create /etc/modprobe.d/nfs-lockd.conf with this content to set the ports for LOCKD:

echo "options lockd nlm_udpport=32769 nlm_tcpport=32803" > /etc/modprobe.d/nfs-lockd.conf


Next, define the specific directory to be shared, along with its permissions and capabilities, in /etc/exports. Be careful and understand that a space between the IP/name and the opening parenthesis is usually incorrect – the parsing of this file treats the space as a field separator for the entire permissions object! In this example, we share out the /data directory to all servers in the subnet 192.168.5.x:

/etc/exports
/data 192.168.5.0/24(rw,no_root_squash,no_subtree_check)

Notice the format is '(what) (who)(how)' in nature, with no space between the who and the how; an entry without a who (or with a *) applies global permissions! Only one what is allowed per line, but several who(how) combinations can be used, like so:

/etc/exports
Correct, no space between /who/ and (how):

 /data 192.168.1.4(rw) 192.168.1.9(rw,no_root_squash) *(ro)
 /opt 10.11.12.0/24(rw) 10.11.13.0/24(rw,no_root_squash)

Incorrect, spaces between /who/ and (how):

 /data 192.168.1.4 (rw) 192.168.1.9 (rw,no_root_squash) * (ro)
 /opt 10.11.12.0/24 (rw) 10.11.13.0/24 (rw,no_root_squash)


Start the rpcbind, nfs-common and nfs-kernel-server services. Enable them to start at boot - this is normally automatic on Debian/Ubuntu, however to be 100% safe run the commands to verify it's configured.

# Debian 7
service rpcbind start; insserv rpcbind
service nfs-common start; insserv nfs-common
service nfs-kernel-server start; insserv nfs-kernel-server

# Ubuntu 14 - Upstart controlled rpcbind/statd
service rpcbind start
service statd start
service nfs-kernel-server start; update-rc.d nfs-kernel-server enable


Finally, check the server configuration locally - rpcinfo is used to check the protocols, daemons and ports, showmount is used to check the exported share.

# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd

# showmount -e
Export list for nfs-server.local:
/data 192.168.5.0/24


NFSv3 Client

RHEL / CentOS

Install the required software packages for the specific release of RHEL / CentOS; basically, rpcbind replaced portmap from RHEL5 to RHEL6:

# RHEL5 / CentOS5
yum -y install portmap nfs-utils

# RHEL6 / CentOS6 / RHEL7 / CentOS7
yum -y install rpcbind nfs-utils


Start the portmap/rpcbind and nfslock services; note that the nfs service is not required - that is for the server only. On the client the netfs service is enabled, which mounts network filesystems from /etc/fstab after networking is started during the boot process. The netfs service is not typically started by hand after the system is online, only at boot.

# RHEL5 / CentOS5
service portmap start; chkconfig portmap on
service nfslock start; chkconfig nfslock on
chkconfig netfs on

# RHEL6 / CentOS6
service rpcbind start; chkconfig rpcbind on
service nfslock start; chkconfig nfslock on
chkconfig netfs on

# RHEL7 / CentOS7
systemctl start rpcbind nfs-lock nfs-client.target
systemctl enable rpcbind nfs-lock nfs-client.target


From the client, query the server to check the RPC ports and the available exports - compare against the server configuration for validity:

# rpcinfo -p 192.168.5.1
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd

# showmount -e 192.168.5.1
Export list for 192.168.5.1:
/data 192.168.5.0/24


Make the destination directory for the mount, and test a simple mount with no advanced options:

# showmount -e 192.168.5.1
# mkdir /data
# mount -t nfs -o vers=3 192.168.5.1:/data /data
# df -h /data
# umount /data


Now that it's confirmed working, add it to /etc/fstab as a standard mount. This is where the Best Practices and performance options can be applied; in general the recommended set of options is typically based on using TCP or UDP, which will depend on the environment in question. See the man page nfs(5) for a full list of everything that can be tuned.

/etc/fstab
# TCP example
192.168.5.1:/data  /data  nfs  vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,noatime  0  0

# UDP example
192.168.5.1:/data  /data  nfs  vers=3,proto=udp,hard,intr,rsize=32768,wsize=32768,noatime  0  0


Finally, test the mount and check the desired options were applied.

# mount /data
# touch /data/test-file
# df -h /data
# grep /data /proc/mounts 
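
In addition to /proc/mounts, the nfsstat utility from nfs-utils can summarize the options negotiated for each NFS mount:

# Show each mounted NFS filesystem with the mount options in effect
nfsstat -m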

Performance testing should typically be performed at this point (possibly with a tool such as fio) to determine if any of these options could be adjusted for better results.
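
A minimal fio sketch for such a test is shown below; the job names, file size, and block size are arbitrary examples - tune them (and the read/write mix) to match the expected workload:

# Sequential write test against the NFS mount - requires the fio package
fio --name=nfs-seq-write --directory=/data --rw=write --bs=1M --size=1G --numjobs=1 --direct=1 --group_reporting

# Follow up with a sequential read pass
fio --name=nfs-seq-read --directory=/data --rw=read --bs=1M --size=1G --numjobs=1 --direct=1 --group_reporting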


Debian / Ubuntu

Install the required software packages:

apt-get update
apt-get install rpcbind nfs-common


The services should be started automatically; if not, start them and ensure they are enabled on boot. Note that the nfs-kernel-server service is not required - that is for the server only. On the client the mountnfs.sh service is enabled, which mounts network filesystems from /etc/fstab after networking is started during the boot process. The mountnfs.sh service is not typically started by hand after the system is online, only at boot.

# Debian 7
service rpcbind start; insserv rpcbind
service nfs-common start; insserv nfs-common
insserv mountnfs.sh

# Ubuntu 14 - Upstart controlled rpcbind/statd/mountnfs
service rpcbind start
service statd start
service idmapd stop
echo "manual" > /etc/init/idmapd.override


From the client, query the server to check the RPC ports and the available exports - compare against the server configuration for validity:

# rpcinfo -p 192.168.5.1
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd

# showmount -e 192.168.5.1
Export list for 192.168.5.1:
/data 192.168.5.0/24


Make the destination directory for the mount, and test a simple mount with no advanced options:

# showmount -e 192.168.5.1
# mkdir /data
# mount -t nfs -o vers=3 192.168.5.1:/data /data
# df -h /data
# umount /data


Now that it's confirmed working, add it to /etc/fstab as a standard mount. This is where the Best Practices and performance options can be applied; in general the recommended set of options is typically based on using TCP or UDP, which will depend on the environment in question. See the man page nfs(5) for a full list of everything that can be tuned.

/etc/fstab
# TCP example
192.168.5.1:/data  /data  nfs  vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,noatime  0  0

# UDP example
192.168.5.1:/data  /data  nfs  vers=3,proto=udp,hard,intr,rsize=32768,wsize=32768,noatime  0  0


Finally, test the mount and check the desired options were applied.

# mount /data
# touch /data/test-file
# df -h /data
# grep /data /proc/mounts

Performance testing should typically be performed at this point (possibly with a tool such as fio) to determine if any of these options could be adjusted for better results.


NFSv4 Server

If the server is supporting both NFSv3 and NFSv4, be sure to combine the setup steps above with these below to set static ports. Kerberos support is explicitly not configured within these instructions; traditional sec=sys (security = system) mode is being used.

RHEL / CentOS

Install the required software packages:

# RHEL5 / CentOS5
yum -y install portmap nfs-utils nfs4-acl-tools

# RHEL6 / CentOS6 / RHEL7 / CentOS7
yum -y install rpcbind nfs-utils nfs4-acl-tools


Edit /etc/idmapd.conf to set the Domain – all the servers and clients must be on the same domain:

/etc/idmapd.conf
[General]
Domain = example.com


Edit /etc/sysconfig/nfs to set the RPCNFSDCOUNT higher than the default 8 threads:

# egrep "(PORT|COUNT)" /etc/sysconfig/nfs
RPCNFSDCOUNT=64


Unlike NFSv3, NFSv4 has a concept of a "root file system" under which all the actual desired directories are to be exposed; there are many ways of doing this (such as using bind mounts) which may or may not work for the given situation. These are defined in /etc/exports just like NFSv3 but with special options; the textbook method for setting up the parent/child relationship is to first make an empty directory to be used as fsid=0 (root/parent) - /exports is the commonly used name:

mkdir /exports

Now bind mount the desired data directories into it - for example, the real directory /data will be bind-mounted to /exports/data like so:

touch /data/test-file
echo '/data  /exports/data  none  bind  0 0' >> /etc/fstab
mkdir -p /exports/data
mount /exports/data
ls -l /exports/data/test-file

Now we build the /etc/exports listing this special parent first with fsid=0 and crossmnt in the options, then the children using their bind-mounted home:

/etc/exports
/exports      192.168.5.0/24(ro,no_subtree_check,fsid=0,crossmnt)
/exports/data 192.168.5.0/24(rw,no_subtree_check,no_root_squash)


Start the required services:

# RHEL5 / CentOS5
service portmap start; chkconfig portmap on
service rpcidmapd start; chkconfig rpcidmapd on
service nfs start; chkconfig nfs on

# RHEL6 / CentOS6
service rpcbind start; chkconfig rpcbind on
service rpcidmapd start; chkconfig rpcidmapd on
service nfs start; chkconfig nfs on

# RHEL7 / CentOS7
systemctl start rpcbind nfs-idmap nfs-server
systemctl enable rpcbind nfs-idmap nfs-server


Finally, check the local exports with showmount:

# showmount -e
Export list for nfs-server.local:
/exports      192.168.5.0/24
/exports/data 192.168.5.0/24


Debian / Ubuntu

Install the required software packages:

apt-get update
apt-get install rpcbind nfs-common nfs4-acl-tools nfs-kernel-server

Stop the auto-started services so we can configure them:

# Debian 7
service nfs-kernel-server stop
service nfs-common stop
service rpcbind stop
modprobe -r nfsd nfs lockd

# Ubuntu 14
service nfs-kernel-server stop
service statd stop
service idmapd stop
service rpcbind stop
modprobe -r nfsd nfs lockd


Edit /etc/idmapd.conf to set the Domain – all the servers and clients must be on the same domain:

/etc/idmapd.conf
[General]
Domain = example.com


Edit /etc/default/nfs-common to indicate that idmapd is required and statd is not, along with rpc.gssd:

## Debian 7
# egrep -v "^(#|$)" /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_IDMAPD=yes
NEED_GSSD=no

## Ubuntu 14
# egrep -v "^(#|$)" /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_GSSD=no


Edit /etc/default/nfs-kernel-server to set the RPCNFSDCOUNT higher than the default 8 threads:

# egrep -v "^(#|$)" /etc/default/nfs-kernel-server 
RPCNFSDCOUNT=64
RPCNFSDPRIORITY=0
RPCMOUNTDOPTS=--manage-gids
NEED_SVCGSSD=no
RPCSVCGSSDOPTS=


Unlike NFSv3, NFSv4 has a concept of a "root file system" under which all the actual desired directories are to be exposed; there are many ways of doing this (such as using bind mounts) which may or may not work for the given situation. These are defined in /etc/exports just like NFSv3 but with special options; the textbook method for setting up the parent/child relationship is to first make an empty directory to be used as fsid=0 (root/parent) - /exports is the commonly used name:

mkdir /exports

Now bind mount the desired data directories into it - for example, the real directory /data will be bind-mounted to /exports/data like so:

touch /data/test-file
echo '/data  /exports/data  none  bind  0 0' >> /etc/fstab
mkdir -p /exports/data
mount /exports/data
ls -l /exports/data/test-file

Now we build the /etc/exports listing this special parent first with fsid=0 and crossmnt in the options, then the children using their bind-mounted home:

/etc/exports
/exports      192.168.5.0/24(ro,no_subtree_check,fsid=0,crossmnt)
/exports/data 192.168.5.0/24(rw,no_subtree_check,no_root_squash)


Start the required services and enable at boot - this is normally automatic on Debian/Ubuntu, however to be 100% safe run the commands to verify it's configured.

# Debian 7
service rpcbind start; insserv rpcbind
service nfs-common start; insserv nfs-common
service nfs-kernel-server start; insserv nfs-kernel-server

# Ubuntu 14 - Upstart controlled rpcbind/statd
service rpcbind start
service idmapd start
service nfs-kernel-server start; update-rc.d nfs-kernel-server enable


Finally, check the local exports with showmount:

# showmount -e
Export list for nfs-server.local:
/exports      192.168.5.0/24
/exports/data 192.168.5.0/24


NFSv4 Client

If the client is supporting both NFSv3 and NFSv4, be sure to combine the setup steps above with these below to set static ports. Kerberos support is explicitly not configured within these instructions; traditional sec=sys (security = system) mode is being used.

RHEL / CentOS

Install the required software packages:

# RHEL5 / CentOS5
yum -y install portmap nfs-utils nfs4-acl-tools

# RHEL6 / CentOS6 / RHEL7 / CentOS7
yum -y install rpcbind nfs-utils nfs4-acl-tools


Some releases of the nfs-utils package may have buggy behaviour and try to load gssd incorrectly; blacklist the rpcsec_gss_krb5 module as a workaround if required. The problem usually manifests as a ~15 second delay between when the mount command is issued and when it completes.

modprobe -r rpcsec_gss_krb5
echo "blacklist rpcsec_gss_krb5" > /etc/modprobe.d/blacklist-nfs-gss-krb5.conf

See this bug and this patch for further details.


Set a static callback port for NFSv4.0; the server will initiate and use this port to communicate with the client. The nfsv4.ko kernel module is loaded when the share is mounted, so there should be no need to unload the module first.

echo 'options nfs callback_tcpport=4005' > /etc/modprobe.d/nfsv4_callback_port.conf

The above is no longer necessary with NFSv4.1, as the client initiates the outgoing channel for callbacks instead of the server initiating the connection.
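
After the nfs module loads (for example, on the first NFSv4.0 mount), on most kernels the value can be read back via sysfs to confirm the option was picked up; the port 4005 simply follows the example above:

cat /sys/module/nfs/parameters/callback_tcpport

Remember that the client-side firewall must permit inbound TCP connections to this callback port from the server, in the same manner as the iptables rules shown earlier.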


Next, edit /etc/idmapd.conf to set the Domain – all the servers and clients must be on the same domain, so set this to what the server has configured:

/etc/idmapd.conf
[General]
Domain = example.com


Start the required services:

# RHEL5 / CentOS5
service portmap start; chkconfig portmap on
service rpcidmapd start; chkconfig rpcidmapd on

# RHEL6 / CentOS6
service rpcbind start; chkconfig rpcbind on
service rpcidmapd start; chkconfig rpcidmapd on

# RHEL7 / CentOS7
systemctl start rpcbind nfs-idmap nfs-client.target
systemctl enable rpcbind nfs-idmap nfs-client.target


Make the destination directory for the mount, and test a simple mount with no advanced options:

# showmount -e 192.168.5.1
# mkdir /data
# mount -t nfs4 192.168.5.1:/data /data
# df -h /data
# umount /data


Now that it's confirmed working, add it to /etc/fstab as a standard mount.

/etc/fstab
192.168.5.1:/data  /data  nfs4  sec=sys,noatime  0  0


Test the mount and check the desired options were applied.

On RHEL5 a warning about rpc.gssd not running may occur; since we are not using Kerberos this can be ignored.

# mount /data
# touch /data/test-file
# df -h /data
# grep /data /proc/mounts


With NFSv4 and the use of idmapd, additional testing should be performed to ensure the user mapping behaves as expected. Make a user account on the server and client with the same UID, then test that setting ownership on one side is mapped to the same user on the other:

# NFSv4 server
useradd -u 5555 test

# NFSv4 client
useradd -u 5555 test

# NFSv4 server
chown test /exports/data/test-file
ls -l /exports/data/test-file

# NFSv4 client - should show 'test' owns the file
ls -l /data/test-file


Additionally, test that the nfs4_setfacl and nfs4_getfacl commands perform as expected (the format is not the same as setfacl/getfacl); see the nfs4_acl(5) man page for details. Note that the principal (user) listed is in the format user@nfs.domain - the same domain that was used in /etc/idmapd.conf, which may not necessarily be the same as the hostname domain.

# Add user 'test' to have read
nfs4_setfacl -a A::test@example.com:r /data/test-file

# Look for expected result - some distros list user@domain, some the UID
nfs4_getfacl /data/test-file

 A::OWNER@:rwatTcCy
 A::test@example.com:rtcy  (or: A::5555:rtcy)
 A::GROUP@:rtcy
 A::EVERYONE@:rtcy

# From the *server*, try a standard 'getfacl' and it should also show up
getfacl /exports/data/test-file 

 # file: exports/data/test-file
 # owner: test
 # group: root
 user::rw-
 user:test:r--
 group::r--
 mask::r--
 other::r--


Debian / Ubuntu

Install the required software packages:

apt-get update
apt-get install rpcbind nfs-common nfs4-acl-tools

Stop the auto-started services so we can configure them:

# Debian 7
service nfs-common stop
service rpcbind stop
modprobe -r nfsd nfs lockd

# Ubuntu 14
service statd stop
service idmapd stop
service rpcbind stop
modprobe -r nfsd nfs lockd


Some releases of the nfs-common package may have buggy behaviour and try to load gssd incorrectly; blacklist the rpcsec_gss_krb5 module as a workaround if this problem is encountered. The problem usually manifests as a ~15 second delay between when the mount command is issued and when it completes, with "RPC: AUTH_GSS upcall timed out" in the dmesg output. If these symptoms are encountered, use this method:

modprobe -r rpcsec_gss_krb5
echo "blacklist rpcsec_gss_krb5" > /etc/modprobe.d/blacklist-nfs-gss-krb5.conf

See this bug and this patch for further details.


Set a static callback port for NFSv4.0; the server will initiate and use this port to communicate with the client. The nfsv4.ko kernel module is loaded when the share is mounted, so there should be no need to unload the module first.

echo 'options nfs callback_tcpport=4005' > /etc/modprobe.d/nfsv4_callback_port.conf

The above is no longer necessary with NFSv4.1, as the client initiates the outgoing channel for callbacks instead of the server initiating the connection.


Next, edit /etc/idmapd.conf to set the Domain – all the servers and clients must be on the same domain, so set this to what the server has configured:

/etc/idmapd.conf
[General]
Domain = example.com


Edit /etc/default/nfs-common to indicate that idmapd is required and statd is not, along with rpc.gssd:

## Debian 7
# egrep -v "^(#|$)" /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_IDMAPD=yes
NEED_GSSD=no

## Ubuntu 14
# egrep -v "^(#|$)" /etc/default/nfs-common
NEED_STATD=no
STATDOPTS=
NEED_GSSD=no


Start the required services:

# Debian 7
service rpcbind start; insserv rpcbind
service nfs-common start; insserv nfs-common

# Ubuntu 14 - Upstart controlled rpcbind/statd
service rpcbind start
service idmapd start


Make the destination directory for the mount, and test a simple mount with no advanced options:

# showmount -e 192.168.5.1
# mkdir /data
# mount -t nfs4 192.168.5.1:/data /data
# df -h /data
# umount /data


Now that it's confirmed working, add it to /etc/fstab as a standard mount.

/etc/fstab
192.168.5.1:/data  /data  nfs4  sec=sys,noatime  0  0


Test the mount and check the desired options were applied.

# mount /data
# touch /data/test-file
# df -h /data
# grep /data /proc/mounts


With NFSv4 and the use of idmapd, additional testing should be performed to ensure the user mapping behaves as expected. Make a user account on the server and client with the same UID, then test that setting ownership on one side is mapped to the same user on the other:

# NFSv4 server
useradd -u 5555 test

# NFSv4 client
useradd -u 5555 test

# NFSv4 server
chown test /exports/data/test-file
ls -l /exports/data/test-file

# NFSv4 client - should show 'test' owns the file
ls -l /data/test-file


Additionally, test that the nfs4_setfacl and nfs4_getfacl commands perform as expected (the format is not the same as setfacl/getfacl); see the nfs4_acl(5) man page for details. Note that the principal (user) listed is in the format user@nfs.domain - the same domain that was used in /etc/idmapd.conf, which may not necessarily be the same as the hostname domain.

# Add user 'test' to have read
nfs4_setfacl -a A::test@example.com:r /data/test-file

# Look for expected result - some distros list user@domain, some the UID
nfs4_getfacl /data/test-file

 A::OWNER@:rwatTcCy
 A::test@example.com:rtcy  (or: A::5555:rtcy)
 A::GROUP@:rtcy
 A::EVERYONE@:rtcy

# From the *server*, try a standard 'getfacl' and it should also show up
getfacl /exports/data/test-file 

 # file: exports/data/test-file
 # owner: test
 # group: root
 user::rw-
 user:test:r--
 group::r--
 mask::r--
 other::r--


Additional Reading