NFS v2 & v3 and MOUNT v1 & v3 protocol server

Note: You must be root to start this daemon.


nfsd [-DFPt] [-c file] [-f n] [-H n] [-h n] [-o option] [-p n]
     [-s n] [-x n] &

Runs on:

QNX Neutrino

Options:
-c file
Use file as the exports file. The default is /etc/exports.
-D
Operate in debugging mode.
-F
Truncate the subdirs and mntdtab files, and then exit.
-f n
Set the size of the open file cache (the default is 16).

The open file cache is used to cache open files and directories (with a 5-second idle timeout). If you know nfsd services only one client that only reads/writes to a single file, reducing this cache may save memory. If you know nfsd services many clients that read/write many files, increasing this cache could improve performance for read/write operations.

Note: Keep this cache a reasonable size, as file descriptors (open files) are a limited resource — by default, QNX Neutrino sets a maximum of 1000 open files per process. Besides this cache, nfsd needs file descriptors for sockets (servicing TCP consumes more than just UDP) and internal readdir() operations.
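As a rough illustration of the note above, the descriptor budget can be sketched as simple arithmetic (the component counts below are illustrative assumptions; only the 1000-descriptor per-process default and the -f default of 16 come from the text):

```shell
# Hedged sketch: a rough file-descriptor budget for nfsd.
# The SOCKETS and READDIR counts are invented for illustration.
FCACHE=16        # open file cache size (-f default)
SOCKETS=4        # service sockets (TCP service needs more than UDP alone)
READDIR=8        # internal readdir() operations
TOTAL=$((FCACHE + SOCKETS + READDIR))
echo "descriptors in use: $TOTAL of 1000"   # prints "descriptors in use: 28 of 1000"
```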

-H n
Specify the size of the file handle cache hash (the default is 997).
-h n
Specify the size of the file handle cache (the default is 200).

The file handle cache is a straight memory/performance trade-off; however, it doesn't significantly affect read/write performance. Instead, it speeds up ls-type operations (very useful for compiling and makefiles). To get a rough idea of how large this cache should optimally be, use the output of:

find mnt1 ... mntN | wc -l
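For example, the count can be captured and used as a starting value for -h (the throwaway directory tree below is a stand-in for your real exported mountpoints):

```shell
# Hedged sketch: derive a starting -h value by counting the entries
# under the exported trees. A temporary tree stands in for mnt1 ... mntN.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/a" "$DEMO/b"
touch "$DEMO/a/f1" "$DEMO/b/f2"
COUNT=$(find "$DEMO" | wc -l)    # root dir + 2 subdirs + 2 files = 5
echo "suggested -h value: $COUNT"
rm -rf "$DEMO"
```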
-o option
Specify an additional option, where option is one of the following:
-P
Parse the exports file, to check for errors, and then exit.
-p n
Run nfsd on port n, and don't register with portmap. By default, the port is 2049, and nfsd registers with portmap.
-s n
Flush the cache every n idle seconds (the default is 5).
-t
Service TCP transport.
-x n
Specify the size of the XID cache (default is 16).

The XID cache isn't used for performance, but rather to ensure nonidempotent operations are responded to correctly.

Consider what happens when a client issues a remove request. Normally, the server receives the request, removes the file, and sends back a successful response. Suppose that, for some reason, the server doesn't respond fast enough, and the client retransmits the request. If the server naively tries to remove the file again, the operation fails, and the client receives an error for a request that actually succeeded.

Instead, each request is assigned a transaction identifier, known as an xid, which remains constant for retransmissions. If the client retransmits the request, the server matches it with the previous request and just replies with the previous status. Generally, the busier the network and server are, the more requests are retransmitted by the client(s), and the larger the XID cache should be.
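A minimal sketch of that duplicate-detection idea, in bash (the cache, function, and status strings are all invented for illustration; this isn't nfsd's implementation):

```shell
#!/bin/bash
# Hedged sketch: replaying a cached reply for a retransmitted xid so a
# nonidempotent remove isn't executed twice. All names are illustrative.
declare -A XID_CACHE

handle_remove() {    # $1 = xid, $2 = file; reply left in STATUS
  if [ -n "${XID_CACHE[$1]}" ]; then
    STATUS=${XID_CACHE[$1]}          # retransmission: replay previous status
    return
  fi
  if rm -- "$2" 2>/dev/null; then
    XID_CACHE[$1]=OK
  else
    XID_CACHE[$1]=NOENT
  fi
  STATUS=${XID_CACHE[$1]}
}

F=$(mktemp)
handle_remove 42 "$F"; R1=$STATUS    # first request: file removed, STATUS=OK
handle_remove 42 "$F"; R2=$STATUS    # retransmission: replays OK, not NOENT
echo "$R1 $R2"                       # prints "OK OK"
```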


The nfsd daemon services both NFS mount requests and NFS requests, as specified by the exports file. When it starts, nfsd reads the /etc/exports.hostname file (if this file doesn't exist, nfsd reads /etc/exports instead) to determine which mountpoints to service. Changes made to this file don't take effect until you either restart nfsd or you send nfsd a SIGHUP signal:

slay -s SIGHUP nfsd

There's no direct check for root on the mount; nfsd checks only that requests come in on a privileged port, which implies root access.

The nfsd daemon supports a maximum of 15 nested directory levels.

Note: The nfsd command doesn't tolerate any parsing errors while reading the exports file. If an error is detected, nfsd terminates. To keep downtime to a minimum if you modify the exports file, we recommend that you either:
  • Start another nfsd. If the exports file parses cleanly, the second nfsd gets past parsing, fails to bind the port (the running nfsd already owns it), and exits; the bind failure therefore indicates that the exports file was parsed correctly.


  • Start a second instance of nfsd, specifying the -P option.
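Putting the -P approach together with the SIGHUP reload shown earlier, a safe reload could look like this (a sketch, not a tested recipe; run it as root, and adjust the path if you use an /etc/exports.hostname file):

```shell
# Sketch: parse-check the exports file first; reload the live daemon
# only if the check exits cleanly.
nfsd -P -c /etc/exports && slay -s SIGHUP nfsd
```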

Security Issues

NFS is an inherently insecure protocol. Although nfsd checks each request's origin against the restrictions specified in the exports file, this helps only in an “honest” network; it's not difficult to spoof NFS requests.

Configuring Caches

Fine-tuning the nfsd caches may reduce memory usage or improve performance, but these two goals are usually mutually exclusive. Before modifying the default behavior of nfsd, it's important to know what its clients will demand from it. Also note that these caches are shared across all mountpoints.

See also:

/etc/exports, fs-nfs2, fs-nfs3, io-pkt*, mount, syslogd, umount

NFS filesystem in the Working With Filesystems chapter of the User's Guide