NFS 2 client filesystem (QNX Neutrino)

Note: You must be root to start this manager.


fs-nfs2 [global_options] [mountpoint_options] [mountpoint_spec] [[mountpoint_options] mountpoint_spec ...]

Runs on:

QNX Neutrino


The global options include the following:

-B size
Set the buffer size to size bytes. The default is set by the first server, and is usually 8K.
-b num
Use num buffers (default: 200).
Run in the foreground.
Display usage information.
-i nodes
Set the number of inodes to nodes. The default is to increase the number dynamically.
-o opt
Specify additional options, separated by commas; opt can be one of the following:
  • port=n1:n2 — the port number range that the client uses. The default is to use a privileged port.
  • soft=n — use a soft mount. The mount attempt terminates after failing n times.
  • disable_fdinfo — don't support iofdinfo().
-T num
Set the number of threads. The default is 5.
-w wopt[,wopt...]
Options to control write operations, separated by commas. The following options are available:
  • delay=sec — the time, in seconds, after which buffered data is flushed to the server (default: 2 seconds).
  • number=num — the number of buffers (default: 10). Each buffer manages the content of one file, so the default lets 10 files be buffered simultaneously.
  • size=num — the size of each buffer, in units of 1 KB (default: 8 KB).
  • sync=hard — turn off write caching. Data is written synchronously on the server. By default, data is buffered locally and written onto the server later.
-v[v...]
Be verbose; add more v characters for more verbosity. To capture the log messages, syslogd must be running.
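As a sketch, the global options above might be combined on a single command line. The server name, export path, and option values below are illustrative only; the command is printed rather than executed:

```shell
#!/bin/sh
# Sketch only: one possible combination of the global options described
# above. "server1", "/export", "/mnt/nfs", and the option values are
# made up for illustration; the command line is printed, not run.
cmd="fs-nfs2 -B 16384 -T 8 -o soft=3 -w delay=5,number=20,size=16 server1:/export /mnt/nfs"
echo "$cmd"
```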

The mountpoint options include the following:

Set the NO EXEC flag for the mounted filesystem.
Set the READ ONLY flag for the mounted filesystem.
Don't cache symlinks.
Use a soft mount. The mount attempt terminates after failing four times.
-t
Use TCP instead of UDP. If this fails, fs-nfs2 falls back to UDP.
Use UDP (which is the default). If this fails, fs-nfs2 fails.
-Z n
The value of n indicates how to attach to the path:
  • B or b — attach before other managers.
  • A or a — attach after other managers.
  • O or o — make the attachment opaque; don't resolve to mountpoints with shorter pathname matches. The pathname resolver tries to find the longest match against all pathnames attached.

The default is none of these. For more information, see "Pathname Management" in the Process Manager chapter of the System Architecture guide.
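The three attach orders can be illustrated as follows; the host and paths are placeholders, and the commands are only printed, not executed:

```shell
#!/bin/sh
# Sketch: the three attach orders accepted by -Z. "nfs_host" and the
# paths are placeholder names; the commands are printed, not executed.
zcmd() {
    echo "fs-nfs2 -Z $1 nfs_host:/export /mnt/nfs"
}
zcmd b    # attach before other resource managers
zcmd a    # attach after other resource managers
zcmd o    # opaque attachment: don't fall through to shorter matches
```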

A mountpoint_spec takes the form:

remote_host:/remote_export local_mountpoint

The components are as follows:

remote_host — the name of the NFS server.
remote_export — the directory to be exported from the server.
local_mountpoint — the name under which the exported directory is to be mounted.
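The host:path split in a spec can be illustrated with a small POSIX-shell sketch; the spec value here is a made-up example:

```shell
#!/bin/sh
# Sketch: splitting a remote_host:/remote_export spec into its two
# components. The spec value is illustrative only.
spec="server_node:/qnx_bin"
remote_host=${spec%%:*}     # everything before the first ':'
remote_export=${spec#*:}    # everything after the first ':'
echo "host=$remote_host export=$remote_export"
```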


The fs-nfs2 filesystem manager is an NFS 2 client operating over TCP/IP. To use it, you must have an NFS server.

This filesystem manager requires a TCP/IP transport layer, such as the one provided by io-pkt*. It also needs socket.so and libc.so.

By default, this utility doesn't set an upper limit on the number of inodes.

You can also create mountpoints with the mount command by specifying nfs for the type. You must start fs-nfs2 before creating mountpoints in this manner. If you start fs-nfs2 without any arguments, it runs in the background so you can use mount. The -o options that you can use with mount include the following:

Use TCP instead of UDP. If this fails, mount uses UDP.
Use UDP (which is the default). If this fails, mount fails.
Don't cache symlinks.
Use fs-nfs2 instead of fs-nfs3.
Use a soft mount (i.e., break the connection if unable to reach the server).
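The mount-based workflow described above might look like the following sketch. The host and paths are placeholders, and the commands are printed rather than executed:

```shell
#!/bin/sh
# Sketch: creating an NFS mountpoint with mount after fs-nfs2 is running.
# Started with no arguments, fs-nfs2 puts itself in the background.
# "server_node" and the paths are placeholders; commands are printed,
# not executed.
start_cmd="fs-nfs2"
mount_cmd="mount -t nfs server_node:/qnx_bin /mnt/qnx_bin"
echo "$start_cmd"
echo "$mount_cmd"
```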

If you try to access a link that has a trailing slash, fs-nfs2 immediately returns EINVAL (invalid argument) instead of resolving the link and reporting errors such as EPERM (permission denied) or ENOTDIR (not a directory). It does this to reduce network traffic, because a path that ends in a slash must be a directory, and so the access will ultimately fail anyway.
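The short-circuit described above is a purely lexical test; the trailing-slash part of it can be sketched in POSIX shell (the paths are placeholders):

```shell
#!/bin/sh
# Sketch of the lexical part of the check described above: a path that
# ends in a slash can only name a directory, so fs-nfs2 can reject a
# link there without touching the network. Paths are placeholders.
has_trailing_slash() {
    case "$1" in
        */) return 0 ;;   # would get EINVAL immediately
        *)  return 1 ;;   # would be resolved normally
    esac
}
has_trailing_slash "/mnt/nfs/somelink/" && echo "EINVAL"
has_trailing_slash "/mnt/nfs/somelink" || echo "resolve normally"
```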


Mount the qnx_bin export as /bin from an NFS server named server_node:

fs-nfs2 server_node:/qnx_bin /bin &

Mount /nfs1 using TCP, and /nfs2 using UDP:

fs-nfs2 -t host1:/ /nfs1 host2:/ /nfs2

Mount both using TCP:

fs-nfs2 -t host1:/ /nfs1 -t host2:/ /nfs2


If possible, you should use fs-nfs3 instead of fs-nfs2.