QNX® Momentics® 6.3.0 SP2 PE and SE Maintenance Patch for QNX Momentics 6.3.0 SP2 Network Protocol Components (Patch ID 234) Release Notes

QNX® Momentics® Development Suite 6.3.0 SP2 PE and SE

Date of this edition: October 03, 2006

Target OS: QNX® Neutrino® 6.3.0 SP2, QNX Neutrino Core OS 6.3.2

Host OS: Microsoft Windows XP SP1 and SP2, 2000 SP4, or NT SP6a; Sun Solaris 7, 8, 9, or 10; QNX® Neutrino® 6.3.0 SP2, QNX Neutrino Core OS 6.3.2; Linux Red Hat 8, 9, or Enterprise WS 3 or 4


Note: This patch supersedes the Maintenance Patch for the QNX Momentics 6.3.0 SP1 Full TCP/IP Stack (Patch ID 97); all enhancements included in Patch ID 97 are included in this patch.

For the most up-to-date version of these notes, go to our website (www.qnx.com), log into your myQNX account, and then go to the Download Center.


Throughout this document, you may see reference numbers associated with particular issues, changes, etc. When corresponding with our Technical Support staff about a given issue, please quote the relevant reference number. You might also find the reference numbers useful for tracking issues as they become fixed.

What's in this patch?

Binaries

This patch corrects a number of reported issues affecting QNX Neutrino network-protocol and network-filesystem components, including the full TCP/IP stack, Qnet (npm-qnet-l4_lite.so), fs-cifs, fs-nfs2, fs-nfs3, and nfsd. This patch also includes all fixes from the Maintenance Patch for the QNX Momentics 6.3.0 SP1 Full TCP/IP Stack (Patch ID 97).

Several of these fixes correct problems that can affect your system's network service availability or fix potential data corruption issues, so we recommend that you update your system to use the binary versions included in this patch. See reference numbers 21549, 21962, 25036, 38470, 38674, 28765, 38575, 24275, 38567, 23581, and 23582.

Installed files

These files are installed under $QNX_TARGET/, in the subdirectories for each supported target platform:

  • ARMBE
    • armbe/lib/dll/npm-tcpip-v4.so
    • armbe/usr/bin/netstat
    • armbe/sbin/route
    • armbe/usr/bin/arp
    • armbe/sbin/sysctl
    • armbe/lib/dll/npm-qnet-l4_lite.so
    • armbe/usr/sbin/nfsd
    • armbe/usr/sbin/fs-nfs2
    • armbe/usr/sbin/fs-nfs3
    • armbe/usr/sbin/fs-cifs
  • ARMLE
    • armle/lib/dll/npm-tcpip-v4.so
    • armle/usr/bin/netstat
    • armle/sbin/route
    • armle/usr/bin/arp
    • armle/sbin/sysctl
    • armle/lib/dll/npm-qnet-l4_lite.so
    • armle/usr/sbin/nfsd
    • armle/usr/sbin/fs-nfs2
    • armle/usr/sbin/fs-nfs3
    • armle/usr/sbin/fs-cifs
  • MIPSBE
    • mipsbe/lib/dll/npm-tcpip-v4.so
    • mipsbe/usr/bin/netstat
    • mipsbe/sbin/route
    • mipsbe/usr/bin/arp
    • mipsbe/sbin/sysctl
    • mipsbe/lib/dll/npm-qnet-l4_lite.so
    • mipsbe/usr/sbin/nfsd
    • mipsbe/usr/sbin/fs-nfs2
    • mipsbe/usr/sbin/fs-nfs3
    • mipsbe/usr/sbin/fs-cifs
  • MIPSLE
    • mipsle/lib/dll/npm-tcpip-v4.so
    • mipsle/usr/bin/netstat
    • mipsle/sbin/route
    • mipsle/usr/bin/arp
    • mipsle/sbin/sysctl
    • mipsle/lib/dll/npm-qnet-l4_lite.so
    • mipsle/usr/sbin/nfsd
    • mipsle/usr/sbin/fs-nfs2
    • mipsle/usr/sbin/fs-nfs3
    • mipsle/usr/sbin/fs-cifs
  • PPCBE
    • ppcbe/lib/dll/npm-tcpip-v4.so
    • ppcbe/usr/bin/netstat
    • ppcbe/sbin/route
    • ppcbe/usr/bin/arp
    • ppcbe/sbin/sysctl
    • ppcbe/lib/dll/npm-qnet-l4_lite.so
    • ppcbe/usr/sbin/nfsd
    • ppcbe/usr/sbin/fs-nfs2
    • ppcbe/usr/sbin/fs-nfs3
    • ppcbe/usr/sbin/fs-cifs
  • SHLE
    • shle/lib/dll/npm-tcpip-v4.so
    • shle/usr/bin/netstat
    • shle/sbin/route
    • shle/usr/bin/arp
    • shle/sbin/sysctl
    • shle/lib/dll/npm-qnet-l4_lite.so
    • shle/usr/sbin/nfsd
    • shle/usr/sbin/fs-nfs2
    • shle/usr/sbin/fs-nfs3
    • shle/usr/sbin/fs-cifs
  • x86
    • x86/lib/dll/npm-tcpip-v4.so
    • x86/usr/bin/netstat
    • x86/usr/sbin/route
    • x86/usr/bin/arp
    • x86/sbin/sysctl
    • x86/lib/dll/npm-qnet-l4_lite.so
    • x86/usr/sbin/nfsd
    • x86/usr/sbin/fs-nfs2
    • x86/usr/sbin/fs-nfs3
    • x86/usr/sbin/fs-cifs

The following files are installed under $QNX_TARGET/usr/include:

  • netinet/icmp_var.h
  • netinet/in.h
  • sys/netmsg.h

Fixed issues

This patch addresses the following issues:

npm-tcpip-v4.so
  • The TCP/IP stack could formerly fault if you tried to get file-descriptor information (e.g. by executing sin fd) on a system where a process had called shutdown() for a TCP/IP socket descriptor, but the socket hadn't yet been closed. This has been fixed. You also now get more diagnostic information when you use sin fd to view information on socket file descriptors. (Ref# 21549)
  • The TCP/IP stack obtains a timer, which starts at time 0, from the process manager. If the TCP/IP stack and a TCP/IP application that tries to connect to a remote host start executing too soon, the TCP/IP stack may apply a time of 0 seconds to ARP cache entry structures.

    If this occurs, you may end up with a permanent ARP entry (i.e. one that never times out). You can also end up with permanent, incomplete ARP entries that never time out and that the TCP/IP stack doesn't attempt to resolve. If this happens, your host won't be able to communicate with one or (possibly) more remote hosts (i.e. the ones the TCP/IP application in the OS image is trying to reach).

    You can check for permanent ARP entries by running the arp -an command and examining the output. The only permanent entries listed should be for the IP addresses assigned to your host's interfaces; there shouldn't be any permanent, incomplete entries. If you find a permanent entry that isn't for the IP address of an interface on your host, and you didn't explicitly create a permanent entry, then you could be encountering this problem. A workaround for your OS image was to delay the start of the TCP/IP stack or the first TCP/IP application by at least one second, by using the sleep command (e.g. sleep 1) or some other delay mechanism. This has been fixed, so this workaround is no longer necessary. (Ref# 21395)

  • If you called connect() on an unlinked or nonexistent AF_LOCAL socket, errno used to be incorrectly set to ECONNREFUSED instead of ENOENT. This has been fixed (see the sketch after this list). (Ref# 21664)
  • If a program called bind() for an AF_LOCAL socket, and the path namespace entry was created, the TCP/IP stack used to leak a small amount of memory, even if the path was later unlinked. This has been fixed. (Ref# 21639)
  • A user TCP application that was blocked on read() formerly could unblock and return 0 when the sin utility was run. It appeared as if the remote TCP application had closed its end of the socket, when it hadn't. A user-visible symptom could be TCP server applications that terminated, closed a TCP session for no reason, or reported that the client had ended the session when it hadn't. This has been fixed. (Ref# 21962)
  • The socket() function call formerly could set errno incorrectly if the system was out of memory for AF_LOCAL sockets. This has been fixed. (Ref# 22917)
  • When the fastforward path in the TCP/IP stack was used (see the fastforward option to npm-tcpip-v4.so or npm-tcpip-v6.so), the TCP/IP stack didn't deal with the next-hop gateway properly if the gateway wasn't responding to ARP requests. The TCP/IP stack now correctly marks the route as being down and sends an ICMP-unreachable packet back to the source. (Ref# 23864)
  • If a packet was to be forwarded, and no route was specified on the gateway, the TCP/IP stack returned ICMP_UNREACH with an ICMP_UNREACH_HOST code. The TCP/IP stack has been changed to return an ICMP_UNREACH_NET code. (Ref# 23900)
  • When the TCP/IP stack ran out of threads, it wasn't obvious to the user that this had occurred, beyond the slow response of the TCP/IP stack. (See the threads option for npm-tcpip.so.) Now if this occurs, the situation is logged to sloginfo. (Ref# 26556)
  • When multiple threads at the same priority sent data on a stream socket, the data would sometimes be intermixed in a nonintuitive way when received at the peer. We've corrected this. (Ref# 24873)
  • The TCP/IP stack no longer causes io-net to run READY (continuously consuming CPU) when you use the SO_BINDTODEVICE socket option. This could have occurred if a socket was bound to one network interface, but a packet targeting the same IP port arrived on another network interface that had no socket bound to it. Applications that could have been affected by this behavior include dhcp.client, dhcpd, dhcprelay, and bootpd. (Ref# 25036)
  • When Unix-domain socket pathnames were unlinked and new pathnames were bound, the stack formerly might have maintained a reference to freed memory, which could have caused the bind() function to fail for Unix sockets, or corrupted the TCP/IP stack's memory. We've corrected this. (Ref# 38470)
  • When the number of interfaces was greater than 8, the stack could incorrectly assign the same interface index to two interfaces. This has been fixed. (Ref# 25359)
  • We've changed the source to be compatible with gcc 3.4.4. (Ref# 26682)
  • The TCP/IP stack no longer faults when you use the ioctl() SIOCGIFALIAS to get information on an address that doesn't exist. (Ref# 25166)
  • The TCP/IP stack formerly could sometimes generate a double reply in response to sin fd queries. This didn't cause any issues, but made kernel traces confusing. (Ref# 25856)
  • The stack formerly could corrupt an internal packet cache and then fault when more than three link-layer types were in use (e.g. Ethernet, PPP, and others). This has been fixed. (Ref# 28896)
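
Here's a minimal sketch illustrating the corrected AF_LOCAL connect() behavior (Ref# 21664). The socket pathname is hypothetical:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main( void )
    {
        struct sockaddr_un addr;
        int fd = socket( AF_LOCAL, SOCK_STREAM, 0 );

        if ( fd == -1 ) {
            perror( "socket" );
            return 1;
        }

        memset( &addr, 0, sizeof(addr) );
        addr.sun_family = AF_LOCAL;
        strcpy( addr.sun_path, "/tmp/no_such_socket" );  /* hypothetical path */

        /* With this patch, connecting to a nonexistent AF_LOCAL socket
           sets errno to ENOENT; it was formerly ECONNREFUSED. */
        if ( connect( fd, (struct sockaddr *)&addr, sizeof(addr) ) == -1 ) {
            printf( "connect failed: %s\n", strerror( errno ) );
        }

        close( fd );
        return 0;
    }
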
route, arp, netstat, npm-tcpip-v4.so
  • The TCP/IP stack used a monotonic clock internally. Interfaces that used an absolute timeout value to specify or query a timeout had been converted to use a relative timeout value, as were the utilities that used them. This introduced portability and maintenance issues, because the timeout values were applied incorrectly if an application wasn't modified accordingly. These interfaces once again use an absolute timeout. The affected interface is:
    • The rmx_expire member of the Routing Socket rt_metrics structure — the lifetime for the route

    The arp, route, and netstat utilities had been modified to supply a relative timeout rather than an absolute one. We've changed them back, so that they again use an absolute timeout value. (Ref# 22877)

sysctl, npm-tcpip-v4.so, <netinet/icmp_var.h>
The TCP/IP stack always responded to ICMP timestamp requests. You can now turn this feature off with the sysctl utility or the sysctl() function. The object that controls this is net.inet.icmp.tstamprepl; the default is 1 (on). (Ref# 23329)
sysctl, npm-tcpip-v4.so, <netinet/in.h>, <netinet/ip_var.h>
The TCP/IP stack by default uses sequential IP header IDs. You can now enable random IP header IDs by using the sysctl utility or the sysctl() function. The object that controls this is net.inet.ip.random_id; the default is 0 (off). (Ref# 23328)
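
Here's a minimal sketch of changing both of these objects from code with the sysctl() function. The ICMPCTL_TSTAMPREPL and IPCTL_RANDOMID selector names are assumptions; check the updated <netinet/icmp_var.h> and <netinet/in.h> headers for the actual definitions:

    #include <stdio.h>
    #include <sys/param.h>
    #include <sys/sysctl.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/icmp_var.h>

    int main( void )
    {
        int mib[4];
        int value;

        /* Turn off replies to ICMP timestamp requests
           (equivalent to: sysctl -w net.inet.icmp.tstamprepl=0). */
        mib[0] = CTL_NET;
        mib[1] = PF_INET;
        mib[2] = IPPROTO_ICMP;
        mib[3] = ICMPCTL_TSTAMPREPL;  /* assumed selector name */
        value = 0;
        if ( sysctl( mib, 4, NULL, NULL, &value, sizeof(value) ) == -1 )
            perror( "net.inet.icmp.tstamprepl" );

        /* Turn on random IP header IDs
           (equivalent to: sysctl -w net.inet.ip.random_id=1). */
        mib[2] = IPPROTO_IP;
        mib[3] = IPCTL_RANDOMID;      /* assumed selector name */
        value = 1;
        if ( sysctl( mib, 4, NULL, NULL, &value, sizeof(value) ) == -1 )
            perror( "net.inet.ip.random_id" );

        return 0;
    }
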
npm-qnet-l4_lite.so
  • Qnet formerly could potentially leak a small amount of memory during name resolution if a network error occurred. In this scenario, it was also possible for an application to become permanently REPLY-blocked on Qnet. This affected applications calling netmgr_remote_nd(). This has been fixed. (Ref# 38674, 28814, 25949)
  • Qnet formerly could potentially fault or corrupt the memory of the io-net process if your network contained a large number of Qnet nodes, or nodes with long hostnames and domains. This was typically an issue if there were more than approximately 100 node names in the /net directory, but it could also occur with fewer nodes that had very long host and domain names. This has been fixed. (Ref# 28765)
  • We've added a new Qnet IOMSG, _NETMGR_QOS_FLUSH. This message lets an application force a retransmission instead of waiting for Qnet to activate its own timeout. This is useful for periodically detectable hardware failures where the application can take action itself, instead of configuring shorter timeout periods for Qnet, which adds more load to the system. (Ref# 38260)

    Here's an example:

    #include <sys/netmgr.h>
    #include <sys/netmsg.h>

    extern int __netmgr_send( void *smsg1, int ssize1, const void *smsg2,
                              int ssize2, void *rmsg, int rsize );

    int main( void )
    {
        struct _io_msg msg;

        /* Build the message that asks Qnet to force a retransmission. */
        msg.type = _IO_MSG;
        msg.combine_len = sizeof(msg);
        msg.mgrid = _IOMGR_NETMGR;
        msg.subtype = _NETMGR_QOS_FLUSH;

        /* No additional send payload and no reply data are needed. */
        __netmgr_send( &msg, sizeof(msg), 0, 0, 0, 0 );

        return 0;
    }
        
  • Qnet now supports a vtag=tag_number option that causes Qnet to insert a 4-byte VLAN tag into the packet. The tag_number must be greater than zero. If you use this option, Qnet accepts only packets whose tag value matches the given tag_number. If the driver being used doesn't support 1518-byte packets, you must also use the Qnet mtu_en=1496 option. (Ref# 38260)
  • Qnet had a calculation issue when determining whether to retry during a node resolution. It would retry twice the number specified by the res_retries option, over twice the amount of time. The default is 2, which resulted in 4 retries. This has been fixed. (Ref# 25781)
  • We've changed the possible values for the periodic_ticks option from 1, 2, 5, or 10 to any value from 1 to 1000. (Ref# 24893)
  • We've added the max_tx_bufs=num option, which specifies the number of tx buffers that Qnet holds in reserve before allocating more. The default has also changed from 100 to 500. If your application sends large messages, you may want to increase this value for performance. If your application typically sends small messages (most default system traffic is small messages), you may want to decrease this value to save memory. (Ref# 23852)
  • Qnet priority inheritance now works correctly for messages being sent with a priority greater than 63. The priority was formerly capped at 63. (Ref# 23352)
  • Qnet supported a maximum of two network interfaces (the first two located), although the user could override this and specify which two (with the bind=interface option). Qnet now supports more interfaces via the max_num_l4s=num option. The default is two; the maximum is four. (Ref# 22269)
  • Qnet now lets you specify the maximum transmission unit (MTU) of a Qnet packet via the mtu_en=num option. The num argument must be greater than 100. All nodes in the network must use the same value. The default is 1500. (Ref# 22250)
  • Qnet used to log an event for every misaligned packet supplied by a driver, which could have an impact on performance. Qnet now logs this event only periodically. Only certain drivers cause this condition. (Ref# 21621)
  • Qnet used a pulse with a hard-coded priority for the QoS transmission thread and the QoS periodic transmission thread. You can now set the priority of these pulses, using the qos_per_pri=num and qos_tx_pri=num options, where num is the new priority. We recommend that you not change these values; we supply these options for the typically rare case where the priority of the io-net subsystems must be changed. (Ref# 20891)
  • Qnet now emits more sloginfo events in its kernel-interface layer to aid in debugging. (Ref# 20290)
  • Qnet now loads the resolver file at the proper time in its initialization; it was previously loaded too late. (Ref# 38267)
  • When a channel is destroyed, Qnet notifies the client node involved. If that client node is down, Qnet no longer faults while logging an event about this scenario. (Ref# 21336)
  • Qnet has the option of using a CRC to check for packet corruption; by default, it relies on the hardware. When software CRC checks are enabled, Qnet verifies the CRC if it's present, although it also accepts packets that don't contain a CRC value (i.e. packets from other nodes that aren't generating a software CRC). We've added a new enforce_crc option:

    enforce_crc=num

    If you use this option in combination with do_crc, only packets that contain a valid CRC are accepted. This option has an effect only when do_crc is also set to 1. Setting enforce_crc to one causes packets that are received without a valid software-level CRC generated by the remote node (i.e. it's running do_crc=0) to be discarded, because the integrity of the packet contents is unknown and could be suspect. The default is zero, which allows received packets without a generated software-level CRC to be processed. (Ref# 39171)

fs-cifs
  • When you use fs-cifs, pread() no longer causes the file pointer to change. This had the potential to corrupt binaries being launched over the CIFS network protocol when using QNX Neutrino Core OS 6.3.2. (Ref# 38575)
  • fs-cifs no longer faults if an empty password ("") is passed on the command line. (Ref# 27957)
  • The timestamp of created files is no longer incorrect when the server or client (fs-cifs) system has its time zone set to something other than GMT. (Ref# 25619)
  • fs-cifs no longer faults when you use a password longer than 32 characters. (Ref# 24252)
  • If you use fs-cifs to rename a file within the same directory, changing only the case of letters in the filename (e.g. file to FILE), and the server is Windows, the file is no longer deleted. (Ref# 23706)
  • By default, if the CIFS server can't be reached or isn't responding, it can take a long time before an application fails, because TCP timeouts apply. We've added a new -o timeout=num_seconds option that lets you specify the timeout to apply. It's applied to all server-blocking operations (e.g. mount, open(), read(), write(), and so on). Note that you can always use a signal to unblock the application. (Ref# 23115)
  • fs-cifs formerly might not have responded if left idle for several hours. This has been fixed. (Ref# 22226)
  • fs-cifs now supports more modern dialects, improving compatibility with later Windows OS releases. This improves the accuracy of file-time information and speeds up the mounting of Windows shared directories. (Ref# 38233)
  • On PPCBE targets, fs-cifs formerly could return an unexpected errno to an application that was trying to connect to a resource-manager process, if the manager wasn't running and its pathname was therefore resolved via the CIFS mountpoint rather than by the manager process. This could occur if the fs-cifs mountpoint was /, and the application was trying to resolve a pathname under /dev. For example, devc-pty might not have launched properly if the power manager wasn't running, because fs-cifs returned an errno other than EBADF. This has been fixed. (Ref# 25447)
  • If a share had been successfully mounted and then unmounted, subsequent mounts of that same share would succeed, even if the user and password credentials specified were incorrect or missing. This has been fixed. (Ref# 27151)
  • The stat() times of a file accessed over the network using fs-cifs on a PPC target are now correctly decoded. (Ref# 22726)
  • When fs-cifs has a connection to the network server, and the server is rebooted or some kind of temporary network failure occurs, fs-cifs tries to reestablish the connection and resume the operation it was doing. If that operation is a readdir(), fs-cifs no longer loses its place in the directory; it formerly could have returned duplicate entries or missed entries. (Ref# 23729)
  • There was potentially a very small timing window (out of the user's control) where access() could fail with EINVAL on an fs-cifs mountpoint. We've fixed this. (Ref# 20372)
  • A certain pattern of read(), seek(), and ftruncate() calls no longer returns corrupt data when you call read(). (Ref# 24275, 23583)
  • fs-cifs now supports mmap() properly. (Ref# 24255)
  • fs-cifs now implements the O_APPEND and O_TRUNC file-access modes correctly; they formerly could result in file corruption. (Ref# 38567)
  • If fs-cifs failed to authenticate with the server using an encrypted password, it would then attempt to authenticate using an older method, sending the password unencrypted. This behavior has been changed: the password is now sent only in encrypted form, unless you specify the -o plainpwd option. You might need this option when mounting shares on older versions of Windows. (Ref# 39865)

    The new password options are as follows:

    -o plainpwd
    If logging in with an encrypted password fails, try to log in by using the password unencrypted.

    Note: Sending passwords in plain text may be considered a security problem.

    -o showpwd
    Show the plain-text password in the log file.

Note: Adding the password to the log file may be a security problem if unauthorized personnel have access to the log file.

fs-nfs2
A certain pattern of read(), seek(), and ftruncate() used to potentially return corrupt data when read() was called. We've fixed this issue. (Ref# 23581)
fs-nfs3
  • fs-nfs3 no longer (depending on the server implementation) causes an application calling readdir() for an NFS-mounted directory to loop forever; readdir() never returned NULL, but instead started again at the first directory entry. (Ref# 23092)
  • Using the -w size=num and -w number=num options to change the size of the cache for asynchronous write operations no longer causes internal cache corruption. (Ref# 25186)

    Note: These options aren't documented yet; for a description, see "Known issues," below.

  • When implementing asynchronous write (write caching), fs-nfs3 now correctly submits cached NFS write operations with the user ID of the application performing the write instead of root. (Ref# 23425)
  • fs-nfs3 now supports file sizes greater than 2 GB. (Ref# 21592)
  • The usage message now correctly says that the -s option, which specifies to use a soft mount, is a mountpoint option, not a global option. (Ref# 23381)
  • A certain pattern of read(), seek(), and ftruncate() used to potentially return corrupt data when read() was called. We've fixed this issue. (Ref# 23582)
  • fs-nfs3 now supports values greater than 8096 for the -B option. (Ref# 20810)
  • If you're using fs-nfs3 with asynchronous writes, and the server is using root-squashing (i.e. it maps root to anonymous or nobody), you can now create directories and files correctly. (Ref# 25113)
fs-nfs2, fs-nfs3
  • If an application was executed from an NFS-mounted filesystem, and its binary was then changed on the filesystem while keeping the same path and filename, executing the application a second time no longer causes the already running instance of the application to fault. (Ref# 22601)
  • NFS requests always have a unique identifier (xid) value. If an NFS server stops responding (for example, because of a temporary network failure), fs-nfs2 and fs-nfs3 try to remount the server and resend the request. We've found that some servers refuse to reply to a request if the request's xid hasn't been changed after a remount, which is against the NFS specification. We've added an -x option to work around this problem; it causes the client to change the xid of the request after a remount. (Ref# 25402)
  • When you mount and unmount an NFS mountpoint repeatedly for hundreds of iterations, there's no longer a potential for a resource to remain unreleased, causing all further mount attempts to fail. There's still a timeout period that must expire before mounts can continue. Unless you're doing this operation in a loop, you aren't likely to encounter this timeout. (Ref# 15078)
  • fs-nfs2 and fs-nfs3 are no longer vulnerable to improperly formatted RPC packets (e.g. those generated by a reconfigured NFS server for which the client had established a mountpoint). (Ref# 21480)
  • fs-nfs2 and fs-nfs3 now recover properly if the NFS server terminates and is restarted while the client is actively exchanging data with the server. Attempts to access files over the NFS link used to result in errors, with errno set to EPERM. (Ref# 25765)
  • If you call mkdir() with a pathname for which a symbolic link already exists, mkdir() now fails with errno set to EEXIST; it previously resolved the link. (Ref# 19213)
  • In some cases, fs-nfs2 and fs-nfs3 wouldn't treat a TCP connection as a stream, and would expect an entire packet to be returned by a single read(). This caused false failures, and fs-nfs2 and fs-nfs3 would remount the NFS server more often than necessary. We've fixed this. (Ref# 8873)
  • A link pathname that ends in a slash is now treated as the directory the link points to, rather than as the link itself. For example, readlink() now returns an error instead of data for a path that ends in /. (Ref# 19209)
  • If you specify O_EXCL and O_CREAT for open(), and the path names a symbolic link, open() now correctly fails and sets errno to EEXIST (see the sketch after this list). (Ref# 19210)
  • mkfifo() now fails with EEXIST if the path names a symbolic link. (Ref# 19212)
  • When deleting a directory, fs-nfs2 and fs-nfs3 would incorrectly attempt to use a file operation first, and then a directory operation. This made some NFS servers stop communicating with fs-nfs2 or fs-nfs3. We've fixed both client filesystems to use only a directory operation. (Ref# 38671, 38672)
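
Here's a minimal sketch of the corrected symbolic-link semantics described above (Ref# 19210, 19212, 19213). The pathname is hypothetical, and is assumed to name an existing symbolic link on an NFS mountpoint:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    #define LINK_PATH "/mnt/nfs/existing_symlink"  /* hypothetical */

    int main( void )
    {
        /* Each of these calls now fails with EEXIST instead of
           resolving the symbolic link. */
        if ( mkdir( LINK_PATH, 0755 ) == -1 && errno == EEXIST )
            printf( "mkdir: EEXIST, as expected\n" );

        if ( open( LINK_PATH, O_CREAT | O_EXCL | O_WRONLY, 0644 ) == -1
             && errno == EEXIST )
            printf( "open with O_CREAT|O_EXCL: EEXIST, as expected\n" );

        if ( mkfifo( LINK_PATH, 0644 ) == -1 && errno == EEXIST )
            printf( "mkfifo: EEXIST, as expected\n" );

        return 0;
    }
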
fs-cifs, fs-nfs2, fs-nfs3
The usage message now correctly states that you don't need to specify a server-export/mountpoint combination on the command line when you launch fs-cifs, fs-nfs2, or fs-nfs3. You can invoke the mount command later to add mountpoints. (Ref# 15446)
nfsd
  • nfsd no longer faults when listing the contents of directories whose pathnames exceed 255 characters. (Ref# 23248)
  • If the export in the /etc/exports file is a link to another directory, showmount now correctly displays the link pathname instead of the target pathname. (Ref# 26149)
  • nfsd lets you access files only up to 16 subdirectory levels deep within the directory exported in the /etc/exports file. For deeper levels, it now returns an error of NFS_ENAMETOOLONG to the client instead of faulting. (Ref# 38838)
  • nfsd previously became permanently blocked on a FIFO if a Neutrino NFS client attempted to create and use one. It now supports FIFOs as described in the NFS version 3 specification. The Neutrino fs-nfs2 and fs-nfs3 NFS clients don't currently support FIFO special files. (Ref# 11098)

    Note: When using nfsd, be careful about which files you export. If you don't have specific control over the NFS client and which files it accesses, you shouldn't export directories that contain nonregular files (e.g. /dev). Because these files don't have regular-file semantics, nfsd may behave unexpectedly, such as becoming permanently blocked on the manager that owns the pathname.

  • nfsd no longer faults when it accesses a directory pathname that's longer than 256 characters to service an NFS version 2 request. (Ref# 38441)

Known issues

npm-tcpip-v4.so
  • If a packet is smaller than the minimum Ethernet packet size, the packet may be padded with random data, rather than zeroes. (Ref# 21460)
  • The TCP/IP stack doesn't maintain the statistics for outbound packets over VLAN interfaces. (Ref# 16684)
  • The TCP/IP stack doesn't maintain the statistics for the number of input and output bytes or packets if the packets are forwarded via the fast-forward feature. (Ref# 23041)
  • The TCP/IP stack doesn't maintain proper interface statistics for the link speed. (Ref# 27015)
  • If the default UDP socket receive-buffer size is set near its limit (for example sysctl -w net.inet.udp.recvspace=240000), UDP-based sockets become unreliable. (Ref# 27386)
npm-qnet-l4_lite.so
  • Once Qnet has a domain, you can't set Qnet to not use a domain; you can only change the domain. (Ref# 38802)
  • You can't use the Qnet “file” resolver (resolve=file option) to resolve IP addresses. It currently supports MAC addresses in combination with the option bind=en. (Ref# 38997)
  • Qnet treats the _CS_DOMAIN configuration string differently if it's undefined or set to a NULL string. If it's undefined, Qnet uses a domain of .net.intra; if it's set to a NULL string, Qnet applies that as the domain (for example hostname.). (Ref# 19676)
fs-cifs, fs-nfs2, fs-nfs3
The documentation incorrectly implies that at least one server-export/mountpoint combination is required on the command line when you launch fs-cifs, fs-nfs2, or fs-nfs3. You can invoke the mount command later to add mountpoints. (Ref# 15446)
fs-nfs3
  • The -s option, which specifies to use a soft mount, isn't documented. (Ref# 23381)
  • The -w size=num and -w number=num options aren't documented. (Ref# 23583)

    Workaround: Here's a description:

    The fs-nfs3 manager supports a write-caching feature (by default), which greatly enhances performance. This is accomplished by performing write operations locally and releasing the application, but delaying transmission of the write operation to the NFS server until a later time.


    Note: You should disable this feature if multiple NFS clients will be accessing the same files on the NFS server, or else corruption may result because the server's file contents might be out of date compared to the cached version on each client. You can disable this feature using the -w sync=hard option.

    -w sync=hard
    Disable write caching.
    -w size=num
    The buffer size for each file, in units of 1 KB. The default is 8 (i.e. 8 KB).
    -w number=num
    The number of buffers; each buffer caches data for one file. The default is 10, which means up to 10 files can be buffered simultaneously.
    -w delay=sec
    The data in the cache will be flushed to the server after sec seconds. The default is 2 seconds.
fs-nfs2, fs-nfs3
  • fs-nfs2 lets you modify the on-disk binary file of an executable that's currently executing; it should instead return an error with errno set to EBUSY. (Ref# 38563)
  • If you use the mount command to mount an NFS server, and you specify the argument -o ver3 and fs-nfs3 isn't running, fs-nfs2 will accept this mount request. (Ref# 38640)
  • fs-nfs2 and fs-nfs3 don't support fchdir(). (Ref# 38548)
  • The NFS clients don't distinguish between a pathname ending or not ending in / when passed as the argument to mkdir(). (Ref# 38484)
  • NFS is a connectionless protocol. If a server stops responding, the NFS client continues to try to reach the server to complete an operation, until the server becomes available or the user stops the operation. While the fs-nfs2 and fs-nfs3 clients are trying to reach the server, NFS operations are blocked until they're successful. This isn't an issue if the client is talking to only one server, but if an fs-nfs2 process has mounted multiple servers, the blocked operation also blocks the client's ability to talk to the other servers. (Ref# 39084)

    Workaround: Start separate client (fs-nfs2, fs-nfs3) processes for each server you wish to mount.

fs-nfs2
  • If you attempt to access a link to a file using a pathname that ends in a slash, the operation fails with an incorrect errno of EINVAL instead of ENOTDIR. (Ref# 38560)
  • fs-nfs2 doesn't support files larger than 2 GB. (Ref# 39060)
  • fs-nfs2 doesn't correctly implement the options -w size=n and -w number=n as described in the fs-nfs2 usage message. Don't use them. (Ref# 39031)
  • fs-nfs2 doesn't support a -B option greater than 8096. (Ref# 39022)
fs-nfs3
The exit status is always 255. (Ref# 38512)
fs-cifs
  • pwrite() behavior isn't consistent with that of other Neutrino filesystems. If you open a file with O_APPEND, the offset supplied to pwrite() doesn't override O_APPEND; the data is written to the end of the file, regardless of the file offset supplied (see the sketch after this list). (Ref# 38576)
  • If you unlink() a file on a CIFS mount point, any open file descriptors for that file become invalid. (Ref# 38574)
  • If you perform a socket operation on a file descriptor for a file on an fs-cifs mountpoint, the operation fails, but the errno returned might not be ENOTSOCK. (Ref# 38572)
  • fs-cifs doesn't support POSIX file-locking functions. (Ref# 38570)
  • fs-cifs incorrectly sets an errno of EPERM instead of EBADF if you attempt to write to a file opened as O_RDONLY or O_ACCMODE. (Ref# 38565)
  • If a component of a pathname supplied to a function isn't a directory, fs-cifs should return ENOTDIR. It currently returns ENOENT. (Ref# 38564)
  • When a mkdir() call fails, fs-cifs may return a different errno than the local filesystem would. (Ref# 38573)
  • PATH_MAX for CIFS (and thus fs-cifs) isn't 1024 as in POSIX. This is set by both Windows and the CIFS specification. The pathname length can be up to 255 characters. (Ref# 38566)
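
Here's a minimal sketch of the pwrite() inconsistency described above (Ref# 38576). The pathname is hypothetical and is assumed to be on an fs-cifs mountpoint:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main( void )
    {
        /* Hypothetical file on an fs-cifs mountpoint. */
        int fd = open( "/cifs/share/file.txt", O_WRONLY | O_APPEND );

        if ( fd == -1 ) {
            perror( "open" );
            return 1;
        }

        /* On other Neutrino filesystems, this writes at offset 0;
           on an fs-cifs mountpoint, the data is appended to the end
           of the file despite the supplied offset. */
        if ( pwrite( fd, "X", 1, 0 ) == -1 )
            perror( "pwrite" );

        close( fd );
        return 0;
    }
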
nfsd
nfsd lets you access files only up to 16 subdirectory levels deep within the directory exported in the /etc/exports file. Deeper directory levels and files aren't accessible. (Ref# 40104)

Technical support

If you have any questions, comments, or problems with a QNX product, please contact Technical Support. For more information, see the How to Get Help chapter of the Welcome to QNX Momentics guide or visit our website, www.qnx.com.