io-pkt loop-back TCP socket communications speed degradation
 
________________________________________________________________________

Applicable Environment
________________________________________________________________________
  • Topic: io-pkt loop-back TCP socket communications speed degradation
  • SDP: 6.4.0
  • Target: Any supported target
________________________________________________________________________

Solution
________________________________________________________________________

An issue was reported in io-pkt with a simple client and server TCP socket application running on a single machine over the loop-back device (127.0.0.1). After about 20 packets of data (~1500 bytes each) had been sent, a severe degradation in performance was seen:
- Before the degradation, sending/receiving one of these packets took around 190us.
- After the degradation, sending/receiving one of these packets took around 5s.

When the same client and server programs were run on different machines over an actual physical network link, the issue was not seen.

It turned out that the test program in question was setting SO_RCVBUF and SO_SNDBUF to a value of 1500 bytes. This is much smaller than the recommended size of at least 4x the MSS, as noted in "UNIX Network Programming: The Sockets Networking API, 3rd Edition":
=====
The TCP socket buffer sizes should be at least four times the MSS for the connection. If we are dealing with unidirectional data transfer, such as a file transfer in one direction, when we say "socket buffer sizes" we mean the socket send buffer size on the sending host and the socket receive buffer size on the receiving host. For bidirectional data transfer, we mean both socket buffer sizes on the sender and both socket buffer sizes on the receiver. With typical default buffer sizes of 8192 bytes or larger, and a typical MSS of 512 or 1460, this requirement is normally met.
=====
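
As a rough illustration, the sketch below shows the client side of such a test with the socket buffers sized to at least 4x the MSS before connecting. The port number (5000) and the 4 * 1460 value are assumptions chosen for this example, not taken from the original test program; setting both options to 1500 instead reproduces the reported behaviour.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s == -1) {
        perror("socket");
        return 1;
    }

    /* Setting both buffers to 1500 reproduces the reported slowdown;
     * at least 4 x MSS (here 4 * 1460 = 5840) follows the guideline above. */
    int bufsize = 4 * 1460;
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize)) == -1 ||
        setsockopt(s, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize)) == -1) {
        perror("setsockopt");
        close(s);
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);                  /* hypothetical port */
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
        perror("connect");
        close(s);
        return 1;
    }

    /* ... send/receive the ~1500-byte packets here ... */

    close(s);
    return 0;
}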

These values are also covered in the QNX documentation for getsockopt():

http://www.qnx.com/developers/docs/6.4.0/neutrino/lib_ref/g/getsockopt.html#SO_RCVBUF

which states:
===
SO_RCVBUF and SO_SNDBUF
level: SOL_SOCKET

Gets or sets the normal buffer sizes allocated for output (SO_SNDBUF) and input (SO_RCVBUF) buffers. You can increase the buffer size for high-volume connections, or decrease it to limit the possible backlog of incoming data. The system places an absolute limit on these values and defaults them to at least 16K for TCP sockets.
===
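
If it is unclear what buffer sizes a given socket actually ended up with, a small helper along the following lines (the function name is ours, not part of any QNX API) can read the values back with getsockopt() for comparison against the guideline above.

#include <stdio.h>
#include <sys/socket.h>

/* Print the send and receive buffer sizes the stack assigned to socket s. */
static void print_bufsizes(int s)
{
    int sndbuf = 0, rcvbuf = 0;
    socklen_t len = sizeof(int);

    if (getsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) == -1)
        perror("getsockopt SO_SNDBUF");
    len = sizeof(int);
    if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == -1)
        perror("getsockopt SO_RCVBUF");

    printf("SO_SNDBUF = %d, SO_RCVBUF = %d\n", sndbuf, rcvbuf);
}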

Increasing the socket buffer sizes in line with these guidelines resolved the issue.

________________________________________________________________________
NOTE: This entry has been validated against the SDP version listed above. Use caution when considering this advice for any other SDP version. For supported releases, please reach out to QNX Technical Support if you have any questions/concerns.
________________________________________________________________________


Related Attachments
 None Found
