There is a description here:
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_MRG/1.0/html/Realtime_Tuning_Guide/sect-Realtime_Tuning_Guide-Application_Tuning_and_Deployment-TCP_NODELAY_and_Small_Buffer_Writes.html

Also from the tcp(7) man page:
TCP_NODELAY
    If set, disable the Nagle algorithm. This means that segments are always
    sent as soon as possible, even if there is only a small amount of data.
    When not set, data is buffered until there is a sufficient amount to send
    out, thereby avoiding the frequent sending of small packets, which results
    in poor utilization of the network. This option is overridden by TCP_CORK;
    however, setting this option forces an explicit flush of pending output,
    even if TCP_CORK is currently set.
I've written a sample TCP server that sets TCP_NODELAY on the socket. It then sends out data in various sizes from 1 to 8192 bytes. You can compile and run it like this:
$ gcc -o nodelay_test nodelay_test.c
$ ./nodelay_test
Then from another shell you can connect to it like this, using netcat:
$ nc localhost 5678
You can make various timing experiments by enabling/disabling the TCP_NODELAY setting, and using "time" with nc like this:
$ time nc localhost 5678 > /dev/null
real 0m0.003s
user 0m0.000s
sys 0m0.000s
Of course you can also use it over the network for more interesting results. Just replace localhost with the IP address of the machine running the test program. And you can increase the size of the buffer.
Hope that helps!