[omniORB] Throughput Testing

youzhong liu liu@ecel.ufl.edu
Mon, 2 Aug 1999 17:09:23 -0400 (EDT)


I am running some throughput tests on different CORBA implementations over
Gigabit Ethernet. Thanks to omniORB's many good features, its latency and
throughput are the best in my tests.

However, I can't explain the following test result:

  For oneway calls, the socket buffer size is 8K bytes. I vary the data
size and measure the throughput (the measurement loop is sketched below):
    when the data size is 2K bytes, the throughput is about 110 Mbps
    when the data size is 4K bytes, the throughput drops to about 8 Mbps
    when the data size is 8K bytes, the throughput is about 110 Mbps again.

    Using a finer step for the data size, I found that the throughput
begins to drop at 3.5K (3K plus 512 bytes), reaches the bottom at 4.0K,
and recovers to its normal value at 4.5K.
  
    When I change the socket buffer size to 64K, this phenomenon
disappears! There is no sudden drop when the data size is 4K.
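
In case it helps, the client side of my test looks roughly like the
sketch below. The interface Tester, its oneway operation push(), the
generated header tester.hh, and the IOR-on-the-command-line convention
are all just illustrative names, not anything specific to omniORB:

  // Hypothetical IDL (names are mine, for illustration):
  //
  //   typedef sequence<octet> OctetSeq;
  //   interface Tester {
  //     oneway void push(in OctetSeq data);
  //   };

  #include <stdio.h>
  #include <sys/time.h>
  #include "tester.hh"          // omniORB-generated stub for the IDL above

  int main(int argc, char** argv)
  {
    CORBA::ORB_ptr orb = CORBA::ORB_init(argc, argv, "omniORB2");

    // The server's stringified IOR is passed on the command line.
    CORBA::Object_var obj = orb->string_to_object(argv[1]);
    Tester_var tester = Tester::_narrow(obj);

    const CORBA::ULong dataSize   = 4096;   // varied from run to run
    const CORBA::ULong iterations = 10000;

    OctetSeq data(dataSize);
    data.length(dataSize);                  // contents don't matter

    struct timeval t0, t1;
    gettimeofday(&t0, 0);
    for (CORBA::ULong i = 0; i < iterations; i++)
      tester->push(data);                   // oneway: no reply expected
    gettimeofday(&t1, 0);

    double secs = (t1.tv_sec - t0.tv_sec)
                + (t1.tv_usec - t0.tv_usec) / 1e6;
    double mbps = 8.0 * dataSize * iterations / (secs * 1e6);
    printf("%lu bytes: %.1f Mbps\n", (unsigned long)dataSize, mbps);
    return 0;
  }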

At first I assumed this was caused by TCP_NODELAY. But that doesn't make
sense: if TCP_NODELAY were off (i.e. Nagle's algorithm were active), the
throughput should be low for all small data sizes, not just around 4K.
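
For reference, by TCP_NODELAY I mean the standard socket option that
disables Nagle's algorithm. A sketch of how that option and the send
buffer size are set on a raw socket (illustration only, not omniORB's
actual code):

  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <netinet/tcp.h>

  // Illustration only -- not omniORB's actual code.
  void tune_socket(int sock)
  {
    int nodelay = 1;            // 1 = disable Nagle's algorithm
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
               (char*)&nodelay, sizeof(nodelay));

    int sndbuf = 8 * 1024;      // the 8K send buffer from my tests
    setsockopt(sock, SOL_SOCKET, SO_SNDBUF,
               (char*)&sndbuf, sizeof(sndbuf));
  }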

Could anybody help me explain this? Thanks very much!

Youzhong