[omniORB] Strange performance result

Serguei Kolos Serguei.Kolos at cern.ch
Fri May 19 18:35:35 BST 2006


Hello

For our project we are still using the latest stable omniORB
version (4.0.7). We plan to move to the 4.1.x branch as soon as it
becomes the mainstream version. We have very strong performance
requirements, and I wonder if you have noticed any difference in
performance between the 4.0.7 and 4.1.0 versions, at least in the
tests that run stably.

Cheers,
Serguei

Zhao, Jason wrote:

>Please ignore my previous message. We installed latest release 4.1.0
>beta 2 on Linux 2.6.12 (no smp) and ran several tests. This time the
>results are consistent.
>
>Jason
>
>-----Original Message-----
>From: omniorb-list-bounces at omniorb-support.com
>[mailto:omniorb-list-bounces at omniorb-support.com] On Behalf Of Zhao,
>Jason
>Sent: Thursday, May 18, 2006 11:25 AM
>To: omniorb-list at omniorb-support.com
>Subject: [omniORB] Strange performance result
>
>Hi,
>
>I'm sorry if this problem looks complicated.
>
>I used a simple IDL to test omniORB's performance on oneway calls
>passing data between two machines. When the bandwidth between the two
>machines is less than 150 Mbps, the performance results are consistent:
>different runs produce similar numbers, and within the same run,
>larger message sizes result in higher application-level throughput. But
>when the bandwidth between the two machines is set higher than
>150 Mbps (using Linux traffic control; the physical interface is gigabit
>Ethernet), the numbers are unpredictable: they are inconsistent across
>multiple runs, and even within the same run, larger message sizes
>sometimes result in lower application-level throughput.
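For reference, bandwidth caps like the one described above are typically applied with the Linux `tc` tool. The poster's exact commands are not shown in the message, so this is only a hypothetical sketch; `eth0` is a placeholder interface name:

```shell
# Hypothetical sketch: cap eth0 at ~150 Mbit/s with a token bucket filter.
# Requires root privileges; eth0 is a placeholder interface name.
tc qdisc add dev eth0 root tbf rate 150mbit burst 32kbit latency 400ms

# Remove the cap again when done:
tc qdisc del dev eth0 root
```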
>
>I tested ACE TAO using the same IDL in the same environment, and the
>results are always consistent, both across runs and within the same run.
>Does anyone know what might cause omniORB to behave unpredictably when
>the bandwidth is higher than 150 Mbps (CPU utilization was not very high
>in those cases)?
>
>Thank you.
>
>Jason
>
>
>Configuration:
>
>OMNIORB_4_1_0_BETA_1 snapshot, downloaded from
>http://omniorb.sourceforge.net/snapshots/omniORB-4.1-latest.tar.gz on
>05/15
>IPv6
>Linux version 2.4.21-4.ELsmp (bhcompile at daffy.perf.redhat.com) (gcc
>version 3.2.3 20030502 (Red Hat Linux 3.2.3-20)) #1 SMP Fri Oct 3
>17:52:56 EDT 2003
>CPU: Intel Xeon 2.66 GHz, 533 MHz FSB, 512 KB cache
>Memory: 2x 512 MB DDR266 PC2100 ECC registered
>
>===============
>Here is the IDL used (from ACE TAO's performance test suite). I tested
>message sizes of 128, 256, 512, 1024, 2048, and 4096 bytes, sent 10000
>messages for each size, and calculated the application-level throughput
>per message size.
>
>module Test
>{
>  /// The data payload
>  typedef sequence<octet> Payload;
>  struct Message {
>    unsigned long message_id;
>    Payload the_payload;
>  };
>
>  /// Implement a simple interface to receive a lot of data
>  interface Receiver
>  {
>    /// Receive a big payload
>    oneway void receive_data (in Message the_message);
>
>    /// All the data has been sent, print out performance data
>    void done ();
>  };
>
>  /// Implement a factory to create Receivers
>  interface Receiver_Factory
>  {
>    /// Create a new receiver
>    Receiver create_receiver ();
>
>    /// Shutdown the application
>    oneway void shutdown ();
>  };
>};
>
>================
>Here is the omniORB configuration file used (comments removed to keep
>the message size down). Basically, I make the server side single-threaded
>so that messages are processed in the order they were sent.
>
>traceLevel = 0
>traceExceptions = 0
>traceInvocations = 0
>traceInvocationReturns = 0
>traceThreadId = 0
>traceTime = 0
>dumpConfiguration = 0
>maxGIOPVersion = 1.2
>giopMaxMsgSize = 2097152    # 2 MBytes.
>strictIIOP = 0
>tcAliasExpand = 0
>useTypeCodeIndirections = 1
>acceptMisalignedTcIndirections = 0
>scanGranularity = 5
>nativeCharCodeSet = ISO-8859-1
>nativeWCharCodeSet = UTF-16
>abortOnInternalError = 0
>abortOnNativeException = 0
>InitRef = NameService=corbaname::[2001:411:2:6:2::]
>DefaultInitRef = corbaloc::[2001:411:2:6:2::]
>clientCallTimeOutPeriod = 0
>clientConnectTimeOutPeriod = 0
>supportPerThreadTimeOut = 0
>outConScanPeriod = 120
>maxGIOPConnectionPerServer = 1
>oneCallPerConnection = 1
>offerBiDirectionalGIOP = 0
>diiThrowsSysExceptions = 0
>verifyObjectExistsAndType = 0
>giopTargetAddressMode = 0
>bootstrapAgentPort = 900
>endPoint = giop:tcp:[2001:411:2:6:2::]:20000
>serverCallTimeOutPeriod = 0
>inConScanPeriod = 180
>threadPerConnectionPolicy = 1
>maxServerThreadPerConnection = 1
>maxServerThreadPoolSize = 100
>threadPerConnectionUpperLimit = 10000
>threadPerConnectionLowerLimit = 9000
>threadPoolWatchConnection = 1
>connectionWatchPeriod = 50000
>connectionWatchImmediate = 0
>acceptBiDirectionalGIOP = 0
>unixTransportDirectory = /tmp/omni-%u
>unixTransportPermission = 0777
>supportCurrent = 0
>copyValuesInLocalCalls = 0
>objectTableSize = 100
>poaHoldRequestTimeout = 0
>poaUniquePersistentSystemIds = 1
>supportBootstrapAgent = 0
>
>_______________________________________________
>omniORB-list mailing list
>omniORB-list at omniorb-support.com
>http://www.omniorb-support.com/mailman/listinfo/omniorb-list
>
>



