[omniORB] Bad performance with oneway invocation

dahaverk at rockwellcollins.com dahaverk at rockwellcollins.com
Mon Jul 21 13:00:52 BST 2003


> To some extent, yes. My application is a kind of Information Service. It
> is a single CORBA server to which many other applications (~1000) publish
> their information. There are also several tens of receiver applications,
> which subscribe to some information in this server. When the info in the
> server is changed, the appropriate subscriber is notified. The
> notification is a oneway message, which fits the IS model well: the
> server does not really care if a subscriber is working or dead, but it
> has to do the notification efficiently so that slow subscribers do not
> affect the faster ones. In the real configuration some receivers may get
> several thousand messages per second. Of course these messages are not
> empty, but they are normally short (a few tens of bytes).

This sounds like more of an issue with the underlying protocol.  I haven't
looked at how omniORB does oneway messages, but if they are being done
with TCP messages, that could be the performance problem with your
application.   Note that this behavior is because in the CORBA Messaging
specification one-way messages are not truly asynchronous; they are
treated as deferred synchronous (in OMG spec terms).   A basic assumption
of CORBA messaging is that the underlying transport is reliable, which is
why the OMG has a spec for mapping GIOP onto TCP.

You should look at having omniORB use T/TCP instead.   Your application
sounds like it is transaction oriented.   T/TCP is compatible with TCP and
its usage should be transparent with respect to the ORB.

The late W. R. Stevens wrote about T/TCP (Transaction-oriented TCP).
T/TCP has been around since 1995 and Linux and BSD do have support for
it, although you may have to add it into the system.   Maybe application
needs like this could help convince people to make T/TCP part of the
standard open-source distros.

Later,

David Haverkamp



                                                                                                                                            
From: Serguei Kolos <Serguei.Kolos at cern.ch>
Sent by: omniorb-list-bounces at omniorb-support.com
cc: omniorb-list at omniorb-support.com
Subject: Re: [omniORB] Bad performance with oneway invocation
Date: 07/18/2003 09:16 AM




Hello

Duncan Grisby wrote:
      On Monday 14 July, Serguei Kolos wrote:

      [...]

            2. The asynchronous (oneway) round-trip time is from 3 to 10
            times worse with omniORB.


      Before I address the specific points, can I just ask what your
      application is doing?  Does it reflect what this benchmark is doing?
To some extent, yes. My application is a kind of Information Service. It is
a single CORBA server to which many other applications (~1000) publish
their information. There are also several tens of receiver applications,
which subscribe to some information in this server. When the info in the
server is changed, the appropriate subscriber is notified. The notification
is a oneway message, which fits the IS model well: the server does not
really care if a subscriber is working or dead, but it has to do the
notification efficiently so that slow subscribers do not affect the faster
ones. In the real configuration some receivers may get several thousand
messages per second. Of course these messages are not empty, but they are
normally short (a few tens of bytes).

      As you say in a later email...

      [...]

            But, unfortunately there is a serious performance drawback in
            one particular case - when a bunch of oneway messages is sent
            to the server over a single connection. In this case the
            server reads many of them with a single recv operation (which
            is very good!!!), but then it puts each message into a
            separate buffer by copying it with the memcpy function (the
            giopStream::inputMessage function in file giopStream.cc).
            This seriously degrades the performance in such cases and
            noticeably (by a factor of 4) increases CPU consumption.
            Can this code be re-worked to eliminate the memory copying?


      It would be possible to avoid the copying, but to be honest I don't
      think it's worth the effort. The large amount of copying only happens
      in a very restricted set of circumstances, i.e. that a client is
      sending requests so fast that TCP batches them into single packets,
      _and_ that each request has very few arguments, so many requests fit
      in a single ORB buffer. Furthermore, the overhead of copying is only
      relevant to the overall invocation time if the operation
      implementation does so little work that the time to do a memory
      allocation and a memcpy is significant.
You are right about the memcpy operation. I'm sorry - I overestimated its
impact.

The good thing is that the impact of splitting a bunch of oneway calls is
really negligible :-)

But the bad thing is that the problem with oneway calls still exists
somewhere :-(    (I hope not only in my mind)

I repeated my tests, putting a delay on the client side between remote
calls. Now the client sends only 50 messages per second. The server always
receives one message at a time and does not do any splitting.
But ... the thread which processes oneway calls consumes 8 times more CPU
(user time) than the thread which processes identical two-way calls. The
system CPU consumption is almost identical.
Do you have any idea why this is so?

Thanks,
Sergei
_______________________________________________
omniORB-list mailing list
omniORB-list at omniorb-support.com
http://www.omniorb-support.com/mailman/listinfo/omniorb-list







