[omniORB] Performance Question

Georg Huettenegger georg@mondoshawan.unix.cslab.tuwien.ac.at
Sat, 10 Jul 1999 18:30:57 +0200 (CEST)


Sai-Lai Lo,

I can't say with 100% certainty that VisiBroker uses shared memory, but I
think it does. I have used shared memory myself under Windows NT (but not
yet under Unix, so I can't compare with your comments about Unix) and I
think there should be no problem. Under Windows you can wait for an event
that is signalled when a process (presumably the partner) dies; that is
the moment for cleanup. With shared memory, one approach would be to have
WaitForMultipleObjects wait on one event signalling a new message and a
second one signalling the partner's death. I must admit, nevertheless,
that I have not used shared memory in quite such a context.

I agree with you that tests with more clients would be very interesting,
and 1:1 tests already exist (I did not develop a new CORBA benchmark, just
slightly adapted an existing one). But in trying to compare such a variety
of systems, I would most definitely end up with either no benchmark or
VERY questionable results. So I chose the simple scenario, which primarily
tests latency and marshalling.

I agree with you that omniORB is one of the fastest ORBs available; that is
why I was interested in what caused the bad array results. I expected
omniORB to be the fastest ORB remotely and VisiBroker to be the fastest
locally. (I suspect no tool with a reasonable network protocol can do
faster transfers over the network than omniORB.)

To conclude this argument: I think you are right about the priorities, but
better local performance is most definitely also important for an ORB to
be universally usable. (To get really fast local performance, the only
possibility is to use a DLL as the server, hopefully done transparently by
the ORB so that exchanging the server for a remote one would make no
difference -- this is also a feature VisiBroker claims to have, but I did
not try it.)

Best Regards,
  Georg Huettenegger

> We take performance seriously. I think it bears out in our own tests and in
> all the other tests I've seen on the Web that omniORB2 is among the fastest
> ORBs available. The point I was trying to make, perhaps not as clearly as
> I should have, is that there are other aspects of an ORB's design that can
> affect the scaling of its performance.
> 
> I honestly don't know why, in Georg's measurement, VisiBroker performs better
> in the local bulk data transfer. It could well be that it is using shared
> memory. Otherwise, I do not see where one can cut the overhead further
> than has been done in omniORB2.
> 
> Shared memory, for obvious reasons, could be faster than a loopback socket
> interface. (Is there shared memory support on Windows?) There are, however,
> drawbacks to using shared memory. The System V shared memory API, which, as
> far as I know, is the API for shared memory on all Unix platforms, is
> deficient. Its semantics allow shared memory to be allocated and never
> cleaned up if the processes using a shared memory segment die. This is bad
> news. A proper way to use shared memory would be to have a shared memory
> manager allocate segments to other processes and look after the cleanup
> if those processes misbehave. AFAIK, this is how things are done in the
> Oracle database. I do not know whether other ORBs that purportedly use
> shared memory for local transport have taken precautions to avoid shared
> memory segments being left behind, never cleaned up, when the ORB processes
> abort. I may also be totally wrong about shared memory; if so, please point
> out my mistakes.
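[The allocate/detach/remove life cycle described above, where a segment
outlives its processes unless someone explicitly removes it, can be
illustrated with Python's stdlib `multiprocessing.shared_memory` module;
the shmget/shmdt/shmctl comments are only rough System V analogies, not
claims about the module's implementation:

```python
from multiprocessing.shared_memory import SharedMemory

# Create a named segment -- roughly shmget(..., IPC_CREAT | ...).
seg = SharedMemory(create=True, size=4096)
name = seg.name
seg.buf[:4] = b"data"

seg.close()    # detach this process's mapping (cf. shmdt); segment still exists
seg.unlink()   # actually remove the segment (cf. shmctl(..., IPC_RMID, ...))

# Once unlinked, the segment really is gone; re-attaching by name fails.
# If a process dies before the unlink step, the segment is simply leaked,
# which is exactly the cleanup problem discussed above.
try:
    SharedMemory(name=name)
    segment_removed = False
except FileNotFoundError:
    segment_removed = True
```

Nothing forces the unlink to happen, which is why a dedicated manager
process that owns the segments and cleans up after misbehaving clients is
the safer design.]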
> 
> I take your point that local bulk data transfer could do better with other
> IPC mechanisms. It just isn't something that we have time to look into at
> the moment. I assume everybody is happy to see GIOP 1.2, bi-directional GIOP
> for doing callbacks through a firewall, and proper codeset conversion support
> for wchar and wstring come sooner.
> 
> If anyone wants to create a local IPC transport for omniORB2, it would be
> most welcome. The interfaces to plug in a transport are well-defined, and I
> have given pointers on how to use the interfaces several times on this
> list. 
> 
> Regards,
> 
> Sai-Lai
> 
> 
> >> It seems to me quite a number of people have done similar comparisons. It
> >> is more interesting to me to see comparisons that involve multiple
> >> concurrent clients doing transfers, measurements of how much internal
> >> buffering an ORB consumes, etc.
> 
> > It is more interesting in our particular case to see comparisons such as
> > Georg is doing, with a single client doing transfers, because that's how
> > WE need to use omniORB :).
> 
> > That is, the primary potential performance bottleneck in our environment
> > is a single local connection to a single server transferring several
> > megabytes of data per second, possibly for extended periods of time. The
> > cost of passing this data via sockets for a local transfer really cuts
> > into performance badly!
> 
> > The only possible workaround is creating the server as a DLL, and loading the 
> > DLL so that it uses virtual function calls. But it seems that defeats one of 
> > the reasons we use CORBA in the first place: so that different companies can 
> > create different plug-in components (as executables) in a standard manner.
> 
> > A possible solution has been suggested in Georg's comment: using shared
> > memory for local connections.
> 
> > The apparent implication that the omniORB team can't be bothered with
> > performance is rather disheartening in light of the obvious pride they
> > take in having the "fastest ORB" around. Here's a possible weakness
> > that's been demonstrated in omniORB's handling of large data sets. Won't
> > your team at least please look at it, rather than dismissing it as
> > irrelevant? Because it isn't, at least for us and our customers; and I
> > suspect for other companies as well.
> 
> 
> 
> -- 
> Sai-Lai Lo                                   S.Lo@uk.research.att.com
> AT&T Laboratories Cambridge           WWW:   http://www.uk.research.att.com 
> 24a Trumpington Street                Tel:   +44 1223 343000
> Cambridge CB2 1QA                     Fax:   +44 1223 313542
> ENGLAND
> 
>