[omniORB] Performance comparison of OmniEvents and OmniNotify
alex.tingle at bronermetals.com
Wed Mar 2 11:02:59 GMT 2005
> Well, in this case, I was referring to the API interface, which is so
> simple on Elvin that it's almost laughable to call it an API. This is
> how you would connect and subscribe to something matching the
> constraint language in Python:
> connection = elvin.connect("elvin://random.foo.org")
> sub = connection.subscribe("require(event_id)")
> insert_event is a method I've written to actually do the work of
> dealing with an incoming event.
In all fairness, the Event Service API is quite simple too:
import CosEventComm__POA   # skeletons from the omniORBpy COS stubs

class Consumer (CosEventComm__POA.PushConsumer):
    def push(self, data):
        print("Push Consumer: push()")
    def disconnect_push_consumer(self):
        print("Push Consumer: disconnect_push_consumer()")
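And the connect-and-subscribe step compares well with the Elvin two-liner. A sketch, assuming you already hold the channel reference (normally resolved from the Naming Service) and a consumer object like the one above:

```python
def subscribe(channel, consumer):
    # Standard CosEventChannelAdmin calls: get the consumer-side admin,
    # ask it for a proxy supplier, then plug our consumer into it.
    proxy = channel.for_consumers().obtain_push_supplier()
    proxy.connect_push_consumer(consumer)
    return proxy  # keep it: disconnect_push_supplier() detaches later
```

The returned proxy is the only handle you need to tear the subscription down again.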
> I've run Elvin with 2,500 events per second without using more the 10%
> CPU on an iMac G5. It's pretty impressive, actually. I haven't had a
> chance to push omniEvents anywhere near that yet, as I've only
> recently started looking at moving to a pure CORBA environment. While
> I generally argue against early optimization, I do have to worry about
> performance in this scenario.
How much data are you sending with each event? Do you need a low
latency as well as high throughput?
> Individual sources can generate hundreds of events in a second, and the
> overall system may have to deal with 5-10 thousand events per second.
> That necessitates dealing with things a bit differently.
CORBA over TCP is always going to struggle to achieve rates like that,
especially with the OMG's one-at-a-time delivery scheme. The latency on
even a loopback can be ~0.1ms.
As I say, my preference is for multicast over the network, and delivery
via Unix pipe transport at the edges. The alternative is batching,
which just pushes complexity into the user code.
> I need to use the pull model,
> which unfortunately sets up a polling situation, and given the event
> load, I can imagine polling 500x a second to get events could be a
> killer, even with appropriate back-off algorithms. Hence, I had
> designed it to grab events in batches of up to 100, and work from
Can you explain why you need pull? I can't imagine why a high-volume
application would use it. (BTW - use 'try_pull' instead of plain 'pull'.)
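If you do end up polling, a loop along these lines would work; `proxy` is assumed to be a ProxyPullSupplier already connected to your channel. In the omniORBpy mapping, try_pull() returns its result and its boolean out-parameter as a tuple:

```python
import time

def drain(proxy, idle_sleep=0.01):
    # Collect every queued event without blocking.  try_pull() returns
    # (event, has_event); has_event is false when the queue is empty.
    events = []
    while True:
        event, has_event = proxy.try_pull()
        if not has_event:
            break
        events.append(event)
    if not events:
        time.sleep(idle_sleep)  # simple back-off when there was nothing
    return events
```

That gets you your batches of events from one call site, without ever blocking the caller in the ORB.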
>> ...just have omniEvents
>> running on each client machine. All the network traffic can then be
>> optimised, but the interface you see is still standards compliant. The
>> one-at-a-time delivery then always happens over the local loopback (or
>> even through a pipe - on Unix).
> This is something I hadn't thought of, honestly. So in that case, I
> would need to build conduits between the various systems, yes? Is
> there some way to do that programmatically, rather than through command
> line utilities?
Of course you can do it programmatically. 'eventf' just gets proxies
from two channels and tells them to talk to each other. However, you
may not realise that the relationship persists until it is destroyed -
it does not have to be recreated each time omniEvents is restarted.
Unless you need to change your architecture on the fly, you should be
able to set this up at installation time and then just forget about it.
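To sketch the idea, a federation helper needs nothing beyond the standard CosEventChannelAdmin interfaces. This is my rough outline of the technique, not the actual 'eventf' source:

```python
def federate(channel_a, channel_b):
    # channel_a's proxy supplier pushes into channel_b's proxy consumer.
    supplier_proxy = channel_a.for_consumers().obtain_push_supplier()
    consumer_proxy = channel_b.for_suppliers().obtain_push_consumer()
    # A ProxyPushConsumer is itself a PushConsumer, and a ProxyPushSupplier
    # is a PushSupplier, so the two proxies connect directly to each other.
    consumer_proxy.connect_push_supplier(supplier_proxy)
    supplier_proxy.connect_push_consumer(consumer_proxy)
    return supplier_proxy, consumer_proxy
```

After this, every event pushed into channel_a also appears on channel_b, and omniEvents' persistence keeps the link across restarts.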
> Thank you ever so much for your help! In many ways, I'm dealing with
> similar scale issues to what is addressed in the telecom space with
> call detail records, etc.
I know that Rubix Information Tech. use omniEvents in a telecoms
environment. I don't think their event throughput is quite as high as
yours, though.
:: alex tingle
:: alex.tingle AT firetree.net +44-7901-552763