[omniORB] thread pool vs. thread per connection

baileyk@schneider.com
Mon Nov 25 15:14:00 2002


I'm still looking at the performance of omniNames.  I've timed the
following sequence of calls:

- resolve a context nested 4 levels deep (e.g.
  id1.kind1/id2.kind2/id3.kind3/id4.kind4)
- narrow the result to a naming context
- call list() with how_many == 100
- receive 12 contexts as a result.

The client was Python/omniORBpy 2.0.0, and the server was omniNames
4.0.0.  The performance fix to avoid returning an empty binding iterator
was in place for all runs.  This is on Solaris 8, with Sun C++ 5.3.
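
In omniORBpy terms, the sequence I'm timing looks something like the
sketch below.  The context names are illustrative and the loop count is
arbitrary; it assumes the nested context already exists and that the ORB
can find omniNames through resolve_initial_references:

    import sys, time
    from omniORB import CORBA
    import CosNaming

    orb  = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)
    obj  = orb.resolve_initial_references("NameService")
    root = obj._narrow(CosNaming.NamingContext)

    # id1.kind1/id2.kind2/id3.kind3/id4.kind4
    name = [ CosNaming.NameComponent("id%d" % i, "kind%d" % i)
             for i in range(1, 5) ]

    N = 100
    start = time.time()
    for i in range(N):
        obj    = root.resolve(name)                   # resolve, 4 levels deep
        ctx    = obj._narrow(CosNaming.NamingContext) # narrow to a context
        bl, bi = ctx.list(100)                        # list() with how_many == 100
        if bi is not None:          # with the fix, bi is nil when all fit
            bi.destroy()
    elapsed = (time.time() - start) / N

    print "average: %.1f ms (%d bindings)" % (elapsed * 1000.0, len(bl))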

With threadPerConnectionPolicy true:
- average time was 5 ms.

With threadPerConnectionPolicy false and threadPoolWatchConnection false:
- average time was > 500 ms.

Using thread-per-connection is fine for me with omniNames.  However, I
have other servers that should not use it (clients will multiplex calls
on a connection, a large number (> 10) of clients must be treated
fairly, etc.).  I understand there is context-switch overhead involved
in using a thread pool, but the impact on omniNames suggests the cost is
far higher than a context switch alone would explain.  I'm trying to
find details of the thread pool implementation used in omniORB.  Any
help would be appreciated, along with ideas for additional
configurations to test.
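
In case it helps, these are the pool-related options I've spotted in the
4.0 documentation so far, in omniORB.cfg form (the values shown are what
I believe are the defaults; corrections welcome):

    threadPerConnectionPolicy = 1    # one thread per connection when set
    threadPoolWatchConnection = 1    # pool thread keeps watching the
                                     # connection after sending a reply
    maxServerThreadPoolSize   = 100  # upper bound on pool threads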

- Kendall