[omniORB] thread pool vs. thread per connection

baileyk@schneider.com
Mon Nov 25 18:53:00 2002


This is related to my previous post regarding the performance of omniNames
with different thread handling configurations.  That post mentioned a 5ms
vs 500ms measured time for a sequence of calls.  The latest tests I've run
put it closer to 5ms vs 225ms in a more controlled test environment.
Still a big difference.
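
For reference, the two cases are selected with settings along these lines
in omniORB.cfg (option names per the omniORB 4 documentation; the pool
size shown is just an example, not necessarily what I tested with):

    # thread per connection (the ~5ms case)
    threadPerConnectionPolicy = 1

    # thread pool (the ~225ms case)
    threadPerConnectionPolicy = 0
    maxServerThreadPoolSize = 100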

I've done a quick scan of the giopServer.cc code which handles the thread
policy.  It appears that the thread pool policy does not refer to a pool of
idle threads.  Other thread pools I've worked with and written myself have
involved the worker threads blocking on a condition variable, and one or
more would be signaled to start some job (or take over as leader).  When
the job is done, the thread returns to the pool by waiting on the condition
variable again.  In omniORB's thread pool, a worker thread dies as soon as
it has finished a job and there are no pending jobs to start.  If the job
queue is always empty, a new thread is created for each request.
What's being called a thread pool policy looks more like a thread-per-request
policy, with thread reuse only once a high-water mark is reached.
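
For comparison, here's a minimal sketch of the idle-pool shape I'm
describing, written with modern standard-library threads for brevity.
omniORB itself uses omni_thread, so everything below is illustrative
rather than omniORB API:

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    class IdlePool {
    public:
      explicit IdlePool(std::size_t n) {
        // All worker threads are created once, up front.
        for (std::size_t i = 0; i < n; ++i)
          workers_.emplace_back([this] { run(); });
      }

      ~IdlePool() {
        { std::lock_guard<std::mutex> l(m_); stop_ = true; }
        cv_.notify_all();            // wake every idle worker so it can exit
        for (auto& t : workers_) t.join();
      }

      void submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> l(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();            // signal one idle worker to take the job
      }

    private:
      void run() {
        for (;;) {
          std::function<void()> job;
          {
            std::unique_lock<std::mutex> l(m_);
            // This is the "return to the pool": block on the condition
            // variable instead of dying when the queue goes empty.
            cv_.wait(l, [this] { return stop_ || !jobs_.empty(); });
            if (stop_ && jobs_.empty()) return;
            job = std::move(jobs_.front());
            jobs_.pop();
          }
          job();                     // run the request outside the lock
        }
      }

      std::mutex m_;
      std::condition_variable cv_;
      std::queue<std::function<void()>> jobs_;
      std::vector<std::thread> workers_;
      bool stop_ = false;
    };

The point is that thread creation happens n times at startup, not once
per request; under steady load no threads are created or destroyed at all.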

Is anyone else depending on the thread pool policy added to omniORB4, and
seeing excessive thread creation?  If my analysis is correct, is there
interest in an "idle" thread pool policy to reduce the number of threads
created and destroyed?

My concern stems from this scenario: a J2EE client tier (servlet/JSP)
hitting a C++ omniORB server.  The Java ORB multiplexes calls over a small
number (possibly one?) of connections, but we need the highest performance
possible.  Some requests should take only 5-10ms while others may take
several seconds.  The number of concurrent requests could be large (10-15?).
Thread creation/destruction overhead is not acceptable.  I suspect this
isn't a unique role for omniORB to be asked to play.

Thanks,
Kendall