[omniORB] Scalability problems; 300+ clients

Duncan Grisby duncan at grisby.org
Wed Jul 5 22:39:54 BST 2006


On Wednesday 5 July, "Slawomir Lisznianski" wrote:

> OK guys, here is some additional info. When we use
> threadPerConnectionPolicy=1 we do _NOT_ experience any connectivity
> problems unless, of course, we run out of per-process memory on the
> server side at which point omniORB cannot spawn any more worker threads
> (and that's the error it logs). So, with threadPerConnectionPolicy=1 and
> thread stack size set to 1MB, we were able to serve over 1100 clients
> concurrently without getting any errors. We used Red Hat AS 3 running on
> 2.4 SMP kernel, 32-bit architecture.
> 
> As soon as we set threadPerConnectionPolicy=0 we were able to serve at
> most ~330 clients before we started seeing COMM_FAILURE exceptions on
> the client side.

It may be due to having more file descriptors in use than can fit in an
fd_set. In thread pool mode, omniORB has a thread doing select() on all
the open connections. If a connection's file descriptor number reaches
FD_SETSIZE, omniORB can't service it in thread pool mode, so it has to
close it as soon as it's opened, which the client sees as a
COMM_FAILURE. On Linux, FD_SETSIZE is normally 1024, but if your
application opens lots of files, or the server makes callbacks to the
clients, you could easily hit that with 300 or so clients. If you run
with traceLevel 20, you'll see a message whenever omniORB has to close a
connection for that reason.

You might try omniORB 4.1 beta, since that uses poll(), which doesn't
have a file descriptor limit.

Cheers,

Duncan.

-- 
 -- Duncan Grisby         --
  -- duncan at grisby.org     --
   -- http://www.grisby.org --
