[omniORB] omniNames performance issues on Solaris 2.8.....

baileyk@schneider.com baileyk@schneider.com
Thu Nov 21 16:14:01 2002


The name tree is 5 or 6 contexts deep.  The number of sub-contexts is
usually small, about 3 or 4.  At the next-to-lowest level of contexts for
this test there are 21 sub-contexts.  I think listing these 21 is the slow
phase.  Each lowest context typically holds a single object reference, but
could hold two or more if a service is replicated for load balancing
purposes.  In production, the higher level contexts have even fewer
branches.  There are more branches in dev/test since many developers share
a name service to support many sandboxed environments.  The total size of
the omniNames log file in dev is about 100KB.  I haven't counted contexts
and references.
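
For concreteness, a bottom-level entry sits behind a path like the one
below (the component names are made up; only the shape matches our tree):

    // Hypothetical 5-level name; only the depth mirrors our layout.
    CosNaming::Name name;
    name.length(5);
    const char* path[] = { "dev", "sandbox1", "orders", "pricing", "svcA" };
    for (CORBA::ULong i = 0; i < 5; ++i) {
      name[i].id   = CORBA::string_dup(path[i]);
      name[i].kind = CORBA::string_dup("");
    }
    // rootContext->resolve(name) then yields one replica's reference.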

The only measurement of locate requests I know of was taken with the stock
omniNames version, i.e. with non-nil iterators.  However, I believe the 17 sec
-> 14 sec improvement was measured with all our improvements in place, and
just turning off the exists/type checks in omniORB.

Our designs rely on the name service to store small collections of
replicated services.  In order for a client to choose one, it lists the
context to get all replicas.  This doesn't happen too often.  The use case
that is suffering is an intermediate server listing contexts one level
higher to determine all of the distinct service types (not replicas) on a
per-request basis.  I've recommended these be cached in the intermediate
server and refreshed occasionally.  Still, the old design was acceptable
with the Orbix2000 name service.  (I may be off a little on the details
here, since I'm not directly involved in the project suffering the
performance issue.)
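
For reference, the listing that intermediate server does is essentially the
standard CosNaming pattern below (just a sketch, error handling stripped;
"ctx" is the already-resolved parent context):

    #include <omniORB4/CORBA.h>
    // plus the CosNaming stubs for your installation

    void listServiceTypes(CosNaming::NamingContext_ptr ctx)
    {
      CosNaming::BindingList_var     bl;
      CosNaming::BindingIterator_var it;

      ctx->list(100, bl, it);           // up to 100 bindings in one batch

      for (CORBA::ULong i = 0; i < bl->length(); ++i) {
        // bl[i].binding_name identifies each sub-context (service type)
      }

      // With the nil-iterator change discussed in the quoted message below,
      // 'it' comes back nil whenever everything fit in the first batch, so
      // the extra next_n()/destroy() round trips go away.
      if (!CORBA::is_nil(it)) {
        while (it->next_n(100, bl)) {
          // process the remaining bindings
        }
        it->destroy();
      }
    }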

The omni config file uses a stringified IOR (as dumped by omniNames on
startup) to configure the initRef.  We've changed to calling
resolve_initial_references() only once to get the root name service
reference.
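
For reference, the config entry is along these lines (omniORB 4 syntax; the
IOR is the one omniNames prints at startup, abbreviated here):

    InitRef = NameService=IOR:...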

Our designs generally assume the number and location of services can change
at any time.  Clients generally bind to one and maintain an affinity, but
on failure, the name service is used to find the current list of available
replicas.  The problem case was a little unusual, since the intermediate
service was stateless and did not bind to particular references for longer
than the duration of a call from one of its own clients.  In this case it
appears we must duplicate some of the tree structure in the name service
locally in the intermediate service.
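
Roughly, what I have in mind for that local copy is something like the
following (just a sketch; the names and types are illustrative, not our
actual code):

    // Per-service-type cache of replica IORs, held in the intermediate
    // server and refreshed from the name service only when stale.
    #include <map>
    #include <string>
    #include <vector>
    #include <ctime>

    struct CachedReplicas {
      std::vector<std::string> iors;     // stringified replica references
      time_t                   fetched;  // when the context was last listed
    };

    class ReplicaCache {
    public:
      explicit ReplicaCache(long maxAgeSec) : maxAge_(maxAgeSec) {}

      // True if a fresh entry exists; otherwise the caller re-lists the
      // naming context and calls update() with what it found.
      bool lookup(const std::string& type, std::vector<std::string>& out) const {
        std::map<std::string, CachedReplicas>::const_iterator i = cache_.find(type);
        if (i == cache_.end() || time(0) - i->second.fetched > maxAge_) return false;
        out = i->second.iors;
        return true;
      }

      void update(const std::string& type, const std::vector<std::string>& iors) {
        CachedReplicas& c = cache_[type];
        c.iors    = iors;
        c.fetched = time(0);
      }

    private:
      long maxAge_;
      std::map<std::string, CachedReplicas> cache_;
    };

That way a per-request list() becomes a map lookup, and the name service is
only consulted when an entry has gone stale or a call to a cached replica
fails.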

Thanks,
Kendall



                                                                                                                             
Duncan Grisby <duncan@grisby.org>
Sent by: omniorb-list-admin@omniorb-support.com
To:      steinhoffc@schneider.com
cc:      omniorb-list@omniorb-support.com, baileyk@schneider.com,
         LaValleyJ@schneider.com
Date:    11/21/2002 08:40 AM
Subject: Re: [omniORB] omniNames performance issues on Solaris 2.8.....




On Wednesday 20 November, steinhoffc@schneider.com wrote:

>      We then started to look at our code and the omniNames installation.
> Rational Quantify has shown that the block of code is completely dominated
> by the list(), next_n() and destroy() methods to omniNames.  It also
> indicated that the iterator returned from the list() method was not nil,
> but always returned an empty list from the next_n() call, and was then
> destroyed.  This raised the question as to why a non-nil iterator was
> returned in the first place.  This caused us to make the following code
> change in the ./omniORB-4.0.0/src/appl/omniNames/NamingContext_i.cc.

I have checked in Kendall's addition to omniNames. Thanks for that.
Nobody has ever considered the old behaviour a problem before,
presumably because people don't generally require high performance
from the Naming service.

>      We then looked at the ORB and saw that it was doing a lot of
> LOCATE requests to the name service.

Was this with the original version, or the modified one that returned
nil iterators?  With the original, each new iterator would cause a
locate request to check that it really existed.

>  By going back to the
> resolve_initial_references() call each time a lookup is going to
> happen, it may be causing omniORB to 'forget' what it knows about
> the name service's object reference.

In some cases, resolving the root naming context again would cause a
new object reference to be created, and thus trigger a new locate
request. If you can store the object reference yourself, that will
guarantee the locate request doesn't happen again. How are you
configuring the naming service location?  If you are using the old
ORBInitialHost/Port config, that will trigger a request to the service
on every resolve_initial_references call, in addition to the locate
request caused by the new object reference.
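
For example, something along these lines (just a sketch) keeps the
reference for the lifetime of the process:

    #include <omniORB4/CORBA.h>
    // plus the CosNaming stubs for your installation

    CosNaming::NamingContext_var theRootContext;

    void initNaming(CORBA::ORB_ptr orb)
    {
      CORBA::Object_var obj = orb->resolve_initial_references("NameService");
      theRootContext = CosNaming::NamingContext::_narrow(obj);
      // Resolve once; later resolve()/list() calls reuse theRootContext,
      // so the locate request happens at most once.
    }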

How many object references are you storing in the Naming service?
omniNames isn't designed to be particularly efficient, since it's
expected to be used purely to bootstrap applications, rather than
being under heavy load.

Cheers,

Duncan.

--
 -- Duncan Grisby         --
  -- duncan@grisby.org     --
   -- http://www.grisby.org --
_______________________________________________
omniORB-list mailing list
omniORB-list@omniorb-support.com
http://www.omniorb-support.com/mailman/listinfo/omniorb-list