[omniORB] Trouble with life cycle management in omniORBpy

Diez B. Roggisch deets at web.de
Wed Jun 1 23:34:26 BST 2005


Hi, 

I've got trouble with the life-cycle management in my server. I modelled it 
after Duncan's tic-tac-toe game example. The relevant pieces of code 
are these:

def authenticate(self, user, password):
    Authenticator.AUTHENTICATOR.authenticate(user, password)
    context = claros.domain.Manager.create_context(DomainManager.DB, user)
    context.poa = self.root_poa.create_POA("DomainPOA:%i" % self.manager_count,
                                           None, [])
    GarbageCollector.register_context(context)

    self.manager_count += 1
    dm = DomainManager(context)
    # Get the object reference
    dmobj = this(dm, context.poa)
    # Activate the POA
    context.poa._get_the_POAManager().activate()
    return dmobj

This method is called on the base object of my service, which is registered 
with the root POA.

The context object is a per-client session object. It stores db connections 
and the like, as well as the created child POA - as you (hopefully) can see.
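The Context class itself isn't shown above; for illustration, here is a minimal 
sketch of what such a per-client session object might look like. All names here 
(touch, last_access, idle) are assumptions, not the actual claros code:

```python
import time

class Context(object):
    """Hypothetical per-client session object (sketch, not the real claros class)."""

    def __init__(self, db_connection, user):
        self.db = db_connection       # per-client DB connection
        self.user = user
        self.poa = None               # child POA, attached later in authenticate()
        self.last_access = time.time()

    def touch(self):
        # Called whenever a request is dispatched to an object in this
        # context's POA; resets the idle clock.
        self.last_access = time.time()

    @property
    def idle(self):
        # Seconds since the last request from this client.
        return time.time() - self.last_access
```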

The DomainManager is created inside that child POA. All objects that are 
created in response to requests made to it are registered in that child POA 
using this custom method (conveniently called "this"):

def this(o, poa):
    return poa.id_to_reference(poa.activate_object(o))

I don't know if it matters, but this() can be called several times for a given 
servant. What are the possible consequences of that?
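For context: with the default POA policies (RETAIN / UNIQUE_ID), calling 
activate_object() a second time on the same servant instance raises 
ServantAlreadyActive, and every activated servant stays referenced by the 
POA's active object map - and is therefore kept alive - until it is 
deactivated or the POA is destroyed. A pure-Python mock of that bookkeeping 
(this is a toy model, not the real omniORB API):

```python
class ServantAlreadyActive(Exception):
    pass

class MockPOA(object):
    """Toy model of a RETAIN/UNIQUE_ID POA's active object map (not omniORB)."""

    def __init__(self):
        self._aom = {}       # object id -> servant
        self._next_id = 0

    def activate_object(self, servant):
        # UNIQUE_ID: a servant may be active under at most one object id.
        if any(s is servant for s in self._aom.values()):
            raise ServantAlreadyActive(servant)
        oid = self._next_id
        self._next_id += 1
        self._aom[oid] = servant   # the map holds a reference -> servant stays alive
        return oid

    def destroy(self):
        # Destroying the POA drops the map, releasing the servants.
        self._aom.clear()
```

So if each this() call activates a *fresh* servant instance, every one of them 
lives in the map until the child POA is destroyed.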

For example, a Domain (which is managed by DomainManager) is created and 
registered like this (get is defined on DomainManager):

    def get(self, name):
        """ tas::sync """
        try:
            return this(Domain(DomainManager.MANAGER[name], self.context),
                        self.context.poa)
        except KeyError:
            raise omphalos.NoSuchDomain()

Now in a background thread I loop over the registered contexts like this:

    def run(self):
        logger = logging.getLogger("claros.gc")
        logger.debug("#CONTEXTS: %i" % len(self.CONTEXTS))
        # Iterate over a snapshot so that removing a context
        # does not make the iterator skip elements
        for context in list(self.CONTEXTS):
            try:
                if context.idle > self.timeout:
                    logger.debug("Found timed out context")
                    context.poa.destroy(False, False)
                    self.CONTEXTS.remove(context)
            except:
                logger.error(str(sys.exc_info()[1]))
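An aside on that loop: removing items from a Python list while iterating over 
it directly makes the iterator skip the element after each removal, so a 
timed-out context can survive a GC pass; iterating over a snapshot 
(list(self.CONTEXTS)) avoids this. A minimal demonstration:

```python
# Buggy pattern: mutate the list we are iterating over
timed_out = ["a", "b", "c", "d"]
for ctx in timed_out:
    timed_out.remove(ctx)    # each removal shifts items left; iterator skips one
buggy_result = timed_out     # "b" and "d" were never visited

# Safe pattern: iterate over a snapshot, mutate the original
timed_out = ["a", "b", "c", "d"]
for ctx in list(timed_out):
    timed_out.remove(ctx)
safe_result = timed_out      # empty: every element was visited and removed
```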

The context gets its idle timestamp updated whenever a method is called on an 
object belonging to its POA.
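A sketch of how that update might be wired, assuming (as the get() method 
above suggests) that each servant holds a reference to its context; touch() 
is a hypothetical method on the context object:

```python
class Domain(object):
    """Servant sketch: every operation first resets its context's idle clock.
    (touch() is a hypothetical method on the real context object.)"""

    def __init__(self, data, context):
        self.data = data
        self.context = context

    def name(self):
        self.context.touch()   # mark this client as active
        return self.data
```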

So eventually I expect all child POAs to be destroyed. However, for at least 
one context being freed, this shows up in my logfiles:

2005-06-01 22:09:53,039 claros.gc -1318761552 DEBUG #CONTEXTS: 1
2005-06-01 22:09:53,039 claros.gc -1318761552 DEBUG Found timed out context
2005-06-01 22:09:53,040 claros.gc -1318761552 ERROR Minor: 
OBJECT_NOT_EXIST_POANotInitialised, COMPLETED_NO.


And a self-written garbage-collection analysis tool shows me that there are 
references to servants that are never freed.

I'm pretty much lost here - can somebody enlighten me? As the number of 
objects is quite large, my service eats up memory pretty fast if I don't 
solve this problem somehow.

Regards,

Diez B. Roggisch
