[omniORB] ORB initialization with endpoint leaks memory if initialization fails

Duncan Grisby duncan at grisby.org
Thu Sep 3 17:41:43 BST 2009


On Wednesday 2 September, "Göttlicher, Dr.-Ing Martin" wrote:

> in a project I have to use an endpoint to initialize the ORB. If the 
> initialization is successful everything is fine. If the initialization fails 
> (because the port is already occupied) memory is leaked as shown by the 
> following run with valgrind:

It's a bug. The trivial fix is to remove the line that says

  if( !initialised )  return;

at line 504 of src/lib/omniORB/orbcore/objectAdapter.cc

I'll check in that fix.
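For illustration, here is a minimal sketch of the pattern behind the leak
(not the actual objectAdapter.cc code; the names are made up): a cleanup
routine that returns early when initialisation never completed also skips
releasing whatever the failed initialisation had already allocated.

  #include <cstdlib>

  static bool  initialised     = false;
  static char* endpoint_buffer = 0;

  void adapter_init()
  {
    endpoint_buffer = (char*)std::malloc(64);  // allocated before the failure
    // ...binding the endpoint fails here, so initialised stays false...
  }

  void adapter_shutdown()
  {
    if (!initialised) return;  // the line the fix removes; with it in
                               // place, endpoint_buffer is never freed
    std::free(endpoint_buffer);
    endpoint_buffer = 0;
    initialised     = false;
  }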

[...]
> In order to reproduce the problem please compile and run the attached program 
> via the runOrbTest start script (perhaps the LD_LIBRARY_PATH and/or the path 
> to valgrind has to be changed) in two different shell windows. The program 
> instance which is started first will use port 50000. After initializing the 
> ORB the program will sleep for 20 seconds and then terminate. This program 
> instance shows no leaks. If a second program instance is started during the 
> 20-second period, it will fail to bind to port 50000 and then terminate. It's 
> the second program instance that shows the leak above.

I had a very brief look at your code before deciding it was complex
enough that it was simpler just to reproduce your issue with the omniORB
echo examples. However, in my brief look, I did notice that you have
some functions that return _var types. That's always a bad idea, because
it confuses the memory management rules. You should always return _ptr
types and assign the results to _vars.
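As a rough sketch, assuming the Echo interface from the omniORB echo
examples (the helper functions here are made up for illustration):

  #include "echo.hh"   // IDL-generated header from the echo example

  // Return a _ptr; _narrow already hands back a reference the caller
  // owns, so the ownership transfer is explicit.
  Echo_ptr getEcho(CORBA::Object_ptr obj)
  {
    return Echo::_narrow(obj);
  }

  void useEcho(CORBA::Object_ptr obj)
  {
    // Assign the returned _ptr to a _var; the _var releases the
    // reference automatically when it goes out of scope.
    Echo_var e = getEcho(obj);
    if (!CORBA::is_nil(e))
      e->echoString("Hello!");
  }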

Cheers,

Duncan.

-- 
 -- Duncan Grisby         --
  -- duncan at grisby.org     --
   -- http://www.grisby.org --


