[omniORB] COMM_FAILURE for "long" sequences

Duncan Grisby duncan at grisby.org
Tue Mar 22 10:46:53 GMT 2005


On Tuesday 22 March, Mårten Björkman wrote:

> Thank you for your remark. Unfortunately, it's not due to an improperly
> set giopMaxMsgSize, which I believe defaults to about 2MB. If it were, I
> would have got an exception earlier and never reached the socket send().
> It seems that the limit is about 128K. I tested a minimal client-server
> thingie on another (single-processor) Linux machine running kernel
> version 2.4.20 and unfortunately got the same error. If this were a
> Linux error, many others would have seen the same thing, I suppose.
> What makes an omniORB-based application different is the multiple
> threads. Somehow the ORB thread cannot read data stored by the server
> thread if the sequence to be sent is longer than 128K.
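
(For reference, the default giopMaxMsgSize is indeed 2MB. If you ever do
need to raise it, it is just a configuration parameter. Roughly, assuming
the standard omniORB 4 configuration mechanisms, with 10MB as an arbitrary
example value:

    # In the omniORB configuration file (omniORB.cfg):
    giopMaxMsgSize = 10485760

    # Or on the command line of the client/server process:
    myserver -ORBgiopMaxMsgSize 10485760

    // Or programmatically, via the extra options argument omniORB adds
    // to ORB_init -- sketch only, argument handling elided:
    const char* options[][2] = { { "giopMaxMsgSize", "10485760" },
                                 { 0, 0 } };
    CORBA::ORB_var orb = CORBA::ORB_init(argc, argv, "omniORB4", options);

That is not the problem here, though, since 128K is well under the 2MB
default.)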

There is no "ORB thread". If you allocate your buffer in an up-call, it
is the same thread that tries to do the send as was used for the
up-call.

Are you sure you're not accidentally releasing the image buffer before
you return from the operation?  I don't know how your Image class works,
but it looks to me like there's a good chance it's releasing the memory
before you return. What happens if you run your code in valgrind?
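For illustration, here is the kind of thing I mean, sketched with a
hypothetical IDL type "typedef sequence<octet> Data" and a hypothetical
Image class -- not your actual code:

    // Assumed servant class ImageServer_i implementing an operation
    // "Data getImage()".  Needs <cstring> for std::memcpy.

    // Safe: copy the pixels into a sequence that the ORB owns.
    Data* ImageServer_i::getImage()
    {
      Image img = grab();                     // img owns its pixel buffer
      Data_var seq = new Data();
      seq->length(img.size());
      std::memcpy(seq->get_buffer(), img.data(), img.size());
      return seq._retn();                     // the ORB marshals the copy
    }

    // Dangerous: alias the Image's buffer without transferring ownership.
    // If ~Image frees the buffer when img goes out of scope, the ORB
    // marshals freed memory once your method has returned.
    Data* ImageServer_i::getImage_unsafe()
    {
      Image img = grab();
      Data_var seq = new Data();
      seq->replace(img.size(), img.size(),
                   (CORBA::Octet*)img.data(), 0 /* release = false */);
      return seq._retn();
    }

If your code looks like the second version, the fix is either to copy the
data as in the first version, or to guarantee the buffer is not freed
until after the reply has been marshalled.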

If you're definitely not releasing the memory prematurely, try editing
the send function in src/lib/omniORB/orbcore/tcp/tcpConnection.cc. Right
at the start of the function, there's an #ifdef for VMS that limits the
send size to 64K. Try enabling that.
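
From memory (so the exact code may differ in your tree), that part of
tcpConnection::Send() looks something like this; the experiment is to
make the cap unconditional:

    // src/lib/omniORB/orbcore/tcp/tcpConnection.cc
    #ifdef __VMS    // change to "#if 1" to apply the 64K cap everywhere
      // VMS sockets cannot cope with large buffers; cap each send at 64K.
      if (sz > 65536) sz = 65536;
    #endif

If capping the send size makes the COMM_FAILURE go away, that points at
the kernel/socket layer rather than at omniORB itself.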

Cheers,

Duncan.

-- 
 -- Duncan Grisby         --
  -- duncan at grisby.org     --
   -- http://www.grisby.org --


