[omniORB] How to use TCP_USER_TIMEOUT in omniORB 4.1.6

Vishwanath Bhat vishwamegur at gmail.com
Mon Dec 8 14:31:19 GMT 2014


Hello,

We have omniORB 4.1.6 running on Linux kernel 2.6.32. We have a client-server
architecture in which the client needs to detect failure of the server
application when the system hosting the server is being reset. Due to a
change in TCP behaviour in 2.6.32, the client waits more than 10 minutes
before giving up on a send call if the server system resets while the send
is in progress.

We do not want to use setClientCallTimeout, since that would trigger a
timeout even when the server is merely slow to respond because of system
conditions, rather than actually down.
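
(For reference, the global per-call timeout we are ruling out would be set
roughly as below; this is only a sketch, assuming the setClientCallTimeout
declaration in omniORB4/omniORB.h.)

    // Sketch only: the global per-call timeout mechanism we are
    // trying to avoid.
    #include <omniORB4/CORBA.h>

    int main(int argc, char** argv)
    {
      CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);

      // Abort any call that has not completed within 30 seconds.
      // This also fires when the server is merely slow, which is
      // exactly the behaviour we want to avoid.
      omniORB::setClientCallTimeout(30000);   // milliseconds

      // ... resolve references and make invocations as usual ...

      orb->destroy();
      return 0;
    }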

So we want to bail out only on genuine connection failures.

So one option is to use TCP_USER_TIMEOUT
(http://man7.org/linux/man-pages/man7/tcp.7.html) so that we have control
over the TCP-level behaviour of the connection.
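
Roughly what we have in mind is the following (sketch only; sockfd here is
a hypothetical raw descriptor that we would still need to obtain from
omniORB somehow, hence question 1 below):

    /* Sketch only: apply TCP_USER_TIMEOUT to an already-connected
     * socket. Older <netinet/tcp.h> headers may not define the
     * constant, hence the fallback to the value from linux/tcp.h. */
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <cstdio>

    #ifndef TCP_USER_TIMEOUT
    #define TCP_USER_TIMEOUT 18   /* value from linux/tcp.h */
    #endif

    int set_tcp_user_timeout(int sockfd, unsigned int timeout_ms)
    {
      /* Drop the connection if transmitted data stays unacknowledged
       * for longer than timeout_ms milliseconds. */
      if (setsockopt(sockfd, IPPROTO_TCP, TCP_USER_TIMEOUT,
                     &timeout_ms, sizeof(timeout_ms)) != 0) {
        std::perror("setsockopt(TCP_USER_TIMEOUT)");
        return -1;
      }
      return 0;
    }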

Questions:

1/. Is there a way to get the socket descriptor from an object reference?
If so, I could get the sockfd and then call setsockopt on it, along the
lines of the sketch above.

2/. Will omniORB provide any mechanism to make use of TCP_USER_TIMEOUT?

Truly appreciate your help on this.

Thank you,

- Vishwa -