[omniORB] omniORB 2.71 and Stack Size ??

David Riddoch djr@uk.research.att.com
Tue, 13 Apr 1999 10:50:43 +0100 (GMT)


Rosimildo,

The stack size required will vary from platform to platform; it will
depend on the data types you are passing in operation invocations, and
also on what you are doing in your operation upcalls. The only data
types which use recursive code for marshalling/unmarshalling are the
dynamic types (TypeCode and any). The depth of the recursion is
proportional to the depth of the structure of the type being marshalled.
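
To make that concrete, here is a minimal hypothetical sketch (not
omniORB's actual marshalling code): a recursive walk over a
TypeCode-like descriptor, where each level of nesting in the type costs
one stack frame.

======================================================
// Hypothetical sketch only -- not omniORB source.  "TypeNode" stands
// in for a TypeCode-like descriptor: a struct member may itself be a
// struct, so a marshaller must recurse once per level of nesting.
#include <algorithm>
#include <cstddef>
#include <vector>

struct TypeNode {
  std::vector<TypeNode> members;  // empty => a basic, non-nested type
};

// Returns the nesting depth; each level costs one stack frame
// (locals plus call overhead) during a recursive (un)marshal.
std::size_t marshalDepth(const TypeNode& t) {
  std::size_t deepest = 0;
  for (const TypeNode& m : t.members)
    deepest = std::max(deepest, marshalDepth(m));  // recursive descent
  return deepest + 1;
}
======================================================

So a type nested N structs deep needs on the order of N stack frames to
marshal, which is why the stack you need depends on the types you pass.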

Apart from operation invocations, the only other threads created are
the scavengers. I don't believe these need a very large stack.

I suggest you try varying the stack size and see what works! If you are
too optimistic, though, the results will be pretty nasty: overflowing a
thread's stack typically corrupts memory or crashes the process.
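
For reference, the knob itself is the POSIX call used in the posix.cc
excerpt quoted below; a minimal sketch (the 32768 is just the value
from that excerpt, not a recommendation):

======================================================
// Minimal sketch: setting a thread's stack size with POSIX threads.
#include <pthread.h>
#include <limits.h>   // PTHREAD_STACK_MIN
#include <stddef.h>   // size_t

static void* worker(void*) { return 0; }

int main() {
  pthread_attr_t attr;
  pthread_attr_init(&attr);

  // Never go below the platform's floor; above it, tune empirically.
  size_t stack = 32768;          // the value from the posix.cc excerpt
  if (stack < PTHREAD_STACK_MIN) stack = PTHREAD_STACK_MIN;
  pthread_attr_setstacksize(&attr, stack);

  pthread_t t;
  pthread_create(&t, &attr, worker, 0);
  pthread_join(t, 0);
  pthread_attr_destroy(&attr);
  return 0;
}
======================================================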

David


On Thu, 8 Apr 1999, Rosimildo DaSilva wrote:

> Hi,
> 
> A while back, I posted this message and got no answers. So, here we
> go again.
> 
> I am porting omniORB to RTEMS, an embedded RTOS, and I saw a comment
> in posix.cc that states that the stack size for a thread has to be
> around 32K.
> 
> From:  lib/omnithread/posix.cc,  line 531
> ======================================================
> #if defined(__osf1__) && defined(__alpha__) || defined(__VMS)
> 
>     // omniORB requires a larger stack size than the default (21120)
>     // on OSF/1
> 
>     THROW_ERRORS(pthread_attr_setstacksize(&attr, 32768));
> ======================================================
> 
> This seems to be a bit large for an embedded system.
> 
> Can anyone comment on stack sizes in omniORB2?
> 
> What is the minimum stack size for a thread created by the omniORB
> run-time?
> 
> Regards, Rosimildo.
> 
> +---------------------------------------------------------------------
> | ConnectTel, Inc. - Austin, Texas.
> | Phone : 512-338-1111 - Fax : 512-918-0449
> | Email : devl@connecttel.com  or mkting@connecttel.com
> | Site  : http://www.connecttel.com
> +---------------------------------------------------------------------
> 