Tru64 UNIX
Guide to the POSIX Threads Library



2.3.2.2.1 Techniques for Setting the Scheduling Policy Attribute

Use either of two techniques to set a thread attributes object's scheduling policy attribute:

When you change the scheduling policy attribute, you must be sure the scheduling parameter attribute is compatible with the scheduling policy attribute before using the attributes object to create a thread.
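
For example, the following minimal sketch (an illustration only; the routine and thread names are hypothetical) stores the SCHED_RR policy in a thread attributes object and chooses a priority within that policy's range before creating a thread:

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)      /* hypothetical start routine */
{
    return arg;
}

int main(void)
{
    pthread_attr_t attr;
    struct sched_param param;
    pthread_t thread;
    int status;

    pthread_attr_init(&attr);

    /* Use the scheduling attributes stored in this object rather than
       inheriting them from the creating thread. */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

    /* Select the round-robin (real-time) policy. */
    pthread_attr_setschedpolicy(&attr, SCHED_RR);

    /* Keep the scheduling parameters attribute compatible with the new
       policy: pick a priority within the range SCHED_RR allows. */
    param.sched_priority = sched_get_priority_min(SCHED_RR);
    pthread_attr_setschedparam(&attr, &param);

    status = pthread_create(&thread, &attr, worker, NULL);
    if (status != 0)
        fprintf(stderr, "pthread_create failed (%d)\n", status);
    else
        pthread_join(thread, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}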

2.3.2.2.2 Comparing Throughput and Real-Time Policies

The default throughput scheduling policy is intended to be an "adaptive" policy, giving each thread an opportunity to execute based on its behavior. That is, the Threads Library tends to give a thread that does not execute often ready access to the processor, because it is not greatly affecting other threads. On the other hand, the Threads Library tends to give less preference to compute-bound threads that use the throughput scheduling policy.

This yields a responsive system in which all threads with throughput scheduling policy get a chance to run fairly frequently. It also has the effect of automatically resolving priority inversions, because over time any threads that have received less processing time (among those with throughput scheduling policy) will rise in preference while the running thread drops, and eventually the inversion is reversed.

The FIFO and RR scheduling policies are considered "real-time" policies, because they require the Threads Library to schedule such threads strictly by the specified priority. Because threads that use real-time scheduling policies require additional overhead, the incautious use of the FIFO or RR policies can cause the performance of the application to suffer.

If relative priorities of threads are important to your application---that is, if a compute-bound thread really requires consistently predictable execution---then create those threads using either the FIFO or RR scheduling policy. However, use of "real-time" policies can expose the application to unexpected performance problems, such as priority inversions, and therefore their use should be avoided in most applications.

2.3.2.2.3 Portability of Scheduling Policy Settings

Only the SCHED_FIFO and SCHED_RR scheduling policies are portable across POSIX-conformant implementations. The other scheduling policies are extensions to the POSIX standard.

Note

The SCHED_OTHER identifier is portable, but the POSIX standard does not specify the behavior that it signifies. For example, on non-HP or non-Compaq platforms, the SCHED_OTHER scheduling policy could be identical to either the SCHED_FIFO or the SCHED_RR policy.

2.3.2.3 Setting the Scheduling Parameters Attribute

The scheduling parameters attribute specifies the execution priority of a thread. (Although the terminology and format are designed to allow adding more scheduling parameters in the future, only priority is currently defined.) The priority is an integer value, but each policy can allow only a restricted range of priority values. You can determine the range for any policy by calling the sched_get_priority_min() or sched_get_priority_max() routines. The Threads Library also supports a set of nonportable symbols designating the priority range for each policy, as follows:
Policy          Low               High
SCHED_FIFO      PRI_FIFO_MIN      PRI_FIFO_MAX
SCHED_RR        PRI_RR_MIN        PRI_RR_MAX
SCHED_OTHER     PRI_OTHER_MIN     PRI_OTHER_MAX
SCHED_FG_NP     PRI_FG_MIN_NP     PRI_FG_MAX_NP
SCHED_BG_NP     PRI_BG_MIN_NP     PRI_BG_MAX_NP

Section 2.3.6 describes how to specify a priority between the minimum and maximum values, and it also discusses how priority affects thread scheduling.

Use either of two techniques to set a thread attributes object's scheduling parameters attribute:

Note

On Tru64 UNIX Systems:
There are system security issues for threads running with system contention scope. High-priority threads can prevent other users from accessing the system. A system contention scope thread cannot have a priority higher than 19 (the default user priority). A system contention scope thread with SCHED_FIFO policy, because it will prevent execution by other threads of equal priority, cannot have a priority higher than 18.
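
As an illustration of setting the scheduling parameters attribute (a minimal sketch; the function name is hypothetical), the following code queries the priority range for the SCHED_OTHER policy and stores a midrange priority in an attributes object:

#include <pthread.h>
#include <sched.h>

/* Hypothetical helper: store a midrange SCHED_OTHER priority in an
   already initialized attributes object. */
int set_midrange_priority(pthread_attr_t *attr)
{
    struct sched_param param;
    int low  = sched_get_priority_min(SCHED_OTHER);
    int high = sched_get_priority_max(SCHED_OTHER);

    /* Choose a priority midway between the minimum and maximum that the
       policy allows. */
    param.sched_priority = low + (high - low) / 2;

    /* The stored policy and parameters take effect only when the
       inheritsched attribute is PTHREAD_EXPLICIT_SCHED. */
    pthread_attr_setinheritsched(attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(attr, SCHED_OTHER);
    return pthread_attr_setschedparam(attr, &param);
}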

2.3.2.4 Setting the Stacksize Attribute

The stacksize attribute represents the minimum size (in bytes) of the memory required for a thread's stack. To increase or decrease the size of the stack for a new thread, call the pthread_attr_setstacksize() routine and use the specified thread attributes object when creating the thread and stack. You must specify at least PTHREAD_STACK_MIN bytes.

After a thread has been created, your program cannot change the size of the thread's stack. See Section 3.4.1 for more information about sizing a stack.
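
For example, a minimal sketch (the size and start routine are illustrative only) that creates a thread with a stack of roughly one megabyte:

#include <pthread.h>
#include <limits.h>     /* PTHREAD_STACK_MIN */

static void *worker(void *arg)      /* hypothetical start routine */
{
    return arg;
}

int create_with_large_stack(pthread_t *thread)
{
    pthread_attr_t attr;
    size_t size = 1024 * 1024;      /* illustrative choice: about 1 MB */
    int status;

    /* Never request less than the required minimum. */
    if (size < PTHREAD_STACK_MIN)
        size = PTHREAD_STACK_MIN;

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, size);
    status = pthread_create(thread, &attr, worker, NULL);
    pthread_attr_destroy(&attr);
    return status;
}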

2.3.2.5 Setting the Stack Address Attribute

The stack address attribute represents the location or address of a region of memory that your program allocates for use as a thread's stack. The value of the stack address attribute represents the origin of the thread's stack (that is, the initial value to be placed in the thread's stack pointer register). However, please be aware that the actual address you specify, relative to the stack memory you have allocated, is inherently nonportable.

To set the address of the stack origin for a new thread, call the
pthread_attr_setstackaddr() routine, specifying an initialized thread attributes object as an argument, and use the thread attributes object when creating the new thread. Use the pthread_attr_getstackaddr() routine to obtain the value of the stack address attribute of an initialized thread attributes object.

After a thread has been created, your program cannot change the address of the thread's stack.

Code using this attribute is nonportable because the meaning of "stack address" is undefined and untestable. Generally, implementations likely assume, as does the Threads Library, that you have specified the initial stack pointer; however, this is not required by the standards. Even so, some machines' stacks grow up while others grow down, and many may modify the stack pointer either before or after writing (or reading) data. In other words, one system may require that you pass the base, another base - sizeof(int), another base + size, another base + size + sizeof(long). Furthermore, the system cannot know the size of the stack, which may restrict the ability of debuggers and other tools to help you. As long as you are using an inherently nonportable interface, consider using pthread_attr_setstackaddr_np().

You cannot create two concurrent threads that use the same stack address. The amount of storage you provide must be at least PTHREAD_STACK_MIN bytes.

The system uses an unspecified (and varying) amount of the stack to "bootstrap" a newly created thread.
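
The following nonportable sketch (the function and variable names are illustrative) allocates storage for one thread's stack and passes its high end as the stack address, on the assumption that the implementation treats the attribute as the initial stack pointer and that the stack grows toward lower addresses; on another system the correct value might be the base of the region, or some other offset:

#include <pthread.h>
#include <stdlib.h>
#include <limits.h>     /* PTHREAD_STACK_MIN */

static void *worker(void *arg)      /* hypothetical start routine */
{
    return arg;
}

int create_on_private_stack(pthread_t *thread)
{
    size_t size = PTHREAD_STACK_MIN;
    void *stack = malloc(size);     /* must never be reused by another thread */
    pthread_attr_t attr;
    int status;

    if (stack == NULL)
        return -1;

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, size);

    /* Assumption: the stack "address" is the initial stack pointer and the
       stack grows downward, so pass the high end of the region. */
    pthread_attr_setstackaddr(&attr, (char *)stack + size);

    status = pthread_create(thread, &attr, worker, NULL);
    pthread_attr_destroy(&attr);
    return status;
}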

2.3.2.6 Setting the Guardsize Attribute

The guardsize attribute represents the minimum size (in bytes) of the guard area for the stack of a thread. A guard area can help a multithreaded program detect overflow of a thread's stack. A guard area is a region of no-access memory that is allocated at the overflow end of the thread's writable stack. When the thread attempts to access a memory location within the guard area, a memory addressing violation occurs.

A new thread can be created using a thread attributes object with a default guardsize attribute value. This value is platform dependent, but will always be at least one "hardware protection unit" (that is, at least one page; non-zero values are rounded up to the next integral page size). For more information, see this guide's platform-specific appendixes.

The Threads Library allows your program to specify the size of a thread stack guard area for two reasons:

  - When a thread allocates large data structures on its stack, a guard area of the default size might be too small to catch a stack overflow; the overflowing access can skip past the guard area entirely.

  - Conversely, a program that creates a very large number of threads, none of which risks overflowing its stack, can conserve memory by specifying a guardsize of zero, which disables guard areas for those threads.

To set the guardsize attribute of a thread attributes object, call the pthread_attr_setguardsize() routine. To obtain the value of the guardsize attribute in a thread attributes object, call the pthread_attr_getguardsize() routine.
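
For example, a minimal sketch (the guard size chosen here is arbitrary) that requests a guard area of several pages for a thread expected to place large objects on its stack:

#include <pthread.h>
#include <unistd.h>     /* sysconf */

static void *worker(void *arg)      /* hypothetical start routine */
{
    return arg;
}

int create_with_large_guard(pthread_t *thread)
{
    pthread_attr_t attr;
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    int status;

    pthread_attr_init(&attr);

    /* Request four pages of no-access memory at the overflow end of the
       stack; the value is rounded up to whole pages in any case. */
    pthread_attr_setguardsize(&attr, 4 * page);

    status = pthread_create(thread, &attr, worker, NULL);
    pthread_attr_destroy(&attr);
    return status;
}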

2.3.2.7 Setting the Contention Scope Attribute

When creating a thread, you can specify the set of threads with which this thread competes for processing resources. This set of threads is called the thread's contention scope.

A thread attributes object includes a contention scope attribute. The contention scope attribute specifies whether the new thread competes for processing resources only with other threads in its own process, called process contention scope, or with all threads on the system, called system contention scope.

Use the pthread_attr_setscope() routine to set an initialized thread attributes object's contention scope attribute. Use the pthread_attr_getscope() routine to obtain the value of the contention scope attribute of an initialized thread attributes object. You must also set the inheritsched attribute to PTHREAD_EXPLICIT_SCHED to prevent a new thread from inheriting its contention scope from the creator.

In the thread attributes object, set the contention scope attribute's value to PTHREAD_SCOPE_PROCESS to specify process contention scope, or set the value to PTHREAD_SCOPE_SYSTEM to specify system contention scope.
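
For example, a minimal sketch (the function name is hypothetical) that requests system contention scope and falls back to process contention scope where system scope is not supported:

#include <pthread.h>
#include <errno.h>

/* Hypothetical helper: prepare an attributes object that requests system
   contention scope.  The scope stored here takes effect only because the
   inheritsched attribute is set to PTHREAD_EXPLICIT_SCHED. */
int init_system_scope_attr(pthread_attr_t *attr)
{
    int status = pthread_attr_init(attr);
    if (status != 0)
        return status;

    pthread_attr_setinheritsched(attr, PTHREAD_EXPLICIT_SCHED);

    status = pthread_attr_setscope(attr, PTHREAD_SCOPE_SYSTEM);
    if (status == ENOTSUP) {
        /* System contention scope is not available on this host; use
           process contention scope instead. */
        status = pthread_attr_setscope(attr, PTHREAD_SCOPE_PROCESS);
    }
    return status;
}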

The Threads Library selects at most one thread to execute on each processor at any point in time. The Threads Library resolves the contention based on each thread's scheduling attributes (for example, priority) and scheduling policy (for example, round-robin).

A thread created using a thread attributes object whose contention scope attribute is set to PTHREAD_SCOPE_PROCESS contends for processing resources with other threads within its own process that also were created with PTHREAD_SCOPE_PROCESS. It is unspecified how such threads are scheduled relative to threads in other processes or threads in the same process that were created with PTHREAD_SCOPE_SYSTEM contention scope.

A thread created using a thread attributes object whose contention scope attribute is set to PTHREAD_SCOPE_SYSTEM contends for processing resources with other threads in any process that also were created with PTHREAD_SCOPE_SYSTEM.

Whether process contention scope and system contention scope are available for your program's threads depends on the host operating system. Attempting to set the contention scope attribute to a value not supported on your system will result in a return value of [ENOTSUP]. The following table summarizes support for thread contention scope by operating system:

Table 2-1 Support for Thread Contention Scope
Operating System   Available Thread Contention Scopes   Default Thread Contention Scope
Tru64 UNIX         Process, System                      Process
OpenVMS            Process                              Process

Note

On Tru64 UNIX systems:

When a thread creates a system contention scope thread, the creation can fail with an [EPERM] error condition, because a system contention scope thread with a priority above the "default" priority can be created only if the process is running with root privileges.

2.3.3 Terminating a Thread

Terminating a thread means causing a thread to end its execution. This can occur for any of the following reasons:

  - The thread returns from its start routine; this is the usual case.

  - The thread calls the pthread_exit() routine.

  - The thread is canceled; that is, another thread (or the thread itself) requests its termination by calling pthread_cancel().

When a thread terminates, the Threads Library performs these actions:

  1. It writes a return value into the terminated thread's thread object:
    - If the thread terminated by returning from its start routine, the value returned by the start routine is used.
    - If the thread terminated by calling pthread_exit(), the value passed to pthread_exit() is used.
    - If the thread terminated due to cancelation, the value PTHREAD_CANCELED is used.
    Another thread can obtain this return value by joining with the terminated thread (using pthread_join()). See Section 2.3.5 for a description of joining with a thread.

    Note

    If the thread terminated by returning from its start routine normally and the start routine does not provide a return value, the results obtained by joining with that thread are unpredictable.
  2. If the termination results from either a cancelation or a call to pthread_exit(), the Threads Library calls, in turn, each cleanup handler that this thread declared (using pthread_cleanup_push()) and that had not yet been removed (using pthread_cleanup_pop()). (It also transfers control to any appropriate CATCH, CATCH_ALL, or FINALLY blocks, as described in Chapter 5. You can also use Compaq C's structured exception handling (SEH) extensions.)
    The Threads Library calls the terminated thread's most recently pushed cleanup handler first. See Section 2.3.3.1 for more information about cleanup handlers.
    For C++ programmers: At normal exit from a thread, your program will call the appropriate destructor functions. You can also catch the exit or cancel exception using a catch(...) clause.
    When a thread exits due to a call to pthread_exit(), the Threads Library raises the pthread_exit_e exception. When a thread exits due to cancelation, the Threads Library raises the pthread_cancel_e exception.
    Your program can use the exception package to operate on the generated exception. (Note that the practice of using CATCH handlers in place of pthread_cleanup_push() is not portable.) Chapter 5 describes the exception package. The name of the native system extension, or that seen by C++, varies by platform.
  3. For each of the terminated thread's thread-specific data keys that has a non-NULL value and a non-NULL destructor function, the Threads Library sets the key's value for the thread to NULL and then calls the destructor function, passing the previous value as its argument. (A minimal sketch of this mechanism appears after this list.)
    This step is repeated until all thread-specific data values in the thread are NULL, or for up to a number of iterations equal to PTHREAD_DESTRUCTOR_ITERATIONS (4). This destroys all thread-specific data associated with the terminated thread. See Section 2.6 for more information about thread-specific data. Note that if non-NULL values remain after 4 iterations through the thread's thread-specific data values, they are ignored. This may result in an application memory leak, and should be avoided.
  4. The thread (if there is one) that is currently waiting to join with the terminated thread is awakened. That is, the thread that is waiting in a call to pthread_join() is awakened.
  5. If the thread is already detached or if there was a thread waiting in a call to pthread_join(), its storage is destroyed. Otherwise, the thread continues to exist until detached or joined with. Section 2.3.4 describes detaching and destroying a thread.
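
The following minimal sketch (the key, destructor, and start routine are illustrative only) shows the thread-specific data cleanup described in step 3: each thread stores a heap buffer under a key, and the key's destructor frees that buffer when the thread terminates:

#include <pthread.h>
#include <stdlib.h>

static pthread_key_t buffer_key;    /* hypothetical thread-specific data key */

/* Called during thread termination for each key whose value in the
   terminating thread is non-NULL: the library resets the stored value to
   NULL and passes the old value to this destructor. */
static void free_buffer(void *value)
{
    free(value);
}

static void *worker(void *arg)      /* hypothetical start routine */
{
    /* Each thread gets its own buffer; the destructor frees it when the
       thread terminates, so the buffer is not leaked. */
    pthread_setspecific(buffer_key, malloc(128));
    return arg;
}

int main(void)
{
    pthread_t thread;

    pthread_key_create(&buffer_key, free_buffer);
    pthread_create(&thread, NULL, worker, NULL);
    pthread_join(thread, NULL);
    return 0;
}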

After a thread terminates, it continues to exist as long as it is not detached. This means that storage, including stack, may remain allocated. This allows another thread to join with the terminated thread (see Section 2.3.5).

When a terminated thread is no longer needed, your program should detach that thread (see Section 2.3.4).
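
For example, a minimal sketch (the start routine and its result are illustrative) of joining with a terminated thread to retrieve its return value; the join also allows the thread's storage to be reclaimed:

#include <pthread.h>
#include <stdio.h>

static void *count_items(void *arg)     /* hypothetical start routine */
{
    (void)arg;
    return (void *)(long)42;    /* becomes the thread's return value */
}

int main(void)
{
    pthread_t thread;
    void *result;

    pthread_create(&thread, NULL, count_items, NULL);

    /* pthread_join() waits for the thread to terminate, obtains its
       return value, and detaches it so its storage can be reclaimed. */
    pthread_join(thread, &result);
    printf("thread returned %ld\n", (long)result);

    return 0;
}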

Note

For Tru64 UNIX systems:

When the initial thread in a multithreaded process returns from the main routine, the entire process terminates, just as it does when a thread calls exit() .

For OpenVMS systems:

When the initial thread in a multithreaded image returns from the main routine, the entire image terminates, just as it does when a thread calls SYS$EXIT.

2.3.3.1 Cleanup Handlers

A cleanup handler is a routine you provide that is associated with a particular lexical scope within your program and that can be invoked when a thread exits that scope. The cleanup handler's purpose is to restore that portion of the program's state that has been changed within the handler's associated lexical scope. In particular, cleanup handlers allow a thread to react to thread-exit and cancelation requests.

Your program declares a cleanup handler for a thread by calling the pthread_cleanup_push() routine. Your program removes (and optionally invokes) a cleanup handler by calling the pthread_cleanup_pop() routine.

A cleanup handler is invoked when the calling thread exits the handler's associated lexical scope, due to:

  - The thread calling pthread_cleanup_pop() with a nonzero execute argument (normal exit from the scope)

  - The thread calling pthread_exit()

  - The cancelation of the thread

For each call to pthread_cleanup_push() , your program must contain a corresponding call to pthread_cleanup_pop() . The two calls form a lexical scope within your program. One pair of calls to pthread_cleanup_push() and pthread_cleanup_pop() cannot overlap the scope of another pair; however, pairs of calls can be nested.
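
The following minimal sketch (the mutex, condition variable, and routines are illustrative) shows a matched pthread_cleanup_push()/pthread_cleanup_pop() pair that releases a mutex whether the waiting thread exits the scope normally, calls pthread_exit(), or is canceled:

#include <pthread.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int data_ready;

/* Cleanup handler: releases the mutex if the thread terminates while the
   mutex is held. */
static void unlock_mutex(void *arg)
{
    pthread_mutex_unlock(arg);
}

static void *consumer(void *arg)        /* hypothetical start routine */
{
    pthread_mutex_lock(&lock);
    pthread_cleanup_push(unlock_mutex, &lock);

    /* pthread_cond_wait() is a cancelation point; without the handler, a
       canceled thread would terminate still holding the mutex. */
    while (!data_ready)
        pthread_cond_wait(&ready, &lock);

    /* A nonzero argument makes pthread_cleanup_pop() run the handler,
       which also unlocks the mutex on the normal path. */
    pthread_cleanup_pop(1);
    return arg;
}

int main(void)
{
    pthread_t thread;

    pthread_create(&thread, NULL, consumer, NULL);

    pthread_mutex_lock(&lock);
    data_ready = 1;
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);

    pthread_join(thread, NULL);
    return 0;
}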

Because cleanup handlers are specified by the POSIX standard, they are a portable mechanism. An alternative to using cleanup handlers is to define and/or catch exceptions with the exception package. Chapter 5 describes how to use the exception package. Cleanup handler routines, exception handling clauses (that is, CATCH, CATCH_ALL, FINALLY), and C++ object destructors (or catch(...) clauses) are functionally equivalent mechanisms.

