This chapter describes how to improve your Oracle 8.1.7.x/9i database performance. It describes several monitoring tools and offers tuning recommendations, including:
Monitoring Oracle statistics (Section 4.1)
Improving the performance of the gettimeofday() function (Section 4.2)
Choosing and enabling IPC communication protocols (Section 4.3)
Note
This manual assumes that you are using Oracle Version 8.1.7 or higher. This version requirement is important because Oracle Version 8.1.7 and higher use the direct I/O capabilities of AdvFS to bypass the Unified Buffer Cache (UBC) part of the file system layers.
4.1 Monitoring Oracle Statistics
There are several commands and utilities that you can use to gather system performance information. It is important that you gather statistics under a variety of conditions. Comparing sets of data will help you to diagnose performance problems.
Table 4-1 describes the tools that you can use to monitor a system running an Oracle application.
Table 4-1: Tools to Detect Poor Oracle Application Performance
| Tools | Description | Reference |
| collect | Records or displays specific operating system data. It also gathers vital system performance information for specific subsystems. | Section 2.3.2 |
| lockinfo | Collects and displays locking statistics for the kernel SMP locks. It uses the /dev/lockdev pseudo driver to collect data. | Section 2.4.1 |
| sched_stat | Helps determine how well the system load is distributed among CPUs, what kinds of jobs are getting or not getting sufficient cycles on each CPU, and how well cache affinity is being maintained for these jobs. | Section 2.4.2 |
See collect(8), lockinfo(8), and sched_stat(8) for more information.
4.2 Improving the Performance of the gettimeofday() Function
The Oracle server times many functions as it executes. This is especially true if the INIT.ORA parameter timed_statistics is set to TRUE. These timing functions result in system calls into the operating system kernel, which can degrade Oracle performance because the calling process relinquishes the CPU. Tru64 UNIX provides a feature that gives a process direct access to the operating system's real-time clock.
Using this feature improves performance on a heavily used system. It also improves performance on a lightly loaded system, although the improvement may be less noticeable.
Note
This feature is supported on Oracle Version 7.3 and higher.
To enable this feature, enter the following commands:
# mknod /dev/timedev c 150 0
# chmod 644 /dev/timedev
If you are working in a cluster, enter the commands on each cluster member. The /dev/timedev special file persists across system reboots.
To use this feature with Oracle, the instance must be restarted, because the existence of the /dev/timedev file is checked only on instance startup. We recommend that all instances in a cluster (and hence all nodes) have this feature enabled.
See gettimeofday(2) for more information.
4.3 Choosing and Enabling IPC Communication Protocols
Oracle can use either UDP or RDG (Reliable Datagram) for DLM/IPQ interinstance communication. We recommend using RDG as the protocol for IPC instead of UDP.
Note
Although Oracle 8.1.7 supports RDG for communication, we recommend that you do not enable it and instead continue to use UDP. In some cases it might also be necessary to use UDP as the communication protocol in an Oracle 9i environment.
The following commands show how to enable and disable the different protocols for IPC.
Use the following command to disable NUMA support on Oracle 8i or 9i:
# cd $ORACLE_HOME/rdbms/lib
# make -f ins_rdbms.mk numa_off
# make -f ins_rdbms.mk ioracle
Enabling NUMA support in Oracle is currently not supported for Oracle 8.1.7/OPS and 9.0.1/RAC installations. If you are planning to use RAC or OPS you must disable NUMA.
Use the following commands to enable Oracle 9i RAC or 8.1.7 Oracle Parallel Server:
In all cases:
#cd $ORACLE_HOME/rdbms/lib
For Oracle 8i:
# make -f ins_rdbms.mk ops_on
# make -f ins_rdbms.mk ioracle
For Oracle 9i:
# make -f ins_rdbms.mk rac_on
# make -f ins_rdbms.mk ioracle
To make IPC use the UDP Protocol (default for Oracle 8i OPS):
# make -f ins_rdbms.mk ipc_udp
# make -f ins_rdbms.mk ioracle
To use reliable datagram (RDG) for IPC (default for Oracle 9i):
# make -f ins_rdbms.mk ipc_rdg
# make -f ins_rdbms.mk ioracle
4.4 Tuning Kernel Subsystem Attributes
There are many kernel subsystem attributes that affect Oracle 8.1.7.x/9i database performance. This section offers primary tuning recommendations for some of the attributes for the following subsystems:
Virtual Memory (Section 4.4.1)
Advanced File System (Section 4.4.2)
Virtual File System (Section 4.4.3)
Interprocess Communication (Section 4.4.4)
Internet (Section 4.4.5)
Process (Section 4.4.6)
Real time (Section 4.4.7)
Reliable Datagram (Section 4.4.8)
Memory Channel (Section 4.4.9)
Note
Some kernel subsystem attributes enable you to modify their value and apply the value to a running system. Other attributes require you to reboot the system to use a new value. See Section 3.3.1 to determine if an attribute can be tuned at run time.
See sys_attrs(5) for more information.
4.4.1 Modifying Virtual Memory Attributes
You may be able to improve Oracle 8.1.7.x/9i database performance by tuning the following vm subsystem attributes:
new_wire_method (Section 4.4.1.1)
rad_gh_regions (Section 4.4.1.2.1)
gh_chunks (Section 4.4.1.2.2)
ubc_maxpercent (Section 4.4.1.3)
ubc_borrowpercent (Section 4.4.1.4)
vm_ubcseqpercent (Section 4.4.1.5)
vm_ubcseqstartpercent (Section 4.4.1.6)
vm_ubcdirtypercent (Section 4.4.1.7)
vm_swap_eager (Section 4.4.1.8)
See sys_attrs_vm(5) for more information.
4.4.1.1 Disabling Shared Memory
Using granularity hints shared memory, often referred to as large pages, is the recommended method of setting shared memory for Oracle 8.1.7.x/9i. However, using segmented shared memory (SSM), which is controlled by the ssm_threshold attribute in the ipc subsystem, is also supported and is enabled by default.
Set new_wire_method to 0 if SSM is used; for example, if ssm_threshold has its default value of 8 MB, set new_wire_method to 0. There is an interaction between the asynchronous I/O (AIO) and page wiring mechanisms that can result in high system time. To avoid this issue, we recommend setting the new_wire_method attribute to 0. The new_wire_method default is 1. Disabling this tunable attribute does not result in any negative performance impacts (see sys_attrs_vm(5)).
One way to avoid the interaction between new_wire_method and ssm_threshold is to use granularity hints memory (gh_chunks or rad_gh_regions) to prewire memory for the database to use. See Section 4.4.1.2.2 and Section 4.4.1.2.1 for more information on gh_chunks and rad_gh_regions.
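A persistent change to this attribute is normally made through /etc/sysconfigtab. The following sketch shows the standard stanza format; the value shown assumes SSM is in use, as recommended above:

```shell
# Illustrative /etc/sysconfigtab stanza (a reboot is required for the
# new_wire_method change to take effect):
#
#   vm:
#       new_wire_method = 0
#
# To query the current value on a running Tru64 UNIX system:
#
#   sysconfig -q vm new_wire_method
```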
4.4.1.2 Allocating Shared Memory
Two options for allocating shared memory for Oracle 8.1.7.x/9i are supported. The traditional option is to use segmented shared memory (SSM), which is controlled through the tunable ssm_threshold attribute in the ipc subsystem. The ssm_threshold attribute is enabled by default.
Using granularity hints, such as gh_chunks, often referred to as large pages, is the preferred method of allocating shared memory on GS80/160/320 systems running Oracle 8.1.7.x/9i. (Granularity hints attributes are documented in sys_attrs_vm(5).) Oracle uses shmget() or nshmget() to allocate shared memory. Use the vmstat -P command to determine the amount of granularity hints memory configured on a system.
The two attributes that determine how chunks of memory are reserved for shared memory are rad_gh_regions and gh_chunks. The use of rad_gh_regions and gh_chunks changes depending on which system you are running:
NUMA-aware systems (GS80, GS160, GS320) should set the rad_gh_regions attribute. Although the gh_chunks attribute applies to both non-NUMA and NUMA-aware systems, on NUMA-aware systems the gh_chunks attribute affects only the first Resource Affinity Domain (RAD) supported by the system (see Section 4.4.1.2.1).
We recommend setting the gh_chunks attribute if you want to use granularity hints memory allocation on non-NUMA systems such as the ES, DS, and GS140 platforms (see Section 4.4.1.2.2).
4.4.1.2.1 Modifying the rad_gh_regions Attribute
To set the rad_gh_regions attribute on GS80/GS160/GS320 platforms, specify the amount of memory per RAD/QBB, in MB, through the corresponding rad_gh_regions attribute. For example, to allocate 2 GB on QBB0, change the rad_gh_regions attribute to 2048 MB.
To determine the rad_gh_regions setting, take your planned or projected Oracle SGA size and divide it by the number of QBBs/RADs your GS system has configured. For each rad_gh_regions[x] (where x represents the QBB identifier from 0-7), specify the required value in MB. For example, if you are running a GS320 platform that has 64 GB of main memory in 8 QBBs, and your Oracle SGA is sized at 16 GB, change the value to at least 2048 MB for rad_gh_regions[0] to rad_gh_regions[7].
We recommend that the sum of rad_gh_regions[*] be set to at least the size of the Oracle SGA. However, you might consider allocating a larger value for rad_gh_regions in order to resize the Oracle SGA without having to reboot the system. Changes to rad_gh_regions require a system reboot; rad_gh_regions is not a dynamic system tunable.
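The sizing rule above can be checked with simple shell arithmetic. This sketch uses the example figures from the text (a 16-GB SGA on a GS320 with 8 QBBs):

```shell
# Example values from the text: 16-GB SGA spread across 8 QBBs/RADs
SGA_MB=$((16 * 1024))          # planned Oracle SGA size, in MB
QBBS=8                         # number of QBBs/RADs configured
PER_RAD_MB=$((SGA_MB / QBBS))  # per-RAD allocation, in MB

echo "set rad_gh_regions[0] through rad_gh_regions[$((QBBS - 1))] to at least ${PER_RAD_MB} MB each"
```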
Shared memory should generally be allocated in striped mode in order to distribute memory across all available RADs. Changing to a sequential allocation policy may adversely affect performance as it may cause hotspots in individual RADs.
If you set rad_gh_regions to any number except 0, you must also disable SSM by setting ssm_threshold to 0. The default value of the rad_gh_regions attribute is 0 (disabled).
See Section 4.4.1.2.2 for more information about granularity hints memory allocation.
4.4.1.2.2 Modifying the gh_chunks Attribute
The gh_chunks attribute specifies the number of 4-MB chunks of memory reserved at boot time for shared memory use. This memory cannot be used for any other purpose, nor can it be returned to the system or reclaimed when not being used.
There is only about a 7 percent overall performance improvement when using gh_chunks. Therefore, it might not be the best option for most systems due to its complexity to implement. However, some systems might still benefit from using gh_chunks, especially Oracle 8.1.7.x/9i environments that have a very large number of clients connecting to and disconnecting from the database. In this type of environment, using gh_chunks can result in a significant performance increase.
We recommend setting gh_chunks to at least the Oracle SGA size expressed in 4-MB units. Calculate the value for gh_chunks by dividing Oracle's SGA size by 4 MB. For example, if your Oracle SGA is sized at 16 GB (16,384 MB), divide 16,384 MB by 4 MB, for a result of 4096. This equals setting gh_chunks to 4096.
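The same calculation can be sketched in shell arithmetic, using the 16-GB SGA from the example:

```shell
# gh_chunks is counted in 4-MB chunks; the 16-GB SGA is the text's example
SGA_MB=$((16 * 1024))                # Oracle SGA size, in MB
CHUNK_MB=4                           # gh_chunks unit size, in MB
GH_CHUNKS=$((SGA_MB / CHUNK_MB))     # number of 4-MB chunks to reserve

echo "set gh_chunks to ${GH_CHUNKS}"
```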
If you set gh_chunks to any number except 0, you must also disable SSM by setting ssm_threshold to 0. The default value of gh_chunks is 0 (disabled).
4.4.1.3 Modifying the Percentage of Physical Memory the UBC is Using
The ubc_maxpercent attribute specifies the maximum percentage of physical memory that the UBC can use at one time. Oracle 8.1.7.x/9i uses AdvFS direct I/O; therefore you do not need to artificially restrict the Unified Buffer Cache (UBC). We recommend decreasing the percentage of physical memory that the UBC can use at one time to a smaller value to prevent double caching in the file system. However, setting ubc_maxpercent to a very low value may cause contention in the kernel and negatively impact performance.
We recommend decreasing the percentage of physical memory that the UBC can use at one time to 70 percent. The value should not be set smaller than 35 percent.
4.4.1.4 Modifying the Percentage of Memory the UBC is Borrowing
The ubc_borrowpercent attribute specifies the percentage of memory above which the UBC is only borrowing memory from the virtual memory subsystem. Paging does not occur until the UBC has returned all its borrowed pages.
The ubc_borrowpercent default value is 20 percent, which is a good percentage for most systems. However, if you are running a database server without any current interactive users, consider decreasing the value to 10 percent to improve backup performance.
4.4.1.5 Modifying the Percentage of Memory the UBC Can Use For a Single File
The vm_ubcseqpercent attribute specifies the maximum percentage of UBC memory that can be used to cache a single file. See Section 4.4.1.6 for more information about controlling when the UBC checks this limit.
The vm_ubcseqpercent default value is 10 percent, which is a good percentage for most systems. However, if the system you are running is a database server-only environment, consider decreasing the value to 5 percent to improve performance during backups.
4.4.1.6 Modifying the UBC Threshold
The vm_ubcseqstartpercent attribute specifies a threshold value (a percentage of the UBC in terms of its current size) that determines when the UBC starts to check the percentage of UBC pages cached for each file object. If the cached page percentage for any file exceeds the value of vm_ubcseqpercent, the UBC returns that file's UBC LRU pages to virtual memory. See Section 4.4.1.5 for more information about the vm_ubcseqpercent attribute.
Note
The vm_ubcseqstartpercent attribute is defined as a percentage of the ubc_maxpercent attribute, which is itself a percentage of available memory. This definition has no effect if ubc_maxpercent has its default value (100 percent). However, it has implications if the value of ubc_maxpercent has been lowered. For example, the value for vm_ubcseqstartpercent should be set to 25.
4.4.1.7 Modifying the Percentage of Pages that Must be Dirty
The vm_ubcdirtypercent attribute specifies the percentage of pages that must be dirty (modified) before the UBC starts writing them to disk. For most systems the default value of 10 percent is effective. However, if you are running a system with a lot of file system/UBC activity that would benefit from keeping file system pages in the UBC, increase the value to 90 percent.
4.4.1.8 Modifying the Swap Allocation Mode
The vm_swap_eager attribute controls how the system uses the available swap space by specifying the swap allocation mode, which can be immediate mode (1) or deferred mode (0). There is no performance benefit attached to either mode.
The swap space allocation modes are as follows:
Immediate mode: This mode reserves swap space when a process first allocates anonymous memory. Immediate mode is the default swap space allocation mode and is also called eager mode.
Immediate mode may cause the system to reserve an unnecessarily large amount of swap space for processes. However, it ensures that swap space will be available to processes if it is needed. Immediate mode is recommended for systems that overcommit memory (that is, systems that page).
Deferred mode: This mode reserves swap space only if the virtual memory subsystem needs to write a modified virtual page to swap space. It postpones the reservation of swap space for anonymous memory until it is actually needed. Deferred mode is also called lazy mode.
Deferred mode requires less swap space than immediate mode and may cause the system to run faster because it requires less swap space bookkeeping. However, because deferred mode does not reserve swap space in advance, the swap space may not be available when a process needs it, and the process may be killed asynchronously. Deferred mode is recommended for large-memory systems or systems that do not overcommit memory (page).
If you set the vm_swap_eager attribute to 1 (the default), your system is in eager swap allocation mode. In this mode, sum the anonymous virtual memory for all processes and add at least 10 percent to size your swap space. Use eager swap allocation mode for highly reliable systems that overcommit memory.
If you set the vm_swap_eager attribute to 0, your system is in lazy swap allocation mode. In this mode, sum the anonymous virtual memory for all processes and subtract half of the physical memory. Use lazy mode for any system that does not overcommit memory.
If vm_swap_eager is set to 0 and the system is in danger of running out of available swap space, a process that attempts to allocate swap space is killed. There is no mechanism to protect processes from being killed in this situation.
We recommend that the Oracle 8.1.7.x/9i database server environment have enough memory available to set vm_swap_eager to 1. However, if the workload is well understood and the system has been configured with enough memory to prevent it from swapping, you can set vm_swap_eager to 0. The default value is 1 (eager swap allocation mode).
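The two swap-sizing rules above can be sketched with shell arithmetic. The anonymous-memory and physical-memory figures below are hypothetical example values, not recommendations:

```shell
# Hypothetical figures, in MB (not from the manual's examples)
ANON_MB=20480    # sum of anonymous virtual memory for all processes
PHYS_MB=16384    # physical memory installed

# Eager mode (vm_swap_eager=1): anonymous memory plus at least 10 percent
EAGER_SWAP_MB=$((ANON_MB + ANON_MB / 10))

# Lazy mode (vm_swap_eager=0): anonymous memory minus half of physical memory
LAZY_SWAP_MB=$((ANON_MB - PHYS_MB / 2))

echo "eager-mode swap: ${EAGER_SWAP_MB} MB, lazy-mode swap: ${LAZY_SWAP_MB} MB"
```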
4.4.2 Modifying the Advanced File System Attribute
You may be able to improve Oracle 8.1.7.x/9i database performance by tuning the advfs subsystem AdvfsSyncMmapPages attribute.
The AdvfsSyncMmapPages attribute controls whether modified (dirty) memory-mapped pages are flushed to disk during a sync() system call. If the value is 1 (the default), dirty memory-mapped pages are asynchronously written to disk during a sync() system call. If the value is 0, dirty memory-mapped pages are not written to disk during a sync() system call. Setting the attribute to 0 prevents AdvFS from trying to flush pages of files that have been memory-mapped with mmap().
Most applications that use mmap() to map pages and files into memory perform their own synchronization through the fsync() call, so there is no need for AdvFS to perform the same operation again. Setting the attribute to 0 also avoids AdvFS trying to flush pages that should actually stay in memory.
See sys_attrs_advfs(5) for more information.
4.4.3 Modifying the Virtual File System Attribute
You may be able to improve Oracle 8.1.7.x/9i database performance by tuning the vfs subsystem fifo_do_adaptive attribute.
The fifo_do_adaptive attribute specifies a value that enables (1) or disables (0) the pipe code that attempts to batch writes to a pipe and deliver the data in a single call to a reader. The fifo_do_adaptive attribute is one of the tunables whose default value may not be appropriate if the system is running as a database server.
The default setting of 1 enables alternate algorithms in the FIFO routines. These algorithms create an optimal working set size and perform fewer, but larger, data transfer operations. The default works reasonably well for applications that perform data transfers of a uniform or near-uniform size. It does not work as well for some applications that perform data transfers of a random size, particularly once the FIFO code has determined an optimal transfer size from the initial transfers.
The default value is also ineffective for some applications in which the peer processes operate in sync; for example, procA transfers to procB and then waits for procB's response.
Disabling the fifo_do_adaptive parameter degrades performance for some applications and improves it for others; the change depends on how the pipes are used. We recommend setting this parameter to 0 in Oracle environments.
See sys_attrs_vfs(5) for more information.
4.4.4 Modifying Interprocess Communication Attributes
You may be able to improve Oracle 8.1.7.x/9i database performance by tuning the following ipc subsystem attributes:
ssm_threshold (Section 4.4.4.1)
shm_max (Section 4.4.4.2)
shm_min (Section 4.4.4.3)
shm_mni (Section 4.4.4.4)
shm_seg (Section 4.4.4.5)
See sys_attrs_ipc(5) for more information.
4.4.4.1 Modifying the System V Shared Regions
The ssm_threshold attribute specifies the minimum size, in bytes, of a System V shared region for the use of shared page tables. The ssm_threshold attribute controls which type of segmented shared memory (SSM) implementation is used. The default value is 8 MB.
Disable ssm_threshold if you are using large pages; that is, either rad_gh_regions or gh_chunks (see Section 4.4.1.2). We recommend leaving ssm_threshold at its default value on non-GS series systems unless you are using rad_gh_regions or gh_chunks; in that case, set ssm_threshold to 0 (disabled).
4.4.4.2 Modifying the System V Maximum Size of Shared Memory Region
The shm_max attribute specifies the maximum size, in bytes, of a single System V shared memory region. Oracle concatenates multiple shared memory regions if the SGA is larger than the value configured for shm_max. The size of a single shared memory segment could be larger than 2 GB. However, applications using shared memory on the same system may have problems with shared memory segments larger than 2 GB.
To avoid compatibility issues, we recommend setting the maximum size for an individual shared memory segment to 2 GB. The recommended value is 2 GB minus 8 MB (2139095040). If Oracle is the only application used on the system, you can increase the size of shm_max to 4 GB minus 16 MB (4278190080). The default value is 4,194,304 bytes (512 pages).
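The two recommended byte values can be verified with shell arithmetic:

```shell
# 2 GB minus 8 MB: the compatibility-safe shm_max recommendation
SHM_MAX_SAFE=$((2 * 1024 * 1024 * 1024 - 8 * 1024 * 1024))

# 4 GB minus 16 MB: usable if Oracle is the only application on the system
SHM_MAX_ORACLE_ONLY=$((4 * 1024 * 1024 * 1024 - 16 * 1024 * 1024))

echo "shm_max = ${SHM_MAX_SAFE} (or ${SHM_MAX_ORACLE_ONLY} if Oracle is the only application)"
```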
4.4.4.3 Modifying the System V Minimum Size of Shared Memory Region
The shm_min attribute specifies the minimum size, in bytes, of a single System V shared memory region.
The recommended value and the default value are both one region.
4.4.4.4 Modifying the Shared Memory Regions that Can be Used at One Time
The shm_mni attribute specifies the maximum number of shared memory regions that can be used on the system at one time.
The recommended value is 256 regions; the default value is 100 regions. The system rounds the number up to the next higher power of two, so the default value is actually 128 regions.
4.4.4.5 Modifying the Shared Memory Regions that Can be Attached at One Time
The shm_seg attribute specifies the maximum number of System V shared memory regions that can be attached to a single process at one time.
The recommended value is 128 regions.
4.4.5 Modifying Internet Attributes
You may be able to improve Oracle 8.1.7.x/9i database performance by tuning the following inet subsystem attributes:
udp_sendspace (Section 4.4.5.1)
udp_recvspace (Section 4.4.5.2)
ipport_userreserved (Section 4.4.5.3)
See sys_attrs_inet(5) for more information.
Also, for more information about tuning Internet servers, see Chapter 6.
For recommendations specific to GigaBit Ethernet performance, see http://www.tru64unix.compaq.com/docs/best_practices/BP_GIGABIT/TITLE.HTM.
4.4.5.1 Modifying the Send Buffer Size for the UDP Sockets
The udp_sendspace attribute specifies the default send buffer size, in bytes, for UDP sockets. If your application uses Gigabit Ethernet or generates heavy network activity, consider setting the udp_sendspace attribute to a value higher than the default.
The recommended value is 65536 bytes or larger, depending on the size and number of user sessions and queries.
4.4.5.2 Modifying the Receive Buffer Size for the UDP Sockets
The udp_recvspace attribute specifies the default receive buffer size, in bytes, for UDP sockets. If your application uses Gigabit Ethernet or generates heavy network activity, consider setting the udp_recvspace attribute to a value higher than the default.
The recommended value is 65536 bytes or larger, depending on the size and number of user sessions and queries.
4.4.5.3 Modifying the Number of Times a System Can Make Outgoing Connections
The ipport_userreserved attribute specifies the number of times a system can simultaneously make outgoing connections to other systems. The number of outgoing ports is the value of the ipport_userreserved attribute minus the value of the ipport_userreserved_min attribute.
The default value is 5000 (ports); therefore, the default number of outgoing ports is 3976. The recommended value for large-scale Oracle installations is the maximum value of 65535 (ports).
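The default outgoing-port count follows from the subtraction described above; the ipport_userreserved_min figure of 1024 is inferred from the 3976 result rather than stated in the text:

```shell
# Outgoing ports = ipport_userreserved - ipport_userreserved_min
IPPORT_USERRESERVED=5000         # default upper bound
IPPORT_USERRESERVED_MIN=1024     # assumed default lower bound (inferred)
OUTGOING_PORTS=$((IPPORT_USERRESERVED - IPPORT_USERRESERVED_MIN))

echo "default outgoing ports: ${OUTGOING_PORTS}"
```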
4.4.6 Modifying Process Attributes
You may be able to improve Oracle 8.1.7.x/9i database performance by tuning the following proc subsystem attributes:
per_proc_stack_size (Section 4.4.6.1)
max_per_proc_stack_size (Section 4.4.6.2)
per_proc_data_size (Section 4.4.6.3)
max_per_proc_data_size (Section 4.4.6.4)
per_proc_address_space (Section 4.4.6.5)
max_per_proc_address_space (Section 4.4.6.6)
max_proc_per_user (Section 4.4.6.7)
max_threads_per_user (Section 4.4.6.8)
maxusers (Section 4.4.6.9)
See sys_attrs_proc(5) for more information.
4.4.6.1 Modifying the Per Process Stack Size
The per_proc_stack_size attribute specifies, in bytes, the per process stack size. The default value of 8 MB should be large enough for most Oracle environments. However, in very large installations and data warehouse environments, the value should be increased.
The recommended value is 33,554,432 (32 MB).
4.4.6.2 Modifying the Maximum Size of the User Process Stack
The max_per_proc_stack_size attribute specifies the maximum size, in bytes, of a user process stack. The default value of 32 MB should be large enough for most Oracle environments. However, in very large installations and data warehouse environments, the value should be increased.
The recommended value is 536,870,912 (512 MB). Depending on the Oracle environment, the maximum stack size can be increased to a maximum value of less than or equal to 896 MB. This limit exists because Oracle places the fixed PGA and fixed SGA at addresses 0x38000000 and 0x58000000 for performance. If you use a value larger than 896 MB for this parameter, Oracle may corrupt the fixed SGA and PGA.
4.4.6.3 Modifying the Per Process Data Size
The per_proc_data_size attribute specifies, in bytes, the per process data size. Set the value to the amount of physical memory installed on your system. You can raise the value above the actual memory available; however, doing so would allow a single process to outgrow the system's main memory and cause extensive swapping and paging (see Section 12.5).
We recommend staying within the bounds of the available memory for the per_proc_data_size attribute. Never raise per_proc_data_size to a value larger than the physical memory available plus the configured swap space.
The recommended value is the amount of physical memory installed, with a maximum value of 4,398,046,511,104 bytes.
4.4.6.4 Modifying the Maximum Size of the Per Process Data Size
The max_per_proc_data_size attribute specifies the maximum size, in bytes, of a data segment for each process. Set the value to the amount of physical memory installed on your system. You can raise the value above the actual memory available; however, doing so would allow a single process to outgrow the system's main memory and cause extensive swapping and paging (see Section 12.5).
We recommend staying within the bounds of the available memory for the max_per_proc_data_size attribute. Never raise max_per_proc_data_size to a value larger than the physical memory available plus the configured swap space.
The recommended value is the amount of physical memory installed, with a maximum of 4,398,046,511,104 bytes.
4.4.6.5 Modifying the Per Process Address Space
The per_proc_address_space attribute specifies, in bytes, the per process address space. Set this value to the amount of physical memory installed on your system. You can raise the value above the actual memory available; however, this would allow a single process to outgrow the system's main memory and cause extensive swapping and paging (see Section 12.5).
We recommend staying within the bounds of the available memory for the per_proc_address_space attribute. Never raise per_proc_address_space to a value larger than the physical memory available plus the configured swap space.
The recommended value is the amount of physical memory installed, with a maximum value of 4,398,046,511,104 bytes.
4.4.6.6 Modifying the Maximum Per Process Address Space
The max_per_proc_address_space attribute specifies the maximum amount, in bytes, of user process address space. Set the value to the amount of physical memory installed on your system. You can raise the value above the actual memory available; however, doing so would allow a single process to outgrow the system's main memory and cause extensive swapping and paging (see Section 12.5).
We recommend staying within the bounds of the available memory for the max_per_proc_address_space attribute. Never raise max_per_proc_address_space to a value larger than the physical memory available plus the configured swap space.
The recommended value is the amount of physical memory installed, with a maximum of 4,398,046,511,104 bytes.
4.4.6.7 Modifying the Maximum Number of Processes
The max_proc_per_user attribute specifies the maximum number of processes (tasks) that a user can create (the superuser is not affected). To disable the limit for the max_proc_per_user attribute, set the attribute value to 0.
The recommended value is 1024 (processes). If the application requires more than 1024 tasks per user, increase the value accordingly.
4.4.6.8 Modifying the Maximum Number of Threads
The max_threads_per_user attribute specifies the maximum number of threads a user can create (the superuser is not affected). To disable the limit for the max_threads_per_user attribute, set the attribute value to 0.
The recommended value is 4096 (threads). If the application requires more than 4096 threads per user, increase the value accordingly.
4.4.6.9 Modifying the Space Allocated to System Tables
The maxusers attribute specifies the number of simultaneous users that a system can support without straining system resources. System algorithms use the maxusers value to size various system data structures and to determine the amount of space allocated to system tables, such as the system process table.
For ES40 class systems and higher, we recommend setting the attribute to a value of 8192 (users) or up to the maximum value of 16,384 (users). The default value is system-dependent.
4.4.7 Modifying the Real-Time Attribute
You may be able to improve Oracle 8.1.7.x/9i database performance by tuning the rt subsystem aio_task_max_num attribute.
The aio_task_max_num attribute indirectly controls the number of AIO requests that can be wired in physical memory by restricting the amount of wired physical memory available for a specified number of tasks. One page of wired physical memory is available to the number of tasks specified by aio_task_max_num.
The recommended value should be greater than the number of DBWR I/O operations or the value of the DB_FILE_MULTIBLOCK_READ_COUNT parameter, whichever is higher. The maximum number of DBWR I/O operations defaults to 8192 unless you specify the _DB_WRITER_MAX_WRITES initialization parameter. The default value is 102 (one page of wired memory per 102 tasks).
A simple formula for this tunable is:
(DB_WRITER_MAX_WRITES (default 8192) * DB_WRITER_PROCESSES) + (PARALLEL_MAX_SERVERS * DB_FILE_MULTIBLOCK_READ_COUNT) + 10
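The formula can be sketched with shell arithmetic. Only DB_WRITER_MAX_WRITES uses its documented default here; the other instance parameters are hypothetical example values:

```shell
# Instance parameters: first is the documented default, the rest are assumed
DB_WRITER_MAX_WRITES=8192          # default per the text
DB_WRITER_PROCESSES=1              # assumed example value
PARALLEL_MAX_SERVERS=5             # assumed example value
DB_FILE_MULTIBLOCK_READ_COUNT=32   # assumed example value

AIO_TASK_MAX_NUM=$(( DB_WRITER_MAX_WRITES * DB_WRITER_PROCESSES \
  + PARALLEL_MAX_SERVERS * DB_FILE_MULTIBLOCK_READ_COUNT + 10 ))

echo "aio_task_max_num = ${AIO_TASK_MAX_NUM}"
```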
See sys_attrs_rt(5) for more information.
4.4.8 Modifying Reliable Datagram Attributes
You may be able to improve Oracle 8.1.7.x/9i database performance by tuning the following rdg subsystem attributes:
max_objs (Section 4.4.8.1)
msg_size (Section 4.4.8.2)
max_async_req (Section 4.4.8.3)
max_sessions (Section 4.4.8.4)
rdg_max_auto_msg_wires (Section 4.4.8.5)
See sys_attrs_rdg(5) for more information.
4.4.8.1 Modifying the Maximum Number of Objects in the RDG
The max_objs attribute specifies the maximum number of objects in the RDG endpoint and buffer list.
The recommended value is at least 5 times the number of Oracle processes per node, up to a maximum of 10240, or the number of Oracle processes multiplied by 70.
4.4.8.2 Modifying the Maximum Size of the RDG Message
The msg_size attribute specifies the maximum size, in bytes, of an RDG message.
The recommended value is equal to or greater than the maximum value of the DB_BLOCK_SIZE parameter for the database. Oracle recommends a value of 32768 because Oracle9i supports different block sizes for each tablespace.
4.4.8.3 Modifying the Maximum Number of Messages in the RDG
The max_async_req attribute specifies the maximum number of asynchronous messages held in the RDG send and receive queues.
The recommended value is at least 100. A value of 256 might provide better performance.
4.4.8.4 Modifying the Maximum Number of Sessions within the RDG Table
The max_sessions attribute specifies the maximum number of sessions within any given RDG context table.
The recommended value is at least the number of Oracle processes plus two.
4.4.8.5 Modifying the Maximum Number of Pages Wired For Message Packets
The rdg_max_auto_msg_wires attribute specifies the maximum number of pages automatically wired in memory for message packets.
We recommend setting the rdg_max_auto_msg_wires attribute to 0.
4.4.9 Modifying the Memory Channel Attribute
You may be able to improve Oracle 8.1.7.x/9i database performance by tuning the rm subsystem rm_check_for_ipl attribute.
The rm_check_for_ipl attribute specifies the bitmask indicating when the CPU priority level (spl) of the processor should be saved in a trace buffer.
We recommend setting this attribute to 63, the default value.
See sys_attrs_rm(5) for more information.