This chapter describes how to:
Create LSM disks (Section 4.1)
Create a disk group and display free space in a disk group (Section 4.2)
Create an LSM volume for new data (Section 4.3)
Create LSM volumes with special properties, such as a mirrored volume with each plex on a different bus (Section 4.4)
Configure UFS or AdvFS file systems to use an LSM volume (Section 4.5)
Place existing data in an LSM volume (Section 4.6)
Use the information from the worksheets you filled out in Chapter 2 to create disk groups and LSM volumes.
4.1 Overview of Creating LSM Disks
You create an LSM disk when you initialize or encapsulate a disk or disk partition for LSM use. Specifying a disk name, such as dsk10, initializes the entire disk as an LSM sliced disk. Specifying a partition name, such as dsk10g or dsk10c, initializes the partition as an LSM simple disk. Encapsulating a disk or disk partition that contains data you want placed under LSM control creates a nopriv disk.
Initializing an LSM disk:
Formats the disk or partition as an LSM disk
Assigns a disk media name to the LSM disk
Writes a new disk label
Overwrites existing data on the disk
Note
If the disk is new to the system and LSM is already running, enter the hwmgr scan scsi command, then the voldctl enable command, to make LSM recognize the disk.
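For example, after connecting a new disk (assumed here to appear as dsk5), a minimal sketch is:
# hwmgr scan scsi
# voldctl enable
# voldisk list
The voldisk list command is one way to confirm that LSM now recognizes the disk.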
To create LSM sliced or simple disks, you can use either the voldisksetup command or the voldiskadd script. (To encapsulate a disk or partition, see Section 4.6.1.)
The voldisksetup command initializes LSM disks with the default attributes or the attributes you specify for the disk, such as the number of configuration database copies (Section 4.1.1) or a specific offset for the start of the public or private region (Section 4.1.2). See Section 4.1.3 to create LSM sliced disks or Section 4.1.4 to create LSM simple disks.
The voldiskadd script lets you interactively create LSM disks (with default attributes only), add them to disk groups, or create new disk groups. (See Section 4.2.1.)
4.1.1 Overview of Configuration Database Copies
By default, LSM configures each sliced or simple disk with the potential to have one copy of the configuration database for its disk group. For disk groups with fewer than four disks, configure each disk to have two copies of the configuration database to ensure multiple copies in case one or more disks fail.
Use the voldisksetup command to specify the number of configuration database copies on an LSM disk. An LSM sliced or simple disk can have 0, 1, or 2 copies of the configuration database for its disk group. LSM enables the specified number of configuration database copies on each disk only when you add the disk to a disk group and only if necessary to maintain the proper number and distribution of copies for the disk group as a whole. For most system configurations, initialize your LSM disks with the default number of copies and allow LSM to manage them.
To maintain the proper number and distribution of LSM configuration database copies in Fibre Channel environments, see the Best Practice entitled Ensuring Redundancy of LSM Configuration Databases on a Fibre Channel at the following URL:
http://www.tru64unix.compaq.com/docs/best_practices
4.1.2 Overview of Disk Offsets
When you initialize a disk for LSM use, by default LSM skips over the first 16 blocks on the disk to preserve the disk header and bootstrap information. The public region of a sliced disk and the private region of a simple disk start at the first block after this default offset. Using the voldisksetup command, you can specify a different offset, if necessary.
For example, you can set up your LSM disks to align I/O requests to the chunk size of an underlying RAID hardware device. For more information on this specific application, see the Best Practice entitled Aligning LSM Disks and Volumes to Hardware RAID Devices at the following URL:
http://www.tru64unix.compaq.com/docs/best_practices
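As a sketch of such alignment, assuming a hypothetical hardware RAID chunk size of 64K bytes (128 blocks) and a disk named dsk5, you might initialize the disk with a matching public region offset:
# voldisksetup -i dsk5 puboffset=128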
4.1.3 Creating LSM Sliced Disks
The following examples show how to create LSM sliced disks with various attributes:
Default offset (16 blocks) with default number of configuration database copies (1):
# voldisksetup -i dsk5
Default offset with a different number of configuration database copies (can be either 0 or 2):
# voldisksetup -i dsk5 nconfig=2
Specified offset and default number of configuration database copies (1):
# voldisksetup -i dsk5 puboffset=128
Specified offset with a different number of configuration database copies (can be either 0 or 2):
# voldisksetup -i dsk5 puboffset=128 nconfig=2
Optionally (but recommended), make a backup copy of the disk label information for all your LSM disks (Section 4.1.5).
After you initialize a disk or disk partition as an LSM disk, you can add it to a disk group. For information on creating a disk group, see Section 4.2. For information on adding an LSM disk to an existing disk group, see Section 5.2.2.
4.1.4 Creating LSM Simple Disks
The default private region offset for simple disks is 16 blocks on the a or c partitions and 0 blocks on all other partitions. The default number of configuration database copies is 1.
The following examples show how to create LSM simple disks with various attributes:
Default offset with default number of configuration database copies (1):
# voldisksetup -i dsk7g
Default offset with a different number of configuration database copies (either 0 or 2):
# voldisksetup -i dsk7g nconfig=2
Specified offset with default number of configuration database copies (1):
# voldisksetup -i dsk7c privoffset=128
Specified offset with a different number of configuration database copies (either 0 or 2):
# voldisksetup -i dsk7c privoffset=128 nconfig=2
Optionally (but recommended), make a backup copy of the disk label information for all your LSM disks (Section 4.1.5).
After you initialize an LSM disk, you can add it to a disk group. For information on creating a disk group, see Section 4.2. For information on adding an LSM disk to an existing disk group, see Section 5.2.2.
4.1.5 Backing Up Disk Label Information
Back up the updated disk label information for each LSM disk. Having this information will simplify the process of replacing a failed disk, by allowing you to copy the failed disk's attributes to a new disk. After a disk fails, you cannot read its disk label and therefore cannot copy that information to a new disk.
You can back up the disk label information before or after adding a disk to a disk group; the information does not change.
To back up the disk label information:
# disklabel dskn > file
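For example, to save the label for dsk5 to a file (the directory and file name here are arbitrary; choose a location that is itself backed up and is not on the disk being labeled):
# disklabel dsk5 > /var/adm/lsm_labels/dsk5.label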
For more information, see disklabel(8).
4.2 Creating Disk Groups
The default rootdg disk group is created when you initialize LSM and always exists on a system running LSM. You can create additional disk groups to organize your disks into logical sets. Each disk group that you create must have a unique name and must contain at least one sliced or simple LSM disk to store the disk group's configuration database. An LSM disk can belong to only one disk group. [Footnote 1]
For large LSM configurations, consider keeping rootdg fairly small (ten disks or fewer) and create other disk groups for the rest of your LSM configuration. If possible, use rootdg only for volumes relating to the system disk (on a standalone system), the clusterwide root, /usr, and /var file system domains, and members' swap devices. Having separate disk groups gives you the ability to move LSM volumes to different systems or clusters.
You can create an LSM disk group using the following commands:
The voldiskadd interactive script (Section 4.2.1)
The voldg command (Section 4.2.2)
4.2.1 Creating LSM Disks and Disk Groups Using the voldiskadd Script
The voldiskadd script lets you do all of the following tasks in one interactive session:
Initialize disks or disk partitions (with default values only) for exclusive use by LSM
Create a disk group
Add disks to an existing disk group
Note
If a disk group will have fewer than four disks, see Section 4.1.1.
You can invoke the voldiskadd script with or without a disk name. If you invoke the script by itself, it prompts you for the following information:
A disk or disk partition
If you specify an entire disk, LSM initializes it as an LSM sliced disk. If you specify a disk partition, LSM initializes the partition as an LSM simple disk. You can specify several disks and disk partitions at once, separated by a space; for example:
# voldiskadd dsk3 dsk4a dsk5 dsk6g
A disk group name
If you are creating a disk group, the disk group name must be unique and can contain up to 31 alphanumeric characters; the name cannot include spaces or the slash character ( / ).
A disk media name for each disk you configure in the disk group
You can use the default disk media name (which will be the same as the disk access name) or you can assign a disk media name of up to 31 alphanumeric characters; the name cannot include spaces or the slash character ( / ).
Whether to configure the disk as a hot-spare disk for the disk group
For more information about hot-spare disks, see Section 3.5. For the best protection, configure at least one hot-spare disk in each disk group that contains redundant volumes.
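For example, assuming a disk group dg1 containing a disk with the disk media name dg101, a sketch of marking an existing disk as a hot spare with the voledit command is:
# voledit -g dg1 set spare=on dg101
See Section 3.5 for the supported hot-spare procedure.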
Example 4-1 uses a disk named dsk9 to create a disk group named dg1:
Example 4-1: Creating an LSM Disk Group with the voldiskadd script
# voldiskadd dsk9
Add or initialize disks
Menu: VolumeManager/Disk/AddDisks

Here is the disk selected.

  dsk9

Continue operation? [y,n,q,?] (default: y) [Return]

You can choose to add this disk to an existing disk group, a new
disk group, or leave the disk available for use by future add or
replacement operations. To create a new disk group, select a disk
group name that does not yet exist. To leave the disk available
for future use, specify a disk group name of "none".

Which disk group [<group>,none,list,q,?] (default: rootdg) dg1

There is no active disk group named dg1.

Create a new group named dg1? [y,n,q,?] (default: y) [Return]

The default disk name that will be assigned is:

  dg101

Use this default disk name for the disk? [y,n,q,?] (default: y) [Return]

Add disk as a spare disk for dg1? [y,n,q,?] (default: n) [Return]

A new disk group will be created named dg1 and the selected disks
will be added to the disk group with default disk names.

  dsk9

Continue with operation? [y,n,q,?] (default: y) [Return]

The following disk device has a valid disk label, but does not
appear to have been initialized for the Logical Storage Manager.
If there is data on the disk that should NOT be destroyed you
should encapsulate the existing disk partitions as volumes instead
of adding the disk as a new disk.

  dsk9

Initialize this device? [y,n,q,?] (default: y) [Return]

Initializing device dsk9.

Creating a new disk group named dg1 containing the disk device
dsk9 with the name dg101.

Goodbye.
4.2.2 Creating Disk Groups Using the voldg Command
Use the voldg command to create disk groups from disks that are initialized for LSM, including disks configured to have a nondefault number of configuration database copies (Section 4.1).
By default, LSM maintains a minimum of four active copies of the configuration database in each disk group. You can specify a different number of active copies even if you are using disks initialized with the defaults. You can specify a maximum equal to the total number of copies that the disk group's sliced and simple disks are configured to store.
For example, if you create a disk group with ten sliced or simple disks, each of which is configured by default to store one copy, you can set the number of copies for that disk group to ten. If you configured each disk to store two copies, you can set the number of copies for the disk group to 20.
For any disk group, the maximum number of active configuration database copies is derived from the number of sliced or simple disks in the group and the number of copies each of those disks is configured to store.
Note
You can set the number of configuration database copies that a disk group will maintain only when you create the disk group. You cannot change the number of active copies for existing disk groups.
Maintaining more than the default number of copies can affect performance, because every change to the LSM configuration is written to all active copies of the database. If you want to use a nondefault number, choose a number of copies sufficient to meet your environment's needs but small enough to minimize the performance impact.
To create a disk group with default values using the
voldg
command:
# voldg init disk_group disk [disk...]
For example:
# voldg init dg1 dsk5 dsk6 dsk7 dsk9 dsk10 dsk11 dsk12
If a disk group will have fewer than four disks, configure each disk to have two copies of the disk group's configuration database (Section 4.1) to ensure that the disk group has multiple copies in case one or more disks fail.
To create a disk group and set the number of configuration copies to 10:
# voldg init newdg disks nconfig=10
For example:
# voldg init newdg dsk100 dsk101 dsk102... dsk110 nconfig=10
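To confirm how many configuration database copies a disk group actually maintains, you can display the disk group's details; a minimal check:
# voldg list newdg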
4.3 Creating LSM Volumes for New Data
To create an LSM volume for a new file system or application, use the volassist command. The volassist command either finds the necessary space within the disk group and creates all the objects for the volume or uses attributes you supply on the command line, such as specific disk names. You must assign a name and length (size) on the command line.
You can specify values for other LSM volume attributes on the command line or in a text file that you create. If you do not specify a value for an attribute, LSM uses a default value.
By default, LSM uses a stripe width of 64K bytes for striped plexes and 16K bytes for RAID5 plexes. However, you can use a different stripe width if you have applications that use an I/O transfer size that requires a different value, or if you have created hardware devices with a particular stripe width and you want the LSM volume created from those devices to align its writes (data stripes) to the hardware's stripe width.
For more information, see the Best Practice entitled Aligning LSM Disks and Volumes to Hardware RAID Devices, available at the following URL:
http://www.tru64unix.compaq.com/docs/best_practices/sys_bps.html
To learn more about the default volume attributes file and creating your own volume attributes file, see Section 4.3.1.
To create an LSM volume with a single concatenated plex, see Section 4.3.2.
To create an LSM volume with mirrored concatenated plexes, see Section 4.3.3.
To create an LSM volume with a single striped plex, see Section 4.3.4.
To create an LSM volume with mirrored striped plexes, see Section 4.3.5.
To create an LSM volume with a RAID 5 plex, see Section 4.3.6.
4.3.1 Overview of LSM Volume Attributes
The following lists the priority given to assignable attributes:
Values on the command line
Values in a file that you specify by using the volassist -d option
Values in the /etc/default/volassist file
Default values
To display the default values for volume attributes:
# volassist help showattrs
#Attributes:
layout=nomirror,nostripe,span,nocontig,raid5log,noregionlog,nofpalog,diskalign,nostorage
mirrors=2 columns=0 nlogs=1 regionlogs=1 raid5logs=1 fpalogs=1
min_columns=2 max_columns=8 regionloglen=0 raid5loglen=0
logtype=region stripe_stripeunitsize=128 raid5_stripeunitsize=32
usetype=fsgen diskgroup= comment="" fstype= user=0 group=0
mode=0600 probe_granularity=2048 alloc= wantalloc= mirror=
Some volume attributes have several options to define them. Some options define an attribute globally, while others define an attribute for a specific plex type. For example, you can specify the size of a stripe data unit using the stripeunit (or stwidth) option for both striped and RAID5 plexes, the stripe_stripeunit (or stripe_stwid) option specifically for striped plexes, or the raid5_stripeunit (or raid5_stwid) option specifically for RAID5 plexes.
For a complete list of attributes, see volassist(8).
Table 4-1: Common LSM Volume Attributes
Attribute Description | Attribute Options
Plex type | layout={concatenated|striped|raid5}
Usage type | -U {fsgen|gen|raid5}
Number of plexes (mirrors). Default is 2. | mirror={number|yes|no}
Type of log | logtype={drl|region|none}
Number of FPA logs (for mirrored volumes only) | nfpalog=number
Size of stripe width for a striped or RAID5 plex, in blocks, sectors, kilobytes, megabytes, or gigabytes | stripeunit=data_unit_size or stwid=data_unit_size
Number of columns for a striped or RAID5 plex; typically, the number of disks in each plex | nstripe=number_of_columns or ncolumn=number_of_columns
Creating a text file that specifies many of these attributes is useful if you create many LSM volumes that use the same nondefault values for attributes. Any attribute that you can specify on the command line can be specified on a separate line in the text file.
By default, LSM looks for the /etc/default/volassist file when you create an LSM volume. If you created an /etc/default/volassist file, LSM creates each volume using the attributes that you specify on the command line and in the /etc/default/volassist file.
Example 4-2 shows an LSM volume attributes file called /etc/default/volassist that creates an LSM volume using a four-column striped plex with two mirrors, a stripe width of 32K bytes, and no log.
Example 4-2: LSM Volume Attribute Defaults File
# LSM Vn.n
# volassist defaults file. Use '#' for comments
# number of stripes
nstripe=4
# layout
layout=striped
# mirroring
nmirror=2
# logging
logtype=none
# stripe size
stripeunit=32k
For example, to create an LSM volume using the attributes in the /etc/default/volassist file:
# volassist make volume length
If you have created a custom attributes file and want LSM to use the applicable attributes in that file when creating the volume, specify the attributes file as follows:
# volassist -d filename make volume length
With this option, LSM creates the volume using both the attributes that you specify on the command line (such as name and length) and those in the named file. If you specify an attribute that conflicts with the contents of the file, the command line takes precedence.
To specify a length (size) for the volume, enter a number and the appropriate suffix:
Suffix | Unit
b | Blocks
s | Sectors (default)
k | Kilobytes
m | Megabytes
g | Gigabytes
t | Terabytes
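For example, the following commands each create a 1 GB volume, expressing the same length with different suffixes (the volume names here are arbitrary; 1 GB is 1024 MB, or 2097152 sectors of 512 bytes):
# volassist make vol_a 1g
# volassist make vol_b 1024m
# volassist make vol_c 2097152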
4.3.2 Creating LSM Volumes with a Single Concatenated Plex
A volume with a single concatenated plex is also called a simple volume. It provides no data redundancy; if a disk fails, the data is lost. To avoid this, you can either create the volume initially with mirrored concatenated plexes (Section 4.3.3) or add another data plex to the volume later (Section 5.5.2).
You can either let LSM find space on any available disks in the disk group or specify which disks you want LSM to use in creating the volume.
For volumes that will support a file system, use the default fsgen usage type. For volumes that will contain raw data, such as a database, use the gen usage type.
To create an LSM volume with one concatenated plex on any available disks in the disk group:
# volassist [-g disk_group] [-U use_type] make volume length
For example, to make a 3 GB volume named data01 with the gen usage type in the dg1 disk group:
# volassist -g dg1 -U gen make data01 3g
To create an LSM volume with one concatenated plex on the disk or disks you specify:
# volassist [-g disk_group] [-U use_type] make volume \
length disks
For example, to make an 800 MB volume named acct_files with the default fsgen usage type in the dg1 disk group using disks dsk10 and dsk11:
# volassist -g dg1 make acct_files 800m dsk10 dsk11
4.3.3 Creating LSM Volumes with Mirrored, Concatenated Plexes
To provide data redundancy (high availability) you can create an LSM volume with two or more concatenated plexes. To further increase availability, you can specify disks on a different bus for each data plex and the DRL plex by making the volume in steps, allowing you to control which disks LSM uses to create each plex.
For volumes that will support a file system, use the default fsgen usage type. For volumes that will contain raw data, such as a database, use the gen usage type. You specify a usage type only when you create the volume, but not when you add a mirror plex or a log plex to an existing volume.
To improve availability of the logs, you can add multiple logs when you create the volume, with the nlog=count attribute. You can also add one or more FPA logs to the volume when you create it, with the nfpalog=count attribute. An FPA log is not required; it is used only when you create a secondary volume from one plex of a primary volume (Section 5.4.2.2). Multiple FPA logs ensure that FPA logging remains in effect in case of disk failure.
4.3.3.1 Creating a Mirrored, Concatenated Volume in One Step
To create an LSM volume with mirrored concatenated plexes on any available disks, optionally with multiple DRL logs or FPA logs (default: one DRL, no FPA logs):
# volassist [-g disk_group] [-U use_type] make volume length \
mirror=2 [nlog=count] [nfpalog=count]
For example, to create a 256 MB volume named mirrVol with two concatenated plexes and the default fsgen usage type on any available disks in the dg1 disk group:
# volassist -g dg1 make mirrVol 256m mirror=2
By default, LSM creates a DRL plex for all mirrored volumes.
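To create the same volume with extra log redundancy, you can add the nlog and nfpalog attributes described above; a sketch:
# volassist -g dg1 make mirrVol 256m mirror=2 nlog=2 nfpalog=1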
To create an LSM volume with mirrored, concatenated plexes on the disks you specify:
# volassist [-g disk_group] make volume \
length mirror=2 disks
For example, to create a 256 MB volume named mirrVol with two concatenated plexes and the gen usage type on disks dsk21, dsk22, dsk23, and dsk24 in the dg1 disk group:
# volassist -g dg1 -U gen make mirrVol 256m mirror=2 \
dsk21 dsk22 dsk23 dsk24
This creates a mirrored volume with two data plexes and a DRL plex, using at least two of the specified disks. If you specify more disks than LSM needs, they are not used.
4.3.3.2 Creating a Mirrored, Concatenated Volume with Plexes on Different Buses
To create a mirrored, concatenated volume with each plex on a different bus:
Create the volume with a single concatenated plex, specifying the disks:
# volassist [-g disk_group] [-U use_type] make \
volume length disks
For example:
# volassist -g dg1 -U gen make vol2 3g dsk2 dsk3 dsk4
Add another concatenated plex (mirror) to the volume, specifying disks on a different bus:
# volassist [-g disk_group] mirror volume disks
For example:
# volassist -g dg1 mirror vol2 dsk5 dsk6 dsk7
When you add a mirror to a volume manually, LSM does not add a DRL plex.
Add a DRL plex to the volume, specifying a disk that is not used by one of the data plexes:
# volassist [-g disk_group] addlog volume disk
For example:
# volassist -g dg1 addlog vol2 dsk8
The volume is ready for use.
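To verify the finished layout, you can display the volume with the volprint command; for example:
# volprint -vht vol2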
4.3.4 Creating LSM Volumes with a Single Striped Plex
An LSM volume with a striped plex can offer faster performance than a volume with a concatenated plex. You can specify the number of disks you want LSM to stripe the data over (the number of stripe columns), or you can let LSM stripe the data over as many disks as necessary based on the volume size and the stripe width you specify, if other than the default.
A volume with a single striped plex does not provide data redundancy; if a disk fails, the data is lost. To avoid this, you can either create the volume initially with mirrored striped plexes (Section 4.3.5) or add another data plex to the volume later (Section 5.5.2).
Note
In general, do not use LSM to stripe data if you also use a hardware controller to stripe data. In some specific cases such a configuration can improve performance but only if:
Most of the volume I/O requests are large (>= 1 MB).
The LSM volume is striped over multiple RAID sets on different controllers.
The LSM stripe size is a multiple of the full hardware RAID stripe size.
The number of LSM columns in each plex in the volume should be equal to the number of hardware RAID controllers. See your hardware RAID documentation for information about how to choose the best number of columns for the hardware RAID set.
This section contains a number of examples of creating volumes with one or more striped plexes. In the examples, the syntax for specifying a different stripe width is shown. To use the default stripe width, omit that option.
By default, the volassist command creates columns for a striped plex on disks in alphanumeric order, regardless of their order on the command line. To improve performance, create the columns in each plex using disks on different buses. For more information about specifying the disk order for columns in a striped plex, see Section 4.4.2.
When creating volumes with a striped plex, you must specify the number of stripe columns per plex. Each column must be on a different disk; therefore, this is the number of disks over which to stripe the plex.
For volumes that will support a file system, use the default fsgen usage type. For volumes that will contain raw data, such as a database, use the gen usage type. You specify a usage type only when you create the volume, but not when you add a mirror plex or a log plex to an existing volume.
The following examples show how to create LSM striped volumes with different properties:
To create an LSM volume with a single striped plex on any available disks in the disk group:
# volassist [-g disk_group] [-U use_type] make volume length \
layout=stripe ncolumn=number_of_columns [stwid=data_unit_size]
For example, to create a 128 MB volume named v_stripe, with a usage type of gen and an 8-column striped plex with a stripe width of 32K bytes, in the dg2 disk group:
# volassist -g dg2 -U gen make v_stripe 128m \
layout=stripe ncolumn=8 stwid=32k
The volume looks similar to the following:
Disk group: dg2

V  NAME         USETYPE      KSTATE   STATE   LENGTH  READPOL   PREFPLEX
PL NAME         VOLUME       KSTATE   STATE   LENGTH  LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH [COL/]OFF DEVICE   MODE

v  v_stripe     gen          ENABLED  ACTIVE  262144  SELECT    v_stripe-01
pl v_stripe-01  v_stripe     ENABLED  ACTIVE  262144  STRIPE    8/64     RW
sd dsk12-01     v_stripe-01  dsk12    0       32768   0/0       dsk12    ENA
sd dsk13-01     v_stripe-01  dsk13    0       32768   1/0       dsk13    ENA
sd dsk14-01     v_stripe-01  dsk14    0       32768   2/0       dsk14    ENA
sd dsk15-01     v_stripe-01  dsk15    0       32768   3/0       dsk15    ENA
sd dsk16-01     v_stripe-01  dsk16    0       32768   4/0       dsk16    ENA
sd dsk17-01     v_stripe-01  dsk17    0       32768   5/0       dsk17    ENA
sd dsk18-01     v_stripe-01  dsk18    0       32768   6/0       dsk18    ENA
sd dsk19-01     v_stripe-01  dsk19    0       32768   7/0       dsk19    ENA
To create an LSM volume with a single striped plex on the disk or disks you specify:
# volassist [-g disk_group] make volume length layout=stripe \
ncolumn=number_of_columns [stwid=data_unit_size] disks
For example, to create a 128 MB LSM volume named volst with a 2-column striped plex with the default stripe width (64K bytes) and a usage type of gen on disks dsk20 and dsk21 in the dg2 disk group:
# volassist -g dg2 -U gen make volst 128m \
layout=stripe ncolumn=2 dsk20 dsk21
The volume looks similar to the following:
v  volst     gen       ENABLED  262144  -  ACTIVE  -  -
pl volst-01  volst     ENABLED  262144  -  ACTIVE  -  -
sd dsk20-01  volst-01  ENABLED  131072  0  -       -  -
sd dsk21-01  volst-01  ENABLED  131072  0  -       -  -
4.3.5 Creating LSM Volumes with Mirrored, Striped Plexes
To provide data redundancy (high availability) and improved performance, you can create an LSM volume with two or more striped plexes.
You can either let LSM find space on any available disks in the disk group or specify which disks you want LSM to use in creating the volume. To further increase performance and availability, you can specify disks on a different bus for each plex and the DRL plex by making the volume in steps, allowing you to control which disks LSM uses to create each plex.
To improve availability of the logs, you can add multiple logs with the nlog=count attribute when you create the volume. You can also add one or more FPA logs to the volume with the nfpalog=count attribute when you create it. An FPA log is not required; it is used only when you create a secondary volume from one plex of a primary volume (Section 5.4.2.2). Multiple FPA logs ensure that FPA logging remains in effect in case of disk failure.
When creating volumes with a striped plex, you must specify the number of stripe columns per plex. Each column must be on a different disk; therefore, this is the number of disks over which to stripe the plex.
For volumes that will support a file system, use the default fsgen usage type. For volumes that will contain raw data, such as a database, use the gen usage type. You specify a usage type only when you create the volume, but not when you add a mirror plex or a log plex to an existing volume.
4.3.5.1 Creating a Mirrored, Striped Volume in One Step
To create an LSM volume with mirrored, striped plexes on any available disks in the disk group:
# volassist [-g disk_group] [-U use_type] make volume length \
mirror=2 layout=stripe ncolumn=number_of_columns \
[stwid=data_unit_size]
For example, to create a mirrored, striped volume named mvol in the rootdg disk group with a stripe width of 32K bytes on any available disks in the disk group:
# volassist -U gen make mvol 256m \
mirror=2 layout=stripe ncolumn=3 stwid=32k
The volume looks similar to the following:
Disk group: rootdg

V  NAME     USETYPE  KSTATE   STATE   LENGTH  READPOL   PREFPLEX
PL NAME     VOLUME   KSTATE   STATE   LENGTH  LAYOUT    NCOL/WID MODE
SD NAME     PLEX     DISK     DISKOFFS LENGTH [COL/]OFF DEVICE   MODE

v  mvol     gen      ENABLED  ACTIVE  524288  SELECT    -
pl mvol-01  mvol     ENABLED  ACTIVE  524352  STRIPE    3/64     RW
sd dsk1-01  mvol-01  dsk1     65      174784  0/0       dsk1     ENA
sd dsk2-01  mvol-01  dsk2     0       174784  1/0       dsk2     ENA
sd dsk3-01  mvol-01  dsk3     0       174784  2/0       dsk3     ENA
pl mvol-02  mvol     ENABLED  ACTIVE  524352  STRIPE    3/64     RW
sd dsk4-01  mvol-02  dsk4     0       174784  0/0       dsk4     ENA
sd dsk5-01  mvol-02  dsk5     0       174784  1/0       dsk5     ENA
sd dsk6-01  mvol-02  dsk6     0       174784  2/0       dsk6     ENA
pl mvol-03  mvol     ENABLED  ACTIVE  LOGONLY CONCAT    -        RW
sd dsk1-02  mvol-03  dsk1     0       65      LOG       dsk1     ENA
By default, LSM creates a DRL plex for all mirrored volumes.
To create an LSM volume with mirrored, striped plexes on the disks you specify:
# volassist [-g disk_group] [-U use_type] make volume length \
mirror=2 layout=stripe ncolumn=number_of_columns \
[stwid=data_unit_size] disks
For example:
# volassist -g dg1 make mvol 256m mirror=2 layout=stripe \
ncolumn=2 stwid=32k dsk19 dsk20 dsk21 dsk22 dsk23
The volume looks similar to the following:
Disk group: dg1

TY NAME      ASSOC    KSTATE   LENGTH  PLOFFS  STATE    TUTIL0  PUTIL0
v  mvol      fsgen    ENABLED  524288  -       ACTIVE   -       -
pl mvol-01   mvol     ENABLED  524288  -       ACTIVE   -       -
sd dsk19-01  mvol-01  ENABLED  262144  0       -        -       -
sd dsk20-01  mvol-01  ENABLED  262144  0       -        -       -
pl mvol-02   mvol     ENABLED  524288  -       ACTIVE   -       -
sd dsk21-01  mvol-02  ENABLED  262144  0       -        -       -
sd dsk22-01  mvol-02  ENABLED  262144  0       -        -       -
pl mvol-03   mvol     ENABLED  LOGONLY -       ACTIVE   -       -
sd dsk19-02  mvol-03  ENABLED  65      LOG     -        -       -
By default, LSM creates a DRL plex for all mirrored volumes.
Notice that LSM did not use dsk23, because there was space for the DRL on one of the disks used in a data plex. You can add a new log plex, specifying a disk not used in the volume (Section 5.5.3), and remove the original log plex (Section 5.5.6).
4.3.5.2 Creating a Mirrored, Striped Volume with Plexes on Different Buses
For improved availability, you can create an LSM volume with each data plex and the log plex on a different bus. If your configuration does not support this, you can still use the following procedure to specify which disks LSM uses to create each data plex and the log plex.
The following procedure shows how to ensure that LSM creates each plex on the disks you specify. The ncolumn option forces LSM to stripe the plex across all the named disks. The ncolumn value must equal the number of disks you specify.
Note
Each data plex is a complete copy of the volume, and uses as much disk space as the volume size you specify.
Create the volume with a single striped plex, specifying disks on one bus:
# volassist [-g disk_group] [-U use_type] make volume length \
layout=stripe ncolumn=number_of_columns \
[stwid=data_unit_size] disks
For example, to create a 1 GB volume named vstripe in the dg1 disk group with the default usage type of fsgen and one 3-column striped plex with the default stripe width on disks dsk10, dsk11, and dsk12:
# volassist -g dg1 make vstripe 1g layout=stripe \
ncolumn=3 dsk10 dsk11 dsk12
The volume looks similar to the following:
Disk group: dg1

V  NAME        USETYPE    KSTATE   STATE   LENGTH   READPOL   PREFPLEX
PL NAME        VOLUME     KSTATE   STATE   LENGTH   LAYOUT    NCOL/WID MODE
SD NAME        PLEX       DISK     DISKOFFS LENGTH  [COL/]OFF DEVICE   MODE

v  vstripe     fsgen      ENABLED  ACTIVE  2097152  SELECT    vstripe-01
pl vstripe-01  vstripe    ENABLED  ACTIVE  2097408  STRIPE    3/128    RW
sd dsk10-01    vstripe-01 dsk10    0       699136   0/0       dsk10    ENA
sd dsk11-01    vstripe-01 dsk11    0       699136   1/0       dsk11    ENA
sd dsk12-01    vstripe-01 dsk12    0       699136   2/0       dsk12    ENA
Add a mirror plex to the volume, specifying disks on a different bus:
# volassist [-g disk_group] mirror volume disks
For example:
# volassist -g dg1 mirror vstripe dsk19 dsk20 dsk21
Add a DRL plex to the volume, specifying a disk that is not used by one of the data plexes, and if possible, on a different bus:
# volassist [-g disk_group] addlog volume disk
For example:
# volassist -g dg1 addlog vstripe dsk26
The completed volume is ready for use and looks similar to the following:
Disk group: dg1

V  NAME        USETYPE    KSTATE   STATE   LENGTH   READPOL   PREFPLEX
PL NAME        VOLUME     KSTATE   STATE   LENGTH   LAYOUT    NCOL/WID MODE
SD NAME        PLEX       DISK     DISKOFFS LENGTH  [COL/]OFF DEVICE   MODE

v  vstripe     fsgen      ENABLED  ACTIVE  2097152  SELECT    -
pl vstripe-01  vstripe    ENABLED  ACTIVE  2097408  STRIPE    3/128    RW
sd dsk10-01    vstripe-01 dsk10    0       699136   0/0       dsk10    ENA
sd dsk11-01    vstripe-01 dsk11    0       699136   1/0       dsk11    ENA
sd dsk12-01    vstripe-01 dsk12    0       699136   2/0       dsk12    ENA
pl vstripe-02  vstripe    ENABLED  ACTIVE  2097408  STRIPE    3/128    RW
sd dsk19-01    vstripe-02 dsk19    0       699136   0/0       dsk19    ENA
sd dsk20-01    vstripe-02 dsk20    0       699136   1/0       dsk20    ENA
sd dsk21-01    vstripe-02 dsk21    0       699136   2/0       dsk21    ENA
pl vstripe-03  vstripe    ENABLED  ACTIVE  LOGONLY  CONCAT    -        RW
sd dsk26-01    vstripe-03 dsk26    0       65       LOG       dsk26    ENA
4.3.6 Creating LSM Volumes with a RAID 5 Plex
A volume with a RAID 5 data plex uses distributed parity to provide data redundancy. When you create the volume, you can use the default values for the number of columns in the plex (minimum of three, maximum of eight) and the stripe width (16K bytes), as well as let LSM use any available space in the disk group to create the volume. Or you can specify the number of columns, the stripe width, and the disks to use.
If you specify the disks to use, by default the volassist command creates the columns for a RAID5 plex on disks in alphanumeric order, regardless of their order on the command line, and automatically creates a RAID5 log plex for the volume on a separate disk. LSM will not create the log plex for a RAID 5 volume on a disk used by the data plex, as it does for mirrored volumes. You must specify enough disks to create the data plex and log plex.
To improve performance, you can create the columns on disks on different buses. For more information about specifying the disk order for columns in a RAID5 plex, see Section 4.4.3.
The usage type for all volumes with a RAID5 plex is raid5, regardless of what the volume is used for. When you specify layout=raid5, LSM automatically applies the raid5 usage type.
To create an LSM volume that uses a RAID5 plex with the default values on any available disks in the disk group:
# volassist [-g disk_group] make volume length \
layout=raid5 [ncolumn=number_of_columns]
For example, to create a 250 MB volume named volr5 with the default number of columns on any available disks in the disk group:
# volassist make volr5 250m layout=raid5
The volume looks similar to the following:
Disk group: rootdg

V  NAME      USETYPE  KSTATE   STATE   LENGTH  READPOL   PREFPLEX
PL NAME      VOLUME   KSTATE   STATE   LENGTH  LAYOUT    NCOL/WID MODE
SD NAME      PLEX     DISK     DISKOFFS LENGTH [COL/]OFF DEVICE   MODE

v  volr5     raid5    ENABLED  ACTIVE  512064  RAID      -
pl volr5-01  volr5    ENABLED  ACTIVE  512064  RAID      8/32     RW
sd dsk1-01   volr5-01 dsk1     0       73152   0/0       dsk1     ENA
sd dsk2-01   volr5-01 dsk2     0       73152   1/0       dsk2     ENA
sd dsk3-01   volr5-01 dsk3     0       73152   2/0       dsk3     ENA
sd dsk4-01   volr5-01 dsk4     0       73152   3/0       dsk4     ENA
sd dsk5-01   volr5-01 dsk5     0       73152   4/0       dsk5     ENA
sd dsk6-01   volr5-01 dsk6     0       73152   5/0       dsk6     ENA
sd dsk7-01   volr5-01 dsk7     0       73152   6/0       dsk7     ENA
sd dsk9-01   volr5-01 dsk9     0       73152   7/0       dsk9     ENA
pl volr5-02  volr5    ENABLED  LOG     2560    CONCAT    -        RW
sd dsk10-01  volr5-02 dsk10    0       2560    0         dsk10    ENA
By default, LSM displays stripe width in blocks; 16K bytes is 32 blocks.
To create an LSM volume with a specific number of columns and stripe width:
# volassist [-g disk_group] make volume length \
layout=raid5 ncolumn=number_of_columns stwid=stripe_width
For example, to create a 250 MB volume named 5way with five columns and a stripe width of 32K bytes on any available disks in the rootdg disk group:
# volassist make 5way 250m layout=raid5 ncolumn=5 stwid=32k
The volume looks similar to the following:
Disk group: rootdg

V  NAME      USETYPE  KSTATE   STATE   LENGTH  READPOL   PREFPLEX
PL NAME      VOLUME   KSTATE   STATE   LENGTH  LAYOUT    NCOL/WID MODE
SD NAME      PLEX     DISK     DISKOFFS LENGTH [COL/]OFF DEVICE   MODE

v  5way      raid5    ENABLED  ACTIVE  512000  RAID      -
pl 5way-01   5way     ENABLED  ACTIVE  512000  RAID      5/64     RW
sd dsk9-02   5way-01  dsk9     32768   128000  0/0       dsk9     ENA
sd dsk10-02  5way-01  dsk10    32768   128000  1/0       dsk10    ENA
sd dsk11-02  5way-01  dsk11    32768   128000  2/0       dsk11    ENA
sd dsk8-03   5way-01  dsk8     35008   128000  3/0       dsk8     ENA
sd dsk15-02  5way-01  dsk15    85344   128000  4/0       dsk15    ENA
pl 5way-02   5way     ENABLED  LOG     3200    CONCAT    -        RW
sd dsk16-02  5way-02  dsk16    85344   3200    0         dsk16    ENA
By default, LSM displays stripe width in blocks; 32K bytes is 64 blocks.
To specify the disks for LSM to use:
# volassist [-g disk_group] make volume length \
layout=raid5 [ncolumn=number_of_columns] \
[stwid=stripe_width] disks
For example, to create a volume called 4way with a 4-column plex using the default stripe width on disks dsk12, dsk13, dsk14, dsk18, and dsk19 (for the log plex) in the rootdg disk group:
# volassist make 4way 250m layout=raid5 ncolumn=4 \
dsk12 dsk13 dsk14 dsk18 dsk19
The volume looks similar to the following:
Disk group: rootdg

V  NAME      USETYPE  KSTATE   STATE   LENGTH  READPOL   PREFPLEX
PL NAME      VOLUME   KSTATE   STATE   LENGTH  LAYOUT    NCOL/WID MODE
SD NAME      PLEX     DISK     DISKOFFS LENGTH [COL/]OFF DEVICE   MODE

v  4way      raid5    ENABLED  ACTIVE  512064  RAID      -
pl 4way-01   4way     ENABLED  ACTIVE  512064  RAID      4/32     RW
sd dsk12-01  4way-01  dsk12    0       170688  0/0       dsk12    ENA
sd dsk13-01  4way-01  dsk13    0       170688  1/0       dsk13    ENA
sd dsk14-01  4way-01  dsk14    0       170688  2/0       dsk14    ENA
sd dsk18-01  4way-01  dsk18    0       170688  3/0       dsk18    ENA
pl 4way-02   4way     ENABLED  LOG     1280    CONCAT    -        RW
sd dsk19-01  4way-02  dsk19    0       1280    0         dsk19    ENA
4.3.7 Creating LSM Volumes for Swap Space
To protect against system or cluster member crashes due to swap disk errors, you can create LSM mirrored volumes for swap space. For recommendations on the amount of swap space to configure, see the System Administration manual or the Cluster Administration manual.
HP recommends that you use multiple disks for secondary swap devices and add the devices as several individual volumes, instead of striping or concatenating them into a single large volume. This makes the swapping algorithm more efficient.
The way you use LSM volumes for swap space differs depending on the environment:
On a standalone system, to use LSM volumes for secondary swap, you must also use LSM for the root partition and primary swap space by encapsulating the boot disk (Section 3.4.1).
In a cluster, there is no clusterwide swap; each member has its own private swap devices. You can use LSM volumes for cluster members' swap space independent of whether or not the clusterwide root, /usr, and /var file system domains are under LSM control. You can encapsulate the swap devices for one or more members to LSM volumes (Section 3.4.3) and create additional swap volumes for one or more cluster members (Section 4.3.7.1).
All swap volumes for both standalone systems and cluster members must belong to the rootdg disk group. If there is not enough free disk space to create the volumes, add more disks to rootdg (see Section 5.2.2).
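For example, a sketch of adding an already initialized LSM disk (here assumed to be dsk8) to rootdg with the voldg command:
# voldg adddisk dsk8
The voldg command operates on rootdg by default; use -g disk_group for another disk group. See Section 5.2.2 for the full procedure.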
4.3.7.1 Creating Swap Volumes
Swap volumes, if mirrored, should not use dirty region logging (DRL). After you create a swap volume, you should also modify the volume's recovery policy so that LSM will not resynchronize the plexes after a system failure. On a cluster member, choose disks that are local to the member, if possible. The disks must belong to the rootdg disk group.
To create an LSM volume for secondary swap space on a standalone system or additional swap space on a cluster member:
Create the volume with no DRL plex, if mirrored. Assign the volume one of the following usage types:
On a standalone system, use the gen usage type for secondary swap volumes.
In a cluster, use the swap usage type for all members' swap volumes.
# volassist -U use_type make volume length [nmirror=count] \
[layout=nolog] [disks]
For example:
On a standalone system, to create a mirrored secondary swap volume in the rootdg disk group on any available disks in the disk group:
# volassist -U gen make swapvol_2 128m nmirror=2 \
layout=nolog
On a cluster member, to create a mirrored swap volume in the rootdg disk group using disks dsk4 and dsk5 (which are local to the member and belong to rootdg):
# volassist -U swap make member1_swap 128m nmirror=2 \
layout=nolog dsk4 dsk5
If the volume is mirrored, change the volume's recovery policy to prevent plex resynchronization after a system crash; for example:
# volume set start_opts=norecov swapvol_2
Make the LSM volume available as swap space using the swapon command; for example:
# swapon /dev/vol/rootdg/swapvol_2
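To confirm that the volume is now in use as swap space, you can display swap space utilization; for example:
# swapon -s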
Edit the sysconfigtab file to add the volume's device special file to the swapdevice kernel attribute value within the vm: section. For example:
vm:
    swapdevice = /dev/vol/rootdg/swapvol, /dev/vol/rootdg/swapvol_2
In a cluster, be sure to modify the appropriate member's file, which is a context-dependent symbolic link (CDSL) in the form /cluster/members/member{n}/boot_partition/etc/sysconfigtab.
4.3.7.2 Mirroring Swap Volumes
The following procedure applies only to secondary swap volumes on a standalone system and to any unmirrored swap volume on a cluster member.
Note
To mirror the primary swap volume on a standalone system, use the volrootmir command. See Section 3.4.2.1.3.
In a cluster, choose an LSM disk that belongs to the rootdg disk group and is local to the member whose swap volume you want to mirror. You can run the command from any cluster member.
To mirror a secondary swap volume on a standalone system or any unmirrored swap volume on a cluster member:
Mirror the volume. In a cluster, specify a disk that is local to the appropriate cluster member:
# volassist mirror volume [dskN]
For example:
To mirror a secondary swap volume (swapvol_2) on a standalone system using any available disk in rootdg:
# volassist mirror swapvol_2
To mirror a swap volume (joey_swap) for a cluster member using disk dsk5 (which is local to that member):
# volassist mirror joey_swap dsk5
Change the volume's recovery policy to prevent plex resynchronization after a system crash; for example:
# volume set start_opts=norecov swapvol_2
For more information on configuring additional swap space, see swapon(8) and sysconfig(8).
4.4 Creating LSM Volumes with Nondefault Properties
This section describes how to create LSM volumes with attributes or properties that you cannot always specify with the high-level commands. Only users who have a thorough understanding of LSM, and the need to create such volumes, are advised to use these procedures.
4.4.1 Creating a Striped Plex with Subdisks of Different Sizes
If you have LSM disks of different sizes and want to use space on particular disks to maximize your use of storage, you can use the low-level commands to manually create the subdisks and plex columns, then create the plex, and then create and start the volume.
The makeup of each column can differ, but each column should total the same number of sectors. For example, each column can contain a different number of subdisks, of different sizes.
Each subdisk should be a multiple of the stripe width (and, therefore, so is each column) so that writes align evenly to subdisk boundaries.
The following table shows how three columns that each total 256000 sectors (125 MB) can comprise different numbers and sizes of subdisks. The three columns make up one plex of 768000 sectors (375 MB). These subdisk names and sizes are used as examples in the following procedure.
Striped Plex: plex-01
Column 0 | Subdisks dsk3-01 and dsk4-01, 128000 sectors each | Total: 256000 sectors (125 MB)
Column 1 | Subdisks dsk6-01, dsk9-01, dsk11-01, and dsk12-01, 64000 sectors each | Total: 256000 sectors (125 MB)
Column 2 | Subdisks dsk13-01, dsk14-01, dsk15-01, dsk16-01, and dsk17-01, 51200 sectors each | Total: 256000 sectors (125 MB)
The following procedure shows how to make subdisks of different sizes, create a plex with columns of equal size, and create a volume using that plex. The plex uses the default stripe width of 64K bytes and has three columns, with the subdisk sizes shown in the previous table.
Each subdisk is on a different disk and starts at offset 0 in the public region; therefore you need specify only a length for the subdisk. (To create a subdisk on a disk that already has other subdisks, you must specify the offset, the starting point of the new subdisk, as well as its length.)
To create a volume with different sized subdisks:
Decide the size of the volume and the number of columns you want in the plex (or each plex, if mirrored); for the examples that follow, the volume is 375 MB, so each column is one-third of 375, or 125 MB.
Divide the column size (125 MB or 256000 sectors) by the number of intended subdisks per column (in the following examples, 2, 4, and 5) to determine the size of each subdisk. In the first column (two subdisks), each subdisk will be 256000/2, or 128000 sectors, and so on.
Alternatively, you can work backwards from the amount of space available on various disks, determine how large a subdisk you can create in that space that is a multiple of the stripe width, and calculate how many subdisks you need for each column. The space availability might influence the number of columns.
Create the subdisks:
# volmake sd subdisk_name disk len=length
For example:
# volmake sd dsk3-01 dsk3 len=128000
# volmake sd dsk4-01 dsk4 len=128000
# volmake sd dsk6-01 dsk6 len=64000
# volmake sd dsk9-01 dsk9 len=64000
# volmake sd dsk11-01 dsk11 len=64000
# volmake sd dsk12-01 dsk12 len=64000
# volmake sd dsk13-01 dsk13 len=51200
# volmake sd dsk14-01 dsk14 len=51200
# volmake sd dsk15-01 dsk15 len=51200
# volmake sd dsk16-01 dsk16 len=51200
# volmake sd dsk17-01 dsk17 len=51200
Create a plex with the desired number of columns and associate the appropriate subdisks to each column.
If there are many subdisks involved, you can create an "empty" plex first, then associate the subdisk groups in separate steps, one step for each column.
You do not need to specify a size for the empty plex; as you associate subdisk columns, the plex size is updated to map to the end of the longest column. When you create a plex manually, you must set a stripe width.
For example:
# volmake plex plex-01 layout=stripe ncolumn=3 stwidth=64k
# volsd assoc plex-01 dsk3-01:0 dsk4-01:0
# volsd assoc plex-01 dsk6-01:1 dsk9-01:1 dsk11-01:1 dsk12-01:1
# volsd assoc plex-01 dsk13-01:2 dsk14-01:2 dsk15-01:2 \
dsk16-01:2 dsk17-01:2
The plex looks similar to the following:
# volprint -pht plex-01
Disk group: rootdg

PL NAME     VOLUME   KSTATE    STATE  LENGTH  LAYOUT  NCOL/WID  MODE
SD NAME     PLEX     DISK      DISKOFFS LENGTH [COL/]OFF DEVICE MODE

pl plex-01  -        DISABLED  -      768000  STRIPE  3/128     RW
sd dsk3-01  plex-01  dsk3      0      128000  0/0      dsk3     ENA
sd dsk4-01  plex-01  dsk4      0      128000  0/128000 dsk4     ENA
sd dsk6-01  plex-01  dsk6      0      64000   1/0      dsk6     ENA
sd dsk9-01  plex-01  dsk9      0      64000   1/64000  dsk9     ENA
sd dsk11-01 plex-01  dsk11     0      64000   1/128000 dsk11    ENA
sd dsk12-01 plex-01  dsk12     0      64000   1/192000 dsk12    ENA
sd dsk13-01 plex-01  dsk13     0      51200   2/0      dsk13    ENA
sd dsk14-01 plex-01  dsk14     0      51200   2/51200  dsk14    ENA
sd dsk15-01 plex-01  dsk15     0      51200   2/102400 dsk15    ENA
sd dsk16-01 plex-01  dsk16     0      51200   2/153600 dsk16    ENA
sd dsk17-01 plex-01  dsk17     0      51200   2/204800 dsk17    ENA
Create the volume using the plex:
# volmake -U use_type vol volume_name plex=plex_name
For example:
# volmake -U fsgen vol vol-01 plex=plex-01
Start the volume:
# volume start vol-01
The volume is started and ready for use, and looks similar to the following:
# volprint -vht vol-01
Disk group: rootdg

V  NAME     USETYPE  KSTATE   STATE   LENGTH  READPOL  PREFPLEX
PL NAME     VOLUME   KSTATE   STATE   LENGTH  LAYOUT   NCOL/WID MODE
SD NAME     PLEX     DISK     DISKOFFS LENGTH [COL/]OFF DEVICE  MODE

v  vol-01   fsgen    ENABLED  ACTIVE  768000  ROUND    -
pl plex-01  vol-01   ENABLED  ACTIVE  768000  STRIPE   3/128    RW
sd dsk3-01  plex-01  dsk3     0       128000  0/0      dsk3     ENA
sd dsk4-01  plex-01  dsk4     0       128000  0/128000 dsk4     ENA
sd dsk6-01  plex-01  dsk6     0       64000   1/0      dsk6     ENA
sd dsk9-01  plex-01  dsk9     0       64000   1/64000  dsk9     ENA
sd dsk11-01 plex-01  dsk11    0       64000   1/128000 dsk11    ENA
sd dsk12-01 plex-01  dsk12    0       64000   1/192000 dsk12    ENA
sd dsk13-01 plex-01  dsk13    0       51200   2/0      dsk13    ENA
sd dsk14-01 plex-01  dsk14    0       51200   2/51200  dsk14    ENA
sd dsk15-01 plex-01  dsk15    0       51200   2/102400 dsk15    ENA
sd dsk16-01 plex-01  dsk16    0       51200   2/153600 dsk16    ENA
sd dsk17-01 plex-01  dsk17    0       51200   2/204800 dsk17    ENA
4.4.2 Creating a Striped Plex with Disks on Different Buses
You can improve performance for a volume with striped plexes by striping each plex over disks on different buses. If you have enough buses, you can mirror the volume on different buses from those supporting the first plex. For example, if you had 12 buses, you could stripe one plex over the first six buses, and stripe a second plex over the other six buses.
Caution
For mirrored volumes, do not create all the data plexes on the same bus; doing so eliminates the availability benefit of mirroring. If the bus fails, you lose the entire volume.
If you have limited buses and want to create mirrored, striped volumes, you should create one striped plex using disks on the same bus and create the second striped plex using disks on another bus. If one bus fails, the volume still has a plex on another bus.
If the volume will have only one striped plex, you can stripe the plex over all the available buses to improve performance.
Before you begin, decide which LSM disks you want to use, identify which bus each disk is on, and plan how you want to create the volume based on how you want LSM to stripe and mirror the data over the buses.
To create a volume with plexes that are striped down buses and mirrored across buses, you must use low-level commands to create each subdisk, create each plex from those subdisks, create the volume from the plexes, add a log plex, and start the volume.
The following procedure creates a volume with two plexes and a DRL plex using these disks and buses in the rootdg disk group:
Plex plex-01 (buses 1 and 2) | Plex plex-02 (buses 3 and 4)
Bus 1 | Bus 2 | Bus 3 | Bus 4
dsk0 | dsk7 | dsk14 | dsk21
dsk1 | dsk8 | dsk15 | dsk22
dsk2 | dsk9 | dsk16 | dsk23
dsk3 | dsk10 | dsk17 | dsk24
dsk4 (for the DRL plex) | | |
The first plex will stripe alternately over buses 1 and 2. The second plex will stripe alternately over buses 3 and 4. The log plex will be on bus 1, the same as the first plex, but on a different disk. Although recommended, it is not always possible to place the log plex on a different bus from the data plexes.
To create an LSM mirrored, striped volume with each plex on different buses:
Create the subdisks on the disks on bus 1 and 2; for example:
# volmake sd dsk0-01 dsk0 len=16m
# volmake sd dsk1-01 dsk1 len=16m
# volmake sd dsk2-01 dsk2 len=16m
# volmake sd dsk3-01 dsk3 len=16m
# volmake sd dsk7-01 dsk7 len=16m
# volmake sd dsk8-01 dsk8 len=16m
# volmake sd dsk9-01 dsk9 len=16m
# volmake sd dsk10-01 dsk10 len=16m
Create the striped plex, specifying the subdisks in alternating bus order; for example:
# volmake plex plex-01 layout=stripe stwidth=64k \
sd=dsk0-01,dsk7-01,dsk1-01,dsk8-01,dsk2-01,dsk9-01,\
dsk3-01,dsk10-01
This creates an 8-column striped plex that alternates between buses 1 and 2. It is more efficient to stripe over all the disks in a plex.
The plex looks similar to the following:
# volprint -pht plex-01
Disk group: rootdg

PL NAME     VOLUME   KSTATE    STATE  LENGTH  LAYOUT  NCOL/WID  MODE
SD NAME     PLEX     DISK      DISKOFFS LENGTH [COL/]OFF DEVICE MODE

pl plex-01  -        DISABLED  -      262144  STRIPE  8/128     RW
sd dsk0-01  plex-01  dsk0      0      32768   0/0     dsk0      ENA
sd dsk7-01  plex-01  dsk7      0      32768   1/0     dsk7      ENA
sd dsk1-01  plex-01  dsk1      0      32768   2/0     dsk1      ENA
sd dsk8-01  plex-01  dsk8      0      32768   3/0     dsk8      ENA
sd dsk2-01  plex-01  dsk2      0      32768   4/0     dsk2      ENA
sd dsk9-01  plex-01  dsk9      0      32768   5/0     dsk9      ENA
sd dsk3-01  plex-01  dsk3      0      32768   6/0     dsk3      ENA
sd dsk10-01 plex-01  dsk10     0      32768   7/0     dsk10     ENA
Create the subdisks for the second data plex on the disks on bus 3 and 4; for example:
# volmake sd dsk14-01 dsk14 len=16m
# volmake sd dsk15-01 dsk15 len=16m
# volmake sd dsk16-01 dsk16 len=16m
# volmake sd dsk17-01 dsk17 len=16m
# volmake sd dsk21-01 dsk21 len=16m
# volmake sd dsk22-01 dsk22 len=16m
# volmake sd dsk23-01 dsk23 len=16m
# volmake sd dsk24-01 dsk24 len=16m
Create the second data plex, specifying the subdisks in alternating bus order; for example:
# volmake plex plex-02 layout=stripe stwidth=64k \
sd=dsk14-01,dsk21-01,dsk15-01,dsk22-01,dsk16-01,dsk23-01,\
dsk17-01,dsk24-01
This creates an 8-column striped plex that alternates between buses 3 and 4.
The plex looks similar to the following:
# volprint -pht plex-02
Disk group: rootdg

PL NAME      VOLUME   KSTATE    STATE  LENGTH  LAYOUT  NCOL/WID  MODE
SD NAME      PLEX     DISK      DISKOFFS LENGTH [COL/]OFF DEVICE MODE

pl plex-02   -        DISABLED  -      262144  STRIPE  8/128     RW
sd dsk14-01  plex-02  dsk14     0      32768   0/0     dsk14     ENA
sd dsk21-01  plex-02  dsk21     0      32768   1/0     dsk21     ENA
sd dsk15-01  plex-02  dsk15     0      32768   2/0     dsk15     ENA
sd dsk22-01  plex-02  dsk22     0      32768   3/0     dsk22     ENA
sd dsk16-01  plex-02  dsk16     0      32768   4/0     dsk16     ENA
sd dsk23-01  plex-02  dsk23     0      32768   5/0     dsk23     ENA
sd dsk17-01  plex-02  dsk17     0      32768   6/0     dsk17     ENA
sd dsk24-01  plex-02  dsk24     0      32768   7/0     dsk24     ENA
Create the LSM volume using both data plexes; for example:
# volmake vol vol_mirr plex=plex-01,plex-02
The volume looks similar to the following:
# volprint -vht vol_mirr
Disk group: rootdg

V  NAME      USETYPE  KSTATE    STATE  LENGTH  READPOL  PREFPLEX
PL NAME      VOLUME   KSTATE    STATE  LENGTH  LAYOUT   NCOL/WID MODE
SD NAME      PLEX     DISK      DISKOFFS LENGTH [COL/]OFF DEVICE MODE

v  vol_mirr  fsgen    DISABLED  EMPTY  262144  ROUND    -
pl plex-01   vol_mirr DISABLED  EMPTY  262144  STRIPE   8/128    RW
sd dsk0-01   plex-01  dsk0      0      32768   0/0      dsk0     ENA
sd dsk7-01   plex-01  dsk7      0      32768   1/0      dsk7     ENA
sd dsk1-01   plex-01  dsk1      0      32768   2/0      dsk1     ENA
sd dsk8-01   plex-01  dsk8      0      32768   3/0      dsk8     ENA
sd dsk2-01   plex-01  dsk2      0      32768   4/0      dsk2     ENA
sd dsk9-01   plex-01  dsk9      0      32768   5/0      dsk9     ENA
sd dsk3-01   plex-01  dsk3      0      32768   6/0      dsk3     ENA
sd dsk10-01  plex-01  dsk10     0      32768   7/0      dsk10    ENA
pl plex-02   vol_mirr DISABLED  EMPTY  262144  STRIPE   8/128    RW
sd dsk14-01  plex-02  dsk14     0      32768   0/0      dsk14    ENA
sd dsk21-01  plex-02  dsk21     0      32768   1/0      dsk21    ENA
sd dsk15-01  plex-02  dsk15     0      32768   2/0      dsk15    ENA
sd dsk22-01  plex-02  dsk22     0      32768   3/0      dsk22    ENA
sd dsk16-01  plex-02  dsk16     0      32768   4/0      dsk16    ENA
sd dsk23-01  plex-02  dsk23     0      32768   5/0      dsk23    ENA
sd dsk17-01  plex-02  dsk17     0      32768   6/0      dsk17    ENA
sd dsk24-01  plex-02  dsk24     0      32768   7/0      dsk24    ENA
Add a DRL plex to the volume, if possible, specifying a disk that is not used by one of the data plexes; for example:
# volassist addlog vol_mirr dsk4
The volume looks similar to the following:
Disk group: rootdg

V  NAME         USETYPE     KSTATE    STATE    LENGTH  READPOL  PREFPLEX
PL NAME         VOLUME      KSTATE    STATE    LENGTH  LAYOUT   NCOL/WID MODE
SD NAME         PLEX        DISK      DISKOFFS LENGTH  [COL/]OFF DEVICE  MODE

v  vol_mirr     fsgen       DISABLED  EMPTY    262144  ROUND    -
pl plex-01      vol_mirr    DISABLED  EMPTY    262144  STRIPE   8/128    RW
sd dsk0-01      plex-01     dsk0      0        32768   0/0      dsk0     ENA
sd dsk7-01      plex-01     dsk7      0        32768   1/0      dsk7     ENA
sd dsk1-01      plex-01     dsk1      0        32768   2/0      dsk1     ENA
sd dsk8-01      plex-01     dsk8      0        32768   3/0      dsk8     ENA
sd dsk2-01      plex-01     dsk2      0        32768   4/0      dsk2     ENA
sd dsk9-01      plex-01     dsk9      0        32768   5/0      dsk9     ENA
sd dsk3-01      plex-01     dsk3      0        32768   6/0      dsk3     ENA
sd dsk10-01     plex-01     dsk10     0        32768   7/0      dsk10    ENA
pl plex-02      vol_mirr    DISABLED  EMPTY    262144  STRIPE   8/128    RW
sd dsk14-01     plex-02     dsk14     0        32768   0/0      dsk14    ENA
sd dsk21-01     plex-02     dsk21     0        32768   1/0      dsk21    ENA
sd dsk15-01     plex-02     dsk15     0        32768   2/0      dsk15    ENA
sd dsk22-01     plex-02     dsk22     0        32768   3/0      dsk22    ENA
sd dsk16-01     plex-02     dsk16     0        32768   4/0      dsk16    ENA
sd dsk23-01     plex-02     dsk23     0        32768   5/0      dsk23    ENA
sd dsk17-01     plex-02     dsk17     0        32768   6/0      dsk17    ENA
sd dsk24-01     plex-02     dsk24     0        32768   7/0      dsk24    ENA
pl vol_mirr-01  vol_mirr    DISABLED  EMPTY    LOGONLY CONCAT   -        RW
sd dsk4-01      vol_mirr-01 dsk4      0        65      LOG      dsk4     ENA
Start the LSM volume; for example:
# volume start vol_mirr
The volume is ready for use.
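To confirm that the volume and both data plexes are enabled, you can display the volume again; for example:
# volprint -vht vol_mirr
The KSTATE column for the volume and its plexes should show ENABLED.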
4.4.3 Creating a RAID 5 Plex with Disks on Different Buses
You can improve performance for a volume with a RAID 5 plex by striping the plex over disks on different buses.
To create the volume with a plex striped over only one disk on each bus, you can use the volassist command and specify the disks (see Section 4.3.6 and the sketch after this paragraph).
To create the volume with a plex striped over more than one disk per bus, you must use low-level commands to create the subdisks for each column, create the plex, and create and start the volume, because the volassist command might not use the disks in the order you specify on the command line.
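For comparison, in the one-disk-per-bus case a single command similar to the following sketch creates the volume in one step; the volume name, length, and the layout and nstripe attributes are assumptions based on volassist(8):
# volassist make vol_r5 1g layout=raid5 nstripe=4 dsk1 dsk4 dsk8 dsk11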
Before you begin, decide which LSM disks you want to use, identify which bus each disk is on, and plan how you want to create the volume based on how you want LSM to stripe the RAID 5 data plex over the buses.
Each column of subdisks should be the same size, and the column size must be a multiple of the data unit size (stripe width). For example, a stripe width of 16K bytes for a RAID 5 plex corresponds to 32 blocks (sectors), so the total length of the subdisks in each column should be a multiple of 32 blocks.
If each column consists of one subdisk (the typical configuration), the subdisk length should be a multiple of 32 blocks. If a column consists of two subdisks, the subdisks can be different sizes as long as their total length is a multiple of 32 blocks.
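As a quick sanity check, assuming the standard 512-byte sector size, you can do the arithmetic from the shell:
# expr 16 \* 1024 / 512
32
# expr 2048 % 32
0
A 16K-byte stripe width is 32 sectors, and a 2048-block subdisk divides evenly into 64 data units.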
You can optionally identify a separate disk, ideally on a different bus, for the RAID 5 log plex, or allow LSM to use any available disk.
Caution
If both the log plex and one column of the data plex are on the same bus and the bus fails, you lose the entire volume. If possible, put the log plex on a bus that does not contain a disk used in the data plex.
The following procedure creates a RAID 5 volume using these disks and buses in the rootdg disk group:
Bus 1    Bus 2    Bus 3    Bus 4    Bus 5
dsk1     dsk4     dsk8     dsk11    dsk22 (for the RAID 5 log plex)
dsk2     dsk5     dsk9     dsk12
The finished volume will stripe data and parity alternately over buses 1 through 4. The log plex will be on bus 5.
To create a RAID 5 plex with disks on different buses:
Create the subdisks; for example:
# volmake sd dsk1-01 dsk1 len=1m
# volmake sd dsk2-01 dsk2 len=1m
# volmake sd dsk4-01 dsk4 len=1m
# volmake sd dsk5-01 dsk5 len=1m
# volmake sd dsk8-01 dsk8 len=1m
# volmake sd dsk9-01 dsk9 len=1m
# volmake sd dsk11-01 dsk11 len=1m
# volmake sd dsk12-01 dsk12 len=1m
Create the RAID 5 data plex, specifying the subdisks in an order that alternates between the buses in the order you want; for example:
# volmake plex plex_r5 layout=raid5 stwidth=16k \
sd=dsk1-01,dsk4-01,dsk8-01,dsk11-01,dsk2-01,dsk5-01,\
dsk9-01,dsk12-01
This creates an eight-column RAID 5 data plex named plex_r5, where the columns proceed from bus 1 through bus 4, then repeat.
The plex looks similar to the following:
Disk group: rootdg

PL NAME      VOLUME    KSTATE   STATE    LENGTH  LAYOUT  NCOL/WID  MODE
SD NAME      PLEX      DISK     DISKOFFS LENGTH  [COL/]OFF  DEVICE MODE

pl plex_r5   -         DISABLED -        14336   RAID    8/32      RW
sd dsk1-01   plex_r5   dsk1     0        2048    0/0        dsk1   ENA
sd dsk4-01   plex_r5   dsk4     0        2048    1/0        dsk4   ENA
sd dsk8-01   plex_r5   dsk8     0        2048    2/0        dsk8   ENA
sd dsk11-01  plex_r5   dsk11    0        2048    3/0        dsk11  ENA
sd dsk2-01   plex_r5   dsk2     0        2048    4/0        dsk2   ENA
sd dsk5-01   plex_r5   dsk5     0        2048    5/0        dsk5   ENA
sd dsk9-01   plex_r5   dsk9     0        2048    6/0        dsk9   ENA
sd dsk12-01  plex_r5   dsk12    0        2048    7/0        dsk12  ENA
Create the LSM volume using the data plex; for example:
# volmake -U raid5 vol volr5 plex=plex_r5
This creates an LSM volume named volr5 from plex plex_r5, with a usage type of raid5 (required for all volumes with a RAID 5 plex).
Add a RAID 5 log plex to the volume, optionally specifying the disk (by default, LSM uses a disk not already used in the volume, if available); for example:
# volassist addlog volr5 dsk22
Start the LSM volume; for example:
# volume start volr5
The volume is ready for use and looks similar to the following:
Disk group: rootdg

V  NAME      USETYPE   KSTATE   STATE    LENGTH  READPOL  PREFPLEX
PL NAME      VOLUME    KSTATE   STATE    LENGTH  LAYOUT   NCOL/WID MODE
SD NAME      PLEX      DISK     DISKOFFS LENGTH  [COL/]OFF  DEVICE MODE

v  volr5     raid5     ENABLED  ACTIVE   14336   RAID     -
pl plex_r5   volr5     ENABLED  ACTIVE   14336   RAID     8/32     RW
sd dsk1-01   plex_r5   dsk1     0        2048    0/0        dsk1   ENA
sd dsk4-01   plex_r5   dsk4     0        2048    1/0        dsk4   ENA
sd dsk8-01   plex_r5   dsk8     0        2048    2/0        dsk8   ENA
sd dsk11-01  plex_r5   dsk11    0        2048    3/0        dsk11  ENA
sd dsk2-01   plex_r5   dsk2     0        2048    4/0        dsk2   ENA
sd dsk5-01   plex_r5   dsk5     0        2048    5/0        dsk5   ENA
sd dsk9-01   plex_r5   dsk9     0        2048    6/0        dsk9   ENA
sd dsk12-01  plex_r5   dsk12    0        2048    7/0        dsk12  ENA
pl volr5-01  volr5     ENABLED  LOG      2560    CONCAT   -        RW
sd dsk22-01  volr5-01  dsk22    0        2560    0          dsk22  ENA
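With the volume enabled, you can place a file system on it as described in Section 4.5; for example, a sketch using UFS:
# newfs /dev/rvol/rootdg/volr5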
4.5 Configuring File Systems to Use LSM Volumes
After you create an LSM volume, you can use it the same way you use a disk partition. Because LSM uses the same interfaces as disk device drivers, you can specify an LSM volume in any operation where you can specify a disk or disk partition.
The following sections describe how to configure AdvFS and UFS to use an LSM volume.
4.5.1 Configuring AdvFS Domains to Use LSM Volumes
AdvFS treats LSM volumes as it does any other storage device. (For more information on creating an AdvFS domain, see the AdvFS Administration manual.)
Note
If an AdvFS domain needs more storage, you can either create a new LSM volume and add it to the domain with the AdvFS addvol command, or grow an LSM volume that is already part of the domain (Section 5.4.9). AdvFS domains can use a combination of LSM volumes and physical storage. For more information, see the AdvFS Administration manual.
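For example, the following sketch creates a hypothetical domain dom1 on an LSM volume, creates and mounts a fileset, and later extends the domain with a second volume; all names are illustrative:
# mkfdmn /dev/vol/dg1/vol_advfs dom1
# mkfset dom1 fset1
# mount dom1#fset1 /mnt
# addvol /dev/vol/dg1/vol2 dom1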
4.5.2 Configuring UFS File Systems to Use LSM Volumes
To configure a UFS file system to use an LSM volume:
Create a file system using the LSM disk group and volume name:
# newfs [options] /dev/rvol/disk_group/volume
The following example creates a UFS file system on an LSM volume named vol_ufs in the dg1 disk group:
# newfs /dev/rvol/dg1/vol_ufs
You do not need to specify the name of the disk group for LSM volumes in the rootdg disk group. For more information, see newfs(8).
Use the LSM block special device name to mount the file system:
# mount /dev/vol/disk_group/volume /mount_point
For example:
# mount /dev/vol/dg1/vol_ufs /mnt2
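To mount the file system automatically at boot time, you can add an entry for the LSM block special device to the /etc/fstab file; a sketch (adjust the options for your site):
/dev/vol/dg1/vol_ufs  /mnt2  ufs  rw  1  2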
4.6 Creating LSM Volumes for Existing Data
You can use LSM to manage existing data by encapsulating the disk or disk partition containing the data. LSM converts the disk or partition to an LSM nopriv disk and creates an LSM volume from the encapsulated disk or disk partition.
You can encapsulate:
Disks or disk partitions, including UFS file systems (Section 4.6.1)
AdvFS domains (Section 4.6.2)
The boot disk and primary swap space on a standalone system (Section 3.4.1)
One or more cluster members' swap devices (Section 3.4.3)
4.6.1 Encapsulating Disks or Disk Partitions
Encapsulating existing data is a two-part process: you create the encapsulation scripts with the volencap command, then run the volreconfig command to execute those scripts.
The encapsulation procedure configures the named disks and disk partitions as LSM nopriv disks, using information in the disk label and in the /etc/fstab file, and creates a separate LSM volume from each nopriv disk.
By default, the nopriv disk and volume belong to the rootdg disk group, unless you specify a different disk group.
If you encapsulate an entire disk (by not specifying a partition letter), such as dsk3, LSM creates a nopriv disk and a volume for each in-use partition.
Encapsulation provides a way to put existing data under LSM control, which lets you use LSM to mirror the data in the volume and thereby provide redundancy and high availability. However, recovery of a failed LSM nopriv disk can be complex, and there are other cases where nopriv disks complicate matters.
After you encapsulate a disk or disk partition, immediately move the volumes to LSM sliced or simple disks in the disk group if possible, before mirroring or performing other operations on the volumes.
After the encapsulation, entries in the /etc/fstab file or in the /etc/sysconfigtab file are changed to use the LSM volume name instead of the block device name of the disk or disk partition.
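For instance, an /etc/fstab entry for a UFS file system on dsk3g might change along these lines; the volume name LSM assigns depends on the encapsulated partition, so the name shown here is illustrative:
Before encapsulation:
/dev/disk/dsk3g  /data  ufs  rw  1  2
After encapsulation:
/dev/vol/rootdg/vol-dsk3g  /data  ufs  rw  1  2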
To encapsulate a disk or disk partition:
Back up the data on the disk or disk partition to be encapsulated.
Unmount the disk or partition, or take the data off line. If you cannot, LSM must restart the system to complete the encapsulation procedure.
Create the LSM encapsulation script:
# volencap [-g disk_group] {disk|partition}
The following example creates an encapsulation script for a disk named dsk3:
# volencap dsk3
Note
Although you can encapsulate several disks or disk partitions at the same time, we recommend that you encapsulate each disk or disk partition separately.
Complete the encapsulation process:
# volreconfig
If the encapsulated disk or disk partition is in use, the volreconfig command asks whether to restart the system now or later.
Optionally (but recommended), move the volume to sliced or simple disks in the same disk group before mirroring the volume or performing other operations. (See Section 5.1.5.)
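After completing the procedure, you can confirm the encapsulation by listing the LSM disks and the new volume; for example:
# voldisk list
# volprint -vht
The voldisk list output shows the encapsulated disk or partition with the nopriv type, and volprint shows the volume created from it.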
4.6.2 Creating LSM Volumes for AdvFS Domains
You can place the storage for an existing AdvFS domain under LSM control by either encapsulating the domain or migrating the domain to an LSM volume.
Encapsulating an AdvFS domain, or more precisely, encapsulating the storage in use by the domain, creates an LSM volume for each disk or partition in the domain. If you cannot unmount the filesets before performing the encapsulation, LSM must restart the system to complete the process.
Note
If an AdvFS domain consists of one disk or partition, you can encapsulate the disk or partition (Section 4.6.1).
When you encapsulate an AdvFS domain, LSM changes the links in the /etc/fdmns directory to point to the LSM volumes.
Migrating an AdvFS domain creates an LSM volume on disks that you specify, moves the domain data to the new volume, and removes the original disks from the domain. The disks are no longer in use by the domain after the migration completes.
Migrating a domain does not require you to unmount filesets or restart the system, but it temporarily uses additional disk space until the migration is complete.
No mount point changes are necessary during encapsulation or migration, because the mounted filesets are abstractions layered on the domain. The domain can be activated normally after the encapsulation or migration process completes. After the domain is activated, the filesets remain unchanged, and the result of encapsulation or migration is transparent to AdvFS domain users.
4.6.2.1 Encapsulating an AdvFS Domain
Encapsulation provides a way to put existing data under LSM control, which lets you use LSM to mirror the data in the volume to provide redundancy and high availability. However, encapsulation creates nopriv disks; recovery of a failed nopriv disk can be complex, and there are other cases where nopriv disks complicate matters.
After you encapsulate a domain, immediately move the volumes to LSM sliced or simple disks in the disk group if possible, before mirroring or performing other operations on the volumes.
To encapsulate an AdvFS domain:
Back up the data in the AdvFS domain with the vdump utility.
Unmount all filesets.
If the domain is in use (you cannot unmount the filesets), you can create the encapsulation script (step 3) and run volreconfig (step 4) when convenient to complete the encapsulation procedure.
Create the LSM encapsulation script:
# volencap domain
The following example creates an encapsulation script for an AdvFS domain named dom1:
# volencap dom1
Complete the encapsulation procedure:
# volreconfig
If the AdvFS domain is mounted, the volreconfig command prompts you to restart the system. On successful creation of the LSM volumes, the /etc/fdmns directory is updated; you can verify this as shown after this procedure.
Optionally (but recommended), move the volume to sliced or simple disks in the same disk group before mirroring the volume or performing other operations. (See Section 5.1.5.)
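To verify that the domain now uses the LSM volumes, list the domain's links; for example, using the domain from the previous steps:
# ls -l /etc/fdmns/dom1
Each entry should now be a symbolic link to a device under /dev/vol rather than to a disk partition.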
4.6.2.2 Migrating an AdvFS Domain
The volmigrate command lets you migrate any AdvFS domain (except for the root_domain on a standalone system) to an LSM volume. This operation uses disks other than those the domain originally uses, and therefore does not require a restart.
The volmigrate command creates a volume with the properties you specify, including:
The disk group in which to create the volume (based on the disks you specify).
The name of the volume. (The default is the name of the domain with the suffix vol.)
The number of stripe columns and mirrors in the volume.
Striping improves read performance, and mirroring ensures data availability in the event of a disk failure.
There must be sufficient LSM disks in the same disk group, and the disks must be large enough to contain the domain. For more information on disk requirements and the options for striping and mirroring, see volmigrate(8).
To migrate a domain to an LSM volume:
# volmigrate [-g disk_group] [-m num_mirrors] [-s num_stripes] \
domain disk_media_name...
The volmigrate command creates a volume with the specified characteristics, moves the data from the domain to the volume, removes the original disk or disks from the domain, and leaves those disks unused.
The volume is started and ready for use, and no restart is required.
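For example, the following hypothetical command migrates a domain named dom1 to a new volume mirrored across two disks in the dg1 disk group; the domain and disk names are illustrative:
# volmigrate -g dg1 -m 2 dom1 dsk5 dsk6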