6    Configuring LAN Hardware as the Cluster Interconnect

This chapter provides basic information on how to configure local area network (LAN) hardware for use as a cluster interconnect. It discusses the following topics:

•  Configuration guidelines (Section 6.1)

•  Setting the Ethernet switch address aging time (Section 6.2)

•  LAN interconnect configurations (Section 6.3)

This chapter focuses on configuring LAN hardware as a cluster interconnect.

6.1    Configuration Guidelines

Any Ethernet adapter, switch, or hub that works in a standard LAN at 100 Mb/s or 1000 Mb/s (Gigabit Ethernet) works within a LAN interconnect.

Note

Fiber Distributed Data Interface (FDDI), ATM LAN Emulation (LANE), and 10 Mb/s Ethernet are not supported in a LAN interconnect.

The following features are required of Ethernet hardware participating in a cluster LAN interconnect:

6.2    Set Ethernet Switch Address Aging to 15 Seconds

Ethernet switches maintain tables that associate media access control (MAC) addresses (and virtual LAN (VLAN) identifiers) with ports, allowing the switches to forward packets efficiently. These forwarding databases (also known as unicast address tables) provide a mechanism for setting the time interval after which dynamically learned forwarding information grows stale and is invalidated. This interval is sometimes referred to as the aging time.

For any Ethernet switch participating in a LAN interconnect, set its aging time to 15 seconds.

Failure to do so may cause the switch to continue forwarding packets destined for a given MAC address to the port listed in the forwarding table even after that MAC address has moved to another port (for example, due to NetRAIN failover). This can disrupt cluster communication and result in one or more nodes being removed from the cluster. The consequence may be that one or more nodes hang due to loss of quorum, or that a node panics with one of several messages. For example:

 CNX MGR: this node removed from cluster
 
 CNX QDISK: Yielding to foreign owner
 
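The following Python sketch (illustrative only; it is not switch firmware, and the names are ours) models the forwarding-table behavior described above. Until a moved MAC address is relearned on its new port or its old entry ages out, lookups keep returning the stale port, so a short aging time such as 15 seconds limits how long traffic can be misdirected after a NetRAIN failover.

    # Illustrative model of a switch forwarding database with address aging.
    import time

    AGING_TIME = 15          # seconds, as recommended above

    class ForwardingTable:
        def __init__(self):
            self.entries = {}                    # MAC -> (port, last seen)

        def learn(self, mac, port):
            """Record the port on which a source MAC address was last seen."""
            self.entries[mac] = (port, time.time())

        def lookup(self, mac):
            """Return the forwarding port, or None if unknown or aged out."""
            if mac not in self.entries:
                return None                      # unknown: flood the frame
            port, seen = self.entries[mac]
            if time.time() - seen > AGING_TIME:
                del self.entries[mac]            # stale entry invalidated
                return None
            return port                          # may still be the old port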

6.3    LAN Interconnect Configurations

TruCluster Server currently supports up to eight members in a cluster, regardless of whether the cluster interconnect is based on a LAN or on Memory Channel. Chapter 1 illustrates some generic cluster configurations using either the Memory Channel or LAN interconnect. The following sections supplement that chapter by discussing the following LAN interconnect configurations:

•  Two cluster members directly connected by a single crossover cable (Section 6.3.1)

•  A cluster using a single Ethernet switch (Section 6.3.2)

•  A cluster using fully redundant LAN interconnect hardware (Section 6.3.3)

•  Configurations that support Ethernet hubs (Section 6.3.4)

•  Clusters of AlphaServer DS10L systems (Section 6.3.5)

6.3.1    Two Cluster Members Directly Connected by a Single Crossover Cable

You can configure a LAN interconnect in a two-member cluster by using a single crossover cable to connect the Ethernet adapter of one member to that of the other, as shown in Figure 6-1. (See the Cluster Installation manual discussion of cluster interconnect IP addresses for an explanation of the IP addresses shown in the figure.)

Figure 6-1:  Two Cluster Members Directly Connected by a Single Crossover Cable

Note

A crossover cable for point-to-point Ethernet connections is required to directly connect the network adapters of two members when no switch or hub is configured between them.

From a member's perspective, because this cluster does not employ redundant LAN interconnect components (each member has a single Ethernet adapter and a single cable connects the two members), a break in the LAN interconnect connection (for example, the servicing of a member's Ethernet adapter or a detached cable) causes that member to leave the cluster. However, if you configure a voting quorum disk in this cluster, the cluster itself will survive the failure of either member, the failure of the quorum disk, or a break in the LAN interconnect connection. Similarly, if you configure one member with a vote and the other with no votes, the cluster will survive the failure of the nonvoting member or of its LAN interconnect connection.
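As a rough illustration of the vote arithmetic behind these statements, the following Python sketch computes the number of votes required for quorum. The function name is ours, and the rule shown (expected votes plus two, divided by two and truncated) is a simplification that is consistent with the failure cases described in this section; see the cluster documentation for the authoritative algorithm.

    # Minimal sketch; this is not the connection manager's actual code.
    def quorum_votes(expected_votes):
        """Votes that must remain visible for the cluster to keep running."""
        return (expected_votes + 2) // 2

    # Two one-vote members plus a one-vote quorum disk:
    print(quorum_votes(3))   # 2: losing either member or the quorum disk
                             # leaves 2 of 3 votes, so the cluster survives.

    # One voting member and one nonvoting member, no quorum disk:
    print(quorum_votes(1))   # 1: the cluster survives the loss of the
                             # nonvoting member or of its connection.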

You can expand this configuration by adding a switch between the two members. A switch is required in the following cases:

6.3.2    Cluster Using a Single Ethernet Switch

You can configure a cluster with a single Ethernet hub or switch connecting two through eight members. For optimal performance, we recommend a switch for clusters of three or more members.

Any member that has multiple Ethernet adapters can have them configured as a NetRAIN set, which serves as that member's LAN interconnect interface. Doing so allows the member to remain in the cluster even if it loses one of its connections to the LAN interconnect.

The three-member cluster in Figure 6-2 uses a LAN interconnect incorporating a single Ethernet switch. Each member's cluster interconnect is a NetRAIN virtual interface consisting of two network adapters. (See the Cluster Installation manual discussion of cluster interconnect IP addresses for an explanation of the IP addresses shown in the figure.)

Figure 6-2:  Three-Member Cluster Using a Single Ethernet Switch

Assuming that each member has one vote, this cluster can survive the failure of a single member or a single break in a member's LAN interconnect connection (for example, the servicing of an Ethernet adapter or a detached cable). From a member's perspective, any member can survive a single break in its LAN interconnect connection. However, the servicing or failure of the switch will make the cluster nonoperational. The switch remains a single point of failure in a cluster of any size, except when it is used in one of the recommended two-member configurations using a quorum disk discussed in Section 6.3.1. For this reason, the cluster in Figure 6-2 is not a recommended configuration.
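For reference, the same illustrative vote arithmetic used in Section 6.3.1 applies to this three-member configuration (a sketch only, not the connection manager's actual rules):

    # Three members, one vote each, no quorum disk (illustrative values):
    expected = 3
    needed = (expected + 2) // 2          # 2 votes required for quorum
    print(3 - 1 >= needed)                # True: losing one member leaves
                                          # 2 votes, so the cluster survives
    print(1 >= needed)                    # False: a failed switch leaves each
                                          # member only its own vote; all hang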

By adding a second switch to this cluster, and connecting a LAN interconnect adapter from each member to each switch (as discussed in Section 6.3.3), you can eliminate the switch as a single point of failure and increase cluster reliability.

6.3.3    Cluster Using Fully Redundant LAN Interconnect Hardware

You can achieve a fully redundant LAN interconnect configuration by using NetRAIN and redundant paths from each member through interconnected switches. In the four-member cluster in Figure 6-3 and Figure 6-4, two Ethernet adapters on each member are configured as a NetRAIN virtual interface, two switches are interconnected by two crossover cables, and the Ethernet connections from each member are split across the switches.

Figure 6-3:  Recommended Fully Redundant LAN Interconnect Configuration Using Link Aggregation or Link Resiliency

Figure 6-4:  Recommended Fully Redundant LAN Interconnect Configuration Using the Spanning Tree Protocol

Note

If you are mixing switches from different manufacturers, consult the switch manufacturers to verify compatibility between the switches.

Like the three-member cluster discussed in Section 6.3.2, this cluster can tolerate the failure of a single member or a single break in a member's LAN interconnect connection (for example, the servicing of an Ethernet adapter or a detached cable). (This assumes that each member has one vote and no quorum disk is configured.) However, this cluster can also survive a single switch failure and the loss of the crossover cables between the switches.

Because NetRAIN must probe the inactive LAN interconnect adapters across switches, the crossover cable connection between the switches is important. Two crossover cables are strongly recommended. When two crossover cables are used, as shown in Figure 6-3 and Figure 6-4, the loss of one of the cables is transparent to the cluster. As discussed in Appendix B, when using parallel interswitch links in this manner, you must employ one of the methods provided by the switches for detecting or avoiding routing loops between the switches. These figures indicate the appropriate port settings with respect to the most common methods provided by switches: link aggregation (also known as port trunking), link resiliency (both shown in Figure 6-3), and Spanning Tree Protocol (STP) (shown in Figure 6-4). (See the Cluster Installation manual discussion of cluster interconnect IP addresses for an explanation of the IP addresses shown in the figure.)

In some circumstances (for example, in the nonrecommended configuration shown in Figure 6-5, which uses a single crossover cable), a broken crossover connection can result in a network partition. If the crossover connection is completely broken, NetRAIN can no longer send packets to the inactive adapters on the other side of the connection. Although this situation will not cause the cluster to fail, it disables failover between the adapters in the NetRAIN sets.

For example, in the configuration shown in Figure 6-5, the active LAN interconnect adapters of Members 1 and 2 are currently on Switch 1; those of Members 3 and 4 are on Switch 2. If the crossover connection is broken while the cluster is in this state, Members 1 and 2 can see each other but cannot see Members 3 and 4 (and thus will remove them from the cluster). Likewise, Members 3 and 4 can see each other but cannot see Members 1 and 2 (and thus will remove them from the cluster). By design, neither partition can achieve quorum; each has two votes out of a required three, and both hang due to loss of quorum.
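The vote arithmetic for this partition, using the same illustrative rule described in Section 6.3.1 (a sketch only, not the connection manager's code):

    # Four members, one vote each (illustrative values):
    expected = 4
    needed = (expected + 2) // 2          # 3 votes required for quorum
    for partition_votes in (2, 2):        # Members 1-2 and Members 3-4
        print(partition_votes >= needed)  # False, False: both halves hang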

Figure 6-5:  Nonrecommended Redundant LAN Interconnect Configuration

To decrease a cluster's vulnerability to network partitions in a dual-switched configuration, take any or all of the following steps:

6.3.4    Configurations That Support Ethernet Hubs

All Ethernet hubs (also known as shared hubs to distinguish them from Ethernet switches) run in half-duplex mode. As a result, when a hub is used in a LAN interconnect, the Ethernet adapters connected to it must be set to (or must autonegotiate) 100 Mb/s, half-duplex mode. (See the Cluster Administration manual for additional information on how to accomplish this for the DE50x and DE60x families of adapters.)

Use of an Ethernet hub in a LAN interconnect is supported as follows:

Unlike Ethernet switches, Ethernet hubs cannot be configured with multiple parallel crossover cables to guard against potential network partitions, because hubs do not provide features for detecting or avoiding routing loops.

Because of the performance characteristics of Ethernet hubs, use them only in small clusters (two or three members).

6.3.5    Clustering AlphaServer DS10L Systems

Support for the LAN interconnect makes it possible to cluster more basic AlphaServer systems, such as the HP AlphaServer DS10L. The AlphaServer DS10L is an entry-level system that ships with two 10/100 Mb/s Ethernet ports, one 64-bit PCI expansion slot, and a fixed internal IDE disk. The 44.7 x 52.1 x 4.5-centimeter (17.6 x 20.5 x 1.75-inch (1U)) size of the AlphaServer DS10L, and the ability to rackmount large numbers of them in a single M-series cabinet, make clustering them an attractive option, especially for Web-based applications.

When you configure an AlphaServer DS10L in a cluster, we recommend that you use the single PCI expansion slot for the host bus adapter for shared storage (where the cluster root, member boot disks, and optional quorum disk reside), one Ethernet port for the external network, and the other Ethernet port for the LAN interconnect. Figure 6-6 shows a very basic low-end cluster of this type consisting of four AlphaServer DS10Ls.

Figure 6-6:  Low-End AlphaServer DS10L Cluster

Although the configuration shown in Figure 6-6 represents an inexpensive and useful entry-level cluster, its LAN interconnect and shared SCSI storage bus present single points of failure. That is, if the shared storage bus or the LAN interconnect switch fails, the cluster becomes unusable.

To eliminate these single points of failure, the configuration in Figure 6-7 adds two AlphaServer ES40 members to the cluster, plus two parallel interswitch connections. Two AlphaServer DS10L members are connected via Ethernet ports to one switch on the LAN interconnect; two are connected to the other switch. A Fibre Channel fabric employing redundant Fibre Channel switches replaces the shared SCSI storage in the previous configuration.

Although not explicitly shown in the figure, the host bus adapters of two of the DS10Ls are connected to one Fibre Channel switch; those of the other two DS10Ls are connected to the other Fibre Channel switch.

Figure 6-7:  Cluster Including Both AlphaServer DS10L and AlphaServer ES40 Members

The physical LAN interconnect device on each of the two AlphaServer ES40 members consists of two Ethernet adapters configured as a NetRAIN virtual interface. On each ES40, one adapter is cabled to the first Ethernet switch and the other to the second Ethernet switch. Similarly, each ES40 contains two host bus adapters connected to the Fibre Channel fabric: one adapter is connected to the first Fibre Channel switch and the other to the second Fibre Channel switch.

When delegating votes in this cluster, you have a number of possibilities: