
V-Series Systems Installation Requirements

and Reference Guide

NetApp, Inc.
495 East Java Drive
Sunnyvale, CA 94089 USA
Telephone: +1 (408) 822-6000
Fax: +1 (408) 822-4501
Support telephone: +1 (888) 4-NETAPP
Documentation comments: [email protected]
Information Web: http://www.netapp.com

Part number: 215-06260_A0
Processed: Thursday August 4 2011 11:19:27

Release Candidate Documentation—25 August 2011

Contents Subject to Change


Contents

V-Series technology overview ...................................................................... 7
How a V-Series system uses storage ........................................................................... 7

Supported methods to connect to a storage array ........................................................ 8

Direct-attached configurations ........................................................................ 8

Fabric-attached configurations ........................................................................ 8

Number of storage arrays supported behind a V-Series system .................................. 8

Sharing storage arrays among hosts ............................................................................ 9

V-Series planning overview ....................................................................... 11
V-Series Support Matrix information needed for planning ...................................... 11

Planning tasks for a V-Series implementation .......................................................... 11

Stages of implementation when using third-party storage ........................................ 13

Planning for RAID Implementation ......................................................... 15
RAID protection for third-party storage .................................................................... 15

Implications of LUN size and number for Data ONTAP RAID groups ................... 15

Planning for Data ONTAP use of array LUNs ........................................ 17
How array LUNs are made available for host use .................................................... 17

What an LDEV is .......................................................................................... 17

What a host group is ...................................................................................... 17

How array LUNs become available for Data ONTAP storage use ........................... 18

Considerations when planning for disk ownership ....................................... 18

Guidelines for setting the checksum type for array LUNs ............................ 19

Array LUN assignment changes ................................................................... 19

Considerations for provisioning array LUNs ............................................................ 20

Minimum number of array LUNs per V-Series system ................................ 20

Minimum and maximum array LUN sizes supported by Data ONTAP ....... 21

Minimum array LUN size for the root volume ............................................. 21

Elements that reduce usable space in an array LUN ..................................... 21

Identification of LUNs that do not meet array LUN size requirements ........ 22

When a spare core array LUN is required for core dumps ............................ 22

Planning for LUN security on the storage arrays ...................................................... 23

What LUN security is .................................................................................... 23

Available LUN security methods .................................................................. 23


Planning for paths to array LUNs ............................................................ 25
Requirement for redundant setup of components in a path ....................................... 25

When to check for redundant paths to array LUNs ....................................... 26

Required number of paths to an array LUN .............................................................. 26

Advantages of four paths to an array LUN (8.1 Cluster-Mode and later) ...... 27

Using LUN groups to partition the load over V-Series connections ......................... 27

What a LUN group is .................................................................................... 28

Example configuration with multiple LUN groups ....................................... 28

Implementation requirements for a multiple LUN group configuration ....... 29

How paths are reflected in array LUN names ........................................................... 30

Array LUN name format ............................................................................... 30

How the array LUN name changes in Data ONTAP displays ...................... 32

Valid path setup examples ......................................................................................... 33

Valid pathing: one 2-port array LUN group in a fabric-attached configuration ...... 33

Valid pathing: one 4-port array LUN group in a fabric-attached configuration ...... 34

What happens when a link failure occurs .................................................................. 35

Link failure in primary path--one 2-port array LUN group .......................... 36

Link failure in primary path--two 2-port array LUN groups ........................ 37

Determining the array LUNs for specific aggregates .............................. 39
Rules about mixing storage in aggregates ................................................................. 39

Aggregate rules when the storage arrays are from the same family ......................... 39

Aggregate rules when the storage arrays are from different vendors or families ..... 41

Zoning guidelines ........................................................................................ 43
Zoning requirements ................................................................................................. 43

Type of zoning recommended for a V-Series configuration ..................................... 44

Examples of zoning in a V-Series configuration ...................................................... 45

Determining whether to use neighborhoods (8.x 7-Mode) ..................... 47
What a V-Series neighborhood is .............................................................................. 47

What Data ONTAP supports for V-Series neighborhoods ....................................... 47

Maximum number of array LUNs and disks in a neighborhood ............................... 48

Neighborhood maximum LUN limit ............................................................. 48

Platform maximum assigned device limit ..................................................... 49

Factors that impact the neighborhood maximum LUN limit ........................ 50

How to establish a neighborhood .............................................................................. 50


Data ONTAP configuration to establish a neighborhood ............................. 50

Storage array configuration to establish a neighborhood .............................. 50

Switch configuration to establish a neighborhood ........................................ 51

Planning a port-to-port connectivity scheme ........................................... 53
V-Series connection guidelines ................................................................................. 53

Guidelines for V-Series FC initiator port usage ........................................................ 54

How FC initiator ports are labeled ............................................................................ 54

Connecting a V-Series system to back-end devices ................................. 55
Connecting a V-Series stand-alone system to back-end devices .............................. 55

Connecting an HA pair to back-end devices ............................................................. 57

Validating a V-Series installation (8.1 Cluster-Mode and later) ............ 61
Validating a back-end configuration (8.1 Cluster-Mode and later) .......................... 61

Displaying back-end configuration errors ................................................................. 62

Back-end configuration errors detected by the storage errors show command ........ 62

Validating a V-Series installation (8.x 7-Mode) ....................................... 65
Checking the number of paths (8.0.x and 8.1 7-Mode) ............................................. 65

Example output showing correct and incorrect pathing (8.0.x and 8.1 7-Mode) ...... 66

Troubleshooting .......................................................................................... 69
Invalid path setup examples ...................................................................................... 69

Invalid path setup: too many paths to an array LUN (8.0.x and 8.1.x and 7-Mode) ...... 69

Invalid path setup: alternate paths are not configured ................................... 70

Installation quick start (7-Mode and third-party storage only) ............. 73
Example configuration for the installation quick start (7-Mode and third-party storage) ...... 73

Performing pre-installation tasks on the storage array .............................................. 74

Installing the V-Series system ................................................................................... 75

Setting up the switches .............................................................................................. 76

Setting up LUN security ............................................................................................ 77

Assigning an array LUN to a V-Series system and creating the root volume .......... 77

Installing Data ONTAP and licenses ........................................................................ 79

Testing your setup ..................................................................................................... 80

Additional setup ........................................................................................................ 81

Obtaining WWNs manually ...................................................................... 83
Settings for connecting to an ASCII terminal console ............................ 85
Target queue depth customization ............................................................ 87


Guidelines for specifying the appropriate target queue depth ................................... 87

Setting the target queue depth ................................................................................... 88

Storage array model equivalents ............................................................... 89
Terminology comparison between storage array vendors ....................... 91
Abbreviations .............................................................................................. 95
Copyright information ............................................................................. 111
Trademark information ........................................................................... 113
How to send your comments .................................................................... 115
Index ........................................................................................................... 117


V-Series technology overview

A V-Series system is an open storage controller that virtualizes storage from third-party storage array vendors, native disks, or both into a single heterogeneous storage pool.

The Data ONTAP software provides a unified storage software platform that simplifies managing both native disk shelves and LUNs on storage arrays. You can add storage when and where you need it, without disruption.

Related references

Terminology comparison between storage array vendors on page 91

How a V-Series system uses storage

A V-Series system pools storage from third-party storage arrays and serves data to Windows and UNIX hosts and clients.

[Figure: Controller pooling and virtualizing heterogeneous storage. RAID storage arrays connect to the V-Series controller over FC; Windows and UNIX hosts connect over FC; Windows and UNIX clients/hosts connect over IP.]

A V-Series system presents storage to clients either in the form of Data ONTAP file system volumes, which you manage on the system by using Data ONTAP management features, or as a SCSI target that creates LUNs for use by clients. In both cases (file system clients and LUN clients), on the V-Series system you combine the array LUNs into one or more Data ONTAP volumes for presentation to the clients as files or as LUNs served by Data ONTAP. Likewise, you can combine native disks into one or more Data ONTAP volumes for presentation to clients.

Supported methods to connect to a storage array

Supported methods for connecting V-Series systems are direct-attached and fabric-attached. Direct-attached connection is not supported for all storage arrays and for all V-Series models.

Stretch and fabric-attached MetroCluster configurations are supported for some storage arrays and V-Series models.

See the V-Series Support Matrix at support.netapp.com for information about the connection method supported for your storage array and V-Series model.

Direct-attached configurations

Direct-attached configurations require less equipment. However, more ports are used among the hosts. Direct-attached configurations are no longer recommended for new deployments, and they are supported only for some storage arrays and some V-Series platforms.

Both stand-alone platforms and HA pairs can be deployed in a direct-attached configuration.

The advantages of a direct-attached configuration are:

• If the storage array has enough ports to connect to the V-Series system, a direct-attached configuration is more cost-effective because it is not necessary to purchase switches.

• You do not have to configure and manage a Fibre Channel SAN.

The V-Series Support Matrix at support.netapp.com contains information about the storage arrays that support a direct-attached configuration.

Fabric-attached configurations

You can incorporate a fabric-attached configuration into an existing Fibre Channel SAN infrastructure.

Fabric-attached configurations are supported for both stand-alone systems and HA pairs.

See the V-Series Support Matrix at support.netapp.com to confirm that a fabric-attached configuration is supported for your vendor’s storage arrays.

Number of storage arrays supported behind a V-Series system

For most storage arrays, you can connect a stand-alone V-Series system or the nodes in an HA pair to multiple storage arrays.

If multiple storage arrays behind a V-Series system are supported for your storage array:


• There is no limit to the number of storage arrays you can deploy behind your system. However, you must use different V-Series FC initiator ports for each storage array.

• The storage arrays can be from the same vendor, either all from the same family or from different families.

• The storage arrays can be from different vendors.

Different rules apply for assigning array LUNs to aggregates, depending on whether Data ONTAP considers the storage arrays to be in the same family.

Note: Storage arrays in the same family share the same performance and failover characteristics. For example, members of the same family all perform active-active failover, or they all perform active-passive failover. Storage arrays with 4-Gb HBAs are not considered to be in the same family as storage arrays with 8-Gb HBAs.

See the V-Series Support Matrix at support.netapp.com for information about storage array families.
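The initiator-port rule above lends itself to a quick sanity check before cabling. The following sketch is illustrative only: the port labels ("0a", "0c") and array names are hypothetical, and the real mapping would come from your own cabling records.

```python
# Sketch: verify that no V-Series FC initiator port is used for more than
# one storage array. Port and array names are made-up examples.

def shared_initiator_ports(array_ports):
    """Return initiator ports that appear behind more than one storage array."""
    seen = {}     # port -> first array found using it
    shared = []
    for array, ports in array_ports.items():
        for port in ports:
            if port in seen and seen[port] != array:
                shared.append(port)   # rule violated: port serves two arrays
            else:
                seen[port] = array
    return shared

# Two arrays behind one V-Series system; port "0c" is mistakenly reused.
cabling = {
    "array_A": ["0a", "0c"],
    "array_B": ["0c", "0d"],
}
print(shared_initiator_ports(cabling))  # ['0c'] -> violates the rule
```

An empty result means each storage array has its own dedicated initiator ports, as the rule requires.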

Related concepts

Rules about mixing storage in aggregates on page 39

Aggregate rules when the storage arrays are from different vendors or families on page 41

Aggregate rules when the storage arrays are from the same family on page 39

Sharing storage arrays among hosts

A typical storage array provides storage for both V-Series systems and other hosts. However, for some storage arrays, you must dedicate the storage array to V-Series systems.

To determine whether your vendor’s storage array must be dedicated to V-Series, see the V-Series Support Matrix at support.netapp.com.


V-Series planning overview

Successful implementation of a V-Series deployment requires careful planning and verification of the proper installation and configuration of all devices in your deployment.

V-Series Support Matrix information needed for planning

When planning your V-Series deployment, check the V-Series Support Matrix to find out if your system conforms to all hardware and software requirements.

The V-Series Support Matrix provides the latest information on hardware models and firmware versions of switch and storage array products that are currently qualified for use with V-Series systems. Not all Data ONTAP releases support the same features, configurations, storage array models, and V-Series models.

The following information in the matrix will help you during the planning phase:

• Whether your V-Series model is supported in the Data ONTAP release that you plan to run
• The maximum and minimum system capacity limits for your V-Series model
• If you want to deploy a MetroCluster, whether MetroCluster is supported for your storage array
• Whether the configuration you want to deploy is supported (for example, a direct-attached configuration is not supported for all storage arrays and V-Series models)
• Whether multiple LUN groups are supported for your storage array
• Which storage array firmware versions are supported
• Whether your storage array supports non-disruptive (live) upgrade of the storage array firmware

See the V-Series Support Matrix at support.netapp.com for more information.

Planning tasks for a V-Series implementation

Successfully implementing a V-Series configuration requires carefully planning your Data ONTAP and storage configurations for V-Series use. If your V-Series systems use third-party storage, you must communicate with the storage array and switch administrators to ensure that the back-end devices are configured to work with V-Series systems.

If you order your system with disk shelves, the factory configures the root volume and installs licenses and Data ONTAP software. You must perform these steps yourself if your system is using only third-party storage.

The final authority about what is supported for V-Series systems is the V-Series Support Matrix at support.netapp.com.


General planning task

• Determine how much storage space is needed by the hosts and clients that you plan to connect to the V-Series system.

Additional planning tasks if you are using third-party storage

• Determine the requirements for setting up your storage array to work with the V-Series system, including the following:

• Configuration settings on the storage array that are required for the V-Series system to work with the third-party storage array

• Which configuration is supported for the storage array you want to use
• Environment requirements, for example, which storage array, storage array firmware, and switch firmware are supported

• Determine the Data ONTAP requirements for V-Series systems to be able to use array LUNs.

See the appropriate Data ONTAP Storage Management Guide for details about aggregates and volumes.

• Determine the number and size of LUNs on the storage array that you need for Data ONTAP.
• Plan for LUN security.

This task includes setting access controls on the storage array and, if switches are deployed, setting zoning on switches.

• Determine your port-to-port connectivity scheme between the V-Series systems and the storage array, which involves planning for the following:

• Supported configurations for your vendor
• V-Series FC initiator port usage
• Cabling redundant paths between the V-Series system and storage array, either directly or through switches
• Zoning of switches (for a fabric-attached configuration)
• Mapping (exporting) array LUNs to the ports to which the V-Series systems are connected

Additional planning tasks if you are using native disk shelves

Native disk shelves can be installed on a new or existing V-Series system. Data ONTAP automatically assigns ownership to native disks attached to your system.

• Determine V-Series port usage. If your system uses both disks and array LUNs, determine what should go on disks and what should go on array LUNs.

• If your V-Series system uses both third-party storage and native disks, you need to determine how many disks and array LUNs combined can be assigned to your system without exceeding the supported maximum assigned device limit for your system.

• If you have an HA pair, determine whether to use the Multipath Storage feature.


See the Data ONTAP 7-Mode High-Availability Configuration Guide or the Data ONTAP Cluster-Mode High-Availability Configuration Guide for more information about Multipath Storage.

See the appropriate Data ONTAP Storage Management Guide for information about disk ownership for storage on native disk shelves connected to a V-Series system.

Additional planning task if you are using Data ONTAP data protection features

• Determine the data protection features you want to use and their setup requirements. See the appropriate Data Protection Online Backup and Recovery Guide.

Additional planning task for Data ONTAP storage management features

• Determine other features to simplify storage management that you want to use, for example, quotas. See the appropriate Data ONTAP Storage Management Guide.

Related concepts

Considerations for provisioning array LUNs on page 20

Planning for paths to array LUNs on page 25

Planning for LUN security on the storage arrays on page 23

Stages of implementation when using third-party storage on page 13

Stages of implementation when using third-party storage

V-Series implementation with third-party storage has two stages: a back-end implementation and a front-end implementation.

Stage 1: back-end implementation

Normally, V-Series systems use third-party storage, although use of third-party storage is not required. Setting up the back-end implementation includes all tasks that are required to set up the V-Series system with a storage array, up to the point where you can install Data ONTAP software.

Tasks to set up the back-end implementation include the following:

1. Creating and formatting array LUNs

2. Assigning ports

3. Cabling

4. Zoning switches (if applicable)

5. In Data ONTAP, assigning specific array LUNs to a V-Series system

6. In Data ONTAP, providing information to set up a V-Series system on the network. This process is similar to FAS system setup.


7. Installing Data ONTAP software

Note: If a V-Series system is ordered with disk shelves, the Data ONTAP software is installed by the factory. In such a configuration, you do not need to create the root volume and install licenses and Data ONTAP software.

Stage 2: front-end implementation

Tasks to set up the front-end implementation include the following:

• Configuring the V-Series system for all protocols (NAS or FCP)
• Implementing the SNAP* suite of products (Snapshot, SnapVault, and so on)
• Creating volumes and aggregates
• Setting up data protection, including NDMP dumps to tapes
• Setting up native disks (if your system uses native disks for storage)


Planning for RAID Implementation

You need to plan the size and number of LUNs in the storage array RAID groups and decide whether you want to share a RAID group among hosts.

RAID protection for third-party storage

Third-party storage arrays provide the RAID protection for the array LUNs that they make available to V-Series systems.

Data ONTAP uses RAID 0 (striping) for array LUNs. Data ONTAP supports a variety of RAID types on the storage arrays, except RAID 0, because it does not provide storage protection. The storage arrays provide the data protection, not Data ONTAP.

When creating "RAID groups" on storage arrays, follow the best practices of the storage array vendor to ensure that there is an adequate level of protection on the storage array so that disk failure does not result in loss of data or loss of access to data.

Note: A RAID group on a storage array is the arrangement of disks that together form the defined RAID level. Each RAID group supports only one RAID type. The number of disks that you select for a RAID group determines the RAID type that a particular RAID group supports. Different storage array vendors use different terms to describe this entity—RAID groups, parity groups, disk groups, Parity RAID groups, and other terms.

V-Series systems support native disk shelves as well as third-party storage. Data ONTAP supports RAID4 and RAID-DP on the native disk shelves connected to a V-Series system but does not support RAID4 and RAID-DP with array LUNs.

See the V-Series Implementation Guide for Third-party Storage to determine whether there are specific requirements or limitations about RAID types for your storage array.

Implications of LUN size and number for Data ONTAP RAID groups

Part of planning for aggregates is to plan the size and number of Data ONTAP RAID groups you need for those aggregates, and the size and number of array LUNs for the Data ONTAP RAID groups. Setting up Data ONTAP RAID groups for array LUNs requires planning and coordination with the storage array administrator.

Planning for Data ONTAP RAID groups involves the following:

1. Planning the size of the aggregate that best meets your data needs.

2. Planning the number and size of the RAID groups that you need for the size of the aggregate.


RAID groups in the same aggregate should be the same size, with the same number of array LUNs in each RAID group. Use the default RAID group size if possible.

3. Planning the size of the array LUNs that you need in your Data ONTAP RAID groups.

• To avoid a performance penalty, all array LUNs in a particular Data ONTAP RAID group should be the same size.

• The array LUNs should be the same size in all RAID groups in the same aggregate.

4. Communicating with the storage array administrator to create the number of array LUNs of the size you need for the aggregate. The array LUNs should be optimized for performance, according to the instructions in the storage array vendor documentation.

For more recommendations about setting up Data ONTAP RAID groups for use with third-party storage, including minimum and maximum RAID group size, see the appropriate Data ONTAP Storage Management Guide.
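The planning steps above reduce to simple arithmetic once you fix a LUN size and a RAID group size. The sketch below is a rough planning aid only; all numbers in it are illustrative assumptions, not NetApp-recommended values, and the real RAID group size limits come from the Storage Management Guide.

```python
# Sketch: given a target aggregate size and a Data ONTAP RAID group size
# (in array LUNs per group), compute how many equal-size array LUNs to
# request from the storage array administrator. Numbers are illustrative.
import math

def plan_array_luns(aggregate_gib, luns_per_raid_group, lun_size_gib):
    total_luns = math.ceil(aggregate_gib / lun_size_gib)
    # Round up to a whole number of RAID groups so that every group is
    # full and identical, as the guideline above recommends.
    raid_groups = math.ceil(total_luns / luns_per_raid_group)
    return raid_groups, raid_groups * luns_per_raid_group

groups, luns = plan_array_luns(aggregate_gib=4000,
                               luns_per_raid_group=8,
                               lun_size_gib=100)
print(groups, luns)  # 5 RAID groups, 40 LUNs of 100 GiB each
```

Because the count is rounded up to full RAID groups, the LUNs delivered can slightly exceed the raw aggregate target; that keeps all RAID groups the same size.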

Related concepts

Determining the array LUNs for specific aggregates on page 39


Planning for Data ONTAP use of array LUNs

For Data ONTAP to use third-party storage, a storage array administrator must first create LUNs on the storage array and make them available to Data ONTAP. Then the Data ONTAP administrator must configure Data ONTAP to use the array LUNs that the storage array made available.

Note: Data ONTAP considers an array LUN to be a virtual disk.

How array LUNs are made available for host use

A storage array administrator must create array LUNs and make them available to specified FC initiator ports of V-Series systems.

The process to make LUNs available to hosts and the terminology to describe it varies among storage array vendors. The basic process on the storage array to make LUNs available for host use is as follows:

• Create logical devices (LDEVs).
• Create a host group (or vendor equivalent).
Include in the host group the WWPNs of the initiator ports of the hosts that are allowed to see the LDEV.

• Map the LUNs to the host group.

Related concepts

What a host group is on page 17
What an LDEV is on page 17
How array LUNs become available for Data ONTAP storage use on page 18

What an LDEV is

LDEV is a term used by some vendors and this guide to describe a piece of logical RAID storage configured from disks.

Each LDEV has an internal number that is unique to the storage array. When an LDEV is presented out of a port on the storage array, the hosts see it as a LUN. The LUN ID—the external number seen by the hosts—must match on each of the two ports over which the LDEV is presented. LUN IDs do not have to be unique on the storage array, but they must be unique on a port.
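The two LUN ID rules just described can be modeled in a few lines. In the sketch below, the port names ("CL1-A", "CL2-A") and LDEV labels are hypothetical examples of how an array might identify them, not values from any particular vendor.

```python
# Sketch: model a storage-array port as a mapping of LUN ID -> LDEV.
# Using a dict per port enforces the per-port uniqueness rule by
# construction (a LUN ID can map to only one LDEV on a given port).

def check_lun_ids(port_maps, ldev, port_a, port_b):
    """Return True if the LDEV is presented with the same single LUN ID
    on both ports, as the matching rule above requires."""
    ids_a = [lun for lun, dev in port_maps[port_a].items() if dev == ldev]
    ids_b = [lun for lun, dev in port_maps[port_b].items() if dev == ldev]
    return ids_a == ids_b and len(ids_a) == 1

ports = {
    "CL1-A": {0: "LDEV 10:02", 1: "LDEV 10:03"},
    "CL2-A": {0: "LDEV 10:02", 1: "LDEV 10:03"},
}
print(check_lun_ids(ports, "LDEV 10:02", "CL1-A", "CL2-A"))  # True
```

Presenting the same LDEV as LUN 0 on one port and LUN 1 on the other would make the check return False, which is exactly the mismatch the rule forbids.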

What a host group is

A host group enables you to associate array LUNs with a group of hosts. Different vendors use different terms to describe this concept and the process of creating a host group.

To simplify management, most storage arrays enable you to define one or more host groups. You can define specific WWPNs (ports) and WWNs (hosts) to be members of the same group. You then associate specific array LUNs with the host group. Hosts in the host group can access the LUNs associated with the host group; hosts that are not in that host group cannot access those LUNs.
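The access behavior a host group provides can be sketched as a small model. The WWPNs, group name, and LUN IDs below are invented for illustration; real values come from your hosts and your storage array.

```python
# Sketch of host-group LUN masking: only initiators that are members of
# the group can access the group's LUNs. All names/WWPNs are made up.

class HostGroup:
    def __init__(self, name, member_wwpns, lun_ids):
        self.name = name
        self.members = set(member_wwpns)
        self.luns = set(lun_ids)

    def can_access(self, wwpn, lun_id):
        # An initiator sees a LUN only if it is in the group AND the LUN
        # has been associated with that group.
        return wwpn in self.members and lun_id in self.luns

vseries = HostGroup("vseries_group",
                    member_wwpns={"50:0a:09:81:86:f7:c9:00"},
                    lun_ids={0, 1, 2})
print(vseries.can_access("50:0a:09:81:86:f7:c9:00", 1))  # True: group member
print(vseries.can_access("10:00:00:00:c9:2b:51:2c", 1))  # False: not in group
```

This is the array-side half of LUN security; switch zoning (covered later in this guide) restricts visibility in the fabric itself.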

Related references

Terminology comparison between storage array vendors on page 91

How array LUNs become available for Data ONTAP storage use

A V-Series system cannot use an array LUN presented to it until after Data ONTAP has been configured to use the array LUN.

Although the storage array administrator makes an array LUN accessible to Data ONTAP, Data ONTAP cannot use the array LUN for storage until both of the following tasks are completed:

1. One V-Series system must be assigned to be the owner of the array LUN.

2. The array LUN must be added to an aggregate.

When you assign an array LUN to a V-Series system, Data ONTAP writes data to the array LUN to identify the assigned system as the owner of the array LUN. This logical relationship is referred to as disk ownership.

When you assign an array LUN to a V-Series system, it becomes a spare LUN owned by that system and is no longer available to any other V-Series system.

A spare array LUN cannot be used for storage until you add it to an aggregate. Thereafter, Data ONTAP ensures that only the owner of the array LUN can write data to and read data from the LUN.

In an HA pair, both nodes must be able to see the same storage, but only one node in the pair is the owner of the array LUN. The partner node takes over read/write access to an array LUN if the owning node fails. The original owning node resumes ownership after the problem that caused the node's unavailability is fixed.
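The two-step flow described above (assign ownership, making the LUN a spare; then add the spare to an aggregate) can be sketched as a small state model. The class and method names are hypothetical, not Data ONTAP commands:

```python
# Illustrative sketch only: the two-step flow above as a small state model.
# The class and method names are hypothetical, not Data ONTAP commands.
class ArrayLUN:
    def __init__(self, name):
        self.name = name
        self.owner = None      # set when ownership data is written to the LUN
        self.aggregate = None

    def assign(self, system):
        """Step 1: assign ownership; the LUN becomes a spare of one system."""
        if self.owner is not None:
            raise ValueError(f"{self.name} is already owned by {self.owner}")
        self.owner = system

    def add_to_aggregate(self, aggregate):
        """Step 2: only a spare (owned) LUN can be added to an aggregate."""
        if self.owner is None:
            raise ValueError("assign an owner before adding to an aggregate")
        self.aggregate = aggregate

lun = ArrayLUN("mcdata3:6.127L0")
lun.assign("vs1")             # spare LUN owned by vs1; unavailable to others
lun.add_to_aggregate("aggr1")
print(lun.owner, lun.aggregate)  # vs1 aggr1
```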

Related concepts

Considerations when planning for disk ownership on page 18
Determining the array LUNs for specific aggregates on page 39

Considerations when planning for disk ownership
If you are deploying multiple V-Series systems, you must determine which V-Series systems will “own” which array LUNs.

Consider the following when planning which V-Series systems will “own” which array LUNs:

• The maximum assigned device limit supported by your platform
  The V-Series Support Matrix at support.netapp.com shows the maximum assigned device limit that is supported for different platforms. This is a hard-coded limit. If your system uses both array


  LUNs and disks, this maximum limit is the maximum of disks and array LUNs combined. You must account for both types of storage when determining how many array LUNs and disks you can assign to a system.

• The amount of load that you expect to be generated by different applications used in your environment
  Some types of applications are likely to generate many requests, whereas other applications (for example, archival applications) generate fewer requests. You might want to consider weighting ownership assignments based on the expected load from specific applications.

Related concepts

How array LUNs become available for Data ONTAP storage use on page 18

Guidelines for setting the checksum type for array LUNs
When you assign an array LUN to a V-Series system, you must specify a checksum type for the array LUN. The recommended checksum type is block checksum (BCS), the default.

The checksum type that you assign to an array LUN in Data ONTAP determines what type of data protection is applied to the array LUN. The checksum type also affects performance and usable space on an array LUN.

The following checksum types are supported:

• Supported for array LUNs - Block checksum (BCS) and zoned checksum (ZCS)
  BCS, the default, is the recommended checksum type because it supports deduplication and compression. You must assign the BCS type to all array LUNs that will be added to FlexVol volumes on which deduplication will be run.

• Supported for native disks - BCS (ZCS on some older disks)
  Data ONTAP automatically assigns the BCS checksum type to the disks.

An aggregate also has a checksum type, which is determined by the checksum type of the array LUNs you add to it.

See the appropriate Data ONTAP Storage Management Guide for information about checksums and checksum rules related to aggregates.

Array LUN assignment changes
You can change the assignment of a spare array LUN from one V-Series system to another.

See the appropriate Data ONTAP Storage Management Guide for information about changing theownership of an array LUN.


Considerations for provisioning array LUNs
When planning how to provision LUNs for V-Series use, you need to consider the types of array LUNs that Data ONTAP supports, the minimum and maximum LUN sizes that Data ONTAP supports, and the number of array LUNs you need.

LUN types supported by Data ONTAP

• You can map only storage array LUNs to Data ONTAP.
  Some storage arrays have a non-storage “command” LUN. You cannot map a “command” type LUN to a V-Series system.

• Starting in Data ONTAP 8.1, you can map LUN 0 to Data ONTAP if it is a storage type LUN.

Minimum and maximum array LUN sizes supported by Data ONTAP

• The maximum LUN size that Data ONTAP supports differs according to Data ONTAP release.
• The minimum array LUN size for a data (storage) LUN is different from the minimum LUN size for the root volume.
• The usable space in an array LUN is affected by normal Data ONTAP overhead and checksum overhead.

For information about minimum and maximum LUN sizes, see the V-Series Support Matrix at support.netapp.com.

The number of LUNs you need

• The smaller the array LUNs, the more LUNs you need for the storage that you want.
  Creating one large array LUN from a given storage array RAID group is recommended.

• Device limits define the maximum number of disks and array LUNs that can be assigned to a V-Series controller.
  See the V-Series Support Matrix for information.

• Different applications generate different loads.
  When determining the assignment of array LUNs to V-Series systems, consider what the storage will be used for and the number of requests likely to be generated by different applications.

Minimum number of array LUNs per V-Series system
If the root volume is on third-party storage, each stand-alone V-Series system and each node in an HA pair must own at least one array LUN. If the root volume is on a native disk, the only array LUNs needed are those for data storage.

If you are deploying a MetroCluster configuration, two array LUNs are required, one LUN from each site, so that the root volume can be mirrored.


Note: MetroCluster configurations are not supported on V-Series systems with native disks.

Minimum and maximum array LUN sizes supported by Data ONTAP
The size of the array LUNs that you can create on the storage array is limited by the minimum and maximum array LUN sizes that Data ONTAP supports.

For information about the minimum and maximum array LUN sizes according to Data ONTAP units of measurement, see the V-Series Support Matrix at support.netapp.com. Different vendors use different formulas for calculating units of measurement. You must determine the minimum and maximum array LUN sizes for your storage array that are equivalent to the minimum and maximum array LUN sizes that Data ONTAP supports.

Related concepts

Minimum array LUN size for the root volume on page 21

Minimum number of array LUNs per V-Series system on page 20

Minimum array LUN size for the root volume
The array LUN used for the root volume must be larger than the minimum size required for other array LUNs.

It is strongly recommended that you do not set the size of a root volume below the minimum root volume size shown in the V-Series Support Matrix. The reason is that you want to ensure that there is sufficient space in the root volume for system files, log files, and core files. You need to provide these files to technical support if a system problem occurs.

Note: The minimum array LUN size for a non-root volume is considerably smaller than for the root volume, so be sure that you look at the information about the minimum array LUN size for the root volume. Both the minimum array LUN size for the root volume and the minimum array LUN size for non-root volumes are shown in the V-Series Support Matrix at support.netapp.com.

Related concepts

Minimum and maximum array LUN sizes supported by Data ONTAP on page 21

Elements that reduce usable space in an array LUN
The usable space in an array LUN is affected by overheads and by the checksum type you choose.

When calculating the capacity of an array LUN, you must consider the following factors that decrease the usable capacity of the LUN:

• 10% – WAFL reserve
• 0.2% – Core dump (1% in releases earlier than 8.0.2)
• 20% – Volume-level Snapshot copy (default, configurable)
• 5% – Aggregate-level Snapshot copy (default, configurable)
• 12.5% – Block checksum
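As an illustration of how these factors add up, the sketch below applies each deduction in turn. Treating the deductions as sequential (multiplicative) percentages is an assumption made for this example, so use it only for rough estimates:

```python
# Illustrative sketch only: a rough estimate of usable array LUN capacity
# after the deductions listed above. Applying the percentages sequentially
# (multiplicatively) is an assumption made for this example.
OVERHEADS = [
    ("WAFL reserve", 0.10),
    ("Core dump", 0.002),
    ("Volume-level Snapshot copy reserve", 0.20),
    ("Aggregate-level Snapshot copy reserve", 0.05),
    ("Block checksum", 0.125),
]

def estimated_usable(raw_capacity):
    """Apply each percentage deduction in turn to the raw capacity."""
    remaining = raw_capacity
    for _name, fraction in OVERHEADS:
        remaining *= 1.0 - fraction
    return remaining

print(f"{estimated_usable(100.0):.1f} usable of 100 raw")  # 59.7 usable of 100 raw
```

Note that the Snapshot copy reserves are configurable, so the usable fraction varies with your settings.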


Related concepts

Minimum and maximum array LUN sizes supported by Data ONTAP on page 21

Identification of LUNs that do not meet array LUN size requirements
Data ONTAP cannot use array LUNs that do not meet the Data ONTAP array LUN size requirements. Data ONTAP issues an error message identifying an array LUN that does not meet the minimum or maximum array LUN size requirements.

See the V-Series Support Matrix at support.netapp.com for information about minimum and maximum array LUN sizes for each release.

Related concepts

Minimum and maximum array LUN sizes supported by Data ONTAP on page 21

When a spare core array LUN is required for core dumps
Core dump files can be written to the core dump space reserved on each array LUN. However, when automatic takeover occurs because a partner node panics, the core dump file must be written to a single spare core LUN.

A core dump file contains the contents of memory and NVRAM. When a hardware or software failure causes a V-Series system to crash, Data ONTAP typically creates a core file that technical support can use to troubleshoot the problem.

Takeover when the partner node panics shortens the time between the initial failure and when service is fully restored, because the takeover can be faster than recovery from the panic. However, the subsequent giveback causes another brief outage.

If you want automatic takeover to occur if the partner node panics, the requirements are as follows:

• The Data ONTAP command for automatic takeover on panic must be enabled.
  Data ONTAP is configured by default to initiate a takeover if the partner node panics in some circumstances.

• A spare core array LUN must be available for a core dump file.
  When a panic initiates takeover by the partner node, a core dump file is not saved unless a spare core LUN is available.

Note: Core dump files can be written in the reserved core space on each array LUN, but this process is time consuming.

• The spare core array LUN must meet the minimum required spare core LUN size.
  For information about the minimum spare core LUN size for each V-Series platform, see the V-Series Support Matrix at support.netapp.com.

The following commands control whether a node in an HA pair immediately takes over for a panicked partner.

22 | V-Series system Installation Requirements and Reference GuideRelease Candidate Documentation—25 August 2011

Contents Subject to Change

Page 23: Vs Install

Mode          Command
7-Mode        cf.takeover.on_panic
              This setting is enabled by default when FCP (Fibre Channel Protocol) is licensed, when iSCSI is licensed, or when both are licensed.
Cluster-Mode  storage failover modify -node node -onpanic true
              This setting is enabled by default.

Planning for LUN security on the storage arrays
If you are using your V-Series system with third-party storage, you must use a LUN security method to eliminate the possibility of a non-V-Series system overwriting array LUNs owned by a V-Series system, or the reverse.

What LUN security is
LUN security is used to isolate which hosts can access which array LUNs.

LUN security is similar to switch zoning in concept, but it is performed on the storage array. LUN security and LUN masking are equivalent terms for this functionality.

Attention: The Data ONTAP disk ownership scheme prevents one V-Series system from overwriting an array LUN owned by another V-Series system. However, it does not prevent a V-Series system from overwriting an array LUN accessible by a non-V-Series host. Likewise, without a method of preventing overwriting, a non-V-Series host could overwrite an array LUN used by a V-Series system.

Available LUN security methods
With LUN security, you can mask array LUNs for viewing by only certain hosts, present LUNs only for a specific host on a port, or dedicate a storage array to a particular host.

You should use both zoning and LUN security for added protection and redundancy for the V-Series system. If, for example, you do not have LUN security configured and you have to replace a SAN switch, the V-Series system could panic before you are able to configure the zoning on the new switch, because the switch is wide open.

In addition to reading about the LUN security methods described here, see the V-Series Implementation Guide for Third-party Storage for any additional details regarding LUN security for your vendor’s storage arrays. For some storage arrays, the array must be dedicated for V-Series use.

Method 1: Port-level security

Port-level security enables you to present only the LUNs for a particular host on a particular port, so you can present only the LUNs intended for Data ONTAP on that port. The port then becomes dedicated to a host.


Note: Not all storage arrays support port-level security. Some storage arrays present all LUNs on all ports by default and do not provide a way to restrict the visibility of LUNs to particular hosts. For these arrays, you must either use a LUN security product or dedicate the storage array to V-Series.

Method 2: LUN security products

Use a LUN security product to control which hosts zoned to the same port can see specific array LUNs over that port. This prevents other hosts from accessing those same array LUNs by masking the LUNs from those hosts.

Method 3: Dedicate the storage array to V-Series

When you dedicate the storage array to V-Series, no hosts other than the V-Series systems are connected to the storage array.


Planning for paths to array LUNs

Paths are the physical connections between the V-Series system and the storage array. Redundant paths are required to eliminate any single point of failure (SPOF) between the V-Series system and the storage array.

Requirement for redundant setup of components in a path
V-Series systems must connect to the storage array through a redundant Fibre Channel (FC) network. Two FC networks or fabric zones are required so that fabric ports or switches can be taken offline for upgrades and replacements without impacting the V-Series systems.

Redundancy requirements of the components in the path are as follows:

V-Series system

• You must attach each connection to a different FC initiator port in the port pair on the V-Series system.

• Each V-Series FC initiator port on the same V-Series system must be on a different bus.

Fibre Channel switches

• Use redundant switches.
• Use redundant ports on the Fibre Channel switches.

Storage array

• Ensure that the ports on the storage array that you select to access a given LUN are on different components that could each represent a single point of failure (SPOF), for example, on alternate controllers, clusters, or enclosures. You want to ensure that you do not lose all access to a LUN if one component fails.

Note: A given array LUN is accessed through only one port at a time.

The following illustration shows correct and incorrect storage array port selection. The path setup in the example on the left is correct because the paths to the storage array are redundant; each connection is to a port on a different controller on the storage array.


[Illustration: two storage arrays, each with Controller 1 and Controller 2 and ports A and B, presenting LUNs 1-10. In the correct example, the connections go to ports on different controllers; in the incorrect example, the connections go to ports on the same controller.]

When to check for redundant paths to array LUNs
Check for redundant paths to an array LUN after installation and during fabric maintenance activities.

You should recheck for path redundancy when performing the following activities:

• During initial installation
• While performing fabric maintenance, for example:
  • Before, during, and after an infrastructure upgrade
  • Before and after taking a switch out of service for maintenance
    Be sure that the paths were configured as redundant paths before you remove a switch between the V-Series systems and the storage array so that access to the array LUNs is not interrupted.
  • Before and after maintaining hardware on a storage array, for example, maintaining the hardware component on which host adapters and ports are located (the name of this component varies on different storage array models).

Required number of paths to an array LUN
Data ONTAP 8.1 Cluster-Mode supports two or four paths to an array LUN. Data ONTAP 8.1 7-Mode and releases prior to Data ONTAP 8.1 support only two paths to an array LUN.

Data ONTAP release      Number of paths supported
8.1 Cluster-Mode        2, or 4 with an Active-Active storage array
8.1 7-Mode              2
Releases prior to 8.1   2

For all releases and modes, Data ONTAP expects and requires that a storage array provide access to a specific array LUN on two redundant storage array ports; that is, through two redundant paths. A given array LUN is accessed through only one port at a time.


Ensure that the ports on the storage array that you select to access a given LUN are on different components that could each represent a single point of failure (SPOF), for example, on alternate controllers, clusters, or enclosures. You want to ensure that you do not lose all access to a LUN if one component fails.

Related concepts

Advantages of four paths to an array LUN (8.1 Cluster-Mode and later) on page 27

Advantages of four paths to an array LUN (8.1 Cluster-Mode and later)
When planning the number of paths to an array LUN for Data ONTAP 8.1 Cluster-Mode, consider whether you want to set up two or four paths.

The advantages of setting up four paths to an array LUN include the following:

• If a switch fails, both storage array controllers are still available.
• If a storage array controller fails, both switches are still available.
• Performance can be improved because load balancing is over four paths instead of two.

Note: Only two paths to an array LUN are supported for Data ONTAP 8.1 7-Mode and releases prior to Data ONTAP 8.1.

Related concepts

Required number of paths to an array LUN on page 26

Using LUN groups to partition the load over V-Series connections

Using multiple LUN groups enables you to partition the load of array LUN traffic over the V-Series connections to optimize performance. Use of multiple LUN groups is not supported for all storage arrays.

There are limits on the number of paths to a given array LUN per V-Series FC initiator port pair. The limit varies by Data ONTAP release. However, this does not mean that a V-Series system must see all array LUNs for Data ONTAP through one FC initiator port pair. Some V-Series models have a large number of FC initiator ports available.

See the V-Series Support Matrix to determine whether a configuration using multiple LUN groups is supported for your storage array.

Related concepts

What a LUN group is on page 28

Implementation requirements for a multiple LUN group configuration on page 29


What a LUN group is
A LUN group is a set of logical devices on the storage array that a V-Series system accesses over the same paths.

The storage array administrator configures a set of logical devices as a group to define which host WWPNs can access them. Data ONTAP refers to this set of devices as a LUN group.

The number of paths to a LUN group varies according to release and mode.

Related concepts

Using LUN groups to partition the load over V-Series connections on page 27
Implementation requirements for a multiple LUN group configuration on page 29

Example configuration with multiple LUN groups
Using multiple LUN groups enables you to partition the load over V-Series connections. This configuration cannot be used with all storage arrays. See the V-Series Implementation Guide for Third-party Storage for information about which storage arrays are supported for this configuration.

The following illustration shows how one V-Series system FC initiator port pair (0c and 0f) is used to access one LUN group over one storage array port pair, and a second FC initiator port pair (0a and 0h) is used to access a second LUN group on the same storage array over a different storage array port pair.

This configuration is referred to as Stand-alone with two 2-port array LUN groups. A multiple LUN group configuration could have an HA pair instead of a stand-alone system.

[Illustration: stand-alone V-Series system vs1 with FC initiator ports 0a through 0h, connected through Switch 1 and Switch 2 (zones z1 through z4) to a storage array with Controller 1 and Controller 2. The controllers' A ports present LUN group 1 and their B ports present LUN group 2.]

This example configuration enables you to optimize performance by spreading the I/O across the RAID groups (parity groups) on the storage array. You set up your configuration so that different


port pairs on a V-Series system access different groups of LUNs on the storage array. The V-Series system sees any given array LUN over only two paths because a given logical device is mapped to only two alternate ports on the storage array.

On the storage array, different LUN groups are accessed through different ports. Each number used to identify a logical device must be unique on the same storage array, but numbers presented to hosts to identify LUNs (external numbers) can be duplicated on different ports.

The following table summarizes the zoning for this example. Single-initiator zoning is the recommended zoning strategy.

Zone   Switch     V-Series system FC initiator port   Storage array port
z1     Switch 1   Port 0a                             Controller 1 Port B
z2     Switch 1   Port 0c                             Controller 1 Port A
z3     Switch 2   Port 0f                             Controller 2 Port A
z4     Switch 2   Port 0h                             Controller 2 Port B
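The zoning table above can also be expressed as a data structure with a check for the single-initiator recommendation. This is an illustrative sketch only; the member names mirror the example but are otherwise hypothetical:

```python
# Illustrative sketch only: the zoning table above as a data structure,
# with a check for the single-initiator recommendation (exactly one
# V-Series FC initiator port per zone). Member names mirror the example.
ZONES = {
    "z1": {"switch": "Switch 1", "initiators": ["vs1:0a"], "targets": ["Controller 1 Port B"]},
    "z2": {"switch": "Switch 1", "initiators": ["vs1:0c"], "targets": ["Controller 1 Port A"]},
    "z3": {"switch": "Switch 2", "initiators": ["vs1:0f"], "targets": ["Controller 2 Port A"]},
    "z4": {"switch": "Switch 2", "initiators": ["vs1:0h"], "targets": ["Controller 2 Port B"]},
}

def multi_initiator_zones(zones):
    """Return the names of zones that do not have exactly one initiator."""
    return [name for name, z in zones.items() if len(z["initiators"]) != 1]

print(multi_initiator_zones(ZONES))  # []
```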

Related concepts

What a LUN group is on page 28

Using LUN groups to partition the load over V-Series connections on page 27

Implementation requirements for a multiple LUN group configuration on page 29

Implementation requirements for a multiple LUN group configuration
Implementing a multiple LUN group configuration requires setup on both the V-Series systems and the storage arrays.

To implement a multiple LUN group configuration, you need to do the following:

• On the storage array, use as many ports as possible to provide access to the array LUNs you allocated for V-Series.

• On the V-Series system, use multiple FC initiator port pairs.
  Each port pair accesses a different LUN group on the storage array through redundant paths. For each V-Series system, you must use one initiator port pair for each array LUN group.

• You use host groups (or your vendor's equivalent) to define which array LUN groups are presented to each V-Series initiator port.

• Switch zoning must define which target ports the V-Series initiator ports use to access each array LUN group.

• You need to create one large aggregate (in the Data ONTAP configuration), adding array LUNs from multiple RAID groups (parity groups) to the aggregate.


  By doing so, the I/O is spread across more disks. The combination of spreading I/O across the RAID groups and creating one large aggregate results in a significant performance boost.

Related concepts

What a LUN group is on page 28

Example configuration with multiple LUN groups on page 28

How paths are reflected in array LUN names
The array LUN name is a path-based name that includes the devices in the path between the V-Series system and the storage array.

By looking at the array LUN name as it is displayed in Data ONTAP output, you can identify the following:

• Devices in the path between the V-Series system and the storage array
• Ports used
• The LUN identifier that the storage array presents externally for mapping to hosts

The format of the array LUN name differs depending on the type of configuration and the Data ONTAP mode that the system is running.

Array LUN name format
The array LUN name is a path-based name that includes the devices in the path between the V-Series system and the storage array, the ports used, and the SCSI LUN ID on that path that the storage array presents externally for mapping to hosts.

On a 7-Mode V-Series system, there are two names for each array LUN because there are two paths to each LUN, for example, mcdata3:6.127L0 and brocade15:6.127L0.

On an 8.0.x Cluster-Mode V-Series system, there are two names for each array LUN because there are two paths to each LUN.


Array LUN format for 7-Mode and releases prior to 8.0

Direct-attached configuration
Format: adapter.idLlun-id
• adapter is the adapter number on the V-Series system.
• id is the channel adapter port on the storage array.
• lun-id is the array LUN number that the storage array presents to hosts.
Example: 0a.0L0

Fabric-attached configuration
Format: switch-name:port.idLlun-id
• switch-name is the name of the switch.
• port is the switch port that is connected to the target port (the end point).
• id is the device ID.
• lun-id is the array LUN number that the storage array presents to hosts.
Example: mcdata3:6.127L0, where mcdata3:6.127 is the path component and L0 is the SCSI LUN ID.


Cluster-Mode array LUN name format

Direct-attached configuration
Format: node-name.adapter.idLlun-id
• node-name is the name of the Cluster-Mode node. In Cluster-Mode, the node name is prepended to the LUN name so that the path-based name is unique within the cluster.
• adapter is the adapter number on the V-Series system.
• id is the channel adapter port on the storage array.
• lun-id is the array LUN number that the storage array presents to hosts.
Example: node1.0a.0L0

Fabric-attached configuration
Format: node-name.switch-name:port.idLlun-id
• node-name is the name of the Cluster-Mode node. In Cluster-Mode, the node name is prepended to the LUN name so that the path-based name is unique within the cluster.
• switch-name is the name of the switch.
• port is the switch port that is connected to the target port (the end point).
• id is the device ID.
• lun-id is the array LUN number that the storage array presents to hosts.
Example: node1.mcdata3:6.127L0
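As an illustrative sketch, the fabric-attached name formats can be split into their components programmatically. The regular expression below is an assumption derived from the examples (mcdata3:6.127L0, node1.mcdata3:6.127L0), not part of Data ONTAP:

```python
import re

# Illustrative sketch only: parsing the fabric-attached name formats above
# into their components. The regular expression is an assumption derived
# from the examples (mcdata3:6.127L0, node1.mcdata3:6.127L0).
FABRIC = re.compile(
    r"^(?:(?P<node>[^.:]+)\.)?"     # optional Cluster-Mode node name
    r"(?P<switch>[^.:]+):"          # switch name
    r"(?P<port>\d+)\.(?P<dev>\d+)"  # switch port and device ID
    r"L(?P<lun>\d+)$"               # SCSI LUN ID
)

def parse_fabric_lun_name(name):
    m = FABRIC.match(name)
    if not m:
        raise ValueError(f"not a fabric-attached array LUN name: {name}")
    return m.groupdict()

print(parse_fabric_lun_name("mcdata3:6.127L0"))
# {'node': None, 'switch': 'mcdata3', 'port': '6', 'dev': '127', 'lun': '0'}
print(parse_fabric_lun_name("node1.mcdata3:6.127L0")["node"])  # node1
```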

How the array LUN name changes in Data ONTAP displays
For array LUN names shown in Data ONTAP displays, the paths shown are from the perspective of the V-Series system.

Keep the following in mind when you are looking at Data ONTAP displays that show array LUNs:


• Array LUN names are relative to the V-Series system from which the array LUN is viewed. Therefore, the name of an array LUN might be different on each V-Series system in an HA pair or cluster because the path to the LUN is different.

• On each V-Series system there are multiple valid names for a single array LUN, one per path. The name for a given LUN that is displayed in Data ONTAP can change depending on which path is active at a given time. For example, if the primary path becomes unavailable and Data ONTAP switches to the alternate path, the LUN name that is displayed changes.

• In Data ONTAP 8.x Cluster-Mode, the node name is prepended to the array LUN name to provide a unique name for each array LUN.

• Each node in a cluster typically accesses a given LUN through a different storage array port to limit contention on a single port.

Note: It is possible for different V-Series systems to show the same name for different array LUNs. For example, this could occur if both switches have the same name.

Valid path setup examples
The two best practice configurations are one 2-port array LUN group and one 4-port array LUN group. One 4-port array LUN group is recommended because it provides better failover than one 2-port array LUN group.

Note: Different storage arrays, even those from the same vendor, might label the ports differently from those shown in the example. On your storage array, ensure that the ports you select are on alternate controllers.

Related concepts

Invalid path setup examples on page 69

Valid pathing: one 2-port array LUN group in a fabric-attached configuration

A configuration with one 2-port array LUN group works with all storage arrays for all Data ONTAP releases. However, a one 4-port array LUN group configuration is preferred over this configuration because it provides better failover.

This is an example of a fabric-attached HA pair in which the nodes share the two (redundant) storage array ports. This configuration uses the fewest ports possible for V-Series systems. It is useful if you are limited in the number of storage array ports or switch ports that you can use with V-Series systems.

The following illustration shows pathing in a configuration with one 2-port array LUN group.


[Illustration: fabric-attached HA pair vs1 and vs2 (FC initiator ports 0a through 0d on each node) connected through Switch 1 and Switch 2 (zones z1 through z4) to one 2-port array LUN group on storage array port 1A (Controller 1) and port 2A (Controller 2).]

In this configuration with one 2-port array LUN group, each of the two target ports on the storage array is accessed by two V-Series FC initiator ports, one from each node in the HA pair. (Two V-Series FC initiator ports, one from each node, “share” the same target port.) To ensure availability, use a redundant FC initiator port pair on each node in the HA pair. Then, if one path from a node fails, the other path from the node is used; V-Series controller takeover does not occur.

Note: Both a one 2-port array LUN group configuration and a one 4-port array LUN group configuration are best practice configuration recommendations. However, failover is not as good in a one 2-port configuration as in a one 4-port array LUN group configuration. In a one 2-port array LUN group configuration, if a switch on one fabric fails, all traffic from both V-Series systems goes through a single port on the storage array.

Valid pathing: one 4-port array LUN group in a fabric-attached configuration

A one 4-port array LUN group configuration works with all storage arrays for all Data ONTAP releases. This is the preferred configuration.

The following illustration shows pathing in a configuration with one 4-port array LUN group.


[Illustration: fabric-attached HA pair vs1 and vs2 (FC initiator ports 0a through 0d on each node) connected through Switch 1 (Fabric 1) and Switch 2 (Fabric 2), zones z1 through z4, to one 4-port array LUN group on storage array ports 1A and 1B (Controller 1) and ports 2A and 2B (Controller 2).]

In this configuration with one 4-port LUN group, array LUNs are mapped to four ports on the storage array. The array LUN group is presented to both nodes in the HA pair configuration on different array target ports. However, each V-Series system can see an array LUN, end-to-end, through only two paths. Zoning is configured so that each FC initiator port on the V-Series systems can access only a single target array port.

Note: Both a one 2-port array LUN group configuration and a one 4-port array LUN group configuration are best practice configuration recommendations. However, failover is not as effective in a one 2-port configuration as in a one 4-port array LUN group configuration. In a one 2-port array LUN group configuration, if a switch on one fabric fails, all traffic from both V-Series systems goes through a single port on the storage array.

What happens when a link failure occurs

Data ONTAP monitors a link’s usage periodically. The Data ONTAP response to a link failure differs depending on where the failure occurs.

The following table shows what occurs if there is a failure in a fabric-attached configuration.

If a failure occurs in the link between the V-Series system and the switch: Data ONTAP receives notification immediately and sends traffic to the other path immediately.

If a failure occurs in the link between the switch and the storage array: Data ONTAP is not immediately aware that there is a link failure because the link is still established between the V-Series system and the switch. Data ONTAP becomes aware that there is a failure when the I/O times out. Data ONTAP retries three times to send the traffic on the original path, then it fails over the traffic to the other path.
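The two behaviors described in the table can be sketched as a small model. This is illustrative pseudologic only, not actual Data ONTAP code; the path and link names are invented for the example.

```python
# Illustrative model of the two failure responses: a host-side link
# failure is detected immediately, while an array-side failure is only
# noticed when I/O times out and is retried three times before the
# traffic fails over to the other path.

RETRY_LIMIT = 3  # Data ONTAP retries the original path three times


def send_io(primary_path, alternate_path, failed_link=None):
    """Return the path that ultimately carries the I/O."""
    if failed_link == "host-to-switch":
        # The V-Series system is notified immediately; no retries occur.
        return alternate_path
    if failed_link == "switch-to-array":
        # The link to the switch is still up, so each attempt must time
        # out before Data ONTAP gives up on the primary path.
        for _attempt in range(RETRY_LIMIT):
            pass  # I/O times out on primary_path
        return alternate_path
    return primary_path  # no failure: traffic stays on the primary path
```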

Link failure in primary path--one 2-port array LUN group

In a V-Series configuration, when a link failure occurs in the primary path, Data ONTAP automatically switches to the alternate path.

As the following illustration shows, for this scenario the primary path to LUN group 1 from vs1 is through vs1’s FC initiator port 0a, to Switch 1, and then to the storage array’s Controller 1 port 1A. If a failure occurs in the link between vs1 FC initiator port 0a and Switch 1 when vs1 tries to access LUN group 1, Data ONTAP automatically switches to the alternate path through vs1’s FC initiator port 0c. V-Series vs1 can then access LUN group 1 through Switch 2, and then through storage array Controller 2 port 2A.

[Illustration: A link failure occurs in the primary path to LUN group 1, between vs1 port 0a and Switch 1 (zones z1/z2). The alternate path to LUN group 1 is used: vs1 port 0c through Switch 2 (zones z3/z4) to Controller 2 port 2A. vs1 and vs2 are joined by interconnect cables.]

Until the link failure is fixed, there is only one interface to the storage. When connectivity is restored, Data ONTAP redistributes the array LUNs over the paths.


Link failure in primary path--two 2-port array LUN groups

In a V-Series configuration, Data ONTAP automatically switches to the alternate path when a link failure occurs in the primary path.

The following illustration shows what Data ONTAP does when the primary path to a LUN fails in a configuration for an HA pair with two 2-port array LUN groups.

[Illustration: HA pair vs1 and vs2 with two 2-port array LUN groups on one storage array. A link failure occurs in the primary path to a LUN in LUN group 1, between vs1 port 0a and Switch 1 (Fabric 1). The alternate path is used: vs1 port 0c through Switch 2 (Fabric 2) to Controller 2 port A. vs1 and vs2 are joined by interconnect cables; zones z1-z8 define the paths.]

Failover in this example with two LUN groups works the same way in configurations with and without fan-in (assuming the primary path from vs1 to a LUN in LUN group 1 is through vs1’s FC initiator port 0a). If a failure occurs in the link between vs1’s FC initiator port 0a and Switch 1, Data ONTAP automatically switches to the alternate path through vs1’s FC initiator port 0c, which enables vs1 to access the LUN in LUN group 1 through Switch 2 and then through the storage array’s Controller 2 port A.

Until the link failure is fixed, there is only one interface to the storage. When connectivity is restored, Data ONTAP redistributes the array LUNs over the paths.


Determining the array LUNs for specific aggregates

There are a number of rules about mixing different types of storage in aggregates that are unique to V-Series systems that use third-party storage. You need to understand these requirements when planning which array LUNs and disks to add to which aggregates.

Related concepts

How array LUNs are made available for host use on page 17

Considerations when planning for disk ownership on page 18

Rules about mixing storage in aggregates

You cannot mix different storage types or array LUNs from different vendors or storage array types in the same aggregate.

You cannot add the following to the same aggregate:

• Array LUNs with different checksum types
• Array LUNs from different storage array vendors
• Array LUNs from different storage array model families
• Array LUNs from different drive types (for example, Fibre Channel, SATA) or different speeds
• Array LUNs and disks

Note: Storage arrays in the same family share the same performance and failover characteristics. For example, members of the same family all perform active-active failover, or they all perform active-passive failover. Storage arrays with 4-Gb HBAs are not considered to be in the same family as storage arrays with 8-Gb HBAs.
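The rules above can be sketched as a simple validation function. The attribute names below are invented for illustration; Data ONTAP enforces these rules itself when you add storage to an aggregate.

```python
# Hypothetical sketch of the aggregate mixing rules: two array LUNs can
# share an aggregate only if they agree on checksum type, vendor, model
# family, drive type, and speed. Attribute names are illustrative.

def can_share_aggregate(lun_a, lun_b):
    """Return True if two array LUNs may be placed in the same aggregate."""
    for attr in ("checksum", "vendor", "family", "drive_type", "speed"):
        if lun_a[attr] != lun_b[attr]:
            return False
    return True


lun1 = {"checksum": "block", "vendor": "VendorX", "family": "A",
        "drive_type": "FC", "speed": "15K"}
lun2 = dict(lun1, family="B")  # same vendor, different model family
```

Two LUNs with identical attributes pass the check; a family mismatch (as in `lun2`) fails it, matching the rule that LUNs from different model families cannot be mixed.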

Aggregate rules when the storage arrays are from the same family

You can mix array LUNs from storage arrays in the same family in the same aggregate, if desired.

The following examples show some options for laying out array LUNs in aggregates when the storage arrays behind a V-Series system are in the same vendor family.

Note: For simplicity, the illustrations show only two storage arrays; your deployment can include more storage arrays.


Example 1: A single aggregate for LUNs from all storage arrays

As shown in the following illustration, you can create one aggregate, then add all LUNs from all the storage arrays in the same family to the same aggregate.

[Illustration: Aggregate 1 on vs-1 contains LUN 1 from Storage array 1, Family A, and LUN 1 from Storage array 2, Family A.]

Example 2: Distribute and mix LUNs from the storage arrays over multiple aggregates

As shown in the following illustration, you can create multiple aggregates, then distribute and mix the array LUNs from the different storage arrays in the same family over the aggregates.

[Illustration: Aggregates 1 and 2 on vs-1 each contain a mix of LUNs 1 and 2 from Storage array 1, Family A, and Storage array 2, Family A.]

Note: This example is not supported if one of the storage arrays has Fibre Channel drives and the other storage array has SATA drives.


Aggregate rules when the storage arrays are from different vendors or families

You cannot mix array LUNs from storage arrays from different vendors, or from different families of the same vendor, in the same aggregate.

The following rules apply if your storage arrays are from different vendors or different families of the same vendor:

• You cannot mix array LUNs from storage arrays from different vendors, or from different families of the same vendor, in the same aggregate.

• You can associate the aggregate containing the root volume with any of the storage arrays, regardless of the family type of the storage array.

Note: When you create your aggregate, be sure that you explicitly specify the IDs of the array LUNs you want to add to the aggregate. Do not use the parameters for specifying the number and size of array LUNs to be picked up because the system might automatically pick up LUNs from a different family or from a different vendor’s storage array. If you accidentally mix array LUNs from different storage array families or from different storage array vendors when you configure an aggregate, you must destroy the aggregate and re-create it.

The following examples show options for how to lay out array LUNs in aggregates when the storage arrays are from different vendors or from different families of the same vendor.

Example 1: LUNs from the two storage arrays are in different aggregates

[Illustration: Aggregate 1 on vs-1 contains LUN 1 and LUN 2 from Storage array 1, Family A; Aggregate 2 contains LUN 1 and LUN 2 from Storage array 2, Family B.]

In this example, some LUNs for Data ONTAP are from Storage array 1, Family A, and the other LUNs for Data ONTAP are from Storage array 2, Family B. The LUNs from the two storage arrays cannot be added to the same aggregate because the two storage arrays are from different families of the same vendor. The same would be true if the two storage arrays were from different vendors.

Example 2: Some LUNs can be mixed in the same aggregate and some cannot

[Illustration: Aggregate 1 on vs-1 contains LUNs from Storage array 1, Family A; Aggregate 2 contains LUNs from Storage array 2, Family B, and LUN 1 from Storage array 3, Family B; Aggregate 3 contains the remaining LUNs from Storage array 3, Family B.]

In this example, one storage array is from Family A and two storage arrays are from Family B. The LUNs from the Family A storage array cannot be added to the same aggregate as the LUNs from a Family B storage array because the storage arrays are from different families. However, LUN 1 of storage array 3 can be assigned to aggregate 2, which also contains LUNs from storage array 2, because the two storage arrays are in the same family.


Zoning guidelines

A common error when installing a V-Series configuration is to misconfigure zoning.

See the V-Series Support Matrix at support.netapp.com for information about specific switch guidelines and potential issues.

Related concepts

Zoning requirements on page 43

Type of zoning recommended for a V-Series configuration on page 44

Examples of zoning in a V-Series configuration on page 45

Zoning requirements

Configuring zoning on a Fibre Channel switch enables you to define paths between connected nodes, restricting visibility and connectivity between devices connected to a common Fibre Channel SAN.

Zoning is required to prevent LUNs from being visible to a V-Series system on more than two target ports. If zoning is not used, and there are multiple target ports from an array in a given fabric, the V-Series FC initiator sees the same LUNs on all those target ports. Data ONTAP requires that an array LUN be visible on only one target port for each initiator port.

When configuring zoning in a V-Series deployment, the requirements are as follows:

• Zoning must be configured to restrict each initiator port to a single target port on each storage array.

• On the switch, ports on the V-Series system and ports on the storage array must be assigned to the same zone. This enables the V-Series systems to see the LUNs on the storage arrays.

• When sharing ports across heterogeneous systems, do not expose array LUNs from the V-Series system to other systems, and the reverse. You must use array LUN security or array LUN masking to ensure that only the array LUNs that are for Data ONTAP storage are visible to the V-Series systems.

Zoning can be configured by specifying WWNs (worldwide names) or ports.
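Single-initiator zoning can be pictured as data, with one FC initiator port per zone. The zone, port, and target names below are invented for illustration; real zones are defined on the switch, by WWN or by switch port.

```python
# Illustrative sketch of single-initiator zoning: each zone pairs one
# V-Series FC initiator port with a single storage array target port.
# Names are hypothetical; this is not switch configuration syntax.

zones = {
    "z1": {"initiators": ["vs1:0a"], "targets": ["Controller1:1A"]},
    "z2": {"initiators": ["vs2:0a"], "targets": ["Controller1:1A"]},
    "z3": {"initiators": ["vs1:0c"], "targets": ["Controller2:2A"]},
    "z4": {"initiators": ["vs2:0c"], "targets": ["Controller2:2A"]},
}


def is_single_initiator(zone):
    # A single-initiator zone contains exactly one FC initiator port.
    return len(zone["initiators"]) == 1
```

A quick pass over `zones` with `is_single_initiator` confirms that no zone mixes initiators, which is the property that keeps the V-Series FC initiators from discovering each other.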

Related concepts

Type of zoning recommended for a V-Series configuration on page 44

Examples of zoning in a V-Series configuration on page 45


Type of zoning recommended for a V-Series configuration

You should use single-initiator zoning, which limits each zone to a single V-Series system FC initiator port.

The benefits of creating a separate zone for each V-Series system FC initiator port and each non-V-Series host are as follows:

• You limit the number of ports over which a specific array LUN can be accessed. You can prevent a V-Series system from accessing a given array LUN for Data ONTAP over more than two storage array ports.

• Single-initiator zoning improves discovery and boot time because the V-Series FC initiators do not attempt to discover each other.

See the V-Series Implementation Guide for Third-party Storage for any additional information about zoning for your storage array vendor.

Related concepts

Minimum number of array LUNs per V-Series system on page 20

Minimum and maximum array LUN sizes supported by Data ONTAP on page 21

Elements that reduce usable space in an array LUN on page 21

Minimum array LUN size for the root volume on page 21

When a spare core array LUN is required for core dumps on page 22

Zoning requirements on page 43

Examples of zoning in a V-Series configuration on page 45


Examples of zoning in a V-Series configuration

When configuring the switches for zoning, use LUN security to ensure that different hosts do not see LUNs mapped to another host.

Zoning in a one 2-port LUN group configuration

[Illustration: vs1 and vs2 connected through Switch 1 and Switch 2 to the storage array. LUN group 1 is mapped to Controller 1 port 1A and Controller 2 port 2A; zones z1-z4 define the paths.]

The following table shows single-initiator zoning for this example with a 30xx HA pair. Single-initiator zoning is the recommended zoning strategy.

Zone | V-Series system | Storage array

Switch 1
z1 | vs1 Port 0a | Controller 1 Port 1A
z2 | vs2 Port 0a | Controller 1 Port 1A

Switch 2
z3 | vs1 Port 0c | Controller 2 Port 2A
z4 | vs2 Port 0c | Controller 2 Port 2A


Zoning in a one 4-port LUN group configuration

[Illustration: vs1 and vs2 connected through Switch 1 (Fabric 1) and Switch 2 (Fabric 2) to the storage array. LUN group 1 is mapped to Controller 1 ports 1A and 1B and Controller 2 ports 2A and 2B; zones z1-z4 define the paths.]

The following table shows single-initiator zoning for this example with a 30xx HA pair. Single-initiator zoning is the recommended zoning strategy.

Zone | V-Series system | Storage array

Switch 1
z1 | vs1 Port 0a | Controller 1 Port 1A
z2 | vs2 Port 0a | Controller 1 Port 1B

Switch 2
z3 | vs1 Port 0c | Controller 2 Port 2A
z4 | vs2 Port 0c | Controller 2 Port 2B
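The table above can be expressed as data to check the key property of this configuration: each FC initiator port reaches exactly one array target port, and all four array ports of LUN group 1 are in use. This is an illustrative sketch, not switch configuration syntax.

```python
# The one 4-port LUN group zoning table as data: zone -> (initiator,
# array target port). Port names follow the example; this is only a
# sketch for checking the single-initiator, single-target property.

zoning = {
    "z1": ("vs1:0a", "1A"),  # Switch 1 / Fabric 1
    "z2": ("vs2:0a", "1B"),  # Switch 1 / Fabric 1
    "z3": ("vs1:0c", "2A"),  # Switch 2 / Fabric 2
    "z4": ("vs2:0c", "2B"),  # Switch 2 / Fabric 2
}


def targets_per_initiator(zoning):
    """Map each FC initiator port to the set of array ports it can reach."""
    seen = {}
    for initiator, target in zoning.values():
        seen.setdefault(initiator, set()).add(target)
    return seen
```

Every initiator maps to exactly one target port, and the four zones together cover all four array ports (1A, 1B, 2A, 2B), which is why each node still sees the LUN group over only two paths.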


Determining whether to use neighborhoods (8.x 7-Mode)

A neighborhood is a logical entity that enables V-Series systems to see the same array LUNs. Ordinarily systems that are not part of an HA pair have no relationship with each other; they cannot see the same array LUNs and you cannot load balance between them. A neighborhood makes it possible to do these things.

Neighborhoods do not pertain to configurations with only one stand-alone system or only one HA pair and no other systems.

Note: Clusters in Cluster-Mode provide like functionality to 7-Mode neighborhoods.

What a V-Series neighborhood is

In a V-Series neighborhood, you can easily reassign ownership of array LUNs from one neighborhood member to another through Data ONTAP. You can also transparently load balance data service among the V-Series systems in a neighborhood by moving vFiler units among neighborhood members. The physical storage always remains on the storage array.

A neighborhood can include up to six V-Series systems, which can be stand-alone systems or HA pairs.

Neighborhood functionality is limited to third-party storage. Although neighborhood members see the same array LUNs, the systems outside of an HA pair cannot see each other’s disks.

The neighborhood relationship does not provide any failover between neighborhood members if a member becomes unavailable. Failover of services is a function of the relationship between two nodes in an HA pair, and can occur only between the two nodes in an HA pair.

What Data ONTAP supports for V-Series neighborhoods

If you are thinking about using neighborhoods, make sure they are supported for your storage array and for the Data ONTAP release running on your V-Series systems.

Data ONTAP releases supported: Data ONTAP 8.0 7-Mode and later, and releases earlier than 8.x. Mixing of ONTAP versions from different release streams is not supported. For example, mixing Data ONTAP releases 7.2.x with 7.3.x or 7.3.x with 8.x is not supported.

Storage arrays supported: The V-Series Support Matrix provides information about any storage arrays for which neighborhoods do not apply.

Platforms supported: There are no restrictions on platform types. Mixed platform types are supported.

Configurations supported: Any combination of stand-alone systems and HA pairs. Note: A MetroCluster configuration is not supported for neighborhoods.

Maximum number of V-Series systems supported: Six

Maximum number of LUNs and disks supported: You cannot exceed the limits for the maximum devices in a neighborhood and the neighborhood maximum LUN limit.

Related concepts

Maximum number of array LUNs and disks in a neighborhood on page 48

Maximum number of array LUNs and disks in a neighborhood

You cannot exceed the limits for the maximum devices in a neighborhood and the neighborhood maximum LUN limit.

When determining the number of LUNs that storage arrays can present to the V-Series systems in your neighborhood, you need to consider the following two Data ONTAP limits:

• Neighborhood maximum array LUN limit
• Maximum total number of devices that can be assigned to the platforms that you want to be neighborhood members

Related concepts

Neighborhood maximum LUN limit on page 48

Platform maximum assigned device limit on page 49

Neighborhood maximum LUN limit

The neighborhood maximum array LUN limit is both the maximum visible limit for the neighborhood and the maximum assigned array LUN limit for the systems in the neighborhood.

The neighborhood maximum LUN limit has the following two components:

• It is the maximum visible limit for the neighborhood. This limit is the maximum number of the same array LUNs that V-Series systems in a neighborhood are allowed to see. All members of the neighborhood see all the same array LUNs. Individual V-Series systems in the neighborhood that have disks attached cannot see more array LUNs and disks combined than the maximum visible limit.

48 | V-Series system Installation Requirements and Reference GuideRelease Candidate Documentation—25 August 2011

Contents Subject to Change

Page 49: Vs Install

• It is the maximum assigned LUN limit for all the systems in the neighborhood combined. The platform maximum assigned device limit, not the neighborhood maximum LUN limit, dictates the maximum number of disks and array LUNs that can be assigned to a stand-alone system or HA pair. However, in the context of the neighborhood, you might not be able to assign the maximum number of devices that the platform can support because the combined total of assigned devices must not exceed the neighborhood limit. For example, assume that you have two stand-alone systems in the neighborhood, and the maximum assigned limit for the platform is 600 devices. If the neighborhood LUN limit was 1,000 array LUNs, you could not assign 600 array LUNs to each system because the total assigned LUNs for the two systems would be 1,200, which is 200 LUNs more than the 1,000 neighborhood maximum LUN limit.
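The arithmetic in the example above can be worked through directly. The numbers come from the example itself, not from real platform limits.

```python
# Worked version of the example: two stand-alone systems, a platform
# limit of 600 assigned devices each, and a neighborhood maximum LUN
# limit of 1,000. The values are from the example, not real limits.

platform_limit = 600        # max devices assignable per system
neighborhood_limit = 1000   # max assigned LUNs for the whole neighborhood
systems = 2

naive_total = platform_limit * systems            # 1,200 if both maxed out
excess = naive_total - neighborhood_limit         # 200 LUNs over the limit

# Each system is capped by whichever is lower: its platform limit or an
# even share of the neighborhood limit.
per_system_cap = min(platform_limit, neighborhood_limit // systems)
```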

The V-Series Support Matrix provides a neighborhood maximum LUN limit for each V-Series platform type. You can use the neighborhood maximum LUN limit shown if your neighborhood members are using only array LUNs, and no other factors would reduce the neighborhood maximum LUN limit for your neighborhood. There are a number of factors that can reduce the neighborhood maximum LUN limit. Therefore, the limit in the V-Series Support Matrix might not be the actual limit for your neighborhood.

Load balancing and changing ownership among six systems is not supported for native disks. However, when disks are attached to a system in the neighborhood that uses both disks and array LUNs, you need to count the disks toward the visible limit.

Related concepts

Factors that impact the neighborhood maximum LUN limit on page 50

Platform maximum assigned device limit

You cannot exceed the maximum assigned device limit, which is the limit for the total number of disks and array LUNs combined that you can assign to a platform.

For every platform, Data ONTAP hard codes the maximum number of devices (disks and array LUNs combined) that a stand-alone system or HA pair can support. Consider the platform maximum assigned device limit because the combination of disks and LUNs that you assign to a V-Series system (through the Data ONTAP disk ownership feature) cannot exceed this limit.

The platform maximum assigned device limit does not limit the number of disks and array LUNs that the systems in a V-Series neighborhood can see (the visible limit); it limits only the number of disks and LUNs that you can assign to the platform. The visible limit is determined by the neighborhood maximum LUN limit.

Note: The platform maximum assigned device limit applies whether a V-Series system is in a neighborhood or not. The platform maximum assigned device limit is the same for a stand-alone system and HA pair because each node in the pair must be able to handle its storage and its partner’s storage if the partner becomes unavailable.


Factors that impact the neighborhood maximum LUN limit

Neighborhood members can never see more array LUNs than the neighborhood maximum LUN limit that is shown in the V-Series Support Matrix for the platform type. However, certain factors can reduce the maximum LUN limit from the number shown in the V-Series Support Matrix.

The factors that impact the neighborhood maximum LUN limit are as follows:

• If the systems in the neighborhood are mixed platform types.
• If there are neighborhood restrictions for the storage arrays that are presenting LUNs to the neighborhood systems.
• If disks are connected to the V-Series systems in the neighborhood.

Note: The lowest limit based on any factor becomes the maximum LUN limit for your neighborhood.

Related concepts

Neighborhood maximum LUN limit on page 48

How to establish a neighborhood

A neighborhood exists only when a V-Series system can see array LUNs that belong to another V-Series system that is not its partner in an HA pair. To establish a neighborhood, the storage arrays and switches must be configured to enable all V-Series systems in the same neighborhood to see the same array LUNs.

Related concepts

Data ONTAP configuration to establish a neighborhood on page 50
Storage array configuration to establish a neighborhood on page 50
Switch configuration to establish a neighborhood on page 51

Data ONTAP configuration to establish a neighborhood

No explicit configuration is required on a V-Series system to support neighborhoods.

The underlying functionality that enables the V-Series systems to operate as a neighborhood is the Data ONTAP disk ownership feature (assigning array LUNs to a specific V-Series system).

Note: After you determine the maximum number of LUNs that the storage arrays can present to your neighborhood, be sure to communicate that information to the storage array administrators.

Storage array configuration to establish a neighborhood

The storage array administrator must configure one or more storage arrays to present the same LUNs to the V-Series systems that you want to be in the neighborhood.

How a storage array administrator creates and presents LUNs to hosts varies on different storage arrays. A typical method is that the storage array administrator specifies the FC initiator ports of a number of V-Series systems to be in the same host group on the storage array. This host group configuration enables all the systems to see the same LUNs.

The storage array administrator must also set the storage array access controls so that all the V-Series systems can see the same array LUNs.
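The host group method can be pictured as data: every V-Series FC initiator port placed in the group sees the group's full LUN set. The WWPNs and names below are invented for the example; the actual procedure varies by storage array.

```python
# Illustrative host group: the FC initiator ports of every V-Series
# system that should join the neighborhood are placed in one host
# group, so all of them see the same LUNs. WWPNs are hypothetical.

host_group = {
    "name": "vseries_neighborhood",
    "initiators": [
        "50:0a:09:80:00:00:00:01",  # vs1 port 0a (hypothetical WWPN)
        "50:0a:09:80:00:00:00:02",  # vs2 port 0a (hypothetical WWPN)
    ],
    "luns": [1, 2, 3],
}


def visible_luns(group, initiator):
    # Every initiator in the group sees the group's full LUN set;
    # initiators outside the group see none of them (LUN security).
    return group["luns"] if initiator in group["initiators"] else []
```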

Related concepts

What a host group is on page 17

Switch configuration to establish a neighborhood

If your configuration is fabric attached, you must zone switch ports that connect to your V-Series system FC initiator ports. This ensures that all V-Series systems in the neighborhood can see the same array LUNs.

Note: It is recommended that you use single-initiator zoning, which limits each zone to a single V-Series system FC initiator port.


Planning a port-to-port connectivity scheme

Planning connectivity between the V-Series FC initiator ports and storage array ports includes determining how to achieve redundancy and meeting requirements for the number of paths to an array LUN.

Related concepts

Type of zoning recommended for a V-Series configuration on page 44

V-Series connection guidelines on page 53

Guidelines for V-Series FC initiator port usage on page 54

How FC initiator ports are labeled on page 54

V-Series connection guidelines

Be sure that your port-to-port connectivity plan addresses redundancy and pathing guidelines.

The requirements to set up connections are as follows:

• Attach each connection in a redundant port pair on the storage array to a different FC initiator port on the V-Series system.

• If your storage array supports fewer LUNs per host group per port than the number of LUNs that the V-Series systems will be using, you need to add additional cables between the V-Series system and the storage array.

• Use redundant ports on the Fibre Channel switches.
• Avoid a SPOF.

Ensure that the storage array ports you select to access a given LUN are on different components (for example, alternate controllers, clusters, or enclosures) so that no single component represents a single point of failure (SPOF). You want to ensure that you do not lose all access to a LUN if one component fails.

• Do not exceed the number of paths supported for your Data ONTAP release and mode. Data ONTAP 8.1 Cluster-Mode supports two or four paths to an array LUN. Data ONTAP 8.1 7-Mode and releases prior to Data ONTAP 8.1 support only two paths to an array LUN.

See the V-Series Implementation Guide for Third-party Storage for any information about port-to-port connectivity requirements for your storage array type.
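Two of the guidelines above lend themselves to simple checks: the supported number of paths per release and mode, and a SPOF check that the array ports chosen for a LUN span more than one controller. The release strings and data shapes are invented for this sketch.

```python
# Illustrative checks for two connection guidelines. These are not
# Data ONTAP functions; release strings and structures are made up.

def supported_path_counts(release, mode):
    """Return the set of supported path counts to an array LUN."""
    if release == "8.1" and mode == "cluster":
        return {2, 4}  # 8.1 Cluster-Mode supports two or four paths
    return {2}         # 8.1 7-Mode and releases prior to 8.1: two paths


def avoids_spof(array_ports):
    # The ports chosen to reach a LUN must not all sit behind one
    # controller (or cluster/enclosure), or that component is a SPOF.
    controllers = {port["controller"] for port in array_ports}
    return len(controllers) > 1
```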


Guidelines for V-Series FC initiator port usage

When you plan FC initiator port usage, ensure that you are using redundant FC initiator ports and that you meet the configuration requirements for FC initiator ports.

The guidelines for FC port usage are as follows:

FC initiator port pair and redundancy requirements: Redundant FC initiator port pairs are required to connect a V-Series system to array LUNs.

FC initiator port setting for HBAs: All V-Series HBAs that are used to access disks or array LUNs must be set to initiator ports.

Sharing an FC initiator port for multiple storage arrays: Not supported.

Sharing initiator ports for different storage and devices: Not supported.

Related concepts

How FC initiator ports are labeled on page 54

How FC initiator ports are labeled

All FC initiator ports on V-Series systems are identified by a number and a letter. Labeling differs depending on whether the ports are on the motherboard or cards in expansion slots.

Port numbering on the motherboard: Ports are numbered 0a, 0b, 0c, 0d, and so on.

Port numbering on expansion cards: Ports are numbered according to the slot in which the expansion card is installed. A card in slot 3 yields ports 3A and 3B. The FC initiator ports are labeled 1 and 2. However, the software refers to them as A and B. You see these labels in the user interface and system messages displayed on the console.

Related concepts

Guidelines for V-Series FC initiator port usage on page 54


Connecting a V-Series system to back-end devices

Plan V-Series FC initiator and storage array port usage before connecting V-Series systems to back-end devices. You must set up redundant connections to avoid a single point of failure (SPOF).

Connecting a V-Series stand-alone system to back-end devices

You can connect a stand-alone V-Series system to a storage array either directly or through a switch.

Steps

1. Identify onboard ports and expansion adapter ports for your V-Series model.

2. Locate the ports on the storage array that you want to use to connect to the V-Series system.

3. Connect the V-Series system to the storage array by using redundant FC initiator ports.

4. For a direct-attached configuration, connect the V-Series system to the storage array as follows:

a. Connect one cable from one FC initiator port on the V-Series system to controller 1 port 1 on the storage array.

b. Connect a cable from a redundant FC initiator port on the V-Series system to controller 2 port 1 on the storage array.

5. For a fabric-attached configuration, connect the V-Series system to the switches as follows:

a. Connect one cable from one FC initiator port on the V-Series system to Switch 1.

b. Connect another cable from the redundant port on the V-Series system to Switch 2.

6. For a fabric-attached configuration, connect the switch to the storage arrays as follows:

a. Connect Switch 1 to the storage array controller 1, port 1.

b. Connect Switch 2 to the storage array controller 2, port 1.

7. Connect a console cable to the console port on each V-Series system.

Use the RJ-45 to DB-9 adapter that is included with your system. Connect the console cable to the adapter.

8. Install the cable management tray by pinching the arms of the tray and fitting the holes in the arms through the motherboard tray pins. Then push the cables into the cable holders, thread the adapter cables through the top rows of the cable holders, and thread the port cables through the lower cable holders.


9. Connect the V-Series system to an Ethernet network by plugging the network cable into the networking port.

If you are connecting more than one network cable to the network, connect to the ports sequentially.

10. (Optional) Connect a remote management device from the back of the V-Series system to the network using an Ethernet cable.

Note: If you are using RLM, the network switch port for the RLM connection must negotiate down to 10/100 or autonegotiate.

11. Verify that the storage array is configured and connected properly, and that it is powered on.

Note: Your configured and connected storage array must be powered on before you power on your V-Series system. The V-Series system expects these units to be ready for input/output when it powers on and performs its reset and self-test.

12. If your deployment includes switches, make sure that all switch IDs are set, then turn on eachswitch 10 minutes apart from one another.

13. If applicable, turn on any tape backup devices.

14. For each power supply on the V-Series system, do the following:

a. Ensure that the power switch is in the Off (0) position.

b. Connect the socket end of the power cord to the power plug on the power supply.

c. Secure the power cord with the retaining adjustable clip on the power supply.

d. Plug the other end of the power cord into a grounded electrical outlet.

Note: To obtain power supply redundancy, you must connect the second power supply to a separate AC circuit.

15. Start a communications program.

You must use some form of communications program to be able to perform initial network setup and V-Series configuration. You can start a communications program through a remote management device after connecting to the serial port.

16. Turn the power switch on the V-Series system to the On (|) position.

The system verifies the hardware and loads the operating system.

17. If the storage array does not automatically discover V-Series system WWNs after you connect the V-Series system to the storage array, you must obtain the WWNs manually.

After you finish

Continue with the appropriate setup of your V-Series system and Data ONTAP.


Related concepts

Settings for connecting to an ASCII terminal console on page 85

Related tasks

Connecting an HA pair to back-end devices on page 57

Obtaining WWNs manually on page 83

Connecting an HA pair to back-end devices

You can connect an HA pair to a storage array either directly or through a switch.

Steps

1. Identify the onboard ports and expansion adapter ports for your V-Series model.

2. See the System Configuration Guide to ensure that your HA interconnect adapter is in the correct slot for your system in an HA pair.

3. Plug one end of the optical cable into one of the local node’s HA adapter ports, then plug theother end into the partner node’s corresponding HA adapter port.

Note: Do not cross-cable the HA interconnect adapter. Cable the local node ports only to theidentical ports on the partner node.

4. Repeat the previous step for the two remaining ports on the HA adapter.

5. Locate the ports on the storage array that you want to use to connect the V-Series system to the storage array, either directly or through a switch.

6. For a direct-attached HA pair, connect V-Series system 1 to the storage array, using redundant FC initiator ports.

a. Connect one cable from one FC initiator port on V-Series system 1 to controller 1, port 1, on the storage array.

b. Connect a cable from a redundant FC initiator port on V-Series system 1 to controller 2, port 1, on the storage array.

7. For a direct-attached HA pair, connect V-Series system 2 to the storage array, using redundant FC initiator ports.

a. Connect one cable from one FC initiator port on V-Series system 2 to controller 1, port 1, on the storage array.

b. Connect a cable from a redundant FC initiator port on V-Series system 2 to controller 2, port 1, on the storage array.


8. For a fabric-attached HA pair, connect V-Series system 1 to the switches, using redundant port pairs.

a. Connect one cable from one FC initiator port on V-Series system 1 to Switch 1.

b. Connect another cable from a redundant FC initiator port on V-Series system 1 to Switch 2.

9. For a fabric-attached HA pair, connect V-Series system 2 to the switches, using redundant port pairs.

a. Connect one cable from one FC initiator port on V-Series system 2 to Switch 1.

b. Connect another cable from a redundant FC initiator port on V-Series system 2 to Switch 2.

10. For a fabric-attached configuration, connect the switches to the storage array.

a. Connect Switch 1 to the storage array cluster (controller) 1, port 1.

b. Connect Switch 2 to the storage array cluster (controller) 2, port 1.

11. (Optional) Connect the V-Series system to a tape backup device through a separate FC initiator port or SCSI tape adapter.

12. Connect a console cable to the console port on each V-Series system. Use the RJ-45 to DB-9 adapter that is included with your system. Connect the console cable to the adapter.

13. Install the cable management tray by pinching the arms of the tray and fitting the holes in the arms through the motherboard tray pins. Then push the cables into the cable holders, thread the adapter cables through the top rows of the cable holders, and thread the port cables through the lower cable holders.

14. Connect the V-Series system to an Ethernet network by plugging the network cable into the networking port. If you are connecting more than one network cable to the network, connect to the ports sequentially. Use the cable management tray to direct all the cabling from your system.

15. (Optional) Connect a remote management device from the back of the V-Series system to the network using an Ethernet cable.

Note: The network switch port for the RLM connection must negotiate down to 10/100 or autonegotiate.

16. Verify that the storage array is configured and connected properly, and that it is powered on.

Note: Your configured and connected storage array must be powered on before you power on your V-Series system. See your storage array documentation for how to power on the storage array. The V-Series system expects these units to be ready for input/output when it powers on and performs its reset and self-test.

17. If your deployment includes switches, make sure that all switch IDs are set, then turn on each switch 10 minutes apart from one another.

18. If applicable, turn on any tape backup devices.

19. For each power supply on the V-Series system, do the following:


a. Ensure that the power switch is in the Off (0) position.

b. Connect the socket end of the power cord to the power plug on the power supply.

c. Secure the power cord with the retaining adjustable clip on the power supply.

d. Plug the other end of the power cord into a grounded electrical outlet.

Note: To obtain power supply redundancy, you must connect the second power supply to a separate AC circuit.

20. Start a communications program.

You must use some form of communications program to be able to perform initial network setup and V-Series configuration. You can start a communications program through a remote management device after connecting to the serial port.

21. Turn the power switch on the V-Series system to the On (|) position.

The system verifies the hardware and loads the operating system.

22. If the storage array does not automatically discover V-Series system WWNs after you connect the V-Series system to the storage array, you must obtain the WWNs manually.

After you finish

Continue with the appropriate setup of your V-Series system and Data ONTAP.

Related concepts

Settings for connecting to an ASCII terminal console on page 85

Related tasks

Connecting a V-Series stand-alone system to back-end devices on page 55

Obtaining WWNs manually on page 83


Validating a V-Series installation (8.1 Cluster-Mode and later)

It is important to detect and resolve any configuration errors before you deploy your system in a production environment.

In Data ONTAP 8.1 Cluster-Mode and later, you can use Data ONTAP commands to detect problems with Data ONTAP configuration, misconfigured zoning, and configuration errors on the storage array.

Validating a back-end configuration (8.1 Cluster-Mode and later)

In Data ONTAP 8.1 Cluster-Mode and later, use the storage array config show command to validate the back-end configuration. This command returns information that you can use to identify configuration errors and configurations that differ from your intended design.

Step

1. To display information about how storage arrays connect to the cluster, enter the following command:

storage array config show

Example

The system displays information similar to the following:

             LUN   LUN
Node         Group Count Array Name     Array Target Port  Initiator
------------ ----- ----- -------------- ------------------ ---------
vgv3040f46b  0     10    HP_HSV300_1    50014380025d1508   0a
                                        50014380025d1509   0c
                                        50014380025d150c   0b
                                        50014380025d150d   0d
             1     10    IBM_2107900_1  5005076303030124   0a
                                        5005076303088124   0b
                                        5005076303130124   0c
                                        5005076303188124   0d
8 entries were displayed.

The information displayed in this example shows a valid back-end configuration with four initiator ports connected to two storage arrays.

When an error in the configuration is detected, the system displays the following message:

Warning: Configuration errors were detected. Use 'storage errors show' for detailed information.

You must fix any errors shown by storage errors show.
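This two-command flow can be scripted when you capture command output over the console or SSH. The following is a minimal sketch in Python, assuming you have already captured the text output of storage array config show; the warning string matches the message shown above, but how you collect the output is up to you.

```python
# Sketch: decide from captured `storage array config show` output whether
# `storage errors show` must be run next. The marker text is taken from
# the warning message the system prints when errors are detected.

WARNING_MARKER = "Configuration errors were detected"

def needs_error_check(config_show_output: str) -> bool:
    """Return True if the output contains the configuration-error warning."""
    return WARNING_MARKER in config_show_output

clean = "vgv3040f46b 0 10 HP_HSV300_1 ...\n8 entries were displayed."
bad = ("Warning: Configuration errors were detected. "
       "Use 'storage errors show' for detailed information.")

print(needs_error_check(clean))  # False
print(needs_error_check(bad))    # True
```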


Displaying back-end configuration errors

In Data ONTAP 8.1 Cluster-Mode, the storage errors show command provides details, at the array LUN level, about back-end configuration issues with third-party storage used by V-Series systems.

About this task

If you are reverting to an earlier release, you need to run storage errors show before reverting to determine whether there are any back-end configuration errors that must be fixed before downgrading. Some errors that Data ONTAP 8.1 tolerates might cause a panic in earlier releases.

Step

1. Enter the following command to display errors detected in the back-end storage array configuration:

storage errors show

Error information similar to the following is displayed:

Disk: vnv3070f20b:vnci9124s54:1-24.126L23
vnci9124s54:1-24.126L23 (600a0b800019e999000036b24bac3983):
This array LUN reports an invalid block size and is not usable. Only a block size of 512 is supported.

Back-end configuration errors detected by the storage errors show command

In 8.1 Cluster-Mode, the storage errors show command provides details, at the array LUN level, about back-end configuration issues for V-Series systems using third-party storage. Normally, to validate the back-end configuration in 8.1 Cluster-Mode, you run storage array config show first. You then run storage errors show if the output of storage array config show directs you to.

You must fix errors detected by storage errors show before assigning array LUNs to your system or before downgrading your system to an earlier release.

The storage errors show command identifies back-end configuration that prevents the V-Series system and storage array from operating normally together. It also identifies back-end configuration that does not comply with Data ONTAP requirements. The issues identified include the following:

• Fewer than two paths to an array LUN.
• All paths to an array LUN are on the same storage array controller.
• Two array LUNs are presented with the same LUN ID.


• The LUN IDs for the same LDEV are not the same on all target ports.
• The array LUN exceeds the Data ONTAP maximum LUN size.
• The array LUN does not meet the Data ONTAP minimum LUN size.
• The block size of an array LUN is invalid.
• An Engenio Access LUN was presented to the V-Series system.

The storage errors show command does not flag configurations that merely deviate from best practice recommendations, or conditions that occur during transitional states and might not match your final intentions.

For example, you might see more LUN groups than intended, but Data ONTAP does not identify this as an error. The system tolerates this configuration because, during migration of LUNs from one LUN group to another, an extra LUN group might appear in storage array config show output until the migration is complete.
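Several of the checks listed above can be sketched as code. The record layout below (a list of controller/LUN-ID pairs per array LUN) is hypothetical and invented purely for illustration; Data ONTAP performs these checks internally.

```python
# Illustrative sketch of four of the checks listed above, applied to a
# hypothetical inventory of array-LUN paths. The (controller, lun_id)
# record layout is our own invention, not a Data ONTAP data structure.

def check_lun(paths, block_size=512):
    """paths: list of (controller, lun_id) tuples for one array LUN.
    Returns a list of error strings; empty if the LUN passes."""
    errors = []
    if len(paths) < 2:
        errors.append("only available on one path; two paths are required")
    elif len({ctrl for ctrl, _ in paths}) < 2:
        errors.append("all paths are on the same storage array controller")
    if len({lun_id for _, lun_id in paths}) > 1:
        errors.append("using multiple LUN IDs; only one LUN ID per "
                      "serial number is supported")
    if block_size != 512:
        errors.append("invalid block size; only a block size of 512 is supported")
    return errors

print(check_lun([("ctrl1", 23), ("ctrl2", 23)]))   # [] -- passes
print(check_lun([("ctrl1", 23)]))                  # single-path error
print(check_lun([("ctrl1", 23), ("ctrl2", 24)]))   # multiple-LUN-ID error
```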

Example output for storage errors show

The output of storage errors show is grouped by storage array. The name and serial number of the LUN are shown, when applicable.

In the following storage errors show output, on IBM_1742_1 more than one LUN ID is being used for the same array LUN. On AMS2300_1, there is only one path to the specified LUN.

> storage errors show
IBM_1742_1
----------
NAME (Serial #): This Array LUN is using multiple LUN IDs. Only one LUN ID per serial number is supported.
NAME (Serial #): This Array LUN is using multiple LUN IDs. Only one LUN ID per serial number is supported.

AMS2300_1
---------
NAME (Serial #): This Array LUN is only available on one path. Proper configuration requires two paths.


Validating a V-Series installation (8.x 7-Mode)

It is important to detect and resolve any configuration errors before you bring the configuration online in a production environment.

Checking the number of paths (8.0.x and 8.1 7-Mode)

For systems running Data ONTAP 8.0.x or 8.1 7-Mode, use the storage array show-config command to list the array LUNs that the storage system can access, and to check that each array LUN is visible through both paths.

Steps

1. Enter the following command to show the array LUNs in your configuration:

storage array show-config

Example

The following output shows correctly configured array LUNs. Each LUN group contains two paths.

LUN Group          Array Name    Array Target Ports       Switch Port     Initiator
Group 0 (4 LUNS)   HP_HSV210_1   50:00:1f:e1:50:0a:86:6d  vnmc4300s35:11  0b
                                 50:00:1f:e1:50:0a:86:69  vnbr4100s31:15  0c
Group 1 (4 LUNS)   HP_HSV210_1   50:00:1f:e1:50:0a:86:68  vnbr4100s31:1   0a
                                 50:00:1f:e1:50:0a:86:6c  vnmc4300s35:6   0d
Group 2 (50 LUNS)  HP_HSV200_1   50:00:1f:e1:50:0d:14:6d  vnbr4100s31:5   0a
                                 50:00:1f:e1:50:0d:14:68  vnmc4300s35:3   0d

2. If you see array LUNs in the output from storage array show-config with only one path, enter the following command to show information about each array LUN on the storage array:

storage array show-luns

Example

Array LUNs that are not configured with two paths are shown as LUNs with a single path. In the following example output, the incorrectly configured LUNs are the 20 LUNs not belonging to a group and showing only a single path.

LUN Group          Array Name    Array Target Ports       Switch Port     Initiator
Group 2 (50 LUNS)  HP_HSV200_1   50:00:1f:e1:50:0d:14:68  vnmc4300s35:3   0d
                                 50:00:1f:e1:50:0d:14:6d  vnbr4100s31:5   0a
(20 LUNs)          HP_HSV200_1   50:00:1f:e1:50:0d:14:69  vnmc4300s35:2   0e


Example output showing correct and incorrect pathing (8.0.x and 8.1 7-Mode)

By learning to interpret Data ONTAP command output, you can determine whether there are sufficient paths to an array LUN.

For systems running Data ONTAP 8.0.x 7-Mode or 8.1 7-Mode, you use the commands in the following table to check pathing.

Run this command...          To...
storage array show-config    • List the array LUNs that the V-Series system can access
                             • Check that each array LUN is visible through both paths
storage array show-luns      • Obtain details about an individual LUN

A single path indicates a configuration problem. The second path can have a problem at the V-Series port, the switch, or the storage array port.

Output of storage array show-config showing two paths

The following example shows output from a V-Series system connected to two storage arrays.

> storage array show-config
LUN Group          Array Name    Array Target Ports       Switch Port     Initiator
Group 0 (4 LUNS)   HP_HSV210_1   50:00:1f:e1:50:0a:86:6d  vnmc4300s35:11  0b
                                 50:00:1f:e1:50:0a:86:69  vnbr4100s31:15  0c
Group 1 (4 LUNS)   HP_HSV210_1   50:00:1f:e1:50:0a:86:68  vnbr4100s31:1   0a
                                 50:00:1f:e1:50:0a:86:6c  vnmc4300s35:6   0d
Group 2 (50 LUNS)  HP_HSV200_1   50:00:1f:e1:50:0d:14:6d  vnbr4100s31:5   0a
                                 50:00:1f:e1:50:0d:14:68  vnmc4300s35:3   0d

In this valid example, each LUN group consists of LUNs that share the same two paths. Groups 0 and 1 contain a total of 8 LUNs on the HP_HSV210_1 array, and Group 2 contains 50 LUNs on the HP_HSV200_1 array.

Output of storage array show-config if there are not two paths

Array LUNs that are not configured with two paths are shown as one or more LUNs with a single path, similar to the following example. The incorrectly configured LUNs are the 20 LUNs not belonging to a group and showing only a single path.

LUN Group          Array Name    Array Target Ports       Switch Port     Initiator
Group 2 (50 LUNS)  HP_HSV200_1   50:00:1f:e1:50:0d:14:68  vnmc4300s35:3   0d
                                 50:00:1f:e1:50:0d:14:6d  vnbr4100s31:5   0a
(20 LUNs)          HP_HSV200_1   50:00:1f:e1:50:0d:14:69  vnmc4300s35:2   0e

Output of storage array show-luns

If you see array LUNs in the output from the storage array show-config command with only one path, you need to use the storage array show-luns command to show information about each array LUN on the storage array. This information enables you to determine which array LUNs are members of groups and which are incorrectly configured. The storage array show-luns HP_HSV200_1 command produces output similar to the following (the output is abbreviated).

Name                  WWPNs
vnmc4300s35:3.127L1   50:00:1f:e1:50:0d:14:68, 50:00:1f:e1:50:0d:14:6d
vnmc4300s35:3.127L2   50:00:1f:e1:50:0d:14:68, 50:00:1f:e1:50:0d:14:6d
vnmc4300s35:3.127L3   50:00:1f:e1:50:0d:14:68, 50:00:1f:e1:50:0d:14:6d
...
vnbr4100s31:5.126L49  50:00:1f:e1:50:0d:14:6d, 50:00:1f:e1:50:0d:14:68
vnmc4300s35:3.127L50  50:00:1f:e1:50:0d:14:68, 50:00:1f:e1:50:0d:14:6d
vnmc4300s35:3.127L51  50:00:1f:e1:50:0d:14:69,
vnmc4300s35:3.127L52  50:00:1f:e1:50:0d:14:69,
...
vnbr4100s31:5.126L53  50:00:1f:e1:50:0d:14:69,
vnbr4100s31:5.126L70  50:00:1f:e1:50:0d:14:69,

LUNs 1 through 50 make up Group 2, the array LUNs configured with two paths as shown by the storage array show-config command. LUNs 51 through 70 are the 20 LUNs that have a single path, connected only to port 50:00:1f:e1:50:0d:14:69 of the storage array.
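As a rough illustration of this interpretation, a script can scan show-luns-style output for LUNs reporting only one WWPN. The parsing below assumes the abbreviated line format shown above (name followed by comma-separated WWPNs) and is a sketch, not a supported tool.

```python
# Rough parser for `storage array show-luns`-style lines, flagging
# array LUNs that report fewer than two WWPNs (a single path).

def single_path_luns(output: str):
    bad = []
    for line in output.splitlines():
        parts = line.split()
        if not parts or parts[0] == "Name":
            continue  # skip blank lines and the column header
        name = parts[0]
        # WWPNs follow the name; single-path LUNs show one WWPN with a
        # trailing comma in the sample output, so strip commas first.
        wwpns = [w.strip(",") for w in parts[1:] if w.strip(",")]
        if len(wwpns) < 2:
            bad.append(name)
    return bad

sample = """Name WWPNs
vnmc4300s35:3.127L1 50:00:1f:e1:50:0d:14:68, 50:00:1f:e1:50:0d:14:6d
vnmc4300s35:3.127L51 50:00:1f:e1:50:0d:14:69,
"""
print(single_path_luns(sample))  # ['vnmc4300s35:3.127L51']
```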

Related concepts

Valid path setup examples on page 33

Invalid path setup examples on page 69


Troubleshooting

You should validate your configuration during initial installation so you can resolve issues before your configuration is put into a production environment.

Invalid path setup examples

Path setup can be invalid because paths to an array LUN are not redundant or the number of paths to an array LUN does not meet Data ONTAP requirements.

Related concepts

Requirement for redundant setup of components in a path on page 25

Required number of paths to an array LUN on page 26

Valid path setup examples on page 33

Invalid path setup: too many paths to an array LUN (8.0.x and 8.1.x 7-Mode)

Data ONTAP 8.0.x and 8.1.x 7-Mode require two paths to an array LUN; more than two paths to an array LUN are not supported.

The path setup in the following example is invalid because the same array LUN would be accessed over four paths instead of only two paths.


[Figure: V-Series system vs1 (FC initiator ports 0a through 0d) connected through Switch 1 and Switch 2 to storage array Controllers 1 and 2 (ports A and B), which present LUN 1. Connection 3 runs from the V-Series system to each switch; Connections 4 and 5 run from each switch to the storage array controllers.]

In this invalid scenario, Connection 3 from 0a through Switch 1 is incorrectly zoned to both Connection 4 and Connection 5. Likewise, Connection 3 from 0c through Switch 2 is incorrectly zoned to both Connection 4 and Connection 5. The result is that array LUN 1 is seen over more than two paths.

For this configuration to be correct for 8.0.x and 8.1.x 7-Mode, FC initiator port 0a must see either Controller 1 Port A or Controller 2 Port A on the storage array, but not both. Likewise, FC initiator port 0c must see either Controller 1 Port B or Controller 2 Port B on the storage array, but not both.
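The two-path rule can be illustrated with a small model of the zoning described above. The zone and port names below are illustrative only, not taken from a real switch configuration.

```python
# Sketch of the 7-Mode rule above: each array LUN must be reachable over
# exactly two initiator-to-target-port paths. Port names are invented.

def paths_to_lun(zones, ports_presenting_lun):
    """zones: iterable of (initiator, target_port) pairs the fabric allows.
    Returns the resulting paths to a LUN presented on the given ports."""
    return [(i, t) for i, t in zones if t in ports_presenting_lun]

lun_ports = {"ctrl1_A", "ctrl2_A"}

# Invalid: each initiator is zoned to both controllers, as in the figure.
over_zoned = [("0a", "ctrl1_A"), ("0a", "ctrl2_A"),
              ("0c", "ctrl1_A"), ("0c", "ctrl2_A")]
print(len(paths_to_lun(over_zoned, lun_ports)))  # 4 -- unsupported

# Valid: each initiator sees exactly one target port.
corrected = [("0a", "ctrl1_A"), ("0c", "ctrl2_A")]
print(len(paths_to_lun(corrected, lun_ports)))   # 2 -- required
```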

Related concepts

Required number of paths to an array LUN on page 26

Valid path setup examples on page 33

Invalid path setup: alternate paths are not configured

It is important to set up alternate paths to array LUNs to provide access to all V-Series LUNs from both V-Series FC initiators and to avoid a single point of failure (SPOF).

The following configuration is invalid because it does not provide alternate paths from each V-Series FC initiator port on the V-Series systems to each LUN on the storage array. Both FC initiator ports from the same V-Series system are connected to the storage array through the same switch.


[Figure: HA pair vs1 and vs2 (FC initiator ports 0a through 0d on each) connected through Switch 1 (Fabric 1) and Switch 2 (Fabric 2) to storage array Controllers 1 and 2 (ports A through D), which present LUN group 1 and LUN group 2. Zones z1 through z4 are shown; cluster interconnect cables join vs1 and vs2.]

Assume that the following zoning is in place in this invalid example:

• For vs1:
  • 0a is zoned to see Controller 1 Port A
  • 0c is zoned to see Controller 1 Port C
• For vs2:
  • 0a is zoned to see Controller 2 Port A
  • 0c is zoned to see Controller 2 Port C

In this sample configuration, the problems that result from not having alternate paths are:

• Each switch becomes a SPOF.
• vs1’s FC initiator port 0a can access LUNs in LUN group 1 on the storage array’s Controller 1 port A, but it cannot access the LUNs in LUN group 2.
• vs2’s FC initiator port 0a can access LUNs in LUN group 1 through the storage array’s Controller 2 port A, but it cannot access LUNs in LUN group 2.

To make this a valid configuration, the following changes must be made:


• vs1’s FC initiator port 0c must be connected to Switch 2.
• vs2’s FC initiator port 0a must be connected to Switch 1.
• Appropriate zoning must be configured.

If you are using multiple ports on a storage array that supports configuring a specific set of LUNs on a selected set of ports, ensure that a given FC initiator port sees all array LUNs presented on the fabric.

Related concepts

Requirement for redundant setup of components in a path on page 25


Installation quick start (7-Mode and third-party storage only)

If you are familiar with setting up a system running Data ONTAP, quick start instructions might be sufficient to help you set up your V-Series system to work with a storage array.

The quick start installation instructions are for a V-Series configuration in which the V-Series systems are running Data ONTAP 7-Mode and the configuration uses only third-party storage.

Steps

1. Example configuration for the installation quick start (7-Mode and third-party storage) on page 73

2. Performing pre-installation tasks on the storage array on page 74

3. Installing the V-Series system on page 75

4. Setting up the switches on page 76

5. Setting up LUN security on page 77

6. Assigning an array LUN to a V-Series system and creating the root volume on page 77

7. Installing Data ONTAP and licenses on page 79

8. Testing your setup on page 80

9. Additional setup on page 81

Example configuration for the installation quick start (7-Mode and third-party storage)

Setting up connectivity between a V-Series system and a storage array is basically the same for any configuration.

Refer to the following single 4-port array LUN group example as you use the quick start. This is the recommended configuration. It is supported with any V-Series model, any switch, and any storage array, regardless of vendor.


[Figure: HA pair vs1 and vs2 (FC initiator ports 0a through 0d on each) connected through Switch 1 (Fabric 1) and Switch 2 (Fabric 2) to storage array Controller 1 (ports 1A and 1B) and Controller 2 (ports 2A and 2B), which present LUN group 1. Zones z1 through z4 are shown.]

In this configuration with one 4-port LUN group, array LUNs are mapped to four ports on the storage array. The array LUN group is presented to both nodes in the HA pair configuration on different array target ports. However, each V-Series system can see an array LUN, end-to-end, through only two paths. Zoning is configured so that each FC initiator port on the V-Series systems can access only a single target array port.

Note: V-Series FC initiator port names and storage array port names vary depending on the V-Series model and storage array model. In the illustration, Controller 1 and Controller 2 are the hardware components on which the ports are located. Different vendors and different storage array models use different terminology to represent these hardware components (for example, cluster or controller for Hitachi, HP XP, and IBM; Storage Processor (SP) for EMC CLARiiON; and Controller Module for Fujitsu ETERNUS).

Performing pre-installation tasks on the storage array

Before you can begin installing a V-Series configuration, the storage array administrator must prepare storage for Data ONTAP to use.

Steps

1. Ensure compliance with the supported storage array models, firmware, switches, Data ONTAP versions, and root and core LUN sizes.

See the V-Series Support Matrix at support.netapp.com.


2. Ask your storage administrator to create at least four LUNs on the storage array for the V-Series system.

Each node in the HA pair requires an array LUN for the root volume and an array LUN for core dumps.

3. Ask your storage array administrator to configure any parameters on the storage array that are required to work with Data ONTAP.

See the V-Series Implementation Guide for Third-party Storage for information about the parameters that must be set to work with Data ONTAP.

4. Obtain the appropriate Data ONTAP software.

Related concepts

Type of zoning recommended for a V-Series configuration on page 44

Minimum array LUN size for the root volume on page 21

When a spare core array LUN is required for core dumps on page 22

Installing the V-Series system

After the storage administrator makes storage available to Data ONTAP, you are ready to install the V-Series system.

Steps

1. Power on the V-Series system and interrupt the boot process by pressing Ctrl-C when you see the following message on the console:

Starting Press CTRL-C for special boot menu

2. Select “Maintenance mode boot” on the menu.

Do not proceed any further with V-Series system installation and setup at this time.

3. Check the settings of the V-Series HBAs to ensure that they are configured as initiators.

a. Determine which ports are configured as target ports:

fcadmin config

b. Configure the required ports as initiator ports:

fcadmin config -t initiator port#

4. Install the Fibre Channel cables connecting the V-Series system to switches and switches to the storage array.


Setting up the switches

Switch configuration is typically done by the storage or SAN administrator. Some customers use hard zoning and others use soft zoning.

About this task

The switches need to be zoned so that the V-Series systems and the storage arrays can see each other. You need to use single-initiator zoning so that the V-Series FC initiator ports do not see each other.

Step

1. Zone the switches:

a. Log on to the storage array and obtain the WWPNs of the FC adapters of the storage array.

b. Use the Fibre Channel switch commands to zone each switch so that the storage array and the V-Series system see each other’s WWPNs.

In the example configuration, the zones are as follows for soft zoning.

Zone  V-Series system and port  Storage array controller and port

Switch 1
z1    vs1 0a                    Controller 1 1A
z2    vs2 0a                    Controller 1 2A

Switch 2
z3    vs1 0c                    Controller 2 1B
z4    vs2 0c                    Controller 2 2B
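The single-initiator rule can be checked mechanically against a zone table like the one above. The sketch below transcribes that table into a plain data structure of our own devising (not a switch API) and verifies that every zone contains exactly one V-Series initiator.

```python
# Sketch: verify the single-initiator rule for the example zone table.
# Zone membership is transcribed from the table above; the dict layout
# is illustrative, not a real switch configuration format.

zones = {
    "z1": {"initiators": ["vs1 0a"], "targets": ["Controller 1 1A"]},
    "z2": {"initiators": ["vs2 0a"], "targets": ["Controller 1 2A"]},
    "z3": {"initiators": ["vs1 0c"], "targets": ["Controller 2 1B"]},
    "z4": {"initiators": ["vs2 0c"], "targets": ["Controller 2 2B"]},
}

def is_single_initiator(zone):
    """One initiator per zone, with at least one target port."""
    return len(zone["initiators"]) == 1 and len(zone["targets"]) >= 1

print(all(is_single_initiator(z) for z in zones.values()))  # True
```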

Related concepts

Zoning guidelines on page 43


Setting up LUN security

The storage array administrator must configure the storage array so that other hosts cannot access the array LUNs intended for use by Data ONTAP.

About this task

The concept of LUN security is similar to zoning except that LUN security is set up on the storage array. LUN security keeps different servers from using each other’s storage on the SAN. LUN security might also be referred to as LUN masking.

Steps

1. Set up LUN security on the storage array.

2. Create host groups, or the equivalent, for the V-Series system.

The term host group is used on some storage arrays to describe a configuration parameter that enables you to specify host access to specific ports on the storage array. Different storage arrays use different terms to describe this configuration parameter. Each storage array vendor has its own process for creating a host group or the equivalent.

Related concepts

Planning for LUN security on the storage arrays on page 23

Assigning an array LUN to a V-Series system and creating the root volume

For each V-Series system, you must assign an array LUN to it and create its root volume before you can install Data ONTAP.

About this task

At this point, it is easiest to assign only one array LUN to each V-Series system. You can add additional array LUNs after you have installed Data ONTAP software and verified your configuration.

For Data ONTAP 8.x, the root volume must be a FlexVol volume. You should assign only one array LUN to the aggregate with the root volume. The FlexVol volume can then use all the space for the root volume.


Steps

1. Return to the V-Series console.

The system should still be in Maintenance Mode.

2. Enter the following command to confirm that you can see the array LUNs created by the storage administrator:

disk show -v

If you do not see the array LUNs, reboot the V-Series system into Maintenance Mode. Double-check that the array LUNs exist, that the host groups were created correctly, that zoning is correct, and that cabling is correct.

3. Enter the following command to assign the first array LUN to the V-Series system:

disk assign {disk_name}

For example, on V-Series system 1, enter:

disk assign L1

On V-Series system 2, enter:

disk assign L2

Note: The best practice recommendation is to use the block checksum type (BCS), the default, because it supports deduplication and compression.

4. Confirm that the system ID of the V-Series system is shown as the owner of the array LUN:

disk show -v

If the system ID of a V-Series system is shown, the array LUN was assigned to the V-Series system.

5. Exit Maintenance Mode:

halt

6. At the boot environment prompt, enter the following command:

bye

7. Press Ctrl-C to interrupt the boot process and to display the boot options menu.

8. Select the following option to create the root volume with one of the array LUNs that you assigned to this storage system: Clean configuration and initialize all disks.

9. Enter the following when the system asks whether you want to install a new file system:

y

10. The system responds with the following message: This will erase all the data on the disks, are you sure?

Enter:


y

The storage system creates a FlexVol root volume named “vol0” in an aggregate named “aggr0” (the system automatically creates the aggregate). After these are created on one of the assigned array LUNs, the system prompts for setup information.

Related concepts

Planning for Data ONTAP use of array LUNs on page 17

Installing Data ONTAP and licenses

When a V-Series system is ordered without disk shelves, you must install Data ONTAP and the required V-Series license.

About this task

You can perform Data ONTAP software installation using either of the following methods:

• Map the V-Series C$ share to your laptop as a CIFS share.
• On an HTTP server, use the software install http://ipaddr/file.zip command to install the software.

This procedure describes using the CIFS share method.

Steps

1. Install the CIFS and V-Series licenses.

You can install other licenses now or later.

2. Run CIFS setup in workgroup mode.

3. Map the V-Series C$ to your laptop.

4. Make an /etc/software directory (or enter the software list command to create the /etc/software directory).

5. Copy the Data ONTAP executable to the /etc/software directory.

6. Enter the following command to run the software executable:

software install <release_setup.exe>

7. Run the download command to copy the Data ONTAP system files to the boot device.

8. Reboot the V-Series system.

9. If your V-Series system is a node in an HA pair, repeat the setup and Data ONTAP software installation steps on the partner node before validating your setup.


Testing your setup

Before putting a V-Series system into a production environment, you must test your setup to ensure that everything works.

About this task

Testing for proper setup of a V-Series system with a storage array includes:

• Checking your system to ensure that the configuration is as expected
• Verifying that there are two paths to storage
• Testing normal controller failover
• Testing path failover and controller failover

Steps

1. Use the following commands to confirm that the results of configuration are what you expect.

Use this command...            To...

disk show -v                   Check whether all array LUNs are visible.

sysconfig -v                   Check which V-Series FC initiator ports, switch ports, and array LUNs are used.

storage array show-config      Display connectivity to back-end storage arrays.

sysconfig -r                   Check the aggregate configuration and ensure that spare array LUNs are available.

2. Ensure that there are two paths to each array LUN so that the V-Series system can continue to operate when running on a single path.

a. Enter the following command:

storage array show-config

b. Check whether two paths to the array LUNs are shown.

If you do not see two paths to the array LUNs, check zoning, “host group” configuration, and cabling.

c. Look at the adapters shown to see whether all paths are on a single adapter.

If you see both paths through only one V-Series system FC initiator port (the V-Series system’s 0c port, for example), this is an indication that the back-end zoning is redundantly crossed. This is not a supported configuration.

Note: Do not continue with testing until you see two paths.

3. Test normal controller failover.


a. On the partner node (vs2 in the example configuration), enter the following commands to perform a cf takeover. A takeover of vs1 should occur.

Example

vs2> cf status

vs2> cf takeover (vs1 goes down; vs2 takes over)

vs1/vs2> df (vs2 looks at vs1)

vs1/vs2> partner (to switch to the partner)

vs2> sysconfig

vs2 (takeover)> cf status

vs2 (takeover)> cf giveback (to return to normal operation)

b. Repeat the same commands on the local V-Series system (vs1 in the example configuration).

4. Test path failover and cluster failover.

a. On the local V-Series system (vs1 in the example configuration), enter the following commands:

fcadmin offline 0a

storage show disk -p (You should see only one path) or storage array show-config

fcadmin offline 0c (HA pair takeover should occur)

b. On the partner node (vs2 in the example configuration), enter the following commands:

cf takeover

cf giveback

c. After both the local node (vs1) and the partner node (vs2) are back online, go to the partner node (vs2 in the example) and repeat the procedure.

Related concepts

Validating a V-Series installation (8.x 7-Mode) on page 65

Additional setup

After initial installation and testing, you can assign additional array LUNs to your V-Series systems and set up various Data ONTAP features on your systems.

Tasks after initial installation and testing include the following:

• Assign additional array LUNs to the V-Series systems as required.


After the basic V-Series setup is complete, you can have the storage administrator create additional array LUNs for the V-Series systems, as needed.

• Create Data ONTAP aggregates and volumes as desired.
• Set up additional Data ONTAP features on your V-Series system, for example, features for backup and recovery.

Related concepts

Planning for Data ONTAP use of array LUNs on page 17

Determining the array LUNs for specific aggregates on page 39

Related information

Data ONTAP documentation on NOW — now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml


Obtaining WWNs manually

If the V-Series system is not connected to the SAN switch, use this procedure to obtain the worldwide port names (WWPNs) of the V-Series FC initiator ports that will be used to connect the V-Series system to the switch.

About this task

Having the switch automatically discover WWPNs is the preferred method of obtaining WWPNs because you can avoid potential errors resulting from typing the WWPNs into the switch configuration.

Steps

1. Connect the V-Series system console connection to a laptop computer.

2. Power on your V-Series system.

Interrupt the boot process by pressing Ctrl-C when you see the following message on the console:

Starting Press CTRL-C for floppy boot menu

3. Select the Maintenance Mode option on the boot options menu.

4. Enter the following to list the WWPNs of the V-Series system FC initiator ports:

storage show adapter

To list the WWPN of a specific adapter, add the adapter name, for example, storage show adapter 0a.

5. Record the WWPNs that will be used and leave the V-Series system in Maintenance Mode.

Related tasks

Connecting a V-Series stand-alone system to back-end devices on page 55

Connecting an HA pair to back-end devices on page 57


Settings for connecting to an ASCII terminal console

You can attach an ASCII terminal console through the serial port on the back of your V-Series system if you want to do local system administration.

The ASCII terminal console enables you to monitor the boot process and helps you configure your V-Series system after it boots.

The ASCII terminal console is connected to your V-Series system with a DB-9 serial adapter attached to an RJ-45 converter cable. The DB-9 adapter connects to the DB-9 serial port on the back of your V-Series system.

The following table shows how the DB-9 serial cable is wired. Input indicates data flow from the ASCII terminal to your V-Series system, and output indicates data flow from your V-Series system to the ASCII terminal.

Pin number   Signal   Data flow direction   Description

1            DCD      Input                 Data carrier detect
2            SIN      Input                 Serial input
3            SOUT     Output                Serial output
4            DTR      Output                Data terminal ready
5            GND      N/A                   Signal ground
6            DSR      Input                 Data set ready
7            RTS      Output                Request to send
8            CTS      Input                 Clear to send
9            RI       Input                 Ring indicator

The following table shows the communications parameters for connecting an ASCII terminal console to a V-Series system. You need to set these parameters to the same values for both your V-Series system and the ASCII terminal.

Parameter      Setting

Baud           9600
Data bits      8
Parity         None
Stop bits      1
Flow control   None

Note: See your terminal documentation for information about changing the ASCII console terminal settings.
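As an illustrative sketch only (not part of the NetApp procedure): on a Linux or UNIX administration host, the settings in the table above (9600 baud, 8 data bits, no parity, 1 stop bit, no flow control) map to stty flags. The device path /dev/ttyS0 is a placeholder; your workstation's serial port may differ.

```shell
# Map the table above to stty flags: 9600 8N1, no hardware or software flow control.
# /dev/ttyS0 is a hypothetical device path; substitute your workstation's port.
PORT="/dev/ttyS0"
SETTINGS="9600 cs8 -parenb -cstopb -crtscts -ixon -ixoff"

# Print the command rather than running it, since the device may not exist
# on the machine where this sketch is executed.
echo "stty -F $PORT $SETTINGS"
```

Terminal emulators such as screen or minicom accept the same parameters when you open the session.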

Related tasks

Connecting a V-Series stand-alone system to back-end devices on page 55

Connecting an HA pair to back-end devices on page 57


Target queue depth customization

By default, the V-Series system can send 256 commands to the target port on the storage array. For most companies the default target queue depth is appropriate, but you can change it if it is not.

The target queue depth limits the number of commands (read and write requests) that the storage array port must handle from V-Series systems and non V-Series hosts. When multiple initiators are accessing a target port on the storage array, you do not want the outstanding commands in the queue buffer, from all initiators together, to exceed what the storage array can handle. Otherwise, the performance of V-Series systems and non V-Series hosts can suffer. Storage arrays differ in the number of commands that they can handle in the queue buffer.

Non V-Series hosts also provide a means for limiting the target queue length. See the documentation for the host for information about limiting target queue length.

Note: Target queue depth might also be referred to as target queue length, Q-Depth, or Max Throttle.

Guidelines for specifying the appropriate target queue depth

You need to consider the impact of all the initiators on the storage array port when you are planning the configuration for a specific V-Series system or a specific non V-Series host.

If your deployment includes more than one initiator, you need to configure each initiator so that the total number of commands by all initiators does not exceed the maximum that the storage array port can handle.

Guidelines for specifying the appropriate target queue length are as follows:

• Do not configure a value of 0. 0 (zero) means there is no limit on the outstanding commands.
• Divide 256 by the number of V-Series systems and non V-Series hosts that will be acting as initiators on the target port on the storage array, and configure each V-Series system and non V-Series host with the resulting value.
• Consider the volume of commands that specific initiators are likely to send to the target port. You can then configure higher values for initiators likely to send a greater number of requests and lower values for initiators likely to send fewer requests.

• Configure non V-Series hosts according to the guidelines provided for those hosts.
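The division guideline above can be sketched in a few lines of Python. This is an illustration only; the 256-command figure is the V-Series default mentioned earlier, and the initiator count is a hypothetical example value — your storage array documentation determines the actual per-port command limit.

```python
# Sketch: split a storage array target port's command limit evenly across
# the initiators (V-Series systems plus non V-Series hosts) that share it.

def per_initiator_queue_depth(port_command_limit: int, initiator_count: int) -> int:
    """Return the target queue depth to configure on each initiator."""
    if initiator_count < 1:
        raise ValueError("at least one initiator is required")
    depth = port_command_limit // initiator_count
    if depth < 1:
        raise ValueError("too many initiators for this port's command limit")
    return depth

# Example: 4 initiators sharing a port that can handle 256 outstanding commands.
print(per_initiator_queue_depth(256, 4))  # each initiator gets a depth of 64
```

An even split is the simple case; as the third guideline notes, you can instead weight the per-initiator values by expected command volume, as long as the total stays within the port's limit.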

Related tasks

Setting the target queue depth on page 88


Setting the target queue depth

The default target queue depth is acceptable for most companies, but you can change it if you want to.

About this task

For Cluster-Mode, use this option through nodeshell.

Step

1. Use the following option to set the target queue depth:

options disk.target_port.cmd_queue_depth value

Related concepts

Guidelines for specifying the appropriate target queue depth on page 87


Storage array model equivalents

If you are working with storage arrays from different vendors, you might be interested in knowing the model numbers used by different vendors for the same storage array hardware.

Equivalent Hitachi, HP XPxxx, and Sun models

Hitachi   HP XP              Sun
VSP       P9500              --
USP-V     XP24000, XP20000   StorEdge 9990V
USP       XP12000            StorEdge 9990
NSC       XP10000            StorEdge 9995

Equivalent IBM, Engenio, and Sun models

IBM             Engenio   Sun
DS4700          3994      StorEdge 6140
DS4800          6998      StorEdge 6540
DS5020          --        StorEdge 6180
DS5100/DS5300   7900      StorEdge 6580/6780


Terminology comparison between storage array vendors

Different vendors sometimes use different terms to describe the same things.

The following table provides a mapping between some common vendor terms.

host group (Hitachi; IBM DS4xxx/DS5xxx; EMC DMX; HP XP), volume group (IBM DS8xxx), Storage Group (EMC CX), cluster (IBM XIV), host affinity group (Fujitsu ETERNUS4000, ETERNUS6000, ETERNUS8000, ETERNUS DX8000, ETERNUS DX400), host definition (3PAR), host (3PAR, HP EVA): A configuration entity that enables you to specify host access to ports on the storage array. You identify the FC initiator port WWNs for the V-Series systems that you want to access the LUNs; the process differs according to vendor and sometimes differs for different storage array models of the same vendor.

IBM ESS: No concept of "host group." You must create a host in the ESS user interface for each V-Series FC initiator port that you plan to connect to the storage array and map each host to a port.

parity group (IBM DS8xxx, IBM ESS, Hitachi, HP XP), RAID group (Data ONTAP, EMC CX, Fujitsu ETERNUS), array or RAID set (IBM DS4xxx/DS5xxx): The arrangement of disks in the back end that together form the defined RAID level.

Parity RAID, Parity RAID group (EMC DMX): A DMX feature that provides parity data protection on the disk device level using physical parity volumes.

disk group (HP EVA): A set of physical disks that form storage pools from which you can create virtual disks.

parity set, RAID set (3PAR): A group of parity-protected chunklets. (A chunklet is a 256-MB block of contiguous space on a physical disk.)

cluster (Data ONTAP): In Data ONTAP 8.0 Cluster-Mode and later, a cluster is a grouping of nodes that enables multiple nodes to pool their resources into a large virtual server and to distribute work across the cluster.

cluster (Hitachi, HP XP): A hardware component on the storage arrays that contains the ports to which hosts attach.

cluster (IBM XIV): An entity that groups multiple hosts together and assigns the same mapping to all the hosts.

controller (Data ONTAP): The component of a V-Series system that runs the Data ONTAP operating system and interacts with back-end storage arrays. Controllers are also sometimes called heads or CPU modules.

controller (Hitachi, HP EVA, HP XP, IBM), interface module (IBM XIV), node (3PAR), FEBE Board (EMC Symmetrix), Storage processor (SP) (EMC CLARiiON), Controller Module (Fujitsu ETERNUS): A hardware component on the storage arrays that contains the ports to which hosts attach.

LUN (many storage arrays): A grouping of one or more disks or disk partitions into one span of disk storage space. In the Data ONTAP documentation, this is referred to as an array LUN.

LUN (Data ONTAP): The V-Series system can virtualize the storage attached to it and serve the storage up as LUNs to applications and clients outside the V-Series system (for example, through iSCSI and FCP). Clients are unaware of where a front-end LUN is stored.

LUN, virtual disk (HP EVA): A virtual disk (called a Vdisk in the user interface) is a simulated disk drive created in a disk group. You can assign a combination of characteristics to a virtual disk, such as a name, redundancy level, and size. Presenting a virtual disk offers its storage to a host.

array LUN (Data ONTAP documentation, Data ONTAP storage management tools): The Data ONTAP documentation uses the term array LUN to distinguish LUNs on the storage arrays from front-end LUNs (Data ONTAP LUNs).

vLUN (3PAR): A volume-LUN pairing between a virtual volume and a logical unit number (LUN). For a host to see a virtual volume, the volume must be exported as a LUN by creating VLUNs on the storage array.

volume (IBM, IBM XIV): Equivalent to what other storage array vendors call a LUN.

volume (Data ONTAP): A logical entity that holds user data that is accessible through one or more of the access protocols supported by Data ONTAP, including Network File System (NFS), Common Internet File System (CIFS), HyperText Transfer Protocol (HTTP), Fibre Channel Protocol (FCP), and Internet SCSI (iSCSI). The V-Series system treats an IBM volume as a disk.

volume (EMC DMX): A general term referring to a storage device. A physical volume corresponds to a single disk device.

virtual volume (3PAR): A virtual storage unit created by mapping data from one or more logical disks.


Abbreviations

A list of abbreviations and their spelled-out forms is included here for your reference.

A

ABE (Access-Based Enumeration)

ACE (Access Control Entry)

ACL (access control list)

ACP (Alternate Control Path)

AD (Active Directory)

ALPA (arbitrated loop physical address)

ALUA (Asymmetric Logical Unit Access)

AMS (Account Migrator Service)

API (Application Program Interface)

ARP (Address Resolution Protocol)

ASCII (American Standard Code for Information Interchange)

ASP (Active Server Page)

ATA (Advanced Technology Attachment)

B

BCO (Business Continuance Option)

BIOS (Basic Input Output System)

BCS (block checksum type)

BLI (block-level incremental)

BMC (Baseboard Management Controller)


C

CD-ROM (compact disc read-only memory)

CDDI (Copper Distributed Data Interface)

CDN (content delivery network)

CFE (Common Firmware Environment)

CFO (controller failover)

CGI (Common Gateway Interface)

CHA (channel adapter)

CHAP (Challenge Handshake Authentication Protocol)

CHIP (Client-Host Interface Processor)

CIDR (Classless Inter-Domain Routing)

CIFS (Common Internet File System)

CIM (Common Information Model)

CLI (command-line interface)

CP (consistency point)

CPU (central processing unit)

CRC (cyclic redundancy check)

CSP (communication service provider)


D

DAFS (Direct Access File System)

DBCC (database consistency checker)

DCE (Distributed Computing Environment)

DDS (Decru Data Decryption Software)

dedupe (deduplication)

DES (Data Encryption Standard)

DFS (Distributed File System)

DHA (Decru Host Authentication)

DHCP (Dynamic Host Configuration Protocol)

DIMM (dual-inline memory module)

DITA (Darwin Information Typing Architecture)

DLL (Dynamic Link Library)

DMA (direct memory access)

DMTF (Distributed Management Task Force)

DNS (Domain Name System)

DOS (Disk Operating System)

DPG (Data Protection Guide)

DTE (Data Terminal Equipment)


E

ECC (Elliptic Curve Cryptography) or (EMC Control Center)

ECDN (enterprise content delivery network)

ECN (Engineering Change Notification)

EEPROM (electrically erasable programmable read-only memory)

EFB (environmental fault bus)

EFS (Encrypted File System)

EGA (Enterprise Grid Alliance)

EISA (Extended Industry Standard Architecture)

ELAN (Emulated LAN)

EMU (environmental monitoring unit)

ESH (embedded switching hub)

F

FAQs (frequently asked questions)

FAS (fabric-attached storage)

FC (Fibre Channel)

FC-AL (Fibre Channel-Arbitrated Loop)

FC SAN (Fibre Channel storage area network)

FC Tape SAN (Fibre Channel Tape storage area network)

FC-VI (virtual interface over Fibre Channel)

FCP (Fibre Channel Protocol)

FDDI (Fiber Distributed Data Interface)

FQDN (fully qualified domain name)

FRS (File Replication Service)

FSID (file system ID)

FSRM (File Storage Resource Manager)

FTP (File Transfer Protocol)


G

GbE (Gigabit Ethernet)

GID (group identification number)

GMT (Greenwich Mean Time)

GPO (Group Policy Object)

GUI (graphical user interface)

GUID (globally unique identifier)

H

HA (high availability)

HBA (host bus adapter)

HDM (Hitachi Device Manager Server)

HP (Hewlett-Packard Company)

HTML (hypertext markup language)

HTTP (Hypertext Transfer Protocol)


I

IB (InfiniBand)

IBM (International Business Machines Corporation)

ICAP (Internet Content Adaptation Protocol)

ICP (Internet Cache Protocol)

ID (identification)

IDL (Interface Definition Language)

ILM (information lifecycle management)

IMS (If-Modified-Since)

I/O (input/output)

IP (Internet Protocol)

IP SAN (Internet Protocol storage area network)

IQN (iSCSI Qualified Name)

iSCSI (Internet Small Computer System Interface)

ISL (Inter-Switch Link)

iSNS (Internet Storage Name Service)

ISP (Internet storage provider)

J

JBOD (just a bunch of disks)

JPEG (Joint Photographic Experts Group)

K

KB (Knowledge Base)

Kbps (kilobits per second)

KDC (Kerberos Distribution Center)


L

LAN (local area network)

LBA (Logical Block Access)

LCD (liquid crystal display)

LDAP (Lightweight Directory Access Protocol)

LDEV (logical device)

LED (light emitting diode)

LFS (log-structured file system)

LKM (Lifetime Key Management)

LPAR (system logical partition)

LRC (Loop Resiliency Circuit)

LREP (logical replication tool utility)

LUN (logical unit number)

LUSE (Logical Unit Size Expansion)

LVM (Logical Volume Manager)


M

MAC (Media Access Control)

Mbps (megabits per second)

MCS (multiple connections per session)

MD5 (Message Digest 5)

MDG (managed disk group)

MDisk (managed disk)

MIB (Management Information Base)

MIME (Multipurpose Internet Mail Extension)

MMC (Microsoft Management Console)

MMS (Microsoft Media Streaming)

MPEG (Moving Picture Experts Group)

MPIO (multipath network input/output)

MRTG (Multi-Router Traffic Grapher)

MSCS (Microsoft Cluster Service)

MSDE (Microsoft SQL Server Desktop Engine)

MTU (Maximum Transmission Unit)


N

NAS (network-attached storage)

NDMP (Network Data Management Protocol)

NFS (Network File System)

NHT (NetApp Health Trigger)

NIC (network interface card)

NMC (Network Management Console)

NMS (network management station)

NNTP (Network News Transport Protocol)

NTFS (New Technology File System)

NTLM (NetLanMan)

NTP (Network Time Protocol)

NVMEM (nonvolatile memory management)

NVRAM (nonvolatile random-access memory)

O

OFM (Open File Manager)

OFW (Open Firmware)

OLAP (Online Analytical Processing)

OS/2 (Operating System 2)

OSMS (Open Systems Management Software)

OSSV (Open Systems SnapVault)


P

PC (personal computer)

PCB (printed circuit board)

PCI (Peripheral Component Interconnect)

pcnfsd (storage daemon)

(PC)NFS (Personal Computer Network File System)

PDU (protocol data unit)

PKI (Public Key Infrastructure)

POP (Post Office Protocol)

POST (power-on self-test)

PPN (physical path name)

PROM (programmable read-only memory)

PSU (power supply unit)

PVC (permanent virtual circuit)

Q

QoS (Quality of Service)

QSM (Qtree SnapMirror)


R

RAD (report archive directory)

RADIUS (Remote Authentication Dial-In User Service)

RAID (redundant array of independent disks)

RAID-DP (redundant array of independent disks, double-parity)

RAM (random access memory)

RARP (Reverse Address Resolution Protocol)

RBAC (role-based access control)

RDB (replicated database)

RDMA (Remote Direct Memory Access)

RIP (Routing Information Protocol)

RISC (Reduced Instruction Set Computer)

RLM (Remote LAN Module)

RMC (remote management controller)

ROM (read-only memory)

RPM (revolutions per minute)

rsh (Remote Shell)

RTCP (Real-time Transport Control Protocol)

RTP (Real-time Transport Protocol)

RTSP (Real Time Streaming Protocol)


S

SACL (system access control list)

SAN (storage area network)

SAS (storage area network attached storage) or (serial-attached SCSI)

SATA (serial advanced technology attachment)

SCSI (Small Computer System Interface)

SFO (storage failover)

SFSR (Single File SnapRestore operation)

SID (Secure ID)

SIMM (single inline memory module)

SLB (Server Load Balancer)

SLP (Service Location Protocol)

SNMP (Simple Network Management Protocol)

SNTP (Simple Network Time Protocol)

SP (Storage Processor)

SPN (service principal name)

SPOF (single point of failure)

SQL (Structured Query Language)

SRM (Storage Resource Management)

SSD (solid state disk)

SSH (Secure Shell)

SSL (Secure Sockets Layer)

STP (shielded twisted pair)

SVC (switched virtual circuit)


T

TapeSAN (tape storage area network)

TCO (total cost of ownership)

TCP (Transmission Control Protocol)

TCP/IP (Transmission Control Protocol/Internet Protocol)

TOE (TCP offload engine)

TP (twisted pair)

TSM (Tivoli Storage Manager)

TTL (Time To Live)

U

UDP (User Datagram Protocol)

UI (user interface)

UID (user identification number)

Ultra ATA (Ultra Advanced Technology Attachment)

UNC (Uniform Naming Convention)

UPS (uninterruptible power supply)

URI (universal resource identifier)

URL (uniform resource locator)

USP (Universal Storage Platform)

UTC (Universal Coordinated Time)

UTP (unshielded twisted pair)

UUID (universal unique identifier)

UWN (unique world wide number)


V

VCI (virtual channel identifier)

VCMDB (Volume Configuration Management Database)

VDI (Virtual Device Interface)

VDisk (virtual disk)

VDS (Virtual Disk Service)

VFM (Virtual File Manager)

VFS (virtual file system)

VI (virtual interface)

vif (virtual interface)

VIRD (Virtual Router ID)

VLAN (virtual local area network)

VLD (virtual local disk)

VOD (video on demand)

VOIP (voice over IP)

VRML (Virtual Reality Modeling Language)

VTL (Virtual Tape Library)

W

WAFL (Write Anywhere File Layout)

WAN (wide area network)

WBEM (Web-Based Enterprise Management)

WHQL (Windows Hardware Quality Lab)

WINS (Windows Internet Name Service)

WORM (write once, read many)

WWN (worldwide name)

WWNN (worldwide node name)

WWPN (worldwide port name)

www (worldwide web)


Z

ZCS (zoned checksum)


Copyright information

Copyright © 1994–2011 NetApp, Inc. All rights reserved. Printed in the U.S.A.

No part of this document covered by copyright may be reproduced in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system—without prior written permission of the copyright owner.

Software derived from copyrighted NetApp material is subject to the following license and disclaimer:

THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp.

The product described in this manual may be protected by one or more U.S.A. patents, foreign patents, or pending applications.

RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).


Trademark information

NetApp, the NetApp logo, Network Appliance, the Network Appliance logo, Akorri, ApplianceWatch, ASUP, AutoSupport, BalancePoint, BalancePoint Predictor, Bycast, Campaign Express, ComplianceClock, Cryptainer, CryptoShred, Data ONTAP, DataFabric, DataFort, Decru, Decru DataFort, DenseStak, Engenio, Engenio logo, E-Stack, FAServer, FastStak, FilerView, FlexCache, FlexClone, FlexPod, FlexScale, FlexShare, FlexSuite, FlexVol, FPolicy, GetSuccessful, gFiler, Go further, faster, Imagine Virtually Anything, Lifetime Key Management, LockVault, Manage ONTAP, MetroCluster, MultiStore, NearStore, NetCache, NOW (NetApp on the Web), Onaro, OnCommand, ONTAPI, OpenKey, PerformanceStak, RAID-DP, ReplicatorX, SANscreen, SANshare, SANtricity, SecureAdmin, SecureShare, Select, Service Builder, Shadow Tape, Simplicity, Simulate ONTAP, SnapCopy, SnapDirector, SnapDrive, SnapFilter, SnapLock, SnapManager, SnapMigrator, SnapMirror, SnapMover, SnapProtect, SnapRestore, Snapshot, SnapSuite, SnapValidator, SnapVault, StorageGRID, StoreVault, the StoreVault logo, SyncMirror, Tech OnTap, The evolution of storage, Topio, vFiler, VFM, Virtual File Manager, VPolicy, WAFL, Web Filer, and XBB are trademarks or registered trademarks of NetApp, Inc. in the United States, other countries, or both.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. A complete and current list of other IBM trademarks is available on the Web at www.ibm.com/legal/copytrade.shtml.

Apple is a registered trademark and QuickTime is a trademark of Apple, Inc. in the U.S.A. and/or other countries. Microsoft is a registered trademark and Windows Media is a trademark of Microsoft Corporation in the U.S.A. and/or other countries. RealAudio, RealNetworks, RealPlayer, RealSystem, RealText, and RealVideo are registered trademarks and RealMedia, RealProxy, and SureStream are trademarks of RealNetworks, Inc. in the U.S.A. and/or other countries.

All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.

NetApp, Inc. is a licensee of the CompactFlash and CF Logo trademarks.

NetApp, Inc. NetCache is certified RealSystem compatible.


How to send your comments

You can help us to improve the quality of our documentation by sending us your feedback.

Your feedback is important in helping us to provide the most accurate and high-quality information. If you have suggestions for improving this document, send us your comments by e-mail to [email protected]. To help us direct your comments to the correct division, include in the subject line the product name, version, and operating system.

You can also contact us in the following ways:

• NetApp, Inc., 495 East Java Drive, Sunnyvale, CA 94089
• Telephone: +1 (408) 822-6000
• Fax: +1 (408) 822-4501
• Support Telephone: +1 (888) 4-NETAPP


Index

A

aggregates
    determining array LUNs for 39
    mixing storage types in 39, 41

array LUN groups
    defined 28
    valid pathing examples
        one 2-port array 33
        one 4-port array 34

array LUNs
    assignment changes 19
    availability for Data ONTAP storage 18
    availability for host use 17
    checksum type guidelines 19
    mixing storage types in aggregates 39, 41
    names
        changes in Data ONTAP displays 32
        format 30–32
        paths reflected in 30
    number per V-Series system, minimum 20
    partitioning the load over connections 27
    paths to
        advantages of four 27
        how reflected in names 30
        number required 26
        planning 25
        valid setup examples 33
    provisioning considerations 20
    requirements for a V-Series system 17
    result of link failure 35
    root volume minimum 21
    size
        information location 21
        LUNs that do not meet 22
        maximum 21
        minimum 21
    usable space in 21
    when to check paths to 26

ASCII terminal
    settings for connecting to 85

B

back-end configuration
    errors, troubleshooting 62
    validating for 8.1 Cluster-Mode and later 61
back-end configuration errors
    displaying 62
    storage array configuration errors
        displaying 62
back-end devices
    connecting to V-Series systems 55
block checksums
    guidelines for setting 19
    space taken in array LUN 21

C

checksums
    impact on usable space in a LUN 21
    setting for array LUNs 19
commands
    storage array config show 61
    storage array show-config 65–67
    storage array show-luns
        output example 66, 67
    storage errors show 61, 62

configurations
    direct-attached 8
    fabric-attached 8
connecting ports
    planning for 53
    redundancy requirement 53
connecting to storage arrays
    direct-attached 8
    fabric-attached 8
    guidelines 53
    supported methods 8
connections
    redundancy requirements for 53
controllers on storage array
    specifying ports on 53
core dump file
    contents of 22
    space requirement in array LUNs 22

D

Data ONTAP RAID groups
    See RAID groups (Data ONTAP)
Data ONTAP storage
    array LUN availability 18
direct-attached configurations 8
disk ownership
    defined 18
    of array LUNs 18
    planning for 18
downgrading
    displaying back-end configuration errors 62

F

fabric-attached configurations 8
FC initiator ports
    labeling guidelines 54
    usage guidelines 54

H

host groups
    defined 17
    what to include in 17

I

implementation stages when using third-party storage 13, 14

initiator ports
    FC labeling guidelines 54
    FC usage guidelines 54
installation
    validating for 8.1 Cluster-Mode 61
    validating for 8.x 7-Mode 65

L

labeling guidelines
    FC initiator ports 54
LDEV
    defined 17
    inconsistent LUN IDs detected 62
    when created 17
link failures
    Data ONTAP response to 35
    in primary path, result of 36, 37
load partitioning
    how it is done 27

LUN groups
    defined 28
    load partitioning with 27
    multiple, example 28
LUN limit
    maximum for neighborhoods 48
LUN ownership
    planning for 18
LUN security
    defined 23
    methods 23, 24
    planning for on storage arrays 23
    quick start setup 7-Mode 77
    requirement for 23, 24
LUNs 22

N

names of array LUNs
    changes in Data ONTAP displays 32
    format
        paths reflected in 30
native disks on a V-Series system
    planning for 11–13
neighborhood maximum LUN limit
    defined 48
neighborhoods
    described 47
    determining whether to use 47
    establishing
        Data ONTAP requirements 50
        requirements 50
        storage array requirements 50
        switch configuration requirements 51
    limits
        maximum LUNs 48
        what impacts 50
    requirements and limits 47

O

owning array LUNs 18

P

paths to array LUNs
    advantages of four 27
    defined 25
    how reflected in names 30
    invalid setup examples
        alternate paths are not configured 70
        too many paths to an array 69
    redundancy requirements 25, 26
    valid setup examples 33

platform maximum assigned device limit
    defined 49

provisioning array LUNs
    considerations 20

Q

quick start for 7-Mode
    V-Series installation
        adding additional LUNs 81
        assigning array LUNs 77
        example configuration 73
        LUN security setup 77
        pre-installation 74
        software and licenses 79
        switch setup 76
        testing 80

R

RAID 0
    how Data ONTAP uses 15
RAID groups (Data ONTAP)
    LUN number implications 15
    LUN size implications 15
RAID implementation
    planning for 15
RAID protection
    which device provides 15
RAID types Data ONTAP supports
    for array LUNs 15
    for disks 15
redundancy requirements
    to array LUNs 25

root volume
    array LUN size minimum 21

S

size
    supported for array LUNs 21
spare core array LUN
    when needed 22
storage
    mixing in aggregates 39
storage array config show command
    use in validating configuration 61
storage array show-config command
    use to check number of paths 65
storage array show-luns command
    use to check number of paths
        Data ONTAP 8.0x and 8.1 7-Mode 65
storage arrays
    connecting to
        methods 8
    location of supported arrays 11
    multiple behind a V-Series system 8
    planning LUN security for 23
    specifying controller ports 53
storage errors show command
    errors detected 62
    use in validating configuration
        8.1 Cluster-Mode and later 61
storage types
    mixing in aggregates 39
Support Matrix
    planning information
        V-Series Support Matrix 11
switches
    location of supported switches 11
    setting up 76
    zoning in a V-Series configuration 43

T

target queue depth
    defined 87
    guidelines for specifying 87
    setting 88
technology overview
    V-Series systems 7
terminology
    family 39
    host group 17
    LDEV 17
    LUN group 28
    RAID group 15
    storage array vendors 91
third-party storage
    implementation stages with V-Series 13, 14
third-party storage arrays
    requirement for dedicated 9
troubleshooting back-end configuration errors 62
troubleshooting V-Series configurations
    invalid pathing
        no alternate paths 70
        too many paths 69


U

usable space in an array LUN 21

V

V-Series configuration
    redundancy requirements 25
V-Series implementation
    planning tasks for 11–13
V-Series neighborhoods
    described 47
V-Series Support Matrix 11
V-Series systems
    array LUN requirements 17
    connecting to back-end devices 55
    how they use storage 7
    minimum number of array LUNs for 20
    multiple storage arrays behind 8
valid pathing examples
    one 2-port array LUN group 33
    one 4-port array LUN group 34
validating an 8.1 Cluster-Mode installation 61
validating an 8.x 7-Mode installation 65

Z

zoning in a V-Series configuration
    examples of 45
    guidelines 43
    recommended zoning type 44
    requirements 43
