What’s new in vSphere 5 VMware Cloud Infrastructure

VMware vSphere 5 seminar


Powerpoint presentation of Raido's VMware vSphere 5 seminar on November 10, 2011


Page 1: VMware vSphere 5 seminar

What’s new in vSphere 5

VMware Cloud Infrastructure

Page 2: VMware vSphere 5 seminar

2010

[Diagram: the 2010 stack – vCloud Director, vShield Security and vCenter Management running on multiple vSphere hosts]

Page 3: VMware vSphere 5 seminar

2011

Cloud Infrastructure Launch

[Diagram: the 2011 stack – new releases of vCloud Director, vShield Security, vCenter Management and vSphere across the cloud infrastructure]
Page 4: VMware vSphere 5 seminar
Page 5: VMware vSphere 5 seminar

Agenda

• vSphere 5.0 Platform

• vSphere 5.0 Networking

• vSphere 5.0 Availability

• vSphere 5.0 vMotion, DRS/DPM

• vCenter Server 5.0

• vSphere 5.0 vStorage

• vSphere 5.0 Storage Appliance (VSA)

• VMware vCenter Site Recovery Manager v5.0

Page 6: VMware vSphere 5 seminar

vSphere 5.0 – Platform

•  Platform Enhancements

•  ESXi Firewall

•  Image Builder

•  Auto Deploy

Page 7: VMware vSphere 5 seminar

New Virtual Machine Features
§ vSphere 5.0 supports the industry's most capable VMs

VM Scalability
•  32 virtual CPUs per VM
•  1TB RAM per VM
•  4x previous capabilities!

Broader Device Coverage
•  Client-connected USB devices
•  USB 3.0 devices
•  Smart Card Readers for VM Console Access
•  VM BIOS boot order config API and PowerCLI interface
•  EFI BIOS

Richer Desktop Experience
•  3D graphics

Other new features
•  UI for multi-core virtual CPUs
•  Extended VMware Tools compatibility
•  Support for Mac OS X servers

Items which require HW version 8 in blue
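As a hedged PowerCLI illustration (the vCenter address, host, datastore and sizing values are placeholders, not taken from the slides), creating a virtual machine at hardware version 8 — the level required for the 32-vCPU / 1TB figures above — might look like this:

```powershell
# Hedged sketch; names and sizes are placeholders.
# Hardware version 8 ("v8") is what unlocks the new 32 vCPU / 1TB RAM maximums.
Connect-VIServer "vcenter.lab.local"
New-VM -Name "bigvm01" -VMHost "esx01.lab.local" -Datastore "Datastore1" `
       -NumCpu 16 -MemoryMB (128 * 1024) -Version v8
```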

Page 8: VMware vSphere 5 seminar

vSphere 5.0 – Platform

•  Platform Enhancements

•  ESXi Firewall

•  Image Builder

•  Auto Deploy

•  vSphere Update Manager

Page 9: VMware vSphere 5 seminar

ESXi 5.0 Firewall Features
§ Capabilities
•  ESXi 5.0 has a new firewall engine which is not based on iptables.
•  The firewall is service oriented and is a stateless firewall.
•  Users can restrict access to specific services based on IP address/subnet mask.

§ Management
•  The GUI for configuring the firewall on ESXi 5.0 is similar to that used with the classic ESX firewall — customers familiar with the classic ESX firewall should have no difficulty using the ESXi 5.0 version.
•  There is a new esxcli interface (esxcfg-firewall is deprecated in ESXi 5.0).
•  There is Host Profile support for the ESXi 5.0 firewall.
•  Customers who upgrade from classic ESX to ESXi 5.0 will have their firewall settings preserved.
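As a hedged sketch of the new esxcli interface (the host name is a placeholder; the ESXi shell equivalents are shown as comments), the firewall rulesets can be inspected from PowerCLI via Get-EsxCli:

```powershell
# Hedged sketch; host name is a placeholder.
# Shell equivalents (assumed syntax):
#   esxcli network firewall ruleset list
#   esxcli network firewall ruleset set --ruleset-id sshServer --enabled true
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.lab.local")
# List all firewall rulesets (services) and whether each is enabled.
$esxcli.network.firewall.ruleset.list()
```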

Page 10: VMware vSphere 5 seminar

UI: Security Profile § The ESXi Firewall can be managed via the vSphere client.

§ Through the Configuration > Security Profile, one can observe the Enabled Incoming/Outgoing Services, the Opened Port List for each service & the Allowed IP List for each service.

Page 11: VMware vSphere 5 seminar

UI: Security Profile > Services > Properties
§ Through the Service Properties, one can configure whether a service should be started automatically.

§ Services can also be stopped & started on the fly.

Page 12: VMware vSphere 5 seminar

vSphere 5.0 – Platform

•  Platform Enhancements

•  ESXi Firewall

•  Image Builder

•  Auto Deploy

Page 13: VMware vSphere 5 seminar

Composition of an ESXi Image

Core Hypervisor

CIM Providers

Plug-in Components

Drivers

Page 14: VMware vSphere 5 seminar

ESXi Image Deployment
§ Challenges
•  The standard ESXi image from the VMware download site is sometimes limited:
   •  Doesn't have all drivers or CIM providers for specific hardware
   •  Doesn't contain vendor-specific plug-in components

[Diagram: a standard ESXi ISO with base providers and base drivers, missing a CIM provider and a driver needed by the target hardware]

Page 15: VMware vSphere 5 seminar

Building an Image

Start a PowerCLI session on a Windows host with PowerCLI and the Image Builder snap-in.

Page 16: VMware vSphere 5 seminar

Building an Image

Activate the Image Builder snap-in.

Page 17: VMware vSphere 5 seminar

Building an Image

Connect to depot(s) containing ESXi VIBs, driver VIBs and OEM VIBs; Image Builder works with these to build an Image Profile.

Page 18: VMware vSphere 5 seminar

Building an Image

Clone and modify an existing Image Profile.

Page 19: VMware vSphere 5 seminar

Building an Image

Generate the new image from the Image Profile – either an ISO image or a PXE-bootable image.
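A minimal Image Builder sketch in PowerCLI, assuming placeholder depot paths, profile and VIB names (none of these are from the slides) — add depots, clone the stock profile, add the missing driver VIB, then export the result:

```powershell
# Hedged sketch; depot locations, profile and package names are placeholders.
Add-EsxSoftwareDepot "C:\depots\VMware-ESXi-5.0.0-depot.zip"
Add-EsxSoftwareDepot "C:\depots\oem-driver-depot.zip"

# Clone the stock profile and add the missing driver VIB to the clone.
$base   = Get-EsxImageProfile -Name "ESXi-5.0.0-*-standard" | Select-Object -First 1
$custom = New-EsxImageProfile -CloneProfile $base -Name "ESXi50-Custom" -Vendor "Lab"
Add-EsxSoftwarePackage -ImageProfile $custom -SoftwarePackage "net-example-driver"

# Export an installable ISO (an offline bundle export is also possible).
Export-EsxImageProfile -ImageProfile $custom -ExportToIso -FilePath "C:\images\ESXi50-Custom.iso"
```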

Page 20: VMware vSphere 5 seminar

vSphere 5.0 – Platform

•  Platform Enhancements

•  ESXi Firewall

•  Image Builder

•  Auto Deploy

Page 21: VMware vSphere 5 seminar

vSphere 5.0 – Auto Deploy

Overview
•  Deploy and patch vSphere hosts in minutes using a new "on the fly" model
•  Coordination with vSphere Host Profiles

Benefits
•  Rapid provisioning: initial deployment and patching of hosts
•  Centralized host and image management
•  Reduced manual deployment and patch processes

[Diagram: vCenter Server with Auto Deploy, Host Profiles and Image Profiles provisioning multiple vSphere hosts]

Page 22: VMware vSphere 5 seminar

Deploying a Datacenter Has Just Gotten Much Easier

Before: about 30 minutes per host, repeated 37 more times – total time roughly 20 hours.

After (with Auto Deploy): total time roughly 10 minutes.

Page 23: VMware vSphere 5 seminar

Auto Deploy Example – Initial Boot: provision a new host

[Diagram: the Auto Deploy server ("Waiter" plus Rules Engine), vCenter Server with Image Profiles and Host Profiles, and the TFTP/DHCP infrastructure]

Page 24: VMware vSphere 5 seminar

Auto Deploy Example – Initial Boot: 1) PXE boot the server

[Diagram: the new host sends a DHCP request and receives a gPXE image via TFTP]

Page 25: VMware vSphere 5 seminar

Auto Deploy Example – Initial Boot: 2) Contact the Auto Deploy server

[Diagram: the host sends an HTTP boot request to the Auto Deploy "Waiter"]

Page 26: VMware vSphere 5 seminar

Auto Deploy Example – Initial Boot: 3) Determine Image Profile, Host Profile and cluster

[Diagram: the Rules Engine matches the host to Image Profile X, Host Profile 1 and Cluster B]

Page 27: VMware vSphere 5 seminar

Auto Deploy Example – Initial Boot: 4) Push the image to the host and apply the Host Profile

[Diagram: the selected Image Profile and Host Profile are pushed to the host and cached]

Page 28: VMware vSphere 5 seminar

Auto Deploy Example – Initial Boot: 5) Place the host into the cluster

[Diagram: the host is added to Cluster B in vCenter Server]
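A hedged PowerCLI sketch of the rule that drives steps 3–5 (the profile, cluster and IP range names are placeholders): a deploy rule maps hosts matched by a pattern to an Image Profile, a Host Profile and a cluster.

```powershell
# Hedged sketch; names and the IP pattern are placeholders.
$img = Get-EsxImageProfile -Name "ESXi50-Custom"
$hp  = Get-VMHostProfile -Name "GoldHostProfile"
$cl  = Get-Cluster -Name "Cluster B"

# Hosts booting from this IP range get this image, host profile and cluster.
New-DeployRule -Name "ClusterB-Rack" -Item $img, $hp, $cl -Pattern "ipv4=192.168.42.10-192.168.42.100"
Add-DeployRule -DeployRule "ClusterB-Rack"
```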

Page 29: VMware vSphere 5 seminar

vSphere 5.0 – Networking

•  LLDP

•  NetFlow

•  Port Mirror

•  NETIOC – New Traffic Types

Page 30: VMware vSphere 5 seminar

What Is a Discovery Protocol? (Link Layer Discovery Protocol)
§ A discovery protocol is a data link layer network protocol used to discover the capabilities of network devices.

§ A discovery protocol allows customers to automate the deployment process in a complex environment through its ability to:
•  Discover capabilities of network devices
•  Discover configuration of neighboring infrastructure

§ vSphere infrastructure supports the following discovery protocols:
•  CDP (standard vSwitches & Distributed vSwitches)
•  LLDP (Distributed vSwitches)

§ LLDP is a standards-based, vendor-neutral discovery protocol (802.1AB).

Page 31: VMware vSphere 5 seminar

LLDP Neighbour Info § Sample output using LLDPD Utility

Page 32: VMware vSphere 5 seminar

vSphere 5.0 – Networking

•  LLDP

•  NetFlow

•  Port Mirror

•  NETIOC – New Traffic Types

Page 33: VMware vSphere 5 seminar

What Is NetFlow?
§ NetFlow is a networking protocol that collects IP traffic information as records and sends them to third-party collectors such as CA NetQoS, NetScout, etc.

§ The collector/analyzer reports on information such as:
•  Current top flows consuming the most bandwidth
•  Which flows are behaving irregularly
•  Number of bytes a particular flow has sent and received in the past 24 hours

[Diagram: VMs on a vDS; the host exports a NetFlow session over a trunk through the physical switch to the collector]

Page 34: VMware vSphere 5 seminar

NetFlow with Third-Party Collectors

[Diagram: internal and external flows on the host's vDS are exported via a NetFlow session to external collectors such as NetScout nGenius and CA NetQoS]

Page 35: VMware vSphere 5 seminar

vSphere 5.0 Networking

•  LLDP

•  NetFlow

•  Port Mirror

•  NETIOC – New Traffic Types

Page 36: VMware vSphere 5 seminar

What Is Port Mirroring (DVMirror)?
§ Port Mirroring is the capability of a network switch to send a copy of network packets seen on a switch port to a network monitoring device connected to another switch port.

§ Port Mirroring is also referred to as SPAN (Switched Port Analyzer) on Cisco switches.

§ Port Mirroring overcomes the limitations of promiscuous mode by providing granular control over which traffic can be monitored:
•  Ingress source
•  Egress source

§ Helps in troubleshooting network issues by providing access to:
•  Inter-VM traffic
•  Intra-VM traffic

Page 37: VMware vSphere 5 seminar

Port Mirror Traffic Flow When the Mirror Destination Is a VM

[Diagram: four cases – ingress source and egress source, with the mirror destination either a VM on the vDS or an external system; the legend distinguishes mirror flow, VM traffic, inter-VM traffic and intra-VM traffic]

Page 38: VMware vSphere 5 seminar

vSphere 5.0 Networking

•  LLDP

•  NetFlow

•  Port Mirror

•  NETIOC – New Traffic Types

Page 39: VMware vSphere 5 seminar

What Is Network I/O Control (NETIOC)?
§ Network I/O Control is a traffic management feature of the vSphere Distributed Switch (vDS).

§ In consolidated I/O (10GbE) deployments, this feature allows customers to:
•  Allocate shares and limits to different traffic types
•  Provide isolation – one traffic type should not dominate others
•  Guarantee service levels when different traffic types compete

§ Enhanced Network I/O Control – vSphere 5.0 builds on previous versions of the Network I/O Control feature by providing:
•  User-defined network resource pools
•  A new Host Based Replication (HBR) traffic type
•  QoS (802.1p) tagging

Page 40: VMware vSphere 5 seminar

NETIOC VM Groups

[Diagram: a vNetwork Distributed Switch with 2 × 10GbE uplinks (total bandwidth 20 Gb); Network I/O Control divides traffic into VM, vMotion, iSCSI, FT, NFS, HBR and user-defined resource pools, with VM traffic further split into resource groups VMRG1, VMRG2 and VMRG3]

Page 41: VMware vSphere 5 seminar

NETIOC Scheduling and Example Shares

[Diagram: traffic types (Mgmt, NFS, iSCSI, vMotion, FT, HBR, and VM traffic including the Coke VM and Pepsi VMs) pass through the vNetwork Distributed Switch scheduler and shaper on each vNetwork Distributed Portgroup; shares are enforced per uplink, limits per team, with Load Based Teaming as the teaming policy]

Example resource allocation:

Traffic    Shares   Limit (Mbps)   802.1p
vMotion    5        150            1
Mgmt       30       --             --
NFS        10       250            --
iSCSI      10       --             2
FT         60       --             --
HBR        10       --             --
VM         20       2000           4
  Pepsi    5        --             --
  Coke     15       --             --

Page 42: VMware vSphere 5 seminar

vSphere 5.0 – Availability

Page 43: VMware vSphere 5 seminar

vSphere HA Primary Components
§ Every host runs an agent
•  Referred to as 'FDM' or Fault Domain Manager
•  One of the agents within the cluster is chosen to assume the role of the Master
•  There is only one Master per cluster during normal operations
•  All other agents assume the role of Slaves

§ There is no longer a Primary/Secondary concept with vSphere HA

[Diagram: vCenter managing a four-host cluster (ESX 01–04)]

Page 44: VMware vSphere 5 seminar

The Master Role
§ An FDM Master monitors:
•  ESX host and virtual machine availability.
•  All Slave hosts. Upon a Slave host failure, protected VMs on that host will be restarted.
•  The power state of all the protected VMs. Upon failure of a protected VM, the Master will restart it.

§ An FDM Master manages:
•  The list of hosts that are members of the cluster, updating this list as hosts are added to or removed from the cluster.
•  The list of protected VMs. The Master updates this list after each user-initiated power-on or power-off.

Page 45: VMware vSphere 5 seminar

The Slave Role
§ A Slave monitors the runtime state of its locally running VMs and forwards any significant state changes to the Master.

§ It implements vSphere HA features that do not require central coordination, most notably VM Health Monitoring.

§ It monitors the health of the Master. If the Master should fail, it participates in the election process for a new Master.

§ It maintains the list of powered-on VMs.

Page 46: VMware vSphere 5 seminar

Storage Level Communications
§ One of the most exciting new features of vSphere HA is its ability to use the storage subsystem for communication.

§ The datastores used for this are referred to as 'Heartbeat Datastores'.

§ This provides increased communication redundancy.

§ Heartbeat datastores are used as a communication channel only when the management network is lost – such as in the case of isolation or network partitioning.

Page 47: VMware vSphere 5 seminar

Storage Level Communications
§ Heartbeat Datastores allow a Master to:
•  Monitor the availability of Slave hosts and the VMs running on them.
•  Determine whether a host has become network isolated rather than network partitioned.
•  Coordinate with other Masters – since a VM can be owned by only one Master, Masters coordinate VM ownership through datastore communication.

§ By default, vCenter will automatically pick two heartbeat datastores. These datastores can also be selected by the user.
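A minimal hedged PowerCLI example (the cluster name is a placeholder): enabling vSphere HA on a cluster deploys the FDM agent to every host, after which a Master is elected and the heartbeat datastores are selected automatically.

```powershell
# Hedged sketch; the cluster name is a placeholder.
Set-Cluster -Cluster "Cluster B" -HAEnabled:$true -Confirm:$false
```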

Page 48: VMware vSphere 5 seminar

vSphere 5.0 – vMotion, DRS/DPM

Page 49: VMware vSphere 5 seminar

vSphere 5.0 – vMotion
§ The original vMotion keeps getting better!

§ Multi-NIC Support
•  Support for up to four 10Gbps or sixteen 1Gbps NICs (each NIC must have its own IP).
•  A single vMotion can now scale over multiple NICs (load balanced across all of them).
•  Faster vMotion times allow for a higher number of concurrent vMotions.

§ Reduced Application Overhead
•  The Slowdown During Page Send (SDPS) feature throttles busy VMs to reduce timeouts and improve success rates.
•  Ensures less than one second of switchover time in almost all cases.

§ Support for higher-latency networks (up to ~10ms)
•  Extends vMotion capabilities over slower networks.
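A hedged PowerCLI sketch of the multi-NIC setup (switch, port group names and IPs are placeholders): each vMotion-enabled vmkernel port gets its own IP, and a single vMotion is then load balanced across all of them.

```powershell
# Hedged sketch; names and addresses are placeholders.
$vmhost = Get-VMHost "esx01.lab.local"

New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "vSwitch1" -PortGroup "vMotion-A" `
    -IP 10.0.1.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "vSwitch1" -PortGroup "vMotion-B" `
    -IP 10.0.2.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true
```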

Page 50: VMware vSphere 5 seminar

Multi-NIC Throughput

[Chart: vMotion throughput (Gbps) with one, two and three NICs*]

* Limited by throughput of the PCI-E bus in this particular setup.

Page 51: VMware vSphere 5 seminar

vSphere 5.0 – DRS/DPM
§ DRS/DPM improvements focus on cross-product integration.
•  Introduces support for "Agent VMs."
•  An Agent VM is a special-purpose VM tied to a specific ESXi host.
•  An Agent VM cannot / should not be migrated by DRS or DPM.
•  Special handling of Agent VMs is now afforded by DRS & DPM.

§ A DRS/DPM cluster hosting Agent VMs:
•  Accounts for Agent VM reservations (even when powered off).
•  Waits for Agent VMs to be powered on and ready before placing client VMs.
•  Will not try to migrate an Agent VM (Agent VMs are pinned to their host).

§ Maintenance Mode / Standby Mode Support
•  Agent VMs do not have to be evacuated for the host to enter maintenance or standby mode.
•  When a host enters maintenance/standby mode, Agent VMs are powered off (after client VMs are evacuated).
•  When a host exits maintenance/standby mode, Agent VMs are powered on (before client VMs are placed).

Page 52: VMware vSphere 5 seminar

vSphere 5.0 – vCenter Server

Page 53: VMware vSphere 5 seminar

vSphere Web Client Architecture

[Diagram: the vSphere Web Client (Flex client) runs within a browser and talks to an application server that provides a scalable back end; the Query Service obtains live data from the core vCenter Server process, with vCenter in either single or Linked Mode operation]

Page 54: VMware vSphere 5 seminar

Extension Points

[Screenshot: Web Client extension points – launchbar, sidebar extensions, inventory objects, tabs, portlets, custom actions and right-click extensions]

Page 55: VMware vSphere 5 seminar

Features of the vSphere Web Client § Ready Access to Common Actions

• Quick access to common tasks provided out of the box

Page 56: VMware vSphere 5 seminar

Introducing the vCenter Server Appliance
§ The vCenter Server Appliance is the answer!
•  Simplifies deployment and configuration
•  Streamlines patching and upgrades
•  Reduces the TCO for vCenter

§ Enables companies to respond to the business faster!

[Diagram: VMware vCenter Server delivered as a virtual appliance – automation, scalability, visibility]
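As one hedged way to deploy the appliance from PowerCLI (paths and names below are placeholders; the appliance can equally be deployed through the vSphere Client's OVF import wizard):

```powershell
# Hedged sketch; file path, host, datastore and VM name are placeholders.
Import-VApp -Source "C:\vcsa\VMware-vCenter-Server-Appliance.ovf" `
    -VMHost (Get-VMHost "esx01.lab.local") -Datastore (Get-Datastore "Datastore1") `
    -Name "vcsa01" -DiskStorageFormat Thin
```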

Page 57: VMware vSphere 5 seminar

Component Overview
§ The vCenter Server Appliance (VCSA) consists of:
•  A pre-packaged 64-bit application running on SLES 11
   •  Distributed with sparse disks
   •  Disk footprint: 3.6GB distribution, ~5GB minimum deployed, ~80GB maximum deployed
   •  Memory footprint
•  A built-in enterprise-level database, with optional support for a remote Oracle database
   •  Limits are the same for vCenter Server and the VCSA:
      •  Embedded DB: 5 hosts / 50 VMs
      •  External DB: <300 hosts / <3000 VMs (64-bit)
•  A web-based configuration interface

Page 58: VMware vSphere 5 seminar

Feature Overview
§ The vCenter Server Appliance supports:
•  The vSphere Web Client
•  Authentication through AD and NIS
•  Feature parity with vCenter Server on Windows, except:
   •  No Linked Mode support – it requires ADAM (AD LDS)
   •  No IPv6 support
   •  Limited external DB support – Oracle is the only supported external DB for the first release
   •  No vCenter Heartbeat support – HA is provided through vSphere HA

Page 59: VMware vSphere 5 seminar

vSphere 5.0 – vStorage

• VMFS 5.0

• vStorage API for Array Integration

• Storage vMotion

• Storage I/O Control

• Storage DRS

• VMware API for Storage Awareness

• Profile Driven Storage

• FCoE – Fiber Channel over Ethernet

Page 60: VMware vSphere 5 seminar

Introduction to VMFS-5
§ Enhanced Scalability
•  Increased filesystem size limits & support for much larger single-extent VMFS-5 volumes.
•  Support for single-extent 64TB datastores.

§ Better Performance
•  Uses the VAAI locking mechanism for more tasks.

§ Easier to manage, with less overhead
•  Space reclamation on thin-provisioned LUNs.
•  Smaller sub-blocks.
•  Unified block size.

Page 61: VMware vSphere 5 seminar

VMFS-5 vs VMFS-3 Feature Comparison

Feature                                                   VMFS-3                VMFS-5
2TB+ VMFS volumes                                         Yes (using extents)   Yes
Support for 2TB+ physical RDMs                            No                    Yes
Unified block size (1MB)                                  No                    Yes
Atomic Test & Set enhancements (part of VAAI, locking)    No                    Yes
Sub-blocks for space efficiency                           64KB (max ~3k)        8KB (max ~30k)
Small file support                                        No                    1KB

Page 62: VMware vSphere 5 seminar

VMFS-3 to VMFS-5 Upgrade
§ The upgrade to VMFS-5 is clearly displayed in the vSphere Client under the Configuration → Storage view.

§ It is also displayed in the Datastores → Configuration view.

§ Upgrades are non-disruptive.

Page 63: VMware vSphere 5 seminar

vSphere 5.0 – vStorage

• VMFS 5.0

• vStorage API for Array Integration

• Storage vMotion

• Storage I/O Control

• Storage DRS

• VMware API for Storage Awareness

• Profile Driven Storage

• FCoE – Fiber Channel over Ethernet

Page 64: VMware vSphere 5 seminar

VAAI – Introduction
§ vStorage API for Array Integration = VAAI

§ VAAI's main purpose is to leverage array capabilities:
•  Offloading tasks to reduce overhead
•  Benefiting from enhanced array mechanisms

§ The "traditional" VAAI primitives have been improved.

§ Multiple new primitives have been introduced.

§ Support for NAS!

[Diagram: application → hypervisor → fabric → array (LUN01, LUN02), comparing the VI-3, non-VAAI and VAAI data paths]

Page 65: VMware vSphere 5 seminar

VAAI Primitive Updates in vSphere 5.0
§ vSphere 4.1 shipped a default plugin for Write Same, as that primitive was fully T10 compliant; however, ATS and Full Copy were not.
•  The T10 organization is responsible for SCSI standardization (SCSI-3), a standard used by many storage vendors.

§ vSphere 5.0 has all three T10-compliant primitives integrated in the ESXi stack.
•  This allows arrays which are T10 compliant to leverage these primitives with the default VAAI plugin in vSphere 5.0.

§ It should also be noted that the ATS primitive has been extended in vSphere 5.0 / VMFS-5 to cover even more operations, resulting in even better performance and greater scalability.
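As a hedged check of which devices are using the T10 primitives (the host name is a placeholder, and the output property names are assumed to mirror the esxcli field names):

```powershell
# Hedged sketch; shell equivalent: esxcli storage core device vaai status get
$esxcli = Get-EsxCli -VMHost "esx01.lab.local"
$esxcli.storage.core.device.vaai.status.get() |
    Select-Object Device, ATSStatus, CloneStatus, ZeroStatus, DeleteStatus
```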

Page 66: VMware vSphere 5 seminar

Introducing VAAI NAS Primitives
§ These primitives enable hardware acceleration/offload features for NAS datastores.

§ The following primitives are defined for VAAI NAS:
•  Full File Clone – Similar to VMFS block cloning; allows offline VMDKs to be cloned by the filer.
   •  Note that hot migration via Storage vMotion on NAS is not hardware accelerated.
•  Reserve Space – Allows creation of thick VMDK files on NAS.

§ NAS VAAI plugins are not shipped with ESXi 5.0. These plugins will be developed and distributed by the storage vendors, but signed by the VMware certification program.

Page 67: VMware vSphere 5 seminar

VAAI NAS: Thick Disk Creation
§ Without the VAAI NAS primitives, only the Thin format is available.

§ With the VAAI NAS primitives, Flat (thick), Flat pre-initialized (eager-zeroed thick) and Thin formats are available.

[Screenshots: disk provisioning options without VAAI vs. with VAAI]

Page 68: VMware vSphere 5 seminar

Introducing VAAI Thin Provisioning
§ What are the driving factors behind VAAI TP?
•  Provisioning new LUNs to a vSphere environment (cluster) is complicated.

§ Strategic Goal:
•  We want to make the act of physical storage provisioning in a vSphere environment extremely rare.
•  LUNs should be incredibly large address spaces & should be able to handle any VM workload.

§ VAAI TP features include:
•  Dead space reclamation.
•  Monitoring of space usage.

Page 69: VMware vSphere 5 seminar

VAAI Thin Provisioning – Dead Space Reclamation
§ Dead space is previously written blocks that are no longer used by the VM, for instance after a Storage vMotion.

§ vSphere conveys block information to the storage system via VAAI & the storage system reclaims the dead blocks.
•  Storage vMotion, VM deletion and swap file deletion can trigger the thin LUN to free some physical space.
•  ESXi 5.0 uses a standard SCSI command for dead space reclamation.

[Diagram: a Storage vMotion between VMFS volume A and VMFS volume B frees blocks on the source thin LUN]

Page 70: VMware vSphere 5 seminar

Current "Out Of Space" User Experience

[Diagram: without VAAI there are no space-related warnings and no mitigation steps available; on space exhaustion, VMs and the LUN go offline]

Page 71: VMware vSphere 5 seminar

"Out Of Space" User Experience with VAAI Extensions

[Diagram: a space exhaustion warning appears in the UI, allowing Storage vMotion-based evacuation or the addition of space; on actual exhaustion, affected VMs are paused while the LUN stays online awaiting space allocation]

Page 72: VMware vSphere 5 seminar

vSphere 5.0 – vStorage

• VMFS 5.0

• vStorage API for Array Integration

• Storage vMotion

• Storage I/O Control

• Storage DRS

• VMware API for Storage Awareness

• Profile Driven Storage

• FCoE – Fiber Channel over Ethernet

Page 73: VMware vSphere 5 seminar

Storage vMotion – Introduction
§ In vSphere 5.0, a number of enhancements were made to Storage vMotion.
•  Storage vMotion now works with virtual machines that have snapshots, which means coexistence with other VMware products & features such as VCB, VDR & HBR.
•  Storage vMotion now supports the relocation of linked clones.
•  Storage vMotion has a new use case – Storage DRS – which uses Storage vMotion for Storage Maintenance Mode & storage load balancing (space or performance).
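A minimal hedged PowerCLI example (VM and datastore names are placeholders): relocating a powered-on VM to another datastore with Move-VM performs a Storage vMotion.

```powershell
# Hedged sketch; names are placeholders.
Move-VM -VM (Get-VM "web01") -Datastore (Get-Datastore "Datastore2")
```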

Page 74: VMware vSphere 5 seminar

Storage vMotion Architecture Enhancements (1 of 2)
§ In vSphere 4.1, Storage vMotion uses the Changed Block Tracking (CBT) method to copy disk blocks between source & destination.

§ The main challenge in this approach is that the disk pre-copy phase can take a while to converge, and can sometimes result in Storage vMotion failures if the VM is running a very I/O-intensive load.

§ Mirroring I/O between the source and the destination disks has significant gains compared to the iterative disk pre-copy mechanism.

§ In vSphere 5.0, Storage vMotion uses a new mirroring architecture to provide the following advantages over previous versions:
•  Guarantees migration success even when facing a slower destination.
•  More predictable (and shorter) migration time.

Page 75: VMware vSphere 5 seminar

Storage vMotion Architecture Enhancements (2 of 2)

[Diagram: the mirroring architecture – the mirror driver in the VMM/guest layer beneath the guest OS, the datamover in the VMkernel, and a userworld component, spanning the source and destination datastores]

Page 76: VMware vSphere 5 seminar

vSphere 5.0 – vStorage

• VMFS 5.0

• vStorage API for Array Integration

• Storage vMotion

• Storage I/O Control

• Storage DRS

• VMware API for Storage Awareness

• Profile Driven Storage

• FCoE – Fiber Channel over Ethernet

Page 77: VMware vSphere 5 seminar

Storage I/O Control Phase 2 and Refreshing Memory
§ In many customer environments, storage is mostly accessed from storage arrays over SAN, iSCSI or NAS.

§ One ESXi host can affect the I/O performance of others by issuing a large number of requests on behalf of one of its virtual machines.

§ Thus the throughput/bandwidth available to the ESXi hosts themselves may vary drastically, leading to highly variable I/O performance for VMs.

§ To ensure stronger I/O guarantees, Storage I/O Control was implemented in vSphere 4.1 for block storage, guaranteeing an allocation of I/O resources on a per-VM basis.

§ As of vSphere 5.0, SIOC is also supported for NFS-based storage!

§ This capability is essential to provide better performance for I/O-intensive and latency-sensitive applications such as database workloads, Exchange servers, etc.
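A hedged PowerCLI sketch (the datastore name and threshold are placeholders): Storage I/O Control is enabled per datastore, and the congestion threshold is the latency, in milliseconds, at which SIOC starts enforcing the per-VM shares.

```powershell
# Hedged sketch; name and threshold are placeholders.
Get-Datastore "Datastore1" |
    Set-Datastore -StorageIOControlEnabled $true -CongestionThresholdMillisecond 30
```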

Page 78: VMware vSphere 5 seminar

Storage I/O Control: Refreshing Memory

[Diagram: "what you see" vs. "what you want to see" – an online store, a data mining workload and Microsoft Exchange sharing one NFS/VMFS datastore; with SIOC the important (VIP) workloads receive their fair share of I/O]

Page 79: VMware vSphere 5 seminar

vSphere 5.0 – vStorage

• VMFS 5.0

• vStorage API for Array Integration

• Storage vMotion

• Storage I/O Control

• Storage DRS

• VMware API for Storage Awareness

• Profile Driven Storage

• FCoE – Fiber Channel over Ethernet

Page 80: VMware vSphere 5 seminar

What Does Storage DRS Solve?
§ Without Storage DRS:
•  Identify the datastore with the most disk space and the lowest latency.
•  Validate which virtual machines are placed on the datastore and ensure there are no conflicts.
•  Create the virtual machine and hope for the best.

§ With Storage DRS:
•  Automatic selection of the best placement for your VM.
•  Advanced balancing mechanism to avoid storage performance bottlenecks or "out of space" problems.
•  Affinity rules.

Page 81: VMware vSphere 5 seminar

What Does Storage DRS Provide?
§ Storage DRS provides the following:
1.  Initial placement of VMs and VMDKs based on available space and I/O capacity.
2.  Load balancing between datastores in a datastore cluster via Storage vMotion, based on storage space utilization.
3.  Load balancing via Storage vMotion based on I/O metrics, i.e. latency.

§ Storage DRS also includes affinity/anti-affinity rules for VMs and VMDKs:
•  VMDK Affinity – Keep a VM's VMDKs together on the same datastore. This is the default affinity rule.
•  VMDK Anti-Affinity – Keep a VM's VMDKs separate, on different datastores.
•  Virtual Machine Anti-Affinity – Keep VMs separate, on different datastores.

§ Affinity rules cannot be violated during normal operations.

Page 82: VMware vSphere 5 seminar

Datastore Cluster
§ An integral part of SDRS is the creation of a group of datastores called a datastore cluster.
•  Datastore cluster without Storage DRS – Simply a group of datastores.
•  Datastore cluster with Storage DRS – A load balancing domain similar to a DRS cluster.

§ A datastore cluster without SDRS is just a datastore folder. It is the functionality provided by SDRS that makes it more than just a folder.

[Diagram: four 500GB datastores aggregated into one 2TB datastore cluster]

Page 83: VMware vSphere 5 seminar

Storage DRS Operations – Initial Placement (1 of 4)
§ Initial placement – VM/VMDK create/clone/relocate.
•  When creating a VM, you select a datastore cluster rather than an individual datastore and let SDRS choose the appropriate datastore.
•  SDRS will select a datastore based on space utilization and I/O load.
•  By default, all the VMDKs of a VM will be placed on the same datastore within a datastore cluster (VMDK affinity rule), but you can choose to have VMDKs assigned to different datastore clusters.

[Diagram: a 2TB datastore cluster of four 500GB datastores with 300GB, 260GB, 265GB and 275GB available]

Page 84: VMware vSphere 5 seminar

Storage DRS Operations – Load Balancing (2 of 4)
§ Load balancing – SDRS triggers on space usage & latency thresholds.

§ The algorithm makes migration recommendations when I/O response time and/or space utilization thresholds have been exceeded.
•  Space utilization statistics are constantly gathered by vCenter; the default threshold is 80%.
•  I/O load trend is currently evaluated every 8 hours based on the past day's history; the default threshold is 15ms.

§ Load balancing is based on I/O workload and space, which ensures that no datastore exceeds the configured thresholds.

§ Storage DRS will do a cost/benefit analysis!

§ For I/O load balancing, Storage DRS leverages Storage I/O Control functionality.

Page 85: VMware vSphere 5 seminar

Storage DRS Operations – Thresholds (3 of 4)

Page 86: VMware vSphere 5 seminar

Storage DRS Operations – Datastore Maintenance Mode
§ Datastore Maintenance Mode
•  Evacuates all VMs & VMDKs from the selected datastore.
•  Note that this action will not move VM templates.
•  Currently, SDRS only handles registered VMs.

[Diagram: VOL1 is placed in maintenance mode and its VMs and VMDKs are evacuated to VOL2, VOL3 and VOL4 in the 2TB datastore cluster]

Page 87: VMware vSphere 5 seminar

Storage DRS Operations (4 of 4) – Affinity Rules

VMDK affinity
§  Keep a virtual machine's VMDKs together on the same datastore
§  Maximize VM availability when all disks are needed in order to run
§  On by default for all VMs

VMDK anti-affinity
§  Keep a VM's VMDKs on different datastores
§  Useful for separating log and data disks of database VMs
§  Can select all or a subset of a VM's disks

VM anti-affinity
§  Keep VMs on different datastores
§  Similar to DRS anti-affinity rules
§  Maximize availability of a set of redundant VMs

Page 88: VMware vSphere 5 seminar

SDRS Scheduling

SDRS allows you to create a schedule to change its settings.

This can be useful for scenarios where you don't want VMs to migrate between datastores, or when I/O latency might rise and skew the measurements – e.g. during VM backups.

Page 89: VMware vSphere 5 seminar

So What Does It Look Like? Provisioning…

Page 90: VMware vSphere 5 seminar

So What Does It Look Like? Load Balancing.
§ It will show "utilization before" and "after."
§ There's always the option to override the recommendations.

Page 91: VMware vSphere 5 seminar

vSphere 5.0 – vStorage

• VMFS 5.0

• vStorage API for Array Integration

• Storage vMotion

• Storage I/O Control

• Storage DRS

• VMware API for Storage Awareness

• Profile Driven Storage

• FCoE – Fiber Channel over Ethernet

Page 92: VMware vSphere 5 seminar

What Is the vStorage API for Storage Awareness (VASA)?
§ VASA is an extension of the vSphere Storage APIs: vCenter-based extensions that allow storage arrays to integrate with vCenter for management functionality via server-side plug-ins, or "Vendor Providers."

§ This in turn allows a vCenter administrator to be aware of the topology, capabilities and state of the physical storage devices available to the cluster.

§ VASA enables several features:
•  For example, it delivers system-defined (array-defined) capabilities that enable Profile-Driven Storage.
•  It also provides array-internal information that helps several Storage DRS use cases work optimally with various arrays.

Page 93: VMware vSphere 5 seminar

Storage Compliancy
§ Once the VASA Provider has been successfully added to vCenter, the VM Storage Profiles should also display the storage capabilities provided to it by the Vendor Provider.

§ The example above contains a mock-up of some possible storage capabilities as displayed in the VM Storage Profiles. These are retrieved from the Vendor Provider.

Page 94: VMware vSphere 5 seminar

vSphere 5.0 – vStorage

• VMFS 5.0

• vStorage API for Array Integration

• Storage vMotion

• Storage I/O Control

• Storage DRS

• VMware API for Storage Awareness

• Profile Driven Storage

• FCoE – Fiber Channel over Ethernet

Page 95: VMware vSphere 5 seminar

Why Profile-Driven Storage? (1 of 2)
§ Problem Statement
1. Difficult to manage datastores at scale
   •  Including capacity planning, differentiated data services for each datastore, maintaining capacity headroom, etc.
2. Difficult to correctly match VM SLA requirements to available storage
   •  Because: manually choosing between many datastores and more than one storage tier
   •  Because: VM requirements are not accurately known, or may change over the VM's lifecycle

§ Related trends
•  Newly virtualized Tier-1 workloads need stricter VM storage SLA promises
   •  Because: other VMs can impact the performance SLA
•  Scale-out storage mixes VMs with different SLAs on the same storage

Page 96: VMware vSphere 5 seminar

Why Profile-Driven Storage? (2 of 2)
Save OPEX by reducing repetitive planning and effort!

§ Minimize per-VM (or per VM request) "thinking" or planning for storage placement.
•  The admin needs to plan for optimal space and I/O balancing for each VM.
•  The admin needs to identify VM storage requirements and match them to physical storage properties.

§ Increase the probability of "correct" storage placement and use (minimize the need for troubleshooting, minimize time spent troubleshooting).
•  The admin needs more insight into storage characteristics.
•  The admin needs the ability to custom-tag available storage.
•  The admin needs an easy means to identify incorrect VM storage placement (e.g. on an incorrect datastore).

Page 97: VMware vSphere 5 seminar

Save OPEX by Reducing Repetitive Planning and Effort!

[Diagram: per-VM workflow comparison]
•  Today: identify requirements → find the optimal datastore → create the VM → periodically check compliance.
•  Storage DRS: initial setup (identify storage characteristics, group datastores), then identify requirements → create the VM → periodically check compliance.
•  Storage DRS + Profile-Driven Storage: initial setup (discover storage characteristics, group datastores), then select a VM Storage Profile → create the VM.

Page 98: VMware vSphere 5 seminar

Storage Capabilities & VM Storage Profiles

[Diagram: storage capabilities are surfaced by VASA or user-defined; a VM Storage Profile references storage capabilities and is associated with a VM, which is then reported as Compliant or Not Compliant]

Page 99: VMware vSphere 5 seminar

Selecting a Storage Profile During Provisioning

§ By selecting a VM Storage Profile, datastores are now split into Compatible & Incompatible.

§ The Celerra_NFS datastore is the only datastore which meets the GOLD Profile requirements – i.e. it is the only datastore that has our user-defined storage capability associated with it.

Page 100: VMware vSphere 5 seminar

VM Storage Profile Compliance

§ Policy Compliance is visible from the Virtual Machine Summary tab.

Page 101: VMware vSphere 5 seminar

vSphere 5.0 – vStorage

• VMFS 5.0

• vStorage API for Array Integration

• Storage vMotion

• Storage I/O Control

• Storage DRS

• VMware API for Storage Awareness

• Profile Driven Storage

• FCoE – Fiber Channel over Ethernet

Page 102: VMware vSphere 5 seminar

Introduction
§ Fibre Channel over Ethernet (FCoE) is an enhancement that expands Fibre Channel onto Ethernet by combining two leading-edge technologies (FC and Ethernet).

§ The FCoE adapters that VMware supports generally fall into two categories: hardware FCoE adapters, and software FCoE adapters which use an FCoE-capable NIC.
•  Hardware FCoE adapters have been supported since vSphere 4.0.

§ FCoE-capable NICs are referred to as Converged Network Adapters (CNAs), which carry both network and storage traffic.

§ ESXi 5.0 uses FCoE adapters to access Fibre Channel storage.

Page 103: VMware vSphere 5 seminar

Software FCoE Adapters (1 of 2)
§ A software FCoE adapter is software code that performs some of the FCoE processing.

§ This adapter can be used with a number of NICs that support partial FCoE offload.

§ Unlike the hardware FCoE adapter, the software adapter needs to be activated, similar to software iSCSI.
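As a hedged sketch of the activation step (the host and NIC names are placeholders, and the esxcli fcoe calls are an assumption — in the vSphere Client this is simply the "Add Software FCoE Adapter" action):

```powershell
# Hedged sketch; the fcoe namespace calls below are assumed, not confirmed.
# Shell forms (assumed):
#   esxcli fcoe nic list
#   esxcli fcoe nic discover --nic-name=vmnic4
$esxcli = Get-EsxCli -VMHost "esx01.lab.local"
$esxcli.fcoe.nic.list()
```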

Page 104: VMware vSphere 5 seminar

Software FCoE Adapters (2 of 2)
§ Once software FCoE is enabled, a new adapter is created, and discovery of devices can take place.

Page 105: VMware vSphere 5 seminar

Conclusion
§ vSphere 5.0 has many new, compelling storage features.

§ VMFS volumes can be larger than ever before.
•  They can contain many more virtual machines due to VAAI enhancements and architectural changes.

§ Storage DRS and Profile-Driven Storage will help solve traditional problems with virtual machine provisioning.

§ Administrative overhead will be greatly reduced by:
•  VASA surfacing storage characteristics.
•  Creating profiles through Profile-Driven Storage.
•  Combining multiple datastores into a large aggregate.

Page 106: VMware vSphere 5 seminar

vSphere Storage Appliance (VSA)

Page 107: VMware vSphere 5 seminar

Introduction (1 of 3)
§ In vSphere 5.0, VMware releases a new storage appliance called the VSA.
•  VSA is an acronym for "vSphere Storage Appliance."
•  This appliance is aimed at SMB (small/medium business) customers who may not be in a position to purchase a SAN or NAS array for their virtual infrastructure, and therefore do not have shared storage.
•  It is the SMB market that we wish to go after with this product — our aim is to move these customers from Essentials to Essentials+.
•  Without access to a SAN or NAS array, these SMB customers are excluded from many of the top features available in a VMware virtual infrastructure, such as vSphere HA & vMotion.
•  Customers who decide to deploy a VSA can now benefit from many additional vSphere features without having to purchase a SAN or NAS device to provide them with shared storage.

Page 108: VMware vSphere 5 seminar

Introduction (2 of 3)
§ Each ESXi server has a VSA deployed to it as a virtual machine.

§ The appliances use the available space on the local disk(s) of the ESXi servers & present one replicated NFS volume per ESXi server. This replication of storage makes the VSA very resilient to failures.

[Diagram: three vSphere hosts, each running a VSA appliance and exporting an NFS datastore, managed from the vSphere Client via the VSA Manager]

Page 109: VMware vSphere 5 seminar

Introduction (3 of 3)
§ The NFS datastores exported from the VSA can be used as shared storage by all of the ESXi servers in the same datacenter.

§ The VSA creates shared storage out of local storage for use by a specific set of hosts.

§ This means that vSphere HA & vMotion can now be made available on low-end (SMB) configurations, without external SAN or NAS servers.

§ There is a CAPEX saving for SMB customers, as there is no longer a need to purchase dedicated SAN or NAS devices to achieve shared storage.

§ There is also an OPEX saving, as the VSA can be managed by the vSphere administrator — no dedicated SAN skills are needed to manage the appliances.

Page 110: VMware vSphere 5 seminar

Supported VSA Configurations
§ The vSphere Storage Appliance can be deployed in two configurations:
•  2 × ESXi 5.0 server configuration
   •  Deploys 2 vSphere Storage Appliances, one per ESXi server, plus a VSA Cluster Service on the vCenter server.
•  3 × ESXi 5.0 server configuration
   •  Deploys 3 vSphere Storage Appliances, one per ESXi server.
•  Each of the servers must contain a new/vanilla install of ESXi 5.0.
•  During the configuration, the user selects a datacenter and is then presented with a list of ESXi servers in that datacenter.
•  The installer checks the compatibility of each of these physical hosts to make sure they are suitable for VSA deployment.
•  The user must then select which compatible ESXi servers should participate in the VSA cluster, i.e. which servers will host VSA nodes.
•  The installer then creates the storage cluster by aggregating and virtualizing each server's local storage to present a logical pool of shared storage.

Page 111: VMware vSphere 5 seminar

Two-Member VSA

[Diagram: a VSA cluster with 2 members, managed by the VSA Manager and VSA Cluster Service on the vCenter Server; node 1 presents VSA Datastore 1 (Volume 1) and holds the replica of Volume 2, while node 2 presents VSA Datastore 2 (Volume 2) and holds the replica of Volume 1]

Page 112: VMware vSphere 5 seminar

Three-Member VSA

[Diagram: a VSA cluster with 3 members, managed by the VSA Manager on the vCenter Server; each node presents one VSA datastore (Volumes 1–3) and holds the replica of another node's volume]

Page 113: VMware vSphere 5 seminar

VSA Manager

§ The VSA Manager helps an administrator perform the following tasks:
•  Deploy vSphere Storage Appliance instances onto ESXi hosts to create a VSA cluster
•  Automatically mount the NFS volumes that each vSphere Storage Appliance exports as datastores on the ESXi hosts
•  Monitor, maintain, and troubleshoot a VSA cluster

Page 114: VMware vSphere 5 seminar

Resilience
§ Many storage arrays are a single point of failure (SPOF) in customer environments.

§ The VSA is very resilient to failures.

§ If a node fails in the VSA cluster, another node will seamlessly take over the role of presenting its NFS datastore.

§ The NFS datastore that was being presented from the failed node will now be presented from the node that holds its replica (mirror copy).

§ The new node will use the same NFS server IP address that the failed node was using, so any VMs residing on that NFS datastore are not affected by the failover.

Page 115: VMware vSphere 5 seminar

What’s New in VMware vCenter Site Recovery Manager v5.0 – Technical

Page 116: VMware vSphere 5 seminar

vCenter Site Recovery Manager Ensures Simple, Reliable DR
§ Site Recovery Manager complements vSphere to provide the simplest and most reliable disaster protection and site migration for all applications.

§ Provides cost-efficient replication of applications to the failover site
•  Built-in vSphere Replication
•  Broad support for storage-based replication

§ Simplifies management of recovery and migration plans
•  Replaces manual runbooks with centralized recovery plans
•  From weeks to minutes to set up a new plan

§ Automates failover and migration processes for reliable recovery
•  Enables frequent, non-disruptive testing
•  Ensures fast, automated failover
•  Automates failback processes

Page 117: VMware vSphere 5 seminar

SRM Provides a Broad Choice of Replication Options

•  vSphere Replication: simple, cost-efficient replication for Tier 2 applications and smaller sites.
•  Storage-based replication: high-performance replication for business-critical applications in larger sites.

[Diagram: protected and recovery sites, each with vCenter Server, Site Recovery Manager and vSphere hosts; VMs are replicated either by vSphere Replication or by storage-based replication]

Page 118: VMware vSphere 5 seminar

SRM of Today: High-Level Architecture

[Diagram: "Protected" site and "Recovery" site, each with a vSphere Client, vCenter Server, SRM Server with SRM plug-in and Storage Replication Adapter (SRA), ESX hosts, and SAN arrays whose VMFS datastores are replicated by the arrays' replication software]

Page 119: VMware vSphere 5 seminar

Technology – vSphere Replication
§ Adds native replication to SRM
•  Virtual machines can be replicated regardless of what storage they live on
•  Enables replication between heterogeneous datastores
•  Replication is managed as a property of a virtual machine
•  Efficient replication minimizes impact on VM workloads
•  Provides a guest-level copy of the VM, not a copy of the VM itself

Page 120: VMware vSphere 5 seminar

vSphere Replication Details
§ Replication granularity per virtual machine
•  You can opt to replicate all or a subset of the VM's disks
•  You can create the initial copy in any way you want — even via sneakernet!
•  You have the option to place the replicated disks where you want
•  Disks are replicated in a group-consistent manner

§ Simplified replication management
•  The user selects the destination location for target disks
•  The user selects the Recovery Point Objective (RPO)
•  The user can supply the initial copy to save on bandwidth

§ Replication specifics
•  Changes on the source disks are tracked by ESX
•  Deltas are sent to the remote site
•  Does not use VMware snapshots

Page 121: VMware vSphere 5 seminar

Replication UI
§ Select VMs to replicate from within the vSphere Client via right-click options.

§ You can do this on one VM, or on multiple VMs at the same time!

Page 122: VMware vSphere 5 seminar

vSphere Replication 1.0 Limitations
§ Focus is on the virtual disks of powered-on VMs.
•  ISOs and floppy images are not replicated.
•  Powered-off/suspended VMs are not replicated.
•  Non-critical files are not replicated (e.g. logs, stats, swap, dumps).

§ vSR works at the virtual device layer.
•  Independent of disk format specifics.
•  Independent of primary-side snapshots.
•  Snapshots work with vSR: the snapshot is replicated, but the VM is recovered with the snapshots collapsed.
•  Physical RDMs are not supported.

§ FT, linked clones and VM templates are not supported with HBR.

§ Automated failback of vSR-protected VMs will arrive later, but will be supported in the future.

§ Virtual hardware 7 or later is required in the VM.

Page 123: VMware vSphere 5 seminar

SRM Architecture with vSphere Replication

[Diagram: "Protected" site and "Recovery" site, each with a vSphere Client, vCenter Server, SRM Server with SRM plug-in and a vSphere Replication Management Server (vRMS); vSphere Replication Agents (vRA) on the protected ESX hosts send changes to the vSphere Replication Server (vRS) at the recovery site, which writes them to VMFS storage]

Page 124: VMware vSphere 5 seminar

SRM Scalability

                                                           Maximum   Enforced
Protected virtual machines (total)                         3000      No
Protected virtual machines in a single protection group    500       No
Protection groups                                          250       No
Simultaneously running recovery plans                      30        No
vSphere Replication protected virtual machines             500       No

Page 125: VMware vSphere 5 seminar

Workflow § Currently we have the DR Event Failover and Test workflows.

Page 126: VMware vSphere 5 seminar

Planned Migration
§ New is Planned Migration, which will shut down the protected VMs and then synchronize them!

§ Planned migration ensures application consistency and no data loss during migration:
•  Graceful shutdown of production VMs in an application-consistent state
•  Data sync to complete replication of the VMs
•  Recovery of the fully replicated VMs

Page 127: VMware vSphere 5 seminar

Failback

Description
•  "Single button" to fail back all recovered VMs
•  Interfaces with storage to automatically reverse replication
•  Replays existing recovery plans – so new virtual machines are not part of the failback

Benefits
•  Facilitates DR operations for enterprises that are mandated to perform a true failover as part of DR testing
•  Simplifies the recovery process after a disaster

[Diagram: reverse replication from Site B (Recovery) back to Site A (Primary)]

Page 128: VMware vSphere 5 seminar

Failback
§ To fail back, you first need to do a planned migration, followed by a reprotect. Then, to do the actual failback, you run a recovery.

§ Below is a successful recovery of a planned migration.

Page 129: VMware vSphere 5 seminar

Failback (continued) § Reprotect is now almost complete . . .

Page 130: VMware vSphere 5 seminar

Failback (continued) § Replication now goes in reverse – to the protected side.

Page 131: VMware vSphere 5 seminar

Failback (continued)
§ Now we are ready to fail over to our original side – the protected site!

Page 132: VMware vSphere 5 seminar

DR Event

Page 133: VMware vSphere 5 seminar

Dependencies
§ There is now more functionality to help manage multi-tier applications.

Page 134: VMware vSphere 5 seminar

Dependencies (continued)

Page 135: VMware vSphere 5 seminar

Dependencies (continued) – VM Startup Order

[Diagram: five startup-order groups (Group 1–Group 5) containing databases, a master database, app servers, Exchange/mail sync, Apache web servers and desktops]