Citrix Server Virtualization: XenServer Solution Overview


  • XenServer, Citrix ([email protected])

    Citrix Confidential - Do Not Distribute

  • Citrix Systems: founded 1989; NASDAQ: CTXS; 2011 revenue approx. $2.2B; 6,500+ employees; offices in 35 countries; 10,000+ partners in 100 countries; >23 years in business


  • Citrix product lines: XenDesktop, XenApp, XenServer, NetScaler (web application delivery)



  • 2007–2012: 50,000+ customers in production; 225,000+ licensed deployments; ~50% annual growth

  • Xen/XenServer powers roughly 90% of public clouds, including Amazon EC2, Rackspace, SoftLayer, Carpathia, NaviSite, and NTT; enterprise users include Microsoft and SAP

  • Citrix XenServer: 300,000+ deployments; 225,000+ licensed; also delivered bundled with XenDesktop


  • "10 minutes to Xen": XenApp, XenDesktop, and NetScaler VPX all run on XenServer

  • x86 virtualization approaches: full virtualization, para-virtualization, and hardware-assisted virtualization


  • Hypervisor architectures compared: Xen/XenServer use a Domain 0 control domain, Microsoft Hyper-V a Parent Partition (the control domain OS varies: Linux/Windows/Solaris); VMware takes a monolithic approach


  • XenServerABCDE

  • Citrix XenServer


  • XenCenter


  • XenServer 6.0

    Features by edition (the tick runs in the original matrix indicate which editions include each feature):
    All editions: 64-bit Xen Hypervisor; Active Directory integration; conversion tools (P2V, V2V and OVF); XenCenter management; Management Integration with Systems Center VMM.
    Advanced and above: Automated VM Protection and Recovery; Distributed Virtual Switching; Dynamic Memory Control; High Availability; Performance Reporting and Alerting; Mixed Resource Pools with CPU Masking.
    Enterprise and above: Dynamic Workload Balancing and Power Management; IntelliCache for XenDesktop Storage Optimization; Live Memory Snapshot and Revert; Provisioning Services for Virtual Servers; Role-Based Administration and Audit Trail; StorageLink Advanced Storage Management; Monitoring Pack for Systems Center Ops Manager; Web Management Console with Delegated Admin.
    Platinum only: Provisioning Services for Physical Servers; Site Recovery.

  • XenServer 6.0 runs on the Xen 4.1 hypervisor. GPT support: switch from DOS MBR partitioning to GUID partitioning; supports boot devices >2TB; only for fresh XS installs. 10% Dom0 size reduction.

    XenServer 6.0

  • Overhead vs. native: Linux 0.5–4%; Windows 2–6%. 64-bit hypervisor; up to 16 vCPUs per VM.

  • Based on Open vSwitch (www.openvswitch.org); a two-part solution: the in-host vSwitch plus a DVS controller with a web console


  • DVS vSwitch features: ACLs, QoS, RSPAN, and NetFlow. Policies are applied to: newly plugged VIFs, new VMs, XenMotioned VMs (manual and through WLB), and VMs on new pool members. Pool-wide private networks; fail-safe mode.

  • Scalability: 16 vCPUs per VM; 128 GB memory per VM; 1 TB physical RAM per host

  • XenServer resource pool and VM management APIs


  • High availability: automatic master re-election; email alerting


  • XenMotion live migration

  • XenCenter

  • CPU

  • XenServer

  • vApps (virtual appliances): OVF definition of a virtual appliance; vApp management in XenCenter; Integrated Site Recovery; appliance import and export; HA at the vApp level

  • vApp(vApp)

  • /

  • Workload Balancing (WLB): previously Windows-based; now a small-footprint Linux virtual appliance (~150 MB)

  • Dynamic Memory Control (DMC)

  • XenServer VM Protection and Recovery (VMPR)

  • Pre-checks with instructions to resolve; step-by-step procedure for upgrades; blocks unsupported upgrades

    Rolling Pool

  • Upgrade media can be served over HTTP, NFS, or FTP, or booted from CD or PXE

    Rolling Pool
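The rolling-pool upgrade behavior above (pre-checks with resolution hints, unsupported upgrades blocked) can be sketched as a simple gate; the check names and thresholds below are illustrative, not XenServer's actual pre-checks.

```python
# Sketch (not the real upgrade wizard): each pre-check either passes or
# reports a failure plus an instruction to resolve it, and any failure
# blocks the upgrade. All check names/thresholds are hypothetical.

def run_prechecks(host):
    checks = [
        ("host_enabled", host.get("enabled", False),
         "Enable the host before upgrading."),
        ("free_disk_mb", host.get("free_disk_mb", 0) >= 1024,
         "Free at least 1 GB on the control domain."),
        ("no_running_vms", host.get("running_vms", 1) == 0,
         "Migrate or shut down VMs on this host first."),
    ]
    # Return (check, resolution hint) for every failed check.
    return [(name, hint) for name, ok, hint in checks if not ok]

def can_upgrade(host):
    """Unsupported upgrades are blocked: any failed pre-check stops it."""
    return len(run_prechecks(host)) == 0

ready = {"enabled": True, "free_disk_mb": 2048, "running_vms": 0}
busy = {"enabled": True, "free_disk_mb": 2048, "running_vms": 3}
print(can_upgrade(ready))  # True
print(can_upgrade(busy))   # False
```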

  • API

  • StorageLink: formerly delivered via the StorageLink Gateway; LUN-per-VDI model using the array's own intelligence; integration into XenServer removes the gateway as a SPOF; supports NetApp and Dell EqualLogic; planned: EMC Clariion and EMC VNX series through SMI-S (post-release)*

    StorageLink

  • [Diagram: with the StorageLink Gateway: XAPI daemon → SMAPI → LVM/NFS/NetApp plus a CSLG bridge to an external StorageLink Gateway managing EqualLogic, NetApp, and SMI-S arrays]

  • [Diagram: integrated StorageLink: the same XAPI daemon → SMAPI stack with the EqualLogic, NetApp, and SMI-S adapters built in; no external gateway]

  • Integrated Site Recovery: configured in XenServer/XenCenter; failover and failback; vApp grouping with startup order. Failover pre-checks: power state of source VMs, duplicate VMs on the target pool, SR connectivity.

  • StorageLink Site Recovery

    Citrix Confidential Subject to NDA

  • Lab Manager: lab environment provisioning and debugging

  • Web Self-Service: browser-based management separate from XenCenter; per-VM delegation; Active Directory authentication; XenServer VM console access (via VNC or RDP)

  • VDI with HDX 3D Pro for 3D/CAD workloads; alternatives include RemoteFX and virtual GPUs

    GPU pass-through

  • GPU pass-through: instead of one GPU per PC, GPUs are installed in the XenServer host and passed through to VMs; savings of up to 75%, depending on the number of GPU cards per XenServer host

  • Requires XenDesktop 5 and XenServer 5.6 SP2

    IntelliCache

  • IntelliCache caches shared-storage read I/O on assigned local storage; write traffic still needs IOPS on shared storage, offset by local write-cache benefits; reduces storage TCO by 15–30%

    IntelliCache
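A toy model of the IntelliCache behavior described above: the first read of a block hits shared storage, subsequent reads are served from the host-local cache, and writes always reach shared storage (hence the remaining write-IOPS requirement). This illustrates the caching idea only, not Citrix's implementation.

```python
# Illustrative IntelliCache-style read path (simplified, not Citrix code).

class IntelliCacheModel:
    def __init__(self):
        self.local_cache = {}       # block -> data cached on local disk
        self.shared_reads = 0       # I/O that reached shared storage
        self.shared_writes = 0

    def read(self, block, shared_storage):
        if block in self.local_cache:
            return self.local_cache[block]      # served locally, no SAN I/O
        self.shared_reads += 1
        data = shared_storage[block]
        self.local_cache[block] = data          # populate the local cache
        return data

    def write(self, block, data, shared_storage):
        self.shared_writes += 1                 # writes still need SAN IOPS
        shared_storage[block] = data
        self.local_cache[block] = data          # keep the cache coherent

san = {0: "base-image-block"}
cache = IntelliCacheModel()
cache.read(0, san); cache.read(0, san); cache.read(0, san)
print(cache.shared_reads)   # 1: only the first read hit shared storage
```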

  • XenConvert P2V from XenCenter: import a VM disk or OVF appliance; set name, vCPU, vMemory, virtual interfaces, and target SR; the image is imported as a VM

  • OVF import/export: XenServer 5.6 and above, XenConvert 2.2.1 and above, VMware ESX 3.5 and above; formats: VHD, OVF, vApps (.ova), compressed (ova.gz); OVF EULA; manifest and digital signature

  • Active-backup NIC bonding: added to XenServer 5.6 FP1 for the Linux bridge (CLI only in 5.6 FP1); support added for Open vSwitch; configurable through XenCenter

  • XenServer and Solarflare SR-IOV: a typical SR-IOV implementation improves performance but loses services and management (e.g., live migration); the XS & Solarflare SR-IOV model improves performance AND retains full use of services and management

    [Diagrams: in a typical SR-IOV implementation, the guest VM's VF driver talks directly to the NIC's virtual function, bypassing the dom0 vSwitch (fast, but unmanageable); in the XS & Solarflare model, a plug-in VF driver sits beneath the guest's netfront driver, so traffic takes the fast path to the NIC but can fall back to the netback/vSwitch path in dom0.]

  • XenServer 6.0 integration with SCVMM and SCOM: System Center Virtual Machine Manager can manage XS; a System Center Operations Manager pack monitors XS hosts/pools; XenCenter remains required for features (e.g. HA, DR) not exposed through SCVMM and SCOM

  • XenServer

    [Table: storage capability matrix comparing Local Disk, Fibre Channel, hardware iSCSI, software iSCSI, NFS-based, NetApp, and EqualLogic storage across the capabilities: Store VM, Automatic VM Placement, XenMotion VMs, Resize Disks, Fast Export, Thin Provisioning]

  • Provisioning Server (streaming service) streams vDisks from network storage to both virtualized servers (XenServer VMs) and physical servers

  • Xenserver

  • Summary: why XenServer? All of the features and performance IT needs, at half the cost of other offerings.


    More than 215,000 organizations worldwide rely on Citrix to deliver any application to users anywhere with the best performance, highest security and lowest cost. Citrix customers include 100 percent of the Fortune 100 companies and 99 percent of the Fortune Global 500, as well as hundreds of thousands of small businesses. Citrix has approximately 8,000 partners in more than 100 countries. Annual revenue in 2008 was $1.6 billion.

    Citrix XenServer is an enterprise-class, cloud-proven virtualization platform. With the introduction of version 5 last year, XS crossed an important milestone with a robust feature set that rivals any other product on the market.

    XenServer is not a risky bet. It's the second most widely deployed hypervisor in the enterprise, with more than 50,000 customers in production. With the Xen hypervisor running virtually every cloud on the planet, including the world's largest virtualization deployment, it's also cloud-proven, with more scalability than most enterprises are likely to ever need.

    On Feb 23, 2009, we changed the virtualization industry forever by making XenServer, the full product, completely free for everyone: enterprises, OS vendors, cloud providers.

    Here are some highlights from market thought leaders.

    **Xen 50,000 ( CPU Linux

    The green arrows show memory and CPU access, which goes through the Xen engine down to the hardware. The orange lines show the path of I/O traffic on the server. The storage and network I/O connect through a high-performance memory bus in Xen to the control domain environment.

    VMware vCenter runs on a Windows server; protecting vCenter itself requires vCenter Heartbeat.

    Platinum Edition: advanced automation, data protection and private cloud features for enterprise-wide virtual environments. Enterprise Edition: automated, integrated, and production-ready offering for medium to large enterprise deployments. Advanced Edition: highly available and memory-optimized virtual infrastructure for improved TCO and host utilization. Free Edition: free, enterprise-ready virtual infrastructure with management tools above and beyond alternatives.

    Often people ask for information on what type of performance they can expect with our solutions. These are some general numbers based on running a variety of benchmarks on our virtualization platform as compared to running the same thing on native.

    With a fully para-virtualized OS such as Linux you can expect 0.5–4% overhead vs. running on native.

    With a hardware-virtualization-assist OS plus paravirtualized I/O, such as with Windows, you will see a 2–7% overhead vs. running on native.

    In both cases this is close to native speed, and much faster than host-OS-based virtualization solutions. XenSource matches the Windows performance provided by the latest VMware ESX version and actually runs faster than ESX for Linux VMs. Keep in mind that VMware has been tuning their architecture for many years now, while in our first year we have been able to match their performance without much tuning. Our performance will only increase from this point on.
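Applying the overhead ranges quoted above, expected virtualized throughput is simple arithmetic; the 10,000 ops/sec native figure below is hypothetical, not a benchmark result.

```python
# Expected virtualized throughput given a native baseline and an
# overhead percentage (figures taken from the ranges in this document).

def virtualized_throughput(native, overhead_pct):
    return native * (1 - overhead_pct / 100)

native_ops = 10_000  # hypothetical native ops/sec
print(virtualized_throughput(native_ops, 4))   # Linux worst case: 9600.0
print(virtualized_throughput(native_ops, 7))   # Windows worst case: 9300.0
```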

    The XenSource platform is also designed to handle the performance requirements of power-hungry applications, allowing up to 32 virtual CPUs for a Linux VM and up to 4 virtual CPUs for a Windows VM. This lets highly threaded applications like databases, app servers, and mail servers take advantage of multiple processor cores.

    DVS goals: provide greater visibility into the XS networking layer; provide distributed, fine-grained networking configuration and control policies.

    Goals: integrate OVS within XS as an eventual replacement for the Linux bridge stack. This is basically a compatibility exercise: OVS must replicate all existing XS networking functionality (VLANs, bonds, dedicated storage NICs, ingress QoS, etc.). It does not leverage any of the advanced OVS functionality, but is a critical building block.

    Provide a Distributed Virtual Switching (DVS) solution that extends the XS platform. This begins to leverage the programmable nature of the OVS and its support for OpenFlow, and requires OVS integration and the DVSC. It extends XS in two ways: greater visibility into the networking layer of the XS platform via standard tools and processes, including RSPAN and NetFlow; and fine-grained networking configuration and control policies that apply across VM migrations.

    We are working on future solutions in this area ourselves, and with partners. These revolve around the OVS / OpenFlow combination and enable other solutions like VM isolation, multi-tenancy, and connecting cloud and on-premise networks. In the Cowley timeframe, support for ISV solutions will be evaluated on a case-by-case basis. We are enabling the partner ecosystem by integrating an OpenFlow-programmable switch into our platform and providing DVSC APIs.

    With virtualization, one challenge is network visibility. The last hop to the VM is now a switch living in the virtualization software on the host, not the top-of-rack switch as network administrators are more accustomed to.

    With Distributed Virtual Switching, we can enable better visibility into the network and accomplish things such as: real-time network traffic statistics (Rx bytes, packets, etc.) that you can easily get on switches in the physical world; enhanced security, since setting ACLs on virtual interfaces (VIFs) permits you to provide a configurable, XenServer-provided firewall for the VM (block HTTP, enable only HTTP, and various other configurations are now possible); enhanced monitoring, where through port monitoring you could for example determine if a XenDesktop user is running Pandora and causing performance issues; and simpler network isolation and configuration of VLANs, which are especially important in service provider environments, leading to much simpler multi-tenancy in the future.
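The per-VIF firewall configurations mentioned above ("block HTTP", "enable only HTTP") amount to ordered rule evaluation with first match wins. A minimal sketch with assumed rule semantics, not the DVS controller's actual rule format:

```python
# Toy per-VIF ACL evaluation: rules are (action, field, value) triples,
# checked in order; the first matching rule decides, otherwise a default
# applies. Because rules attach to the VIF, they travel with the VM.

def evaluate(rules, packet, default="allow"):
    for action, field, value in rules:
        if packet.get(field) == value:
            return action
    return default

# "Block HTTP" on one VIF:
vif_rules = [("deny", "dst_port", 80)]
print(evaluate(vif_rules, {"dst_port": 80}))   # deny
print(evaluate(vif_rules, {"dst_port": 443}))  # allow

# "Enable only HTTP" is the inverse: allow 80, deny everything else.
only_http = [("allow", "dst_port", 80)]
print(evaluate(only_http, {"dst_port": 22}, default="deny"))  # deny
```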

    VM ACLs move with the VM, even after a live migration, instead of being tied to a specific host.

    The Controller console UI is a web based interface, separate from XenCenter so that network administrators (not virtualization administrators) can have visibility into the distributed virtual switching environment.

    Without the controller, you can still do normal networking configurations (as in the past) via XenCenter, such as creating networks and configuring TCP/IP settings.

    XenMotion allows running guest VMs to be migrated without service downtime: zero downtime during planned maintenance, and the ability to load-balance VMs over different servers.

    XenServer supports several features to guarantee service uptime in the event of infrastructure failure. Firstly, resource pools can be configured for automated high availability. This deals with individual host failures by restarting VMs that were running on that host onto the next available machine in the resource pool. Notable features include: a peer-to-peer "self-healing" architecture that ensures there is no single point of management failure; individually set VM restart priorities, to control the order in which services are restarted in the event of host failure; and dynamic failure-planning algorithms that let administrators see how many host failures can be tolerated without compromising services.

    The inability to combine disparate host hardware into a resource pool makes it difficult for customers to expand their deployments over time. Anything that requires creation of an additional pool creates increased management costs for pool-based configuration settings, SRs, and templates, and limits migration agility.
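The restart behavior described above can be sketched as a priority-ordered placement loop. This illustrates the idea only, not the XenServer HA planner; the lower-number-first priority convention and the memory-only capacity check are assumptions.

```python
# Sketch: after a host failure, restart its VMs in priority order onto
# pool members with enough free memory (illustrative placement logic).

def plan_restarts(failed_vms, hosts):
    placements = {}
    # Lower priority number restarts first (assumed convention).
    for vm in sorted(failed_vms, key=lambda v: v["priority"]):
        for host in hosts:
            if host["free_mem"] >= vm["mem"]:
                host["free_mem"] -= vm["mem"]   # reserve capacity
                placements[vm["name"]] = host["name"]
                break
    return placements

vms = [{"name": "db", "priority": 0, "mem": 8},
       {"name": "web", "priority": 1, "mem": 4},
       {"name": "batch", "priority": 2, "mem": 16}]
pool = [{"name": "host2", "free_mem": 12}, {"name": "host3", "free_mem": 16}]
print(plan_restarts(vms, pool))
# {'db': 'host2', 'web': 'host2', 'batch': 'host3'}
```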

    XenServer 5.5 requires host CPU models within a pool to be identical in order to ensure successful live migrations. However, major system vendors, HP, Dell, and IBM included, add and discontinue CPU offerings within the lifecycle of a server model, making it difficult to purchase truly identical servers over time.

    Both Intel (FlexMigration) and AMD (Extended Migration) offer technologies in their recent CPUs that provide CPU "masking" or "leveling". These features allow a CPU to be configured to appear as providing a different make, model, or functionality than it actually does, enabling pools of hosts with disparate CPUs to safely support live migrations.
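Conceptually, CPU leveling exposes only the intersection of the pool's CPU features, so a migrated VM never sees a feature disappear. A simplified model using feature-name sets; real masking operates on CPUID bitmasks, so this is an illustration of the idea only.

```python
# Pool "feature level": the set of CPU features every host in the pool
# shares. Masked hosts advertise only this set, making live migration
# safe between disparate CPUs (simplified; real masking uses CPUID bits).

def pool_feature_level(host_features):
    level = set(host_features[0])
    for features in host_features[1:]:
        level &= set(features)      # intersect with each host's features
    return level

older = {"sse2", "sse3", "vmx"}
newer = {"sse2", "sse3", "ssse3", "sse4_1", "vmx", "aes"}
print(sorted(pool_feature_level([older, newer])))  # ['sse2', 'sse3', 'vmx']
```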

    HCL will be extended to include CPU combinations that have been certified by partners or customers via the kits

    Requirements: the CPUs on the hosts to be masked have FlexMigration/Extended Migration capabilities, and the CPUs of all hosts in the pool are from the same vendor (Intel or AMD, not mixed).

    At GA we likely won't have a lot of tested, certified configurations of CPUs that can be levelized with one another. We encourage people to submit their working configurations to us via the Citrix XenServer 5.6 Server Hardware Self-Test Kit: http://www.citrix.com/static/ready/downloads/XS+5.6+Server+Hardware+Self-Test+Kit.zip.

    We will have a new HCL category for Heterogeneous Pools to support this.

    Schedule pool policy based on time-of-day needs. When starting guests, an option to "Start on optimal server" is available, and XenServer chooses the most appropriate server based on policy. Users have the ability to override policy, or to specify guests or hosts that are excluded from policy (e.g., high-demand applications).

    DMC = VM density.

    Automated live memory reconfiguration: provides administrator-controlled over-provisioning of host RAM. The administrator can define a policy, and the host will manage VM memory based on that policy. Manual per-VM memory allocation control in 2 MB increments.
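A sketch of how an over-provisioning policy might squeeze each VM between its configured minimum and maximum in the 2 MB increments mentioned above; the proportional squeeze is an assumed policy, not Citrix's actual DMC algorithm.

```python
# Toy DMC-style ballooning: if the VMs' combined maximums exceed host
# RAM, share the spare RAM above the minimums proportionally, rounding
# each allocation down to a 2 MB granularity.

GRANULARITY_MB = 2  # per-VM allocation granularity

def balloon(vms, host_ram_mb):
    total_max = sum(vm["max"] for vm in vms)
    if total_max <= host_ram_mb:
        return {vm["name"]: vm["max"] for vm in vms}   # no pressure
    spare = host_ram_mb - sum(vm["min"] for vm in vms)
    headroom = total_max - sum(vm["min"] for vm in vms)
    alloc = {}
    for vm in vms:
        extra = spare * (vm["max"] - vm["min"]) // headroom
        extra -= extra % GRANULARITY_MB                # 2 MB increments
        alloc[vm["name"]] = vm["min"] + extra
    return alloc

vms = [{"name": "a", "min": 1024, "max": 4096},
       {"name": "b", "min": 1024, "max": 4096}]
print(balloon(vms, 6144))  # each VM squeezed below its 4096 MB maximum
```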

    People using the SMI-S adapter today with XS 5.6 FP1/SP2 and the StorageLink Gateway should not upgrade to XS6 at the time of the release, but should wait until support for this is added post-release.

    The legacy NetApp and Dell EQL adapters are still in the code, but mainly for users upgrading to XenServer 6 who are using the legacy adapter today. New SRs created from XenCenter will use the new integrated SL adapter. SMI-S plans are TBD (see previous slides).

    For many organizations, using virtualized server infrastructure for disaster recovery represents a significant opportunity to achieve disaster plans at a fraction of the cost of maintaining costly physical infrastructure at remote sites.

    However, disaster recovery implementation and management is still an extremely complex undertaking for which many IT organizations don't have the expertise or administrative cycles.

    Citrix StorageLink Site Recovery simplifies the setup and configuration of disaster recovery for Hyper-V VMs by integrating with array-based DR controls for replication, fail-over, and testing.

    Controls for setting up replication are the same regardless of which array you're connected to; array-specific attributes (such as whether replication is on a LUN basis or a pool basis) depend on the array manufacturer and array setup.

    Citrix Lab Manager greatly simplifies the management of IT lab environments. From selecting and provisioning an environment, to isolating a specific state where a potential problem is seen for investigation, to the final reclaim of the physical or virtual resources being used, Lab Manager standardizes the practices involved and delivers great efficiency and agility to the manager of the resources.

    XenServer Web Console goals: enable XenServer management from a web-based console, and offer VM-level delegation so end users can manage their own VMs.

    Web Self Service delivers remote management: IT admins have long wanted a means to manage VMs remotely via a browser-based, non-Windows platform. It also delivers end-user self service: WSS allows IT to delegate routine management tasks to the application/VM owner, which satisfies the more strategic goal of helping IT enable customer self-service in the datacenter.

    Finally, WSS also provides a foundation for future innovation in the areas of web-based management, self service, and an open cloud-director layer for cross-platform management.

    The traditional use case is shown on the left: each blade or workstation needed a GPU installed, and Windows was installed physically.

    On the right we have the GPU pass-thru use case. We can install a number of GPUs in the XenServer host, and assign them to the Virtual machines.

    The actual savings will be determined by the number of GPUs in the server, or the capabilities of the new multi-GPU cards coming from vendors such as NVIDIA.

    VMware appliance imports are supported from: the VMware OVF tool, VMware Studio, and ESX 3.5/4.0/4.1. Archiving will create an .ova file; compression will create an ova.gz file.

    By default the guest VM will use the fast path for network traffic; however, a regular VIF backup path is available, and the VM will fall back to this path during migration to a different host. If a Solarflare SR-IOV adapter is available on the target host, the guest will switch back to the fast path after migration.

    With software-based iSCSI: all servers connect to the same LUN; virtual disk drives are individual logical volumes on the LUN, similar to a local disk setup; only the server running a VM connects to the individual virtual disk for that VM; and a special master server coordinates which servers connect to which virtual disk drives.
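The coordination rule in the software-iSCSI description above (every host sees the LUN, but each virtual disk is attached by at most one host, arbitrated by the master) can be sketched as follows; this is an illustration of the single-attach rule, not XenServer's actual storage manager.

```python
# Toy master-side bookkeeping: grant each virtual disk (a logical volume
# on the shared LUN) to at most one host at a time.

class Master:
    def __init__(self):
        self.attached = {}   # vdi -> host currently holding it

    def attach(self, vdi, host):
        owner = self.attached.get(vdi)
        if owner is not None and owner != host:
            raise RuntimeError(f"{vdi} already attached on {owner}")
        self.attached[vdi] = host

    def detach(self, vdi):
        self.attached.pop(vdi, None)

m = Master()
m.attach("vdi-1", "host1")
try:
    m.attach("vdi-1", "host2")       # refused: single-attach rule
except RuntimeError as e:
    print(e)
m.detach("vdi-1")
m.attach("vdi-1", "host2")           # allowed again after detach
```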

    The right side of this picture should look pretty familiar by now. We have the Provisioning Server with a couple of vDisks housed on network storage. On the left I now have two different types of systems. One is a server that has been virtualized using XenServer and is set up to run two VMs. The other is a regular bare-metal server.

    Both of these can be provisioned concurrently from the Provisioning Server, but more importantly, if you notice, both the bare-metal server and one of the VMs are now running the SAP workload from the same vDisk. We didn't need to export and create a modified copy to run on the VM. A single vDisk works for both. By installing the XenServer drivers when we create the image, that image is instantly ready to be streamed to both physical and virtual machines.

    This maximizes your flexibility while minimizing the number of workload images to manage.

    So how can you use this? Say for example your QA environment is virtualized, but you run bare-metal servers in production. Without this capability you will most likely build and test your workload in a VM in QA and then have to re-create it on a bare-metal server when you go to deploy in production. This adds delays and opportunities for mistakes. However, with this capability, you can build and test your workloads in the virtualized QA environment and then immediately move and stream them to physical servers in production faster and more reliably than before.

    Another example might be a round the clock strategy where you freely move workloads between physical and virtual servers based on the demand at the time of day. You might run XenApp on physical servers during the day when your employees are using office applications, and then move them to virtual servers in the evening when user activity is low and instead run that big analytics package on the physical servers so that reports are ready in the morning.

    Because you can re-provision systems in minutes, you can be much more aggressive about maximizing server utilization at all hours of the day.