HA Active/Active with GFS2



Author: Ho Sy Tam
Email: hosytam@gmail.com
Written under the guidance of Nguyen Van Thang

GUIDE TO CONFIGURING A CLUSTER FILE SYSTEM IN AN ACTIVE/ACTIVE MODEL WITH GFS2 (Global File System 2)
1 - Configuration information:
Requirement: two servers configured Active/Active that share a single cluster disk on SAN storage; in other words, both servers write data to the same LUN on the SAN. (In a conventional cluster, only one server at a time is allowed to access the disk area and write data, while the other server is purely standby and may not touch that disk; only when the primary server fails does the standby server gain access.) This guide shows how to configure a Cluster File System in an Active/Active model so that both servers can read and write at the same time, just like an ordinary share.
The system consists of 2 servers: mediasrv01 and mediasrv02.

- Server information (mediasrv01)
* Port 1: Ethernet port used for the service network
  IP Address: 192.168.100.19
  Subnet Mask: 255.255.255.0
  Default Gateway: 192.168.100.1
  DNS: 8.8.8.8
* Port 4: this port connects directly to server mediasrv02
  IP Address: 10.0.0.2 (communicates with the iDRAC over the service link)
  Subnet Mask: 255.255.255.0
* iDRAC port
  IP Address: 192.168.100.17
  Subnet Mask: 255.255.255.0
  Default Gateway: 192.168.100.1

- Server information (mediasrv02)
* Port 1: Ethernet port used for the service network
  IP Address: 192.168.100.20
  Subnet Mask: 255.255.255.0
  Default Gateway: 192.168.100.1
  DNS: 8.8.8.8
* Port 4: this port connects directly to server mediasrv01
  IP Address: 10.0.0.3 (communicates with the iDRAC over the service link)
  Subnet Mask: 255.255.255.0
* iDRAC port
  IP Address: 192.168.100.18
  Subnet Mask: 255.255.255.0
  Default Gateway: 192.168.100.1
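Section 2.2 below only points at /etc/sysconfig/network-scripts/; as a minimal sketch, here is how the address plan above could be mapped onto interface files on mediasrv01, assuming the service NIC is named em1 and the interconnect NIC em4 (both names are assumptions, adjust to your hardware):

# /etc/sysconfig/network-scripts/ifcfg-em1  (service port, assumed name)
DEVICE=em1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.100.19
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
DNS1=8.8.8.8

# /etc/sysconfig/network-scripts/ifcfg-em4  (direct link to mediasrv02, assumed name)
DEVICE=em4
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.0.0.2
NETMASK=255.255.255.0

On mediasrv02 the same two files would carry 192.168.100.20 and 10.0.0.3 respectively.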
2 - Step-by-step configuration:

2.1 - Configuration on the servers (mediasrv01 and mediasrv02)
>> Perform this step on both servers.
- Configure the hosts file located at /etc/hosts. The content of the hosts file is shown in the original screenshot. Save the file, then copy it to mediasrv02:
[root@mediasrv01 ~]# scp /etc/hosts root@192.168.100.20:/etc/
Reboot both mediasrv01 and mediasrv02.
- Disable SELinux:
[root@mediasrv01 ~]# vim /etc/sysconfig/selinux
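The hosts file itself only appears as a screenshot in the original. A sketch that is consistent with the host names and addresses used later in this guide (the exact mapping is therefore an assumption), together with the usual SELinux change:

# /etc/hosts (assumed mapping)
192.168.100.19   mediasrv01
192.168.100.20   mediasrv02
10.0.0.2         mediasrv01-private
10.0.0.3         mediasrv02-private
192.168.100.17   mediasrv01-ipmi
192.168.100.18   mediasrv02-ipmi

# /etc/sysconfig/selinux: disable SELinux (takes effect after the reboot mentioned above)
SELINUX=disabled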
2.2 - Set the IP addresses and disable the unused network ports
>> Perform this step on both servers.
On mediasrv01 only port 1 and port 4 are currently in use (ports 2 and 3 are idle), so we can disable the unused ones. The interface configuration files are located under:
[root@localhost ~]# cd /etc/sysconfig/network-scripts/

3.3 Check the firewall on both servers (mediasrv01 and mediasrv02)
>> Perform this step on both servers.
[root@mediasrv01 ~]# iptables -L
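A minimal sketch of keeping the unused ports down and letting cluster traffic through the firewall, assuming the idle NICs are named em2/em3 and that firewalld is in use (both are assumptions; the original only runs iptables -L):

# keep the unused interfaces down at boot (interface names are assumptions)
sed -i 's/^ONBOOT=.*/ONBOOT=no/' /etc/sysconfig/network-scripts/ifcfg-em2
sed -i 's/^ONBOOT=.*/ONBOOT=no/' /etc/sysconfig/network-scripts/ifcfg-em3
# if firewalld is active, allow pcsd/corosync/dlm traffic between the nodes
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload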
3.4 Check the SAN configuration and configure multipath on both servers (mediasrv01 and mediasrv02)
>> Perform this step on both servers.
Useful commands:
  lvmdiskscan
  lvdisplay, lvs
  pvdisplay, pvs
  multipath -ll
The multipath configuration file is /etc/multipath.conf; fix the device aliases in it as follows:

multipaths {
    multipath {
        wwid 36006048000028350131253594d303030
        alias mpatha
    }
    multipath {
        wwid 36006048000028350131253594d303041
        alias mpathb
    }
    multipath {
        wwid 36006048000028350131253594d303145
        alias mpathc
    }
    multipath {
        wwid 36006048000028350131253594d303334
        alias mpathd
    }
}

[root@localhost ~]# multipath -ll
mpathb (3600a098000b7257b00000d325985b53b) dm-4 DELL ,MD38xxf
size=4.0G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 1:0:0:2  sdc 8:32  active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 1:0:1:2  sde 8:64  active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 12:0:0:2 sdg 8:96  active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 12:0:1:2 sdi 8:128 active ready running
mpatha (3600a098000b7257b00000bfc59839c59) dm-3 DELL ,MD38xxf
size=300G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 1:0:0:1  sdb 8:16  active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 1:0:1:1  sdd 8:48  active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 12:0:0:1 sdf 8:80  active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 12:0:1:1 sdh 8:112 active ready running

>> Combine the specific situation of your own machine with the output of multipath -ll to finish configuring /etc/multipath.conf.
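When a new LUN has to be added to /etc/multipath.conf, its WWID is the value printed in parentheses by multipath -ll; it can also be read directly from one of the underlying path devices. A small sketch (the path /dev/sdb is only an illustration, and reloading multipathd this way assumes the stock RHEL/CentOS 7 unit file):

[root@mediasrv01 ~]# /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
[root@mediasrv01 ~]# systemctl reload multipathd    # re-read /etc/multipath.conf after editing
[root@mediasrv01 ~]# multipath -ll                  # confirm the new alias is applied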
3.5 Create the mount point
>> Perform this step on both servers.
[root@mediasrv01 ~]# mkdir /filevideo

3.6 Install and remove the required packages
>> Perform this step on both servers.
[root@mediasrv01 ~]# yum -y install pcs pacemaker fence-agents-all gfs2-utils lvm2-cluster ntp
[root@mediasrv01 ~]# yum remove NetworkManager

3.7 Set a password for the hacluster account
>> Perform this step on both servers.
[root@mediasrv01 ~]# passwd hacluster
Set the password for the hacluster account.

3.8 Synchronize NTP time between the two servers
>> Perform this step on both servers.
[root@mediasrv01 ~]# timedatectl
[root@mediasrv01 ~]# yum install -y ntp
[root@mediasrv01 ~]# systemctl enable ntpd
[root@mediasrv01 ~]# systemctl start ntpd
[root@mediasrv01 ~]# ntpq -p
[root@mediasrv01 ~]# ntpstat
[root@mediasrv01 ~]# yum remove chrony
[root@mediasrv01 ~]# timedatectl set-ntp true
[root@mediasrv01 ~]# timedatectl set-ntp 0
[root@mediasrv01 ~]# timedatectl set-timezone Asia/Ho_Chi_Minh
[root@mediasrv01 ~]# timedatectl set-ntp 1
[root@mediasrv01 ~]# hwclock --systohc
[root@mediasrv01 ~]# timedatectl

3.9 Start the cluster service
>> Perform this step on both servers.
[root@mediasrv01 ~]# systemctl start pcsd.service; systemctl enable pcsd.service

3.10 Enable clustering for LVM
>> Perform this step on both servers.
[root@mediasrv01 ~]# /sbin/lvmconf --enable-cluster
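If you prefer to script steps 3.7 to 3.10 on both nodes instead of running them interactively, a minimal sketch (the password 12345678 matches the one used later in this guide; --stdin is specific to the RHEL/CentOS passwd):

echo "12345678" | passwd --stdin hacluster
systemctl enable ntpd pcsd.service
systemctl start ntpd pcsd.service
lvmconf --enable-cluster
grep locking_type /etc/lvm/lvm.conf    # should now show locking_type = 3 (clustered locking)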
3.11 Add the nodes to the cluster
>> Perform this step on mediasrv01 only (the primary server).
[root@mediasrv01 ~]# pcs cluster auth mediasrv01-private mediasrv02-private
The screen will display:
Username: hacluster
Password: 12345678
mediasrv02-private: Authorized
mediasrv01-private: Authorized

3.12 Enable IPMI in the iDRAC management interface (enable IPMI on both physical servers)
Before configuring fencing, we have to open the iDRAC management interface in a browser and enable IPMI on the iDRAC of both physical servers, mediasrv01 and mediasrv02, as shown in the screenshot below.
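Before creating the fence devices it is worth checking that IPMI over LAN on each iDRAC actually responds. A quick test from either node, assuming the ipmitool package is available (the host names and credentials match those used elsewhere in this guide):

[root@mediasrv01 ~]# yum -y install ipmitool
[root@mediasrv01 ~]# ipmitool -I lanplus -H mediasrv01-ipmi -U root -P 12345678 chassis status
[root@mediasrv01 ~]# ipmitool -I lanplus -H mediasrv02-ipmi -U root -P 12345678 chassis status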
3.13 Configure fencing
Next, open the cluster management page at https://192.168.100.19:2224 and log in with Username: hacluster, Password: 12345678, then click Login. The following window appears; click Create New and a dialog box opens:
Fill in the information as shown in the screenshot above and click Create Cluster. Once the cluster has been created, click on sharemedia inside it to see parameters like these:
-> For mediasrv01
-> For mediasrv02
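For reference, the same cluster can be created from the command line instead of the Create Cluster dialog; a sketch using the cluster name sharemedia and the node names from this guide (this replaces the GUI step, so do not do both):

[root@mediasrv01 ~]# pcs cluster setup --name sharemedia mediasrv01-private mediasrv02-private
[root@mediasrv01 ~]# pcs cluster start --all
[root@mediasrv01 ~]# pcs cluster enable --all    # optional: bring the cluster up automatically at boot
[root@mediasrv01 ~]# pcs status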
Next we configure the fence devices on server mediasrv01 from the command line as follows:
[root@mediasrv01 ~]# pcs stonith create fence_mediasrv01_ipmi fence_ipmilan pcmk_host_list="mediasrv01-private" ipaddr="mediasrv01-ipmi" action="reboot" lanplus="1" login="root" passwd="12345678" delay=15 op monitor interval=60s
[root@mediasrv01 ~]# pcs stonith create fence_mediasrv02_ipmi fence_ipmilan pcmk_host_list="mediasrv02-private" ipaddr="mediasrv02-ipmi" action="reboot" lanplus="1" login="root" passwd="12345678" delay=15 op monitor interval=60s
Once this is configured we get the result shown in the screenshot below.
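To confirm the fence devices were registered and can really reach the iDRACs, a short check (note that the fence test actually reboots the target node, so only run it during a maintenance window):

[root@mediasrv01 ~]# pcs stonith show
[root@mediasrv01 ~]# pcs stonith fence mediasrv02-private    # WARNING: this really reboots mediasrv02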
After the fence devices have been created, open the cluster management web interface to see them listed: go to https://192.168.100.19:2224, log in with Username: hacluster and Password: 12345678, and click Login. Inside, click on sharemedia and then choose FENCE DEVICES; the list shows fence_mediasrv01_ipmi and fence_mediasrv02_ipmi, which we created on the command line.
Note: if after rebooting a server the fence_mediasrv01_ipmi or fence_mediasrv02_ipmi service does not start, stop the cluster on that node first and only then start it again, as in the commands below:
[root@mediasrv01 ~]# pcs cluster stop mediasrv01-private
[root@mediasrv01 ~]# pcs cluster start mediasrv01-private
[root@mediasrv01 ~]# pcs cluster stop mediasrv02-private
[root@mediasrv01 ~]# pcs cluster start mediasrv02-private
After the steps above are complete, we run some additional cluster configuration commands (this part of the document was adjusted during actual use).
>> Configure on mediasrv01:
[root@mediasrv01 ~]# pcs property set no-quorum-policy=freeze
[root@mediasrv01 ~]# pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
[root@mediasrv01 ~]# pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
[root@mediasrv01 ~]# pcs constraint order start dlm-clone then clvmd-clone
[root@mediasrv01 ~]# pcs constraint colocation add clvmd-clone with dlm-clone
[root@mediasrv01 ~]# lvmdiskscan
[root@mediasrv01 ~]# pvcreate /dev/mapper/mpathc
[root@mediasrv01 ~]# vgcreate -Ay -cy cluster_vg /dev/mapper/mpathc
[root@mediasrv01 ~]# lvcreate -l 100%FREE -n cluster_lv cluster_vg
[root@mediasrv01 ~]# mkfs.gfs2 -j2 -p lock_dlm -t sharemedia:gfs2-share /dev/cluster_vg/cluster_lv
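For reference: -j2 creates two journals (one per cluster node), -p lock_dlm selects the DLM lock protocol, and the -t value must be <cluster-name>:<fs-name>, so sharemedia has to match the cluster name created earlier. A quick way to double-check the result (tunegfs2 ships with gfs2-utils; treat the exact invocation as an assumption):

[root@mediasrv01 ~]# vgs cluster_vg && lvs cluster_vg
[root@mediasrv01 ~]# tunegfs2 -l /dev/cluster_vg/cluster_lv    # prints the GFS2 superblock, including the lock table name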
[root@mediasrv01 ~]# pcs resource create clusterfs Filesystem device="/dev/cluster_vg/cluster_lv" directory="/filevideo" fstype="gfs2" options="noatime" op monitor interval=10s on-fail=fence clone interleave=true
[root@mediasrv01 ~]# mount | grep filevideo
[root@mediasrv01 ~]# pcs constraint order start clvmd-clone then clusterfs-clone
[root@mediasrv01 ~]# pcs constraint colocation add clusterfs-clone with clvmd-clone
With that, the configuration is complete. Log in to the management interface at:
https://192.168.100.19:2224
https://192.168.100.20:2224
Note: start mediasrv01 first, and only then start the second host.
[root@mediasrv01 ~]# pcs cluster stop mediasrv02-private
[root@mediasrv01 ~]# pcs cluster start mediasrv02-private
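To verify that the Active/Active share really works, a simple test is to write a file on one node and read it on the other (the file name is only an illustration):

[root@mediasrv01 ~]# pcs status resources
[root@mediasrv01 ~]# echo "hello from mediasrv01" > /filevideo/test.txt
[root@mediasrv02 ~]# cat /filevideo/test.txt    # run on mediasrv02; the file should appear immediately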
3.14 Growing the LVM volume under GFS2
Using the lvmdiskscan command to check, we can see that /dev/cluster_vg/cluster_lv currently only has a size of 3.52 TB. On the SAN storage I have already grown the LUN, so its total size is now 3.91 TB, as shown in the screenshot below.
So how do we grow /dev/cluster_vg/cluster_lv to pick up the 3.91 TB that was added on the SAN storage?
First, disable the Resources by logging in at the following URL: https://192.168.100.19:2224/login, enter the username and password, and click Login.
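The original document stops at this point. As a rough sketch of how such a grow is commonly finished once the LUN has been enlarged on the SAN (every command below is an assumption based on standard multipath/LVM/GFS2 tooling, not on the original text):

# make the paths and the multipath map see the new LUN size
[root@mediasrv01 ~]# rescan-scsi-bus.sh -s             # from the sg3_utils package
[root@mediasrv01 ~]# multipathd -k"resize map mpathc"
# grow the physical volume, the logical volume, and finally the GFS2 file system
[root@mediasrv01 ~]# pvresize /dev/mapper/mpathc
[root@mediasrv01 ~]# lvextend -l +100%FREE /dev/cluster_vg/cluster_lv
[root@mediasrv01 ~]# gfs2_grow /filevideo              # gfs2_grow only works on a mounted GFS2 file system, so the clusterfs resource must be running again at this point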