Pacemaker : Configure GFS2 Filesystem |
Set up a GFS2 Filesystem resource on the cluster. This example is based on the environment shown below and assumes that 1) the basic cluster settings are done and 2) a fence device is configured.

                         +--------------------+
                         |  [ ISCSI Target ]  |
                         |    dlp.srv.world   |
                         +---------+----------+
                              10.0.0.30|
                                       |
+----------------------+               |               +----------------------+
| [ Cluster Node#1 ]   |10.0.0.51  |  10.0.0.52| [ Cluster Node#2 ]   |
|   node01.srv.world   +-----------+-----------+   node02.srv.world   |
|                      |                       |                      |
+----------------------+                       +----------------------+
| [1] | Create shared storage on the ISCSI Target for the GFS2 Filesystem, refer to here. In this example, ISCSI storage was created as IQN [iqn.2022-01.world.srv:dlp.target02] with [10G] size. |
| [2] | On all Cluster Nodes, install the required packages and change the LVM settings. |
# enable [HighAvailability, ResilientStorage] repos (disabled by default) and install packages
[root@node01 ~]# dnf --enablerepo=highavailability,resilientstorage -y install lvm2-lockd gfs2-utils dlm

[root@node01 ~]# vi /etc/lvm/lvm.conf
# line 1172 : uncomment and change
use_lvmlockd = 1
|
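As an optional sanity check that is not part of the original steps, you can confirm the setting is picked up in the active LVM configuration with [lvmconfig] (shipped with the lvm2 package); the exact output line shown below is assumed.

# print the effective value of global/use_lvmlockd
[root@node01 ~]# lvmconfig global/use_lvmlockd
use_lvmlockd=1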
| [3] | On one Node in the cluster, configure the DLM resource for the GFS2 filesystem. |
# set [no-quorum-policy=freeze] on GFS2
[root@node01 ~]# pcs property set no-quorum-policy=freeze

# create controld resource
# [dlm] ⇒ any name you like
# [group] ⇒ any group name
[root@node01 ~]# pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence group locking --future

# create clone of [locking] to activate it on all nodes in cluster
[root@node01 ~]# pcs resource clone locking interleave=true

# create lvmlockd resource
# [lvmlockdd] ⇒ any name
# [group] ⇒ the same group with controld resource
[root@node01 ~]# pcs resource create lvmlockdd ocf:heartbeat:lvmlockd op monitor interval=30s on-fail=fence group locking --future

# verify status
# OK if all [Started]
[root@node01 ~]# pcs status --full
Cluster name: ha_cluster
Cluster Summary:
* Stack: corosync (Pacemaker is running)
* Current DC: node02.srv.world (2) (version 2.1.6-10.1.el9-6fdc9deea29) - partition with quorum
* Last updated: Tue Dec 5 12:45:00 2023 on node01.srv.world
* Last change: Tue Dec 5 12:44:49 2023 by root via cibadmin on node01.srv.world
* 2 nodes configured
* 6 resource instances configured
Node List:
* Node node01.srv.world (1): online, feature set 3.17.4
* Node node02.srv.world (2): online, feature set 3.17.4
Full List of Resources:
* scsi-shooter (stonith:fence_scsi): Started node01.srv.world
* Resource Group: ha_group:
* lvm_ha (ocf:heartbeat:LVM-activate): Started node01.srv.world
* Clone Set: locking-clone [locking]:
* Resource Group: locking:0:
* dlm (ocf:pacemaker:controld): Started node01.srv.world
* lvmlockdd (ocf:heartbeat:lvmlockd): Started node01.srv.world
* Resource Group: locking:1:
* dlm (ocf:pacemaker:controld): Started node02.srv.world
* lvmlockdd (ocf:heartbeat:lvmlockd): Started node02.srv.world
Migration Summary:
Fencing History:
* unfencing of node01.srv.world successful: delegate=node01.srv.world, client=pacemaker-controld.4616, origin=node02.srv.world, completed='2023-12-05 11:15:32.582894 +09:00'
* unfencing of node02.srv.world successful: delegate=node02.srv.world, client=pacemaker-fenced.4606, origin=node01.srv.world, completed='2023-12-05 11:15:32.564894 +09:00'
Tickets:
PCSD Status:
node01.srv.world: Online
node02.srv.world: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
|
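If you want to review how the [locking] group and its clone were defined before moving on, one possible way is to dump the resource configuration with [pcs] (a sketch; output omitted here).

# show the configuration of the cloned [locking] group
[root@node01 ~]# pcs resource config locking-clone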
| [4] | On one Node in the cluster, configure a shared volume on the shared storage. [sdb] in the example below is the shared storage from the ISCSI Target. |
# discover targets
[root@node01 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.0.30
10.0.0.30:3260,1 iqn.2022-01.world.srv:dlp.target01
10.0.0.30:3260,1 iqn.2022-01.world.srv:dlp.target02

# login
[root@node01 ~]# iscsiadm -m node --login --target iqn.2022-01.world.srv:dlp.target02
[root@node01 ~]# iscsiadm -m session -o show
tcp: [1] 10.0.0.30:3260,1 iqn.2022-01.world.srv:dlp.target01 (non-flash)
tcp: [2] 10.0.0.30:3260,1 iqn.2022-01.world.srv:dlp.target02 (non-flash)

# set LVM partition
[root@node01 ~]# parted --script /dev/sdb "mklabel gpt"
[root@node01 ~]# parted --script /dev/sdb "mkpart primary 0% 100%"
[root@node01 ~]# parted --script /dev/sdb "set 1 lvm on"

# create physical volume
[root@node01 ~]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created.

# create shared volume group
[root@node01 ~]# vgcreate --shared vg_gfs2 /dev/sdb1
  Volume group "vg_gfs2" successfully created
  VG vg_gfs2 starting dlm lockspace
  Starting locking.  Waiting until locks are ready...
|
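Optionally, you can confirm that a DLM lockspace was created for the new volume group with [dlm_tool] from the [dlm] package installed in step [2]. The lockspace name is assumed to follow the usual [lvm_<vgname>] pattern; output is omitted here because it varies by environment.

# list active DLM lockspaces ([lvm_vg_gfs2] is expected for the shared VG)
[root@node01 ~]# dlm_tool ls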
| [5] | Move to the other node and start the lock manager for the shared volume. |
[root@node02 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.0.30
10.0.0.30:3260,1 iqn.2022-01.world.srv:dlp.target01
10.0.0.30:3260,1 iqn.2022-01.world.srv:dlp.target02

[root@node02 ~]# iscsiadm -m node --login --target iqn.2022-01.world.srv:dlp.target02
[root@node02 ~]# iscsiadm -m session -o show
tcp: [1] 10.0.0.30:3260,1 iqn.2022-01.world.srv:dlp.target01 (non-flash)
tcp: [2] 10.0.0.30:3260,1 iqn.2022-01.world.srv:dlp.target02 (non-flash)

# add the shared device to the LVM devices file
[root@node02 ~]# lvmdevices --adddev /dev/sdb1

# start the DLM lockspace for the shared volume group
[root@node02 ~]# vgchange --lock-start vg_gfs2
  VG vg_gfs2 starting dlm lockspace
  Starting locking.  Waiting until locks are ready...

[root@node02 ~]# vgs
  VG      #PV #LV #SN Attr   VSize   VFree
  cs        1   2   0 wz--n- <29.00g      0
  vg_gfs2   1   0   0 wz--ns  <9.98g <9.98g
|
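Both nodes must also be able to reach the iSCSI LUN after a reboot. If your open-iscsi defaults do not already log in automatically, one possible way (an assumption, not part of the original steps) is to set the node startup mode explicitly on each node.

# make the iSCSI login persistent across reboots (repeat on node01)
[root@node02 ~]# iscsiadm -m node -T iqn.2022-01.world.srv:dlp.target02 -p 10.0.0.30 --op update -n node.startup -v automatic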
| [6] | Go back to the node where you created the shared volume, then create a logical volume and configure the GFS2 filesystem. |
# create logical volume
[root@node01 ~]# lvcreate -l 100%FREE -n lv_gfs2 vg_gfs2
  Logical volume "lv_gfs2" created.

# format with GFS2
[root@node01 ~]# mkfs.gfs2 -j2 -p lock_dlm -t ha_cluster:gfs2-01 /dev/vg_gfs2/lv_gfs2
This will destroy any data on /dev/dm-3
Are you sure you want to proceed? [y/n] y
Discarding device contents (may take a while on large devices): Done
Adding journals: Done
Building resource groups: Done
Creating quota file: Done
Writing superblock and syncing: Done
Device: /dev/vg_gfs2/lv_gfs2
Block size: 4096
Device size: 9.98 GB (2615296 blocks)
Filesystem size: 9.98 GB (2615293 blocks)
Journals: 2
Journal size: 32MB
Resource groups: 42
Locking protocol: "lock_dlm"
Lock table: "ha_cluster:gfs2-01"
UUID: 1f5ed3d1-4785-4ecd-bc02-02c282bac54f
# create LVM-activate resource
# [shared_lv] ⇒ any name
# [group] ⇒ any group name
[root@node01 ~]# pcs resource create shared_lv ocf:heartbeat:LVM-activate lvname=lv_gfs2 vgname=vg_gfs2 activation_mode=shared vg_access_mode=lvmlockd group shared_vg --future

# create clone of [LVM-activate]
[root@node01 ~]# pcs resource clone shared_vg interleave=true

# set start order as [locking] → [shared_vg]
[root@node01 ~]# pcs constraint order start locking-clone then shared_vg-clone
Adding locking-clone shared_vg-clone (kind: Mandatory) (Options: first-action=start then-action=start)

# set that [shared_vg] and [locking] start on the same node
[root@node01 ~]# pcs constraint colocation add shared_vg-clone with locking-clone

# create Filesystem resource
# [shared_fs] ⇒ any name
# [device] ⇒ device formatted with GFS2
# [directory] ⇒ any directory you'd like to mount the GFS2 filesystem on
# [group] ⇒ the same group with LVM-activate resource
[root@node01 ~]# pcs resource create shared_fs ocf:heartbeat:Filesystem device="/dev/vg_gfs2/lv_gfs2" directory="/home/gfs2-share" fstype="gfs2" options=noatime op monitor interval=10s on-fail=fence group shared_vg --future

# verify status
# OK if all [Started]
[root@node01 ~]# pcs status --full
Cluster name: ha_cluster
Cluster Summary:
* Stack: corosync (Pacemaker is running)
* Current DC: node02.srv.world (2) (version 2.1.6-10.1.el9-6fdc9deea29) - partition with quorum
* Last updated: Tue Dec 5 13:11:43 2023 on node01.srv.world
* Last change: Tue Dec 5 13:11:19 2023 by root via cibadmin on node01.srv.world
* 2 nodes configured
* 10 resource instances configured
Node List:
* Node node01.srv.world (1): online, feature set 3.17.4
* Node node02.srv.world (2): online, feature set 3.17.4
Full List of Resources:
* scsi-shooter (stonith:fence_scsi): Started node01.srv.world
* Resource Group: ha_group:
* lvm_ha (ocf:heartbeat:LVM-activate): Started node01.srv.world
* Clone Set: locking-clone [locking]:
* Resource Group: locking:0:
* dlm (ocf:pacemaker:controld): Started node01.srv.world
* lvmlockdd (ocf:heartbeat:lvmlockd): Started node01.srv.world
* Resource Group: locking:1:
* dlm (ocf:pacemaker:controld): Started node02.srv.world
* lvmlockdd (ocf:heartbeat:lvmlockd): Started node02.srv.world
* Clone Set: shared_vg-clone [shared_vg]:
* Resource Group: shared_vg:0:
* shared_lv (ocf:heartbeat:LVM-activate): Started node01.srv.world
* shared_fs (ocf:heartbeat:Filesystem): Started node01.srv.world
* Resource Group: shared_vg:1:
* shared_lv (ocf:heartbeat:LVM-activate): Started node02.srv.world
* shared_fs (ocf:heartbeat:Filesystem): Started node02.srv.world
Migration Summary:
Fencing History:
* unfencing of node01.srv.world successful: delegate=node01.srv.world, client=pacemaker-controld.4616, origin=node02.srv.world, completed='2023-12-05 11:15:32.582894 +09:00'
* unfencing of node02.srv.world successful: delegate=node02.srv.world, client=pacemaker-fenced.4606, origin=node01.srv.world, completed='2023-12-05 11:15:32.564894 +09:00'
Tickets:
PCSD Status:
node01.srv.world: Online
node02.srv.world: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
# OK if GFS2 filesystem is mounted on both nodes
[root@node01 ~]# df -hT /home/gfs2-share
Filesystem                  Type  Size  Used Avail Use% Mounted on
/dev/mapper/vg_gfs2-lv_gfs2 gfs2   10G   67M   10G   1% /home/gfs2-share

[root@node02 ~]# df -hT /home/gfs2-share
Filesystem                  Type  Size  Used Avail Use% Mounted on
/dev/mapper/vg_gfs2-lv_gfs2 gfs2   10G   67M   10G   1% /home/gfs2-share
|
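As a final optional check (the file name [test.txt] is just an example), write a file on one node and confirm the other node sees it through the shared GFS2 filesystem.

# write a test file from node01
[root@node01 ~]# echo "GFS2 shared write test" > /home/gfs2-share/test.txt

# read the same file from node02
[root@node02 ~]# cat /home/gfs2-share/test.txt
GFS2 shared write test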