Pacemaker : Set Cluster Resource (NFS)
Set an NFS Cluster Resource and configure an Active/Passive NFS Server.
This example is based on the following environment.
1) Basic Cluster setting is done
2) Fence Device is configured
3) LVM shared storage is configured

                        +--------------------+
                        |  [ ISCSI Target ]  |
                        |   dlp.srv.world    |
                        +----------+---------+
                          10.0.0.30|
                                   |
+----------------------+           |           +----------------------+
| [ Cluster Node#1 ]   |10.0.0.51  |  10.0.0.52|  [ Cluster Node#2 ]  |
|  node01.srv.world    +-----------+-----------+   node02.srv.world   |
|      NFS Server      |           |           |      NFS Server      |
+----------------------+           |           +----------------------+
                             vip:10.0.0.60
                                   |
                        +----------+---------+
                        |  [ NFS Clients ]   |
                        |                    |
                        +--------------------+
[1] On all Cluster Nodes, if Firewalld is running, allow the NFS service.
[root@node01 ~]# firewall-cmd --add-service=nfs
success
# if you use NFSv3, allow these services, too
[root@node01 ~]# firewall-cmd --add-service={nfs3,mountd,rpc-bind}
success
[root@node01 ~]# firewall-cmd --runtime-to-permanent
success
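Run the same commands on node02 as well. As a quick optional check, the standard [firewall-cmd --list-services] subcommand prints the currently allowed services on each node; the output below is illustrative and depends on your configuration:

# confirm the NFS related services are allowed (example output; yours will differ)
[root@node01 ~]# firewall-cmd --list-services
cockpit dhcpv6-client high-availability mountd nfs nfs3 rpc-bind ssh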
[2] On the node where the LVM shared storage is active in the Cluster, add the NFS resources. [/dev/vg_ha/lv_ha] in the example below is the LVM shared storage.
# current status
[root@node01 ~]# pcs status
Cluster name: ha_cluster
Cluster Summary:
* Stack: corosync
* Current DC: node01.srv.world (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
* Last updated: Fri Mar 25 10:05:59 2022
* Last change: Fri Mar 25 10:01:47 2022 by root via cibadmin on node01.srv.world
* 2 nodes configured
* 2 resource instances configured
Node List:
* Online: [ node01.srv.world node02.srv.world ]
Full List of Resources:
* scsi-shooter (stonith:fence_scsi): Started node01.srv.world
* Resource Group: ha_group:
* lvm_ha (ocf:heartbeat:LVM-activate): Started node01.srv.world
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
# set Filesystem resource
# [nfs_share] : any name
# [device=***] : shared storage
# [directory=***] : mount point
# [--group ***] : set in the same group as the shared storage
[root@node01 ~]# pcs resource create nfs_share ocf:heartbeat:Filesystem device=/dev/vg_ha/lv_ha directory=/home/nfs-share fstype=ext4 --group ha_group

[root@node01 ~]# pcs status
Cluster name: ha_cluster
Cluster Summary:
* Stack: corosync
* Current DC: node01.srv.world (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
* Last updated: Fri Mar 25 10:07:20 2022
* Last change: Fri Mar 25 10:07:01 2022 by root via cibadmin on node01.srv.world
* 2 nodes configured
* 3 resource instances configured
Node List:
* Online: [ node01.srv.world node02.srv.world ]
Full List of Resources:
* scsi-shooter (stonith:fence_scsi): Started node01.srv.world
* Resource Group: ha_group:
* lvm_ha (ocf:heartbeat:LVM-activate): Started node01.srv.world
* nfs_share (ocf:heartbeat:Filesystem): Started node01.srv.world
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
# mounted automatically on the node where the resources started
[root@node01 ~]# df -hT /home/nfs-share
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/vg_ha-lv_ha ext4  9.8G   24K  9.3G   1% /home/nfs-share

# set nfsserver resource
# [nfs_daemon] : any name
# [nfs_shared_infodir=***] : specify a directory where NFS server related files are located
[root@node01 ~]# pcs resource create nfs_daemon ocf:heartbeat:nfsserver nfs_shared_infodir=/home/nfs-share/nfsinfo nfs_no_notify=true --group ha_group

# set IPaddr2 resource
# the virtual IP address clients use to access the NFS service
[root@node01 ~]# pcs resource create nfs_vip ocf:heartbeat:IPaddr2 ip=10.0.0.60 cidr_netmask=24 --group ha_group

# set nfsnotify resource
# [source_host=***] : same as the VIP above
[root@node01 ~]# pcs resource create nfs_notify ocf:heartbeat:nfsnotify source_host=10.0.0.60 --group ha_group

[root@node01 ~]# pcs status
Cluster name: ha_cluster
Cluster Summary:
* Stack: corosync
* Current DC: node01.srv.world (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
* Last updated: Fri Mar 25 10:09:38 2022
* Last change: Fri Mar 25 10:09:31 2022 by root via cibadmin on node01.srv.world
* 2 nodes configured
* 5 resource instances configured
Node List:
* Online: [ node01.srv.world node02.srv.world ]
Full List of Resources:
* scsi-shooter (stonith:fence_scsi): Started node01.srv.world
* Resource Group: ha_group:
* lvm_ha (ocf:heartbeat:LVM-activate): Started node01.srv.world
* nfs_share (ocf:heartbeat:Filesystem): Started node01.srv.world
* nfs_daemon (ocf:heartbeat:nfsserver): Started node01.srv.world
* nfs_vip (ocf:heartbeat:IPaddr2): Started node01.srv.world
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
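Resources in a Pacemaker group start in the order they were added and stop in the reverse order, so the sequence used above (LVM activation, then the filesystem mount, then the NFS server, then the VIP and notify) is intentional. The member order can be reviewed with the standard [pcs resource group list] subcommand; the output shown is what you would expect given the resources created above:

[root@node01 ~]# pcs resource group list
ha_group: lvm_ha nfs_share nfs_daemon nfs_vip nfs_notify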
[3] On the active node where the NFS filesystem is mounted, configure the exportfs settings.
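The directories to be exported must already exist on the mounted shared filesystem, or the exportfs resources may fail to start. If they have not been created yet, make them first on the active node; the path matches the [directory=***] values used below:

[root@node01 ~]# mkdir -p /home/nfs-share/nfs-root/share01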
# set exportfs resource
# [nfs_root] : any name
# [clientspec=*** options=*** directory=***] : exports setting
# [fsid=0] : root point on NFSv4
[root@node01 ~]# pcs resource create nfs_root ocf:heartbeat:exportfs clientspec=10.0.0.0/255.255.255.0 options=rw,sync,no_root_squash directory=/home/nfs-share/nfs-root fsid=0 --group ha_group

# set exportfs resource
[root@node01 ~]# pcs resource create nfs_share01 ocf:heartbeat:exportfs clientspec=10.0.0.0/255.255.255.0 options=rw,sync,no_root_squash directory=/home/nfs-share/nfs-root/share01 fsid=1 --group ha_group

[root@node01 ~]# pcs status
Cluster name: ha_cluster
Cluster Summary:
* Stack: corosync
* Current DC: node01.srv.world (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
* Last updated: Fri Mar 25 10:11:58 2022
* Last change: Fri Mar 25 10:11:41 2022 by root via cibadmin on node01.srv.world
* 2 nodes configured
* 7 resource instances configured
Node List:
* Online: [ node01.srv.world node02.srv.world ]
Full List of Resources:
* scsi-shooter (stonith:fence_scsi): Started node01.srv.world
* Resource Group: ha_group:
* lvm_ha (ocf:heartbeat:LVM-activate): Started node01.srv.world
* nfs_share (ocf:heartbeat:Filesystem): Started node01.srv.world
* nfs_daemon (ocf:heartbeat:nfsserver): Started node01.srv.world
* nfs_vip (ocf:heartbeat:IPaddr2): Started node01.srv.world
* nfs_root (ocf:heartbeat:exportfs): Started node01.srv.world
* nfs_share01 (ocf:heartbeat:exportfs): Started node01.srv.world
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@node01 ~]# showmount -e
Export list for node01.srv.world:
/home/nfs-share/nfs-root         10.0.0.0/255.255.255.0
/home/nfs-share/nfs-root/share01 10.0.0.0/255.255.255.0
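Besides [showmount], the standard [exportfs -v] command on the active node prints the live export table with the effective options, which confirms that [rw], [sync] and [no_root_squash] took effect (output abbreviated; your list of options may be longer):

[root@node01 ~]# exportfs -v
/home/nfs-share/nfs-root
        10.0.0.0/255.255.255.0(rw,sync,wdelay,no_root_squash,...)
/home/nfs-share/nfs-root/share01
        10.0.0.0/255.255.255.0(rw,sync,wdelay,no_root_squash,...)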
[4] Verify the settings by accessing the virtual IP address via NFS from any client computer.
[root@client ~]# mount -t nfs4 10.0.0.60:share01 /mnt
[root@client ~]# df -hT /mnt
Filesystem         Type  Size  Used Avail Use% Mounted on
10.0.0.60:/share01 nfs4  9.8G  512K  9.3G   1% /mnt
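As a final check, it is worth confirming that the Active/Passive configuration actually fails over. A minimal sketch using the standard [pcs node standby] / [pcs node unstandby] subcommands; the client mount above should remain usable after a short pause while the resources move:

# force failover by putting the active node into standby
[root@node01 ~]# pcs node standby node01.srv.world
# the ha_group resources should now show [Started node02.srv.world]
[root@node01 ~]# pcs status
# bring the node back into the cluster when done
[root@node01 ~]# pcs node unstandby node01.srv.world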