How to set up GFS on RHEL/CentOS/Fedora

A clustered file system, or SAN file system, is an enterprise storage file system that can be shared (concurrently accessed for reading and writing) by multiple computers. The sharing machines are usually clustered servers, all of which connect to the same underlying block device on an external storage system. Such a system is commonly a storage area network (SAN).
Examples of such file systems:
* GFS : The Red Hat Global File System
* GPFS : IBM General Parallel File System
* QFS : The Sun Quick File System
* OCFS : Oracle cluster file system
In the rest of this tutorial we will focus on GFS2, the new version of the GFS file system, and on how to mount a shared disk on Fedora, Red Hat or CentOS. GFS (Global File System) is a cluster file system: it allows a cluster of computers to simultaneously use a block device that is shared between them (over FC, iSCSI, NBD, etc.). GFS is free software, distributed under the terms of the GNU General Public License, and was originally developed as part of a thesis project at the University of Minnesota in 1997.
So, let's start this brief configuration.
Install Clustering Software
Install the Clustering and Cluster Storage package groups (available through RHN) on both servers:
[mezgani@node1 ~]$ sudo yum -y groupinstall Clustering
[mezgani@node1 ~]$ sudo yum -y groupinstall "Cluster Storage"
[mezgani@node2 ~]$ sudo yum -y groupinstall Clustering
[mezgani@node2 ~]$ sudo yum -y groupinstall "Cluster Storage"
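Before continuing, you can quickly confirm that the important packages made it onto each node. These package names are the usual RHEL 5 / CentOS 5 ones and may differ slightly on your release:

```shell
rpm -q luci ricci cman gfs2-utils
```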

Next, on the first node, initialize luci by setting the admin password, then enable it at boot and start the service:
[mezgani@node1 ~]$ sudo luci_admin init
Initializing the luci server
Creating the ‘admin’ user
Enter password:

[mezgani@node1 ~]$ sudo chkconfig luci on
[mezgani@node1 ~]$ sudo service luci start

On both nodes, start ricci agent:
[mezgani@node1 ~]$ sudo chkconfig ricci on
[mezgani@node2 ~]$ sudo chkconfig ricci on
[mezgani@node1 ~]$ sudo service ricci start
[mezgani@node2 ~]$ sudo service ricci start

Make sure all the necessary daemons start up with the cluster:
[mezgani@node1 ~]$ sudo chkconfig gfs2 on
[mezgani@node1 ~]$ sudo chkconfig cman on
[mezgani@node1 ~]$ sudo service cman start
[mezgani@node1 ~]$ sudo service gfs2 start
[mezgani@node2 ~]$ sudo chkconfig gfs2 on
[mezgani@node2 ~]$ sudo chkconfig cman on
[mezgani@node2 ~]$ sudo service cman start
[mezgani@node2 ~]$ sudo service gfs2 start
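With cman running on both nodes, it is worth checking that they actually see each other before creating any file system. A quick sketch; the exact output will vary with your cluster name and node list:

```shell
# On either node: check quorum and overall cluster state
sudo cman_tool status
# List each node and its membership state ("M" means joined)
sudo cman_tool nodes
```

If the nodes cannot see each other, fix the cluster first (host names resolvable on both sides, ports open for cman and ricci) before moving on to storage.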

luci is now started, so you can connect to https://node1:8084/ with the admin user and the password you set.
Create a new cluster named 'delta' and add both nodes, using the "locally installed files" option.
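Behind the scenes, luci writes the cluster definition to /etc/cluster/cluster.conf on every node. Here is a minimal sketch of what it might look like for a two-node cluster named 'delta'; the node names, version number and empty fencing section are assumptions, and your generated file will differ:

```xml
<?xml version="1.0"?>
<cluster name="delta" config_version="1">
  <!-- two_node/expected_votes let a two-node cluster keep quorum with one vote -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1" votes="1"/>
    <clusternode name="node2" nodeid="2" votes="1"/>
  </clusternodes>
  <!-- a real deployment needs working fencing devices here -->
  <fencedevices/>
</cluster>
```

Note that GFS2 relies on fencing to keep a failed node from writing to the shared disk, so do not leave fencing unconfigured in production.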

Later, you may use parted to manage partitions on the sda disk and create some partitions.
Parted is an industrial-strength tool for creating, destroying, resizing, checking and copying partitions, and the file systems on them. Unlike fdisk, it supports very large partitions.
For example, here I create a partition of about 1 TB.
[mezgani@node1 ~]$ sudo parted /dev/sda
GNU Parted 1.8.1
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p

Model: HP MSA2012fc (scsi)
Disk /dev/sda: 2400GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
2 1001GB 2400GB 1399GB ext3 primary

(parted) mkpart primary
File system type? [ext2]? ext3
Start? 0
End? 1001GB
(parted) print

Model: HP MSA2012fc (scsi)
Disk /dev/sda: 2400GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 17,4kB 1001GB 1001GB ext3 primary
(parted) quit
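The same partition can be created non-interactively, which is handy for scripting. The device name and sizes here match the session above; adjust them to your disk:

```shell
# Create a ~1 TB primary partition at the start of the GPT disk
sudo parted -s /dev/sda mkpart primary ext3 0 1001GB
# Re-read the partition table so the kernel sees /dev/sda1
sudo partprobe /dev/sda
```

Run partprobe (or reboot) on the other node as well: its kernel will not see the new /dev/sda1 until the partition table is re-read there too.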

After creating the partition, make a GFS2 file system on it with mkfs.gfs2. The -t option takes the form clustername:fsname, and the cluster name must match the cluster created above ('delta'):
[mezgani@node1 ~]$ sudo /sbin/mkfs.gfs2 -p lock_dlm -t delta:gfs2 -j 8 /dev/sda1
This will destroy any data on /dev/sda1.
It appears to contain a ext3 filesystem.

Are you sure you want to proceed? [y/n] y

Device: /dev/sda1
Blocksize: 4096
Device Size 932.25 GB (244384761 blocks)
Filesystem Size: 932.25 GB (244384760 blocks)
Journals: 8
Resource Groups: 3730
Locking Protocol: "lock_dlm"
Lock Table: "delta:gfs2"
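The -j 8 above creates eight journals. GFS2 requires one journal per node that mounts the file system, so eight leaves room to grow beyond the two nodes used here. If you ever need more, journals can be added later with gfs2_jadd while the file system is mounted; the mount point below assumes the /home/disk directory used in the next step:

```shell
# Add one more journal to the mounted GFS2 file system
sudo gfs2_jadd -j 1 /home/disk
```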

Finally, create a mount point on each node and mount our GFS2 file system:
[mezgani@node1 ~]$ sudo mkdir /home/disk
[mezgani@node2 ~]$ sudo mkdir /home/disk
[mezgani@node1 ~]$ sudo mount -o acl -t gfs2 /dev/sda1 /home/disk
[mezgani@node2 ~]$ sudo mount -o acl -t gfs2 /dev/sda1 /home/disk
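To have the file system come back after a reboot, add an entry to /etc/fstab on every node. A sketch, using the device and mount point from above; the _netdev option is an assumption that delays mounting until the network, and thus the cluster stack, is up:

```shell
# Hypothetical /etc/fstab entry for the shared GFS2 volume.
# _netdev delays mounting until the network (and the cluster stack) is up.
FSTAB_LINE='/dev/sda1 /home/disk gfs2 defaults,acl,_netdev 0 0'
echo "$FSTAB_LINE"
# To append it (run on every node):
#   echo "$FSTAB_LINE" | sudo tee -a /etc/fstab
```

Also keep the gfs2 init script enabled (chkconfig gfs2 on, as above), since it is what mounts GFS2 entries from fstab at boot.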

However, if a storage device loses power or is physically disconnected while mounted, file system corruption may occur.
In that case you can check and repair the GFS2 file system with the fsck.gfs2 command, after unmounting the file system on all nodes:

[mezgani@node1 ~]$ sudo fsck.gfs2 -v -y /dev/sda1
With the -y flag specified, the fsck.gfs2 command does not prompt you for an answer before making changes.


Author: Ali MEZGANI

My name is MEZGANI Ali. I was born back in 1978 in Rabat Morocco. My interests are Debian Linux , programming , science and music.

17 thoughts on “How to setup GFS on RHEL/CentOS/Fedora”

  1. Hey, great rundown.

    I’m in the middle of trying to get this working for a shared FTP site repo.

    Using RHEL 5 with three nodes. Planning to include one more in the future.

    Any idea why it would hang while trying to mount? Everything else went pretty smooth. It just sits at the last line and doesn’t do anything:

    [user@server ~]$ sudo mount -vv -o acl -t gfs2 /dev/sdb1 /mnt/ftp
    /sbin/mount.gfs2: mount /dev/sdb1 /mnt/ftp
    /sbin/mount.gfs2: parse_opts: opts = “rw,acl”
    /sbin/mount.gfs2: clear flag 1 for “rw”, flags = 0
    /sbin/mount.gfs2: add extra acl
    /sbin/mount.gfs2: parse_opts: flags = 0
    /sbin/mount.gfs2: parse_opts: extra = “acl”
    /sbin/mount.gfs2: parse_opts: hostdata = “”
    /sbin/mount.gfs2: parse_opts: lockproto = “”
    /sbin/mount.gfs2: parse_opts: locktable = “”

    I’ve got RHEL support, but the guy is as dumb as a stone, so I’m looking around in the middle of his shooting in the dark.

    Here was the mkfs results:

    [user@server ~]$ sudo mkfs.gfs2 -p lock_dlm -t dc1_inf01_ftp:gfs2 -j 4 /dev/sdb1
    This will destroy any data on /dev/sdb1.
    It appears to contain a gfs2 filesystem.

    Are you sure you want to proceed? [y/n] y

    Device: /dev/sdb1
    Blocksize: 4096
    Device Size 15.00 GB (3932156 blocks)
    Filesystem Size: 15.00 GB (3932154 blocks)
    Journals: 4
    Resource Groups: 60
    Locking Protocol: “lock_dlm”
    Lock Table: “dc1_inf01_ftp:gfs2”
    UUID: A24DD524-C95C-8004-D833-4B7D5D4A8AB6

    [user@server ~]$

  2. Hi,

    Great tutorial for creating the file system and locally mounting it.
    But the main point is how to share the FS between multiple nodes ?

    Can you post some config file examples? Because with your tutorial, there is no advantage to using gfs2 instead of ext3 or whatever.

    Or there is a point I didn’t get.

    Thanks 😉

  3. @last poster:

    Nope nope.. Your question is totally valid. However, you, like myself, thought that GFS was a replicating filesystem.. But it's not. It's just more lock friendly in different ways. It does no replication on its own. For that you need DRBD or something.

    Me I am trying to cluster 3 (potentially more) servers together to share replicated data, live, read/write usable, and there’s apparently no solution for this yet, except Coda, an old friend of mine.

    People always keep saying, DRBD with GFS/GFS2, but DRBD only works with 2 nodes. Not 3 or more. So this really hinders things.

    If anyone out there has any suggestions, please feel free to inform. 🙂

  4. Hi there, great tutorial.

    I’m having problems when trying to start the cman.

    I get this error

    /usr/sbin/cman_tool: ccsd is not running [FAILED]

    and I am not able to start the cman, so I am not able to complete this tutorial.

    Any suggestions ?

  5. Hi Rafael,

    Had same issue when starting cman.

    Overcame it by creating a cluster in luci and then running service cman start on each node

  6. Hi,

    As per your tutorial, you have created the GFS file system locally on both nodes (node1 & node2). How do they communicate with each other for sharing/syncing data?

    Please suggest..

  7. i use a san storage in my two-node cluster, after created gfs2 on node1(/dev/sda3), i can mount the gfs2 filesystem on node1, but failed to mount it on node2:

    /sbin/mount.gfs2: invalid device path /dev/sda3

    if i run fdisk /dev/sda on node 2, it shows there are sda1, sda2, sda3 with correct size, but i can’t find /dev/sda3 device file on node 2….

    how to make node2 can mount the gfs2 fs too?

  8. This is gr8 tutorial. Can I use GFS on Logical volume(LV) without using clvm(Clustered logical volume) in shared environment like(SAN/NAS)?

  9. It seems that you don’t configure fencing methods. I wonder it is necessary to configure fencing methods with GFS?

  10. Hi,

    My requirement is that i need to share same data on 2 node, so i configure IP only in shared resources and i set gfs2 FS on my storage partition to have rw access on data from both nodes.

    is it fne configuration or do i need to make any changes?
    please once confirm my config. also suggest me performance tunning for gfs2

    CONFIG :

    [root@rac ~]# cat /etc/cluster/cluster.conf

