Configuring FCoE in Linux RHEL and HP FlexFabric

– The first step is to identify the Ethernet NICs and CNAs that will carry FCoE traffic; this is done by collecting information about their MAC addresses.
– Install both packages, fcoe-utils and lldpad:
yum install fcoe-utils lldpad
– Load the bnx2fc driver:
modprobe bnx2fc
– Copy the /etc/fcoe/cfg-ethx template to match the name of the CNA interface, in our case eth2:
cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth2
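For reference, the FCoE settings themselves live in this copied file; the two key parameters from the fcoe-utils template look like this (with a hardware CNA the firmware may handle DCB itself, so check whether DCB_REQUIRED should be set to "no" in your setup):
FCOE_ENABLE="yes"
DCB_REQUIRED="yes"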
– Create the network script for the interface:
cat > /etc/sysconfig/network-scripts/ifcfg-eth2 <<EOF
DEVICE=eth2
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MTU=9000
EOF
The MTU is set to 9000: since the FC payload is 2,112 bytes, jumbo frames must be enabled to avoid unnecessary fragmentation.
– Run ifup to bring the FCoE interface up:
ifup eth2
– Run fcoeadm -i to list the created FCoE interfaces and their status.
– Run cat /proc/scsi/scsi to see the LUNs.
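To make the setup survive a reboot, also enable the two services that ship with these packages, using their standard RHEL names:
chkconfig lldpad on
chkconfig fcoe on
service lldpad start
service fcoe start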


SRE engineer Bookmarks

I’m a Linux system engineer and I develop in Python, Bash and Perl. I’m really interested in SRE positions; I’m applying for an SRE Engineer role at Google and a Production Engineer role at Facebook. Here I share my daily bookmarks:

[Link: sre bookmarks]

Filesystems (jfs, xfs, ext3) comparison on Debian

A file system is a method for storing and organizing computer files and the data they contain, so that they are easy to find and access. There are many types of file systems: shared file systems, database file systems, network file systems, disk file systems and so on.
In Linux there are many disk file systems available. I felt the need to know more about the performance of some of them on my old box, so I decided to run a benchmark.
In this study we will look at disk file systems, specifically JFS (the IBM Journaled File System), XFS and ext3 (the third extended file system).
The benchmark is based on some real-world tasks appropriate for a file server on older-generation hardware (Pentium 4, IDE hard drive).
I used an automated tool named Bonnie++, a benchmark suite that performs a number of simple tests of hard drive and file system performance. You can find more information about it at http://www.coker.com.au/bonnie++/
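On Debian, Bonnie++ installs straight from the archive:
$ sudo apt-get install bonnie++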

Description of the hardware used in the tests:

Processor : Intel(R) Pentium(R) 4 2xCPU 3.00GHz
RAM : 1287404 kB
Controller : 82801EB/ER (ICH5/ICH5R) IDE Controller
Hard drive:
$ sudo /usr/sbin/smartctl -i /dev/hdb

=== START OF INFORMATION SECTION ===
Model Family: Seagate Maxtor DiamondMax 20
Device Model: MAXTOR STM3802110A
Serial Number: 9LR4B9T3
Firmware Version: 3.AAJ
User Capacity: 80 026 361 856 bytes
Device is: In smartctl database [for details use: -P show]
ATA Version is: 7
ATA Standard is: Exact ATA specification draft version not indicated
Local Time is: Sat Aug 8 19:10:53 2009 UTC
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

OS:
$ cat /etc/issue; uname -a
Debian GNU/Linux squeeze/sid
Linux aldebaran 2.6.29 #2 SMP Thu Apr 2 20:37:46 UTC 2009 i686 GNU/Linux

All optional daemons were killed (crond, sshd, httpd, …).
In this test I used three partitions of 26 GB each: the first contains the jfs file system, the second xfs and the third ext3.

$ sudo parted /dev/hdb
GNU Parted 1.8.8
Using /dev/hdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: MAXTOR STM3802110A (ide)
Disk /dev/hdb: 80,0GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 17,4kB 26,0GB 26,0GB jfs jfs
2 26,0GB 52,0GB 26,0GB xfs xfs
3 52,0GB 78,0GB 26,0GB ext3 ext3

I create the file systems on each partition:

$ sudo jfs_mkfs /dev/hdb1
$ sudo mkfs.xfs /dev/hdb2
$ sudo mkfs.ext3 /dev/hdb3

I make the mount directories and mount the partitions:
$ mkdir jfs xfs ext3
$ sudo mount /dev/hdb1 jfs/
$ sudo mount /dev/hdb2 xfs/
$ sudo mount /dev/hdb3 ext3/
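A quick df -hT confirms that all three file systems are mounted where expected:
$ df -hT | grep hdb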

Then I run bonnie++ like this:
$ for i in jfs xfs ext3; do sudo /usr/sbin/bonnie++ -u 1000:1000 -d `pwd`/$i -q > $i.csv ; sleep 20; done
The sleep is there to let the busy resources settle between runs; thanks to bortzmeyer for the tip 🙂
And I get the detailed results shown below.
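Since -q makes bonnie++ emit CSV, the converters shipped with the suite (bon_csv2html and bon_csv2txt) turn the raw output into something readable:
$ for i in jfs xfs ext3; do bon_csv2html < $i.csv > $i.html; done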

[Figure: Bonnie++ benchmark results for jfs, xfs and ext3]

We can see that there is no big difference between these filesystems: JFS is the lightest on CPU, while XFS and ext3 show better input/output throughput.
XFS and ext3 remain solid choices in the long list of quality Linux filesystems.

How to set up GFS on RHEL/CentOS/Fedora

A clustered file system, or SAN file system, is an enterprise storage file system that can be shared (concurrently accessed for reading and writing) by multiple computers. These are usually clustered servers that connect to the underlying block device over an external storage network, commonly a storage area network (SAN).
Examples of such file systems:
* GFS : The Red Hat Global File System
* GPFS : IBM General Parallel File System
* QFS : The Sun Quick File System
* OCFS : Oracle Cluster File System
In the rest of this tutorial we will focus on GFS2, the new version of the GFS file system, and how to mount a shared disk on Fedora, Red Hat or CentOS. GFS (Global File System) is a cluster file system: it allows a cluster of computers to simultaneously use a block device that is shared between them (over FC, iSCSI, NBD, etc.). GFS is free software, distributed under the terms of the GNU General Public License, and was originally developed as part of a thesis project at the University of Minnesota in 1997.
So, let’s start this brief configuration.
Install Clustering Software
Add the Clustering and Cluster Storage tool groups from RHN to the servers, so on both servers:
[mezgani@node1 ~]$ sudo yum -y groupinstall Clustering
[mezgani@node1 ~]$ sudo yum -y groupinstall "Cluster Storage"
[mezgani@node2 ~]$ sudo yum -y groupinstall Clustering
[mezgani@node2 ~]$ sudo yum -y groupinstall "Cluster Storage"

Next, on the first node, set the admin password, enable luci at boot and start the service:
[mezgani@node1 ~]$ sudo luci_admin init
Initializing the luci server
Creating the ‘admin’ user
Enter password:

[mezgani@node1 ~]$ sudo chkconfig luci on
[mezgani@node1 ~]$ sudo service luci start

On both nodes, start ricci agent:
[mezgani@node1 ~]$ sudo chkconfig ricci on
[mezgani@node2 ~]$ sudo chkconfig ricci on
[mezgani@node1 ~]$ sudo service ricci start
[mezgani@node2 ~]$ sudo service ricci start

Make sure all the necessary daemons start up with the cluster:
[mezgani@node1 ~]$ sudo chkconfig gfs2 on
[mezgani@node1 ~]$ sudo chkconfig cman on
[mezgani@node1 ~]$ sudo service cman start
[mezgani@node1 ~]$ sudo service gfs2 start
[mezgani@node2 ~]$ sudo chkconfig gfs2 on
[mezgani@node2 ~]$ sudo chkconfig cman on
[mezgani@node2 ~]$ sudo service cman start
[mezgani@node2 ~]$ sudo service gfs2 start
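With cman running on both nodes, you can sanity-check the cluster membership using the tools that ship with the cluster suite:
[mezgani@node1 ~]$ sudo cman_tool nodes
[mezgani@node1 ~]$ sudo clustat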

Luci is now started, so you can connect to https://node1:8084/ with the admin user and the password you set.
Create a new cluster ‘delta’ and add the nodes, using the ‘locally installed files’ option.

Later, you may use parted to manage partitions on the sda disk and create some partitions.
Parted is an industrial-strength package for creating, destroying, resizing, checking and copying partitions, and the file systems on them. Unlike fdisk, it supports big partitions.
For example, here I create a partition of about 1 TB.
[mezgani@node1 ~]$ sudo parted /dev/sda
GNU Parted 1.8.1
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p

Model: HP MSA2012fc (scsi)
Disk /dev/sda: 2400GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
2 1001GB 2400GB 1399GB ext3 primary

(parted) mkpart primary
File system type? [ext2]? ext3
Start? 0
End? 1001GB
(parted) print

Model: HP MSA2012fc (scsi)
Disk /dev/sda: 2400GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 17,4kB 1001GB 1001GB ext3 primary
(parted) quit
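Since /dev/sda is shared storage, have the other node re-read the new partition table; partprobe ships with parted:
[mezgani@node2 ~]$ sudo partprobe /dev/sda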

After creating the partition, make a GFS2 file system on it with mkfs.gfs2, like this:
[mezgani@node1 ~]$ sudo /sbin/mkfs.gfs2 -p lock_dlm -t delta:gfs2 -j 8 /dev/sda1
This will destroy any data on /dev/sda1.
It appears to contain a ext3 filesystem.

Are you sure you want to proceed? [y/n] y

Device: /dev/sda1
Blocksize: 4096
Device Size 932.25 GB (244384761 blocks)
Filesystem Size: 932.25 GB (244384760 blocks)
Journals: 8
Resource Groups: 3730
Locking Protocol: “lock_dlm”
Lock Table: “delta:gfs2”
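Note that -j 8 above created eight journals: GFS2 requires one journal per node that mounts the file system, so eight leaves this two-node cluster room to grow. If you ever need more, gfs2_jadd (from the same tools) can add journals to a mounted file system, for example:
[mezgani@node1 ~]$ sudo gfs2_jadd -j 2 /home/disk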

Finally, make a directory on all nodes and mount our GFS2 file system on it:
[mezgani@node1 ~]$ sudo mkdir /home/disk
[mezgani@node2 ~]$ sudo mkdir /home/disk
[mezgani@node1 ~]$ sudo mount -o acl -t gfs2 /dev/sda1 /home/disk
[mezgani@node2 ~]$ sudo mount -o acl -t gfs2 /dev/sda1 /home/disk
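To make the mount persistent across reboots, an fstab entry along these lines on both nodes does the job; the _netdev option delays mounting until the network, and thus the cluster stack, is up:
/dev/sda1 /home/disk gfs2 acl,_netdev 0 0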

However, if a storage device loses power or is physically disconnected, file system corruption may occur.
In that case you can check and repair the GFS2 file system with the fsck.gfs2 command, run while the file system is unmounted on every node:

[mezgani@node1 ~]$ sudo fsck.gfs2 -v -y /dev/sda1
With the -y flag specified, the fsck.gfs2 command does not prompt you for an answer before making changes.