Monitoring Hard Disks with SMART on CentOS

In this article I explain how to use smartmontools' smartctl utility and smartd daemon to monitor the health of a system's disks. I have an HP server that comes with an HP Seagate DG072BB975 hard drive, which unfortunately is not yet supported by hddtemp.
The server uses SAS (Serial Attached SCSI) HP drives behind a Smart Array controller, so the device appears as /dev/cciss/c0d0.

First install smartmontools.
$ sudo yum install smartmontools

Let's look at the system's partitions:
$ cat /proc/partitions
major minor #blocks name

104 0 71652960 cciss/c0d0
104 1 104391 cciss/c0d0p1
104 2 71545477 cciss/c0d0p2
253 0 69500928 dm-0
253 1 2031616 dm-1
We can see the cciss (Compaq Smart Array controller) interface there, which is good.

The HP server comes with two disks, which by default are connected to channels 0 and 1: the first HDD is addressed with -d cciss,0 and the second with -d cciss,1.

Well, to get smartctl working, you need to specify the device/bus type and the disk number, like this:
$ sudo /usr/sbin/smartctl -H -d cciss,0 /dev/cciss/c0d0

smartctl version 5.38 [i686-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

SMART Health Status: OK

This output shows the result of the health status inquiry. It is the one-line executive summary of disk health; the disk shown here has passed. If your disk's health status is FAILING, back up your data immediately.
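
Since this server has two drives on the same controller, a quick way to check both at once is to loop over the channel numbers (a small convenience sketch using the same command):
$ for i in 0 1; do sudo /usr/sbin/smartctl -H -d cciss,$i /dev/cciss/c0d0; done
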
For a full report you can use the --all option, like this:
$ sudo /usr/sbin/smartctl --all -d cciss,0 /dev/cciss/c0d0

smartctl version 5.38 [i686-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

Device: HP DG072BB975 Version: HPDC
Serial number: 3NP3FWC200009917UFB3
Device type: disk
Transport protocol: SAS
Local Time is: Fri Aug 14 02:30:17 2009 GMT
Device supports SMART and is Enabled
Temperature Warning Enabled
SMART Health Status: OK

Current Drive Temperature: 26 C
Drive Trip Temperature: 68 C
Elements in grown defect list: 0
Vendor (Seagate) cache information
Blocks sent to initiator = 3368064691
Blocks received from initiator = 4000083366
Blocks read from cache and sent to initiator = 1897459483
Number of read and write commands whose size > segment size = 0
Vendor (Seagate/Hitachi) factory information
number of hours powered up = 5259.38
number of minutes until next internal SMART test = 46

Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/      errors   algorithm      processed    uncorrected
           fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
read:          0        0         0         0          0          0.000           0
write:         0        0         0         0          0          0.000           0

Non-medium error count: 0
No self-tests have been logged
Long (extended) Self Test duration: 1070 seconds [17.8 minutes]
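
The report above notes that no self-tests have been logged yet. You can launch a long (extended) self-test in the background and read its result later with smartctl's -t and -l options; a quick sketch for the first drive:
$ sudo /usr/sbin/smartctl -t long -d cciss,0 /dev/cciss/c0d0
$ sudo /usr/sbin/smartctl -l selftest -d cciss,0 /dev/cciss/c0d0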

Configuring smartd:
Edit your smartd.conf file to add the disks you need. Ensure you edit the correct file (/etc/smartd.conf) and add the following lines:
$ sudo sh -c 'cat >> /etc/smartd.conf'
/dev/cciss/c0d0 -d cciss,0 -H -m root@domain.tld
/dev/cciss/c0d0 -d cciss,1 -H -m root@domain.tld

Replace the email address with your own so that smartd can notify you directly when a disk fails.
Then start the smartd daemon:
$ sudo /etc/init.d/smartd start
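
If you also want smartd to schedule periodic self-tests, a -s directive can be appended to the same smartd.conf lines. For example (only a sketch; adjust the schedule regex to your needs), a long self-test every Saturday at 3 a.m. on the first drive would look like this:
/dev/cciss/c0d0 -d cciss,0 -H -m root@domain.tld -s L/../../6/03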

If you work in a group that runs a large computing cluster with many nodes and many disk drives, SMART becomes really interesting: it can help reduce downtime and keep the cluster operating more reliably.


Filesystems (jfs, xfs, ext3) comparison on Debian

A file system is a method for storing and organizing computer files and the data they contain, making them easy to find and access. There are many types of file systems: shared file systems, database file systems, network file systems, disk file systems, and so on.
Linux offers many disk file systems. I wanted to know more about how some of them perform on my old box, so I decided to run a benchmark.
In this study we will look at disk file systems, specifically JFS (the IBM Journaled File System), XFS, and ext3 (the third extended file system).
This benchmark is based on real-world tasks appropriate for a file server running on older-generation hardware (Pentium 4, IDE hard drive).
I used an automated tool named Bonnie++, a benchmark suite that performs a number of simple tests of hard drive and file system performance. You can find more information about it at http://www.coker.com.au/bonnie++/.

Description of the hardware used in the tests:

processor : Intel(R) Pentium(R) 4 2xCPU 3.00GHz
RAM : 1287404 kB
Controller : 82801EB/ER (ICH5/ICH5R) IDE Controller
Hard drive:
$ sudo /usr/sbin/smartctl -i /dev/hdb

Model Family: Seagate Maxtor DiamondMax 20
Device Model: MAXTOR STM3802110A
Serial Number: 9LR4B9T3
Firmware Version: 3.AAJ
User Capacity: 80 026 361 856 bytes
Device is: In smartctl database [for details use: -P show]
ATA Version is: 7
ATA Standard is: Exact ATA specification draft version not indicated
Local Time is: Sat Aug 8 19:10:53 2009 UTC
SMART support is: Available – device has SMART capability.
SMART support is: Enabled

$ cat /etc/issue; uname -a
Debian GNU/Linux squeeze/sid
Linux aldebaran 2.6.29 #2 SMP Thu Apr 2 20:37:46 UTC 2009 i686 GNU/Linux

All optional daemons were stopped (crond, sshd, httpd, ...).
For this test I used three partitions of 26 GB each: the first holds the JFS file system, the second XFS, and the third ext3.

$ sudo parted /dev/hdb
GNU Parted 1.8.8
Using /dev/hdb
Welcome to GNU Parted! Type ‘help’ to view a list of commands.
(parted) p
Model: MAXTOR STM3802110A (ide)
Disk /dev/hdb: 80,0GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 17,4kB 26,0GB 26,0GB jfs jfs
2 26,0GB 52,0GB 26,0GB xfs xfs
3 52,0GB 78,0GB 26,0GB ext3 ext3

I created a file system on each partition:

$ sudo jfs_mkfs /dev/hdb1
$ sudo mkfs.xfs /dev/hdb2
$ sudo mkfs.ext3 /dev/hdb3

I created mount points and mounted the partitions:
$ mkdir jfs xfs ext3
$ sudo mount /dev/hdb1 jfs/
$ sudo mount /dev/hdb2 xfs/
$ sudo mount /dev/hdb3 ext3/

Then I ran bonnie++ like this:
$ for i in *; do sudo /usr/sbin/bonnie++ -u 1000:1000 -d `pwd`/$i -q > $i.csv ; sleep 20; done
The sleep gives the system time to free busy resources between runs; thanks bortzemyer for the tip 🙂
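
To turn the raw CSV files into something readable, the bonnie++ package ships the bon_csv2html and bon_csv2txt helpers (a sketch; the file names come from the loop above):
$ cat jfs.csv xfs.csv ext3.csv | bon_csv2html > results.html
$ cat jfs.csv xfs.csv ext3.csv | bon_csv2txt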
And I got detailed results for each file system.


We can see that there is no big difference between these file systems: JFS uses less CPU, while XFS and ext3 have better input/output throughput.
XFS and ext3 remain solid choices in the long list of quality Linux file systems.


How to setup GFS on RHEL/CentOS/Fedora

A clustered file system, or SAN file system, is an enterprise storage file system which can be shared (concurrently accessed for reading and writing) by multiple computers. Those computers are usually clustered servers, which access the underlying block device through external shared storage, commonly a storage area network (SAN).
Examples of such file systems:
* GFS : The Red Hat Global File System
* GPFS : IBM General Parallel File System
* QFS : The Sun Quick File System
* OCFS : Oracle cluster file system
In the rest of this tutorial we will focus on GFS2, the new version of the GFS file system, and on how to mount a shared disk on Fedora, Red Hat or CentOS. GFS (Global File System) is a cluster file system: it allows a cluster of computers to simultaneously use a block device that is shared between them (over FC, iSCSI, NBD, etc.). GFS is free software, distributed under the terms of the GNU General Public License, and was originally developed as part of a thesis project at the University of Minnesota in 1997.
So, let's start this brief configuration.
Install Clustering Software
Add the Clustering and Cluster Storage tool groups (from RHN) to both servers:
[mezgani@node1 ~]$ sudo yum -y groupinstall Clustering
[mezgani@node1 ~]$ sudo yum -y groupinstall "Cluster Storage"
[mezgani@node2 ~]$ sudo yum -y groupinstall Clustering
[mezgani@node2 ~]$ sudo yum -y groupinstall "Cluster Storage"

Then, on the first node, set the admin password, enable luci at boot, and start the service:
[mezgani@node1 ~]$ sudo luci_admin init
Initializing the luci server
Creating the ‘admin’ user
Enter password:

[mezgani@node1 ~]$ sudo chkconfig luci on
[mezgani@node1 ~]$ sudo service luci start

On both nodes, start ricci agent:
[mezgani@node1 ~]$ sudo chkconfig ricci on
[mezgani@node2 ~]$ sudo chkconfig ricci on
[mezgani@node1 ~]$ sudo service ricci start
[mezgani@node2 ~]$ sudo service ricci start

Make sure all the necessary daemons start up with the cluster:
[mezgani@node1 ~]$ sudo chkconfig gfs2 on
[mezgani@node1 ~]$ sudo chkconfig cman on
[mezgani@node1 ~]$ sudo service cman start
[mezgani@node1 ~]$ sudo service gfs2 start
[mezgani@node2 ~]$ sudo chkconfig gfs2 on
[mezgani@node2 ~]$ sudo chkconfig cman on
[mezgani@node2 ~]$ sudo service cman start
[mezgani@node2 ~]$ sudo service gfs2 start

luci is now started, so you can connect to https://node1:8084/ with the admin user and the password you've set.
Create a new cluster named 'delta' and add the nodes, using the locally installed packages option.
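
Once the cluster has been created and both nodes have joined, you can sanity-check the membership from the command line (optional, just a quick verification):
[mezgani@node1 ~]$ sudo cman_tool status
[mezgani@node1 ~]$ sudo cman_tool nodes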

Later, you may use parted to manage partitions on the sda disk and create some partitions.
Parted is an industrial-strength package for creating, destroying, resizing, checking and copying partitions, and the file systems on them. It supports big partitions, unlike fdisk.
For example, here I create a partition of about 1 TB.
[mezgani@node1 ~]$ sudo parted /dev/sda
GNU Parted 1.8.1
Using /dev/sda
Welcome to GNU Parted! Type ‘help’ to view a list of commands.
(parted) p

Model: HP MSA2012fc (scsi)
Disk /dev/sda: 2400GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
2 1001GB 2400GB 1399GB ext3 primary

(parted) mkpart primary
File system type? [ext2]? ext3
Start? 0
End? 1001GB
(parted) print

Model: HP MSA2012fc (scsi)
Disk /dev/sda: 2400GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 17,4kB 1001GB 1001GB ext3 primary
(parted) quit

After creating the partition, make a GFS2 file system on it with mkfs.gfs2, like this:
[mezgani@node1 ~]$ sudo /sbin/mkfs.gfs2 -p lock_dlm -t delta:gfs2 -j 8 /dev/sda1
This will destroy any data on /dev/sda1.
It appears to contain a ext3 filesystem.

Are you sure you want to proceed? [y/n] y

Device: /dev/sda1
Blocksize: 4096
Device Size 932.25 GB (244384761 blocks)
Filesystem Size: 932.25 GB (244384760 blocks)
Journals: 8
Resource Groups: 3730
Locking Protocol: “lock_dlm”
Lock Table: “delta:gfs2”
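
The -j 8 option creates eight journals; GFS2 needs one journal per node that will mount the file system concurrently, so this leaves room to grow beyond our two nodes. If you ever need more, gfs2_jadd can add journals later (a sketch, assuming the file system is mounted at /home/disk as below):
[mezgani@node1 ~]$ sudo gfs2_jadd -j 2 /home/disk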

Last, create a directory on all nodes and mount our GFS2 file system on it.
[mezgani@node1 ~]$ sudo mkdir /home/disk
[mezgani@node2 ~]$ sudo mkdir /home/disk
[mezgani@node1 ~]$ sudo mount -o acl -t gfs2 /dev/sda1 /home/disk
[mezgani@node2 ~]$ sudo mount -o acl -t gfs2 /dev/sda1 /home/disk
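
To have the file system mounted again after a reboot, you can also add an entry to /etc/fstab on both nodes (a sketch; the gfs2 init service enabled earlier takes care of mounting GFS2 entries from fstab at boot):
/dev/sda1  /home/disk  gfs2  defaults,acl  0 0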

However, if a storage device loses power or is physically disconnected, file system corruption may occur.
Well, you can recover the GFS2 file system by using the fsck.gfs2 command.

[mezgani@node1 ~]$ sudo fsck.gfs2 -v -y /dev/sda1
With the -y flag specified, the fsck.gfs2 command does not prompt you for an answer before making changes.
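
Note that fsck.gfs2 must only be run when the file system is unmounted on every node of the cluster; a minimal sketch of the full sequence:
[mezgani@node1 ~]$ sudo umount /home/disk
[mezgani@node2 ~]$ sudo umount /home/disk
[mezgani@node1 ~]$ sudo fsck.gfs2 -v -y /dev/sda1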


Pipes in syslog

With syslog it is possible to write output to a pipe, so we can read that pipe from a program. But we have to be careful: syslogd should not wedge, yet we may get missing and/or mangled messages if they arrive faster than our program can process them.
Let's take a look at how to create these pipes and read from them.

First create a named pipe using mkfifo:
$ mkfifo /home/mezgani/syslog.pipe

Edit syslog.conf so that it points to this pipe:
*.info |/home/mezgani/syslog.pipe

Reload syslogd so it re-reads its configuration:
$ sudo pkill -HUP syslogd

Create a processing script that reads the pipe:
$ cat > foo
#!/bin/sh
# Read each line that syslogd writes to the pipe and process it
cat /home/mezgani/syslog.pipe | while read input
do
    # some stuff
    echo "${input}"
done
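
To try it out, run the script in the background and send a test message through syslog with logger (a quick sketch; the message should be echoed by the script):
$ chmod +x foo
$ ./foo &
$ logger -p user.info "test message through the pipe"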
