Configuring FCoE in Linux RHEL and HP FlexFabric

– The first step is to identify the Ethernet NICs and CNAs that will carry FCoE traffic; this is done by collecting information about their MAC addresses.
– Install both packages: fcoe-utils and lldpad
yum install fcoe-utils lldpad
– Load the driver bnx2fc
modprobe bnx2fc
– Copy the /etc/fcoe/cfg-ethx template to a file named after the CNA interface, in our case eth2:
cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth2
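The copied /etc/fcoe/cfg-eth2 usually only needs its defaults verified; on a typical fcoe-utils install it contains something like the following (with a CNA whose firmware handles DCB, DCB_REQUIRED is often set to "no"; check your adapter documentation):
FCOE_ENABLE="yes"
DCB_REQUIRED="yes"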
– Create the network script for the interface:
cat > /etc/sysconfig/network-scripts/ifcfg-eth2 <<EOF
DEVICE=eth2
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MTU=9000
EOF
The MTU is set to 9000 because the FC payload is 2,112 bytes; jumbo frames must be enabled to avoid unnecessary fragmentation.
– Run ifup to bring the FCoE interface up
ifup eth2
– Run fcoeadm -i to list the created FCoE interfaces and their status.
– Run cat /proc/scsi/scsi to see the LUNs
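On RHEL 6 the lldpad and fcoe services also need to be running and enabled at boot so the interface comes back after a reboot; a minimal sketch using the SysV service names shipped with fcoe-utils (adjust for systemd-based releases):
service lldpad start
service fcoe start
chkconfig lldpad on
chkconfig fcoe on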


Install bower on Ubuntu 14.04

I need to install Bower to manage my JS libraries, and to do that I install it with npm, the Node.js package manager.
If you need Node.js on Ubuntu 14.04, don't install it with apt-get; the stock Ubuntu packages are outdated.
Install the official package from the NodeSource repository instead.

curl --silent --location https://deb.nodesource.com/setup_4.x | sudo bash -
sudo apt-get install --yes nodejs

Bower is a command-line tool and has to be installed globally, so I run:
npm install -g bower
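
Once Bower is installed, a typical first use is to initialise a bower.json and save a dependency to it (jquery here is only an example package):
bower init
bower install jquery --save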


Quick setup : Free tier ec2 amazon instance

I created a free-tier EC2 instance on Amazon, allocated an Elastic IP, and associated it with the instance.
I installed nginx, and to reach it from the internet I went to my domain registrar (GoDaddy) and created an A record pointing to this Elastic IP.
Finally I created a security group and opened inbound traffic on the HTTP port, and my EC2 instance was ready to serve HTTP content.
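
The same steps can also be scripted with the AWS CLI; a rough sketch, where the instance ID, allocation ID and security group name are placeholders for your own values:
# allocate an Elastic IP and attach it to the instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-12345678
# open inbound HTTP (port 80) to the world in the security group
aws ec2 authorize-security-group-ingress --group-name web-sg --protocol tcp --port 80 --cidr 0.0.0.0/0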


Install and configure MongoDB in Ubuntu

We will install the most recent version of MongoDB from the 10gen repo. This requires us to first register the public key for the 10gen MongoDB apt repository, add the repository, and continue with the MongoDB installation.

Configure MongoDB
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
sudo echo "deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen" | sudo tee -a /etc/apt/sources.list.d/10gen.list
sudo apt-get -y update
sudo apt-get -y install mongodb-10gen vim curl

Create the database and database user
We need to create our database (proddb) and database user (admin). All commands denoted with ‘>’ are executed inside the MongoDB shell.

sudo mongo
>use proddb
>db.addUser({user: "admin", pwd: "bacon&eggs", roles: ["dbAdmin"]})
>exit

Modify MongoDB settings
We need to modify our MongoDB configuration to set the bind address to ‘0.0.0.0’ and port to ‘27017’. By default these values should be correct, but we want to ensure these settings are configured explicitly.

sudo vim /etc/mongodb.conf

Ensure the following is set correctly (note this is only a portion of the configuration file):

# mongodb.conf

# Where to store the data.

# Note: if you run mongodb as a non-root user (recommended) you may
# need to create and set permissions for this directory manually,
# e.g., if the parent directory isn't mutable by the mongodb user.
dbpath=/var/lib/mongodb

#where to log
logpath=/var/log/mongodb/mongodb.log

logappend=true

port = 27017
bind_ip = 0.0.0.0

Restart the database
Now just restart the database for the changes to take effect.

sudo service mongodb restart
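
To check that the daemon came back up and is listening on the configured address and port, a quick connection test from the shell (host and port as set above):
mongo --host 127.0.0.1 --port 27017 proddb --eval 'db.stats()'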


Fork from github and add change to your own repository

While playing with Vagrant, I set up an HAProxy/Keepalived/Apache stack on Ubuntu Precise.
For that I forked the vagrant-haproxy-demo repository from GitHub (https://github.com/justintime/vagrant-haproxy-demo.git), which runs a standalone HAProxy instance, and added my changes to it to support Keepalived.

First of all, I created the repository vagrant-haproxy-keepalived and set the remote origin to it.
I created a new branch named development, modified the code and committed.

Finally I checked out master and rebased it on the development branch before pushing.


git clone https://github.com/justintime/vagrant-haproxy-demo.git
cd vagrant-haproxy-demo
git remote set-url origin https://github.com/mezgani/vagrant-haproxy-keepalived.git
git checkout -b development
git add .
git commit -a
git checkout master
git rebase development
git push origin master
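
If you later want to pull new commits from the original repository into your fork, a common pattern is to keep it as a second remote; the remote name upstream below is just a convention:
git remote add upstream https://github.com/justintime/vagrant-haproxy-demo.git
git fetch upstream
git rebase upstream/master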

My code is here: https://github.com/mezgani/vagrant-haproxy-keepalived.git, feel free to browse it.


Understanding Linux Load Average – Part 3

The Dutch Prutser's Blog

In part 1 we performed a series of experiments to explore the relation between CPU utilization and Linux load average. We concluded that the load average is influenced by processes running on or waiting for the CPU. Based on experiments in part 2 we came to the conclusion that processes that are performing disk I/O also influence the load average on a Linux system. In this posting we will do another experiment to find out if the Linux load average is also affected by processes performing network I/O.

Network I/O and load average

To check if a correlation exists between processes performing network I/O and the load average we will start 10 processes generating network I/O on an otherwise idle system and collect various performance related statistics using the sar command. Note: My load-gen script uses the ping command to generate network I/O.
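
The excerpt does not include the exact sar invocation; a plausible one for watching the run queue, load average and per-interface network traffic at the same time would be something like this (sysstat options, interval and count chosen arbitrarily):
sar -q -n DEV 5 60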

The above output shows that the lo…

View original post 602 more words


Understanding Linux Load Average – Part 2

The Dutch Prutser's Blog

In part 1 we performed a series of experiments to explore the relation between CPU utilization and Linux load average. We came to the conclusion that CPU utilization clearly influences the load average. In part 2 we will continue our experiments and take a look if disk I/O also influences the Linux load average.

Disk I/O and load average

The first experiment is starting 2 processes performing disk I/O on an otherwise idle system to measure the amount of I/O issued, the load average and CPU utilization using the sar command. BTW: My load-gen script uses the dd command to generate disk I/O.
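
Again, the exact invocation is not shown in the excerpt; a plausible way to capture disk I/O, run queue/load average and CPU utilization together would be (sysstat options, interval and count arbitrary):
sar -b -q -u 5 60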

The -b command line option given to sar tells it to report disk I/O statistics. The above output tells us that on average 48207 blocks per second were written to disk and almost nothing was read. What effect does this have on the load average?

The run-queue utilization…

View original post 565 more words
