Controlling Machines Remotely via IPMI

IPMI

Each server has a little ARM machine glued on the side with a dedicated Ethernet port. Using standard protocols, as well as manufacturer extensions, we can do all sorts of useful things to the server remotely. Each machine's IPMI card has an IP address that resolves from the hostname "rcXXipmi". Some of the things we can do remotely include:

  • Powering on/off, resetting, warm reboots, software shutdowns, etc.
  • Accessing a serial-over-lan version of the console (can use this to configure BIOS parameters, as well as get a Linux console)
  • Setting PXE boot for the next boot cycle (allows easy network re-installs)
  • Listing sensor status (temperatures, voltages, etc)
  • Listing error log (ECC errors, even SMART disk errors)
  • Getting a remote KVM console (keyboard, video, mouse)

There are two main utilities you’ll want to use:

ipmitool

ipmitool is a Linux utility that speaks the IPMI protocol to local and remote servers. Here are some example commands to get you started (read the extensive man page for more info):

  • Get a serial-over-lan console on rcXX: ipmitool -I lanplus -H rcXXipmi -U ADMIN -a sol activate
  • Get the power status: ipmitool -I lanplus -H rcXXipmi -U ADMIN chassis status
  • Reboot a machine: ipmitool -I lanplus -H rcXXipmi -U ADMIN power reset
  • Force PXE boot on the next boot only: ipmitool -I lanplus -H rcXXipmi -U ADMIN chassis bootdev pxe
    (This will cause the machine to reinstall all its software on the next boot)
  • Reboot the IPMI card: ipmitool -I lanplus -H rcXXipmi -U ADMIN mc reset cold
  • Get sensor output: ipmitool -I lanplus -H rcXXipmi -U ADMIN sdr list
  • Get the error log: ipmitool -I lanplus -H rcXXipmi -U ADMIN sel elist
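
Building on the examples above, a small shell loop can poll the power state of a whole set of machines; the rcXXipmi naming, the ADMIN user, and the node range here are assumptions, and the password is read from the IPMI_PASSWORD environment variable via -E:

# Poll chassis power status for rc01..rc10 (adjust the range and credentials to your site)
for n in $(seq -w 1 10); do
    echo -n "rc${n}: "
    ipmitool -I lanplus -H "rc${n}ipmi" -U ADMIN -E chassis power status
done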


For more information: https://ramcloud.atlassian.net/wiki/display/RAM/Controlling+Machines+Remotely+via+IPMI


Quick How-to to Set Up a Swarm Cluster

1. Define a hosts file with the entries below, where 192.168.1.3 is the IP address of the manager node, and 192.168.1.5 and 192.168.1.6 are worker nodes:

# Hosts
node1 ansible_host=192.168.1.3
node2 ansible_host=192.168.1.5
node3 ansible_host=192.168.1.6

[swarm]
node1
node2
node3
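
Optionally, verify that Ansible can reach all three nodes before initializing the swarm (this assumes the inventory above is saved as a file named hosts and SSH access is already configured):

$ ansible swarm -i hosts -m ping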

2. On the manager node (192.168.1.3), run:

$ docker swarm init

3. Run the docker swarm join command on the rest of the nodes, using Ansible:

$ ansible 'swarm:!node1' -i hosts -m shell -a 'docker swarm join --token SWMTKN-1-32c9u5o0y7gbp3zg8wlvkacbcz0jcwnfgzyhitn5xk8v3gi7s5-2pyej0kduwv6kmx7ynjjvxnhl 192.168.1.3:2377'

The token SWMTKN-1-32c9u5o0y7gbp3zg8wlvkacbcz0jcwnfgzyhitn5xk8v3gi7s5-2pyej0kduwv6kmx7ynjjvxnhl will be different in your case; use the one returned by the docker swarm init command.
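
If you no longer have that output at hand, the manager can reprint the full join command, token included:

$ docker swarm join-token worker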

4. Open TCP port 2377 on your firewall if you have one enabled.

$ ufw allow 2377/tcp
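
Port 2377 only covers cluster management traffic. If you plan to use overlay networks, Docker's documentation also lists ports 7946 (TCP and UDP) for node-to-node communication and 4789 (UDP) for overlay traffic, so a fuller firewall setup might look like this:

$ ufw allow 7946/tcp
$ ufw allow 7946/udp
$ ufw allow 4789/udp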

5. Create the service:

$ docker service create --name hello-world --replicas 3 -p 80:80 dockercloud/hello-world
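
Assuming the join succeeded on all nodes, you can check the cluster and the service from the manager:

$ docker node ls
$ docker service ls
$ docker service ps hello-world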

Configuring FCoE on RHEL with HP FlexFabric

– The first step is to identify the Ethernet NICs and CNAs that will carry FCoE traffic; this is done by collecting their MAC addresses.
– Install both required packages: fcoe-utils and lldpad
yum install fcoe-utils lldpad
– Load the driver bnx2fc
modprobe bnx2fc
– Copy the /etc/fcoe/cfg-ethx template to a file named after the CNA, in our case eth2:
cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth2
– Create the network script /etc/sysconfig/network-scripts/ifcfg-eth2 for the interface:
cat > /etc/sysconfig/network-scripts/ifcfg-eth2 << EOF
DEVICE=eth2
ONBOOT=yes
BOOTPROTO=none
USERCTL=NO
MTU=9000
EOF
The MTU is set to 9000 because the FC payload is 2,112 bytes; jumbo frames must be enabled to avoid unnecessary fragmentation.
– Run ifup to bring the FCoE interface up:
ifup eth2
– Run fcoeadm -i to check all created FCoE interfaces and their status.
– Run cat /proc/scsi/scsi to see the LUNs.
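
On a typical RHEL 6 system you would also start the lldpad and fcoe services and enable them at boot; the service names below assume the stock init scripts:

service lldpad start
service fcoe start
chkconfig lldpad on
chkconfig fcoe on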

How to Install Docker on Ubuntu 14.04 LTS

Docker is an open-source project that automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization on Linux. Docker uses resource isolation features of the Linux kernel such as cgroups and kernel namespaces to allow independent “containers” to run within a single Linux instance, avoiding the overhead of starting virtual machines.

Here are some tips for installing Docker on Ubuntu 14.04 LTS.

Let's install Docker:
apt-get -y install docker.io

Let's fix the link and path:
ln -sf /usr/bin/docker.io /usr/local/bin/docker
sed -i '$acomplete -F _docker docker' /etc/bash_completion.d/docker.io

And finally, configure Docker to start when the server boots:
update-rc.d docker.io defaults

Next, search the registry for an Ubuntu image:
docker search ubuntu

And pull it:
docker pull ubuntu
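
To check that everything works end to end, start a throwaway container from the image you just pulled:

docker run -it --rm ubuntu /bin/bash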

SRE engineer Bookmarks

I'm a Linux system engineer and I develop in Python, Bash, and Perl. I'm really interested in SRE positions; for that reason, I'm applying for an SRE Engineer role at Google and a Production Engineer role at Facebook. Here I share my daily bookmarks:

sre

Installing the smog MongoDB viewer

First of all, install the dependency packages:
sudo apt-get update && sudo apt-get install -y git-core curl build-essential openssl libssl-dev

Install Node.js and npm from source:
git clone https://github.com/joyent/node.git
cd node

Run git tag to show all available versions
git tag -l

Select the latest stable version:
git checkout v0.9.9

Run configure and make
./configure
make
sudo make install

Get npm Node package manager using curl
curl https://npmjs.org/install.sh | sudo sh
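
At this point you can confirm that both tools are on the PATH:

node -v
npm -v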

Install smog using the Node package manager by running:
npm install smog -g

Start smog in the background:
smog &

smog will start on the default port, 8080.
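
To confirm it is running, check that something is listening on that port (assuming smog kept the default):

netstat -tlnp | grep 8080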

How to create an Apache Module

1. Use apxs2 to create a skeleton module named mod_hello:
apxs2 -g -n hello

2. Modify the automatically generated mod_hello.c file to implement your handler.
3. Compile and install the .so file into Apache's libexec directory:
apxs2 -iac mod_hello.c
4. Modify httpd.conf to load the module and to set it as the content handler for the URL of your choice:

# httpd.conf
LoadModule hello_module libexec/mod_hello.so

<Location /hello_demo>
SetHandler hello
</Location>
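
Finally, assuming a default Apache layout and the location configured above, check the configuration, restart Apache, and request the new URL:

apachectl configtest
apachectl restart
curl http://localhost/hello_demo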