Fork from GitHub and add changes to your own repository

While playing with Vagrant, I set up an HAProxy + Keepalived + Apache stack on Ubuntu Precise.
For that I forked the vagrant-haproxy-demo repository from GitHub (https://github.com/justintime/vagrant-haproxy-demo.git), which runs a standalone HAProxy instance, and added my changes to it to support Keepalived.

First of all, I created the repository vagrant-haproxy-keepalived and set the remote origin to it.
Then I created a new branch named development, modified the code and committed the changes.

Finally, I checked out master, rebased it on the development branch and pushed.
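Before running the commands below, the original repository has to be cloned locally; a possible starting point (the target directory name here is my own choice):

git clone https://github.com/justintime/vagrant-haproxy-demo.git vagrant-haproxy-keepalived
cd vagrant-haproxy-keepalived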


# point origin at the new repository
git remote set-url origin https://github.com/mezgani/vagrant-haproxy-keepalived.git
# work on a dedicated branch and commit the keepalived changes
git checkout -b development
git add .
git commit -a
# bring master up to date with development and publish it
git checkout master
git rebase development
git push origin master

My code is at https://github.com/mezgani/vagrant-haproxy-keepalived.git; feel free to browse it.


Build nginx flask vagrant vbox on windows

Before this post I wrote an article on how to serve Flask on Ubuntu.
Here I share my first contact with Vagrant: on a Windows box I build a VirtualBox VM that runs nginx and a Flask “hello world” application.

Here is the Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|

hostname = "propus"
locale = "en_GB.UTF.8"

# Box
config.vm.box = "ubuntu/trusty64"

# Shared folders
config.vm.synced_folder ".", "/var/www/ovpn"

# Port forwarding
config.vm.network :forwarded_port, guest: 80, host: 8080

# Setup
config.vm.provision :shell, :inline => "touch .hushlogin"
config.vm.provision :shell, :inline => "hostnamectl set-hostname #{hostname} && locale-gen #{locale}"
config.vm.provision :shell, :inline => "apt-get update --fix-missing"
config.vm.provision :shell, :inline => "apt-get install -q -y g++ make git vim"

# Lang
config.vm.provision :shell, :inline => "apt-get install -q -y python python-dev python-distribute python-pip"

# nginx
config.vm.provision :shell, :inline => "apt-get install -q -y nginx"

config.vm.provision :shell, :path => "bootstrap.sh"

end

And here is the content of the bootstrap.sh script:

#!/bin/bash

pip install virtualenv
mkdir -p /var/www/ovpn && cd /var/www/ovpn
mkdir -p /var/log/uwsgi
virtualenv -p /usr/bin/python venv
source venv/bin/activate
pip install uwsgi
pip install flask
rm -f /etc/nginx/sites-enabled/default
cat <<'EOF' > /etc/nginx/conf.d/ovpn.conf

server {
    listen 80;
    server_name 10.0.1.15;
    charset utf-8;
    client_max_body_size 75M;

    location / { try_files $uri @ovpn; }
    location @ovpn {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/demoapp_uwsgi.sock;
    }
}
EOF

cat <<'EOF' > /var/www/ovpn/demoapp_uwsgi.ini
[uwsgi]
#application's base folder
base = /var/www/ovpn

#python module to import
app = hello
module = %(app)

home = %(base)/venv
pythonpath = %(base)

#socket file's location
socket = /tmp/%n.sock

#permissions for the socket file
chmod-socket = 666

#the variable that holds a flask application inside the module imported at line #6
callable = application

#location of log files
logto = /var/log/uwsgi/%n.log
EOF

/etc/init.d/nginx restart
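
Note that the uWSGI configuration above imports a hello module from /var/www/ovpn, which bootstrap.sh does not create; it is expected to come from the synced project folder. A minimal sketch of such a module, written so that the application object matches the callable setting above (the greeting text is just an example), could be appended to bootstrap.sh or dropped next to the Vagrantfile as hello.py:

cat <<'EOF' > /var/www/ovpn/hello.py
from flask import Flask

# "application" must match the callable defined in demoapp_uwsgi.ini
application = Flask(__name__)

@application.route("/")
def hello():
    return "Hello from the ovpn demo app!"
EOF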

Place the Vagrantfile and bootstrap.sh in the same directory,
and from a PowerShell console run:

vagrant up

This will build the VM image. Have fun!
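
Note that the provisioning script installs uWSGI but does not start it, so the application still has to be launched inside the VM. A rough sketch, assuming the hello module mentioned above is in place:

# from PowerShell, log into the VM
vagrant ssh

# inside the VM, start uWSGI behind the socket nginx proxies to
cd /var/www/ovpn
sudo ./venv/bin/uwsgi --ini demoapp_uwsgi.ini &

# back on the host, port 8080 is forwarded to the VM's port 80,
# so the app should answer at http://localhost:8080/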

Understanding Linux Load Average – Part 3

The Dutch Prutser's Blog

In part 1 we performed a series of experiments to explore the relation between CPU utilization and Linux load average. We concluded that the load average is influenced by processes running on or waiting for the CPU. Based on experiments in part 2 we came to the conclusion that processes that are performing disk I/O also influence the load average on a Linux system. In this posting we will do another experiment to find out if the Linux load average is also affected by processes performing network I/O.

Network I/O and load average

To check if a correlation exists between processes performing network I/O and the load average we will start 10 processes generating network I/O on an otherwise idle system and collect various performance related statistics using the sar command. Note: My load-gen script uses the ping command to generate network I/O.
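
The collected sar output itself is shown in the original post; the kind of invocations involved would look roughly like this (interval and sample count are arbitrary):

sar -n DEV 5 12   # per-interface network statistics: 12 samples, 5 seconds apart
sar -q 5 12       # run-queue length and load averages over the same window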

The above output shows that the lo

View original post 602 more words

Understanding Linux Load Average – Part 2

The Dutch Prutser's Blog

In part 1 we performed a series of experiments to explore the relation between CPU utilization and Linux load average. We came to the conclusion that CPU utilization clearly influences the load average. In part 2 we will continue our experiments and take a look if disk I/O also influences the Linux load average.

Disk I/O and load average

The first experiment is starting 2 processes performing disk I/O on an otherwise idle system to measure the amount of I/O issued, the load average and CPU utilization using the sar command. BTW: My load-gen script uses the dd command to generate disk I/O.
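
The actual sar output appears in the original post; a generic invocation for this kind of measurement might be:

sar -b 5 12   # I/O and transfer-rate statistics: 12 samples, 5 seconds apart
sar -q 5 12   # run-queue length and load averages
sar -u 5 12   # CPU utilization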

The -b command line option given to sar tells it to report disk I/O statistics. The above output tells us that on average 48207 blocks per second were written to disk and almost nothing was read. What effect does this have on the load average?

The run-queue utilization…

View original post 565 more words

Understanding Linux Load Average – Part 1

Very nice article on how to detect load average issues.

The Dutch Prutser's Blog

A frequently asked question in my classroom is “What is the meaning of load average and when is it too high?”. This may sound like an easy question, and I really thought it was, but recently I discovered that things aren’t always as easy as they seem. In this first of a three-part post I will explain what the meaning of Linux load average is and how to diagnose load averages that may seem too high.

Obtaining the current load average is very simple by issuing the uptime command:
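
The output is not reproduced in this excerpt; for reference, the load averages are the three comma-separated values printed at the end of the uptime output:

uptime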

But what is the meaning of these 3 numbers? Basically load average is the run-queue utilization averaged over the last minute, the last 5 minutes and the last 15 minutes. The run-queue is a list of processes waiting for a resource to become available inside the Linux operating system. The example above indicates that on average there were 10.52 processes waiting…

View original post 836 more words

Serving Flask with nginx

Installing Common Python Tools for Deployment:

apt-get install -y python python-dev python-distribute python-pip
apt-get install -y nginx

Installing The Web Application and Its Dependencies
Download and install Flask and uwsgi using pip

pip install virtualenv
mkdir -p /var/www/ngstones.com && cd /var/www/ngstones.com
mkdir -p /var/log/uwsgi   # for the uwsgi log configured below
virtualenv -p /usr/bin/python venv
source venv/bin/activate
pip install uwsgi
pip install flask

rm -f /etc/nginx/sites-enabled/default

vim /etc/nginx/conf.d/ngstones.conf

server {
    listen 80;
    server_name 10.0.0.2;
    charset utf-8;
    client_max_body_size 75M;

    location / { try_files $uri @ngstones; }
    location @ngstones {
        include uwsgi_params;
        uwsgi_pass unix:/var/www/ngstones.com/demoapp_uwsgi.sock;
    }
}
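
Before restarting nginx, you can check the new configuration for syntax errors:

nginx -t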

vim /var/www/ngstones.com/demoapp_uwsgi.ini

[uwsgi]
#application's base folder
base = /var/www/ngstones.com

#python module to import
app = hello
module = %(app)

home = %(base)/venv
pythonpath = %(base)

#socket file's location
socket = /var/www/ngstones.com/%n.sock

#permissions for the socket file
chmod-socket = 666

#the variable that holds a flask application inside the module imported at line #6
callable = app

#location of log files
logto = /var/log/uwsgi/%n.log

vim /var/www/ngstones.com/hello.py

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello!"

if __name__ == "__main__":
    app.run()

Restart nginx to load the new config file:
service nginx restart
cd /var/www/ngstones.com/
uwsgi --ini /var/www/ngstones.com/demoapp_uwsgi.ini
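
If everything is wired up, the application should now answer through nginx. A quick check from the server itself (10.0.0.2 is the server_name used above):

curl http://10.0.0.2/
# expected response: Hello!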

Finally update your DNS and you are good to go!

How to Install Docker on Ubuntu 14.04 LTS

Docker is an open-source project that automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization on Linux. Docker uses resource isolation features of the Linux kernel such as cgroups and kernel namespaces to allow independent “containers” to run within a single Linux instance, avoiding the overhead of starting virtual machines.

Here are some tips for installing Docker on Ubuntu 14.04 LTS.

Let’s install Docker:
apt-get -y install docker.io

Let’s add a docker symlink and enable bash completion for it:
ln -sf /usr/bin/docker.io /usr/local/bin/docker
sed -i '$acomplete -F _docker docker' /etc/bash_completion.d/docker.io

And finally configure Docker to start when the server boots:
update-rc.d docker.io defaults

Now we can search the registry for an Ubuntu image:
docker search ubuntu

And pull it:
docker pull ubuntu
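
To verify the image works, you can start an interactive container, for example:

docker run -i -t ubuntu /bin/bash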