Installing Cabot on Debian

Let’s say we want to install Cabot on a server, but neither on AWS nor on DigitalOcean. And because we like challenges, let’s use Debian Wheezy 7.5 instead of the recommended Ubuntu 12.04 LTS. Ready?

We will try to follow the Cabot quickstart to perform the installation.
But first, we must set up a few things. I’ve discovered that during installation, Cabot locks the password of the root account using passwd -l root. So be aware that you won’t be able to log in as root over SSH if you haven’t set up key-based authentication. As The Hitchhiker’s Guide to the Galaxy would say: “Don’t Panic”, you can reverse the process with passwd -u root. As a safety measure, you could also create another user with adduser mynewuser and give it admin rights by adding it to the sudoers file with visudo.
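For example (the username and the sudoers line below are illustrative), the extra admin user could be set up like this:

 adduser mynewuser
 visudo   # then add a line such as: mynewuser ALL=(ALL:ALL) ALL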

So let’s create our keys and configure SSH.
Generate the key:

ssh-keygen -t rsa

Now that we have a key named id_rsa.pub in our .ssh directory, we should do the following (a command sketch for the manual steps follows the list):

  1. Generate the key pair.
  2. Get the public key over to the right user on the right host.
  3. If that user doesn’t already have an .ssh directory, create one AND set its permissions to 700 (rwx------).
  4. If that user doesn’t already have an .ssh/authorized_keys file, create one AND set its permissions to 600 (rw-------).
  5. Append, don’t overwrite(!), the new key to that file.
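If you go the manual route, here is a minimal sketch of steps 3 to 5, assuming the public key has already been copied to the remote host as ~/id_rsa.pub:

 mkdir -p ~/.ssh && chmod 700 ~/.ssh
 touch ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys
 cat ~/id_rsa.pub >> ~/.ssh/authorized_keys  # append, don't overwrite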

Or we can let ssh-copy-id do all of this in one go:

 ssh-copy-id -i ~/.ssh/id_rsa.pub root@hostname.org

You should now be able to log in without having to type in a password.

Before going further, we need to install Fabric, as it is used to provision the server and deploy Cabot.

pip install fabric

Fabric also requires some dependencies, which are described on the Fabric installation page.

Next step: clone the repository. I am using the deploy branch of the lincolnloop fork, which includes awesome Python packaging.

 git clone https://github.com/lincolnloop/cabot.git
 cd cabot
 git checkout deploy

Modify the configuration (with this fork, you might need development.env instead):

 cp conf/production.env.example conf/production.env
 vim conf/production.env

Before doing anything, check whether nodejs can be installed from the package repositories: aptitude search node. If nodejs can’t be found, you’ll need to install it manually.
A few changes are needed before provisioning our server. Edit the file bin/setup_dependencies.sh
and remove lines 57 and 58, which contain ‘nodejs’ and ‘npm’.
Then install nodejs manually; npm should come with it.
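One way of installing Node.js manually is to build it from source (the version number below is only an example; pick a current release from nodejs.org):

 wget http://nodejs.org/dist/v0.10.29/node-v0.10.29.tar.gz
 tar xvf node-v0.10.29.tar.gz
 cd node-v0.10.29
 ./configure
 make
 sudo make install  # installs both node and npm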

fab provision -H root@your.server.hostname

Once it’s done, we can deploy Cabot:

 fab deploy -H ubuntu@your.server.hostname

You should get an error, because Cabot tries to use upstart, which Debian Wheezy doesn’t provide by default.
To run Cabot anyway, log in to your server as the ubuntu user.

 cd 2014-06-19-e662635

(Your directory will have a different name following a similar pattern: year-month-day-whatever.)

 foreman start

Congratulations, Cabot should now be running!

[Screenshot: Cabot]

Things to do

I will certainly add a simple init.d script to my Cabot fork so that it can be run easily (a rough sketch of such a script follows), and I may change a few other things, such as the ubuntu username.
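A minimal sketch, assuming Cabot was deployed for the ubuntu user; the release directory below is illustrative and must be adjusted to yours:

 #!/bin/sh
 ### BEGIN INIT INFO
 # Provides:          cabot
 # Required-Start:    $network $remote_fs
 # Required-Stop:     $network $remote_fs
 # Default-Start:     2 3 4 5
 # Default-Stop:      0 1 6
 # Short-Description: Run Cabot through foreman
 ### END INIT INFO

 CABOT_USER=ubuntu
 CABOT_DIR=/home/ubuntu/2014-06-19-e662635   # adjust to your release directory

 case "$1" in
   start)
     # run foreman in the deployed release as the unprivileged user
     su - "$CABOT_USER" -c "cd '$CABOT_DIR' && foreman start" &
     ;;
   stop)
     pkill -u "$CABOT_USER" -f foreman
     ;;
   *)
     echo "Usage: /etc/init.d/cabot {start|stop}"
     exit 1
     ;;
 esac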

Understanding how to install and run Cabot took me a few hours. I was thrown off by the quickstart talking about AWS and DigitalOcean servers. I also tried to install it manually but didn’t manage to get it to work, although I was close (I think. Or at least I hope ^^). Installing it on Debian added some difficulties, but nothing insurmountable. As a conclusion, I must say that Cabot is worth the effort: it provides a great way to monitor your services with HTTP checks, and the ability to alert based on Graphite metrics is just priceless.

Discovering log management with ELK

As part of my internship, I am currently looking at monitoring solutions, so I had the opportunity to test the Elasticsearch, Logstash, Kibana trio, known by the abbreviation ELK. Logstash makes it simple to aggregate logs coming from different sources, Elasticsearch takes care of storing them and making them available, and Kibana displays them on a highly customizable dashboard. The following instructions gave me a quick overview of how the ELK stack works, locally and in a very simple case of system log management.

Downloading the software

wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.0.1.tar.gz
wget https://download.elasticsearch.org/logstash/logstash/logstash-1.3.3-flatjar.jar
wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.0milestone5.tar.gz

Extraction

tar xvf elasticsearch-1.0.1.tar.gz
tar xvf kibana-3.0.0milestone5.tar.gz

Elasticsearch

cd elasticsearch-1.0.1/

The elasticsearch.yml configuration file lives in config/. There is no need to touch it for a local test, though you could set the cluster.name and node.name parameters to personalize the installation.
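For example, in config/elasticsearch.yml (both values are illustrative):

 cluster.name: elk-test
 node.name: "wheezy-node"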

Start Elasticsearch:

./bin/elasticsearch
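You can check that it is up by querying it (by default it listens on port 9200):

 curl http://127.0.0.1:9200

It should answer with a small JSON document containing the node name and version.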

Logstash

Create a logstash.conf configuration file:

touch logstash.conf

We are going to read the system’s log files, so it may be necessary to run Logstash as root so that it can read them. Only use this approach during the testing phase.

File contents:

input {
    file {
        type => "linux-syslog"
        path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
    }
}
output {
    stdout { }
    elasticsearch_http {
        host => "127.0.0.1"
    }
}

See the documentation for the elasticsearch_http output.

Start Logstash:

sudo java -jar logstash-1.3.3-flatjar.jar agent -f logstash.conf

New log entries should now be picked up by Logstash and stored by Elasticsearch. We should therefore be able to visualize them with Kibana.
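To verify that events are actually being indexed, we can list the indices; Logstash creates one index per day, named logstash-YYYY.MM.DD:

 curl 'http://127.0.0.1:9200/_cat/indices?v'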

Kibana

cd kibana-3.0.0milestone5/

Edit the config.js file and change the line:

elasticsearch: "http://"+window.location.hostname+":9200",

to

elasticsearch: "http://127.0.0.1:9200",

This change lets us open the index.html file directly in our browser to access Kibana, without having to set up a server such as Apache to serve the files.

Result

[Screenshot: Kibana dashboard]

Adding an authentication mechanism for access to Kibana can be done simply by using the fangli/kibana-authentication-proxy project.