Tuesday, December 15, 2009

Citrix XenServer 5.5 command line (Storage issue)

The reality is that you can always do more from the command line. Citrix XenCenter is very powerful, but sometimes you will run into issues that can only be resolved from the command line.

I am going to show an example of a situation where you need the command line.
First you have to understand that Citrix Xen identifies each component with a uuid in its database. Every component on your system has a uuid:
1- Host
2- VM
3- Storage
4- Pool and so on


[root@xenserver01 ~]#
[root@xenserver01 ~]# xe host-list
uuid ( RO) : d5003241-d252-4bc7-9485-2fa5838e09f3
name-label ( RW): xenserver01
name-description ( RO): Default install of XenServer

[root@xenserver01 ~]#

My problem: I took out one of the hard drives from xenserver01 and restarted the server, and XenCenter showed my storage resource as unavailable. I tried to delete the SR, but XenCenter would not allow me to perform this operation. The resource stays red on my XenCenter console.

To resolve it, go to the command line and do the following:

[root@xenserver01 ~]# xe sr-list

This command will list all the SRs on your system.

uuid ( RO) : 8311c845-beaf-b0ff-008a-2f03d256ebf5
name-label ( RW): Local Disk2
name-description ( RW):
host ( RO): xenserver01
type ( RO): lvm
content-type ( RO): user

Make sure you copy the uuid of the resource you want to work on. In this case, we copy 8311c845-beaf-b0ff-008a-2f03d256ebf5.

Then run the following command to delete it from the Citrix Xen database.

#xe sr-forget uuid=8311c845-beaf-b0ff-008a-2f03d256ebf5

Now the resource has been forgotten by the Citrix Xen database.
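Since everything in the xe world is driven by these uuids, a tiny guard like this one (my own addition, not part of the xe toolset) can save you from pasting a mangled uuid into a destructive command:

```shell
# Check that a string really looks like a uuid before handing it
# to something destructive like `xe sr-forget`.
is_uuid() {
    printf '%s\n' "$1" | grep -Eq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$'
}

if is_uuid "8311c845-beaf-b0ff-008a-2f03d256ebf5"; then
    echo "uuid looks valid"
fi
```

Running it on the uuid from the listing above prints "uuid looks valid"; a truncated paste prints nothing.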

You can use the other commands the same way, so try:

#xe help --all

This command will show you all available commands. Moreover, the xe command supports TAB auto-completion.

Problems changing LDAP password from Ubuntu 9.04

You would think that the default Ubuntu LDAP client configuration would let you change your password, but it does not. http://blog.carlosgomez.net/2009/10/setting-up-ldap-client-for-ubuntu-904.html

If you are a user and you try to change your password, you will get this result:

passwd: Authentication information cannot be recovered
passwd: password unchanged

The fix .....

Edit the /etc/pam.d/common-password file. Look for use_authtok and delete it. Save the file and you are done.

I do not know if this is a bug or not... but I can tell you that users should be able to change their passwords.

Wednesday, December 9, 2009

Quick tip: Deploying command on multiple servers (easy script)

When you maintain hundreds of servers, a good sysadmin starts to design scripts that make his life easier. Let me show you what I do.

First, make sure you have a file with the list of server names or IPs.

#cat server.list

Second, make sure you have set up ssh keys from your machine (user root) to all the machines on the list. This is the only part that can be complicated; however, if you have deployed machines using a Kickstart server, you can make sure that all the servers in the pool use the same authorized keys in root's home directory.

Now you can create the script.

#!/bin/sh
for i in `cat $1`
do
  ssh -o ConnectTimeout=10 -o BatchMode=yes $i "$2"
done

Very simple, right? I will call it run.sh.

Let me explain it really quick.
The script takes every line of the file (IP or name) and sshes to that machine so it can run $2 (the command).
The ConnectTimeout option makes ssh give up after trying to connect for 10 seconds.
BatchMode guarantees that ssh will never stop to ask for a password, so the loop cannot hang at a prompt.

Then you can use it ....

#./run.sh server.list "/etc/init.d/apache restart"

Very helpful to restart apache on a Web server Farm.
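Before pointing run.sh at real machines, you can dry-run the same loop shape locally, with echo standing in for ssh (the file path and host names here are just examples):

```shell
# Build a throwaway server list and show what the loop would do.
printf 'web01\nweb02\n' > /tmp/server.list
for i in `cat /tmp/server.list`
do
  echo "would run on $i: /etc/init.d/apache restart"
done
```

Once the output looks right, swap the echo back for the real ssh line.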

Let's add something else to the script.

The script runs the commands sequentially, meaning it has to wait for each command to finish before continuing with the next server. If you want to run the command on all the servers at the same time, add the -f flag to the ssh command so ssh goes to the background.


Change this line:

ssh -o ConnectTimeout=10 -o BatchMode=yes $i "$2"

to:

ssh -f -o ConnectTimeout=10 -o BatchMode=yes $i "$2"


Enjoy it....

Quick tip: Scanning scsi bus (Linux) to add new hard drive on VMware and Citrix Xen

Probably this is not a hot topic; however, I regularly go through the trouble of adding more disk space to VMs in both VM environments (VMware and Citrix Xen). You would think it would work the same everywhere, but it is quite different depending on the hypervisor and the Linux flavor.

For Citrix Xen, it is very simple: you just create the disk and attach it to the VM, and then you can run fdisk and create partitions. It is like magic; however, once you add the disk you cannot detach it until you power off the VM.

For VMware, it is not that simple. You can create the disk and add it to the VM; however, the VM does not detect the new disk. That is because the VM needs to rescan the scsi bus to detect new devices.

There are different ways to do that, depending on the Linux OS you are running.

For Red Hat or CentOS, you can force the scan; this blog shows you how: http://misterd77.blogspot.com/2007/12/how-to-scan-scsi-bus-with-26-kernel.html
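The usual trick on those systems is writing to the scan files under sysfs. A sketch (needs root to touch the real files; the sysfs path can be overridden so you can try the loop harmlessly first):

```shell
# Each write of "- - -" (channel/target/lun wildcards) asks the kernel
# to rescan that SCSI adapter's bus for new devices.
SCSI_SYSFS=${SCSI_SYSFS:-/sys/class/scsi_host}
for host in "$SCSI_SYSFS"/host*; do
    [ -w "$host/scan" ] || continue
    echo "- - -" > "$host/scan"
done
```

After the loop, any newly attached disk should show up in dmesg and /proc/partitions.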

For Ubuntu, forcing the scan using the CentOS procedure does not work... so I took a look around and found this utility that can help: http://www.garloff.de/kurt/linux/scsidev/

This rescan utility is also valid for any type of Linux OS, so try it.

Tuesday, December 8, 2009

Quick Tip: Getting DELL asset tag info from linux server

This is a pretty helpful command to get info from hardware.

#dmidecode -s system-serial-number

Wednesday, December 2, 2009

Opensource Storage Solution: FreeNAS and Openfiler

After dealing with both products, I came to the following conclusions.

FreeNAS is a great product for HOME use. It can be installed on a very small machine, supports all the protocols (CIFS, NFS, FTP, iSCSI) used by commercial products, plus a media server and BitTorrent. Basically, the FreeNAS media server allows you to stream music and video on your local network so you can play them with any media client (PS3, PC). It is not commercially supported.

OpenFiler is also a great product. It requires a machine with more RAM and disk space. It has all the protocols used by commercial products; however, it lacks a media server and BitTorrent. It is commercially supported, so I suggest going with OpenFiler for businesses.

Wednesday, November 25, 2009

Configuration management: Netdirector with Ubuntu 9.04 (64bit)

" Research time ..."
I was almost ready to deploy Puppet when I realized that Puppet does not have any GUI. Well, there is one, puppetshow, but when I started to test it I noticed that it is still in development. So I decided to check whether there are other tools for configuration management, and I found Netdirector. My first impression was very good, because they have a pretty good website and they offer commercial services. Moreover, I noticed that the application is Java based for both server and client. Here I show you how to make a quick installation and how to deploy the agent.

First let's get the binary for Linux (32 or 64 bit). My case is 64 bit.

# wget http://sourceforge.net/projects/netdirector/files/GPL%20NetDirector%20Server%20Manager/3.2.3/netdirector.tar.bz2/download

Uncompress the file.

# tar jxvf netdirector.tar.bz2

Make sure you have all the required packages installed.

# apt-get install sun-java6-jre postgresql libpg-java

You may notice that I am installing the postgresql database; that is because the default installation supports this database. If you want to use MySQL, you will have to research a little so you can attach Netdirector to your MySQL database.

#cd netdirector/netdirector/dists/netdirector/main/binary-amd64

Install the package.
# dpkg -i netdirector_3.2.2+tomcat_5.5.27-psql7-all.deb

Now you can access the application on http://serverip:8080/netdirector
user admin
pass admin

Do not forget to create a user and a role. The GUI for users contains more info, and you can create groups of servers.

Installing the netdirector agent (Client)

Get the binary file.
# wget http://sourceforge.net/projects/netdirector/files/NetDirector%20Server%20Agent/Netdirector%20Agent%203.1%20Stable/netdirectoragent-3.1-linux-installer.bin/download

Now, if you try to install the binary on 64bit Ubuntu, nothing is going to happen. The reason is that the binary requires some 32bit libraries. To avoid this problem, install the following package.

#apt-get install ia32-libs

Then install the package in text mode and follow the instructions.

# ./netdirectoragent-3.1-linux-installer.bin --mode text

After installing the agent you can go to the web interface and add the server.

Enjoy it ...

Thursday, November 19, 2009

Installing Groundwork 6 (Nagios) on Ubuntu 9.04 64bits

"Research time .."
Yes, yes... you are probably trying to figure out why Ubuntu. I prefer other Linux distros like Red Hat, CentOS or SUSE; however, sometimes you face developers that like Ubuntu and there is nothing you can do... so I decided to base many of my blog posts on installations for Ubuntu 9.04 (the current version). Moreover, 32bit installations on current servers do not make sense, since most new servers have 64bit support and more than 2GB of memory.

I have installed Groundwork on CentOS with no problem at all; however, if you try to install Groundwork (64bit) on Ubuntu 9.04, you will find one issue.
First, make sure you have Ubuntu upgraded

#apt-get upgrade

then get the Groundwork binary and make it executable

# chmod +x groundwork-6.0-br120-gw440-linux-64-installer.bin

Now wait... you would think that you can just run the binary file to install; however, you will get an error with the agent.bin file. The problem is that agent.bin was dynamically linked against 32bit libraries, and your 64bit Ubuntu does not contain these libraries. For this specific version of Groundwork, I suggest installing the 32bit libraries.

# apt-get install libc6-i386

Then you can install Groundwork. This bug has already been reported: http://www.groundworkopensource.com:8080/browse/GWCE-17
So newer versions will probably have it fixed.

Enjoy it ....

Wednesday, November 18, 2009

Installing vmware tools on Ubuntu 9.04

" Research time ..."
Ubuntu is not an OS supported by VMware, so installing vmware tools on Ubuntu is not an easy task.
First you select to install vmware tools, then mount the cdrom drive on the Ubuntu VM, copy the tar file to your local hard drive, untar the file, and finally run ./vmware-install.pl... WAIT... WRONG... it does not work.

First of all, you have to make sure you have gcc and make installed:

#apt-get install gcc make

Then the modules will not compile, so you will see some errors. You can google and find the patches so that you can compile. Finally, you would think, all this trouble just to get vmware tools installed on Ubuntu...
However, you can grab this script http://chrysaor.info/scripts/ubuntu904vmtools.sh, execute it on your VM, and you will get everything done.
Thanks to http://chrysaor.info/; this guy is providing Debian, OpenBSD, and Ubuntu images with vmware tools installed.

Enjoy it....

Tuesday, November 17, 2009

Installing Cacti on Ubuntu 9.04

" Research time ..." 

Cacti is a network graphing solution based on RRDTool. Installing Cacti on Ubuntu 9.04 is a very easy task.

First you will need an Ubuntu server with the basic installation.

Make sure you have LAMP stack installed.

#tasksel install lamp-server

Then install Cacti package.

#apt-get install cacti cacti-spine

This process will install Cacti 0.8.7b. After the install you can access the installation pages at http://server/cacti/. Follow the instructions and the installation will be finished... or will it?

Access http://server/cacti/ using user: admin and password: admin.
Cacti will ask you to change the password, and then you can get in. At that point I found two problems:

1- Not able to add network devices with SNMP.
2- Thumbnails are not working.

The Ubuntu 9.04 repos install rrdtool 1.3.1; however, Cacti 0.8.7b seems to support only up to rrdtool 1.2.x.
In that case, it is better to get the latest version of Cacti, 0.8.7e, and install it.

Get the package http://www.cacti.net/downloads/cacti-0.8.7e.tar.gz

Enjoy it..

Wednesday, November 11, 2009

Quick Install: MediaWiki on Ubuntu 9.04

 " Research time!!"

MediaWiki is an opensource project that helps you keep your documentation updated. You can probably find a bunch of wiki projects out there; however, it is one of the most complete. This process shows you how to install MediaWiki on Ubuntu.

Start from an Ubuntu server basic installation.

#tasksel install lamp-server

# apt-get install mediawiki

The installation is going to ask you for the mysql password.

Additional Packages

#apt-get install imagemagick mediawiki-math php5-gd

Uncomment the following line in /etc/mediawiki/apache.conf:

Alias /mediawiki /var/lib/mediawiki

Restart apache

Go to your browser and follow the installation steps.

#mv /var/lib/mediawiki/config/LocalSettings.php /etc/mediawiki/LocalSettings.php
#chmod 600 /etc/mediawiki/LocalSettings.php
#rm -Rf /var/lib/mediawiki/config

Logo change

# cp logo.png /var/lib/mediawiki/skins/common/images/wiki.png

User administrator

user: WikiSysop pass:

Make sure to setup your relay for SMTP.

Installing extensions

# apt-get install mediawiki-extensions mediawiki-semediawiki

Extension available on

Enable LDAP authentication

# mwenext LdapAuthentication.php

Add this to LocalSettings.php:

$wgAuth = new LdapAuthenticationPlugin();
$wgLDAPDomainNames = array("LDAPDEV");
$wgLDAPServerNames = array("LDAPDEV"=>"ldap.example.local");

$wgLDAPUseLocal = true;

$wgLDAPEncryptionType = array("LDAPDEV"=>"clear");

$wgLDAPBaseDNs = array("LDAPDEV"=>"dc=example,dc=local");
$wgLDAPSearchAttributes = array("LDAPDEV"=>"uid");

$wgLDAPDebug = 3; //for debugging LDAP
$wgShowExceptionDetails = true;  //for debugging MediaWiki

$wgGroupPermissions['*']['createaccount'] = false;
$wgGroupPermissions['*']['read'] = true;
$wgGroupPermissions['*']['edit'] = false;
$wgGroupPermissions['*']['createpage'] = false;
$wgGroupPermissions['*']['createtalk'] = false;

Enjoy it ....

Tuesday, November 10, 2009

Quick tip: Disk Usage on linux

" Research time !!"

I cannot imagine how many times a disk on a Linux server gets filled up (I would say it is common with virtual machines, since we normally size the disk to fit only the OS). When it happens, system administrators need to find out what filled the disk up. A good and powerful tool is the du command, so I will show you how to use it.

#du -hs *

It will show you the disk usage of everything in the current directory.

#du -s *|sort -n

It will sort the output so you see the largest directory or file at the bottom.

With only these two commands you can easily find the largest directory or file.
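You can also combine both ideas in one line. This variant assumes GNU sort (coreutils 7.5 or newer), whose -h flag understands the K/M/G suffixes that du -h prints:

```shell
# Human-readable sizes, sorted, the five largest entries last.
du -hs -- * 2>/dev/null | sort -h | tail -5
```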

Enjoy it

How to set up a chroot environment with ssh/scp for Linux

"Research Time...."

We need to create user accounts on a server with limited access to the file system. The quick answer: create a chroot environment. The user should be able to ssh or scp to the server without having access to other users' home directories or the root file system.

After a little research, you can find different ways to do this. One approach is using a restricted shell; however, this is not a real chroot environment, because if the user can change the shell he can access the root file system. A second approach is modifying sshd so the user can only see his own home dir, but this requires changing the standard sshd configuration and binary. The final approach, and this is a very clever solution, is creating a chroot environment using the chroot command and modifying the user's shell: http://www.fuschlberger.net/programs/ssh-scp-sftp-chroot-jail/

Basically, you can download the script http://www.fuschlberger.net/programs/ssh-scp-sftp-chroot-jail/make_chroot_jail.sh. Make sure it is executable, then use it as follows:

make_chroot_jail.sh /path/to/chroot-shell /path/to/jail

The script creates a new user with a shell at the path you define, under a jail path.

The chroot shell is stored by default at /bin/chroot-shell. However, you should specify the /bin directory when you need to use a custom path for the jail.

The path to the jail defaults to /home/jail. The script will create a chroot environment under this directory, and the user's directory will be /home/jail/home/user.

This chroot environment allows the user to ssh, scp or sftp.

Another way to do it, but more complex, is using Jailkit from http://olivier.sessink.nl/jailkit/index.html

Enjoy it !

Friday, November 6, 2009

Creating a CPAN mirror

!!! Research Time....making infrastructure easy...

Probably you have a lot of Perl developers on your network, and they require CPAN packages for easy deployment. Moreover, you do not want all the developers getting the packages from the internet every time, because it would consume your bandwidth. The solution for this problem is a local CPAN mirror. The procedure is pretty simple, but it can take some time to synchronize the mirror the first time.

First, let's have a server with enough space to store the mirror data. Let's say you provision 10GB for CPAN (CPAN can consume around 7GB).

Synchronize the CPAN mirror from one of the available mirrors using rsync. I have a separate partition for CPAN mounted on the /cpan directory.

 # /usr/bin/rsync -av --delete ftp.funet.fi::CPAN /cpan

This process can take a long time depending on your internet access, so I suggest you use screen to leave the terminal running.

After the first synchronization, you must make sure that your mirror stays updated (CPAN can have many updates during the day), so I suggest you create a cron job that runs every day at midnight. Edit your crontab and add the following:

#crontab -e

@daily /usr/bin/rsync -av --delete ftp.funet.fi::CPAN /cpan >/dev/null 2>&1
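One refinement I would suggest (flock comes from util-linux and is my addition, not part of the original recipe): wrap the cron entry in flock -n, so that if one night's sync is still running, the next one fails immediately instead of piling up rsync processes:

```
@daily flock -n /var/lock/cpan-rsync.lock /usr/bin/rsync -av --delete ftp.funet.fi::CPAN /cpan >/dev/null 2>&1
```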

Now that you have the CPAN mirror copied to disk, you need to share it on the network. You can use ftp or http; in my case I prefer http with Apache. I already have Apache installed for other mirrors, so basically I just need to give access to the mirror by putting a symlink in Apache's default directory (/var/www).

# ln -s /cpan /var/www/cpan

Then create a CNAME in your local DNS pointing cpan.example.com to server.example.com.

Now you have your mirror available on http://cpan.example.com/cpan/

CPAN client configuration

If your team of developers has already used external CPAN mirrors, they will need to modify their CPAN config to access your local mirror. If it is the first time you are using CPAN, it will ask you to choose a mirror.

# cpan

cpan shell -- CPAN exploration and modules installation (v1.9402)
Enter 'h' for help.

cpan[1]> o conf

In the CPAN shell, "o conf" will show you the configuration.

cpan[2]> o conf urllist

This command will show you the list of mirrors that you have configured.

cpan[3]> o conf urllist flush

This command will delete any mirror on the list.

cpan[6]> o conf urllist push http://cpan.example.com/cpan/

This command will add your local mirror to urllist. Finally, you need to commit the changes:

cpan[7]> o conf commit

Done ....start using your mirror


Wednesday, November 4, 2009

Resize Ext3 partition Linux VM on Citrix Xen

" Research time ..everyday try to compile what you learn..."

Sometimes when you create a VM in a virtual environment (Citrix Xen or VMware), you assign a certain amount of disk space and then realize that you need more. Resizing a partition is not an easy task, since there are a lot of concerns when data is already stored on it. Moreover, resizing a boot partition requires shutting down the server and booting from a CD in rescue mode. Well, if you are shutting down your server anyway, it would be easier to just create a new VM with the size that you want. But what about non-boot partitions? In virtual environments this task is easy, and it can be performed while the machine is on.

The following process shows you how to resize partitions on Citrix Xen; however, I am pretty sure that you can also use it on VMware.

First, you have to unmount the partition that needs to be resized.

# umount /data

Go to the Storage tab in XenCenter for the machine whose filesystem you want to resize. Select the hard drive, and under Size and location choose Change Size.

Now we have the device resized, but the filesystem is still the same size.
I suggest making a filesystem check on the partition on that device first:

# fsck -n /dev/xvdb1

Now the partition is clean, but it is still ext3. Ext3 cannot be resized this way, but ext2 can, so we need to convert the partition from ext3 to ext2. The conversion is basically done by disabling the journal on ext3.

# tune2fs -O ^has_journal /dev/xvdb1

Now we need to make another filesystem check on ext2 format.

# e2fsck -f /dev/xvdb1

This is the scary part: you will have to delete the partition. However, do not worry, you are not going to lose the data; basically you are making sure the partition table is updated with the new size.

# fdisk /dev/xvdb    (remember, fdisk works on devices, not partitions)

Delete partition /dev/xvdb1 and create it again with the new size, keeping the same starting point.

Then run a filesystem check to make sure that everything is running smoothly.

# fsck -n /dev/xvdb1

Then run the resize command so ext2 grows to the new size of the partition.

# resize2fs /dev/xvdb1

Finally, you need to turn the journal back on and mount the partition.

This command turns the partition back into ext3:

#tune2fs -j /dev/xvdb1

#mount /dev/xvdb1 /data
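If the journal-toggling part makes you nervous, you can rehearse it first on a throwaway file-backed image, where no real disk is touched (this sketch needs e2fsprogs; the path and size are arbitrary):

```shell
# Build a small ext2 image, add a journal (ext3), remove it again,
# and fsck it: the same tune2fs steps as above, with zero risk.
dd if=/dev/zero of=/tmp/test.img bs=1M count=8 2>/dev/null
mke2fs -F -q /tmp/test.img
tune2fs -j /tmp/test.img
tune2fs -O ^has_journal /tmp/test.img
e2fsck -fy /tmp/test.img
```

When all four commands come back clean, run the same tune2fs/e2fsck sequence on the real unmounted partition.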

Enjoy it !!!!!

Monday, November 2, 2009

Improving transfer speed for SAMBA

Samba allows you to share directories with Windows clients. Installation is pretty simple: you can install samba on an Ubuntu server with #apt-get install samba. After having samba installed, you have to decide what directory to share and what kind of security to use. Security is very interesting, since you can design your server to use AD security from the network or a local user database. Finally, you have a samba server in your network where you can share directories with all the other users; however, if you start to copy a lot of data to your samba server, you will notice that the transfer speed is very low, I would say almost the same as FTP.
The following tests show how to improve the transfer speed on the Samba server by changing basically two parameters.


The test setup:
One Ubuntu server with Samba
One Ubuntu client with smbclient and smbfs

Testing different configurations.

On server smb.conf
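For illustration only (these are example parameters and example values, not necessarily the two the measurements below used), the smb.conf lines usually tuned for raw transfer speed look like this:

```
[global]
   socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
   read raw = yes
   write raw = yes
```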


On Client:

# mount -t smbfs -o rw,username=mpi //load01/data /mnt
# cd /mnt
# dd if=/dev/zero of=testfile count=10240 bs=10240
10240+0 records in
10240+0 records out
104857600 bytes (105 MB) copied, 3.93339 s, 26.7 MB/s

On server


On client

# dd if=/dev/zero of=testfile count=10240 bs=10240
10240+0 records in
10240+0 records out
104857600 bytes (105 MB) copied, 2.08827 s, 50.2 MB/s

On server

On Client
# dd if=/dev/zero of=testfile count=10240 bs=10240
10240+0 records in
10240+0 records out
104857600 bytes (105 MB) copied, 1.5947 s, 65.8 MB/s

On server:

On Client:
# dd if=/dev/zero of=testfile count=10240 bs=10240
10240+0 records in
10240+0 records out
104857600 bytes (105 MB) copied, 1.56355 s, 67.1 MB/s

On Server:
On Client
# dd if=/dev/zero of=testfile count=10240 bs=10240
10240+0 records in
10240+0 records out
104857600 bytes (105 MB) copied, 1.52957 s, 68.6 MB/s

On Server:

On Client:
# dd if=/dev/zero of=testfile count=10240 bs=10240
10240+0 records in
10240+0 records out
104857600 bytes (105 MB) copied, 1.46897 s, 71.4 MB/s

On Server:

On Client:
# dd if=/dev/zero of=testfile count=10240 bs=10240
10240+0 records in
10240+0 records out
104857600 bytes (105 MB) copied, 1.45731 s, 72.0 MB/s

Note: This test assumes that you are writing data to the Samba share. Changing those values definitely shows an improvement in transfer speed; however, I have not tested all the possible scenarios.
On the server, the local transfer speed is the following:

# dd if=/dev/zero of=testfile count=10240 bs=10240
10240+0 records in
10240+0 records out
104857600 bytes (105 MB) copied, 0.242673 s, 432 MB/s

I still have to verify why the transfer speed is so low compared to creating the file locally. If somebody has found the reason for such a difference, please let me know.

Thursday, October 29, 2009

Installing subversion Server with LDAP on Ubuntu 9.04

A very popular version control system is Subversion. However, many times Subversion requires integration with other systems and protocols. First, Subversion can take advantage of Apache to allow remote access; second, we need some kind of authentication and authorization. The following process will help you put all this together in a very quick and easy way.

I assume you have a LDAP server already set up.
Install Apache and Subversion:

# apt-get install apache2 subversion libapache2-svn subversion-tools

Edit dav_svn.conf

#vi /etc/apache2/mods-available/dav_svn.conf

The file ships with nothing enabled, so we need to add our configuration.
The following configuration uses LDAP authentication and SVN authorization.
In LDAP, users in the svngroup group will have access to the repository.

<Location /svn>
        DAV svn
        SVNParentPath /svn
        AuthzSVNAccessFile /svn/aclfile
        AuthName "Subversion Repos"
        AuthType Basic
        AuthBasicProvider ldap
        AuthzLDAPAuthoritative   on
        AuthLDAPURL ldap://ldap:389/ou=People,dc=example,dc=com?uid
        AuthLDAPGroupAttribute memberUid
        AuthLDAPGroupAttributeIsDN off
        Require ldap-group cn=svngroup,ou=Groups,dc=example,dc=com
</Location>

Enable the module for LDAP authentication:

#a2enmod authnz_ldap


#/etc/init.d/apache2 start

Create the aclfile in /svn to set up the authorization for the repositories.

sysadmin = user

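A minimal aclfile, as a sketch (the group, user and repo names are examples; the format is the standard AuthzSVNAccessFile syntax):

```
[groups]
sysadmin = user

[test:/]
@sysadmin = rw
```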

Create the test repo.

#cd /svn
#svnadmin create test

Then you can test access to your SVN with your browser.



Now let's add WebSVN to browse the repositories from the web. Install the package:

# apt-get install websvn

Follow the configuration screens and you are done.

Then edit the following file.

#vi /etc/websvn/svn_deb_conf.inc

Comment out $config->addRepository("repos 1", "file:///svn"); and save.

Now you can access your repos on http://svn-server/websvn/
However, we should add some security.

Go and edit /etc/websvn/apache.conf and add the LDAP authentication.

<Location /websvn>
   AuthName "WEBSVN"
   AuthType Basic
   AuthBasicProvider ldap
   AuthzLDAPAuthoritative   on
   AuthLDAPURL ldap://ldap:389/ou=People,dc=example,dc=com?uid
   # Require valid-user
   AuthLDAPGroupAttribute memberUid
   AuthLDAPGroupAttributeIsDN off
   Require ldap-group cn=svndev,ou=Groups,dc=example,dc=com
</Location>

Save and restart Apache.

Wednesday, October 28, 2009

Configuration management: Puppet with Ubuntu 9.04

When you have a lot of machines to configure and manage, configuration management tools (like Cfengine or Puppet) become very handy. The following procedure shows you how to install your Puppet server and one client using Ubuntu 9.04.

For Server
#apt-get install puppetmaster

For Client

#apt-get install puppet

On Puppet server:

Edit /etc/puppet/puppet.conf


and add certname

certname = puppetserver.example.com

The certname setting guarantees that the cert is created with the right name.

Edit /etc/puppet/fileserver.conf

This file configures the path for the files stored on the server, and who is allowed
to fetch them.

[files]
  path /etc/puppet/files
  allow *

Copy /etc/sudoers to /etc/puppet/files/etc/ so clients can fetch the file.

Now make sure that the /etc/puppet/ directory contains all of these:

root@puppetserver:/etc/puppet# ls
files  fileserver.conf  manifests  puppet.conf

Then go into manifests and create a directory called classes.

Create a file /etc/puppet/manifests/classes/sudo.pp

# /etc/puppet/manifests/classes/sudo.pp

class sudo {
    file { "/etc/sudoers":
        owner  => "root",
        group  => "root",
        mode   => 440,
        source => "puppet://puppetserver.example.com/files/etc/sudoers",
    }
}

Create a file /etc/puppet/manifests/site.pp

import "classes/*"

# tell puppet on which client to run the class
node puppetclient {
    include sudo
}

Start the puppet master with /etc/init.d/puppetmasterd start

Note: You may get an error with the xmlsimple.rb file. Basically, go to
/usr/lib/ruby/1.8/ and move the xmlsimple.rb file to /usr/lib/ruby/1.8/lib/.

On Puppet client

Edit /etc/puppet/puppet.conf


certname = puppetclient.example.com
server = puppetserver.example.com
runinterval = 60   

The runinterval setting makes the client check the puppet server every 60 seconds (default: 1800).

Then run the following command so the Puppet server receives a certificate request.

# puppetd --test
warning: peer certificate won't be verified in this SSL session
notice: Did not receive certificate
notice: Set to run 'one time'; exiting with no certificate


On the server, list the pending certificate requests:

#puppetca --list

Then sign it.

# puppetca --sign puppetclient.example.com
Signed puppetclient.example.com

Now your client can talk to the master.

#/etc/init.d/puppet start

Enjoy it

Change Interface name (eth1 to eth0) on Ubuntu 9.04

Probably many administrators have run into this problem before, when cloning virtual machines in virtual environments (VMware or Citrix Xen). After cloning the machine and turning it on, there is no IP, and the ifconfig command shows eth1 instead of eth0. Then you go check the interface configuration in /etc/network/interfaces and you see eth0 configured, but what you actually have is eth1.

This procedure shows you how to rename the device from eth1 to eth0.

As root, go and edit /etc/udev/rules.d/70-persistent-net.rules. You will find a line like this:

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:81:5a:fa", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

Look for the line showing NAME="eth1" and change this value to NAME="eth0".
If there is another line showing NAME="eth0", delete it: it contains the configuration of the source machine you cloned from.
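The same two edits can be done non-interactively. This is my own sketch, so try it on a copy first; the RULES variable lets you point it at that copy:

```shell
# Delete any stale NAME="eth0" rule, then rename the eth1 rule to eth0.
RULES=${RULES:-/etc/udev/rules.d/70-persistent-net.rules}
if [ -f "$RULES" ]; then
    sed -i -e '/NAME="eth0"/d' -e 's/NAME="eth1"/NAME="eth0"/' "$RULES"
fi
```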

Save the file.

Then open /etc/network/interfaces and make sure you have the correct configuration for eth0.
In this case I will use a dhcp configuration:

auto eth0
iface eth0 inet dhcp

Finally, you can reboot, or you can reload the rules so they use the new name.

# udevadm trigger

This command reloads the rules with the new name; after that you can restart networking.

# /etc/init.d/networking restart

Friday, October 23, 2009

Hadoop 0.20 on Ubuntu 9.04

Hadoop is a combination of a highly distributed file system (HDFS) and a Map/Reduce framework based on Java. In other words, you can have a cluster of servers that share their hard drives as one big file system, and then process that data using the Hadoop Map/Reduce APIs. This tool can sort and summarize data at speeds that commercial and opensource databases cannot reach. Hadoop belongs to the Apache group, is used by Yahoo and Facebook, and is based on Google's MapReduce and GFS papers.

With all that potential, I decided to give it a try and see how I could implement a cluster of 4 machines.

I used 4 machines (1 cpu, 1GB mem, 10GB hard drive) with Ubuntu 9.04 server.
hadoop01, hadoop02, hadoop03, and hadoop04

Install Java 6 on Ubuntu
#apt-get install sun-java6-jre
#apt-get install sun-java6-jdk

Create the hadoop group and the hadoop user:
root@hadoop01:~# addgroup hadoop
Adding group `hadoop' (GID 1001) ...
root@hadoop01:~# adduser --ingroup hadoop hadoop

Configure ssh key for hadoop user
$ssh-keygen -t rsa -P ""
$cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

Get the latest version of hadoop


Install it in /usr/local/hadoop.
Make sure everything under /usr/local/hadoop is owned by hadoop:hadoop.
Edit /usr/local/hadoop/conf/hadoop-env.sh and set:

export JAVA_HOME=/usr/lib/jvm/java-6-sun

Define the data partition for the node, for example /data.
Make sure everything under /data is owned by hadoop:hadoop.
Make sure /etc/hosts or DNS contains all the cluster node names:

127.0.0.1   localhost localhost.localdomain
x.x.x.x     servername

Configuring Hadoop Master Node 

Since Hadoop contains two processes, HDFS and Map/Reduce, the master node should run both servers.
For HDFS, the service is called namenode.
For Map/Reduce, the service is called jobtracker.

Slave nodes will use the following processes:
For HDFS,  the service will be Datanode.
For Map/Reduce, the service will be Tasktracker.

Edit core-site.xml and mapred-site.xml. (These files are used by Hadoop 0.20; older versions used hadoop-site.xml.)

For core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop01:54310</value>
    <description>The name of the default file system. A URI whose
    scheme and authority determine the FileSystem implementation. The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class. The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
    <description>Default block replication.
    The actual number of replications can be specified when the file is created.
    The default is used if replication is not specified in create time.
    </description>
  </property>
</configuration>

(Adjust the fs.default.name port and the dfs.replication value to your setup.)

For mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop01:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at. If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
</configuration>

These files assume that hadoop01 is the master node for both processes and that /data will contain the HDFS data.
They are the same for all nodes and should be owned by the hadoop user.

Configuring Master and Slaves

On the master node, you will find two files, masters and slaves, in the /usr/local/hadoop/conf directory.
These files must be modified on the master node.

The masters file contains the name of the master.

hadoop@hadoop01:/usr/local/hadoop/bin$ cat /usr/local/hadoop/conf/masters
hadoop01

The slaves file contains the list of slave servers.
hadoop@hadoop01:/usr/local/hadoop/bin$ cat /usr/local/hadoop/conf/slaves
hadoop01
hadoop02
hadoop03
hadoop04

Adding node to Hadoop Cluster(Ubuntu)
The nodes added to the cluster must be installed basically the same way as the master node.

Install Java 6
#apt-get install sun-java6-jre
#apt-get install sun-java6-jdk

Create group hadoop
Create user hadoop

root@hadoop02:~# addgroup hadoop
Adding group `hadoop' (GID 1001) ...
root@hadoop02:~# adduser --ingroup hadoop hadoop

Have DNS or /etc/hosts setup for the cluster so any machine can be accessed by name.
Copy the hadoop user's SSH keys (.ssh/*) from the master node.

Get the Hadoop package (you probably already have it on another Hadoop node).
Untar the contents under /usr/local/hadoop.

Define your data directory: example /data
Copy the configuration files from the master node:

hadoop@hadoop01:/usr/local/hadoop/conf$ scp hadoop-env.sh hadoop03:/usr/local/hadoop/conf
hadoop-env.sh                                                                                                100% 2278     2.2KB/s   00:00
hadoop@hadoop01:/usr/local/hadoop/conf$ scp core-site.xml  hadoop03:/usr/local/hadoop/conf
core-site.xml                                                                                                100% 1321     1.3KB/s   00:00
hadoop@hadoop01:/usr/local/hadoop/conf$ scp mapred-site.xml   hadoop03:/usr/local/hadoop/conf
mapred-site.xml                                                                                              100%  455     0.4KB/s   00:00
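The three scp commands above can be wrapped in a loop over the slave list. Here is a small sketch; it prints each command as a dry run (remove the echo to actually copy), and the host names match this cluster:

```shell
# Dry run: print one scp command per slave host.
# Remove 'echo' to actually copy the files.
slaves=$(mktemp)
printf 'hadoop02\nhadoop03\nhadoop04\n' > "$slaves"
while read -r host; do
  echo scp hadoop-env.sh core-site.xml mapred-site.xml "$host:/usr/local/hadoop/conf/"
done < "$slaves"
```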

Go to the master node and add the new node to the slaves file:

hadoop@hadoop01:/usr/local/hadoop/conf$ vi slaves

Starting the cluster

Make sure that /data is empty on all the nodes. Then, on the master, let's format the namenode:

$/usr/local/hadoop/bin/hadoop namenode -format

Start the HDFS cluster:

$/usr/local/hadoop/bin/start-dfs.sh

Verify that the DataNode has started on all the nodes by running jps:

hadoop@hadoop01:/usr/local/hadoop/bin$ jps
410 Jps
32161 SecondaryNameNode
32054 DataNode
31944 NameNode


hadoop@hadoop03:~$ jps
5832 Jps
5785 DataNode

Finally, start the Map/Reduce cluster:

$/usr/local/hadoop/bin/start-mapred.sh
Note: to stop the cluster, first run $/usr/local/hadoop/bin/stop-mapred.sh and then $/usr/local/hadoop/bin/stop-dfs.sh

Hadoop Web Interfaces

Hadoop comes with several web interfaces which are by default (see conf/hadoop-default.xml) available at these locations:

http://hadoop01:50070/ - the NameNode (HDFS) web UI
http://hadoop01:50030/ - the JobTracker (Map/Reduce) web UI
http://hadoop01:50060/ - the TaskTracker web UI (on each slave)

Finally, you can start to use the Hadoop cluster; however, it will require that you learn the Hadoop API. If you are looking for something more like SQL, try Hive.

Thursday, October 22, 2009

Setting Up LDAP Client for Ubuntu 9.04

Earlier I showed how to set up the LDAP client on CentOS; however, Ubuntu has some differences.


# apt-get install libpam-ldap libnss-ldap nss-updatedb libnss-db

Set LDAP server


Set search Base


LDAP version 3

Local database (NO)

LDAP requires login (YES/NO)

Go to /etc/ldap.conf and add:

nss_base_passwd ou=People,dc=example
nss_base_shadow ou=People,dc=example
nss_base_group ou=Groups,dc=example

Then go to /etc/nsswitch.conf and modify the following lines:

passwd: compat [SUCCESS=return] ldap
group: compat [SUCCESS=return] ldap
shadow: compat [SUCCESS=return] ldap
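With compat listed before ldap, lookups consult the local files first and fall back to the directory. You can verify the resolution order with getent, which follows /etc/nsswitch.conf (shown here with a local account):

```shell
# getent queries each source listed in /etc/nsswitch.conf in order;
# a local account like root resolves from 'compat' (the local files) first
getent passwd root
```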

Tuesday, October 20, 2009

Traffic Shaping on Linux (Ubuntu or Red Hat)

Traffic shaping and bandwidth management are issues that concern many people when Internet resources are limited. This can be a hot topic; however, it requires a lot of administration, so it is probably not a priority for most system administrators. There are a couple of traffic shapers out there, but HTB is the most popular one and the easiest to manage. As you have seen before, I also like to use GUI interfaces such as Webmin, so I have attached the procedure to install the HTB module for Webmin.

Download the HTB script (Hierarchy Token Bucket queuing)

#wget http://downloads.sourceforge.net/project/htbinit/HTB.init/0.8.5/htb.init-v0.8.5?use_mirror=voxel

Copy htb.init to /etc/init.d/htb.init

Create directory /etc/sysconfig/htb

Create rule files in the /etc/sysconfig/htb directory. Assuming only one interface (eth0), this is an example configuration:

# cd /etc/sysconfig/htb
# vi eth0


Save file

#  vi eth0-2.root

# root class containing outgoing bandwidth
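As a sketch, the rule files could look like the following. The class IDs, rates, and network are hypothetical, but the file-name scheme (interface-classid.name) and the RATE/CEIL/LEAF/RULE variables follow the htb.init conventions:

```
# /etc/sysconfig/htb/eth0 -- interface file; 20 is the default class
DEFAULT=20

# /etc/sysconfig/htb/eth0-2.root -- root class containing outgoing bandwidth
RATE=10Mbit
BURST=15k

# /etc/sysconfig/htb/eth0-2:20.lan -- child class shaped to 2Mbit, can borrow up to 10Mbit
RATE=2Mbit
CEIL=10Mbit
LEAF=sfq
RULE=192.168.1.0/24
```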

Now you can start htb.init

# /etc/init.d/htb.init start

Checking stats 

#/etc/init.d/htb.init stats

With webmin module 

Install Tree::DAG_Node

#apt-get install libtree-dagnode-perl

or

# cpan -i Tree::DAG_Node

Then install webmin module.

Go Webmin ---> Webmin Configuration --> Webmin Modules

In the Third Party Module field, enter: http://sehier.fr/webmin-htb/webmin-htb.tar.gz

and Install it.

To check the module, go to Networking ---> Hierarchy Token Bucket queuing

Configuring the rules is the tricky part of this bandwidth manager.

Friday, October 16, 2009

Tracking Stats with Webmin (Centos 5.3 64bit)

As I have said before, Webmin is a powerful tool. It has a third-party module that allows you to track performance and trends on the server itself without relying on external tools.

Open a browser with the Webmin interface http://serverip:10000

Go Webmin --> Webmin configuration --> Webmin Modules

Copy this link on Third Party Modules

Then Install it

The package installed will be called Historic System Statistics

If you click on that you will get an error because there are some packages missing.

The missing package is RRDtool. You can get it from different sources; however, I suggest you use the RPMforge repos.

Go System --> Software Packages and copy this link on (From ftp or http URL)

Then you have to clean the yum database.

Go Others --> Command Shell and run yum clean all

Then go back to System --> Software Packages

On (Packages from Yum) : rrdtool perl-libwww-perl

Install them

Finally go System --> Historic System Statistic

The module starts to configure itself; then start Webminstats.

Now you can have stats and trends running on the server without using external tools.

IPMI Installation on Citrix Xen 5.5

IPMI is a standardized message-based hardware management interface. A hardware chip known as the Baseboard Management Controller (BMC), or Management Controller (MC), implements the core of IPMI.

BMC is already integrated in many servers, so you have to make sure that your system has this already. IBM, HP or Dell servers already have it.

Accessing the BMC remotely requires some kind of network access. When you look at the back of the server you probably cannot see a dedicated BMC NIC: it is usually integrated with NIC1, so if you connect NIC1 to the network the BMC can reach the network as well.

Since Citrix Xen is becoming popular but still does not provide info or repositories to install IPMI tools, I decided to show you how you can have this powerful tool on your system.

My configuration is two Citrix Xen Servers 5.5 running on HP proliant DL160.

Upgrade the ncurses package because ipmitool requires it.

#wget ftp://ftp.pbone.net/mirror/yum.trixbox.org/centos/5/old/ncurses-5.6-7.20070612.i386.rpm
#rpm -Uvh ncurses*

Then install the IPMI service and drivers:

#wget http://mirror.centos.org/centos/5/os/i386/CentOS/OpenIPMI-libs-2.0.6-11.el5.i386.rpm
#wget http://mirror.centos.org/centos/5/os/i386/CentOS/OpenIPMI-2.0.6-11.el5.i386.rpm

Install OpenIPMI-libs first because OpenIPMI requires this package

#rpm -ivh OpenIPMI-libs*
#rpm -ivh OpenIPMI-2.0*

I suggest ipmitool as the client because there is a lot more documentation on how to use it. This website has a lot of ipmitool documentation: http://wiki.adamsweet.org/doku.php?id=ipmi_on_linux

# wget ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home:/netmax/Fedora_8/i386/ipmitool-1.8.9-38.1.i386.rpm
#rpm -ivh ipmitool*

Then start IPMI service

# /etc/init.d/ipmi start

Make sure the IPMI service starts when the machine reboots

# chkconfig ipmi on

Now we can get local info from IPMI using ipmitool:

#ipmitool sdr

It will give you the sensor readings from the motherboard.

#ipmitool lan print

It will give you the BMC NIC information.

[root@xen01 ~]# ipmitool lan set 2 ipsrc static
[root@xen01 ~]# ipmitool lan set 2 ipaddr x.x.x.x
Setting LAN IP Address to x.x.x.x
[root@xen01 ~]# ipmitool lan set 2 netmask x.x.x.x
Setting LAN Subnet Mask to x.x.x.x
[root@xen01 ~]# ipmitool lan set 2 defgw ipaddr x.x.x.x
Setting LAN Default Gateway IP to x.x.x.x
[root@xen01 ~]# ipmitool lan set 2 arp generate on
[root@xen01 ~]# ipmitool lan set 2 arp interval 5
[root@xen01 ~]# ipmitool lan set 2 access on
[root@xen01 ~]# ipmitool lan set 2 user

Then we need to set up user access to IPMI.
The user list can be found with this command:

[root@xen01 ~]# ipmitool user list
[root@xen01 ~]# ipmitool user set name 2 root
[root@xen01 ~]# ipmitool user enable 2
[root@xen01 ~]# ipmitool channel setaccess 2 2 ipmi=on link=on privilege=4
[root@xen01 ~]# ipmitool user set password 2 secret
[root@xen01 ~]# ipmitool user list
[root@xen02 ~]# rmcp_ping -d x.x.x.x

If you do not get a response, you will have to reboot the server and check the BIOS.
Make sure that the BIOS has IPMI set up to share the NIC, and save the changes.

[root@xen02 ~]# rmcp_ping -d x.x.x.x

You should now receive a response from the IPMI IP. Then you can try to access IPMI remotely:

[root@xen02 ~]# ipmitool -H x.x.x.x -U root -a chassis status
System Power : on
Power Overload : false
Power Interlock : inactive
Main Power Fault : false
Power Control Fault : false
Power Restore Policy : previous
Last Power Event :
Chassis Intrusion : inactive
Front-Panel Lockout : inactive
Drive Fault : false
Cooling/Fan Fault : false
Sleep Button Disable : allowed
Diag Button Disable : allowed
Reset Button Disable : allowed
Power Button Disable : allowed
Sleep Button Disabled: true
Diag Button Disabled : true
Reset Button Disabled: true
Power Button Disabled: true
[root@xen02 ~]#

Great!!! Now you can check IPMI stats remotely using any server that has ipmitool installed.

NOTE: This probably applies to any CentOS 5 Linux; however, I installed and tested it on Citrix Xen
because I think it is cool to have IPMI functionality.

Wednesday, October 14, 2009

Webmin: a powerful system administration tool (Setup)

During several years as a System Administrator, I have met people who do not like GUI interfaces for system administration work. I completely disagree with this approach because GUI interfaces facilitate and standardize your work.

One of the big problems in a network infrastructure is the lack of documentation. Top management does not see the advantage of having accurate documentation because they cannot attach it to direct revenue. Accurate documentation is a utopia in real environments, so system administrators must figure out how the environment was built in order to do a good job. However, sysadmins tend to build and process changes in many different ways, which makes tracking changes very difficult.

I suggest WEBMIN as an alternative to standardize and make changes on uneven environments. Why?

  • Webmin is OpenSource (Free)
  • Webmin was created over 10 years ago, and it is also part of the Solaris install packages.
  • Webmin is Perl-based, so it is portable to many OSes. 
  • It has modules to manage many servers. (Apache, Mysql, postfix, sendmail, so on)
  • It has Cluster administration capabilities. 
  • It is very easy to install and does not load the server at all.
Do not be lazy and try it ...

Redhat or Centos installation

# wget http://prdownloads.sourceforge.net/webadmin/webmin-1.490-1.noarch.rpm
#rpm -ivh webmin*

A basic CentOS installation contains all the dependencies.

Then you can access Webmin with the root password at http://serverip:10000

Ubuntu and Debian

#wget http://prdownloads.sourceforge.net/webadmin/webmin_1.490_all.deb
#dpkg -i webmin*

There will probably be missing dependencies, so then you should run:

#apt-get install -f

After the installation, Webmin will identify all the services installed on your server; however, to use a module you have to make sure its module configuration is set up correctly.
Some people tend to get discouraged with Webmin because they do not configure it right.

My experience with Webmin has shown that new OS installations can have 95% of the Webmin modules well configured, while existing server installations may have only 60% well configured.

Tuesday, October 13, 2009

Data Center Management with openQRM on Centos 5.3 64bit (Installation)


A year ago I tested this application and it was pretty awesome, although it was still in development. Now I would like to give it a try and see if it is ready for production environments.

Initially, I got a CentOs 5.3 64 bit installed with 1 cpu and 1GB mem.
First we need to get the packages.

[root@openqrm ~]# mkdir openqrm
[root@openqrm ~]# cd openqrm/
[root@openqrm openqrm]# wget http://sourceforge.net/projects/openqrm/files/openQRM%204.5/RPM%20CentOS5%2064bit/openqrm-server-entire-4.5-centos5.x86_64.rpm/download

Make sure you install the packages required by openQRM. The regular CentOS repos are missing some of the packages, so I suggest you install the RPMforge repos.

[root@openqrm ~]# wget http://dag.wieers.com/rpm/packages/rpmforge-release/rpmforge-release-0.3.6-1.el5.rf.x86_64.rpm

[root@openqrm ~]# rpm -Uvh rpmforge-release-0.3.6-1.el5.rf.x86_64.rpm
warning: rpmforge-release-0.3.6-1.el5.rf.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 6b8d79e6
Preparing...                ########################################### [100%]
   1:rpmforge-release       ########################################### [100%]
[root@openqrm ~]# yum clean all
Loaded plugins: fastestmirror
Cleaning up Everything
Cleaning up list of fastest mirrors

Now you can start to install packages using rpmforge repos.

[root@openqrm openqrm]# rpm -ivh openqrm-server-entire-4.5-centos5.x86_64.rpm
error: Failed dependencies:
        /bin/ash is needed by openqrm-server-4.5-centos5.x86_64
        /usr/bin/expect is needed by openqrm-server-4.5-centos5.x86_64
        bind is needed by openqrm-server-4.5-centos5.x86_64
        dhcp is needed by openqrm-server-4.5-centos5.x86_64
        expect is needed by openqrm-server-4.5-centos5.x86_64
        httpd is needed by openqrm-server-4.5-centos5.x86_64
        iscsi-initiator-utils is needed by openqrm-server-4.5-centos5.x86_64
        mysql is needed by openqrm-server-4.5-centos5.x86_64
        mysql-server is needed by openqrm-server-4.5-centos5.x86_64
        nagios is needed by openqrm-server-4.5-centos5.x86_64
        nagios-devel is needed by openqrm-server-4.5-centos5.x86_64
        nagios-plugins is needed by openqrm-server-4.5-centos5.x86_64
        nagios-plugins-nrpe is needed by openqrm-server-4.5-centos5.x86_64
        perl(XML::Simple) is needed by openqrm-server-4.5-centos5.x86_64
        php is needed by openqrm-server-4.5-centos5.x86_64
        php-mysql is needed by openqrm-server-4.5-centos5.x86_64
        puppet-server is needed by openqrm-server-4.5-centos5.x86_64
        samba is needed by openqrm-server-4.5-centos5.x86_64
        screen is needed by openqrm-server-4.5-centos5.x86_64
        subversion is needed by openqrm-server-4.5-centos5.x86_64
        tftp-server is needed by openqrm-server-4.5-centos5.x86_64
        zabbix is needed by openqrm-server-4.5-centos5.x86_64
        zabbix-agent is needed by openqrm-server-4.5-centos5.x86_64

[root@openqrm openqrm]# yum install bind dhcp expect httpd iscsi-initiator-utils mysql mysql-server nagios nagios-devel nagios-plugins nagios-plugins-nrpe php php-mysql

[root@openqrm openqrm]# yum install puppet-server samba screen subversion tftp-server perl-XML-Simple

OK, we still need to get ash, zabbix, and zabbix-agent:

[root@openqrm openqrm]# wget ftp://mirror.switch.ch/pool/3/mirror/centos/4.7/os/x86_64/CentOS/RPMS/ash-0.3.8-20.x86_64.rpm
[root@openqrm openqrm]# wget ftp://ftp.muug.mb.ca/mirror/fedora/epel/5/x86_64/zabbix-agent-1.4.6-1.el5.x86_64.rpm
[root@openqrm openqrm]# wget

[root@openqrm openqrm]# yum install net-snmp-libs iksemel

Then you can install all the additional RPMs, and finally:

[root@openqrm openqrm]# rpm -ivh openqrm-server-entire-4.5-centos5.x86_64.rpm


After the install, you can access the openQRM console at http://openqrm-server-ip/openqrm/

user: openqrm
password: openqrm


Monday, October 12, 2009

Citrix Xen Server Setup

Setting up a Citrix Xen Server is very easy; however, you have to consider the design before starting to deploy a Citrix Xen Server farm.

First, I would like you to think about networking. For a single server, one NIC is enough; however, if you want to grow your farm and add servers to a pool, you will have to use a second NIC. Why? Well, if you are going to use shared storage in a pool of servers (pools only make sense with shared storage), Citrix Xen will require one NIC for management and a second NIC for storage. The storage NIC can be shared with VM network traffic, but that is not recommended for a production environment.

Citrix Xen Server standalone = 1 NIC
Citrix Xen Server pool (shared storage) = at least 2 NICs; for production environments, at least 3 NICs

Second, make sure that you have a 64-bit processor. In addition, remember to enable VT in the BIOS so you can run fully virtualized (hardware-assisted) guests such as Windows. Paravirtualized guests, which do not need VT, can give you a lot better performance.

Third, make sure you have a lot of memory. Basically, memory will limit the number of VMs that you can create. You can overprovision memory; however, the overall performance of the virtual machines will suffer because VM memory will have to swap to disk.

Now we have our server ready for the install, so you have to download the ISOs for Citrix Xen Server.


There are two ISOs: one is the Citrix Xen Server itself and the second one is the Linux guest support. I suggest you download both and burn them to CDs. The Linux guest support CD contains some additional templates that are good to have.

Boot the machine with the Citrix Xen Server CD and follow instructions.
Only configure one NIC, with your management IP.

In addition, do not forget to download XenCenter on your Windows desktop and install it.

After the XenCenter installation, you can set the management IP and access Citrix Xen Server remotely.

If you want to add another server, follow the installation procedure and then add it to XenCenter.

In XenCenter, you can configure everything else you need. I recommend configuring the other NICs. If you only have one more NIC and you want to enable VLANs, do so on the switch and then create the sub-interfaces for each VLAN in XenCenter. If you want to create a pool, remember that all the servers must have the same configuration and network access.

After the pool is created, everything will be set up at the pool level and not just for one server.
For installing new VMs, I suggest using NFS or CIFS to hold the ISOs. You just have to add that storage on the Storage tab in XenCenter and all the ISOs will be available to all VMs.
You can create machines with any OS; however, Citrix Xen will tell you that it only supports the ones it has templates for. So try to use a template for the creation and then boot the VM with the ISO for that template. The template guarantees that the VM is compatible with XenTools; however, this package is not required for the machine to operate.

Finally, set up the shared storage. Shared storage gives you the flexibility to move a machine from one server to another (XenMotion) without losing service or network connectivity. A normal setup will include some kind of iSCSI target (shared storage) accessible by the Citrix Xen Servers.
I recommend you check the open-source SAN solutions (Openfiler, FreeNAS), or if you have the budget, go for a commercial solution.

VMWare vs Citrix Xen Comparisons ....

I have used both products for a long time and I can tell you that Citrix Xen has the better virtualization strategy for any business size.

Let's make the point.

VMware Facts

  • VMware has been around the longest time.
  • They started offering VMware Server for free; however, this product does not perform very well in production environments.
  • After a while, they decided to offer the ESXi version for free. It was a good move; however, ESXi does not have console access because it was designed to be embedded. This means limited access and flexibility.
  • VMware never allows public performance comparisons. This is very obvious: why would a company do that? Furthermore, they created their own performance benchmark. 
  • VMware is a commercial product that requires a license for everything. Call your sales rep and ask about prices... Moreover, VMware needs to be careful with piracy. 
  • VMware did not offer paravirtualization. If I remember correctly, they started to offer paravirtualization only recently. Paravirtualization gives a lot better performance than full virtualization.
  • VMware Virtual Center is sold separately and it only runs on Windows. This product is required to get most of the advantages of virtualization.

Citrix Xen Facts

  • Although it is a pretty new product, the Xen hypervisor has been around for a while.
  • First, people must distinguish between Citrix Xen and Xen. Citrix Xen is a whole suite that uses Xen; Xen is the free hypervisor built by the Linux community. You can use Citrix Xen or plain Xen separately.
  • Initially, Citrix Xen was not free because Citrix felt very confident about the product; however, VMware had basically most of the market. The market was reluctant to change to the new platform, so Citrix's offer was not too attractive. Finally, they decided to offer Citrix Xen 5 for free with all the features.
  • Citrix Xen is a commercial product based on the open-source Xen. Citrix Xen Server is free with all features. If you want to go further, you can add more features by buying an Essentials or Platinum license.
  • Citrix Xen has always offered paravirtualization. Paravirtualization was the core of the Xen hypervisor.
  • The Citrix Xen console (vs. VMware VC) is a client that connects directly to the pool of Citrix Xen Servers. It does not require a dedicated server.
Both products can offer you the same functionality; however, do we use all of it?

Let's review what we need.

Required options:
  • Storage Support (ISCSI, local storage,NFS)
  • Cloning and Snapshot Capabilities.
  • Guest OS support.
  • Vmotion or Xenmotion capabilities. 
  • Virtual switches support (multiple VLANs)
  • Good Performance (Paravirtualization)
  • Centralized console (Virtual Center or Citrix Xen console)
  • Templates support.
Optional requirements.
These are options that are good to have; however, I have not seen people using them a lot. In addition, they can be provisioned in other ways.
  • HA  and DR
  • Specialized Storage support
  • Automatic Load Balancing 
  • Fast Provisioning
Now it is almost clear. If I have Citrix Xen 5 or 5.5 for free, I can provide all the required options; if I want the optional requirements, I have to pay, or I can deploy different solutions. In the VMware world, I can use ESXi for free; however, I will have to buy Virtual Center to manage all those servers. Moreover, I will need a Windows license to install VC.

In conclusion, both products are great; however, I think that Citrix Xen offers you more for free, so your learning curve and testing environment can be more productive.

Thursday, October 8, 2009

Nagios replaced with Groundworks

After several years of using Nagios with its old interface, I realized that there should be something out there that can simplify the configuration. I tested a lot of configuration tools for Nagios, but none was quite as good as I expected. Finally, I decided to test other monitoring tools: Zenoss, Zabbix, Hyperic, Opsview, and Groundworks.

Zenoss : Enterprise ready, Opensource/Commercial,  Nice GUI,  and support nrpe.

Zabbix : Enterprise ready, Opensource/Commercial,  Nice GUI,  and uses its own agent.

Hyperic: Enterprise ready, Opensource/Commercial,  Nice GUI, and uses its own agent.

Opsview: Not enterprise ready, Opensource, Clean GUI, and Nagios based.

Groundworks:  Enterprise ready, Opensource/Commercial,  Nice GUI, and Nagios based.

All are wonderful tools, easy to install, and they provide the service you need. However, Nagios has been around the longest and people are very familiar with it, so I decided to focus on Opsview and Groundworks. Well... as always, you must be covered by some kind of support... so I suggest you go with Groundworks.

It is very easy to install ....

First, get a clean installation of Linux... I recommend CentOS... (but that is another discussion).

Second, download the binary file from http://www.groundworkopensource.com/community/downloads/6.0-download.html

Third, execute the file and follow the instructions.

Finally, you can configure it through the web interface. You can use autodiscovery, add your own commands, and so on.