Wednesday, November 25, 2009

Configuration management: Netdirector with Ubuntu 9.04 (64-bit)

" Research time ..."
I was almost ready to deploy Puppet when I realized that Puppet does not have any GUI. Well, there is one, puppetshow, but when I started to test it I noticed that it is still in development. So I decided to check whether there are other tools for configuration management, and I found Netdirector. My first impression was very good because they have a pretty good website and they offer commercial services. Moreover, I noticed that the application is Java based for both server and client. Here I show you how to do a quick installation and how to deploy the agent.

First, let's get the binary for Linux (32 or 64 bit). My case is 64 bit.


# wget http://sourceforge.net/projects/netdirector/files/GPL%20NetDirector%20Server%20Manager/3.2.3/netdirector.tar.bz2/download

Uncompress the file.

# tar jxvf netdirector.tar.bz2

Make sure you have all the required packages installed.

# apt-get install sun-java6-jre postgresql libpg-java

You may notice that I am installing the PostgreSQL database, because the default installation supports this database. If you want to use MySQL you will have to do a little research so you can attach Netdirector to your MySQL database.

#cd netdirector/netdirector/dists/netdirector/main/binary-amd64

Install the package.
# dpkg -i netdirector_3.2.2+tomcat_5.5.27-psql7-all.deb

Now you can access the application at http://serverip:8080/netdirector
user admin
pass admin

Do not forget to create a user and a role. The GUI for users contains more info, and you can create groups of servers.

Installing the netdirector agent (Client)

Get the binary file.
# wget http://sourceforge.net/projects/netdirector/files/NetDirector%20Server%20Agent/Netdirector%20Agent%203.1%20Stable/netdirectoragent-3.1-linux-installer.bin/download

Now, if you try to install the binary on 64-bit Ubuntu, nothing is going to happen. The reason is that the binary requires some 32-bit libraries. To avoid this problem, install the following package.

#apt-get install ia32-libs


Then install the package using text mode and follow the instructions.

# ./netdirectoragent-3.1-linux-installer.bin --mode text


After installing the agent you can go to the web interface and add the server.

Enjoy it ...

Thursday, November 19, 2009

Installing Groundwork 6 (Nagios) on Ubuntu 9.04 (64-bit)

"Research time .."
Yes, yes ... you are probably trying to figure out why Ubuntu. I prefer other Linux distros like Red Hat, CentOS or SUSE; however, sometimes you face developers who like Ubuntu and there is nothing you can do ... so I decided to base many of my posts on installations for Ubuntu 9.04 (the current version). Moreover, 32-bit installations on current servers do not make sense, since most new servers have 64-bit support and more than 2 GB of memory.

I have installed Groundwork on CentOS with no problem at all; however, if you try to install Groundwork (64-bit) on Ubuntu 9.04, you will run into one issue.
First, make sure your Ubuntu is upgraded:
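Refreshing the package lists before upgrading is usually needed so apt sees the latest versions:

# apt-get update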

#apt-get upgrade

Then get the Groundwork installer binary and make it executable:


# chmod +x groundwork-6.0-br120-gw440-linux-64-installer.bin

Now wait ... you would think that you can just run the binary file to install; however, you will get an error with the agent.bin file. The problem is that agent.bin was dynamically linked against 32-bit libraries, and 64-bit Ubuntu does not ship them. For this specific version of Groundwork, I suggest installing the 32-bit libraries.

# apt-get install libc6-i386

Then you can install Groundwork. This bug has already been reported at http://www.groundworkopensource.com:8080/browse/GWCE-17, so newer versions will probably have it fixed.

Enjoy it ....

Wednesday, November 18, 2009

Installing VMware Tools on Ubuntu 9.04

" Research time ..."
Ubuntu is not an OS supported by VMware, so installing VMware Tools on Ubuntu is not an easy task.
First, you can select to install VMware Tools, then mount the CD-ROM drive on the Ubuntu VM, copy the tar file to your local hard drive, untar the file, and finally run ./vmware-install.pl ... WAIT ... WRONG ... it does not work.
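For reference, the usual manual procedure looks roughly like this (a sketch; the exact tarball name depends on your VMware version):

# mount /dev/cdrom /mnt
# cp /mnt/VMwareTools-*.tar.gz /tmp/
# cd /tmp && tar zxvf VMwareTools-*.tar.gz
# cd vmware-tools-distrib
# ./vmware-install.pl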


First ...First ...you have to make sure you have gcc and make installed ...


#apt-get install gcc make


Then ... the modules will not compile, so you will see some errors. You can google around and find the patches so that you can compile them. Finally, you would think all this trouble just to get VMware Tools installed on Ubuntu ...
However, you can grab this script http://chrysaor.info/scripts/ubuntu904vmtools.sh, execute it on your VM, and everything gets done for you.
Thanks to http://chrysaor.info/; this guy provides Debian, OpenBSD, and Ubuntu images with VMware Tools installed.


Enjoy it....

Tuesday, November 17, 2009

Installing Cacti on Ubuntu 9.04

" Research time ..." 

Cacti is a network graphing solution based on RRDtool. Installing Cacti on Ubuntu 9.04 is a very easy task.

First, you will need an Ubuntu server with the basic installation.

Make sure you have the LAMP stack installed.

#tasksel install lamp-server

Then install Cacti package.

#apt-get install cacti cacti-spine

This process will install Cacti 0.8.7b. After the install you can access the installation pages at http://server/cacti/ . Follow the instructions and the installation will be finished ... or will it?

Access http://server/cacti/ using user: admin and password: admin.
Cacti will ask you to change the password before letting you in.

PROBLEMS, PROBLEMS!!!
 1- Not able to add network devices with SNMP.
 2- Thumbnails are not working.

Ubuntu 9.04 repos will install rrdtool 1.3.1; however, Cacti 0.8.7b seems to support only up to rrdtool 1.2.x.
In that case, it is better to get the latest version of Cacti, 0.8.7e, and install it.


Get the package http://www.cacti.net/downloads/cacti-0.8.7e.tar.gz
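A minimal sketch of downloading and unpacking it (assuming you want the source tree under /var/www; adjust the path to your setup):

# cd /var/www
# wget http://www.cacti.net/downloads/cacti-0.8.7e.tar.gz
# tar zxvf cacti-0.8.7e.tar.gz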

Enjoy it..

Wednesday, November 11, 2009

Quick Install: MediaWiki on Ubuntu 9.04

 " Research time!!"

MediaWiki is an open-source project that allows you to keep your documentation up to date. You can probably find a bunch of wiki projects out there; however, MediaWiki is the most complete. This procedure shows you how to install MediaWiki on Ubuntu.

Requirements:
An Ubuntu server, basic installation.

#tasksel install lamp-server

# apt-get install mediawiki

The installation is going to ask you for the MySQL password.

Additional Packages

#apt-get install imagemagick mediawiki-math php5-gd


Uncomment the following line in /etc/mediawiki/apache.conf:

Alias /mediawiki /var/lib/mediawiki


Restart Apache.
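On Ubuntu 9.04 this can be done with:

# /etc/init.d/apache2 restart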


Go to your browser:

http://mediawikiserver/mediawiki/

Follow the installation steps, then move the generated configuration into place and lock it down:

#mv /var/lib/mediawiki/config/LocalSettings.php /etc/mediawiki/LocalSettings.php
#chmod 600 /etc/mediawiki/LocalSettings.php
#rm -Rf /var/lib/mediawiki/config


Logo change

# cp logo.png /var/lib/mediawiki/skins/common/images/wiki.png

User administrator

user: WikiSysop pass:

Make sure to set up your relay for SMTP.
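A minimal sketch, assuming Postfix as the local MTA and a hypothetical relay host mail.example.local: install Postfix, set the relayhost parameter in /etc/postfix/main.cf, and restart it.

# apt-get install postfix

In /etc/postfix/main.cf:

relayhost = mail.example.local

# /etc/init.d/postfix restart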

Installing extensions

# apt-get install mediawiki-extensions mediawiki-semediawiki

Extension available on
/etc/mediawiki-extensions/extensions-available

Enable LDAP authentication

# mwenext LdapAuthentication.php


Add this to LocalSettings.php:

# LDAP CONFIGURATION
$wgAuth = new LdapAuthenticationPlugin();
$wgLDAPDomainNames = array("LDAPDEV");
$wgLDAPServerNames = array("LDAPDEV"=>"ldap.example.local");

$wgLDAPUseLocal = true;

$wgLDAPEncryptionType = array("LDAPDEV"=>"clear");

$wgLDAPBaseDNs = array("LDAPDEV"=>"dc=example,dc=local");
$wgLDAPSearchAttributes = array("LDAPDEV"=>"uid");


$wgLDAPDebug = 3; //for debugging LDAP
$wgShowExceptionDetails = true;  //for debugging MediaWiki

$wgGroupPermissions['*']['createaccount'] = false;
$wgGroupPermissions['*']['read'] = true;
$wgGroupPermissions['*']['edit'] = false;
$wgGroupPermissions['*']['createpage'] = false;
$wgGroupPermissions['*']['createtalk'] = false;




Enjoy it ....

Tuesday, November 10, 2009

Quick tip: Disk usage on Linux

" Research time !!"

I cannot count how many times the disk of a Linux server has filled up (I would say it is common with virtual machines, since we normally size the disk to fit only the OS). When that happens, system administrators need to find out what filled the disk up. A good and powerful tool for this is the du command, so I will show you how to use it.

#du -hs *

This shows the disk usage of each file and directory in the current directory.

#du -s *|sort -n

This sorts the output so you can see the largest directory or file at the bottom.

With only these two commands you can easily find the largest directory or file.
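For example, combined into one line (du -sk reports sizes in kilobytes so the numeric sort works, and tail shows the five biggest entries):

# du -sk * | sort -n | tail -5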


Enjoy it

How to set up a chroot environment with ssh/scp for Linux ...

"Research Time...."

Problem:
We need to create user accounts on a server with limited access to the file system. The quick answer: create a chroot environment. The user should be able to ssh or scp to the server without having access to other users' home directories or the root file system.

Solutions:
After a little research, you can probably find different ways to do this. One approach is using a restricted shell; however, this is not a real chroot environment, because if the user can change the shell he can access the root file system. A second approach is modifying sshd so the user can only see his own home directory; however, this requires changing the standard sshd configuration and binary. The final approach, and this is a very clever solution, is creating a chroot environment using the chroot command and modifying the user's shell: http://www.fuschlberger.net/programs/ssh-scp-sftp-chroot-jail/

Basically, you can download the script http://www.fuschlberger.net/programs/ssh-scp-sftp-chroot-jail/make_chroot_jail.sh . Make sure it is executable, then use it as follows:

make_chroot_jail.sh username /path/to/chroot-shell /path/to/jail


The script creates a new user, with the shell at the path you define, under the jail path.
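For example (a sketch; the user name is just a placeholder):

# chmod +x make_chroot_jail.sh
# ./make_chroot_jail.sh jdoe /bin/chroot-shell /home/jail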

The chroot-shell is stored by default at /bin/chroot-shell; however, you have to pass the shell path explicitly when you also want to specify a custom path for the jail.

The path to the jail is /home/jail by default. The script creates a chroot environment under this directory, and the user's home directory ends up at /home/jail/home/user.

This chroot environment allows the user to ssh, scp, or sftp.

Another, more complex, way to do this is using Jailkit from http://olivier.sessink.nl/jailkit/index.html

Enjoy it !

Friday, November 6, 2009

Creating a CPAN mirror

!!! Research Time....making infrastructure easy...

You probably have a lot of Perl developers on your network, and they need to create custom packages for easy deployment. Moreover, you do not want all the developers to download packages from the Internet every time, because it could consume your bandwidth. The solution for this problem is a local CPAN mirror. The procedure is pretty simple, but it can take some time to synchronize the mirror the first time.

First, set up a server with enough space to store the mirror data. Let's say you provision 10 GB for CPAN (CPAN can consume around 7 GB).

Synchronize the CPAN mirror with one of the available mirrors using rsync. I have a separate partition for CPAN mounted on the /cpan directory.


 # /usr/bin/rsync -av --delete ftp.funet.fi::CPAN /cpan

This process can take a long time depending on your Internet access, so I suggest using screen to leave the transfer running.


After the first synchronization, you must make sure that your mirror stays updated (CPAN can have many updates during the day), so I suggest creating a cron job that runs every day at midnight. Edit your crontab and add the following:

#crontab -e

Add:
@daily /usr/bin/rsync -av --delete ftp.funet.fi::CPAN /cpan >/dev/null 2>&1

Now that you have the CPAN mirror copied to disk, you need to share it on the network. You can use FTP or HTTP; in my case I prefer HTTP with Apache. I already have Apache installed for other mirrors, so basically I just need to give access to this one by putting a symlink in Apache's default directory (/var/www).


# ln -s /cpan /var/www/cpan


Then create a CNAME on your local DNS pointing cpan.example.com to server.example.com.
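If you run BIND, the record in your zone file would look roughly like this (a sketch; adjust the names to your zone):

cpan    IN    CNAME    server.example.com.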

Now your mirror is available at http://cpan.example.com/cpan/

CPAN client configuration

If your developers have already used external CPAN mirrors, they will need to modify their CPAN configuration to access your local mirror. If it is the first time they run CPAN, it will ask them to choose a mirror.

# cpan

cpan shell -- CPAN exploration and modules installation (v1.9402)
Enter 'h' for help.

cpan[1]> o conf



In the CPAN shell, "o conf" will show you the configuration.


cpan[2]> o conf urllist


This command will show you the list of mirrors that you have configured.

cpan[3]> o conf urllist flush


This command will delete any mirror on the list.

cpan[6]> o conf urllist push http://cpan.example.com/cpan/


This command will add your local mirror to urllist. Finally, you need to commit the changes:


cpan[7]> o conf commit


Done ....start using your mirror
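To check that the local mirror is actually being used, install any module and watch the URLs CPAN fetches from; the module name here is just an example:

cpan[8]> install Time::HiRes

The download URLs printed during the install should point at cpan.example.com.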

Wednesday, November 4, 2009

Resize an ext3 partition on a Linux VM on Citrix Xen

" Research time ..everyday try to compile what you learn..."

Sometimes when you create a VM in a virtual environment (Citrix Xen or VMware), you assign a certain amount of disk space and then realize that you need more. Resizing a partition is not an easy task, since there are many concerns when data is already stored on it. Moreover, resizing a boot partition requires shutting down the server and booting from a CD in rescue mode, and if you are shutting down your server anyway it would be easier to just create a new VM with the size that you want. But what about non-boot partitions? In virtual environments this task is easy, and it can be performed while the machine is on.

The following process shows you how to resize partitions on Citrix Xen; however, I am pretty sure you can also use it on VMware.

First, you have to unmount the partition that needs to be resized.

# umount /data


Go to the Storage tab in XenCenter for the machine whose filesystem you want to resize. Select the hard drive, open "Size and location", and change the size.

Now the device has been resized, but the filesystem is still the same size.
At this point I suggest running a filesystem check on the partition on that device:

# fsck -n /dev/xvdb1

Now the partition is clean, but it is still ext3. Ext3 cannot be resized but ext2 can, so we need to convert the partition from ext3 to ext2. The conversion is basically done by disabling the journal on ext3.

# tune2fs -O ^has_journal /dev/xvdb1

Now we need to run another filesystem check on the ext2 filesystem.

# e2fsck -f /dev/xvdb1



This is the scary part: you will have to delete the partition. However, do not worry, you are not going to lose the data; basically you are just making sure the partition table is updated with the new size.

# fdisk /dev/xvdb    (remember, fdisk works on the device, not the partition)

Delete partition /dev/xvdb1 and create it again with the new size.
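Inside fdisk, the key sequence is roughly the following (a sketch; verify the partition number and keep the same starting cylinder so the data stays intact):

d        delete partition 1
n        create a new primary partition 1, accepting the larger default end
w        write the new partition table and exit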





Then run a filesystem check to make sure that everything is still in good shape.

# fsck -n /dev/xvdb1





Then run the resize command so the ext2 filesystem grows to the new size of the partition.

# resize2fs /dev/xvdb1

Finally, you need to turn the journal back on and mount the partition.


This command turns the partition back into ext3:

#tune2fs -j /dev/xvdb1

#mount /dev/xvdb1 /data

Enjoy it !!!!!

Monday, November 2, 2009

Improving transfer speed for SAMBA

Samba allows you to share directories with Windows clients. Installation is pretty simple: you can install Samba on an Ubuntu server with #apt-get install samba. After installing Samba, you have to decide what directory to share and what kind of security to use. Security is very interesting, since you can design your server to use AD security from the network or a local user database. Finally, you have a Samba server on your network where you can share directories with all the other users; however, if you start to copy a lot of data to the Samba server you will notice that the transfer speed is very low, I would say almost the same as FTP.
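As a reference, a minimal share definition in smb.conf might look like this (a sketch, assuming a /data directory and the local user mpi used in the tests below):

[data]
   path = /data
   writable = yes
   valid users = mpi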
The following tests show how to improve the transfer speed on the Samba server by changing basically two parameters.

Requirements:

One Ubuntu server with Samba
One Ubuntu client with smbclient and smbfs (the install command is shown below)
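If the client packages are not there yet, on Ubuntu they can be installed with:

# apt-get install smbclient smbfs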

Testing different configurations

On server smb.conf

socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=8192 SO_SNDBUF=8192

On Client:

# mount -t smbfs -o rw,username=mpi //load01/data /mnt
# cd /mnt
# dd if=/dev/zero of=testfile count=10240 bs=10240
10240+0 records in
10240+0 records out
104857600 bytes (105 MB) copied, 3.93339 s, 26.7 MB/s

On server

socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=16384 SO_SNDBUF=16384

On client

# dd if=/dev/zero of=testfile count=10240 bs=10240
10240+0 records in
10240+0 records out
104857600 bytes (105 MB) copied, 2.08827 s, 50.2 MB/s


On server
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=32768 SO_SNDBUF=32768

On Client
# dd if=/dev/zero of=testfile count=10240 bs=10240
10240+0 records in
10240+0 records out
104857600 bytes (105 MB) copied, 1.5947 s, 65.8 MB/s


On server:
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=65536 SO_SNDBUF=65536

On Client:
# dd if=/dev/zero of=testfile count=10240 bs=10240
10240+0 records in
10240+0 records out
104857600 bytes (105 MB) copied, 1.56355 s, 67.1 MB/s

On Server:
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=262144 SO_SNDBUF=262144
On Client
# dd if=/dev/zero of=testfile count=10240 bs=10240
10240+0 records in
10240+0 records out
104857600 bytes (105 MB) copied, 1.52957 s, 68.6 MB/s


On Server:
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=524288 SO_SNDBUF=524288

On Client:
# dd if=/dev/zero of=testfile count=10240 bs=10240
10240+0 records in
10240+0 records out
104857600 bytes (105 MB) copied, 1.46897 s, 71.4 MB/s


On Server:
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=1048576 SO_SNDBUF=1048576

On Client:
# dd if=/dev/zero of=testfile count=10240 bs=10240
10240+0 records in
10240+0 records out
104857600 bytes (105 MB) copied, 1.45731 s, 72.0 MB/s


Note: This test assumes that you are writing data to the Samba share. Changing those values definitely shows an improvement in transfer speed; however, I have not tested all possible scenarios.
On the server, the local transfer speed is the following:

# dd if=/dev/zero of=testfile count=10240 bs=10240
10240+0 records in
10240+0 records out
104857600 bytes (105 MB) copied, 0.242673 s, 432 MB/s


I still need to verify why the transfer speed is so low compared to creating the file locally. If somebody has found the reason for such a difference, please let me know.