Tuesday, November 23, 2010

Rollback application deployments with yum and RPM

In many cases, you will need a software repository (YUM repo) set up on your network so you can manage packages across your Linux machines. The regular process is to push the new package to the environment, test it, and roll back the changes if needed.

Using yum in combination with RPM as your package management tools is a great strategy, since it lets you apply consistency and best practices. When a package is updated via yum, several RPM packages can be installed at once; a single package can also be updated using the -U option with rpm.

#yum update

or

#rpm -Uvh foo.rpm

If you enable the repackaging option, every time you install an update the old package is saved in RPM form under the /var/spool/repackage directory. With rpm, you will need to use the --repackage option:

# rpm -Uvh --repackage foo.rpm

With YUM, you can enable it in /etc/yum.conf by adding the following line:

tsflags=repackage

and the /etc/rpm/macros.up2date file needs to have the following line:

%_repackage_all_erasures 1

Finally, if you need to roll back changes, you will have to use the rpm command.

# rpm -Uvh --rollback '2 hours ago'

The restore will revert to the packages that were installed 2 hours ago.
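Before rolling back, it can help to check which repackaged RPMs are actually available; a quick sketch (the package filename shown is a placeholder):

```shell
# List repackaged RPMs available for rollback (default spool directory)
ls -lt /var/spool/repackage/

# Inspect one of them before restoring (foo-1.0-1.i386.rpm is a placeholder)
rpm -qpi /var/spool/repackage/foo-1.0-1.i386.rpm
```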

I hope you enjoy it.

Thursday, March 4, 2010

Web Load Testing with OpenSource Tools (Linux)

There are many commercial tools that can help you estimate the load your web servers can handle; however, there are also several open source tools that can do the job.

After my research, I found the following tools:

  1. Openload
  2. httperf
  3. ab
  4. siege

These tools are really nice and have a lot of testing options; however, I was a little disappointed when I tried to use POST methods. Moreover, they only use a single source IP to create connections to the server, and the reports were very limited. If I wanted to simulate connections from different IPs, I had to create scripts. So I decided to check out curl as a scripting base.

Curl is a very powerful command line tool that can create HTTP or HTTPS connections. Moreover, it supports POST methods and cookies.

Example:

#curl -c cookie.txt -d "login=user&password=password" http://webserver/login.php

This command will pass the login/password (POST data) to the form and store the cookie in the cookie.txt file. Then you can use the cookie to access restricted pages.

Example:

#curl -b cookie.txt http://webserver/search.php
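Combining both calls, a crude load script with plain curl might look like this (the URL, credentials, and client count are placeholders):

```shell
#!/bin/sh
# Launch 10 parallel clients; each logs in and keeps its own cookie jar
for i in $(seq 1 10); do
  ( curl -s -c "cookie-$i.txt" \
         -d "login=user&password=password" \
         http://webserver/login.php > /dev/null 2>&1
    curl -s -b "cookie-$i.txt" \
         http://webserver/search.php > /dev/null 2>&1 ) &
done
wait
echo "all clients finished"
```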

Great tool, don't you think? Well, let's google for somebody who has already done load-testing scripts with curl.

Voila! I found a tool that is great for creating load tests: CURL-LOADER.

This tool has all the features that I need for testing:
  • Multiple clients.
  • Multiple IPs.
  • Post Method available.
  • Complete sequence of page access.
  • Reports per client and totals

Basically, you have to download the latest tarball from http://sourceforge.net/projects/curl-loader/files/ and make sure you have the GCC compiler, OpenSSL, and the OpenSSL development packages installed.

Then do
$tar zxfv curl-loader-.tar.gz
$cd curl-loader-

$make optimize=1 debug=0
#make install

The format to use this tool is basically:

#curl-loader -f script.conf

Where script.conf is the file that contains the parameters for the load test.
Some examples can be found in the following directory:

#cd /usr/share/doc/curl-loader/conf-examples/

However, I can show you the following example:

# cat script.conf
########### GENERAL SECTION ###################
#
BATCH_NAME=script-results
CLIENTS_NUM_MAX=255
CLIENTS_RAMPUP_INC=2
INTERFACE=eth0
NETMASK=22
IP_ADDR_MIN=10.100.0.0
IP_ADDR_MAX=10.100.0.254
IP_SHARED_NUM=255
URLS_NUM=4
CYCLES_NUM= 200

########### URLs SECTION #######################

### Login URL - cycling

# GET-part
#
URL=http://webserver/login.php
URL_SHORT_NAME="Login-GET"
REQUEST_TYPE=GET
TIMER_URL_COMPLETION = 5000
TIMER_AFTER_URL_SLEEP =1000

# POST-part
#
URL=""
URL_SHORT_NAME="Login-POST"
URL_USE_CURRENT=1
USERNAME= user
PASSWORD= pass
REQUEST_TYPE=POST
FORM_USAGE_TYPE= SINGLE_USER
FORM_STRING= username=%s&password=%s
TIMER_URL_COMPLETION = 4000
TIMER_AFTER_URL_SLEEP =1000

### Cycling URL
#
URL=http://webserver/search.php?action=search&keyword=test
URL_SHORT_NAME="Service List"
REQUEST_TYPE=GET
TIMER_URL_COMPLETION = 5000
TIMER_AFTER_URL_SLEEP =1000

### Logoff URL - cycling, uses GET and cookies to logoff.
#
URL=http://webserver/logout.php
URL_SHORT_NAME="Logoff"
REQUEST_TYPE=GET
TIMER_URL_COMPLETION = 0
TIMER_AFTER_URL_SLEEP =1000

A quick explanation is the following:

1- Results will be stored in files named after BATCH_NAME.
2- The machine will use 255 shared IPs. Two new clients start every second, up to a maximum of 255.
3- Each client will go through 4 pages: the login page, the POST of user and password, a search, and logout.
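After a run, the results land in files named after BATCH_NAME (the exact file extensions are what I observed; they may vary between curl-loader versions):

```shell
# Per-URL summary counters and totals
cat script-results.txt

# Detailed per-interval operational log
less script-results.log
```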


Enjoy it!!!

Thursday, February 25, 2010

Creating SSL certificates for Apache

First we need to understand the concepts of how SSL works.
When a client app (browser) makes a connection to the server (Web Server) via SSL (normally port 443), the initial communication creates the encrypted channel. This process is done as follows:

1- Server provides a certificate file (signed by a CA) to the client.
2- Client verifies the certificate file with the root CA (Certificate Authority). For external services, we can use Verisign, and for internal services we can create our own CA server.
3- Client also verifies that the lookup name matches the CN (Common Name) of the certificate.
4- Client accepts the connection or sends a warning if the requirements are not met.
5- Client starts to encrypt traffic using the Certificate file.
6- Server decrypts the data using the Private Key.

Basically, whoever has the certificate file can encrypt data but not decrypt it. The only thing that can decrypt the data is the private key, so make sure you store your private key in a safe place.

Getting Certificate Files

1- First generate the Private key

#openssl genrsa -out server.key 2048

2- Generate a Certificate Signing Request (CSR). This file must be sent to the CA (Verisign or an internal CA) so it can be signed.
The request does not contain the private key, but the CA will sign it using their own private key.

# openssl req -new -key server.key -out server.csr

You will then be asked several questions; the most important one is the Common Name. The Common Name must match the exact name used to access the server that the certificate (CRT file) is for.

When you buy certificates from Verisign, they have a website where you post the CSR, and they will send you back a certificate file. For a Microsoft CA, you will have to submit the CSR file as a base64-encoded PKCS #10 request; the Microsoft CA will return a CER file that needs to be converted to a CRT file.

Converting the Microsoft CA CER file to a CRT file:

# openssl x509 -in server.cer -inform d -out server.crt

Finally, add the following to your Apache SSL configuration:


SSLCertificateFile /path/to/ssl/certificate/server.crt
SSLCertificateKeyFile /path/to/ssl/key/server.key

Note: Make sure your key file is stored in a secure location.
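Before restarting Apache, you can sanity-check that the certificate and key actually belong together by comparing their modulus digests:

```shell
# Show the subject (check the CN) and validity dates of the certificate
openssl x509 -in server.crt -noout -subject -dates

# These two digests must be identical, or Apache will reject the pair
openssl x509 -noout -modulus -in server.crt | openssl md5
openssl rsa -noout -modulus -in server.key | openssl md5
```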

Enjoy it !!!!

Tuesday, February 23, 2010

MySQL Cluster 7 Set up with Ubuntu

If you are reading this page, you probably want to figure out how difficult it is to create a MySQL Cluster. I can tell you that the installation process is not that complicated; the hard part is moving existing databases to this environment, since the MySQL Cluster engine (NDB) has some differences from the other engines (MyISAM, InnoDB, ...).

First, we need to understand that MySQL uses different storage engines. Each storage engine has its own advantages and disadvantages. However, NDB is the only engine that, according to MySQL, can guarantee HA.

Different MySQL configurations can guarantee some degree of HA. The most popular options are the following:

1- Master-Slave Replication: If your master server goes down, you can promote the slave to master. This configuration is considered active-passive, and some data can be lost.

2- Master-Master Replication (Asynchronous): This configuration is great for HA; however, you have to take some precautions to avoid write conflicts. This configuration is considered active-active.

3- MySQL HA with DRBD: DRBD is a service that keeps the file systems of two servers in sync. The synchronization happens at the data block level, so any database corruption is replicated too. This configuration is active-passive.

4- MySQL Cluster: Basically, MySQL introduces a new storage engine that can help to maintain redundancy and HA.

MySQL Cluster

First, we need to recognize the different processes and facts for MySQL Cluster.

The NDB engine stores Indexes and Data on memory by default. However, if you define a tablespace on disk, Data can be stored on disk.


1- ndbd or ndbmtd: the single- or multi-threaded process for the data nodes. This process reserves memory and stores the data in it. (Data Nodes)

2- ndb_mgmd : This process provides the configuration file to all nodes (Data Nodes and MySQL Nodes).

3- mysqld (API): This process provides all the usual storage engines plus access to the NDB engine.

Starting the Setup

Although you can run MySQL Cluster on one machine, it does not make sense to do so, since you would not have any redundancy. I suggest a basic setup of 5 nodes (2 data nodes, 2 MySQL nodes, and 1 management node). The following specs will help you choose the hardware.

1- Data Nodes:
- A lot of memory; indexes and data are mainly stored in memory.
- Disk space, if you store data tables on disk.
- CPU; data nodes are not CPU intensive. If you have multiple CPUs, you should use the ndbmtd process.

2- MySQL Nodes:
- CPU. These Nodes are CPU intensive.
- Memory. As required.
3- Management Node.
- Any small machine can perform this task.

Moreover, it is suggested to use a separate subnet for data communication between the data and SQL nodes (for security and traffic isolation). With this layout, SQL nodes must have two interfaces: one for access from external apps and one for data communication with the data nodes (NDB).

datanode01 192.168.0.1
datanode02 192.168.0.2

sqlnode01 192.168.0.3 172.16.16.1
sqlnode02 192.168.0.4 172.16.16.2

mnode01 192.168.0.5

Installing Mysql Cluster.

You are probably tempted to install MySQL using apt-get; however, the repos do not contain the latest version, so I suggest using the binary installation of MySQL. Go to the MySQL website and choose the generic Linux download, then the 32- or 64-bit architecture, and finally the mirror.

# wget http://www.mysql.com/get/Downloads/MySQL-Cluster-7.0/mysql-cluster-gpl-7.0.13-linux-x86_64-glibc23.tar.gz/from/http://mysql.he.net/

# tar xzvf mysql-cluster-gpl-7.0.13-linux-x86_64-glibc23.tar.gz

Then
# cd mysql-cluster-gpl-7.0.13-linux-x86_64-glibc23

Read the INSTALL-BINARY file ...so you can install it.

shell> groupadd mysql
shell> useradd -g mysql mysql
shell> cd /usr/local
shell> gunzip < /path/to/mysql-VERSION-OS.tar.gz | tar xvf -
shell> ln -s full-path-to-mysql-VERSION-OS mysql
shell> cd mysql
shell> chown -R mysql .
shell> chgrp -R mysql .
shell> scripts/mysql_install_db --user=mysql
shell> chown -R root .
shell> chown -R mysql data

Wait! This is the basic installation for all the nodes, and we do not want mysqld running on all of them, only on the SQL nodes.

This command will be executed on the SQL nodes once all configuration files are ready:

shell> bin/mysqld_safe --user=mysql &


Setting Up Management Node.

Copy the template configuration file for management node.

#cp /usr/local/mysql/support-files/ndb-config-2-node.ini /usr/local/mysql/config.ini

This configuration contains pretty basic settings that you will have to tweak for your cluster.
I can suggest a couple of changes depending on the amount of memory you have available.
Remember that MySQL Cluster stores tables in memory, so if you have more memory available, adjust these values.
For example, I have 32GB of memory on each data node:

DataMemory=25634M
IndexMemory=3205M

There are more settings you can tweak to improve the behaviour of the cluster, or you can try this website to help you configure it: http://www.severalnines.com/config/

Modify the IP for the management node.

[ndb_mgmd]
Id=1
HostName=192.168.0.5


Then modify the IPs for data nodes under [ndbd]

[ndbd]
Id=2
HostName=192.168.0.1

[ndbd]
Id=3
HostName=192.168.0.2

Then the IPs for sql nodes under [mysqld]

[mysqld]
Id=4
HostName=192.168.0.3

[mysqld]
Id=5
HostName=192.168.0.4

Save the file.

Create the mysql-cluster directory.

# mkdir /var/lib/mysql-cluster

Then start the management node.

# /usr/local/mysql/bin/ndb_mgmd -f /usr/local/mysql/config.ini

If we want to check the cluster, use the following command:

# /usr/local/mysql/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show


Setting Up Data node

(I assume you have installed MySQL Binary)

Basically, we need to tell the ndbd daemon to use the configuration from the management node.

Create the configuration file in /etc/mysql/:

#mkdir /etc/mysql
#vi /etc/mysql/my.cnf

[mysqld]
ndbcluster
# IP address of the cluster management node
ndb-connectstring=192.168.0.5

[MYSQL_CLUSTER]
ndb-connectstring=192.168.0.5

Because this is the first time we are starting the ndbd daemon, we need to initialize it:

# /usr/local/mysql/bin/ndbd --initial

After both data nodes are up, you can check the status on the management node using the show command.

If you want to stop a data node , you can run the following command on the management node:

ndb_mgm> 2 stop

It will stop the datanode with ID=2 (see management node configuration)


Setting Up SQL Node or API.

(I assume you have installed MySQL Binary)

Create a my.cnf file.

#vi /etc/my.cnf

[mysqld]
ndbcluster
# IP address of the cluster management node
ndb-connectstring=192.168.0.5
[mysql_cluster]
# IP address of the cluster management node
ndb-connectstring=192.168.0.5

Finally start the SQL nodes.

# /usr/local/mysql/support-files/mysql.server start

After all the nodes are running, you can monitor them from the management node.

Note: Remember that you must create tables with ENGINE=NDB so they run on the cluster.
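For example, from one of the SQL nodes (the database and table names here are just illustrations):

```shell
# Create a test database and an NDB-backed table from an SQL node;
# NDBCLUSTER is an alias for the NDB engine
/usr/local/mysql/bin/mysql -u root -e "
  CREATE DATABASE IF NOT EXISTS clustertest;
  CREATE TABLE clustertest.t1 (id INT PRIMARY KEY, note VARCHAR(64)) ENGINE=NDBCLUSTER;
"
```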

Enjoy it ...

Wednesday, February 10, 2010

Linux Hard Drive Performance measurements

Many people trust vendor specs: hard drive speed, disk controller speed, and so on. However, the reality is that you will never reach those numbers when you set up the hardware yourself. Why? Vendors test their hardware in special environments, with specific combinations of hardware.

I was trying to set up a server with fast access to its hard drives, so I did a little research on how to test different hard drive configurations. These are my findings.

Note: All these tests should be executed several times. The real results are affected by the hard drive cache, the controller cache, and the operating system cache.

1- (Read Speed) hdparm -t /dev/sda
This is a very well-known command that gives you the max sequential read speed.

2- (Write Speed) dd count=1k bs=10M if=/dev/zero of=/data/test.img
This is another well-known command; it gives you an average write speed.
It creates a 10G file and measures how fast it is written.
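To keep the page cache from inflating the write numbers, GNU dd can bypass it with oflag=direct; a hedged variant of the same test (1G instead of 10G to keep the run short):

```shell
# Write 1 GB with O_DIRECT so the OS page cache is bypassed
dd if=/dev/zero of=/data/test.img bs=10M count=100 oflag=direct

# As root, drop the caches before a read test so the numbers stay honest
sync && echo 3 > /proc/sys/vm/drop_caches
```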

3- (Write speed) The people at http://www.nlanr.net/Dskwtst/ created a C program that logs write speed.
#wget http://www.nlanr.net/Dskwtst/Software/dskwtst.c
#gcc -O2 dskwtst.c -o dskwtst

(When compiling, you will probably get a warning:)

dskwtst.c: In function 'main':
dskwtst.c:32: warning: incompatible implicit declaration of built-in function 'exit'

This is only a warning, so it should not affect the results.


#./dskwtst > /data/outputfile 2> ./log

The command will create a file in /data/ and send its progress messages to the log file.

4- (Random access) The author of http://www.linuxinsight.com/how_fast_is_your_disk.html created a C program that measures the seek time of the hard drive.

#wget http://www.linuxinsight.com/files/seeker.c
#gcc -O2 seeker.c -o seeker
#./seeker /dev/sda


Enjoy it ...

Tuesday, February 9, 2010

Postfix relaying emails with GMAIL SMTP (Centos 5.3)

I was tired of maintaining my email server, so I decided to move everything to Google Apps. They can host your mail server with up to 50 accounts for free; isn't it great? So I moved all my domains to Google Apps and everything was working perfectly; however, I had missed one issue: relaying emails!
So I relaxed and figured out how to relay emails using a Google Apps (Gmail) account.

After reading a lot of posts, I realized that everybody was missing part of the problem.
Some people show you how to create client certificates for Postfix when you do not need them.
Others tell you to get the root CA certificates when you already have them.
However, the most important part is having all the required packages to make it work.

For Centos 5.3

Verify package installed

# rpm -qa |grep postfix
postfix-2.3.3-2.1.el5_2

# rpm -qa |grep sasl
cyrus-sasl-lib-2.1.22-5.el5
cyrus-sasl-2.1.22-5.el5
cyrus-sasl-plain-2.1.22-5.el5

# rpm -qa |grep openssl
openssl-perl-0.9.8e-12.el5_4.1
openssl-devel-0.9.8e-12.el5_4.1
xmlsec1-openssl-1.2.9-8.1.1
openssl-0.9.8e-12.el5_4.1
openssl097a-0.9.7a-9.el5_2.1

Copy the root CA certificates; Postfix needs to know their location.

# cp /etc/pki/tls/certs/ca-bundle.crt /etc/postfix/cacert.pem

Create the file that stores the Gmail user and password:

# vi /etc/postfix/sasl_passwd

[smtp.gmail.com]:587 user@domain:password

#postmap /etc/postfix/sasl_passwd
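Since sasl_passwd holds the password in plain text, it is worth locking down both the source file and the hash database that postmap generates:

```shell
# Restrict the credential files to root only
chown root:root /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
chmod 600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
```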


Edit /etc/postfix/main.cf

# Relay all e-mail via GMail.
relayhost = [smtp.gmail.com]:587

# SASL authentication
smtp_sasl_auth_enable=yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options =
smtp_sasl_tls_security_options = noanonymous
smtp_sasl_mechanism_filter = login

# TLS
smtp_tls_eccert_file =
smtp_tls_eckey_file =
smtp_use_tls = yes
smtp_enforce_tls = no
smtp_tls_CAfile = /etc/postfix/cacert.pem
smtpd_tls_received_header = yes
tls_random_source = dev:/dev/urandom


Finally, restart postfix.
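A minimal sketch of the restart plus a smoke test (the recipient address is a placeholder; the log path is the CentOS default):

```shell
# Reload Postfix with the new relay settings
/etc/init.d/postfix restart

# Send a test message through the Gmail relay
echo "relay test" | mail -s "gmail relay test" someone@example.com

# Look for status=sent entries from the smtp client
tail /var/log/maillog
```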


Enjoy it ...

Monday, February 1, 2010

Installing Ubuntu from USB drive

I am not a very big fan of Ubuntu; however, Ubuntu has a lot of acceptance in the open source community. So I decided I would have to give it a try.

In this blog post, I will present an installation method that is very helpful for many system admins.
Let's review the installation methods first:

1- Installing from CD. You will have to download the ISO image from Ubuntu and have ISO-burning software, a CD, and a CD drive. Today, many servers do not include a CD unit, since it is a waste of space and hardware.
When installing from CD, you have two options:
- Installing all packages from the CD
- Installing all packages from repositories (you need DHCP, a repository, and networking enabled)

2- Installing from the network. This method is great; however, Ubuntu is not very good at it. You will need DHCP, TFTP, and a repository. Moreover, you will need to write down the MAC address of the server so you can configure DHCP and create an autoconfiguration file.

3- Installing from USB. This method is a variation of the CD method; however, Ubuntu is not straightforward here either. There are many tutorials and blogs about installing Ubuntu and other Linux flavors from USB drives, but they do not mention the complications during the installation process. Making a USB drive bootable is very simple; making the Ubuntu installation work from it is not.

My problem: I have an HP server with a RAID controller (1 logical volume), no network connection, and no CD drive. I need to install Ubuntu 9.10 (although this works for other versions too). I have my laptop running Windows (creating a USB installer on Linux is a simpler process).

1- First get syslinux for Windows.


2- Download Ubuntu 9.10

3- Open the ubuntu.iso file and copy the isolinux dir from the ISO to the root of the USB drive.
There are many ways to do that: you can mount the ISO as a virtual CD using http://poweriso.com/ or use a zip utility to extract that directory.

4- Rename the isolinux directory to syslinux.
Inside it you will find isolinux.cfg; rename it to syslinux.cfg.

5- Create a directory named install in the root of the USB drive.

6- Get vmlinuz and initrd.gz files for hd-media.


7- Copy the vmlinuz and initrd.gz files to the install directory on the USB drive.

8- Run syslinux.exe -m Drive: (using the USB drive letter)

9- Copy the Karmic amd64 ISO image onto the USB drive.

10- Boot from the USB drive (I assume you know how to set up the BIOS to boot from USB).

11- Follow the installation process.

12- Near the end, once the installation has finished, you will have to make some modifications to GRUB. If you miss this part, the server will not boot, since GRUB will be looking for the USB drive.

Press alt-F2.

On The command line:

# chroot /target

/target is the mounted root partition.

#vi /boot/grub/device.map

Delete the (hd0) entry that points to the USB drive.
Change the (hd1) entry for your boot drive to (hd0).
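For example, a device.map edit might look like this (the device names will vary on your hardware, so treat this as an illustration):

```
Before: the installer saw the USB stick first
(hd0)   /dev/sdb
(hd1)   /dev/sda

After: only the internal drive remains, promoted to (hd0)
(hd0)   /dev/sda
```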

Finally, set grub on the hard drive.
# grub-install

#exit


Then boot the server.

All done.




Monday, January 18, 2010

Trixbox installation on Centos 5.3 (My journey)

I have been using Trixbox for a while and I think it is time to have documented how to install it without CD using a VPS.

First, Trixbox is a set of tools that help you maintain Asterisk.
Asterisk is open source PBX software that gives you VoIP extensions over your network connections. In my case, I have deployed 5 extensions in 5 different locations across 3 countries with no long distance charges. Moreover, I can use my VoIP extension anywhere as long as I have an Internet connection.

I started using Trixbox at home on a virtual machine running on Citrix Xen; however, with time you realize that your phone system needs to be reliable, and keeping it at home makes that complicated (power outages, hardware problems). So I decided to move it to a data center.
The Trixbox installation at home was a pretty simple task, since you can get the ISO image and burn a CD. But the big question was how to install it on a server already running CentOS 5.3 (32-bit).

Note: I have installed Trixbox on CentOS 5.3 (64-bit); however, Trixbox was designed for 32-bit, so you will probably have some library issues, since the paths in Trixbox have been set to /usr/lib instead of /usr/lib64.

Choosing a hosting provider: my current installation is running with GoDaddy; the service has been excellent and the price is OK ($30 a month for a 256MB, 20GB server). However, I found a hosting provider, RackUnlimited, that provides the same for only $8 a month. At this point, I have a server with RackUnlimited, and I am going to show the process of installing Trixbox on it.

Installation Process

1- Log in to the server and make sure you have all the basic tools you need. RackUnlimited provided me with a very slim version of CentOS 5.3, so I had to install some packages.

# yum -y install vim-minimal sudo postfix

I also like to install Webmin, so I grabbed the RPM from webmin.com and installed it.

2- Install essentials packages for Trixbox.

#yum -y install mysql mysql-server mysql-devel

MySQL database

#yum -y install httpd memcached php php-pear php-mysql
# pear install DB

Apache Server and PHP packages

Now I have a server with a LAMP environment.

3- Add the Trixbox repo

#vi /etc/yum.repos.d/trixbox.repo

[trixbox]
name=trixbox RPM Repository for CentOS and RHEL
baseurl=http://yum.trixbox.org/centos/$releasever/RPMS/
gpgcheck=0
enabled=1

#yum clean all

4- Install asterisk

#yum install asterisk
#mkdir /etc/asterisk
#cp -r /etc/asterisk-1.4.21.2_samples/* /etc/asterisk/

5- Start all services.

#/etc/init.d/mysql start
#/etc/init.d/httpd start
#/etc/init.d/memcached start
#/etc/init.d/asterisk start

6- Install Trixbox scripts.

#yum install tbm-pbxconfig

7- Create Databases and users.

#mysqladmin create asterisk
#mysqladmin create asteriskcdrdb
#mysql asterisk < /usr/src/tbm-pbxconfig/SQL/newinstall.sql
#mysql asteriskcdrdb < /usr/src/tbm-pbxconfig/SQL/cdr_mysql_table.sql

#mysql

mysql> GRANT ALL PRIVILEGES ON asteriskcdrdb.* TO asteriskuser@localhost IDENTIFIED BY 'amp109';
Query OK, 0 rows affected (0.02 sec)

mysql> GRANT ALL PRIVILEGES ON asterisk.* TO asteriskuser@localhost IDENTIFIED BY 'amp109';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

8- Install all Trixbox packages

#yum groupinstall Trixboxcore

9- Installing the amportal

#/usr/src/tbm-pbxconfig/install_amp

Accept the default values.

After this is done restart MySQL, Apache, memcached , and amportal.

10- Authentication for web interface

#mkdir -p /usr/local/apache/passwd/
#touch /usr/local/apache/passwd/wwwpasswd
#passwd-maint

user: maint
pass: ******

Finally you can access the Web interface.

http://serverip/

Note:
Time on Xen VMs is restricted by Dom0, so if you want to change the time in your VM you will have to do the following.

#echo 1 > /proc/sys/xen/independent_wallclock

To make it permanent, do the following:

#vi /etc/sysctl.conf
add
xen.independent_wallclock = 1

Change the timezone

#cp /usr/share/zoneinfo/America/New_York /etc/localtime

Alternatively, you can install a tool to modify the time zone:

#yum -y install system-config-date
#system-config-date

Adding G729 Codec .....

G729 is a voice compression codec; Digium sells licenses for it for use with Asterisk. G729 uses very little bandwidth (8 kbit/s) with good voice quality. You can buy a license for this codec from Digium or use a free version (the free version is not considered very stable).

This blogger did a pretty good job of showing how to use G729:
http://nigglingaspirations.blogspot.com/2009/10/installing-free-g729-codec-on-trixbox.html


Wednesday, January 13, 2010

Citrix Xen Server: Virtual Appliances repositories

I consider Citrix Xen a great product with a lot of advantages compared to VMWare. However, one big disadvantage is the lack of a Virtual Appliance repo.

There are some sites that can provide you with appliances ready to work but they are very VMWare oriented. Example:

http://www.jumpbox.com

Although Jumpbox has some support for Citrix Xen and other virtualization platforms, I suggest creating a place where we can start building our own virtual appliances for Citrix Xen.

My first donation is the following.

Centos 5.3 64 bit with Xentools installed
user:root pass:root

You can get it from : ftp://ftp.carlosgomez.net/

user:ftp01 pass:ftp01

filename: centos53-64.rar

OR
you can try http://www.filebox.com/jlspdob2rn8k

Enjoy it ...

MySQL: Changing data directory on Ubuntu 9.04

You would think it is a pretty easy task to go to the /etc/mysql/my.cnf file and change the datadir to the new directory, then restart MySQL. However, MySQL will fail every time you restart it. There are a couple of considerations to make when you do this change:

1- Make a copy of all databases to the new directory.
2- Change the datadir setting in /etc/mysql/my.cnf.
3- Change /etc/apparmor.d/usr.sbin.mysqld so the old data dir is no longer referenced.
4- Reload the AppArmor daemon.
5- Verify ownership and permissions on the new data dir.
6- Restart MySQL.

You can follow this blog http://blog.taragana.com/index.php/archive/how-to-move-the-mysql-data-directory-in-ubuntu/ for more detailed instructions.
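A hedged sketch of those steps on Ubuntu (the new directory /data/mysql is an example; the other paths are the Ubuntu defaults):

```shell
#!/bin/sh
# 1- Stop MySQL and copy the databases, preserving ownership and modes
/etc/init.d/mysql stop
cp -Rp /var/lib/mysql /data/mysql

# 2- Point datadir at the new location
sed -i 's|^datadir.*|datadir = /data/mysql|' /etc/mysql/my.cnf

# 3- Rewrite the AppArmor profile so the new path is allowed
sed -i 's|/var/lib/mysql|/data/mysql|g' /etc/apparmor.d/usr.sbin.mysqld

# 4- Reload AppArmor
/etc/init.d/apparmor reload

# 5- Verify ownership and permissions on the new data dir
chown -R mysql:mysql /data/mysql

# 6- Restart MySQL
/etc/init.d/mysql start
```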

Enjoy it...

Friday, January 8, 2010

Citrix Xen Server: Installing Citrix Xen from USB drive

Great, we need to add a new server to the Citrix Xen pool and we do not want to use the CD.
Follow these instructions and you can install Citrix Xen using a USB drive:

http://www.thegenerationv.com/2009/08/howto-put-xenserver-iso-installer-on.html

After installation you can join the server to the pool; however, you have to make sure the Linux Pack has been installed.

Otherwise, you will receive a message about mismatched versions and the server cannot be added.

If you do not have the Linux Pack installed, create a USB drive with the Linux Pack ISO, then plug it into the server.

#mount /dev/sdX1 /mnt

Check which /dev/sdX the server assigned to the USB drive (you can find out using dmesg).

then do

#cd /mnt
#./install.sh

Done, now you can join the server to the pool.

Tuesday, January 5, 2010

Citrix Xen Server: Changing the pool master

Currently, these changes cannot be performed from XenCenter, so you will have to use the command line.

Say you have a pool of Citrix Xen servers and you want to change the pool master. To control a pool of servers, you need to connect to the pool master. Citrix Xen Server replicates the management database among all the servers; however, only one of them is the master.

Go to the slave server that you want to use as pool master.

Disable HA:
#xe pool-ha-disable

List the UUIDs for all the hosts:
#xe host-list

Promote slave

# xe pool-designate-new-master host-uuid=

Easy right!!!

If the pool master is down , use the following commands.

#xe pool-emergency-transition-to-master

Then re-establish the connection to the slaves:

# xe pool-recover-slaves

Enjoy it ...