Blog migrated

Just migrated my blog over to WordPress, following the announcement Microsoft made 7 months ago that it is ditching Live Spaces.

I believe (a comment which comes late by a mere few months) this is a decision the industry giant was bound to make one day, as happens whenever a product fails to gain enough market share to shake the market.

Let's bow our heads and observe a minute of silence for:

  • Windows CE
  • MSX
  • UMPC
  • Windows ME
  • Vista
  • Zune
  • Kin
  • Live Spaces
http://www.msnbc.msn.com/id/39388269/ns/technology_and_science-tech_and_gadgets/

APC daemon for your UPS

You’ve just purchased a UPS and realized that it has no software support for your Linux operating system. "Just what am I going to do to power down my server during a power event?" you wonder….

Apcupsd to the rescue
Thanks to the open source community, there is a dependable daemon that runs on your Unix-based server and monitors events over USB, serial, and even the network.

It's a nice little daemon that monitors incoming signals from your APC device.  It even comes with a neat little feature to trigger other apcupsd instances on the network during shutdown events.

I’ve integrated it with our SMS machine by adding in a few lines of code, allowing it to send us SMS notifications in the event of a power failure.
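For the curious, the hook is simple: apcupsd dispatches power events through /etc/apcupsd/apccontrol, which will run a script named after the event (onbattery, offbattery, and so on) if one exists in /etc/apcupsd. Below is a minimal sketch of an onbattery hook; the send-sms command is a stand-in for whatever interface your SMS gateway exposes, not a real tool.

#!/bin/sh
# /etc/apcupsd/onbattery - run by apccontrol when the UPS switches to battery
MSG="$(hostname): power failure, UPS running on battery"
# hypothetical CLI for the SMS machine; substitute your own gateway command
/usr/local/bin/send-sms --to +6590000000 --text "$MSG"
exit 0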

You can download the daemon from the apcupsd project site.


VMware Server 2 is OUT!

The long-awaited VMware Server 2 has finally grown out of its release candidate status into a full production release.  Three builds are available from the download site, with the Linux ones offered as either RPM or TAR:

 

VMware Server 2
Version 2.0.0 | 116503 – 09/23/08

  • Windows: 575 MB EXE image
  • Linux 32-bit: 534 MB RPM image
  • Linux 32-bit: 535 MB TAR image
  • Linux 64-bit: 503 MB RPM image
  • Linux 64-bit: 505 MB TAR image

Downloading the files requires registration, which in turn provides a license allowing up to 10 Windows/Linux hosts combined.

I just can’t wait to see what enhancements they’ve made since the last release candidate; that one was a real hassle from installation all the way through removal.  Nothing seemed to work!

I’ve prepared a Dell OptiPlex 755 for this test, with CentOS 5 64-bit installed as the base OS.  Installation is straightforward with the RPM build; no obstacles throughout the process.

This version works much better than I expected; it supports VLANs (via the host network driver), which was only possible with any-any-update in the earlier version.  Processor affinity doesn’t seem to work, guests are still limited to 2 vCPUs, and provisioning reserved RAM for guests doesn’t work properly in the VI Client.

I’m going to leave this box in our server room and convince our developers to start running test machines on it.  Will update once I’ve got more results.. hmm, maybe an IO test?  Sounds evil… let’s try.

Tip:
You can access the server via the VI client by appending port 8333 to the IP address (e.g. 192.168.1.10:8333).


Inside EMC 3-20

The EMC guys came over to our office the other day, claiming that one of the SPs (storage processors) was faulty even though no amber fault lights were showing.

Unisys is the appointed service vendor over here, and during the hardware replacement I managed to take a few snapshots of the unit.

 

As seen in the picture above, the iSCSI module is connected to the box via a daughterboard.  I suppose the CPU is right underneath the big copper heatsink, but I haven’t had a chance to remove the heatsink to reveal the CPU make and model.

One thing is for sure: this thing uses 2 x 1GB DDR2 ECC modules, with 2 more slots to spare.  This RAM serves as both the read and write cache, so adding more should boost performance significantly.  Having said that, I’m beginning to wonder what the difference is between the CX3-20 and the top-of-the-range CX3-80, which shares the same chassis; is it just the RAM, or what?


Shrinking VMFS in ESX

http://spininfo.homelinux.com/news/Virtual_Machine_and_Guest_OS/2007/04/19/How_to_shrink_or_grow_(extend)_a_virtual_machines_disk

Now where is that command to shrink a virtual disk (VMDK) in ESX 3.5? Well, sadly there is no such command as yet; the method I used is to shrink the guest partition first and then clone it over to another, smaller virtual disk.

SPININFO, the site linked above, provides great step-by-step instructions for resizing virtual disks; do drop over and have a look.


Software I’ve used for this purpose:

a. GParted    – to resize the partition inside the virtual disk
b. Clonezilla – to duplicate the data onto the smaller virtual disk

1. Create an additional virtual hard drive of the new, smaller size in the guest you need to shrink (a vmkfstools sketch for this step follows the list).

2. Using GParted, shrink the partition to the desired size.

3. Boot the machine into the OS once to ensure everything is working. *Skipping this step will cause cloning to fail later on.

4. Using Clonezilla, clone the repartitioned drive to the new hard drive.  *Under Clonezilla, uncheck the default -g-auto option and choose Y to clone the boot loader.
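For step 1, if you'd rather use the service console than the VI client, vmkfstools can create the new smaller disk directly on the datastore. A minimal sketch with made-up size and paths:

# create a 10 GB virtual disk in the guest's datastore directory
vmkfstools -c 10G /vmfs/volumes/datastore1/myguest/myguest-small.vmdk

Attach it to the guest as an additional disk, then carry on with step 2.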

 

Done.  Don’t forget to perform some tests, like scandisk, to check for disk/data integrity.


Arrival of EMC 3-10

Today is the day our EMC 3-10 (purple box) from Dell arrived!  Five boxes in all. WOO HOO!

     

Now we have the two brothers under one roof, the EMC 3-20 and the EMC 3-10. This means MORE tests!  I’ll have to wait for the fellas from EMC to set up these boxes before I can start testing; crap!

I was told the difference between these two units is just the cache.  But there might well be more to it than that; their PRICE difference is  H U G E!

Time for more white papers it is!


ESX Storage Journey Takes III

*Updated 22 Aug 08 – an exciting find!  Do stay tuned for the ENCORE!
Benchmark tooling reference: http://mradomski.wordpress.com/2008/01/19/benchmark-tools-part-i-disk-io/

In ESX Storage Journey Takes II, I mentioned migrating one of our physical file servers into ESX and attaching an iSCSI storage drive to it.  And here it is: our long-awaited storage arrived a couple of weeks ago!  I’ve set up and run several tests on this unit, with a surprising find.

System Setup

Host
ESX 3.5 Update 2, dual quad-core processors with 16GB of RAM.

Guest
Windows 2003 Standard R2 SP1
1 vCPU @ 2.5GHz
1GB RAM
Drive C is attached to the EMC CLARiiON 3-20: 4GB cache, 5 x 10k 300GB disks (FC @ 4Gbps)
Drive D is attached to the MD3000i: 512MB cache, 5 x 7.2k 1TB disks (iSCSI @ 1Gbps)

I’ve plugged the MD3000i directly into the ESX-installed Dell 2900, bypassing the switch to minimize possible bottlenecks.

Performance Test
I’ll be using IOzone v3.3 throughout this test.  Although there are forum posts that report fluctuating results at high CPU utilization, I’ve found mine to be consistent. Nevertheless, I’ll conduct more tests using other benchmarking software; that is, if I find the time.
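For reference, a typical IOzone invocation for this kind of run might look like the line below; the file size and target path are my assumptions, not a record of the exact commands used:

iozone -Ra -g 2G -i 0 -i 1 -i 2 -f D:\iozone.tmp

-a runs the automatic test matrix, -R emits an Excel-friendly report, -g caps the maximum file size at 2GB, the -i flags select the sequential write, sequential read, and random read/write tests, and -f points the test file at the drive being measured.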

Testing speed of the MD3000i connected to an ESX guest as a raw partition (virtual compatibility mode).

Testing speed of the MD3000i connected to an ESX guest as a raw partition (physical mode).
Write speed is ridiculously low here at 4MB/s.  I had to run this report THRICE to confirm my findings.
 

Testing speed of the MD3000i connected to a physical machine (Windows)  *updated 21 Aug 08
The speed here is almost the same as when the box is connected to ESX as a physical-mode LUN.

Charts of EMC 3-20 (FC) versus MD3000i (iSCSI)
 
 
As seen here, the MD3000i is really lagging behind in the random write tests, capping at 31MB/s versus somewhere around 100MB/s for the EMC CLARiiON.

 
 
Same here, with results lagging badly behind the EMC 3-20 box.

So what about the existing machine?  What is its current performance?

It is a Dell 2650. Storage is made up of 3 SCSI disks in RAID 5; specifications unknown as of now. Performance is shown below.

Charts of Dell 2650 with 3 disks in RAID 5

 
Performance here is unexpectedly bad.  I’m starting to wonder if there are cobwebs in the spindles.  Having said that, I’ll definitely rerun this test for my next post.

Conclusion
From the results I've obtained so far, I feel something might be amiss.  The 4MB/s write speed is so ridiculously slow it's comparable to a consumer-class NAS device.

It is also worth noting that once we attach the raw LUN to the ESX guest in virtual compatibility mode, we see a dramatic increase in write speed, boosting it to 20MB/s.  I wonder if this is the result of additional caching done by the ESX kernel.

Random read speed is acceptable, with peaks of 100MB/s and an average of 40MB/s; keep in mind 128MB/s is the theoretical ceiling for 1Gbps.

Overall, I’m still not convinced of this unit's capabilities.  The speeds obtained are nowhere near my expectations for an enterprise-class storage device.  This product may be classified as entry-level in the enterprise market, but at the end of the day I’m just not getting the performance I expect for the price.

In my next post, probably ESX Storage Journey Takes III – Encore, I’m going to plug the MD3000i directly into a physical machine running Windows.  Till then.


Connecting to MSSQL from Linux

reference: http://www.linuxjournal.com/article/5732

Ever wondered how you can connect to a remote MSSQL server with a simple Perl script?  I’ve got a host here running 64-bit CentOS 5 and have managed to get the connection up and running.  I hope my guide below is of help to you!

cd /usr/src
wget ftp://ftp.ibiblio.org/pub/Linux/ALPHA/freetds/stable/freetds-stable.tgz
wget http://search.cpan.org/CPAN/authors/id/M/ME/MEWP/DBD-Sybase-1.08.tar.gz
tar -zxvf freetds-stable.tgz
tar -zxvf DBD-Sybase-1.08.tar.gz
cd freetds-0.82
./configure --with-tdsver=7.0 --prefix=/usr/local/freetds
make
make install
export SYBASE=/usr/local/freetds
cd ..
cd DBD-Sybase-1.08
vi CONFIG    (enable the 64-bit option)
perl Makefile.PL    (when asked about threading, select YES)
make
make install

Next, configure your SQL connection.  You’ll need to edit freetds.conf under /usr/local/freetds (it is installed in the etc subdirectory of the prefix).

You’ll see something like this:

[MSSQL]
             host = PutYourServerNameHere
             port = 1433
             tds version = 7.0

Replace the hostname with the one you’ll be connecting to and save the file.  Note that the section name in brackets is the server name FreeTDS clients will reference; the sample script below connects with server=file1, so either name your section [file1] or adjust the script to match.
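Before involving Perl at all, you can sanity-check the FreeTDS layer with its bundled tsql utility.  A quick sketch, using the section name from above and placeholder credentials:

/usr/local/freetds/bin/tsql -S MSSQL -U sa -P yourpassword

If you land at a 1> prompt, FreeTDS is talking to the server and any remaining problems are on the Perl side.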

Yup, if there are no complications, you are done.  Perl is ready to rock and roll with your SQL!  You can test it with the script below, which connects to the default Northwind database; thanks to Trevor Price.

Copy and paste the following to a file and name it test.pl
Type perl test.pl in your favourite shell to run.

Listing 1. Trevor Price’s Perl script that queries the sample database called Northwind

#!/usr/bin/perl

#
# test the DBD::Sybase driver against MSSQL
#

use DBI ;
$user = 'sa' ;
$passwd = 'password' ;


# 'file1' must match a server section name in freetds.conf
$dbh = DBI->connect('DBI:Sybase:server=file1',
$user, $passwd);
$dbh->do("use Northwind");

$action = $dbh->prepare("sp_help") ;
$action->execute ;
$rows = $action->rows ;
print "rows is $rows\n";

while ( @first = $action->fetchrow_array ) {
        foreach $field ( @first ) {
        print "$field\t";
        }
        print "\n";
}

exit(0);
 

Issues & Solutions

Issue 1
During compilation of DBD-Sybase:

dbdimp.c:5317: warning: format '%ld' expects type 'long int', but argument 5 has type 'CS_INT'
make: *** [dbdimp.o] Error 1

Solution
Open dbdimp.c and add the following lines somewhere near the top; I added them just below #include "Sybase.h":

#define BLK_VERSION_150 BLK_VERSION_100
#define BLK_VERSION_125 BLK_VERSION_100
#define BLK_VERSION_120 BLK_VERSION_100

Issue 2
During execution of perl test.pl:

install_driver(Sybase) failed: Can't load '/usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/auto/DBD/Sybase/Sybase.so' for module DBD::Sybase: libct.so.4: cannot open shared object file: No such file or directory at /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/DynaLoader.pm line 230.
at (eval 3) line 3
Compilation failed in require at (eval 3) line 3.
Perhaps a required shared library or dll isn't installed where expected
at testsql.pl line 12

Solution
This is due to library path issues.  Symlink all the files from /usr/local/freetds/lib into /lib64:

ln -s /usr/local/freetds/lib/* /lib64/

*Note: symlink into /lib instead of /lib64 if you’re using a 32-bit version of Linux.
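Alternatively, and arguably cleaner than scattering symlinks, you can register the FreeTDS library directory with the dynamic linker.  A sketch assuming CentOS 5's standard ld.so.conf.d layout:

echo "/usr/local/freetds/lib" > /etc/ld.so.conf.d/freetds.conf
ldconfig

Either way, the goal is simply to let the runtime linker find libct.so.4 when DBD::Sybase loads.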

   
   

ESX Storage Journey Takes II – iSCSI Storage Limitation

30th May 2008

Preparations for file server migration
The deadline is near to virtualize one of our file/print + Symantec AV servers.  I’ll need to virtualize it, hook up a LUN from a Dell MD3000i via iSCSI connected to ESX, and map it as a raw partition to the file server.

This is one hell of a job, considering we’ll need to P2V the server running Win2k and upgrade it to Win2k3 at the same time, while ensuring clients can still access the services after the upgrade. But what the heck; as far as management is concerned, we are the ones who make all things possible. 

I’ll need to test the environment for this new virtualized system before we put it into production.  I also need to test whether the 2TB storage limit is in effect when we mount the LUN as a raw partition in the guest via the host's iSCSI initiator.

Equipment
The MD3000i unit is still pending arrival, so I’ll be running some tests on our existing CLARiiON 3-20, flexing the iSCSI capabilities we’ve never even thought of using.  Tests are conducted on a Dell 2900 with 16GB of RAM, 2 FC HBAs, and 6 NICs, running ESXi.

Setup
I started by firing up the Navisphere console and carving out a 2.4TB LUN to test ESXi's 2TB barrier.  On the ESXi side, I created a new VMkernel port and assigned it a dedicated NIC.
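However you drive it (VI client or command line), the moving parts are a vSwitch with a dedicated uplink plus a VMkernel port on it.  On classic ESX 3.5 the service-console version would look roughly like the sketch below; names and addressing are made up, and on ESXi you'd use the remote CLI's vicfg-* equivalents:

esxcfg-vswitch -a vSwitch1                               # new vSwitch
esxcfg-vswitch -L vmnic2 vSwitch1                        # uplink the dedicated NIC
esxcfg-vswitch -A iSCSI vSwitch1                         # port group for iSCSI
esxcfg-vmknic -a -i 192.168.10.10 -n 255.255.255.0 iSCSI # VMkernel port on it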

With CHAP disabled, the 2.4TB LUN appeared in the VI client after a click on rescan; I was thrilled.  But that is just part one; we’ll need to test it in the guest to be sure. 

I mounted the raw partition in "virtual compatibility mode" in the guest and proceeded to power up Win2k3 Ent R2.  Only 367.28GB of the 2.4TB was detected.  Thinking it might be due to virtual compatibility mode, I deleted the drive from the guest and created a new raw partition in "physical mode".  This time it was worse: the storage device shows up as unknown.  I have the latest VMware Tools installed, but the issue persists.  (*refer to the Note below)
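For reference, the two RDM flavours can also be created by hand with vmkfstools against the LUN's device path; a sketch with made-up paths (-r maps in virtual compatibility mode, -z in physical pass-through mode):

# virtual compatibility mode RDM
vmkfstools -r /vmfs/devices/disks/vmhba40:0:0:0 /vmfs/volumes/datastore1/fileserver/rdm-virtual.vmdk
# physical (pass-through) mode RDM
vmkfstools -z /vmfs/devices/disks/vmhba40:0:0:0 /vmfs/volumes/datastore1/fileserver/rdm-physical.vmdk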

After several more attempts, I removed the virtual hard drive from the guest and did an iSCSI rescan from ESXi, and poof, the drive went missing.  Could it be the EMC?  I don’t really trust that unit; it's like a total black box with a few click-me buttons.

Note: It was only the next day, after all these tests were done, that I realized I had the FC HBAs connected to the SAN as well.  Navisphere clearly showed the two connections as separate entities during testing, yet in this case the LUN was actually being accessed via FC instead. 

I tried unplugging the HBAs and found that LUNs larger than 2TB are not detectable via ESXi's iSCSI initiator (see below).  Further investigation showed that only FC is able to detect LUNs larger than 2TB, but the 367GB figure persisted in Win2k3 Ent.  Could that be some kind of limitation in the iSCSI initiator?

 

iSCSI Storage limitation
It took me many more tries removing and re-adding the partition in Navisphere, making sure I had added the correct LUN to the appropriate storage group. No good; the LUN was no longer showing up in ESXi.  It finally appeared right after I recreated it as a 2TB LUN and attached that instead. (I had unplugged both HBAs at this point.)

Now I need to mount it in the guest to make sure it works.   With the LUN mounted as a raw partition in the guest (Win2k3 Ent R2), Disk Manager shows a solid 2TB drive ready to be used.

It was an awfully long, tedious, and frustrating day spent testing the solution, but at least I now know it's going to work.  I'll be spending more time on VMware's community forum on this issue while preparing for the next phase of our file server migration.

till then…..


ESX Storage Journey Takes I

Working with iSCSI on ESXi

A while ago I was working on ESX network performance tuning, and at the same time I was looking at the possibility of improving iSCSI transfer rates the same way.

Sadly, in my test lab here we have a CLARiiON 3-20 box which supports neither iSCSI port binding nor any feature that helps with load balancing.  I’m not so sure about other SAN boxes out there, but I really hope the Dell MD3000i, due to arrive in our lab two weeks from now, has this feature available.

Not giving up, I proceeded to explore other available options and bingo!  There is a new feature in ESX 3.5, iSCSI round-robin path load balancing, which is labeled (Experimental) below.  Smart of them to label it Experimental; that also means non-production environments only, use at your own risk. 

All glory to Google: I found someone who has tested this feature and posted about it in the VMware community. http://communities.vmware.com/thread/131295

iSCSI CHAP Authentication in ESXi & 3.5
In the setup described on the community page above, the poster, Damin, mentions that he is using an iSCSI box from WASABI Systems.  I wonder if he has encountered any CHAP authentication issues while connecting to ESX?

I’ve spent endless hours trying to get ESX to connect to a CHAP-enabled iSCSI target on the CLARiiON box, only to find that it won’t work 😦  The target disappears from ESX the moment authentication is enabled.

Some time ago I set up a Linux-based iSCSI box for ESX running IET (iSCSI Enterprise Target), with similar issues.  This has led me to believe there is something really wrong somewhere in ESX's iSCSI authentication.
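For reference, the IET side of such a test is only a few lines in /etc/ietd.conf.  A sketch with a made-up IQN, secret, and backing device:

# /etc/ietd.conf -- one target with CHAP enabled
Target iqn.2008-05.com.example:storage.esxtest
        # CHAP credentials the initiator must present
        IncomingUser vmware secret123456
        # export a block device as LUN 0
        Lun 0 Path=/dev/sdb,Type=fileio

One classic CHAP gotcha worth ruling out is secret length: some initiators (Microsoft's, for one) insist on 12 to 16 characters, so it's worth checking what ESX was given.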

It is worth noting that there are no issues connecting from Microsoft initiators in either case.  I’ll be doing more research on the issue and will update.