Move from MySQL to DB2 via the Cloud

24 November 2010 » Cloud, DB2, developerWorks, Linux, PHP, Zend

IBM developerWorks has just published the first article in a series that Mark Nusekabel, Yan Li Mu and I wrote about our experience migrating a large PHP and MySQL application to DB2.

In the four-part series we look at preparation, switching databases, porting code, and finally deploying the application. This first installment covers the steps to plan and the resources to consult when starting a migration project.

Along with the MySQL to DB2 migration Redbook, a key technology supplementing each step in the process is the IBM Smart Business Development and Test Cloud.

If you already have access to the Development and Test pilot, the PHP developer’s guide (PDF) can give you some tips for configuring Zend Server along with DB2 using virtual machines in that cloud.

The article series and the developer’s guide may also be useful to those who have a contract for the GA version of Development and Test.

Another option for evaluating DB2 for a migration is to use the individual Amazon EC2 AMIs that come pre-configured with IBM software.

Or, if you’re interested in managing several instances or more complex configurations, RightScale and IBM have collaborated (PDF) to bridge the Amazon and IBM clouds.

So, if you’re considering a new relational backend for your application, the developerWorks migration series, the PHP developer’s guide for the IBM cloud, and the images within the Amazon and IBM clouds will give you a new set of tools to make evaluating the move and executing the switchover much easier.

Technology of the day: Zend Server

03 September 2009 » DB2, Linux, MySQL, New York PHP, PHP, Zend

A few months back, Ed Kietlinski introduced us to the new Zend Server at a New York PHP meeting. I’ve since installed it on two of my department’s servers and put together some notes on my experience.

Update: See the comments section for some configuration suggestions from Zend that differ from the steps I followed. Jess also clarifies the difference in caching between the standard and Community Editions.

What is Zend Server?
Zend Server is a packaged version of PHP targeted at businesses that require a supported and tested stack that’s easy to install and maintain.

It also integrates the other Zend products, such as the Zend Framework, Zend Studio for debugging, Zend Caches, Zend Java Bridge, and Zend Guard/Optimizer among others.

Zend offers several variants and licensing models. There’s a Community Edition that’s free but doesn’t include the more advanced features such as caching and monitoring, there are several tiers of production support, and there are half-price development licenses.

In all cases, migration from one version to another is simply a matter of updating your license information in Zend Server’s console.

Why it interests me
Zend Server hits a sweet spot for my team, where we run just one development and one production LAMP server.

We don’t cluster, nor do we require a job queue for our PHP applications, so the Zend Platform Enterprise Solution doesn’t fit our needs (see this comparison table).

Of course, we use plenty of WebSphere and Java for our sponsor-facing applications hosted in advanced data centers that have different functional and non-functional requirements.

However, our internal department tools are supported by hybrid front-end/server-side developers who can get up to speed and become productive on PHP quickly, rather than needing to know or learn Java in the same ramp-up period.

In the cases where we require queuing or clustering, we look to WebSphere Application Server, rather than Zend Platform Enterprise Edition, as we do to run our Restlet/Spring Integration pseudo-ESB that integrates many of our other internal tools.

The primary attraction of Zend Server for these department servers is that it provides an RPM-based distribution bundling the latest stable version of PHP with all the extensions we need, including the DB2 and MySQL drivers, curl, libxml, and mbstring.

Our full-time system administrator has long had to maintain custom-compiled versions of PHP and Apache, but we want to move to an automated, package-managed approach as he takes on a growing volume of non-sysadmin work.

The RPMs from CentOS repositories are traditionally a few PHP versions behind and aren’t patched frequently enough to fully rely on the operating system’s default package management system.

As I write this, the most recent RPM version of PHP is 5.1.6, while the most recent versions of PHP are 5.2.10 (shipped with Zend Server 4.0.4) and 5.3 (shipped with Zend Server 4.0.5).
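Comparing dotted version strings by eye gets error-prone, so for what it's worth, here's how I'd let the shell settle it. This assumes a reasonably recent GNU coreutils (the `-V` version-sort flag isn't in older releases); the version numbers are the ones discussed above:

```shell
# Sort the version strings numerically; the last line is the newest.
printf '%s\n' 5.1.6 5.2.10 5.3.0 | sort -V | tail -n 1
```

This prints `5.3.0`, confirming the distro RPM (5.1.6) trails both Zend Server builds.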

Beyond easing maintenance for us from an installation and update perspective, Zend Server also offers a performance boost (an optimizer and cacher), monitoring features (logs, traces and event notifications), and simplified configuration (switching on and off extensions, setting directive properties) managed through its GUI Web console.

I installed Zend Server on two servers – an x86 and an x86_64 – running CentOS 5.3 using the RPM method. One is the test server; the other is a future production server that is hosting some supplemental applications now.

The production server has a tiered license provided by Zend, who is an IBM Business Partner. The development server will run a half-priced version of the license intended for test server installations.

In both cases, installation was straightforward. There are more detailed instructions available for other operating systems and package management methods.

  • My first step was to find other PHP packages on the system.
    [code lang=”bash”][root@]# rpm -qa | grep php[/code]
    And remove each one.
    [code lang=”bash”][root@]# rpm -e {package name}[/code]
    After uninstalling the PHP packages, php.ini will be backed up:
    [code lang=”bash”]/etc/php.ini saved as /etc/php.ini.rpmsave[/code]
  • Then, I installed Zend Server.
    [code lang=”bash”][root@]# tar xvzf ZendServer-4.0.4-RepositoryInstaller-linux.tar.gz
    [root@]# cd ZendServer-RepositoryInstaller-linux/
    [root@]# ./[/code]
    This script will set up your repositories, and kick off the installation process. It will stop, configure, and restart the existing Apache instance (we’ve kept that as a standard RPM).
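The query-and-remove steps above can be collapsed into a single pipeline. Here's a sketch with `printf` standing in for the real `rpm -qa` output (the package names are made up for illustration), so it can be tried safely anywhere; drop the leading `echo` on an actual RPM-based system to perform the removal:

```shell
# Dry run of the removal loop: print each rpm -e command instead of
# executing it. printf simulates `rpm -qa` output.
printf '%s\n' php-5.1.6-20.el5 php-cli-5.1.6-20.el5 zlib-1.2.3 \
  | grep '^php' \
  | while read -r pkg; do
      echo "rpm -e $pkg"
    done
```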

The only additional package I needed to install beyond the default set was the DB2 driver. At a minimum, you must have a DB2 runtime client on the server to use it.

  • Install the DB2 driver for PHP.
    [code lang=”bash”][root@]# yum install php-ibmdb2-zend-pe[/code]

A few tips
Of course, as with any PHP build, some post-installation configuration may be necessary.

I pointed out above that my existing php.ini was backed up after removing the older PHP packages. You’ll want to make sure the old and the new php.ini are functionally equivalent.

  • DB2
For one of my servers, I didn’t have to follow any special configuration steps to get Zend Server to interact with DB2, but the other wouldn’t load the DB2 extension at first.

    It failed with the following error.
    [code lang=”bash”]PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/local/zend/lib/php_extensions/’ – cannot open shared object file: No such file or directory in Unknown on line 0[/code]

The resolution was simple: source the DB2 environment for the Web server user. In my case, I added the following line to the /etc/init.d/httpd startup script:
    [code lang=”bash”]. /home/db2inst1/sqllib/db2profile[/code]

  • Mail
    I also had to make a small adjustment to my mail directives to point it to sendmail on my system, as described in this forum post.
    [code lang=”bash”]/usr/sbin/sendmail -t -i[/code]
  • PEAR
The latest version of Zend Server (4.0.5) fixes an issue with the PEAR installer, but if you are using the packaged (not tarball) version of Zend Server 4.0.4, you might want to follow the tips in this forum post.
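Returning to the DB2 tip above: the reason the `.` (source) line works is that sourcing runs the profile in the current shell, so everything the startup script launches, httpd included, inherits the DB2 environment. A toy illustration of the mechanism (the real db2profile sets more than this, but DB2INSTANCE is one of its variables):

```shell
# Simulate sourcing a profile: variables exported by the sourced file
# become part of the current shell's environment.
profile=$(mktemp)
echo 'DB2INSTANCE=db2inst1; export DB2INSTANCE' > "$profile"
. "$profile"
echo "DB2INSTANCE=$DB2INSTANCE"
rm -f "$profile"
```

This prints `DB2INSTANCE=db2inst1`; run the same line without the leading `.` (as a child process) and the variable never reaches the parent shell.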

If I run into any other issues or tips on Zend Server, I’ll post updates here.

I also hope to dig into the differences between Zend Core for IBM and Zend Server, in order to evaluate whether it’s a worthwhile cross-upgrade for my server at home that hosts this blog.

Find out more
To learn more about Zend Server from the source, check out the main product page, with pointers to the different editions, getting started tutorials and videos, and the FAQ.

The Zend Server documentation is very helpful too, including sections on best practices for performance and security.

As with Zend’s other products, they’ve also got an active forum for Zend Server.

Cut spam with Postgrey

28 October 2008 » Linux, System administration

I’ve regained control over my inbox (and my BlackBerry) thanks to a nice little utility called Postgrey. A hat tip to Thomas on the NYCBUG list for the pointer.

Postgrey is a policy server for Postfix that employs an RFC-compliant technique for handling mail called greylisting. This is roughly equivalent to placing an incoming phone call on hold for a fixed amount of time.

If the caller has legitimate business, she’s more likely to wait around to chat with you. If it’s a telemarketer, he’s more likely to hang up and move on to the next prospect.
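The hold-the-call analogy boils down to a simple rule: defer the first delivery attempt for an unseen (sender, recipient, client) triplet, and accept once the same triplet comes back. Here's a toy sketch of that logic; Postgrey's real implementation uses a Berkeley DB store and a configurable delay window rather than a flat file:

```shell
# Toy greylist: remember triplets in a temp file; defer new ones,
# accept repeats.
db=$(mktemp)
check() {
  triplet="$1|$2|$3"
  if grep -qxF "$triplet" "$db"; then
    echo accept
  else
    echo "$triplet" >> "$db"
    echo defer
  fi
}
check alice@example.com bob@example.org 192.0.2.10   # first attempt
check alice@example.com bob@example.org 192.0.2.10   # retry
rm -f "$db"
```

The first call prints `defer`, the retry prints `accept` — legitimate mail servers retry, most spamware doesn't.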

Employing the utility on a CentOS server using these simple steps (condensed from the CentOS HowTo) dramatically reduced the spam I receive:

[code lang=”bash”]
[root@]# yum install postgrey
[root@]# vi /etc/postfix/
smtpd_recipient_restrictions = permit_mynetworks,reject_unauth_destination,check_policy_service unix:postgrey/socket,permit
[root@]# /sbin/service postgrey start
[root@]# /sbin/service postfix reload
[root@]# /sbin/chkconfig --levels 345 postgrey on
[/code]
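Once it's running, you can gauge the effect by counting Postgrey's deferrals in the mail log. A sketch of that, with `printf` supplying sample log lines in place of /var/log/maillog (the log format here is approximated, so check your own log's wording before relying on the pattern):

```shell
# Count greylisting deferrals; printf stands in for the live mail log
# so the pipeline can be exercised without a mail server.
printf '%s\n' \
  'postgrey: action=greylist, reason=new, client=mx1.example.com' \
  'postgrey: action=pass, reason=triplet found, client=mx2.example.com' \
  'postgrey: action=greylist, reason=new, client=spam.example.net' \
  | grep -c 'action=greylist'
```

Against the sample lines this prints `2`; point the grep at your real maillog to watch the deferral count climb.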

The war against UCE rages on, but for now, this technique offers much needed respite, saving valuable time and wireless network bandwidth charges.

One week with the Drobo on Linux

16 March 2008 » Drobo, Linux, System administration

I’ve been in the market for a media storage and/or backup device for my home network for some time now.

I don’t have any more free bays in my server, so adding space there wasn’t an option. Reusing any of the spare machines cluttering up the basement didn’t make much sense either, from a power or capacity point of view.

I had considered a few consumer network attached storage devices, but nothing really felt right for my needs: an SSH interface for nightly rsync backups, relatively easy setup, and future expandability.

Fortunately, I bounced the idea off of my gadget-savvy co-worker Kashif. He pointed me to a product called Drobo.

At first, it didn’t seem to fit in with what I wanted to do, primarily networkability and an SSH interface. But after watching the demo, I was sold. I was going to make it work somehow.

Drobo is intended to plug into your Mac or PC as an external USB drive. To your computer, it looks like any other external storage device, but while it appears as just a chunk of capacity, Drobo uses a hot-swappable pseudo-RAID approach internally to protect data and provide extreme flexibility for future expansion.

That said, Drobo only officially works with Mac and PC. You can format it in their mutually incompatible filesystem formats (HFS+ and NTFS, respectively), or share it between platforms with the old FAT32 standard.

To mount it under Linux, I had to choose among FAT32, NTFS-3G on FUSE, and ext3. To get ext3 support, you’re supposed to use the DroboShare, which costs an extra $200.

Instead, I used that money to get two 500GB hard drives and approached ext3 support a different way. I connected the Drobo to my Linux server and formatted it as I had for the new drive I mounted internally last year.

This meant that the storage isn’t directly accessible on the network, but I could easily share it out via the server. This also makes backups from the server faster.

Following are the steps I took:

  • Unboxed the Drobo and put in two drives from Newegg (Western Digital Caviar SE16 WD5000AAKS 500GB 7200 RPM SATA 3.0Gb/s Hard Drive – OEM).
  • Plugged it into my Windows XP machine to check for firmware updates (not to format the drive). There were none, so I could have skipped this step.
  • Plugged it into my CentOS 4.6 Linux server, then ran lshw to find the device name (/dev/sdc).
  • Entered the following commands to format the drives and mount the Drobo at startup:
    [code lang=”bash”]
    [root@]# /sbin/mke2fs -j -i 262144 -L Drobo -m 0 -O sparse_super,^resize_inode -q /dev/sdc
    [root@]# mkdir /drobo
    [root@]# mount -t ext3 /dev/sdc /drobo
    [root@]# vi /etc/fstab
    /dev/sdc /drobo ext3 defaults 0 0
    [/code]
  • Rebooted and chown’d the filesystem to my rsync user.
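One habit worth adopting before that reboot (my own check, not part of any Drobo documentation): make sure the new fstab entry has all six fields, since a malformed line can stall the boot while the system waits on the mount.

```shell
# Sanity-check the fstab entry from the steps above: a valid line has six
# whitespace-separated fields (device, mount point, type, options, dump, pass).
entry='/dev/sdc /drobo ext3 defaults 0 0'
set -- $entry   # unquoted on purpose, so the shell splits the fields
echo "fields: $#"
```

This prints `fields: 6`; anything else means the entry needs another look.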

Everything seems to work well for now, but I’ll post an update when I add capacity later this year.

I suspect I’ll have to make some changes in the future, but for easily adding expandable protected storage to a home Linux server the Drobo is a highly recommended option.

Another tip from Kashif: use the promo code “Cali” at checkout to save $50.

Adding a new hard disk to a Linux server

19 April 2007 » Linux, System administration

Last week I added a second hard disk to an IBM eServer xSeries 226 server running CentOS 4.4.

I picked up a large hard drive in January, but I didn’t have the proper hardware to install it right away.

Besides that, I had to settle on how to partition the drive, create the filesystems, and decide on the mount points.

Since the overall process did take me some time, I figured I’d share some notes on the steps I took for folks interested in doing the same. I’ve also provided links to resources that helped me out along the way.

Planning is the lion’s share of the work. Once you’ve decided what to do, the actual addition of the drive and the execution of the commands is fairly straightforward.

  • Determine why you need another hard drive
    It’s always nice to have more space, but beyond that I had some more practical concerns.

    The server came with a single 80GB disk. When I installed CentOS, I was quite lazy and settled on a small swap partition with a large root partition taking up the rest of the disk.

    So, there were at least four reasons guiding my decision to add another drive:

    • To prevent the single root filesystem from filling up and rendering the machine unusable. This could happen since I have nightly cron jobs that sync backups (large media files) from my workstation to the server. It might also happen if my machine were suddenly hit with a burst of traffic or a DoS attack.
    • I didn’t want to resize the existing disk partition or reinstall the operating system. I felt more comfortable creating overflow filesystems on a blank disk.
    • I wanted to provide a measure of redundancy in case the first hard drive failed. Based on rumors from colleagues and other unscientific anecdotes, Maxtor hard drives are more prone to failure. There’s also the small matter of my server sitting on the floor of a dusty basement next to the litter box.
    • To limit different types of disk activity to different partitions. For example, logging (writes to large files) is a different usage pattern than Web serving (reading lots of little files). I wanted a measure of control about how to optimize I/O.
  • Find the right drive for your server
    Given that my machine had only one free SATA bay, my strategy was to get the largest drive possible for around $100.

    A great tool for analyzing your current system to determine its current configuration and potential for expansion is Hardware Lister (lshw). This will give you specific model information and insight into where you might expand your system.

    Pipe the output of this command to a text file, so that you’ll have the information readily available. This will guide your upgrade decisions and serve to verify installation later.
    [code lang=”bash”][root@]# /usr/sbin/lshw > /root/lshw-before.txt[/code]

    Based on this information, I was able to Google compatible drives for the machine, and found a solid price on a Western Digital 400GB SATA drive at Newegg.

  • Make sure you have the right hardware to install the drive
    Hard drives are often sold in Retail or OEM packaging. Retail comes in a pretty box and will include the essential hardware bits, such as cables and fasteners. OEM is the way to save some cash if you’re replacing an existing drive or have extra hardware for your box.

    I had plenty of screws from the carcasses of some E450s, but I mistakenly thought I could reuse the brackets too. Instead, I had to pay $25 for a new plastic bracket in which to mount the drive. Arg.

    Along with an anti-static wrist strap, a Torx T10 screwdriver came in quite handy.

  • Install the drive and make sure it’s seen by the kernel

    DISCLAIMER: If you intend to add a hard drive to your system, please take all due precautions before starting, as there is a real possibility that you will lose some data.

    I shut down the machine, removed the power cable, slid the drive in, closed it up, then booted back up.

    I then ran lshw again. Comparing this with the prior snapshot of the system, I could verify that the drive was recognized as /dev/sdb.
    [code lang=”bash”]
    [root@]# /usr/sbin/lshw > /root/lshw-after.txt
    [root@]# diff /root/lshw-before.txt /root/lshw-after.txt
    [/code]

  • Create the partitions
    Through trial and error I had settled on a reasonable filesystem layout for my previous Sun systems. Solaris has very impractical defaults for my purposes, so I’d had to do some legwork in the past to make sure I had enough space in the right places.

    This is what my latest Solaris 10 partitioning scheme looks like:

    c0t0d0 (~20GB)
            0 /             (~8GB)          [c0t0d0s0]
            4 /swap         (~1GB)          [c0t0d0s4]
            7 /export/home  (~10GB)         [c0t0d0s7]
    c0t2d0 (~80GB)
            0 /var          (~25GB)         [c0t2d0s0]
            4 /opt          (~25GB)         [c0t2d0s4]
            7 /usr          (~25GB)         [c0t2d0s7]

    If I were doing a fresh Linux operating system install today, I might choose something similar. But given that I was upgrading an existing system, I decided only to offload what was in the /var and /home directories.

    Using fdisk, I decided to slice up the 400GB hard drive like so:

    • New primary partition of 50GB at /dev/sdb1 (for /var/www)
    • New primary partition of 50GB at /dev/sdb2 (for /var/log)
    • New primary partition of 50GB at /dev/sdb3 (for /var/mail)
    • New primary partition of 200GB at /dev/sdb4 (for /home)

    “Partitioning with fdisk” and “Creating Linux partitions” are helpful resources for working with fdisk.

  • Create the filesystems
    There are several options for filesystems, but ext3 seems to be the sweet spot for most Linux usage scenarios.

    IBM developerWorks has a good overview of ext3. There’s more information in chapter 1 of the new “Linux Performance and Tuning Guidelines” Redbook as well.

    I formatted each of the partitions as ext3 using the following command. (From this point on I’ll only show the commands for one of the four new partitions).
    [code lang=”bash”]
    [root@]# /sbin/mkfs -t ext3 /dev/sdb1
    [/code]

  • Mount the filesystems
    To make the new filesystems accessible, I mounted each to a location under /mnt.
    [code lang=”bash”]
    [root@]# mkdir -p /mnt/var/www
    [root@]# mount -t ext3 /dev/sdb1 /mnt/var/www
    [/code]
    At this point, all I saw in each new mounted directory was a “lost+found” folder.
  • Copy over data to the new filesystem
    Now that the new filesystem is ready for use, I needed to move the existing directories to it.

    To make sure nothing is being written to the source filesystem, I dropped to single user mode.
    [code lang=”bash”]
    [root@]# init 1
    [/code]

    I then copied data from the source filesystem recursively while preserving file metadata, such as owner and last modified time.
    [code lang=”bash”]
    [root@]# cd /var/www
    [root@]# cp -ax * /mnt/var/www
    [/code]

    I quickly verified that everything was copied. You might want to check things more thoroughly than this, however.
    [code lang=”bash”]
    [root@]# find /var/www | wc -l
    [root@]# find /mnt/var/www | wc -l
    [/code]

    I then moved the source directory out of the way so I could mount the new filesystem in its place.
    [code lang=”bash”]
    [root@]# mv /var/www /var/www.old
    [/code]

    I mounted the new filesystem and returned to multi-user mode.
    [code lang=”bash”]
    [root@]# mkdir -p /var/www
    [root@]# umount /mnt/var/www
    [root@]# mount -t ext3 /dev/sdb1 /var/www
    [root@]# ctrl-D   (press Ctrl-D to leave single-user mode)
    [/code]

    “System Administration Toolkit: Migrating and moving UNIX filesystems,”
    “Partitioning in action: Moving /home” and
    “Partitioning in action: Consolidating data” are good resources which document these steps.

  • Add the drive to fstab
    Everything looked good, so I mapped the new filesystems permanently and rebooted the box.
    [code lang=”bash”]
    [root@]# vi /etc/fstab
    /dev/sdb1 /var/www ext3 defaults 0 0
    [/code]

    In a few days, once I’m satisfied that everything is working as planned, I’ll archive the old directory and remove it to save space.
    [code lang=”bash”]
    [root@]# rm -fr /var/www.old
    [/code]

And that’s it. Once I had a plan, everything went very quickly. The recursive copy from the source directory to the new destination was probably the most time-consuming step, mainly because I already had 40GB in my home directory, but even that took less than half an hour.
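One last note on the verification step: the `find | wc -l` counts above only prove the file counts match, not the contents. A more thorough check is a recursive diff, sketched here on throwaway directories rather than the real /var/www:

```shell
# diff -r prints nothing when two trees are identical, so 'trees match'
# confirms an exact copy.
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/index.html"
cp -a "$src/index.html" "$dst/"
diff -r "$src" "$dst" && echo 'trees match'
rm -rf "$src" "$dst"
```

On the real directories, `diff -r /var/www.old /var/www` before deleting the old copy gives the same assurance.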

Here are some more resources I consulted when adding my new hard disk.

I also owe thanks to Martin Corona for doing a sanity check on my setup.