Thoughts on DB2 as a MySQL storage engine

26 April 2007 » DB2, IBM, MySQL, PHP

IBM and MySQL have announced plans to port MySQL to IBM’s mid-range i5/OS servers.

Taking it one step further, DB2 will also be available as a storage engine behind a MySQL front end, just as the popular MyISAM and InnoDB table types are today.

The technology is still a ways off, and so far has only been announced for System i, but it opens the door to some interesting possibilities.

I don’t work for the group at IBM responsible for this technology, nor do I know what their detailed plans are, but here’s how I see the potential behind this collaboration.

You’ll be able to…

  • Run MySQL natively on i5/OS
    Can’t say I’ve ever built an application consisting wholly of the MySQL Server, but this is the foundation that makes everything else possible.

  • Run third-party applications built for MySQL and PHP on System i without modification
    Paired with strong PHP support, this would make the entire ecosystem of third-party apps available on the platform, none the wiser to their environment.

  • Enable existing data stored in DB2 to be accessed through MySQL
    Imagine you’re using a scripting language or application framework that has a MySQL driver, but the driver for DB2 is incomplete, unstable, or simply doesn’t exist. MySQL could be the glue (see the sketch just after this list).

  • Use MySQL itself as a de facto database abstraction layer
    MySQL is a relatively small download. What if, instead of writing database-agnostic SQL or working through a PHP-based or C-based abstraction layer, you could use the database server itself to translate your calls, and simply program to the mysqli_* API?

  • Manage access levels and tweak performance based on user type
    If you store your data in a DB2 storage engine, you can serve two separate user communities: read-only Web visitors through the MySQL Server, and read/write internal content editors through the standard DB2 clients. That split gives you more flexibility to manage privileges, caching, and performance settings.
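
Here’s the sketch promised above. If the DB2 engine plugs in the way MyISAM and InnoDB do today, choosing it should come down to a single clause at table-creation time. Note that the engine name below is strictly my placeholder, since IBM and MySQL haven’t published those details yet; everything else is stock MySQL syntax.

    mysql> CREATE TABLE articles (
        ->   id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        ->   title VARCHAR(255) NOT NULL,
        ->   body  TEXT
        -> ) ENGINE=DB2;   -- hypothetical engine name for the DB2 backend
    mysql> SELECT title FROM articles;   -- ordinary SQL, whatever engine sits underneath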

It seems there will be a lot of intriguing options available to developers as this cooperation between IBM and MySQL evolves. These are just some of my first thoughts on what might be possible.

I’d love to find out more from the folks working to make this happen, and hear what else might be possible from other developers in the community at large.

Code review in version 7 of Rational tools

26 April 2007 » Java, RAD, RSA, Web architecture

A quick note for folks looking in version 7 of Rational Application Developer or Rational Software Architect for the Code Review feature they used in version 6 of these tools.

Instead of selecting the Code Review view from within the Java perspective, you’ll now find the same functionality under Analysis Results.

In general, references to “Code Review” in the help documentation have been changed to “Static Analysis.”

Chris Shiflett at NYPHP April 24th

23 April 2007 » Ajax, Apache, DB2, JavaScript, MySQL, PHP

New York PHP has officially lined up its next four monthly presenters.

The RSVP system for the April meeting tomorrow night is still open till 6pm EDT tonight. Hope you can make it.

Update: The RSVP deadline has been moved to midnight tonight.

Adding a new hard disk to a Linux server

19 April 2007 » Linux, System administration

Last week I added a second hard disk to an IBM eServer xSeries 226 server running CentOS 4.4.

I picked up a large hard drive in January, but I didn’t have the proper hardware to install it right away.

Besides that, I had to settle on how to partition the drive, create the filesystems, and decide on the mount points.

Since the overall process did take me some time, I figured I’d share some notes on the steps I took for folks interested in doing the same. I’ve also provided links to resources that helped me out along the way.

Planning is the lion’s share of the work. Once you’ve decided what to do, the actual addition of the drive and the execution of the commands is fairly straightforward.

  • Determine why you need another hard drive
    It’s always nice to have more space, but beyond that I had some more practical concerns.

    The server came with a single 80GB disk. When I installed CentOS, I was quite lazy and settled on a small swap partition with a large root partition taking up the rest of the disk.

    So, there were at least four reasons guiding my decision to add another drive:

    • To prevent the single root filesystem from filling up and rendering the machine unusable. This could happen since I have nightly cron jobs that sync backups (large media files) from my workstation to the server. It might also happen if my machine were suddenly hit with a burst of traffic or a DoS attack.
    • I didn’t want to resize the existing disk partition or reinstall the operating system. I felt more comfortable creating overflow filesystems on a blank disk.
    • I wanted to provide a measure of redundancy in case the first hard drive failed. Based on rumors from colleagues and other unscientific anecdotes, Maxtor hard drives are more prone to failure. There’s also the small matter of my server sitting on the floor of a dusty basement next to the litter box.
    • To limit different types of disk activity to different partitions. For example, logging (writes to large files) is a different usage pattern than Web serving (reading lots of little files). I wanted a measure of control over how I/O was optimized.
  • Find the right drive for your server
    Given that my machine had only one free SATA bay, my strategy was to get the largest drive possible for around $100.

    A great tool for determining your system’s current configuration and its potential for expansion is Hardware Lister (lshw). It will give you specific model information and insight into where you might expand your system.

    Pipe the output of this command to a text file so that you’ll have the information readily available. It will guide your upgrade decisions and serve to verify the installation later.

    [root@192.168.1.1]# /usr/sbin/lshw > /root/lshw-before.txt
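
    If it’s only the storage subsystem you’re after, lshw can also filter its report by hardware class (check your version’s man page), which saves wading through the full tree:

    [root@192.168.1.1]# /usr/sbin/lshw -class disk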

    Based on this information, I was able to Google compatible drives for the machine, and found a solid price on a Western Digital 400GB SATA drive at Newegg.

  • Make sure you have the right hardware to install the drive
    Hard drives are often sold in Retail or OEM packaging. Retail comes in a pretty box and will include the essential hardware bits, such as cables and fasteners. OEM is the way to save some cash if you’re replacing an existing drive or have extra hardware for your box.

    I had plenty of screws from the carcasses of some E450s, but I mistakenly thought I could reuse the brackets too. Instead, I had to pay $25 for a new plastic bracket in which to mount the drive. Arg.

    Along with an anti-static wrist strap, a Torx T10 screwdriver came in quite handy.

  • Install the drive and make sure it’s seen by the kernel

    DISCLAIMER: If you intend to add a hard drive to your system, please take all due precautions before starting, as there is a real possibility that you will lose some data.

    I shut down the machine, removed the power cable, slid the drive in, closed it up, then booted back up.

    I then ran lshw again. Comparing this with the prior snapshot of the system, I could verify that the drive was recognized as /dev/sdb.

    [root@192.168.1.1]# /usr/sbin/lshw > /root/lshw-after.txt
    [root@192.168.1.1]# diff /root/lshw-before.txt /root/lshw-after.txt
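
    The kernel ring buffer is another quick confirmation; the new drive should show up there under its device name:

    [root@192.168.1.1]# dmesg | grep -i sdb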
  • Create the partitions
    Through trial and error I had settled on a reasonable filesystem layout for my previous Sun systems. Solaris has very impractical defaults for my purposes, so I’d done some legwork in the past to make sure I had enough space in the right places.

    This is what my latest Solaris 10 partitioning scheme looks like:

    c0t0d0 (~20GB)
            0 /             (~8GB)          [c0t0d0s0]
            1
            3
            4 /swap         (~1GB)          [c0t0d0s4]
            5
            6
            7 /export/home  (~10GB)         [c0t0d0s7]
    
    c0t2d0 (~80GB)
            0 /var          (~25GB)         [c0t2d0s0]
            1
            3
            4 /opt          (~25GB)         [c0t2d0s4]
            5
            6
            7 /usr          (~25GB)         [c0t2d0s7]
    

    If I were doing a fresh Linux operating system install today, I might choose something similar. But given that I was upgrading an existing system, I decided only to offload what was in the /var and /home directories.

    Using fdisk, I decided to slice up the 400GB hard drive like so (the interactive session is sketched just after this list):

    • New primary partition of 50GB at /dev/sdb1 (for /var/www)
    • New primary partition of 50GB at /dev/sdb2 (for /var/log)
    • New primary partition of 50GB at /dev/sdb3 (for /var/mail)
    • New primary partition of 200GB at /dev/sdb4 (for /home)
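
    If you haven’t driven fdisk interactively before, the session for the first partition runs roughly as follows. This is a sketch from memory, so the exact prompts and size syntax will vary a bit between fdisk versions; the final fdisk -l prints the finished table so you can double-check sizes before formatting anything.

    [root@192.168.1.1]# /sbin/fdisk /dev/sdb
    Command (m for help): n              (new partition)
    Command action
       e   extended
       p   primary partition (1-4)
    p
    Partition number (1-4): 1
    First cylinder: <Enter>              (accept the default start)
    Last cylinder or +size: +50000M      (roughly 50GB)
    Command (m for help): w              (write the table and exit)
    [root@192.168.1.1]# /sbin/fdisk -l /dev/sdb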

    “Partitioning with fdisk” and “Creating Linux partitions” are helpful resources for working with fdisk.

  • Create the filesystems
    There are several options for filesystems, but ext3 seems to be the sweet spot for most Linux usage scenarios.

    IBM developerWorks has a good overview of ext3. There’s more information in chapter 1 of the new “Linux Performance and Tuning Guidelines” Redbook as well.

    I formatted each of the partitions as ext3 using the following command. (From this point on I’ll only show the commands for one of the four new partitions).

    [root@192.168.1.1]# /sbin/mkfs -t ext3 /dev/sdb1
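
    As an optional extra, you can label each new filesystem, which makes fstab entries and mount output easier to read later. tune2fs adds a label after the fact; the label text here is just my example:

    [root@192.168.1.1]# /sbin/tune2fs -L www /dev/sdb1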
  • Mount the filesystems
    To make the new filesystems accessible, I created a mount point for each and mounted it under /mnt.

    [root@192.168.1.1]# mkdir -p /mnt/var/www
    [root@192.168.1.1]# mount -t ext3 /dev/sdb1 /mnt/var/www

    At this point, all I saw in each new mounted directory was a “lost+found” folder.
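
    A quick df at this stage confirms that the new filesystems are mounted where you expect, at the sizes you expect:

    [root@192.168.1.1]# df -h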

  • Copy over data to the new filesystem
    Now that the new filesystems were ready for use, I needed to move the existing directories over.

    To make sure nothing was being written to the source filesystem, I dropped to single-user mode.

    [root@192.168.1.1]# init 1

    I then copied the data from the source filesystem recursively while preserving file metadata, such as owner and last modified time. (One caveat: the * glob won’t match hidden dotfiles at the top level of the source directory, so check for those first, or use cp -ax . instead.)

    [root@192.168.1.1]# cd /var/www
    [root@192.168.1.1]# cp -ax * /mnt/var/www

    I verified quickly to make sure everything was copied. You might want to check things more thoroughly than this, however; one option follows below.

    [root@192.168.1.1]# find /var/www | wc -l
    [root@192.168.1.1]# find /mnt/var/www | wc -l
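
    For that more thorough check, a recursive diff compares actual file contents rather than just counts; it reads every byte, so expect it to take a while on large trees:

    [root@192.168.1.1]# diff -r /var/www /mnt/var/www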

    I then moved the source directory aside so I could mount the new filesystem in its place.

    [root@192.168.1.1]# mv /var/www /var/www.old

    I mounted the new filesystem at the original path, then pressed Ctrl-D to return to multi-user mode.

    [root@192.168.1.1]# mkdir -p /var/www
    [root@192.168.1.1]# umount /mnt/var/www
    [root@192.168.1.1]# mount -t ext3 /dev/sdb1 /var/www
    [root@192.168.1.1]# ctrl-D

    “System Administration Toolkit: Migrating and moving UNIX filesystems,”
    “Partitioning in action: Moving /home” and
    “Partitioning in action: Consolidating data” are good resources which document these steps.

  • Add the drive to fstab
    Everything looked good, so I mapped the new filesystems permanently and rebooted the box.

    [root@192.168.1.1]# vi /etc/fstab
            /dev/sdb1   /var/www  ext3    defaults     0 0
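
    Before rebooting, it’s worth letting mount parse the new entries. mount -a attempts to mount everything in fstab that isn’t already mounted, so a typo turns up here rather than at boot time:

    [root@192.168.1.1]# mount -a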

    In a few days, once I’m satisfied that everything is working as planned, I’ll archive the old directory and remove it to save space.

    [root@192.168.1.1]# rm -fr /var/www.old

And that’s it. Once I had a plan, everything went very quickly. The recursive copy from the source directory to the new destination was the most time-consuming step, mainly because I already had 40GB in my home directory, but even that took less than half an hour.

Here are some more resources I consulted when adding my new hard disk.

I also owe thanks to Martin Corona for doing a sanity check on my setup.

Ten Years of JavaScript

10 April 2007 » Ajax, JavaScript, PHP

This spring marks a decade since I first used JavaScript.

My initial exposure to programming, and computer science in general, was through an introductory course for non-computer science majors called “Computers in a Modern Society.”

The class filled us in on the history of the Internet, the World Wide Web, and HTML. JavaScript seemed tacked on to the syllabus at the last minute, given the buzz surrounding its release the year before.

It was a massacre. Less than half of the hundred or so political scientists, economists, psychologists, and philosophers survived that course, even though JavaScript had been touted as an easy-to-learn, high-level, “toy” of a language.

Thus began my love/hate relationship with JavaScript. The beast that slew so many of my friends introduced me to C-like syntax, showed me functional programming, and provided the leverage I needed to start my career a couple of years down the road.

Tin or aluminum is the traditional 10-year anniversary gift, and it befits my tribute to this lightweight staple of the modern Web, waiting in the sand to slice your foot open.

Following is a short history of my relationship with JavaScript.

  • 1997
    My first Date() with JavaScript. Original programs, later cleaned up in 2000: Reflection Cipher Encryption, EU Currencies Converter.
  • 1998
    Went through my “Select menu choose a link from the dropdown and you’ll go to the address you don’t even need to click submit!” phase. This carried through most of the year.
  • 1999
    Interned at the Federation of American Scientists, Arms Sales Monitoring Project. Rebuilt the site (it’s still up!): 10% HTML, 90% JavaScript to provide cool roll-over effects in the masthead image links. Oddly, the masthead JavaScript is no longer used.
  • 2000
    Built the TLD Lookup tool to see where my Web site visitors were coming from. Discovered server-side scripting such as ASP and PHP, and began to look down upon front-end coders like the one I’d been just a month prior. Like every other developer working at a health-oriented Web site, I wrote a Body Mass Index calculator.
  • 2001
    Lots of front-end workarounds, since the platform of the site I worked on full-time went from ASP to ATG Dynamo, robbing us lowly front-end developers of any server-side logic we could script. Seeds of bitterness towards MVC planted. Besmirched JavaScript’s good name via pop-unders at the behest of higher-ups. Ug.
  • 2002
    Continued to mock those with functions beginning with “MM_” in their source.
    Did lots of date-related things on the client side to work around the lack of server-side scripting. Came up with an ill-conceived “scrambleCard(strCC)” function to mask credit card numbers (fortunately the site was served via HTTPS anyway).
  • 2003
    Worked with some folks at NYPHP to develop an airline ticket booking site. Chris Snyder opened up a whole new world for us by implementing a pre-Ajax technique to manage city and airport dropdowns. I remain proud of my form validation system, which turned the text of improperly filled form fields bright red and set the title attribute of the surrounding label elements to condescending error messages.
  • 2004
    Lost most of the year to an ill-fated content management tool project. Not all was lost: I cut my teeth on J2EE and Struts, and made peace with MVC. Implemented rollovers from Young Pup on every freelance job I could get a hold of. Had fun with banner and splash image rotations, transparent PNGs, and the like.
  • 2005
    Began drafting a presentation on how to learn JavaScript instead of relying on copy and paste. After seeing JSF tooling demonstrated in Rational Application Developer, expressed shock that there were tools and technology that would generate JavaScript for you, instead of handcoding it for browser compatibility and efficiency. The horror.
  • 2006
    Shamefully lagged behind on Ajax and Web 2.0 advancements. On the other hand, strengthened my knowledge of PHP, J2EE, and database modeling to fill out that part of the architecture.
  • 2007
    Return to client-side coding via a job change and new projects within my larger organization. Lots of Ajax, JSON, and SOA. More when that launches later this year…

Here’s to JavaScript 1.7, and many more rumors of its imminent demise.

Southern California

10 April 2007 » Photos, Travel

I’ve finally posted pictures from the trip Cat and I took to Southern California at the end of February. I’m working on a better method of paging through them, but for now, enjoy.

Update: Added a paging mechanism.

Kathy Sierra and the Blogger’s Code of Conduct

The blogosphere was aflutter last week after a slew of nasty comments and death threats were leveled at one of my favorite authors, Kathy Sierra of Head First fame, by one or more anonymous posters on her blog and in other high-profile forums.

Kathy’s reaction to the punks (and to the owners of the sites themselves) was covered on Slashdot and even hit the BBC. CNN was slated to air a segment on it this morning.

The incident spurred a call for a “Blogger’s Code of Conduct,” and Tim O’Reilly has led the way with a first draft on his blog. Mostly common sense I suppose, but still a good start and cause for reflection when posting or replying to blogs.

In any case, I hope to see Kathy back and writing soon. Readers like myself owe much to her ability to help us understand complex software development concepts through humor, and to drive the point home by involving beer consumption as the logical end goal of any proper sample application.