Taking it one step further, DB2 will also be available as a storage engine behind a MySQL front end, just as the popular MyISAM and InnoDB table types are today.
The technology is still off in the future, and even then has only been announced for System i, but it opens the door to some interesting possibilities.
I don’t work for the group at IBM responsible for this technology, nor do I know what their detailed plans are, but here’s how I see the potential behind this collaboration.
You’ll be able to…
- Run MySQL natively on i5/OS
Can’t say I’ve ever built an application consisting wholly of the MySQL Server, but this is the foundation that makes everything else possible.
- Run third-party applications built for MySQL and PHP on System i without modification
Paired with strong PHP support, the entire ecosystem of third-party apps would now be available on this platform and none the wiser to their environment.
- Enable existing data stored in DB2 to be accessed through MySQL
Imagine you’re using a scripting language or application framework that has a MySQL driver, but a driver for DB2 is incomplete, unstable, or simply doesn’t exist. MySQL could be the glue.
- Use MySQL itself as a de facto database abstraction layer
MySQL is a relatively small download. What if instead of writing database-agnostic SQL or working through a PHP-based or C-based abstraction layer you could use the database server itself to translate your calls, and simply program to the mysqli_* API?
- Manage access levels and tweak performance based on user type
If you store your data in a DB2 storage engine and provide access to two separate user communities (read-only Web visitors through the MySQL Server, and read/write internal content editors through the standard DB2 clients), you give yourself more flexibility to manage privileges, caching, and performance settings.
It seems there will be a lot of intriguing options available to developers as this cooperation between IBM and MySQL evolves. These are just some of my first thoughts on what might be possible.
I’d love to find out more from the folks working to make this happen, and hear what else might be possible from other developers in the community at large.
A quick note to help out other folks looking in version 7 of Rational Application Developer or Rational Software Architect for the Code Review feature they used in version 6 of these tools.
Instead of selecting the Code Review view from within the Java perspective, you’ll now find the same functionality under Analysis Results.
In general, references to “Code Review” in the help documentation have been changed to “Static Analysis.”
New York PHP has officially lined up its next four monthly presenters:
- Chris Shiflett of OmniTI has volunteered on short notice to discuss the latest in Web 2.0 security at tomorrow’s meeting.
- Kenneth Downs from Secure Data Software will take us back to basics with his Introduction to Databases for Programmers in May.
- Mike Potter from Adobe will kick off the summer with a lesson on Rich Internet Applications (RIA) and PHP.
- Mike Smith from IBM will bring us up to speed on PHP for i5/OS in July.
The RSVP system for the April meeting tomorrow night is still open till 6pm EDT tonight. Hope you can make it.
Update: The RSVP deadline has been moved to midnight tonight.
Last week I added a second hard disk to an IBM eServer xSeries 226 server running CentOS 4.4.
I picked up a large hard drive in January, but I didn’t have the proper hardware to install it right away.
Besides that, I had to settle on how to partition the drive, create the filesystems, and decide on the mount points.
Since the overall process did take me some time, I figured I’d share some notes on the steps I took for folks interested in doing the same. I’ve also provided links to resources that helped me out along the way.
Planning is the lion’s share of the work. Once you’ve decided what to do, the actual addition of the drive and the execution of the commands is fairly straightforward.
- Determine why you need another hard drive
It’s always nice to have more space, but beyond that I had some more practical concerns.
The server came with a single 80GB disk. When I installed CentOS, I was quite lazy and settled on a small swap partition with a large root partition taking up the rest of the disk.
So, there were at least four reasons guiding my decision to add another drive:
- To prevent the single root filesystem from filling up and rendering the machine unusable. This could happen since I have nightly cron jobs that sync backups (large media files) from my workstation to the server. It might also happen if my machine were suddenly hit with a burst of traffic or a DoS attack.
- I didn’t want to resize the existing disk partition or reinstall the operating system. I felt more comfortable creating overflow filesystems on a blank disk.
- I wanted to provide a measure of redundancy in case the first hard drive failed. Based on rumors from colleagues and other unscientific anecdotes, Maxtor hard drives are more prone to failure. There’s also the small matter of my server sitting on the floor of a dusty basement next to the litter box.
- To limit different types of disk activity to different partitions. For example, logging (writes to a few large files) is a different usage pattern than Web serving (reads of lots of little files). I wanted a measure of control over how to optimize I/O.
- Find the right drive for your server
Given that my machine had only one free SATA bay, my strategy was to get the largest drive possible for around $100.
A great tool for analyzing your system's current configuration and its potential for expansion is Hardware Lister (lshw). It will give you specific model information and insight into where you might expand your system.
Pipe the output of this command to a text file, so that you'll have the information readily available. This will guide your upgrade decisions and serve to verify the installation later.

    # /usr/sbin/lshw > /root/lshw-before.txt
Based on this information, I was able to Google compatible drives for the machine, and found a solid price on a Western Digital 400GB SATA drive at Newegg.
- Make sure you have the right hardware to install the drive
Hard drives are often sold in Retail or OEM packaging. Retail comes in a pretty box and will include the essential hardware bits, such as cables and fasteners. OEM is the way to save some cash if you’re replacing an existing drive or have extra hardware for your box.
Along with an anti-static wrist strap, a Torx T10 screwdriver came in quite handy.
- Install the drive and make sure it’s seen by the kernel
DISCLAIMER: If you intend to add a hard drive to your system, please take all due precautions before starting, as there is a real possibility that you will lose some data.
I shut down the machine, removed the power cable, slid the drive in, closed it up, then booted back up.
I then ran lshw again. Comparing this with the prior snapshot of the system, I could verify that the drive was recognized as /dev/sdb.

    # /usr/sbin/lshw > /root/lshw-after.txt
    # diff /root/lshw-before.txt /root/lshw-after.txt
- Create the partitions
Through trial and error I had settled on a reasonable filesystem layout for my previous Sun systems. Solaris has very impractical defaults for my purposes, so I had needed to do some legwork in the past to make sure I had enough space in the right places.
This is what my latest Solaris 10 partitioning scheme looks like:
    c0t0d0 (~20GB)
      slice 0: /             (~8GB)   [c0t0d0s0]
      slice 4: swap          (~1GB)   [c0t0d0s4]
      slice 7: /export/home  (~10GB)  [c0t0d0s7]

    c0t2d0 (~80GB)
      slice 0: /var          (~25GB)  [c0t2d0s0]
      slice 4: /opt          (~25GB)  [c0t2d0s4]
      slice 7: /usr          (~25GB)  [c0t2d0s7]
If I were doing a fresh Linux operating system install today, I might choose something similar. But given that I was upgrading an existing system, I decided only to offload what was in the /var and /home directories.
Using fdisk, I decided to slice up the 400GB hard drive like so:
- New primary partition of 50GB at /dev/sdb1 (for /var/www)
- New primary partition of 50GB at /dev/sdb2 (for /var/log)
- New primary partition of 50GB at /dev/sdb3 (for /var/mail)
- New primary partition of 200GB at /dev/sdb4 (for /home)
- Create the filesystems
There are several options for filesystems, but ext3 seems to be the sweet spot for most Linux usage scenarios.
I formatted each of the partitions as ext3 using the following command. (From this point on I'll only show the commands for one of the four new partitions.)

    # /sbin/mkfs -t ext3 /dev/sdb1
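If you want to rehearse this step too, mkfs will format a plain image file when passed -F. This sketch assumes e2fsprogs is installed; the image file is a stand-in for /dev/sdb1.

```shell
# Rehearsal only: format a throwaway image file instead of /dev/sdb1.
PATH="$PATH:/sbin:/usr/sbin"   # mkfs tools often live outside the default PATH
img=$(mktemp)
truncate -s 64M "$img"         # sparse 64MB file standing in for the partition
mkfs -t ext3 -F -q "$img"      # -F: proceed even though this is not a block device
tune2fs -l "$img" | grep -o has_journal   # the journal is what makes ext3 ext3
rm -f "$img"
```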
- Mount the filesystems
To make the new filesystems accessible, I mounted each to a location under /mnt.

    # mount -t ext3 /dev/sdb1 /mnt/var/www
At this point, all I saw in each new mounted directory was a “lost+found” folder.
- Copy over data to the new filesystem
Now that the new filesystems were ready for use, I needed to move the existing directories onto them.
To make sure nothing was being written to the source filesystem, I dropped to single-user mode.

    # init 1
I then copied data from the source filesystem recursively while preserving file metadata, such as owner and last-modified time.

    # cd /var/www
    # cp -ax * /mnt/var/www
I did a quick check to make sure everything was copied; you might want to verify more thoroughly than this, however.

    # find /var/www | wc -l
    # find /mnt/var/www | wc -l
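For a more thorough check than comparing file counts, a recursive diff confirms the contents actually match. The sketch below demonstrates on throwaway temp directories; in practice the two paths would be the real source and destination, such as /var/www and /mnt/var/www.

```shell
# Demonstration on temp directories; substitute the real paths in practice.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/site"
echo "hello" > "$src/site/index.html"
cp -ax "$src/." "$dst/"        # same flags as the copy above; "/." also picks up dotfiles
diff -r "$src" "$dst" && echo "trees match"   # silent diff + exit 0 means identical
rm -rf "$src" "$dst"
```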
I moved the source directory aside so I could mount the new filesystem in its place.

    # mv /var/www /var/www.old
Finally, I mounted the new filesystem in its place and returned to multi-user mode.

    # mkdir -p /var/www
    # umount /mnt/var/www
    # mount -t ext3 /dev/sdb1 /var/www
These steps are documented well in the following resources:

- "System Administration Toolkit: Migrating and moving UNIX filesystems"
- "Partitioning in action: Moving /home"
- "Partitioning in action: Consolidating data"
- Add the drive to fstab
Everything looked good, so I mapped the new filesystems permanently in /etc/fstab and rebooted the box.

    # vi /etc/fstab

    /dev/sdb1 /var/www ext3 defaults 0 0
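For reference, the full set of mappings for the four-partition layout above would look something like this; the /var/log, /var/mail, and /home lines are extrapolated from the partition plan rather than copied from my actual fstab.

```
/dev/sdb1  /var/www   ext3  defaults  0 0
/dev/sdb2  /var/log   ext3  defaults  0 0
/dev/sdb3  /var/mail  ext3  defaults  0 0
/dev/sdb4  /home      ext3  defaults  0 0
```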
In a few days, once I'm satisfied that everything is working as planned, I'll archive the old directory and remove it to save space.

    # rm -fr /var/www.old
And that's it. Once I had a plan, everything went very quickly. The recursive copy from the source directory to the new destination was probably the longest step time-wise, mainly because my home directory already held 40GB, but even that took less than half an hour.
Here are some more resources I consulted when adding my new hard disk.
- "My partition is full! Do I need to reinstall?"
- "How to Add New Hard disk to Linux Machine"
- "How can I add new hard disk after I've installed Linux?"
- "Tutorial: Adding Additional Hard Drives in Linux"
I also owe thanks to Martin Corona for doing a sanity check on my setup.
My initial exposure to programming, and computer science in general, was through an introductory course for non-computer science majors called “Computers in a Modern Society.”
Tin or aluminum is the traditional 10-year anniversary gift, and it befits my tribute to this lightweight staple of the modern Web, waiting in the sand to slice your foot open.
Went through my "select a link from the dropdown menu and you'll go right to the address, no need to even click submit!" phase. This carried through most of the year.
Built the TLD Lookup tool to see where my Web site visitors were coming from. Discovered server-side scripting such as ASP and PHP, and began to look down on front-end coders like the one I'd been just a month prior. Like every other developer working at a health-oriented Web site, I wrote a Body Mass Index calculator.
Continued to mock those with functions beginning with “MM_” in their source.
Did lots of date-related things on the client side to work around the lack of server-side scripting. Came up with an ill-conceived "scrambleCard(strCC)" function to mask credit card numbers (fortunately, the site was served via HTTPS anyway).
Worked with some folks at NYPHP to develop an airline ticket booking site. Chris Snyder opened up a whole new world for us by implementing a pre-Ajax technique to manage the city and airport dropdowns. I remain proud of my form validation system, which changed the text of improperly filled form fields to bright red and altered the title attributes of the surrounding label elements with condescending error messages.
Lose most of the year to an ill-fated content management tool project. Not all is lost, I cut my teeth on J2EE and Struts, and make peace with MVC. Implement rollovers from Young Pup on every freelance job I can get a hold of. Have fun with banner and splash image rotations, transparent PNGs and the like.
Shamefully lag behind on Ajax and Web 2.0 advancements. On the other hand, strengthen knowledge of PHP, J2EE and database modeling to fill out that part of the architecture.
Return to client-side coding via job change and new projects within my larger organization. Lots of Ajax, JSON and SOA. More when that launches later this year…
I’ve finally posted pictures from the trip Cat and I took to Southern California at the end of February. I’m working on a better method of paging through them, but for now, enjoy.
Update: Added a paging mechanism.
The blogosphere was aflutter last week after a slew of nasty comments and death threats were leveled at one of my favorite authors, Kathy Sierra of Head First fame, by one or more anonymous posters on her blog and in other high-profile forums.
The incident spurred a call for a “Blogger’s Code of Conduct,” and Tim O’Reilly has led the way with a first draft on his blog. Mostly common sense I suppose, but still a good start and cause for reflection when posting or replying to blogs.
In any case, I hope to see Kathy back and writing soon. Readers like myself owe much to her ability to help us understand complex software development concepts through humor, and to drive the point home by involving beer consumption as the logical end goal of any proper sample application.