Charlotte, North Carolina

16 March 2008 » DB2, IBM, Photos, Travel

I spent ten days on business in Charlotte, North Carolina, in the middle of February. The city and its metropolitan area were much larger than I had expected.

The airport’s big, the IBM complex is massive (it used to be the home of 6,000 manufacturing jobs), and the city really does have a relative importance I hadn’t realized (forgive my Northeastern, non-finance sector prejudices). On top of that it’s growing fast.

I found out later that had I stuck around a few more days I could have stopped by the SIRDUG meeting and had a chance to hear from DB2 gurus Robert Catterall and Roger Sanders at the same facility. Damn.

I didn’t have much time to see the sights, but I did grab some not-so-spectacular pictures of downtown.

One of the highlights was a Saturday afternoon trip to the outskirts for some of the best barbecue I’ve ever had, at a biker bar called Mac’s.

Perhaps the most incredible part of the trip was that not one, but two places sold Genny Cream Ale by the bottle. The importance of this cannot be overstated, though I don’t have time to go into the details just yet…

Mashups from IBM at NYPHP in January

10 January 2008 » DB2, IBM, New York PHP, PHP, Zend

On Tuesday, January 22nd, Dan Gisolfi will talk about the latest PHP-based technologies from IBM for developing Web 2.0 mashups at New York PHP.

Centered around the concept of “situational applications,” IBM’s work with mashups targets a growing trend in Web site development.

Applications are increasingly built by end users to meet their particular needs at a particular time without the time and expense of a traditional software development process.

A recent paper in the IBM Systems Journal describes the new approach in great detail.

Situational applications are created rapidly by teams or individuals who best understand the business need, but without the overhead and formality of traditional information technology (IT) methods.

Understandably, traditional PHP developers might be wary of this new technology, much as a general contractor would be if a Home Depot opened around the corner.

Instead, IT specialists should embrace the model as a foothold for PHP in the enterprise. To that end, Dan Gisolfi will:

  • Demo IBM’s Mashup Starter Kit (which includes IBM Mashup Hub and QEDWiki).
  • Highlight best practices for designing and assembling data-driven mashups.
  • Discuss IBM’s collaboration with Zend and ProgrammableWeb to bring mashups to the enterprise.

As always, New York PHP meetings are free and open to the public, but you must RSVP by 3pm on Monday, January 21st.

Log Buffer #78: A Carnival of the Vanities for DBAs

04 January 2008 » DB2, IBM, MySQL, PHP, System administration, XML

Happy new year everyone! This week I’m honored to host the 78th edition of Log Buffer, the weekly roundup of database blogs.

A special thanks goes to Dave Edwards of the Pythian Group for the opportunity to start the year right by catching up on the latest developments around the database world. I’ve been blissfully out of the loop planning a wedding, relaxing on the honeymoon, and spending time with family. :)

About this week’s news
Many folks were also off celebrating the holidays (or recovering from New Year’s celebrations), so it’s been a quiet week.

Without an earth-shattering announcement to stir up controversy, there’s been a trend towards end-of-year summaries, predictions for the new year, and posts that jot down tips or otherwise reflect on projects that scratch the author’s itch.

I’m an IBM Web application developer – not a database administrator per se – so this week’s edition will offer my biased take on the news. I hope you enjoy it anyway. :)

DB2 and Informix
First up, Chris Eaton encourages us to have a look at (and get involved with) the new PHP-based DB2 Monitoring Console project at SourceForge.

The DB2MC aims to be the long-awaited Web-based console for managing DB2 instances and databases, merging the role of the standalone Control Center shipped with DB2 with the simplified approach to database administration taken by the popular phpMyAdmin project favored by many MySQL shops.

Over at DB2 Magazine, Scott Hayes of DBI asserts that “performing excessive and unnecessary sorts is the number two performance killer in most databases” on the Linux, Unix, and Windows platforms. Fortunately, he offers a few tips for neutralizing this elusive killer.

On the mainframe, Robert Catterall provides some tips for maximizing performance when accessing data by tweaking the size of blocks fetched over the network from DB2 z/OS.

Further good news for DB2 customers is that the always popular “Recommended reading lists” for database administration and application development at IBM developerWorks have been updated for v9.

On the Informix platform, the latest issue of the International Informix Users Group (IIUG) Insider has been published. It announces that registration for the IIUG Informix conference is open, announces board elections (man, those middle American states have a lot of electoral clout), and reflects on a year marked by the mid-year release of IDS 11.

MySQL
Over at The Open Road, CNET blogger Matt Asay reveals MySQL CEO Mårten Mickos’ reflections on 2007. The widespread adoption of several editions of MySQL 5 was a highlight this year, along with advancements in scale-out features such as replication, partitioning, load-balancing, and caching.

Mickos notes that MySQL continues to build on its strength as a Web database and expand into corporations to complement instead of compete with existing proprietary platforms such as Oracle.

In other integration news, there has been some traction on the planned DB2 storage engine and MySQL port to i5/OS. An IBM Redbook will be published by the end of the month.

Moving down to the bare metal, Mark Robson has decided to put down an explanation for the many users who ask him about the pitfalls of running out of address space (not memory itself) on 32-bit MySQL installations.

Short answer: Spring for a 64-bit machine and stock plenty of RAM, regardless of the underlying operating system. :)

PostgreSQL
Andrew Dunstan offers up source for a conditional update trigger that intercepts modifications if their values don’t differ from what’s already in the database. This filter can save the expense incurred by unnecessary index updates.
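I haven’t reproduced Dunstan’s source here, so the following is only a minimal sketch of the general idea: a BEFORE UPDATE trigger that cancels the write when nothing actually changed. The function, trigger, and table names are invented for illustration.

```sql
-- Hypothetical sketch of a "suppress no-op updates" trigger (PostgreSQL).
CREATE OR REPLACE FUNCTION suppress_redundant_update() RETURNS trigger AS $$
BEGIN
  -- IS NOT DISTINCT FROM treats NULLs as equal, unlike =
  IF NEW IS NOT DISTINCT FROM OLD THEN
    RETURN NULL;  -- cancel the UPDATE: no row change, no index maintenance
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER skip_noop_updates
  BEFORE UPDATE ON articles
  FOR EACH ROW
  EXECUTE PROCEDURE suppress_redundant_update();
```

Returning NULL from a row-level BEFORE trigger tells PostgreSQL to skip the operation for that row, which is what saves the index-maintenance cost.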

Leo Hsu and Regina Obe clarify PostgreSQL’s support for stored procedures (or lack thereof) for a user over at the Postgres OnLine Journal.

They retort: “So the question is, is there any reason for PostgreSQL to support bona fide stored procedures aside from the obvious To be more compatible with other databases and not have to answer the philosophical question, But you really don’t support stored procedures?” Touché, grasshopper.

Robby Russell points us to the call for papers at PGCon 2008, and is himself interested in seeing a presentation relevant to Ruby on Rails Web app developers.

Oracle
Howard Rogers provides a hefty PDF of the courseware he once used to teach a 5-day bootcamp – complete with exercises, slides, and explanatory notes on “everything there is to know about Oracle.” But doesn’t that mean the oracle also knows everything about Howard? Think about it.

The seventy-megabyte download targets 9i and has been partially updated for 10g, but the underlying themes should still be relevant for 11g.

Richard Foote details another subtle gotcha in his series on the differences between unique and non-unique indexes.

A befuddled Steven Karam details his root cause analysis of a problem upgrading Oracle 10 across x86 platforms. He found the solution despite a none-too-helpful error message. He concludes with a suggestion to Oracle for a better way to aid those who run into a similar problem…

Matt Topper announced a new way to keep up with Oracle news, a link-sharing site called Ora-Click.com. For those groaning “not another social network for geeks,” this is a subject specific site and looks quite slick. I can see this model being emulated by other technology or product knowledge domains.

Eddie Awad is already on board with the Ora-Click idea and has offered a few suggestions for making it even more useful.

SQL Server
There have been quite a few posts about learning the new features of SQL Server 2008 ahead of its hotly anticipated February release.

SSQA.net provides us with a pointer to virtual training courses that Microsoft is offering through the end of January ahead of the 2008 general release. This ten part Web seminar series covers topics ranging from high availability to manageability, security, business intelligence, and reporting.

Bob Beauchemin has a trio of tips for using the new features of SQL Server 2008, including pointers on plan guides and on using row constructors.
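For the curious, a row constructor lets one INSERT statement supply several rows where SQL Server 2005 needed one statement per row. A quick sketch (the table is hypothetical):

```sql
-- SQL Server 2008 row constructor: multiple rows in a single INSERT
INSERT INTO Colors (Id, Name)
VALUES (1, 'red'),
       (2, 'green'),
       (3, 'blue');
```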

Thrudb
With all the buzz surrounding SimpleDB in December, Ilya Grigorik, CTO of Igvita, details Jake Luciani’s “faster, cheaper alternative” to Amazon’s offering. So far the reviews are positive. If you’re into document-based databases or S3 storage, this is worth a look.

CouchDB
Anant Jhingran and Sam Ruby have announced that Damien Katz of CouchDB will join IBM over in Information Management. In addition, CouchDB will be donated to the Apache Software Foundation as a top level project.

ObjectStore
Dan Weinreb, co-founder of Object Design which developed ObjectStore, carries on the backlash against Michael Stonebraker with a detailed account of how object-oriented database technology did indeed succeed from both a business and technical perspective.

In a follow-on post the same day, Weinreb delves into more detail about the lessons learned when creating ObjectStore.

In the words of the great General Kenobi, “Luke, you will find that many of the truths we cling to depend greatly on our own point of view.”

 

And that wraps it up for this week’s Log Buffer. I hope you have a good time reading, but make sure you don’t spend all weekend in front of the computer; there’s plenty of good old analog wild card action to follow. Go Giants!

Have a great 2008!

Instant XML feeds via the JSTL SQL tags

20 December 2007 » DB2, Java, MySQL, Web architecture, WebSphere, XML

A dusty old Java tag library can help conjure up siloed Web site data for new uses.

Some background
I’ve developed a number of server-side Java Web applications over the years, first with scriptlets embedded in JSP, then with the template and tag driven paradigm offered by ATG Dynamo before the J2EE standards, and most recently with the Model-View-Controller architecture pattern in Struts and Spring MVC.

Each of those technologies (mostly) improved on its predecessor and enforced a better separation of concerns between the database, application logic, and presentation of the end result in the browser. This in turn has helped my teams divide and conquer Web application development among specialized job roles.

That’s why I’ve long been puzzled that the SQL tags in the JavaServer Pages Standard Tag Library exist as a standard part of J2EE 1.3 onward. These tags enable a front-end developer to embed SQL directly into a JSP page without the need for scriptlet code.

This tag library seemed an ill-conceived reversion (anti-pattern even) to the days before MVC took hold as a best practice in the Java world, and I’m pretty sure I skipped that section of the objectives when studying for the SCWCD exam.

That said, the SQL tags came in pretty handy this week for a particular challenge, and the more I think about how I can use them beyond their intended purpose, the more every new requirement I see looks like a nail.

My particular application context
I support a content management application which was designed, developed, and deployed circa Web 1.9. It’s stable and performant, and most importantly, it met its functional requirements of the day.

In the two years that it’s been deployed, several new requirements have arisen that have expanded its anticipated scope as a traditional Web application.

In particular, the ubiquity of XML feeds has driven the need for it to present its core data outside the templates confined to its own Web site. The rise of tagging and the popularity of multimedia as syndicatable content have also made it creak.

Compounding the architectural limitations of the application itself is its inflexible hosting environment. The data center that this site is deployed to is governed by CYA-driven restrictions (rightly so) which constitute a barrier to frequent application deployment cycles that add new functionality.

This environment makes it difficult to adopt nascent technological advances – the next big thing in “coolness” or usability – but it has also kept the application exceptionally stable and available to meet its codified requirements without introducing undue legal or financial risk.

The application itself consists of two subcomponents. There is a Web application module on the secured intranet for authors to generate new content, and a publicly accessible read-only Web application module to display published content.

It’s primarily this latter Internet application where the use of JSTL SQL tags comes in most handy, but I can imagine uses on the intranet side as well (ad hoc reports, for example).

The case for SQL tag driven XML feeds
The JSTL standard defines a tag library for issuing queries against a data source defined in the Web deployment descriptor without using JDBC in Java scriptlet code in a JSP.

If this sounds like a simple concept that harks back to the Model 1 JSP days, that’s because it is. The documentation shows its own apprehension about the inappropriate use of these tags:

The JSTL SQL tags for accessing databases … are designed for quick prototyping and simple applications. For production applications, database operations are normally encapsulated in JavaBeans components.

But therein lies their simplicity, flexibility and power for this particular production application scenario.

To expose any slice of your data via a SQL query (as the authorized user mapped to the JNDI entry for that data source), all you need to do is write your query and iterate through the result set in an XML template defined in your JSP.

Think about that outside of this technology’s intended use as a prototype or simple application building block. Instead, imagine how you could use these tags to improve the value of a complex existing production application.

For example, suppose you’ve always provided an RSS feed for your latest ten published news stories. You’ve written your Controller or Action in your chosen MVC framework and deployed it.

But now your users are demanding the latest five thumbnails of images published with a story to accompany its syndicated title and abstract in their latest mashup. Or perhaps they only want to see the last 10 stories which contain a given keyword.

What do you do? You could write a new Action or Controller and proper Command class in Java to meet that requirement. That would require updating some configuration files or deploying an EAR or WAR.

But look, you have an existing deployed stable application. Why risk introducing new code or downtime to a perfectly good application? Why not just free your data for use by your users’ new requirements in a quick hitting, low risk way?

Reuse your data by plugging in new JSTL SQL tag driven JSP files; don’t rebuild your application for every new data usage requirement.

To the tag library!
Ok, so you’ve read this far. I promise, the implementation itself will be much shorter :)

So your users want more information delivered via your feeds, or they wish to query by keyword or otherwise filter your data in a way you never anticipated.

Let’s see if we can free up that data for them.

  1. Write your query, with or without input parameters.

    SELECT ID, TITLE, ABSTRACT FROM NEWS_ARTICLES;

    SELECT ID, TITLE, ABSTRACT
      FROM NEWS_ARTICLES
      WHERE BODY LIKE '%' || ? || '%'
      FETCH FIRST 5 ROWS ONLY;

    SELECT NA.ID, NA.TITLE, NA.ABSTRACT, NAI.THUMBNAIL
      FROM NEWS_ARTICLES NA, NEWS_ARTICLE_IMAGES NAI
      WHERE NA.ID = NAI.NA_ID;

  2. Determine what XML format it should be in, whether a standard such as Atom or something custom like the following.

    <?xml version="1.0" encoding="UTF-8" ?>
    <results>
      <result id="">
        <title></title>
        <abstract></abstract>
        <thumbnail></thumbnail>
        <body></body>
      </result>
    </results>
  3. Tie the query to the format in a JSP file using the JSTL SQL tag library (and optionally, the Core tag library to escape output) and the JNDI name of the data source you already have configured in web.xml.

    Consult the documentation if you want to use placeholders.

    <%@ page contentType="text/xml; charset=UTF-8" pageEncoding="UTF-8" session="false"%>

    <?xml version="1.0" encoding="UTF-8" ?>

    <%@ taglib prefix="sql" uri="http://java.sun.com/jsp/jstl/sql" %>
    <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>

    <sql:setDataSource dataSource="jdbc/yourdatasource"/>
    <sql:query var="items">
      SELECT NA.ID id, NA.TITLE title, NA.ABSTRACT abstract, NA.BODY body, NAI.THUMBNAIL thumbnail
        FROM NEWS_ARTICLES NA, NEWS_ARTICLE_IMAGES NAI
        WHERE NA.ID = NAI.NA_ID
    </sql:query>

    <results>
     <c:forEach var="row" items="${items.rows}" >
      <result id="<c:out value="${row.id}"/>">
        <title><c:out value="${row.title}"/></title>
        <abstract><c:out value="${row.abstract}"/></abstract>
        <thumbnail><c:out value="${row.thumbnail}"/></thumbnail>
        <body><c:out value="${row.body}"/></body>
      </result>
     </c:forEach>
    </results>

  4. Deploy the JSP file as your application server requires. If reloading is not enabled, restart the application (consider setting a 15 minute timeout or similar, so you gain the performance boost but provide a hook for updating JSPs individually).
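For the keyword-filter requirement mentioned earlier, the query can take runtime input through a nested <sql:param> tag rather than string concatenation. This is a hypothetical variant of the JSP above (the `keyword` request parameter and the query-string handling are my own assumptions, and it presumes the same <sql:setDataSource> as in step 3):

```jsp
<%-- Hypothetical keyword-filtered feed; ?keyword=... comes from the query string --%>
<sql:query var="items">
  SELECT ID id, TITLE title, ABSTRACT abstract
    FROM NEWS_ARTICLES
    WHERE BODY LIKE ?
    FETCH FIRST 10 ROWS ONLY
  <sql:param value="%${param.keyword}%"/>
</sql:query>
```

The <sql:param> binds the value to the ? placeholder via a prepared statement, so user input never gets spliced into the SQL text itself.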

Conclusion
That’s it: you’ve now added an aspect of functionality to your application which frees up any data you can query for via SQL (or XQuery, if your data server is so enabled). You’ve done it in a pluggable fashion and haven’t needed to build any new Java code within your existing application and its framework.

Of course, the flip side is that you’ve done it outside of your application framework and may have circumvented some well-intended best practices. However, you may prefer to think of this approach as a temporary, low-risk way to share the data available to the users of your application in novel ways – ways that may justify investing in the development of longer-term solutions.

Ironically enough, this reversion to single-file deployment can make an application buzzword-compliant with one of the most touted enterprise architectural patterns of recent years – SOA. It reduces the barrier between the value an application holds – its data – and the consuming end point of that data to a simple JSP.

Native XML Databases at NYPHP next week

17 October 2007 » DB2, Java, MySQL, PHP, XML

Elliotte Rusty Harold will offer his take on Native XML Databases at New York PHP next Tuesday night in Manhattan.

The presentation follows a mailing list thread and resulting blog post that generated a lot of interest and discussion on the topic. It should be a great talk for database administrators, application developers and content producers alike:

While much data and many applications fit very neatly into tables, even more data doesn’t. Books, encyclopedias, web pages, legal briefs, poetry, and more is not practically normalizable. SQL will continue to rule supreme for accounting, human resources, taxes, inventory management, banking, and other traditional systems where it’s done well for the last twenty years.

However, many other applications in fields like publishing have not even had a database backend. It’s not that they didn’t need one. It’s just that the databases of the day couldn’t handle their needs, so content was simply stored in Word files in a file system. These applications are going to be revolutionized by XQuery and XML.

If you’re working in publishing, including web publishing, you owe it to yourself to take a serious look at the available XML databases. This high-level talk explains what XML databases are good for and when you might choose one over a more traditional solution. You’ll learn about the different options in both open and closed source XML databases including pure XML, hybrid relational-XML, and other models.

As always, the meeting at IBM is free and open to the public, but you must submit your RSVP by 6PM EDT Monday, October 22nd.

DB2 for Intel Mac

24 September 2007 » DB2, IBM, Java, Mac, PHP

Antonio Cangiano has offered tantalizing news about the upcoming release of a developer’s edition of DB2 Express-C for Intel Macs.

According to Cangiano – a software engineer at the IBM Toronto Software Lab – a beta of the full data server, not just an application development client or driver, should be out by the end of the year.

The interest of Python and Ruby developers helped drive the case at IBM for a Mac version of DB2, but I imagine PHP and Java programmers on this platform are looking forward to the official announcement as well.

I’m still running on PowerPC and it would be nice to see DB2 released on an IBM processor built for Apple, but it’s another good reason to pick up a new Mac around Christmas :)

Elliotte Rusty Harold on native XML data servers

16 August 2007 » DB2, IBM, MySQL, PHP, Web development

Soon after a New York PHP mailing list exchange debating the merits of storing information in hierarchical XML format versus traditional relational tables, XML guru Elliotte Rusty Harold posted a summary of the State of Native XML Databases to his blog.

Like the thread that inspired it, the post has generated a lot of comments showing that it’s an emerging technology whose potential is not well understood and that the products which implement the technology aren’t well known.

Why use an XML database?
Before considering an investment in a data server that offers native XML storage (one that neither decomposes the XML nor stores it as an unstructured chunk, and that lets the user query its arbitrary individual elements), it’s necessary to take a step back and see what XML as a storage method offers the Web developer.

  • What sort of information should be stored as XML?
    The examples cited by Elliotte include large documents where the document itself is composed of related data, yet which it would be inefficient to break down into related tables and columns. A book can be broken down logically into a title and an abstract but what about the individual paragraphs in each chapter? What of the table of contents and index which are derived dynamically from data which exists elsewhere in the document?
  • Why can’t this data be stored in another format?
    It can be stored, but how do you make use of it? You might shred it, but this requires time to decompose and then recompose, assuming you can get back the data in the form you require. For example, what if you needed the first paragraph and figure of every chapter to compose a detailed table of contents? How would you write that query? What would you do if you needed to add, remove, or reorder a paragraph in an encyclopedia?
  • Why is data stored in XML format increasingly becoming valuable?
    According to Elliotte Rusty Harold and Anant Jhingran, most existing data isn’t traditional relational data at all. There is a ton of information that cannot currently be queried with traditional SQL nor stored efficiently in relational tables. Think about the Web itself: it’s a collection of documents, each with its own (ideally semantic) structure.
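To make the “first paragraph of every chapter” question above concrete, here is a hedged sketch of how such a query might look against a hybrid store like DB2 9, assuming a BOOKS table with an XML-typed CONTENT column; the table, column, and element names are all invented for illustration:

```sql
-- Hypothetical: pull each chapter's title and first paragraph from
-- natively stored XML in DB2 9, without shredding the document.
SELECT XMLQUERY(
         'for $ch in $doc/book/chapter
          return <entry>{$ch/title}{($ch/para)[1]}</entry>'
         PASSING CONTENT AS "doc")
  FROM BOOKS;
```

The point of the sketch is that the document stays whole: the query navigates its structure directly instead of recomposing rows from a dozen shredded tables.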

Sound bites
Here are a few of the insightful nuggets which sum up Elliotte’s point of view. For some of his thoughts on the future of XML in general, have a look at his Ten predictions for XML in 2007.

From http://lists.nyphp.org/pipermail/talk/2007-August/022724.html

Roughly 80% of the world’s data cannot plausibly be stored in a
relational database. The 20% that does fit there is important enough
that we’ve spent the last 20 years stuffing it into relational databases
and doing interesting things with it. I’m still doing a lot of that.

But there’s a lot more data out there that doesn’t look like tables than
does. Much of this data fits very nicely in a native XML database like
Mark Logic or eXist. There’s also data that has some tabular parts and
some non-tabular parts. This may work well in a hybrid XML-relational
database like DB2 9.

If your only place to put pegs is a table with square holes, then you’re
going to try pound every peg you find into a square hole. However, some
of us have noticed that a lot of the pegs we encounter aren’t shaped
like squares, and sometimes we need to buy a different table with
different shaped holes. :-)

Relational databases didn’t take the world by storm overnight. XML
databases won’t either. But they will be adopted because they do let
people solve problems they have today that they cannot solve with any
other tools.

From http://lists.nyphp.org/pipermail/talk/2007-August/022788.html

XML is not a file format. We’ve been down this road before. A native XML
database is no more based on a file format than MySQL is based on tab
delimited text.

From http://lists.nyphp.org/pipermail/talk/2007-August/022789.html

Storing books, web pages, and the like in a relational database has only
two basic approaches: make it a blob or cut it into tiny little pieces.
The first eliminates search capabilities; the second performs like a dog.

Also from http://lists.nyphp.org/pipermail/talk/2007-August/022788.html

>> I’m glad we have multiple tools to bring to bear on this kind of
>> problem, because I worry about the performance implications of
>> querying an XML database for the average price of those books, or
>> performing an operation that adds another field (tag?) to each book’s
>> “record”.

Average prices, or adding a field, can be done pretty fast. I don’t know
if it’s as fast as oracle or MySQL. I don’t much care. Sales systems are
exactly the sort of apps that relational databases fit well. But
actually publishing the books? That’s a very different story.

>> If it’s not too much trouble, could you give us some other use cases
>> for an XML database? Because title and first paragraph, if that’s
>> something a system “routinely does” could easily be stored as
>> relational data at the time of import.

Just surf around Safari sometime. Think about what it’s doing. Then try
to imagine doing that on top of a relational database.

Think about combining individual chapters, sections, and even smaller
divisions to make new one-off books like Safari U does. Consider the
generation of tables of contents and indexes for these books.

Closer to home, think about a blogging system or a content management
system. Now imagine what you could do if the page structure were
actually queryable, and not just an opaque blob in MySQL somewhere.

And the takeaway from the State of Native XML Databases:

If you’re working in publishing, including web publishing, you owe it to yourself to take a serious look at the available XML databases. If they already meet your needs, use them. If not, check back again in a year or two when there’ll be more and better choices.

The relational revolution didn’t happen overnight, and the XQuery revolution isn’t going to happen overnight either. However it will happen because for many applications the benefits are just too compelling to ignore.

Conclusion
This is interesting stuff, and I’m glad Elliotte was able to put forward some of the reasons one might use an XML database and describe the maturity level of the data server products out there now.
We’ve asked Elliotte to present at one of the upcoming New York PHP meetings in October or November. If he can’t make it, it would be interesting to hear from other folks doing PHP work with XML databases, such as the XML Content Store / Zend_Db_Xml in the Zend Framework.

Thoughts on the DB2 9 Fundamentals exam

03 August 2007 » DB2, IBM

This past Tuesday I took the DB2 9 Fundamentals certification exam that I set my sights on a couple of months back. The exam covers the basic topics in DB2 installation, administration and database usage. Successful candidates earn “IBM Certified Database Associate” status.

I hadn’t planned to have a go at it so soon, but I discovered the hard way that I only had until the end of July both to redeem my particular exam voucher and to take the test. When buying or receiving vouchers in the past, the two dates have been separate: one normally must redeem a voucher by a certain date, but can schedule the exam itself for some period after that deadline.

No matter, I did pass, but I have to admit the test was more challenging than I expected.

Study materials
I split my preparation between Roger Sanders’ DB2 9 Fundamentals Certification Study Guide and the DB2 9 Fundamentals certification 730 prep series on IBM developerWorks written by various DB2 subject matter experts.

Both resources overlap in their coverage of the exam objectives, but the developerWorks tutorials and other articles on the site cover XQuery and the XML topics in more detail whereas the book provides sample questions and a comprehensive mock exam.

Besides being a solid guide to the material, the study guide’s binding held up very well during the course of several trips to the beach, repeated cat bites, and rough rides in the back of my truck. :)

What I gained
Despite the rush to study before I had to take the exam, I retained a lot of knowledge and plan to apply what I learned to some existing database applications.

I would liken the preparation for this type of exam to something I did earlier this summer: spending a couple of hours with my vehicle’s owner’s manual. I’ve been driving that truck for two years, but I discovered a few things I never knew before that made general usage better – for example, how to use cruise control properly, monitor tire pressure, and fine-tune the alarm.

In particular, I shored up my knowledge of the following DB2 topics, and plan to take advantage of them in the coming weeks.

  • Isolation levels and lock characteristics. I’m looking forward to tuning our existing applications by specifying appropriate concurrency tweaks where I can to improve performance.
  • UNION, INTERSECT, EXCEPT set operators. I vaguely recall these from a generic introduction to SQL course I took a few years ago. With more experience under my belt, I think I can really take advantage of these types of queries now.
  • User defined types (UDTs). This could be an interesting way to apply Object-Oriented Analysis and Design concepts to the database, since one can map a business concept like MONEY to the native type DECIMAL(6, 2).
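A few quick sketches of those last topics, in DB2 syntax against hypothetical tables (only the MONEY/DECIMAL(6, 2) mapping comes from the material above; everything else is made up for illustration):

```sql
-- Statement-level isolation: accept dirty reads for a rough count
SELECT COUNT(*) FROM NEWS_ARTICLES WITH UR;

-- EXCEPT: authors who published in 2007 but not in 2006
SELECT AUTHOR_ID FROM ARTICLES_2007
EXCEPT
SELECT AUTHOR_ID FROM ARTICLES_2006;

-- A distinct type maps a business concept onto a native type, so a
-- MONEY value can't be accidentally compared with a plain DECIMAL.
CREATE DISTINCT TYPE MONEY AS DECIMAL(6, 2) WITH COMPARISONS;
```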

The Iowa Choral Directors Association
One gripe I do have – though it has nothing to do with the test per se – is that the short form of the title attained via certification is hard to pin down.

“ICDA” is ambiguous with the more advanced administrator certification – IBM Certified Database Administrator – which is particularly odd given how thoroughly IBM embraces acronyms. Googling the term to find folks who hold the qualification leads to some interesting results…

My next steps
I intend to follow through with the IBM Certified Application Developer (PDF) path, which means preparing for the DB2 9 Application Developer exam.

As IBM’s Program Manager for Information Management points out, there isn’t a dedicated book for this exam, but she does offer some suggestions for other study materials.

Along with those, I’m planning to crack open the DB2 8 Application Development Certification Guide that I’ve been hanging onto for about 3 years. As before, developerWorks tutorials should come in handy.
