A thought occurred to me on my commute to work this morning.
How can it be that the very folks who hold up the Bible as the Word of God to support their claims of Intelligent Design against Evolution are citing a document that has itself gone through so many adaptations, translations, and editions?
I probably first thought about it after seeing a special on the History Channel last year called “Who Wrote The Bible?” That episode discussed how and why certain books were shuffled around to make up the Bible at various points in the past.
Growing up Lutheran, I suppose I knew that we were always working with a doctored copy but never really understood how fundamentally it had changed.
Anyway, I don’t want to acknowledge that there’s a debate by providing an argument, but I think the irony here might be fun to explore.
Let me meditate on this some more.
Until then, go in peace, serve the pasta.
CNET just reported that NASA will use the Model Driven Architecture (MDA) capabilities in the IBM Rational line of software development products to manage the software that will power the James Webb Space Telescope.
The concept takes WYSIWYG much further, in that you’re using visual models to create and update entire architectures, not just building user interfaces.
This is particularly interesting because of what Martin Fowler, a luminary in object-oriented analysis and design, has to say about UML as a programming language.
In the first chapter of UML Distilled, Fowler describes the three common modes in which people use the UML today:
At the heart of the role of the UML in software development are the different ways in which people want to use it…
To untangle this, Steve Mellor and I independently came up with a characterization of the three modes in which people use the UML: sketch, blueprint, and programming language.
By far the most common of the three, at least to my biased eye, is UML as sketch. In this usage, developers use the UML to help communicate some aspects of a system…. Most UML diagrams shown in books… are sketches. Their emphasis is on selective communication rather than complete specification.
In contrast, UML as blueprint is about completeness… the distinction, I think, rests on the fact that sketches are deliberately incomplete, highlighting important information, while blueprints intend to be comprehensive, often with the aim of reducing programming to a simple and fairly mechanical activity. In a sound bite, I’d say that sketches are explorative, while blueprints are definitive…
Eventually, however, you reach the point at which all the system can be specified in the UML and you reach UML as a programming language. In this environment, developers draw UML diagrams that are compiled directly into executable code, and the UML becomes the source code…
Later in the chapter, Fowler goes on to express his opinion about the third usage:
In my view, it’s worth using the UML as a programming language only if it results in something that’s significantly more productive than using another programming language. I’m not convinced that it is, based on various graphical development environments I’ve worked with in the past. Even if it is more productive, it still needs to get a critical mass of users for it to make the mainstream. That’s a big hurdle in itself.
Prior to this announcement on CNET, I had taken a very pragmatic approach to using the UML to model applications: use it as a way to sketch or document your code, as Fowler describes above.
However, if IBM and NASA can pull this off, it would be a huge coup for UML as a programming language, not to mention putting regular programmers and developers out of work… except, of course, those who write the UML modeling tools in the first place… :)
I ran into an issue today that I couldn’t find an existing answer to. Hopefully this solution helps anyone else who’s still using Struts 1.x and running into errors when uploading large files through a Web form. The culprit is a setting for the Apache Jakarta Commons FileUpload library that Struts uses, rather than anything internal to Struts itself.
Occasionally during the submission of forms I’d see errors in the console which began with the following stack trace.
Code)) at org.apache.commons.fileupload.
You might think that it makes sense to increase the buffer size to fix a problem like that. You might even try increasing any of the other Struts RequestProcessor settings.
In retrospect it makes sense that you shouldn’t tell something to allocate more of a resource it already can’t get enough of, but it took me a cup of coffee or two to realize I had to reduce the buffer size, not increase it.
Anyway, lesson learned.
“bufferSize – The size (in bytes) of the input buffer used when processing file uploads.” Lowering this may affect performance, the documentation says, but that’s a better option than having it not perform at all.
“memFileSize – The maximum size (in bytes) of a file whose contents will be retained in memory after uploading. Files larger than this threshold will be written to some alternative storage medium, typically a hard disk. Can be expressed as a number followed by a “K”, “M”, or “G”, which are interpreted to mean kilobytes, megabytes, or gigabytes, respectively.”
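To make the memFileSize idea concrete: the behavior described above is essentially a threshold stream that keeps small uploads in memory and spills larger ones to disk. Below is a minimal, illustrative sketch of that idea. To be clear, this is not the actual Commons FileUpload code (which uses its own deferred output stream internally); the class and its names are made up for illustration.

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;

/**
 * Illustrative only: keeps bytes in memory until a threshold is crossed,
 * then spills everything written so far (and everything after) to a temp
 * file, in the spirit of what memFileSize controls.
 */
public class ThresholdBuffer {
    private final int threshold;
    private final ByteArrayOutputStream memory = new ByteArrayOutputStream();
    private File spillFile;          // null while the data still fits in memory
    private OutputStream current;

    public ThresholdBuffer(int threshold) {
        this.threshold = threshold;
        this.current = memory;
    }

    public void write(byte[] data) {
        try {
            if (spillFile == null && memory.size() + data.length > threshold) {
                // Threshold crossed: move the in-memory bytes to a temp file
                // and direct all further writes there.
                spillFile = File.createTempFile("upload", ".tmp");
                spillFile.deleteOnExit();
                OutputStream out = new FileOutputStream(spillFile);
                memory.writeTo(out);
                current = out;
            }
            current.write(data);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** True while the contents have not yet spilled to disk. */
    public boolean inMemory() {
        return spillFile == null;
    }
}
```

With a threshold of, say, 256K, a 10K upload never touches the disk, while a 5M upload lands in a temp file; that’s the trade-off memFileSize tunes.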
Default <controller> element in struts-config.xml:
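For reference, here’s roughly what it looks like with the upload-related attributes written out explicitly. The values shown are the defaults as I read the Struts 1.x documentation; an out-of-the-box struts-config.xml may simply omit the element and rely on them implicitly.

```xml
<!-- Upload-related defaults per the Struts 1.x docs (shown explicitly):
     bufferSize  = input buffer (in bytes) used when parsing uploads
     maxFileSize = largest upload accepted
     memFileSize = uploads above this size spill to disk -->
<controller
    bufferSize="4096"
    maxFileSize="250M"
    memFileSize="256K"/>
```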
What I tweaked to eliminate the error.
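Something along these lines did the trick for me. The exact numbers below are illustrative rather than gospel, so tune them for your own environment.

```xml
<!-- Reduced bufferSize: a smaller input buffer is what eliminated the
     error for me. The specific values here are illustrative. -->
<controller
    bufferSize="1024"
    maxFileSize="250M"
    memFileSize="256K"/>
```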
Your mileage may vary.
OK, I’ll be the first to admit that I’m not all that familiar with the nuances of IBM’s server brands, despite working for Big Blue.
My professional, freelance and hobby hardware pursuits fall squarely into the x86 and x86_64 line, which until recently was branded the IBM eServer xSeries. These Intel- and AMD-based machines are now known as IBM System x and are the server-grade commodity boxes geared towards the Windows, Linux and FreeBSD operating systems.
On occasion, I deploy Web applications to AIX servers. As I understand it, these are now known as IBM System p (formerly IBM eServer pSeries). The p refers to their POWER line of microprocessors, which also powered the pre-Intel Macintoshes and continues to power video game consoles and other embedded applications.
At the far end of the server line is IBM System z (formerly IBM eServer zSeries) – the mainframes – which are apparently back in vogue. These aren’t your grandfather’s mainframes, they say. The key here is virtualization and mission-critical uptime, meaning you can run just about any OS in a VM on these closets (they ship with their own IBM-branded hardhats, I hear).
Which brings us to IBM System i (formerly – you guessed it – IBM eServer iSeries). I’m not entirely sure where this fits into the hardware families described above, but my interest has been piqued. These machines are intended for minimal administration, support DB2 as a core operating system feature, and appear to have a “fanatical” following.
In any case, it’s on my todo list to see what these machines and the accompanying OS are capable of, particularly if I can score a demo box to run in my basement. :)
Back to the topic of this blog post though… If you *do* happen to have an IBM System i machine, you might be interested in some of the following new developments:
- Zend and IBM team up to release a new version of the Zend Core for IBM on the POWER-based i5/OS.
- IBM has released a new Redbook on getting PHP up and running on System i.
Enjoy. And if you happen to have any clarifications on what System i actually is (or more user-friendly details about any of the other server lines) let me know.
There’s a great thread going on over on the Suns-at-Home mailing list about case mods to old workstations built by Sun Microsystems.
Suns-at-Home has been catering for almost 20 years to the foolish among us who couldn’t resist picking up outdated Sun hardware – known for its high-end enterprise applications – and seeing if we could apply it toward some domestic purpose, however impractical it may be.
> To summerise, what other ways can one with too much time
> pimp out their sparc?
Neon lights, flame jobs, glowing skulls, etc. are not going to fool
anyone into thinking these machines are fast.
We’re fans of slow machines, and we should embrace that by evoking the
visage of shambling zombie hordes. This is probably best accomplished
by riveting hunks of meat onto the side of the case and taking a more
organic approach to case modding. Within 2 or 3 days you should have a
pretty clear indication that the zombie motif is working. Inside of a
week the neighbors will be asking if you’re keeping a dead cat in your home.
Are we not, after all, keepers of the undead? Caregivers to the
carcasses that IT managers of old have long since given up for deceased?
It seems the 1990 season of good breaks finally caught up with the Giants. That year, everything went right for Big Blue, from a last-minute bomb from backup Jeff Hostetler against Phoenix to a missed field goal by Buffalo kicker Scott Norwood in the Super Bowl.
This year, they were finally put out of their penalty-ridden, injury-laden misery by the Eagles on a last-second kick by David Akers in the first round of the NFC playoffs. Which is for the best, I think, because my blood pressure can return to normal and I can again enjoy watching football in the coming weeks.
Without Tiki Barber, the offense will be missing one of the best backs in the NFL, but by trading Eli Manning (please!) for one or more promising offensive players, I’m sure the Giants can put together a more disciplined unit in 2007. Maybe even behind short yardage star Jared “The Hefty Lefty” Lorenzen… :)
I’ve explored a few PHP frameworks for some new application prototypes recently.
Normally when I build sites, I prefer to have full control over the codebase, but with short deadlines and the abundance of new frameworks available, I’ve found that pre-built infrastructure code for handling the plumbing common to all applications makes it easy to get new concepts up and running.
In short, it’s getting easier to leave most of the drudgery of building PHP applications to the framework, and spend more time developing the logic behind my applications.
Two of the more interesting frameworks are CakePHP and CodeIgniter. Both are modeled on Ruby on Rails and adhere to its “convention over configuration” principle, meaning they are ready to go out of the box with little initial setup and take a very pragmatic approach to Web development.
They also follow the MVC pattern, which simplifies maintenance and the separation of concerns between modules of code. This is all in addition to simplifying security, data mapping, and rich user interface development.
While I see CakePHP as the more fully featured framework, CodeIgniter seems to have it beat when it comes to the initial learning curve, so it’s what I’ve been using more often.
In any case, I look forward to using these frameworks more in the coming year (and to making good on my promise to enable CakePHP for DB2).
If you’re a PHP developer and still building applications from the ground up, you owe it to yourself to check out the many framework options now available. You can’t go wrong by starting with one or both of these two frameworks.
Peter Seebach returns with another crotchety installment of his “Cranky User” column at developerWorks. :)
A couple of my favorite points:
A shocking number of people still believe that Web pages should be designed to run only with the most common browser, because that way more people can use them. This is ridiculous; if a page works with every browser more people can use it!
Search engines go a long way toward bringing users to your site, but only if your pages are navigable by spiders. That means adding a few plain old links to the brilliant, enthralling, and genuinely challenging flash video game you use for site navigation.