Last weekend, Slashdot unearthed a debate over system architecture that was simmering between key Linux kernel developers and a creator of Sun Microsystems Inc.'s next-generation file system, ZFS.
Like all good debates, this one reaches beyond the matter at hand, in this case file system design, toward deeper issues: namely, how projects as large as an operating system should be managed.
In a nutshell, Andrew Morton, one of the chiefs behind the Linux kernel, offhandedly remarked that ZFS is a 'rampant layering violation,' meaning (we assume) that it blurs the lines between the OS, the file system and physical storage. The comment sparked the ire of ZFS developer Jeff Bonwick, who argued on his blog that ZFS is effective precisely because it collapses the space between layers, that it simplifies by eliminating needless connections. In fact, he goes a step further and charges that the Linux community's inability to collapse layers is one of the OS's greatest weaknesses.
Piling on in the debate is Ars Technica's John Siracusa, who extended Bonwick's sentiment by writing: 'Linux on the desktop, user-friendly Linux, the consumer Linux software market, Linux games: all the historic struggles in all these areas can be adequately explained solely in terms of this one failing.'
In other words, by not having one company oversee the entire OS development cycle, no one can take charge of large scale initiatives that cross boundaries.
ZFS works well (in Bonwick's mind, anyway) because it collapses several layers of storage management. 'While designing ZFS we observed that the standard layering of the storage stack induces a surprising amount of unnecessary complexity and duplicated logic,' he wrote. He found that 'refactoring' the boundaries between the file system, volume manager and RAID controller would 'make the whole thing much simpler.'
I'm not sure why that particular discussion caught my eye last weekend, but perhaps it had something to do with my own journey through the many layers of Linux, thanks to my use of a utility called the Logical Volume Manager.
LVM is a great example of what is both good and frustrating about Linux. Gentoo Technologies president Daniel Robbins goes as far as to say LVM is 'a wonderful technology.' It is indeed a mighty handy program, for a number of reasons. LVM can aggregate a number of disks so they appear as a single entity to the operating system, something that can't be done out of the box with either Linux or Microsoft Windows. Thus a collection of data too large for any one disk can still be filed under one central directory, rather than being split across multiple locations, which can be a management headache.
LVM also lets users resize partitions on the fly, a feature again not available out of the box with most OSes. If one of your logical volumes is filled to the brim, you can appropriate some space from another disk, or just shift some over from another LV.
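To give a flavor of what such a resize looks like, here is a minimal sketch; the volume group name (vg0), logical volume name (data) and sizes are hypothetical, and growing a mounted ext3 file system this way assumes a kernel and resize2fs recent enough to support online resizing.

```shell
# Grow the logical volume "data" in volume group "vg0" by 10 GB
lvextend -L +10G /dev/vg0/data

# Then grow the ext3 file system to fill the newly added space
resize2fs /dev/vg0/data
```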
Best of all, LVM is free.
OK, now for the downside. Like many low-level Linux programs, getting LVM running requires multiple steps, and I could find no one set of instructions that explains them all. Sure, LVM has a graphical user interface, but if the sorry LVM GUI on my Fedora Linux distribution was any indication, trying to manage your LVs by this route is pretty futile. And so, once again like many low-level Linux programs, configuration was best done from the command line.
Nothing wrong with that except, like I said, it involves many steps, not even counting the work you must do to install LVM in the first place, which is another tar ball altogether.
For the record, here are the steps you have to take to initialize a logical volume. First, you must initialize the disks or disk partitions you hope to use in an LV, through the pvcreate command. Even if you're using an entire disk, the LVM How-To page recommends setting up a partition table for the whole disk and then initializing that partition. That involves running another program, fdisk.
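The first step looks something like the following; the device name /dev/sdb is a hypothetical stand-in for whatever disk you are dedicating to LVM, and fdisk itself is interactive, so the partitioning happens at its prompts.

```shell
# Partition the disk: inside fdisk, create one partition spanning
# the disk and set its type to 8e (Linux LVM)
fdisk /dev/sdb

# Initialize the new partition as an LVM physical volume
pvcreate /dev/sdb1
```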
So after you initialize your physical volumes, you must then create a volume group to hold them. This involves invoking the vgcreate command, with which you name the group and indicate which physical volumes will be placed into it.
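That step is a one-liner; the group name vg0 and the two member partitions here are, again, hypothetical.

```shell
# Create a volume group named "vg0" from two initialized physical volumes
vgcreate vg0 /dev/sdb1 /dev/sdc1
```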
OK, now that you have defined your volume group, you have to activate it, which involves yet another command, vgchange, and then carve an actual logical volume out of the group with still another command, lvcreate. And if you think you're done, think again, Batman! You still have to format this volume, using mkfs. The How-To didn't mention this part of the set-up; I guess this is a step executed by an OS command rather than an LVM command.
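Strung together, that stretch of the ritual looks roughly like this; the size, the volume name "data" and the choice of ext3 are illustrative assumptions, not requirements.

```shell
# Activate the volume group
vgchange -a y vg0

# Carve a 100 GB logical volume named "data" out of the group
lvcreate -L 100G -n data vg0

# Put an ext3 file system on the new volume
mkfs -t ext3 /dev/vg0/data
```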
Those are the steps to build a logical volume; another series of steps is needed to have the OS recognize it. Now you must mount the volume (using the mount command), which involves making a directory somewhere on your system and associating it with the logical volume.
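The mounting step, continuing with the same hypothetical names:

```shell
# Make a mount point and attach the logical volume to it
mkdir -p /mnt/data
mount /dev/vg0/data /mnt/data
```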
That will ingratiate your volume into the OS, though once you reboot the computer, you need to go through all the steps of activating the LV and mounting it once again. If you get tired of doing this each time you reboot, you can write an initialization script, which involves learning, or recalling, another set of Unix rituals. Again, no word on any of this in the How-To.
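On many distributions the mounting half of that chore can be handled with a single line added to /etc/fstab, assuming the distribution's boot scripts already take care of activating LVM volumes; the names below match the hypothetical examples above.

```shell
# Line to append to /etc/fstab so the volume mounts at every boot:
# device          mount point  fs type  options   dump fsck-order
/dev/vg0/data     /mnt/data    ext3     defaults  0    2
```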
What does all this mean? Certainly without all the supporting Linux tools like mkfs, fdisk and make, the creators of LVM would have had a much harder job putting all the pieces in place to make this program work. Indeed, it may not have been possible at all. And yet this flexibility is also the curse that renders LVM all but unusable to anyone other than wise system administrators. I admit to getting flustered fairly easily by Unix, but it does leave me to wonder what advanced features are being blocked by greater sweeps of this same spiraling complexity.--Posted by Joab Jackson