Data management's misconceptions

What's more

Last book read: Darwin's Black Box: The Biochemical Challenge to Evolution by Michael J. Behe


Last movie seen: 'The Corporation'


Military service: Signal Corps, Israeli Army


Sports/leisure activities: Running, reading, travel


Worst job: Anything involving doing without thinking


Personal motto: 'If I become popular, I'll worry where I've gone wrong.'

Fabian Pascal, database gadfly

Thor Swift

On his way to an academic career, Fabian Pascal took a detour into database management systems.

The technology firebrand and database consultant was working toward a degree in social sciences at the University of Haifa in northern Israel. He disliked the available data management tools so much that he started trying to improve them.

After earning a bachelor's degree in economics, he received a master's in business administration with a concentration in information technology from Northwestern University in Chicago.

Pascal operates a Web site called Database Debunkings, holds seminars and writes about database management fundamentals. Author of three books on databases, he frequently contributes to trade magazines and Web sites, where his work sparks spirited debate.

Pascal is extremely critical of Extensible Markup Language in database administration and data exchange. A self-described contrarian, he also has bad things to say about Structured Query Language and commercial implementations of the relational database model.

He has consulted for the Census Bureau, the CIA and the IRS, as well as for leading software vendors.

GCN associate editor Joab Jackson interviewed Pascal by e-mail.

GCN: Why do you say Extensible Markup Language is being misused?

PASCAL: XML is promoted as an intersystem data exchange technology, but the problem it was actually intended to solve is HTML's lack of semantics. HTML tags express how data is presented or formatted, not what it means. XML tags purportedly express meaning.

If my system sends data to your system, we must agree upfront on what data will be sent (in other words, the meaning) and some physical format. Once we agree on what to send, there's no need to include the meaning each time data is sent, because the exchange is between systems. Human readability is not an issue.

XML tags are being repeated unnecessarily in every transmitted record when simple delimiters would do. Often XML tags actually overwhelm the data in frequency and size.
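The overhead Pascal describes is easy to quantify. Here is a minimal sketch comparing one record encoded as XML against the same record as a delimited line; the field names and values are hypothetical, chosen only for illustration:

```python
# Compare the size of one record encoded as XML vs. a simple delimited line.
# Field names and values are hypothetical, for illustration only.
record = {"first_name": "Ada", "last_name": "Lovelace", "agency": "Census"}

xml = "<record>" + "".join(
    f"<{k}>{v}</{k}>" for k, v in record.items()
) + "</record>"
csv = ",".join(record.values())

print(len(xml), len(csv))  # the tags dwarf the data itself
```

Because the two systems agreed on the field order up front, the delimited form carries the same information at a fraction of the size.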

There is a lot of talk about how XML can handle unstructured data, but that is a contradiction in terms. If it's unstructured, it's meaningless, random noise and therefore not carrying information.

GCN: Why is XML not appropriate for data management?

PASCAL: Data management requires some organizing structure, to which integrity constraints and manipulation can be applied. Constraints represent business rules; manipulation is querying and updating.

Relational database theorist Edgar Codd defined a data model as a combination of structure, integrity constraints and manipulation. You can't manage data without a data model. The question is, what is the best structure for integrity and manipulation in general-purpose data management? XML fares quite poorly on that score.

XML nestability defines a structure, but not integrity and manipulation. The XML structure is hierarchic. We discarded hierarchic database management decades ago because it was not cost-effective for various reasons. Why would we want to trade down?

The relational model is nothing but logic and mathematics applied to database management. A DBMS is a deductive logic system.

Each time a new fad emerges in the industry, such as multivalued, object or multidimensional DBMS, I ask proponents what they substituted for logic. I have yet to get an answer.

Vendors and users cannot complicate the model; they complicate their products and practices by failing to adhere to it. Any relational principle or feature that is missing, violated or incorrectly implemented causes difficulties in practice.

GCN: What about SQL?

PASCAL: Whatever functions are missing from or poorly implemented in a DBMS, users must undertake in applications. That defeats the purpose of database management, and customers lose the practical benefits conferred by the model. There are many such flaws in SQL and products based on it.

For instance, views, or virtual tables, are the relational mechanism for logical data independence. They insulate applications from changes in the logical structure of a database. If applications access data through views, rather than directly in stored tables, then whenever the tables change, the views reflect the change and the applications don't have to be modified. That is a huge saving in development and maintenance.

But in SQL products, multitable views cannot be updated. So updates must be applied directly to base tables, defeating logical data independence.

This deficiency originates in part from SQL's allowance of duplicates, which are prohibited by the relational model. The possible presence of duplicates means an SQL DBMS cannot ensure that view updates are propagated correctly to the underlying tables. To avoid corruption, SQL products prohibit the update of such views altogether.
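The restriction is easy to see in practice. SQLite, reached here through Python's standard sqlite3 module, goes even further than the multitable case Pascal mentions: all of its views are read-only unless INSTEAD OF triggers are added. Table and column names below are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")
con.execute("INSERT INTO emp VALUES (1, 'Ada', 'IT')")

# Applications read through the view rather than the base table.
con.execute("CREATE VIEW it_emp AS SELECT id, name FROM emp WHERE dept = 'IT'")
rows = con.execute("SELECT name FROM it_emp").fetchall()
print(rows)  # [('Ada',)]

# A direct update through the view is rejected.
error = None
try:
    con.execute("UPDATE it_emp SET name = 'Grace' WHERE id = 1")
except sqlite3.OperationalError as exc:
    error = exc
print("update failed:", error)
```

Reads through the view succeed, but the update must instead be applied to the base table, which is exactly the loss of logical data independence Pascal is describing.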

Another example is SQL's use of nulls for missing data. The relational model is based on the real world's two-valued, true-false logic. To guarantee correctness, the logic underlying the relational model requires us to record in databases only facts known to be true. Nulls violate that requirement. They are essentially an attempt to mix up real-world data with our knowledge of it, substituting a three-valued logic. So DBMSes can produce results that are hard to understand, prone to misinterpretation or incorrect.
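The surprises of three-valued logic are concrete. In the sketch below (again with hypothetical data, using Python's sqlite3), a row with a null salary appears in neither the "greater than" result nor its complement, because both comparisons evaluate to UNKNOWN rather than true or false:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pay (name TEXT, salary INTEGER)")
con.executemany("INSERT INTO pay VALUES (?, ?)",
                [("Ada", 120), ("Bob", None)])

# Three-valued logic: Bob appears in neither result set, because
# NULL > 100 and NULL <= 100 both evaluate to UNKNOWN, not true.
high = con.execute("SELECT name FROM pay WHERE salary > 100").fetchall()
low = con.execute("SELECT name FROM pay WHERE salary <= 100").fetchall()
print(high, low)  # [('Ada',)] []
```

A user who expects the two queries to partition the table will silently lose the row with the null, which is the kind of misinterpretation Pascal warns about.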

GCN: What did you do for Census and the IRS?

PASCAL: At Census, I taught database and relational fundamentals and assessed SQL within that framework. At the IRS, I consulted on whether certain data management technology projects qualified as R&D expenses.

I am in the education business, as distinct from training. A lot of it is re-education: deprogramming practitioners misled by the industry.

Foundational knowledge is practically impossible to acquire now. Seminars, conferences, books and trade media are interested only in product-specific information. Even academia is becoming a certification platform for vendors.

GCN: What fundamentals should database managers learn?

PASCAL: Not just managers, but anybody involved with databases: administrators, application developers, even some end users. They should understand basics such as what is a database and a DBMS, what are database functions versus application functions, data independence, the several types of models (data, business, logical and physical) and the relational model.

If you ask practitioners about these things, even some with 10- or 15-year careers, they often don't know. They say, for example, that Oracle or DB2 is a database. Of course, both are database management systems. But these practitioners know only products. Would you want your doctor to know only how to use a stethoscope, but not know any anatomy or biology?

GCN: What misconceptions about relational applications limit developers from fully following the relational model?

PASCAL: Well, in fact, I detect a misconception in your question. Relational is a property of databases and DBMS [software], not applications.

The industry has the notion that normalized databases perform poorly. It's based on confusion of the logical and physical levels of representation. In fact, normalization is a purely logical construct, whereas performance is determined entirely at the physical level, so it is not possible for the former to affect the latter.

To the extent that normalized databases perform poorly in some, though not all, instances (and assuming no other factors), it's the fault of the physical implementation of SQL products. This may force users to denormalize for performance, which introduces redundancy and makes databases prone to corruption.

To keep redundant data consistent, additional integrity constraints must be defined by users and enforced by the DBMS for each denormalized table. That is a prohibitive burden.
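The corruption Pascal warns about can be sketched in a few lines. In this hypothetical denormalized table (built with Python's sqlite3), a department's location is repeated in every employee row, so an update that touches only one row leaves the database contradicting itself:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Denormalized: the department's location is repeated in every employee row.
con.execute("CREATE TABLE emp (name TEXT, dept TEXT, dept_loc TEXT)")
con.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                [("Ada", "IT", "HQ"), ("Bob", "IT", "HQ")])

# Updating the location in only one row silently corrupts the data:
con.execute("UPDATE emp SET dept_loc = 'Annex' WHERE name = 'Ada'")
locs = con.execute(
    "SELECT DISTINCT dept_loc FROM emp WHERE dept = 'IT'").fetchall()
print(locs)  # two 'locations' for one department: an update anomaly
```

Preventing this requires exactly the extra cross-row constraint Pascal describes; in a normalized design, the location would live in one row of a separate department table and the anomaly could not arise.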

Any performance gains come not from denormalization, but from the failure to implement those constraints. If and when they are added, performance drops back to the normalized level.

In other words, it's product deficiencies that force users to trade integrity for performance, yet they erroneously blame normalization.


Reader Comments

Thu, Dec 5, 2013 Dan

Related tables share a common index, the primary/foreign key, which goes a long way toward good performance with a good query optimizer. Bad performance usually results from a full table scan, which, when it happens, indicates bad design. Design 80%, implement 20%.
