Quite a few people still labor under the delusion that "XML Namespaces" are some kind of sound technical solution to a real problem. After all, a documented specification exists from a reputable organization, so it's natural to take it for granted that there was fire behind the smoke.
But neither part of this easy supposition is true. XML Namespaces don't solve the problem it's claimed they do, and the problem itself is mainly an illusion. Far from being a technical innovation, XML Namespaces are actually the result of political legerdemain. It all started with a syntactic device - so called "qualified names" - finding favor in some influential quarters. There was then born a need to retrofit the device with a plausible justification, and stamp it with an imprimatur.
This lack of sound technical basis is reflected in the vagueness of the XML Namespaces specification regarding the problem that is being solved. The confusion is very deep-seated, and manifests a profound failure to grasp the point of generalized markup (in XML and SGML).
Before diving briskly into technical details, the specification document devotes a short section to "motivation", to suggest the thinking that might have gone into the invention of colonified names.
We envision applications of Extensible Markup Language (XML) where a single XML document may contain elements and attributes (here referred to as a "markup vocabulary") that are defined for and used by multiple software modules. One motivation for this is modularity; if such a markup vocabulary exists which is well-understood and for which there is useful software available, it is better to re-use this markup rather than re-invent it.
This is muddled. The second sentence invokes a portentous buzzword, "modularity", resonant with Good Vibrations, but it doesn't rescue the banality of the first. Coordinating "multiple software modules" to process a "single document" is by no means uncommon or unusual.
The real intent of the first paragraph shows up early in the second:
Such documents, containing multiple markup vocabularies, pose problems of recognition and collision.
In other words, the authors have envisioned XML applications where a single document has constructs drawn from more than one "vocabulary", where the novelty is in the contrast with documents hitherto having consisted of markup from a single vocabulary only (in this usage of the term "vocabulary").
Thus, the critical issue would seem to be: how may constructs drawn from multiple vocabularies be used together in a single document, such that the provenance of each construct could be recovered unambiguously - in order, perhaps, to apply the appropriate software module?
But rather than state the problem in so many words, the authors try to characterize it in terms of the behavior of those motivating modular software modules:
Software modules need to be able to recognize the tags and attributes which they are designed to process, even in the face of "collisions" occurring when markup intended for some other software package uses the same element type or attribute name.
Such a scenario would tend to belie any organization. It envisions each of many software modules rummaging through a document to find its own applicable input. In a rationally coordinated system, the problem of discriminating input - determining "what goes where" - could be handled separately, so that ideally no participating software module need be given input that it was not designed to process. A separate coordinating layer would be responsible for presenting input to any software module in the expected form, regardless of the original forms in a document.
For the authors' scenario to make any sense, it has to be assumed that the modular software modules are indeed themselves designed to ferret their own material out of documents. That is, the discrimination procedure solving the problem of "recognition and collision" will have to be implemented - perhaps redundantly - in each of these "modular" software modules. In that case, the behavior of these modules becomes irrelevant, as each of them will embody essentially the same set of rules constituting the discrimination procedure, differing only in particulars. In other words, the problem is still only the definition of an effective discrimination procedure, as any applicable software module can be assumed to have implemented it correctly.
But the authors seem to have set their sights elsewhere. Rather than concede that their scenario requires software modules to implement a generic and correct discrimination procedure, instead they would specify in advance what these modules will in fact do - which in the worst case could make an effective discrimination procedure impossible.
In particular, the authors postulate that software modules will try to "recognize tags and attributes" more or less directly. Without offering an analysis of the essential requirements of a correct discrimination procedure, the authors hasten to suggest that the procedure involves inspection of "tags and attributes" - by which is meant, syntactically visible names such as generic identifiers and names of attributes - in isolation.
This is a whopping non sequitur.
However, the leap of logic is necessary in order to draw a "conclusion":
These considerations require that document constructs should have universal names, whose scope extends beyond their containing document.
This "requirement" apparently describes a sufficient condition: if the constructs indeed have "universal names" - by which is meant, unique names - then their inspection in isolation may suffice to identify their provenance.
But, of course, none of this is in any way necessary, and the postulate that generic identifiers and attributes must be examined in isolation is absurd. Why preclude the possibility of examining the entirety of the information in a start-tag before concluding anything about what may have originated where or what may be destined whence - or, for that matter, how the relevant portion of the document should be processed?
Except perhaps for the brave new world of XML Namespaces, no software module in existence need be programmed to do something quite as silly.
A seriously fallacious premise underlying the leap of logic in the XML Namespaces specification is that names from multiple vocabularies must all be syntactically visible as generic identifiers and attribute names. This unnecessary presumption makes it difficult for constructs from different vocabularies to attach to the same value simultaneously.
The W3C's TAG has decreed, for example, that the SRC attribute of HTML must yield to the HREF attribute of XLINK, because, sadly, an attribute value can have only one name. Even if some people noticed that this deontic fiat strikes at a first principle of the SGML/XML formalism - that document type designers are free to choose their own names - no one spoke up.
A similar problem with structural content - when the same aggregate is to be tagged with generic identifiers from more than one vocabulary - is "solved" by introducing extra element structure into the document: an element with one of the generic identifiers is given an element with the other as its sole content, and the "real content" which instigated this rigamarole is made the content of the inner element of the two. This, of course, is the lamentable consequence of a start-tag having room for only one generic identifier. (A rule - useful if not needed - that whitespace is ignored between elements with generic identifiers from different "namespaces" hasn't been formulated officially, though.) The vagaries of innocent pretty printing aside, such factitious extra element structure is not without its own definitional difficulties - which naturally the oracles of XML Namespaces haven't pronounced on.
The problem is that of indeterminate opacity. When an element from one namespace "contains" an element from another namespace, is the content of the latter still part of the effective content of the former? That is, the tag of the inner element may be presumed transparent (in the sense of the generic identifier being "unknown" to the namespace of the outer one), but what of its content?
Sometimes opacity is desirable, and at other times, transparency is. For instance, it has been claimed in some quarters that namespaces can be used to partition off parts of a document "not intended for another namespace". This is clearly an opacity prescription. But when generic identifiers from more than one namespace are intended to apply to the same effective content, this is a transparency prescription. There is no mechanism defined in XML Namespaces to distinguish these cases.
All that XML Namespaces really offers is an invitation to DWIM, with ad hoc criteria all the way. In order to understand not only how dangerous this is, but also how unnecessary, the real requirements of a proper discrimination procedure have to be analysed.
The possibly surprising truth of generalized markup is that no names from externally defined vocabularies need be syntactically visible. Not a single one. Not even in the case of only one such vocabulary being applicable to the document.
This is because SGML and XML support attributes. A principal use of attributes is association, where the name denotes a semantic connection, and the value instantiates it in terms of a referent. Any information of externally predetermined form can thus be made relevant to a document (or part of it) by carrying it - or a representation of it - in the value of an attribute, where the name of the attribute is dedicated to the semantic connection that the relevance manifests. The associative power of the attribute mechanism extends to using attributes to qualify the interpretation of other attributes, where the name would denote the qualifying semantic, and the value would consist mainly of names of the attributes affected. This is the key to how document type designers have the essential freedom to choose their own names.
The native markup in a document comprises its own vocabulary, which in principle could be as idiosyncratic as the author wished. But when it comes to publishing the document for the benefit of others, it's more convenient if not advisable to use an already accepted vocabulary rather than one home-grown. The systemic problem is one of association - between the vocabulary an author might want to use (as the document is, after all, his own creation) and one that is understood by others. The attribute mechanism suggests the answer: define an attribute to denote the semantic "known as, in such-and-such vocabulary", and have the value exhibit the name of the construct. Thus (with minimized endtags to avoid clutter):
<A foo="html">
  <B foo="head">
    <C foo="title">Demo</></>
  <D foo="body">
    <E foo="h1">Hello World!</></></>
This is a point-by-point mapping of structures with arbitrary names to a hopefully well-known vocabulary, by means of a dedicated attribute, here named foo.

The problem in this case reduces to publicising just this fact: that the foo attribute carries the map to names that an HTML processor would be prepared to recognize. In other words, the author only needs a means to declare that referents in a particular vocabulary are to be found in the values of a particular "gi-mapper" attribute. For concreteness, consider a processing instruction at the top of the document:
<?xmldoc vocab="something that identifies the HTML vocabulary" gi-mapper="foo" ?>
This is a completely generic mechanism that addresses two systemic needs: identifying the vocabulary that is meant to apply to the document, and identifying the attribute whose values carry the map into that vocabulary's names.
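The derivation this enables can be sketched in a few lines. The following is an illustration only, not a definitive implementation: it assumes full end-tags for XML well-formedness, and takes the gi-mapper name as a parameter, as the <?xmldoc?> pseudo-declaration would supply it (mapped_name is a hypothetical helper).

```python
import xml.etree.ElementTree as ET

# The example document, written with full end-tags for well-formedness.
DOC = """<A foo="html">
  <B foo="head"><C foo="title">Demo</C></B>
  <D foo="body"><E foo="h1">Hello World!</E></D>
</A>"""

def mapped_name(elem, gi_mapper):
    """Return the vocabulary name the author mapped to this element, if any."""
    return elem.attrib.get(gi_mapper)

root = ET.fromstring(DOC)
for elem in root.iter():
    print(elem.tag, "->", mapped_name(elem, "foo"))
# A -> html, B -> head, C -> title, D -> body, E -> h1
```

Nothing here depends on what the generic identifiers A through E happen to be; only the declared gi-mapper name matters.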
Note that the syntactically visible names in the document (A, B, C, D, E and foo) are all arbitrary, but this doesn't in any way affect the ability to identify this document as one of the HTML "type" (taken in the sense of "exhibiting the HTML vocabulary by authorial intent"), as long as the semantics of the pseudo-declaration are understood. And, perhaps more importantly, if the document had looked like this:
<html foo="html">
  <head foo="head">
    <title foo="title">Demo</></>
  <body foo="body">
    <h1 foo="h1">Hello World!</></></>
then what would make this a document of the HTML type is still the foo attribute (via <?xmldoc?>) and not the apparently familiar generic identifiers!
Of course, in this case, the foo attribute is redundant in the sense that it carries an identity mapping of generic identifiers. It could be left out of the markup if the pseudo-declaration said, in effect, "the generic identifier position is the gi-mapper" - in other words, that a default understanding applies. Thus,
<?xmldoc vocab="something that identifies the HTML vocabulary" ?>
simply the pseudo-declaration itself, with no qualifying information, serves to assert that a default understanding with respect to a particular vocabulary applies, regarding the intent of the generic identifiers in the document.
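The defaulting rule can be sketched as a fallback in the name-recovery procedure. This is a hypothetical illustration under the conventions of the examples above (effective_name is an assumed helper, not part of any specification):

```python
import xml.etree.ElementTree as ET

def effective_name(elem, gi_mapper=None):
    """Recover the vocabulary name for an element.

    With no gi-mapper declared, the default understanding applies:
    the generic identifier position itself is taken as the gi-mapper.
    """
    if gi_mapper is not None and gi_mapper in elem.attrib:
        return elem.attrib[gi_mapper]
    return elem.tag

print(effective_name(ET.fromstring('<h1>Hello World!</h1>')))                # h1, by default
print(effective_name(ET.fromstring('<E foo="h1">Hello World!</E>'), "foo"))  # h1, via the mapper
```

Either way the recovered name is the same; the generic identifier itself carries no external authority.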
The point of this exercise is that even when only a single external vocabulary applies to a document, to use its names in syntactically visible positions is necessarily to appeal to an identity mapping by default, and still requires the relevant vocabulary (as a source of names) to be identified.
Which is to say: it's inherent in generalized markup that syntactically visible names are always expressions of authorial choice. They are never constrained by external vocabularies.
The natural development of applying a single external vocabulary to the element structure of a document is the case where this vocabulary is not intended to apply comprehensively. That is, the author intends that parts of the document should be opaque with respect to this vocabulary. Consider the following example:
<A foo="html">
  <B foo="head">
    <C foo="title">Demo 2</></>
  <D foo="body">
    <E>An extended list</>
    <F foo="ul">
      <G>Some stuff</>
      <G foo="li">First list item</>
      <G>Some more stuff</>
      <G foo="li">Second list item</></></></>
The author's intent with respect to the HTML vocabulary is this partition of the document:
<html>
  <head>
    <title>Demo 2</></>
  <body>
    <ul>
      <li>First list item</>
      <li>Second list item</></></></>
With the values of the foo attribute as guides, this partition can be derived from the original, provided there were also a way to know what to leave out. The fact that the foo attribute wasn't asserted for the <E> element and two of the <G> elements doesn't by itself imply that the contents of these elements should also be suppressed.
This, of course, is the opacity problem, an inherent aspect of a partial match between a document and an external vocabulary. In the general case the mapping formalism (which so far has introduced only a means to map generic identifiers) must also provide information on the handling of the content of an "unrecognized" element.
The first option is to treat the element as completely opaque, calling for the content to be ignored. The alternative is transparency, with two suboptions. First, complete transparency, where all content is subsumed into the context of the "parent" element, and second, partial transparency, where all immediate text content is ignored.
Treatment of subelements need not be specified, as the same options can be applied to them separately. In fact, such recursive application of the same considerations shows that the opacity problem really boils down to the treatment of immediate text content only. Complete opacity can be achieved by not mapping generic identifiers for any of the elements in a subtree and at each point calling for all immediate text content to be ignored. Thus, with the pseudo-declaration having another setting to identify the attribute that controls the inclusion of immediate text:
<?xmldoc vocab="something that identifies the HTML vocabulary" gi-mapper="foo" text-controller="bar" ?>
<A foo="html">
  <B foo="head">
    <C foo="title">Demo 2</></>
  <D foo="body">
    <E bar="no">An extended list</>
    <F foo="ul">
      <G bar="no">Some stuff</>
      <G foo="li">First list item</>
      <G bar="no">Some more stuff</>
      <G foo="li">Second list item</></></></>
The intended partition can be determined by a generic mechanism, of a form which can be integrated into a parser if needed.
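Such a mechanism is easy to sketch. The following is an illustration only, under the same assumptions as before (full end-tags; the control-attribute names foo and bar passed in as the pseudo-declaration would identify them; partition is a hypothetical helper):

```python
import xml.etree.ElementTree as ET

DOC = """<A foo="html">
  <B foo="head"><C foo="title">Demo 2</C></B>
  <D foo="body">
    <E bar="no">An extended list</E>
    <F foo="ul">
      <G bar="no">Some stuff</G>
      <G foo="li">First list item</G>
      <G bar="no">Some more stuff</G>
      <G foo="li">Second list item</G>
    </F>
  </D>
</A>"""

def partition(elem, gi_mapper="foo", text_controller="bar"):
    """Derive the partition of a subtree with respect to the mapped vocabulary."""
    name = elem.attrib.get(gi_mapper)
    # Immediate text content is included unless the text-controller says "no".
    text = "" if elem.attrib.get(text_controller) == "no" else (elem.text or "").strip()
    inner = "".join(partition(child, gi_mapper, text_controller) for child in elem)
    if name is None:
        # Unmapped element: its tag is invisible to the vocabulary; its
        # non-suppressed content is subsumed into the parent's context.
        return text + inner
    return "<%s>%s%s</%s>" % (name, text, inner, name)

print(partition(ET.fromstring(DOC)))
# <html><head><title>Demo 2</title></head><body><ul><li>First list item</li><li>Second list item</li></ul></body></html>
```

The recursion makes the point of the preceding paragraph concrete: the only per-element decisions are the name mapping and the disposition of immediate text.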
The remaining consideration in the application of a vocabulary to a document is the disposition of attribute specifications. Essentially, the match between attribute names from the external vocabulary and the actual (arbitrary) attribute names in a start-tag is an association list (or "lookup table") of paired names, which can be made the value of an attribute dedicated to this purpose. Thus, another aspect of the mapping formalism, besides locating generic identifiers and controlling the inclusion of text, is "relocating" attribute values from one name to another. Thus, for example, this:
<?xmldoc vocab="something that identifies the HTML vocabulary" gi-mapper="foo" text-controller="bar" att-renamer="blort" ?>
<A foo="html">
  <B foo="head">
    <C foo="title">Demo 2</></>
  <D foo="body">
    <E foo="h1" quux="center" blort="align quux">A<H bar="no">n extended</> list</>
    <F foo="ul">
      <G bar="no">Some stuff</>
      <G foo="li">First list item</>
      <G bar="no">Some more stuff</>
      <G foo="li">Second list item</></></></>
should imply this partition for the HTML vocabulary:
<html>
  <head>
    <title>Demo 2</></>
  <body>
    <h1 align="center">A list</>
    <ul>
      <li>First list item</>
      <li>Second list item</></></></>
This follows inasmuch as the attribute specification blort="align quux" says, in effect, "take the value of the align attribute from the value of the quux attribute".
There is, of course, a further complication in that enumerated tokens may need to be similarly translated. For instance, in the example above, the attribute specification align="center" may have had to be derived from something like quux="flurble". Yet another attribute could be defined to hold the relevant association list of translations, although in this particular case, the complete disjunction of the relevant names could also be handled by simply adding align="center" to the start tag and specifying an identity mapping blort="align align" to ensure that the attribute specification is picked up for the partition.
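The relocation step can likewise be sketched as a small procedure. This is an illustration under the same assumptions, with blort as the hypothetical att-renamer and relocate as an assumed helper:

```python
import xml.etree.ElementTree as ET

def relocate(elem, att_renamer="blort"):
    """Apply the att-renamer association list.

    The value is read as pairs of names: a target name in the external
    vocabulary, then the source name of an attribute in the start-tag.
    """
    tokens = elem.attrib.get(att_renamer, "").split()
    return {tokens[i]: elem.attrib[tokens[i + 1]]
            for i in range(0, len(tokens) - 1, 2)
            if tokens[i + 1] in elem.attrib}

e = ET.fromstring('<E foo="h1" quux="center" blort="align quux">A list</E>')
print(relocate(e))  # {'align': 'center'}
```

An identity mapping such as blort="align align" falls out of the same pair-reading rule with no special casing.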
The foregoing analysis, of how an external vocabulary can be mapped to any partition or the whole of a document, was conducted in a framework where the native markup consisted entirely of arbitrary names. That is, it shouldn't and doesn't matter to a processor of a vocabulary what these syntactically visible names are, or even what they could "mean", as long as its own significant names can be recovered through the settings of a small set of dedicated "control attributes" (such as blort in the examples above). The names of these control attributes were also in principle arbitrary, except that they were made known through a declaration (<?xmldoc?>) whose express purpose was to identify them.
Since the names in the original native markup were arbitrary, they could just as well have been names known to some other vocabulary! And this could be either by default or through the use of the same technique to map another external vocabulary. That is, using control attributes to map a vocabulary can be extended to the case of multiple vocabularies uniformly, without any danger of interference as long as the relevant sets of control attributes have distinct names. This is always possible inasmuch as the native markup is necessarily unconstrained.
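The uniform extension can be illustrated with two hypothetical gi-mappers, foo and baz (both arbitrary names), mapping the same elements into two vocabularies at once:

```python
import xml.etree.ElementTree as ET

# Two vocabularies mapped simultaneously through distinct control attributes;
# nothing collides, because the "collisions" sit in attribute values.
DOC = """<A foo="html" baz="doc">
  <B foo="body" baz="section">
    <C foo="h1" baz="heading">Title</C>
  </B>
</A>"""

root = ET.fromstring(DOC)
for e in root.iter():
    print(e.tag, e.attrib.get("foo"), e.attrib.get("baz"))
# A html doc
# B body section
# C h1 heading
```

Each vocabulary's processor consults only its own control attributes, so the same element carries a name in both vocabularies simultaneously, which is exactly what one generic identifier position cannot do.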
To repeat, a document may be partitioned element by element with respect to a particular vocabulary by providing control attributes at each point to determine the following aspects:

- the name, in the vocabulary, that the element's generic identifier maps to;
- whether the element's immediate text content is included or ignored;
- how attribute values are relocated from native names to names in the vocabulary.
These control attributes can be identified through a declaration which happens to be necessary in any case to associate any vocabulary, even a completely defaulted one, with a document. To meet these requirements, the analysis above had a concrete scheme for the purpose of exposition, but equivalent schemes are possible, perhaps better designed for economy of markup. (For instance, "scoping" control attribute specifications by element boundaries would be a useful form of minimization at the cost of increasing the processing load by having to propagate "inherited" values. Defaulting to effective identity mappings could be indicated differently. And so on.) However, the formalism is uniform and general. Using it, multiple external vocabularies can apply simultaneously in a document, with partitions and overlaps completely under the control of the author.
The problem of "recognition and collision" is an utter and complete illusion. It suffices to place the "collisions" in the values of recognizable attributes, where they aren't syntactically visible. It is as simple as that.
Interestingly enough, the basic requirements of generic discrimination procedures and the feasibility of implementation based on the processing of control attributes were already known even before the XML Namespaces bandwagon got under way. The problem, which the published specification portentously suggests is new and perhaps unprecedented, was not just old, it had already been anticipated and solved.
The "Motivation" section of the specification is actually a red herring only. It's a remnant of early drafts where vocabulary combination was trial-ballooned as a problem to be solved by the magnificent device of qualified names. This problem proving to be a non-problem, never mind that qualified names were a non-solution anyway, was disconcerting. It became "necessary" to propound a different purpose.
Procedurally - by which is really meant, politically - this took the form of a non-negotiable "requirement" imposed on the XML Activity by various other Activities in the W3C who were anxious to partake of the benefits of XML. For reasons neither explained nor open to scrutiny, it was ordained that generic identifiers and attribute names henceforth shall have immutably associated URIs, to fix their unique individual provenances in some ineffable, immanent noumenal dimension. It didn't matter whether this solved a problem or not; this just simply had to be.
Consequently, any investigations of purpose (and - horror of horrors - counterproposals) became moot, therefore procedurally irrelevant, and therefore blithely dismissible. It only remained to fix the syntactic details. Nice and technical. Who said anything about politics? No such thing!
If the Platonist conceit of "universal names" could serve to justify colonified names as syntax, it certainly "worked". That is, even if there were an ounce of sense in the concoction of "universal names", the use of multipart names in syntax wasn't necessarily a consequence, but those who might have objected simply got tired of the whole exercise. After all, the Powers That Be wanted colonified names, so saying no really wasn't an option, and there's only so much insensate babbling that one can stomach.
Universal names have a fascinating cosmology, thanks in no small part to the labyrinthine metaphysics of URIs. First and last, they simply are. It doesn't matter what they could denote, never mind mean, they just are. And, of course, they can be invented on the spot. The mere act of creation secures an immutable niche in a limitless expanse, where yea verily, everything is something unique and what you make of it is, well, what you make of it.
This, apparently, is Important For Markup.
How? No one really seems able to say. Now and then, the old "software module" chestnut crops up. It's argued that such modules will "know" immediately what to do when they see a universal name they recognize, and so, "false positive" actions can be obviated. The problem with this comforting thought is that software modules don't really know: they are merely programmed and predictable. As long as the names a software module is prepared to act upon are known, it suffices to present the module with only these names in order to get the correct outcomes.
Things could get interesting here. What if the names a software module will recognize are not known? Prudence would dictate not using the module at all, but if universally unique names were the order of the day, it becomes possible to throw arbitrary input at the module - or let the module loose on arbitrary data - and trust it to do the right thing. Up pops a dialog box:
This document cannot be processed without a download of software from BigCompany.com. [Continue] [Cancel]
On other days, there is no dialog box and wondrous things happen "automagically", because, with universal names, they are supposed to.
[To be continued]