Anyone can put their ideas on the Internet, but not everyone does it well. That’s because effective publishing must negotiate between conflicting strategies. On one hand, there’s the desire to push the most information to the most people in the most formats in the most languages. That’s a nice goal; on the other hand, can you provide the right information in the right format in the right language to the right person at the right time? Content that is personalized, easily found, appropriately scoped, and pleasant to interact with has a name: Adaptive Content.
For text-based content, at least, Adaptive Content is content that can be broken down into chunks small enough to recombine freely, with each chunk described by metadata that enables query-based reuse on any device. The definition applies to rich-media content as well, but text content is generally easier to chunk and retrieve in this fashion.
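For illustration only, and with invented element names, here is what such a chunk might look like. A query on the metadata (say, on audience and product) can then retrieve exactly this chunk for any device:

```xml
<!-- A hypothetical content chunk: small, self-contained, and described
     by metadata rather than by its position in any page layout. -->
<chunk id="reset-password" audience="end-user" product="webmail" type="procedure">
  <title>Resetting your password</title>
  <body>
    <p>Choose Forgot password on the sign-in screen, then follow
       the link sent to your recovery address.</p>
  </body>
</chunk>
```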
Ideally, content should adapt to the reader as well as to the device. This goal requires a more systematic approach to how we create and publish content. There is no magic filter that you can drop into an existing process that will convert dreck into dreams. Whether published on paper or on the Web, content with haphazard pedigree is always affected by inertia; how it was created dominates how easily it can be used later, if ever.
Developing new content under the principles of a clean architecture puts a different spin on things: cleanly designed and structured content has the momentum to carry it into new opportunities.
Karen McGrane, one of the most respected voices in the Web content community, defines Adaptive Content as being:
- Reusable on multiple platforms and in multiple formats
- Structured so as to enable selective reuse in new scenarios
- Free of presentational artifacts
- Enriched with semantic identification and descriptive properties
- Designed to encourage API-based querying of parts of that structure based on those properties
I find it interesting how these points align with the principles of the Extensible Markup Language (XML). XML is a content structuring methodology based on separating presentation from content, on the deliberate use of semantic names and properties for parts of the structure, and on processing tools, embedded in all popular Web server stacks, for API-based querying and manipulation of those structures. When designed as part of an overall Web content architecture, XML principles can lend robustness to the parts of the system that need rules-based, repeatable, reusable processing, particularly for narrative content that is more document-like than data-like.
Why DITA?
And this reasoning leads back to the question, “Why DITA, especially for the Web?”
If the birth of the Internet was a kind of Big Bang, starting from a singularity at CERN in 1994, then the commercial Internet experienced rapid inflation, with browsers and versions of HTML coming and going like early stars, adding their newly generated elements into the mix that eventually settled into the structures and physics we now use every day. XML arrived early in that history, followed by an explosion of bespoke XML vocabularies, each describing a particular data model and its community of use.
The Darwin Information Typing Architecture, commonly just called by its acronym DITA, was designed to take advantage of the page-like nature of content in this new publishing universe. Hence DITA has as its atoms topics (stand-alone chunks of structured content) and maps (a way of organizing particular selections of topics as related items for many roles).
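For readers new to DITA, here is a minimal base topic (simplified slightly; the usual DOCTYPE declaration is omitted). Note how HTML-like the inner markup already is:

```xml
<!-- A minimal base DITA topic: a stand-alone chunk with an id, a title,
     a short description, and HTML-like body markup. -->
<topic id="choosing-a-theme">
  <title>Choosing a theme</title>
  <shortdesc>Pick a theme that matches your site's purpose.</shortdesc>
  <body>
    <p>A good theme balances readability with personality.</p>
    <ul>
      <li>Start from your content, not from the decoration.</li>
      <li>Test the theme on phones as well as desktops.</li>
    </ul>
  </body>
</topic>
```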
DITA also introduced the innovation of “design specialization,” which means that base elements can be extended to represent more specific vocabularies, and base processing can be extended to support more specific processing for those designs. The standard has many other features now, some of which are still germane to the goal of supporting Adaptive Content on the Web.
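The mechanism behind specialization is disarmingly simple: every element carries a class attribute, normally defaulted from the grammar so authors never type it, that records the element’s ancestry, and a processor that knows nothing about a specialized element can always fall back to its base. For example, the programming domain’s apiname element declares itself a kind of keyword:

```xml
<!-- Specialization ancestry is recorded in the class attribute. A processor
     that has never heard of apiname can still treat it as a topic/keyword. -->
<p>Call <apiname class="+ topic/keyword pr-d/apiname ">getUser</apiname>
   before rendering the profile.</p>
```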
Although DITA has grown to become a complex standard, these core principles still work the same, and a base DITA topic is conceptually a natural candidate for expressing Web-friendly content within XML content processing frameworks. And these frameworks, as I mentioned before, exist on nearly every Web server and delivery stack (the languages and services that drive Web applications).
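To make that concrete, here is a minimal XSLT sketch of the kind of rules-based, repeatable processing I mean: it turns a simple DITA-like topic (such as the one shown above) into an HTML5 article. The element mappings are my illustration, not a production pipeline:

```xml
<!-- A minimal XSLT 1.0 sketch, the version built into every major browser
     and server stack, mapping a base DITA topic to an HTML5 article. -->
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html" indent="yes"/>

  <!-- A topic becomes a self-contained HTML5 article -->
  <xsl:template match="topic">
    <article id="{@id}">
      <h1><xsl:value-of select="title"/></h1>
      <xsl:apply-templates select="shortdesc | body"/>
    </article>
  </xsl:template>

  <!-- The short description becomes a lede paragraph -->
  <xsl:template match="shortdesc">
    <p class="shortdesc"><xsl:apply-templates/></p>
  </xsl:template>

  <xsl:template match="body">
    <xsl:apply-templates/>
  </xsl:template>

  <!-- Names like p, ul, ol, and li pass through unchanged -->
  <xsl:template match="p | ul | ol | li">
    <xsl:element name="{name()}">
      <xsl:apply-templates/>
    </xsl:element>
  </xsl:template>
</xsl:stylesheet>
```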
Just DITA?
So what qualifies DITA more than any other XML standard for direct use in at least some Web applications? I have a biased opinion, just to be clear: I led the IBM workgroup that came up with the initial design in late 1999, and I helped organize IBM’s 2004 contribution of the design to the OASIS open standards consortium, where I continue to oversee the design activity of the standard with the help of now-Chair Kristen Eberlein. With that disclaimer out of the way, I’ll offer a condensed version of both the gems and flaws in the use of the DITA flavor of XML for direct-to-Web content delivery.[1]
Among its high points for alignment with direct-to-Web content delivery solutions, DITA provides:
- Close affinity to Web page writing conventions and length
- Intentional similarity of inner content markup names (p, ul, ol, dl, etc.)
- A close match in its title, short description, and body structure to the way most Web CMS tools manage their content
- Maps that work so very well for representing collections of content
On the minus side, HTML content models have evolved well past the internal models that DITA assimilated in 1999, which causes these limitations:
- Web authors often organize content in patterns that DITA’s content model won’t allow. You can’t always author in DITA “as if it were HTML.”
- HTML5 has added elements for which there are no equivalent base forms in DITA. Normally, domain specialization in DITA can help rectify this mismatch, but because HTML5 is a “Living Standard” and can add or drop elements as it evolves, an ongoing tension remains between the two formats.
- Entering values for DITA’s various metadata structures is perhaps harder than it should be for light editing environments.
Some of the difficulty in providing better structured content for the Web lies with the Web architecture itself:
- In-browser editors provide only limited markup choices for rich-text regions. Many semantic phrase types in HTML are identical to already-defined elements in DITA, but in-browser support for them is universally absent (kbd, var, cite, code, and even dl are not natively supported by the contentEditable feature I’m alluding to).
- Common in-browser editors are notoriously poor at managing the insertion and updating of properties on markup in the browser. Moreover, they are generally bad at producing clean markup by any measure. This is not the fault of XML, or of DITA as an example of XML, of course. But because wildly inconsistent HTML content is tolerated in the Web ecosystem, nothing is “broken” in the sense that most things seem to work as intended. The race to create adaptive content may yet bring more focus to the role of in-browser editors in cleaning up the parts of the Web that they touch.
I still think it is a good goal to provide WYSIWYG-like friendliness for authors in order to promote the direct creation of structured content on the Web. But without a universal way, within the Web processing architecture itself, to create such structured, adaptive content in browsers, it takes extra, inventive effort to make use of DITA concepts (or even XML in general) that map so well to the call for better, more adaptive content.
Whither Structure and Metadata?
Given this lack of authorial guidance for the newer structural elements that were added in HTML5 (section, aside, header, main, footer, among others), and because HTML5 has such loose rules anyway about which of these elements can occur where, there are few tools in the HTML author’s arsenal that can properly enforce consistent use of this markup in Web content.
By using DITA topic templates as archetypes to guide Web content creation, writers can create more consistent internal patterns for content such as API references and online training materials. One way I envision this happening is to use DITA topic-type templates to auto-generate form fields, with JavaScript controls for inserting new components as inferred from the template, which itself conforms to XML schemas even further behind the scenes. The result: no overt XML markup is shown, the overall structure conforms to a validated pattern, and the scope of cleanup is now just at the component level, not at the whole-document level. I’ll have more to report about this effort soon.
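As a sketch of that idea, and with the caveat that the data-hint attributes are invented here for illustration (they are not part of the DITA grammar), such a template might look like this:

```xml
<!-- A hypothetical topic-type template: an editing layer could walk this
     outline and render one form field per component, keeping the XML
     itself out of sight. The data-hint attributes are invented authoring
     guidance, not DITA markup. -->
<topic id="api-reference-template">
  <title data-hint="Name of the API call"/>
  <shortdesc data-hint="One-sentence summary of what the call does"/>
  <body>
    <section data-hint="Parameters, one list item per parameter">
      <ul>
        <li/>
      </ul>
    </section>
    <section data-hint="A short, runnable example of the call"/>
  </body>
</topic>
```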
In addition, DITA maps provide some benefits for Web content applications. The high-level representation of resources in a DITA map is an improvement over the common practice of representing collections of objects as non-semantic HTML lists. This goes for navigation links, curated link lists, feed lists, bookmark lists, search results, loops for sliders and portfolios, and even blog loops, the backbone of automated content publishing. This is a departure from the common misconception about maps: that they were designed expressly to represent fixed views of content. As bags of pointers, they fit neatly into the RESTful Web concept of resources and collections; for example, http://example.com/category/things_i_like_about_html5 names a resource, and http://example.com/category is a query that represents a collection (that is, an index of all resource members of that category name). This is powerful well beyond conventional uses of lists of links in Web applications.
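Here is a minimal map representing one such collection (the file names are invented for illustration):

```xml
<!-- A DITA map as a "bag of pointers": each topicref names a resource,
     and the map as a whole represents the collection, ready to be
     queried, filtered, or rendered as navigation. -->
<map>
  <title>Things I like about HTML5</title>
  <topicref href="topics/semantic-elements.dita"/>
  <topicref href="topics/native-video.dita"/>
  <topicref href="topics/form-validation.dita"/>
</map>
```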
In this sense, DITA maps and topics provide the structural rules for parts of HTML5 that are simply not defined at a meta level (such as applying repeatable design patterns to HTML5 structures and data fields). The processing features for meta-markup are available in the XML-aware processing libraries of most server and client languages, but the architecturally provided disciplines for their use are just not there. The World Wide Web Consortium’s workgroups concerned with Web protocols seem content to hold off on any guidance at this level. I have a name for what we are left with: the Hole in the Web’s Architecture.
The upcoming OASIS DITA 1.3 specification will add some nifty Web-friendly capabilities that I look forward to using, but that specification includes other markup and features that I may rarely need. If I obey the laws of physics and good taste in this alternative universe, I can use these features to create “Every Page is Page One” articles, long-form posts, repair manuals, white papers, emergency response wikis, and more. And these can be served very efficiently as long as any build or compilation preprocessing is avoided (or at least cached for efficient reuse).
The universe continues to evolve, stretched into new shapes, responsively, by the structured, Adaptive Content that we prepare for it.
et alia: I pulled this post out of wraps today after seeing Jeff Eaton’s excellent post in A List Apart, “The Battle for the Body Field.” I am very much in agreement with Jeff’s concerns about how the Internet is going to solve the problem of creating structured content, rich in links and semantic intent and selectable properties, ready to serve the hunger for adaptive content.
Also, I covered principles of using structured content for adaptive roles in my February 2014 presentation for Intelligent Content Conference, “Connecting Intelligent Content with Micropublishing and Beyond.” At the conference, Scott Abel and Rahel Anne Bailie also launched their new book, The Language of Content Strategy, for which I authored the “Structured Content” definition and essay. I like to think that the Semantic Web is being retooled along the lines of the Six Million Dollar Man franchise: the original concept is still there in spirit, but with augmented capabilities.
________
[1] By “direct-to-Web” I mean that DITA content is fetched directly upon request and transformed on the fly into the particular, personalized form of HTML needed for a particular session. Content applications that require complex DITA feature use simply won’t be using this model of content creation and delivery.