Dive Into HTML5: A (Biased) History of HTML5

How Did We Get Here?
Recently, I stumbled across a quote from a Mozilla developer about the tension inherent in creating web standards:

Implementations and specifications have to do a delicate dance together. You don't want implementations to happen before the specification is finished, because people will start depending on the details of implementations and that constrains the specification. However, you also don't want the specification to be finished before there are implementations and author experience with those implementations, because you need the feedback. There is unavoidable tension here, but we just have to muddle on through.
Keep that quote in mind, and let me explain how HTML5 came to be.

This book is about HTML5, not previous versions of HTML, and not any version of XHTML. But to understand the history of HTML5 and the motivations behind it, you need to understand a few technical details first. Specifically, MIME types.
Every time your web browser requests a page, the web server sends "headers" before it sends the actual page markup. These headers are normally invisible, although there are web development tools that will make them visible if you're interested. But the headers are important, because they tell your browser how to interpret the page markup that follows. The most important header is called Content-Type, and it looks like this:

Content-Type: text/html
Of course, the reality is more complicated than that. The first generation of web servers (and I'm talking about web servers from 1993) didn't send the Content-Type header, because it didn't exist yet. (It wasn't invented until 1994.) For compatibility reasons dating all the way back to 1993, some popular web browsers ignore the Content-Type header under certain circumstances. (This is called "content sniffing.") But as a general rule of thumb, everything you've ever looked at on the web (HTML pages, images, scripts, videos, PDFs, anything with a URL) has been served to you with a specific MIME type in the Content-Type header.

Tuck that away in the back of your mind. We'll come back to it.
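To make the mechanics concrete, here is a minimal sketch of the first thing a browser does with a response: split the headers from the markup and read the Content-Type header. The raw response string below is a hypothetical example, not captured traffic.

```python
# A minimal sketch of HTTP response handling: headers come first,
# separated from the body by a blank line, and Content-Type tells
# the client how to interpret what follows.

raw_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: 51\r\n"
    "\r\n"
    "<!DOCTYPE html><title>Hi</title><p>Hello, world</p>"
)

def content_type(response: str) -> str:
    """Return the Content-Type header value from a raw HTTP response."""
    head, _, _body = response.partition("\r\n\r\n")
    for line in head.split("\r\n")[1:]:          # skip the status line
        name, _, value = line.partition(":")
        if name.strip().lower() == "content-type":
            return value.strip()
    return "application/octet-stream"            # a common fallback

print(content_type(raw_response))  # → text/html
```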
A long digression into how standards are made
Why do we have an <img> element? That's not a question you hear every day. Obviously someone created it. These things don't just appear out of nowhere. Every element, every attribute, every feature of HTML that you've ever used: someone created it, decided how it should work, and wrote it all down. These people are not gods, nor are they flawless. They're just people. Smart people, to be sure. But just people.

One of the great things about standards that are developed "in the open" is that you can go back in time and answer these kinds of questions. Discussions occur on mailing lists, which are usually archived and publicly searchable. So I decided to do a little "email archaeology" to answer the question, "Why do we have an <img> element?" I had to go back to before there was an organization called the World Wide Web Consortium (W3C). I went back to the earliest days of the web, when you could count the number of web servers with both hands, and maybe a couple of toes.
On February 25, 1993, Marc Andreessen wrote:

I'd like to propose a new, optional HTML tag:

IMG

Required argument is SRC="url".

This names a bitmap or pixmap file for the browser to attempt to pull over the network and interpret as an image, to be embedded in the text at the point of the tag's occurrence.

(There is no closing tag; this is a standalone tag.)

This tag can be embedded in an anchor like anything else; when that happens, it becomes an icon that is sensitive to activation just like a regular text anchor.

Browsers should be afforded flexibility as to which image formats they support. Xbm and Xpm are good formats to support, for example. If a browser cannot interpret a given format, it can do whatever it wants instead (X Mosaic will pop up a default bitmap as a placeholder).

This is required functionality for X Mosaic; we have this working, and we'll at least be using it internally. I'm certainly open to suggestions as to how this should be handled within HTML; if you have a better idea than what I'm presenting now, please let me know. I know this is hazy wrt image formats, but I don't see an alternative other than to just say "let the browser do what it can" and wait for the perfect solution to come along (MIME, someday, maybe).
Xbm and Xpm were popular graphics formats on Unix systems.

"Mosaic" was one of the earliest web browsers. ("X Mosaic" was the version that ran on Unix systems.) When he wrote this message in early 1993, Marc Andreessen had not yet founded the company that would make him famous, Mosaic Communications Corporation, nor had he started work on that company's flagship product, "Mosaic Netscape." (You may know them better by their later names, "Netscape Corporation" and "Netscape Navigator.")

"MIME, someday, maybe" is a reference to content negotiation, a feature of HTTP where a client (such as a web browser) tells the server (such as a web server) what types of resources it supports (such as image/jpeg), so the server can return something in the client's preferred format. The Original HTTP as defined in 1991 (the only version implemented in February 1993) had no way for a client to tell the server what kinds of images it supported, hence the dilemma that Marc faced.
A few hours later, Tony Johnson replied:

I have something very similar in Midas 2.0 (currently in use at SLAC, and due for public release any week now), except that all the names are different, and it has an additional argument NAME="name". It has almost exactly the same functionality as your proposed IMG tag. e.g.:
<ICON name="NoEntry" href="http://note/foo/bar/NoEntry.xbm">
The idea of the name parameter was to allow the browser to have a set of “built in” images. If the name matches a “built in” image it would use that instead of having to go out and fetch the image. The name could also act as a hint for “line mode” browsers as to what kind of a symbol to put in place of the image.
I don't much care about the parameter names or the tag name, but it would make sense for us to use the same ones. I don't care much for abbreviations, i.e. why not IMAGE= and SOURCE=? I somewhat prefer ICON, since it implies that the IMAGE should be small, but perhaps ICON is an overloaded word?
Midas was another early web browser, contemporary with X Mosaic. It was cross-platform; it ran on both Unix and VMS. "SLAC" refers to the Stanford Linear Accelerator Center, now known as the SLAC National Accelerator Laboratory, which hosted the first web server in the United States (in fact, the first web server outside of Europe). When Tony wrote this message, SLAC was an old-timer on the WWW, having hosted five pages on its web server for 441 days.
While we are on the subject of new tags, I have another, quite similar tag which I would like Midas 2.0 to support. In principle it is:

<INCLUDE HREF="...">
The intention here is that the second document is to be included into the first document, at the place where the tag occurs. In principle the referenced document could be anything, but the main purpose was to allow images (in this case of arbitrary size) to be embedded into documents. Again, the intention would be that when HTTP2 comes along, the format of the included document would be separately negotiable.

"HTTP2" is a reference to Basic HTTP as defined in 1992. At this point in early 1993, it was still largely unimplemented. The draft known as "HTTP2" evolved and was eventually standardized as "HTTP 1.0" (although not for another three years). HTTP 1.0 did include request headers for content negotiation, a.k.a. "MIME, someday, maybe."
An alternative I was considering was:
<A HREF="..." INCLUDE>See photo</A>
I don’t much like adding more functionality to the <A> tag, but the idea here is to maintain compatibility with browsers that can not honour the INCLUDE parameter. The intention is that browsers which do understand INCLUDE, replace the anchor text (in this case “See photo”) with the included document (picture), while older or dumber browsers ignore the INCLUDE tag completely.
This proposal was never implemented, although the idea of providing text if an image is missing is an important accessibility technique that was missing from Marc’s initial <IMG> proposal. Years later, this feature was bolted on as the <img alt> attribute, which Netscape promptly broke by erroneously treating it as a tooltip.
A few hours after Tony posted his message, Tim Berners-Lee responded:
I had imagined that figures would be reprented as
<a name="fig1" href="fghjkdfghj" REL="EMBED, PRESENT">Figure </a>
where the relation ship values mean
EMBED    Embed this here when presenting it
PRESENT  Present this whenever the source document is presented
Note that you can have various combinations of these, and if the browser doesn’t support either one, it doesn’t break.
[I] see that using this as a method for selectable icons means nesting anchors. Hmmm. But I hadn’t wanted a special tag.
This proposal was never implemented, but the rel attribute is still around.
Jim Davis added:
It would be nice if there was a way to specify the content type, e.g.
<IMG HREF="http://nsa.gov/pub/sounds/gorby.au" CONTENT-TYPE=audio/basic>
But I am completely willing to live with the requirement that I specify the content type by file extension.
This proposal was never implemented, but Netscape did later add support for embedding of media objects with the <embed> element.
Jay C. Weber asked:
While images are at the top of my list of desired medium types in a WWW browser, I don’t think we should add idiosyncratic hooks for media one at a time. Whatever happened to the enthusiasm for using the MIME typing mechanism?
Marc Andreessen replied:
This isn’t a substitute for the upcoming use of MIME as a standard document mechanism; this provides a necessary and simple implementation of functionality that’s needed independently from MIME.
Jay C. Weber responded:
Let’s temporarily forget about MIME, if it clouds the issue. My objection was to the discussion of “how are we going to support embedded images” rather than “how are we going to support embedded objections in various media”.
Otherwise, next week someone is going to suggest ‘lets put in a new tag <AUD SRC="file://foobar.com/foo/bar/blargh.snd">‘ for audio.
There shouldn’t be much cost in going with something that generalizes.
With the benefit of hindsight, it appears that Jay’s concerns were well founded. It took a little more than a week, but HTML5 did finally add new <video> and <audio> elements.
Responding to Jay’s original message, Dave Raggett said:
True indeed! I want to consider a whole range of possible image/line art types, along with the possibility of format negotiation. Tim’s note on supporting clickable areas within images is also important.
Later in 1993, Dave Raggett proposed HTML+ as an evolution of the HTML standard. The proposal was never implemented, and it was superseded by HTML 2.0. HTML 2.0 was a “retro-spec,” which means it formalized features already in common use. “This specification brings together, clarifies, and formalizes a set of features that roughly corresponds to the capabilities of HTML in common use prior to June 1994.”
Dave later wrote HTML 3.0, based on his earlier HTML+ draft. Outside of the W3C’s own reference implementation, Arena, HTML 3.0 was never implemented, and it was superseded by HTML 3.2, another “retro-spec”: “HTML 3.2 adds widely deployed features such as tables, applets and text flow around images, while providing full backwards compatibility with the existing standard HTML 2.0.”
Dave later co-authored HTML 4.0, developed HTML Tidy, and went on to help with XHTML, XForms, MathML, and other modern W3C specifications.
Getting back to 1993, Marc replied to Dave:
Actually, maybe we should think about a general-purpose procedural graphics language within which we can embed arbitrary hyperlinks attached to icons, images, or text, or anything. Has anyone else seen Intermedia’s capabilities wrt this?
Intermedia was a hypertext project from Brown University. It was developed from 1985 to 1991 and ran on A/UX, a Unix-like operating system for early Macintosh computers.
The idea of a “general-purpose procedural graphics language” did eventually catch on. Modern browsers support both SVG (declarative markup with embedded scripting) and <canvas> (a procedural direct-mode graphics API), although the latter started as a proprietary extension before being “retro-specced” by the WHATWG.
Bill Janssen replied:
Other systems to look at which have this (fairly valuable) notion are Andrew and Slate. Andrew is built with _insets_, each of which has some interesting type, such as text, bitmap, drawing, animation, message, spreadsheet, etc. The notion of arbitrary recursive embedding is present, so that an inset of any kind can be embedded in any other kind which supports embedding. For example, an inset can be embedded at any point in the text of the text widget, or in any rectangular area in the drawing widget, or in any cell of the spreadsheet.
“Andrew” is a reference to the Andrew User Interface System (although at that time it was simply known as the Andrew Project).
Meanwhile, Thomas Fine had a different idea:
Here’s my opinion. The best way to do images in WWW is by using MIME. I’m sure postscript is already a supported subtype in MIME, and it deals very nicely with mixing text and graphics.
But it isn’t clickable, you say? Yes your right. I suspect there is already an answer to this in display postscript. Even if there isn’t the addition to standard postscript is trivial. Define an anchor command which specifies the URL and uses the current path as a closed region for the button. Since postscript deals so well with paths, this makes arbitrary button shapes trivial.
Display Postscript was an on-screen rendering technology co-developed by Adobe and NeXT.
This proposal was never implemented, but the idea that the best way to fix HTML is to replace it with something else altogether still pops up from time to time.
Tim Berners-Lee, March 2, 1993:
HTTP2 allows a document to contain any type which the user has said he can handle, not just registered MIME types. So one can experiment. Yes I think there is a case for postscript with hypertext. I don’t know whether display postcript has enough. I know Adobe are trying to establish their own postscript-based “PDF” which will have links, and be readable by their proprietory brand of viewers.
I thought that a generic overlaying language for anchors (Hytime based?) would allow the hypertext and the graphics/video standards to evolve separately, which would help both.
Let the IMG tag be INCLUDE and let it refer to an arbitrary document type. Or EMBED if INCLUDE sounds like a cpp include which people will expect to provide SGML source code to be parsed inline — not what was intended.
HyTime was an early, SGML-based hypertext document system. It loomed large in early discussions of HTML, and later XML.
Tim’s proposal for an <INCLUDE> tag was never implemented, although you can see echoes of it in <object>, <embed>, and the <iframe> element.
Finally, on March 12, 1993, Marc Andreessen revisited the thread:
Back to the inlined image thread again — I’m getting close to releasing Mosaic v0.10, which will support inlined GIF and XBM images/bitmaps, as mentioned previously. …
We’re not prepared to support INCLUDE/EMBED at this point. … So we’re probably going to go with <IMG SRC="url"> (not ICON, since not all inlined images can be meaningfully called icons). For the time being, inlined images won’t be explicitly content-type’d; down the road, we plan to support that (along with the general adaptation of MIME). Actually, the image reading routines we’re currently using figure out the image format on the fly, so the filename extension won’t even be significant.
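That last detail, figuring out the image format from the bytes themselves rather than from the filename, is easy to sketch. The magic numbers below are real file signatures, but the function is only an illustration of the idea, not Mosaic's actual routine (and PNG postdates 1993):

```python
# Sniff an image format from its leading bytes instead of trusting the
# file extension, the same general idea Marc describes above.
def sniff_image_format(data: bytes) -> str:
    if data[:6] in (b"GIF87a", b"GIF89a"):     # GIF signature
        return "gif"
    if data[:8] == b"\x89PNG\r\n\x1a\n":       # PNG signature
        return "png"
    if data.startswith(b"#define "):           # XBM files are C source text
        return "xbm"
    return "unknown"

print(sniff_image_format(b"GIF89a" + b"\x00" * 10))  # → gif
```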
An unbroken line
I am extraordinarily fascinated with all aspects of this almost-19-year-old conversation that led to the creation of an HTML element that has been used on virtually every web page ever published. Consider:
HTTP still exists. HTTP successfully evolved from 0.9 into 1.0 and later 1.1. And still it evolves.

HTML still exists. That rudimentary data format — it didn’t even support inline images! — successfully evolved into 2.0, 3.2, 4.0. HTML is an unbroken line. A twisted, knotted, snarled line, to be sure. There were plenty of “dead branches” in the evolutionary tree, places where standards-minded people got ahead of themselves (and ahead of authors and implementors). But still. Here we are, in 2012, and web pages from 1990 still render in modern browsers. I just loaded one up in the browser of my state-of-the-art Android mobile phone, and I didn’t even get prompted to “please wait while importing legacy format…”

HTML has always been a conversation between browser makers, authors, standards wonks, and other people who just showed up and liked to talk about angle brackets. Most of the successful versions of HTML have been “retro-specs,” catching up to the world while simultaneously trying to nudge it in the right direction. Anyone who tells you that HTML should be kept “pure” (presumably by ignoring browser makers, or ignoring authors, or both) is simply misinformed. HTML has never been pure, and all attempts to purify it have been spectacular failures, matched only by the attempts to replace it.

None of the browsers from 1993 still exist in any recognizable form. Netscape Navigator was abandoned in 1998 and rewritten from scratch to create the Mozilla Suite, which was then forked to create Firefox. Internet Explorer had its humble “beginnings” in “Microsoft Plus! for Windows 95,” where it was bundled with some desktop themes and a pinball game. (But of course that browser can be traced back further too.)

Some of the operating systems from 1993 still exist, but none of them are relevant to the modern web.
Most people today who “experience” the web do so on a PC running Windows 2000 or later, a Mac running Mac OS X, a PC running some flavor of Linux, or a handheld device like an iPhone. In 1993, Windows was at version 3.1 (and competing with OS/2), Macs were running System 7, and Linux was distributed via Usenet. (Want to have some fun? Find a graybeard and whisper “Trumpet Winsock” or “MacPPP.”)

Some of the same people are still around and still involved in what we now simply call “web standards.” That’s after almost 20 years. And some were involved in predecessors of HTML, going back into the 1980s and before.

Speaking of predecessors… With the eventual popularity of HTML and the web, it is easy to forget the contemporary formats and systems that informed its design. Andrew? Intermedia? HyTime? And HyTime was not some rinky-dink academic research project; it was an ISO standard. It was approved for military use. It was Big Business. And you can read about it yourself… on this HTML page, in your web browser.
But none of this answers the original question: why do we have an <img> element? Why not an <icon> element? Or an <include> element? Why not a hyperlink with an include attribute, or some combination of rel values? Why an <img> element? Quite simply, because Marc Andreessen shipped one, and shipping code wins.
That’s not to say that all shipping code wins; after all, Andrew and Intermedia and HyTime shipped code too. Code is necessary but not sufficient for success. And I certainly don’t mean to say that shipping code before a standard will produce the best solution. Marc’s <img> element didn’t mandate a common graphics format; it didn’t define how text flowed around it; it didn’t support text alternatives or fallback content for older browsers. And 19 years later, we’re still struggling with content sniffing, and it’s still a source of crazy security vulnerabilities. And you can trace that all the way back, 19 years, through the Great Browser Wars, all the way back to February 25, 1993, when Marc Andreessen offhandedly remarked, “MIME, someday, maybe,” and then shipped his code anyway.
The ones that win are the ones that ship.
A timeline of HTML development from 1997 to 2004
In December 1997, the World Wide Web Consortium (W3C) published HTML 4.0 and promptly shut down the HTML Working Group. Less than two months later, a separate W3C Working Group published XML 1.0. A mere three months after that, the people who ran the W3C held a workshop called “Shaping the Future of HTML” to answer the question, “Has W3C given up on HTML?” This was their answer:
In discussions, it was agreed that further extending HTML 4.0 would be difficult, as would converting 4.0 to be an XML application. The proposed way to break free of these restrictions is to make a fresh start with the next generation of HTML based upon a suite of XML tag-sets.
The W3C re-chartered the HTML Working Group to create this “suite of XML tag-sets.” Their first step, in December 1998, was a draft of an interim specification that simply reformulated HTML in XML without adding any new elements or attributes. This specification later became known as “XHTML 1.0.” It defined a new MIME type for XHTML documents, application/xhtml+xml. However, to ease the migration of existing HTML 4 pages, it also included Appendix C, which “summarizes design guidelines for authors who wish their XHTML documents to render on existing HTML user agents.” Appendix C said you were allowed to author so-called “XHTML” pages but still serve them with the text/html MIME type.
Their next target was web forms. In August 1999, the same HTML Working Group published a first draft of XHTML Extended Forms. They set the expectations in the first paragraph:
After careful consideration, the HTML Working Group has decided that the goals for the next generation of forms are incompatible with preserving backwards compatibility with browsers designed for earlier versions of HTML. It is our objective to provide a clean new forms model (“XHTML Extended Forms”) based on a set of well-defined requirements. The requirements described in this document are based on experience with a very broad spectrum of form applications.
A few months later, “XHTML Extended Forms” was renamed “XForms” and moved to its own Working Group. That group worked in parallel with the HTML Working Group and finally published the first edition of XForms 1.0 in October 2003.
Meanwhile, with the transition to XML complete, the HTML Working Group set their sights on creating “the next generation of HTML.” In May 2001, they published the first edition of XHTML 1.1, which added only a few minor features on top of XHTML 1.0, but also eliminated the “Appendix C” loophole. Starting with version 1.1, all XHTML documents were to be served with a MIME type of application/xhtml+xml.
Everything you know about XHTML is wrong
Why are MIME types important? Why do I keep coming back to them? Three words: draconian error handling. Browsers have always been “forgiving” with HTML. If you create an HTML page but forget the </head> tag, browsers will display the page anyway. (Certain tags implicitly trigger the end of the <head> and the start of the <body>.) You are supposed to nest tags hierarchically — closing them in last-in-first-out order — but if you create markup like <b><i></b></i>, browsers will just deal with it (somehow) and move on without displaying an error message.
As you might expect, the fact that “broken” HTML markup still worked in web browsers led authors to create broken HTML pages. A lot of broken pages. By some estimates, over 99% of HTML pages on the web today have at least one error in them. But because these errors don’t cause browsers to display visible error messages, nobody ever fixes them.
The W3C saw this as a fundamental problem with the web, and they set out to correct it. XML, published in 1997, broke from the tradition of forgiving clients and mandated that all programs that consumed XML must treat so-called “well-formedness” errors as fatal. This concept of failing on the first error became known as “draconian error handling,” after the Greek leader Draco who instituted the death penalty for relatively minor infractions of his laws. When the W3C reformulated HTML as an XML vocabulary, they mandated that all documents served with the new application/xhtml+xml MIME type would be subject to draconian error handling. If there was even a single well-formedness error in your XHTML page — such as forgetting the </head> tag or improperly nesting start and end tags — web browsers would have no choice but to stop processing and display an error message to the end user.
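The difference between the two error-handling philosophies is easy to demonstrate with the Python standard library. This is only a sketch: real browsers implement the full HTML parsing algorithm, not a simple tokenizer.

```python
# Contrast XML's draconian error handling with a forgiving HTML
# tokenizer, using only the standard library.
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

broken = "<html><head><title>Oops<body><p>no closing tags"

# XML: a single well-formedness error is fatal.
try:
    ET.fromstring(broken)
    xml_ok = True
except ET.ParseError:
    xml_ok = False

# HTML: the parser just keeps going and never raises.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed(broken)

print(xml_ok)           # → False (draconian: parsing failed outright)
print(collector.tags)   # → ['html', 'head', 'title', 'body', 'p']
```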
This idea was not universally popular. With an estimated error rate of 99% on existing pages, the ever-present possibility of displaying errors to the end user, and the dearth of new features in XHTML 1.0 and 1.1 to justify the cost, web authors basically ignored application/xhtml+xml. But that doesn’t mean they ignored XHTML altogether. Oh, most definitely not. Appendix C of the XHTML 1.0 specification gave the web authors of the world a loophole: “Use something that looks kind of like XHTML syntax, but keep serving it with the text/html MIME type.” And that’s exactly what thousands of web developers did: they “upgraded” to XHTML syntax but kept serving it with a text/html MIME type.
Even today, millions of web pages claim to be XHTML. They start with the XHTML doctype on the first line, use lowercase tag names, use quotes around attribute values, and add a trailing slash after empty elements like <br />. But only a tiny fraction of these pages are served with the application/xhtml+xml MIME type that would trigger XML’s draconian error handling. Any page served with a MIME type of text/html — regardless of doctype, syntax, or coding style — will be parsed using a “forgiving” HTML parser, silently ignoring any markup errors, and never alerting end users (or anyone else) even if the page is technically broken.
XHTML 1.0 included this loophole, but XHTML 1.1 closed it, and the never-finalized XHTML 2.0 continued the tradition of requiring draconian error handling. And that’s why there are billions of pages that claim to be XHTML 1.0, and only a handful that claim to be XHTML 1.1 (or XHTML 2.0). So are you really using XHTML? Check your MIME type. (Actually, if you don’t know what MIME type you’re using, I can pretty much guarantee that you’re still using text/html.) Unless you’re serving your pages with a MIME type of application/xhtml+xml, your so-called “XHTML” is XML in name only.
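The rule can be captured in a few lines. The dispatcher below is a toy illustration of the principle, not code from any real browser:

```python
# The MIME type alone (never the doctype or the markup style) decides
# which kind of parser a browser uses for a document.
def parser_for(content_type: str) -> str:
    mime = content_type.split(";")[0].strip().lower()
    if mime in ("application/xhtml+xml", "application/xml", "text/xml"):
        return "draconian XML parser"
    return "forgiving HTML parser"

# A page full of XHTML doctypes and trailing slashes, served as
# text/html, still gets the forgiving parser:
print(parser_for("text/html; charset=utf-8"))  # → forgiving HTML parser
print(parser_for("application/xhtml+xml"))     # → draconian XML parser
```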
A competing vision
In June 2004, the W3C held the Workshop on Web Applications and Compound Documents. Present at this workshop were representatives of three browser vendors, web development companies, and other W3C members. A group of interested parties, including the Mozilla Foundation and Opera Software, gave a presentation on their competing vision of the future of the web: an evolution of the existing HTML 4 standard to include new features for modern web application developers.
The following seven principles represent what we believe to be the most critical requirements for this work.
In a straw poll, the workshop participants were asked, “Should the W3C develop declarative extension to HTML and CSS and imperative extensions to DOM, to address medium level Web Application requirements, as opposed to sophisticated, fully-fledged OS-level APIs? (proposed by Ian Hickson, Opera Software)” The vote was 11 to 8 against. In their summary of the workshop, the W3C wrote, “At present, W3C does not intend to put any resources into the third straw-poll topic: extensions to HTML and CSS for Web Applications, other than technologies being developed under the charter of current W3C Working Groups.”
Faced with this decision, the people who had proposed evolving HTML and HTML forms had only two choices: give up, or continue their work outside of the W3C. They chose the latter and registered the whatwg.org domain, and in June 2004, the WHAT Working Group was born.
WHAT Working Group?
What the heck is the WHAT Working Group? I’ll let them explain it for themselves:
The Web Hypertext Applications Technology Working Group is a loose, unofficial, and open collaboration of Web browser manufacturers and interested parties. The group aims to develop specifications based on HTML and related technologies to ease the deployment of interoperable Web Applications, with the intention of submitting the results to a standards organisation. This submission would then form the basis of work on formally extending HTML in the standards track.
The creation of this forum follows from several months of work by private e-mail on specifications for such technologies. The main focus up to this point has been extending HTML4 Forms to support features requested by authors, without breaking backwards compatibility with existing content. This group was created to ensure that future development of these specifications will be completely open, through a publicly-archived, open mailing list.
The key phrase here is “without breaking backward compatibility.” XHTML (minus the Appendix C loophole) is not backwardly compatible with HTML. It requires an entirely new MIME type, and it mandates draconian error handling for all content served with that MIME type. XForms is not backwardly compatible with HTML forms, because it can only be used in documents that are served with the new XHTML MIME type, which means that XForms also mandates draconian error handling. All roads lead to MIME.
Instead of scrapping over a decade’s worth of investment in HTML and making 99% of existing web pages unusable, the WHAT Working Group decided to take a different approach: documenting the “forgiving” error-handling algorithms that browsers actually used. Web browsers have always been forgiving of HTML errors, but nobody had ever bothered to write down exactly how they did it. NCSA Mosaic had its own algorithms for dealing with broken pages, and Netscape tried to match them. Then Internet Explorer tried to match Netscape. Then Opera and Firefox tried to match Internet Explorer. Then Safari tried to match Firefox. And so on, right up to the present day. Along the way, developers burned thousands and thousands of hours trying to make their products compatible with their competitors’.
If that sounds like an insane amount of work, that’s because it is. Or rather, it was. It took five years, but (modulo a few obscure edge cases) the WHAT Working Group successfully documented how to parse HTML in a way that is compatible with existing web content. Nowhere in the final algorithm is there a step that mandates that the HTML consumer should stop processing and display an error message to the end user.
While all that reverse-engineering was going on, the WHAT working group was quietly working on a few other things, too. One of them was a specification, initially dubbed Web Forms 2.0, which added new types of controls to HTML forms. (You’ll learn more about web forms in A Form of Madness.) Another was a draft specification called “Web Applications 1.0,” which included major new features like a direct-mode drawing canvas and native support for audio and video without plugins.
Back to the W3C
For two and a half years, the W3C and the WHAT Working Group largely ignored each other. While the WHAT Working Group focused on web forms and new HTML features, the W3C HTML Working Group was busy with version 2.0 of XHTML. But by October 2006, it was clear that the WHAT Working Group had picked up serious momentum, while XHTML 2 was still languishing in draft form, unimplemented by any major browser. In October 2006, Tim Berners-Lee, the founder of the W3C itself, announced that the W3C would work together with the WHAT Working Group to evolve HTML.
Some things are clearer with hindsight of several years. It is necessary to evolve HTML incrementally. The attempt to get the world to switch to XML, including quotes around attribute values and slashes in empty tags and namespaces all at once didn’t work. The large HTML-generating public did not move, largely because the browsers didn’t complain. Some large communities did shift and are enjoying the fruits of well-formed systems, but not all. It is important to maintain HTML incrementally, as well as continuing a transition to well-formed world, and developing more power in that world.
The plan is to charter a completely new HTML group. Unlike the previous one, this one will be chartered to do incremental improvements to HTML, as also in parallel xHTML. It will have a different chair and staff contact. It will work on HTML and xHTML together. We have strong support for this group, from many people we have talked to, including browser makers.
There will also be work on forms. This is a complex area, as existing HTML forms and XForms are both form languages. HTML forms are ubiquitously deployed, and there are many implementations and users of XForms. Meanwhile, the Webforms submission has suggested sensible extensions to HTML forms. The plan is, informed by Webforms, to extend HTML forms.
One of the first things the newly re-chartered W3C HTML Working Group decided was to rename “Web Applications 1.0” to “HTML5.” And here we are, diving into HTML5.
In October 2009, the W3C shut down the XHTML 2 Working Group and issued this statement to explain their decision:
When W3C announced the HTML and XHTML 2 Working Groups in March 2007, we indicated that we would continue to monitor the market for XHTML 2. W3C recognizes the importance of a clear signal to the community about the future of HTML.
While we recognize the value of the XHTML 2 Working Group’s contributions over the years, after discussion with the participants, W3C management has decided to allow the Working Group’s charter to expire at the end of 2009 and not to renew it.
The ones that win are the ones that ship.
The History of the Web, an old draft by Ian Hickson
HTML/History, by Michael Smith, Henri Sivonen, and others
A Brief History of HTML, by Scott Reynen
This has been “How Did We Get Here?” The full table of contents has more if you’d like to keep reading.

Did You Know?
In association with Google Press, O’Reilly is distributing this book in a variety of formats, including paper, ePub, Mobi, and DRM-free PDF. The paid edition is called “HTML5: Up & Running,” and it is available now. This chapter is included in the paid edition.
If you liked this chapter and want to show your appreciation, you can buy “HTML5: Up & Running” with this affiliate link or buy an electronic edition directly from O’Reilly. You’ll get a book, and I’ll get a buck. I do not currently accept direct donations.
Copyright MMIX–MMXI Mark Pilgrim