One’s esteemed colleague Lachlan Hunt inaccurately accuses me of engaging in a “movement against the WCAG Working Group.” Listen, I can barely get the ragtag handful of standardistas in this city together once a month for drinks, let alone run a “movement.” In any case, I have conclusively disproved such allegations, which are, flatly, false.
Now we get to the matter of duelling opinions.
A future website that complies with WCAG 2 won’t need valid HTML – at all, ever…. You will, however, have to check the DOM outputs of your site in multiple browsers and prove they’re identical.
There are two issues here: Whether or not validity should be required by WCAG 2.0 and the insanity of that particular technique. … Personally, I’m not convinced validity should be strictly required by WCAG 2.0, I believe it should remain as just a technique.
My remarks in the original piece stand: “Even if valid HTML everywhere all the time is unattainable, the fact remains that, in 2006, we have never had more developers who understand the concept and are trying to make it real on their own sites. WCAG 2 undoes a requirement that, were it retained, could be perfectly timed now.”
We’ve gone from enforcing valid code at an optional level (Priority 2) to refusing to require it at all. It is indeed as if Web standards never existed.
You’ll be able to define entire technologies as a “baseline,” meaning anyone without that technology has little, if any, recourse to complain that your site is inaccessible to them.
[…] If your baseline is set too high, users will have “recourse to complain that your site is inaccessible to them.”
Sadly, no. The user has no say in the baseline designation at all.
Not that anybody ever made them accessible, but if you post videos online, you no longer have to provide audio descriptions for the blind at the lowest “conformance” level. And only prerecorded videos require captions at that level.
I’m not sure how either of these are serious issues.
Because the correct method to make video accessible to the deaf is to caption it; for the blind, the correct method is audio description. There are no second choices. (For extra credit, explore why translating a video into one sign language fails to help deaf viewers as a whole.)
Those with the knowledge and tools available to provide audio descriptions of video can still do so, but requiring audio descriptions at the lowest level is an unreasonable expectation for most authors who are unlikely to have the technical skills, let alone the tools, to do so.
They’ll probably have to send it out of house.
Such authors can still provide a full text alternative, which is much easier to produce.
It is not. You have to transcribe the audio (why not just caption it?); write audio descriptions (why not just record them?); then put them together in a precise, albeit undefined, sequence; then duplicate all the links or other “interactivity” in the video. And the resulting document will have to meet all the other WCAG requirements, including perfect DOM outputs in multiple browsers.
It may be fun to read the screenplay of a movie after you’ve watched it, but forcing disabled people to do the same thing instead of watching it is discriminatory.
I’m sorry, but I’ve written enough about captioning and description over the last 16 years (and have actually written enough description scripts) that I am going to exercise authorial prerogative and state that I just do not feel like having to explain why correct and established accessibility methods really are superior to handing deaf or blind people a transcript and telling them to get lost. Can’t Lachlan do his own homework for once, viz. watching captioned TV and described movies for a couple of weeks and then imagining how any of that would work with an added text-only document instead of the real thing?
We don’t expect deaf people to mail in a self-addressed stamped envelope for a transcript of an uncaptioned TV show and we don’t expect blind people to surf to a Web site to listen to Jaws yammer on endlessly in a recitation of pretend “audio descriptions.” This isn’t a physical artifact like a book, which can only be made accessible by producing another artifact. Electronic media including video can be made accessible in and of themselves. Don’t hand me a printout when I want to watch a video.
If we’re really concerned with “difficulty,” like the difficulty of captions and descriptions, then why does WCAG 2 force authors to go to insane lengths just to produce an HTML “authored unit,” but gives them a pass when it comes to video? I wonder what kind of blithering idiot equates the difficulty of writing valid code with that of providing captioning and description – because, after all, at various points in WCAG 2 all of those things are optional. But by God you’ve got to be careful with your DOM outputs.
WCAG 2’s multimedia sections border on utter bullshit. I haven’t spent this many years working in and around captioning and description – the former being the actual genesis of my interest in accessibility – to have those techniques defined out of existence by ignoramuses.
However, Joe claimed that the full text alternative is a “discredited holdover from WCAG 1,” yet failed to provide or link to any evidence to support that claim.
Read the original more carefully, please:
The whole thing is supposed to be of help to deaf-blind people, who were never surveyed for their preferences, an action I recommended to WAI at a face-to-face meeting in 2003. Nor was any user testing carried out. (That’s all I can tell from published evidence, anyway. I sent E-mail inquiries to deaf-blind organizations in several countries asking if they’d been surveyed or had any opinions, with no response.)
There are about three known examples of such a transcript in the seven-year history of WCAG…. And there really aren’t any HTML semantics for such transcripts, unless you wanted to push the envelope of the definition list….
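To illustrate the envelope-pushing in question, here is what such a transcript might look like if you did abuse a definition list for it – pairing “Description” and speaker labels with text. This is a sketch of an assumption, not markup sanctioned by any specification; the class names are invented for the example:

```html
<!-- Hypothetical full-text alternative for a video clip,
     pressing a definition list into service to interleave
     audio description with dialogue. No spec defines this
     use; the class names are illustrative only. -->
<dl class="transcript">
  <dt class="description">Description</dt>
  <dd>A presenter stands at a whiteboard covered in diagrams.</dd>
  <dt class="speaker">Presenter</dt>
  <dd>Welcome to this session on accessible multimedia.</dd>
  <dt class="description">Description</dt>
  <dd>She points to a flowchart labelled “Captioning workflow.”</dd>
</dl>
```

Even granting the markup, nothing tells a deaf-blind reader in what “precise, albeit undefined, sequence” the pieces are meant to be read.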
Additionally, my “friends” at the National Center for Accessible Media decried the idea. I’m waiting for somebody to accuse them of a conflict of interest or sour grapes. In the meantime, their objection can be viewed as another expert opinion that the whole idea is bollocks.
His explanation stating that parts of it are not needed by the blind, and other parts by the deaf, doesn’t seem to have a point – it certainly does nothing to discredit the technique.
Make the original video accessible. Deaf people don’t need descriptions (they can watch the video) and blind people don’t need captioning or transcription (they can listen to the audio).
Deaf-blind people – again, the ostensible audience for this technique – were never surveyed for their needs and seem so uninterested in this bullshit about post-facto transcripts that they can’t be arsed to answer my questions about them. I’m going to send out another E-mail, but if this were really the preferred method of accessibility for that group, would we not find a greater clamour for it?
One more time: The means to make video accessible are established. WCAG 2 tries to define them out of existence or cook up imaginary alternatives for them.
As for requiring captions for prerecorded videos only, I don’t understand why this is a problem at all. Would anyone seriously expect the average person with a live Webcam to be able to provide captions in real time?
A specious example, as Webcams merely need a text equivalent under WCAG 2.
Also at the highest level, you have to provide a way to find all of the following:
- Definitions of idioms and “jargon”
- Expansion of acronyms
- Pronunciations of some words
Again, I don’t see how this is a problem at all. It is only required for level 3 conformance
…which doesn’t mean anything, as it is nonetheless in WCAG 2.
- It’s absurd to suggest that it is straightforward to meet these requirements. In the present day it is not straightforward to meet a mere requirement for expansion of acronyms and abbreviations.
- It’s entirely likely that your “jargon” won’t have an online definition. While I am OK with forcing authors to write alt texts (an alternative for content they’re already using), I am not OK with forcing authors to become lexicographers.
- And how does one provide pronunciations? In which dialect? Using International Phonetic Alphabet? (How many people with reading disabilities can read IPA?) With a sound file? Again, in what dialect, and won’t you have to caption it?
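For context, the one established HTML mechanism in this area is expansion via the title attribute on abbr (or, in HTML 4, acronym) – and note that even this does nothing whatsoever for pronunciation:

```html
<!-- Expansion via the title attribute. A sighted reader must
     hover to see it; a screen reader may or may not voice it.
     Nothing here conveys pronunciation in any dialect. -->
<p>The <abbr title="Web Content Accessibility Guidelines">WCAG</abbr>
Working Group is part of the
<abbr title="Web Accessibility Initiative">WAI</abbr>.</p>
```

That’s the entire toolkit, which is rather the point: the markup exists for expansion only, and there it stops.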
Joe also announced the launch of the WCAG Samurai, an effort to publish corrections for and extensions to the existing WCAG 1.0 recommendation. In principle, it’s a very good idea for the community to begin addressing accessibility issues and the serious problems with WCAG 2.0 themselves, but my main concern with the WCAG Samurai is this:
… another thing we’re not going to do is run a totally open process. It’s a viable model for standards development, one I have championed in another context, but in Web accessibility it is proven not to work.
WCAG Samurai will toil in obscurity for the foreseeable future. Membership rolls will not be published, and membership is by invitation only.
Working on this behind closed doors seems like a huge mistake to me. In fact, it seems downright hypocritical of Joe
Yeah, I admitted that: “Of course this is unfair to say the least, if not actively elitist and hypocritical.” The so-called open process for the Web Content Accessibility Guidelines actually is not open in the first place and, more importantly, is a demonstrated failure. We’re trying something else. Maybe it won’t work, but we know the alternative already hasn’t.
Think whatever you want in the meantime. In fact, you can go ahead and stretch the concept of “accessible,” as Lachlan does, as a means to critique our elitist behaviour. Go right ahead. Judge us on results, which we have not even begun to work on yet. (At this point, the WCAG Samurai itself consists of a single E-mail and a few instant messages.)