I QUIT

It is not widely known, because I don’t talk about it much, but I worked on a government-funded project. I had a five-word mandate, “best practices in online captioning.” I eventually published my findings myself, after extensive delays in getting the university site to publish them (it still hasn’t).

A separate project, CNICE, also explored accessibility of some kind or other. I wasn’t really involved, though I should have been.

The project ended up producing software:

  1. CapScribe for online captioning
  2. The Flash Captioning Tool
  3. LiveDescribe for audio description

In CNICE’s documentation on online enhanced captioning (PDF), one finds:

    There are no limits to how caption text can be presented on the Web.

There are actually quite a few limits, as our dear friends at WGBH have documented. (We are talking about player closed captions, not open captions.)

    Due to the small frame size of videos that are streamed on the Web, it may make more sense to place text outside of the video itself.

That would be the norm for player closed captions, although there are examples of superimposed captions.

    In addition, text presented in different fonts, colours, and style can be used to identify speaker, content, emphasis, and even emotions.

You can’t change fonts in Line 21, teletext, or Rear Window, and not really in burned-in movie open captioning, but you can change fonts nearly everywhere else in captioning. Most media, including Line 21, teletext, and DVB, permit colour changes. It remains unclear what use font “style” might have in captioning. And no one has proven that captioning viewers actually understand font changes used to indicate “speaker, content, emphasis, and even emotions.”

    We’ve worked with a number of new-media developers over the past several years. Our sense has been that the lack of access is more a lack of knowledge and tools than willingness.

The problem is, once you cure their ignorance, they tell you you’re perfectly right but it isn’t their problem, or you’re perfectly right but they’re not going to change any aspect of what they do. I also find the ignorance argument less and less convincing given the growth of Web standards: Surely these people have heard of accessibility before? At least of captioning?

    Participants began to appreciate that access was something that went beyond people with disabilities and could be potentially helpful to all.

I’ve always considered this an irrelevant and possibly harmful argument. We are making the world accessible for people with disabilities (QED). We are not trying to find a cover story to make it palatable. It’s like this brochure I have for a new low-floor streetcar (more on that shortly, with photos): It carefully illustrates a mom with a stroller walking off the accessible streetcar. Anything to avoid showing why we’re really doing it, right – people in wheelchairs? Because, my God, who wants to look at them?

If nondisabled people find accessibility interesting and useful, that’s fine, but we aren’t doing it for them. The selling point of accessibility is not that it’s useful to people who don’t need it. If you’re trying to say we cannot use a feature because only people with disabilities need it and there is no other use case, then we’re not talking about the same thing anymore.

The project also published a document on online audio description (PDF) that mostly describes the usefulness of the project’s own nonlinear description software, LiveDescribe.

    Unlike closed captioning, quantifying and qualifying errors contained in [an audio] description is a difficult and subjective task. It involves subjective judgments on whether the description of a particular scene or series of events provides a sufficiently accurate and complete picture to users of the visual information. For closed captioning, errors can be readily identified by incorrect spelling, missing words, or words incorrectly matched to the dialogue of the show. However, determining whether a description is correct or sufficient involves a personal judgment.

Really, this reduces the entire concept of quality captioning to accurate transcription. Quality captioning starts there but does not end there. It is often difficult to explain this fact to people with low literacy, like broadcasting executives and broadcasting regulators.

Moreover, it reduces critiques of audio description to mere opinion. I can back up my “opinions.” Are people backing up theirs if their entire response is “That’s your opinion”?

The foregoing posting appeared on Joe Clark’s personal Weblog on 2005.10.26 16:39. The permanent link is:
https://blog.fawny.org/2005/10/26/cnice/


Information

None. I quit.
