First of all, can somebody tell me why the site for Web4All is so horrid? Should the cobbler’s children really go unshod in this case?

Anyway, there are a few papers of interest to be presented next week in bucolic Banff, which is a heck of a long haul for the many Brits planning to attend. Maybe someday a university or multi-billion-dollar megacorporation will fly me out there. (Most papers are, also curiously, PDFs. As usual, Google the URL.)

  • “Accessibility 2.0: People, policies and processes” (in many formats) by Brian Kelly et al. (including Patrick Lauke): More on the business of how guidelines measure only one thing, and do so not very well, while accessibility per se can be achieved by other methods that don’t rely on applying checkmarks next to guidelines.
  • “Accessibility for simple to moderate-complexity DHTML Web sites” by Cynthia Shelly and George Young: Gives a lot of code samples to functionally reproduce buttons, tooltips, menubars, and the like.
  • “Experimental evaluation of usability and accessibility of heading elements” by Takayuki Watanabe: Interesting study that compared sites with and without headings with sighted and blind users. Sighted users had a Firefox extension available to make headings directly selectable, which is one issue casting doubt on the study. The tiny number of blind subjects – four (vs. 16 sighted) – is another. Nonetheless, everyone took longer to locate information on sites without headings. Blind people took roughly 1.5 to 6 times as long to accomplish those tasks even on sites with headings, but that was still faster than without headings. Semantics matter, people.
  • “Making multimedia content accessible for screen-reader users” by Hisashi Miyashita et al.: This is the one that got all the coverage recently, even though it’s vapourware and certainly won’t work on Macs. It’s supposed to let you operate any multimedia player by keyboard alone. The important thing to understand here is that the claimed ability to play audio description is actually an ability to use a synthesized voice to read out a text file of audio description. Thus does the decade-long threat of eliminating the human voice from audio description become real. The day the rest of the dialogue in online video gets replaced by computer yammering is the day I’ll consider this option even remotely viable.
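For a sense of what “functionally reproducing” a button involves, here is a minimal sketch in the spirit of the Shelly/Young approach – my own illustration, not code from their paper. A div posing as a button needs at least a role, keyboard focusability, and activation on both Enter and the space bar, which real buttons get for free:

```html
<!-- Hypothetical example, not taken from the paper -->
<div id="save" role="button" tabindex="0">Save</div>
<script>
  var btn = document.getElementById('save');
  function activate() {
    // Stand-in for whatever the control actually does
    alert('Saved');
  }
  btn.onclick = activate;
  btn.onkeydown = function (e) {
    // A native button fires on both Enter (13) and space (32)
    if (e.keyCode === 13 || e.keyCode === 32) activate();
  };
</script>
```

Even this toy version shows why the paper needs “a lot of code samples”: every behaviour a native control provides has to be rebuilt by hand.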

The foregoing posting appeared on Joe Clark’s personal Weblog on 2007.05.04 15:17.





Copyright © 2004–2024