On 2006.06.20, CNIB, which is now the only name for the former Canadian National Institute for the Blind (though they write it in lower case, sort of like k.d. lang), published a literature review and a set of purported “print clarity standards,” i.e., standards for large print. This organization, widely despised by large swaths of the blind population (like nearly the entire membership of the AEBC), again attempts to speak for every blind person in the country. What it actually ends up doing is failing even at the simple task of communicating its message, which itself is suspect.
Let’s start with their shitty new Web site. The actual Clear Print page shows their ill-executed new blue-green site design, complete with:
- Tables for layout (six, even used for something as simple as a navbar).
- Invalid HTML with incorrect character encoding, no doubt due to some factotum trying to paste a Microsoft Word document into Microsoft FrontPage.
- A measure as wide as your window, since obviously the widest possible line will be the most readable one. Make your window wide enough and the search field disappears, an amazing achievement.
But what is the best part?
The best part is the title of the page: Ckear Print. Can’t these people do anything right?
No, apparently not. The actual guidelines for “clear print” are given – on a Web site – solely as Microsoft Word and PDF documents. “Ah, but,” you reply, “blind people love MS Word documents.” So they do, engaged as they are in a Stockholm syndrome with Microsoft. But a plain-text document published solely in Word format is a barrier to accessibility. (You could claim it’s a WCAG violation, too.)
The PDF turns out to be some kind of brochure with an unclear readership. It’s full colour, yet also includes an entire page of black with 17 words in white Arial type. (How well will your laser printer fare with that page?) Other pages have full-bleed backgrounds of orange, blood red, lilac, and two shades of green. (How well will your ink-jet printer fare?) It’s a Quark 6.5 document with the title “colour palette,” and of course it’s an untagged PDF. (It could still be functionally accessible without tagging, but it isn’t, as 13 images have no alternate text.) Even the PDF’s default page-view options are screwy.
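You don’t even need Acrobat to catch the untagged-PDF problem. A tagged PDF’s document catalog carries a /StructTreeRoot entry (and usually /MarkInfo with /Marked true); a crude scan of the raw bytes will flag the obvious cases. This is my own illustrative sketch, not anything CNIB or Adobe ships – it misses dictionaries hidden in compressed object streams, and the byte strings below are toy stand-ins for real files:

```python
def looks_tagged(pdf_bytes: bytes) -> bool:
    """Crude heuristic: return True if the raw PDF bytes show signs of
    a logical structure tree (i.e., tagging). Misses entries stored in
    compressed object streams, so a False here is only suggestive."""
    return b"/StructTreeRoot" in pdf_bytes or b"/Marked true" in pdf_bytes

# Toy catalog fragments standing in for real PDF files:
untagged = b"<< /Type /Catalog /Pages 2 0 R >>"
tagged = b"<< /Type /Catalog /MarkInfo << /Marked true >> /StructTreeRoot 5 0 R >>"

print(looks_tagged(untagged), looks_tagged(tagged))
```

A real checker would parse the cross-reference table and walk the catalog, but even this much is more diligence than the brochure’s producers showed.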
The whole thing is typeset in Arial. Yes, it is. And that is true even though we are told (on p. 11) “Don’t crowd your text: keep a wide space between letters. Choose a monospaced font rather than one that is proportionally spaced.” Yes, the CNIB’s Ckear Print guidelines use Arial to tell you to typeset your documents in Courier. (And if you’re looking for “a wide space between letters,” why pick a font with exactly the same measured space between letters but a different apparent space between letter pairs?)
Notwithstanding the business about monospaced fonts, CNIB’s own recommendation for font selection reads exactly as follows:
Avoid complicated or decorative fonts. Choose standard fonts with easily-recognizable upper and lower-case characters. Arial and Verdana are good choices.
Ckear Print is marketed as “the first formal print clarity standards for making printed materials more accessible to all Canadians, from fully sighted individuals to people living with vision loss.” In essence, then, CNIB wants you to typeset all your books so they look like printouts of Web pages.
So: At this point, are you buying anything CNIB is selling? Probably not, right?
Fine. Let’s look at their research. “An evidence-based review of the research on typeface legibility for readers with low vision” is dated April 2006. It lists the lead author as Elizabeth Russell-Minda, with two other authors and four “collaborators,” including two people from CNIB Communications. Yes, PR agents were permitted to “collaborate” on scientific research. It’s available only as a Word document even though it could be easily expressed in HTML (save for a handful of footnotes that would have to become endnotes).
The researchers admit, but do not sufficiently emphasize, that they conclude there is no consensus in the literature concerning typeface selection and many other factors: “[W]e found that the overall body of research on low-vision reading and typeface-legibility characteristics is somewhat inconsistent, with an absence of controlled trials.” CNIB would ignore this conclusion and go right ahead and recommend using Arial and Verdana (or is it monospaced fonts like Courier?) as typefaces for books and other printed materials.
There is certainly an admission that, here as elsewhere in legibility and readability, a wide range of variables interact at once: “The presence or absence of serifs, contrast of text to page, thickness of letters, interletter spacing, leading, and the medium on which text is printed (medication labels, for example) can all affect the legibility of type.”
While reading the paper, I was never reassured at all that any of the authors knew the first thing about typography, let alone about the range of typefaces actually available. There’s a helpful diagram explaining, for the umpteenth time, what serifs are – “the fine lines that extend horizontally from the main strokes of a letter.” (Isn’t it odd how the diagram also shows vertical serifs?)
The researchers should be more careful with the use of terms like “serifs” and “sans-serifs” (hyphenation sic) as headings for discussions of font variation. They also seem to believe that serif fonts are more condensed than other fonts, as they “are frequently used in newspapers and books where the space for print is tight.” (Why not use Univers 39 in that case?) Meanwhile, sanserif fonts “have letters with straight lines and no curls or appendixes,” which rather rules out a few letters of the alphabet, like S and O. (How many letters have “appendixes”?)
After reading the document and doing many searches, here is my complete list of typefaces mentioned by name in the report:
- Times (19 mentions), Times New Roman (12), Times Roman (18)
- Arial (26)
- Tiresias (19)
- Lucida (10)
- Universe (sic; 6), Univers (3)
- Swiss (sic; 9)
- Adsans (9)
- Dutch (sic; 8)
- Century Schoolbook (6)
- Verdana (5)
- Helvetica (5)
- Palatino (2)
- Century Gothic (2)
- Book Antiqua (2)
- Tahoma (2)
- APHont (1)
- Clearview (1)
- Garamond (1)
- Avant-Garde (sic; 1)
It seems that standard Windows fonts dominate the list. Even Windows-specific font names are often preferred. I wonder why.
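If you want to redo the tally yourself, whole-word counting over the extracted text is all it takes. A sketch of how one might mechanize it – the function and the sample sentence are mine, purely illustrative, and note the caveat that a bare “Times” also matches inside “Times Roman”:

```python
import re
from collections import Counter

def count_mentions(text: str, faces: list[str]) -> Counter:
    """Whole-word counts for each typeface name in a block of text.
    Caveat: a short name like 'Times' also matches inside compounds
    like 'Times Roman', so disjoint tallies would require subtracting
    the compound counts from the short name's count."""
    counts = Counter()
    for face in faces:
        counts[face] = len(re.findall(r"\b" + re.escape(face) + r"\b", text))
    return counts

sample = "Arial and Verdana are good choices. Arial beats Times Roman."
print(count_mentions(sample, ["Arial", "Verdana", "Times Roman", "Times"]))
```

Run that over the report’s extracted text with the list above and you get the numbers I cite; run it over the list itself and the Windows bias is hard to miss.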
“In general, there is disagreement in the research as to which type of font is the most legible for low-vision readers. Standard serif and sans serif fonts (such as Arial or Times Roman) are generally considered to be the best fonts for legibility.” What is a “standard” font? (One that appears high up in the font menu in Word for Windows?)
The researchers really are not up to speed on the current knowledge of the importance of spacing as it relates to legibility. It isn’t merely a question, as the literature the authors quote would have it, of “crowding” letters together and the way such crowding reduces legibility. We have quite a bit of knowledge from screenfont development, little or none of it published in peer-reviewed journals, that spacing is the first and most important characteristic in legibility of onscreen type. (Here, “spacing” refers to overall tracking and kerning, not a special scenario of letters abnormally close together.) I would expect the effects to be smaller in print due to the higher resolution, different angle of incidence, and different reading distances, but not by much.
I don’t think the attempt to locate studies that deal with diacritical marks in reading of French text made any sense at all – especially not to a native French reader. While French readers are surely aware of accents, if only when they’re missing or incorrect, and French writers are surely aware of them all the time, only an Anglophone or somebody else who doesn’t read or write French natively would suspect that adding diacritics to a pure, unmodified letter would alter legibility or readability.
There seems to be an assumption that a letter without a diacritic is some kind of fundamental, primitive, or Platonic form, while diacritics are what “foreign” languages add for some reason or another. (That is, English letters are pure and correct, while French letters are weird in some way.) If that’s really true, why don’t the dots on i and j bother us more? I would be interested in seeing this approach applied to languages that treat some letters with accents as separate letters, like Spanish, Swedish, Icelandic, and Maltese.
The punchline here, though, is that the researchers couldn’t find any literature on the topic. Of course they couldn’t. It would be like our studying the dots on i and j. (Nonetheless, it would be interesting to study native readers of languages that use diacritics when tested with passages that use the wrong diacritics or none.)
I think the discussion of font weight was muffed completely, but that is because the literature muffs it completely. It seems the only discussion is one of “bold vs. everything else.” (“Fonts may also have a generally wide stroke width, referred to as ‘bold,’ or a thin stroke width, referred to as ‘light.’ ”) Well, I can show you four kinds of bold. Which one are you talking about? The report states that “[I]t is generally recommended that medium heaviness be used and light type should generally be avoided.” Book weights of typefaces – a concept unaddressed in the CNIB report – are not necessarily, or even commonly, “medium” or “light.”
We are also told that, “[F]or emphasis, bold fonts may be used, rather than italics or all-capitals,” which rather undoes a century of English typography. This seems to be a proxy for a claim that too-ornate italics are hard to read. Sure, but how many fonts have those? (How about those lovely sanserif fonts CNIB wants us to use?) How many of those fonts are used in print? How long are the italicized passages?
Rapid serial visual presentation – an experimental method in which you look at a restricted area and see exactly one word at a time, replaced by another and another in sequence – is said to be “not unlike cellphone text messaging.” Really? My phone shows me entire paragraphs. (I can even type a carriage return.)
Much too much credence is placed in the magic bullet of typeface legibility, APHont (“APHont™”). This typeface “was developed by a fontographer” with certain characteristics. Even if it were provably perfect, which it isn’t, nobody uses it. APHont’s character forms are at once too common and too weird.
The paper incidentally considers the issue of legibility of medication labelling – one of those true life-or-death typographic domains, like road signs – but fails to mention Deborah Adler’s superb redesign of the pill bottle itself. Adler’s nice large flat surfaces, accompanied by an included information sheet (that will probably be badly typeset), will nearly always lead to greater legibility than trying to read a label wrapped around a cylinder. However, that design, like others that seem obviously better by a wide margin, needs actual published user testing.
The authors have a love affair with Arial and Tiresias. Arial, in fact, is said to have a “uniform stroke width,” which can be readily disproved by looking at a G nice and big. They swallow wholesale the claims by RNIB and John Gill that Tiresias actually is “superior” to faces like Times Roman or Arial. Since the authors specifically sought out “grey” literature, e.g., Web pages, I wonder why they never found, or refused to cite, my demolition of the research claiming to support Tiresias Screenfont.
My fundamental objection to research on functional typography, including legibility of specific typefaces, is that researchers muddy the waters by choosing lousy comparison fonts, unrealistic reading scenarios, or self-defeating colour combinations to use as cases or controls. They pick fonts that nobody with any type knowledge or training would pick in the first place, often because they cannot see even monumental differences between typefaces. In essence, Windows-using researchers use Windows fonts to test other fonts, and they’ll go so far as to do things like make printouts and show them to grannies through a TV monitor as a means of testing captioning fonts (a real example from Tiresias Screenfont).
How much do you think this research cost, who paid for it, and would it survive peer review in its current form?