I QUIT

Our question of the day: Will noted technology commentator–cum–futurist Cory Doctorow ever learn how computers work?

First he was going around saying plain text (he means dumb-as-a-mule US-ASCII text) is the most adaptable format. It’s actually the least, since it contains no structure. Any structured format – your choice, even OpenOffice – beats plain text hands down. Strike one.

Now Doctorow wants to be able to edit InDesign-produced PDFs on the Linux laptop he bought to be ideologically pure. It isn’t true that one absolutely cannot edit a PDF, but since PDFs are binary files, the difficulty is orders of magnitude higher. The legitimate PDF experts I’ve spent time with create applications that write their own PDFs. But you can’t just go in and delete a comma. Strike two.
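A minimal sketch in Python (the filename is my own placeholder; any real PDF will do) of what you actually find when you crack one open, and why “just go in and delete a comma” isn’t on the menu: page text lives inside numbered indirect objects and, usually, compressed streams.

    # Peek at a PDF's low-level structure. "example.pdf" is a placeholder;
    # any real PDF will do.
    import re

    with open("example.pdf", "rb") as f:
        raw = f.read()

    # The header gives the version, e.g. b"%PDF-1.4" -- so far, so texty.
    print(raw[:8])

    # But the file is a graph of numbered indirect objects, not linear prose.
    objects = re.findall(rb"\d+\s+\d+\s+obj", raw)
    print(len(objects), "indirect objects")

    # Page text normally sits inside compressed content streams, so a naive
    # search for a word you can see on the page usually finds nothing.
    streams = raw.count(b"endstream")
    compressed = raw.count(b"/FlateDecode")
    print(streams, "streams;", compressed, "uses of the FlateDecode (compression) filter")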

And the punchline? There’s always a punchline, and here it is: The noted commentator/futurist deleted my comment that merely said “You can’t. That’s not what PDF is for.”

Cory Doctorow, committed as ever to open cultural discourse. Someday he might even “learn computers.”

The foregoing posting appeared on Joe Clark’s personal Weblog on 2009.07.28 13:56. The permanent link is:
https://blog.fawny.org/2009/07/28/doctorow-pdf/

Marco Arment is still the lead developer of Tumblr. His code sucks. He won’t explain why, preferring to act all high and mighty. Nor will he fix it. But Arment merely provides the latest evidence that stupid code pays. The worse your adherence to Web standards, the more money you make. So do your users.

Arment also brought you Instapaper, the iPhone application. The Web version has marginal code at best (no H1, inline styles, mixed XML/SGML tag closings, DIVitis). But Instapaper’s revenues are more than enough to buy a new MacBook (“a 15″ Instapaper 2.0 congratulatory launch present”). Not a huge payday, but a nice one – and that’s despite the obstacles Arment faces getting apps approved by Apple. (Fake Steve told him to just suck it up.)

Tumblr is his day job and Instapaper is his sideline business, but put the two together and what you’ve got is an ecosystem of stagnant tag soup. And it pays. Not just for him, but for Tumblrizers in general.

Start with the fact that the N+1-style literary elite all seem to run Tumblrs. It’s just expected, and it’s obviously peer pressure at work. I don’t know how this half-assed reblogging platform suddenly became au courant with New York intellectuals. But it strikes me as something that the young women who populate this demimonde feel resentful about having to do. Maybe it’s like being expected to develop just the right tan (but not melanoma).

Even blogging doyennes run Tumblrs, like the off-putting and imperious Elizabeth Spiers. Her bosses always put off firing her way too long, and new bosses are always far too eager to snap her up. Ask Jesse Oxfeld what it’s like working for her.

Tumblrs, then, are a vehicle for in-group fame, a kind of Internet celebrity in miniature. Internet celebrity already is celebrity in miniature – comparable to the way a celebrated intellectual swims in a rather small pond. As we are dealing here with Tumblr-wielding intellectuals, the effect is multiplicative.

The hammerhead in this ecosystem is surely the book agent, who skims through the most “outrageous” Tumblrs, contracts in hand. An egregious example is of course Look at This Fucking Hipster, a kind of Hot Chicks with Douchebags for ’09.

It’s gotten to the point where the stupider you are, the farther away from real writers you situate yourself on the career axis, the likelier you are to score a book contract. By the same token, the stupider your blogging platform, the faster you get hired.

The exception that proves the rule here is Christian Lander, who is anything but stupid and, save for his book’s half-assed graphic design, deserves to be famous. Perhaps significantly, he conquered the globe using not Tumblr, which wasn’t popular then, but WordPress. What’s his second act? What is the Tumblr user’s? Care to bet on which of the two will turn out stupider?

The foregoing posting appeared on Joe Clark’s personal Weblog on 2009.07.28 12:29. The permanent link is:
https://blog.fawny.org/2009/07/28/stupid-tumblrs/

Dave Morris (emphasis added):

But to me, the most amazing thing about Dépêche Mode is how, despite the extravagant and expensive visuals, the hired session players, their slick branding (I say that as a compliment) and even the grandiosity of their own music, the show never quite feels corporate.

Because Dépêche Mode is a graphic-design band.

The foregoing posting appeared on Joe Clark’s personal Weblog on 2009.07.27 15:02. The permanent link is:
https://blog.fawny.org/2009/07/27/neverquite-dm/

In broad strokes, I support what Ezra Levant is doing to shine light on the excesses of human-rights commissions. The specific topic I broadly support is his criticism of commissions’ encroachment on freedom of speech, not his general critique of human-rights commissions. He believes what he says about infringement of free speech, but at root he also just wants all 13 of them shut down. This would not end well for people with disabilities, as any reading of case law will confirm. So let’s not throw out babies with bathwater.

Levant has congratulated writers and organizations who would otherwise be his opponents, like “left-wing” newspapers and gay groups, for standing alongside him in his criticism of abrogation of free speech. So he has no issues with aligning himself with erstwhile critics if he and they agree on something.

But not everybody is so generous with their affiliations or willing to overlook ink spilling across the blotter. If you support what Levant is doing on the topic of freedom of speech, you might be somewhat taken aback to learn he has called for summary execution in “war zone[s].” While Levant believes free speech is an inalienable right, as is the right to a fair trial even in the case of alleged hate speech, he does not believe in due process of law for certain people he views as worthy of “summary hearing and execution on the spot.” (He thinks there are actually several kinds of due process of law. This puts him in agreement with those who filed complaints against him at human-rights commissions.)

Something tells me the right not to be shot to death under the cover story of a “summary hearing” is a more important human right than, say, the one that lets you publish the Danish cartoons.

Shorter Ezra Levant: I have the right to say anything I want, and if you take me to court over it at least I’ll have due process of law at my disposal. But some people we ought to just execute right then and there.

At least when Islamofascists do something along those lines they run the whole show in the public square.

The foregoing posting appeared on Joe Clark’s personal Weblog on 2009.07.27 13:17. The permanent link is:
https://blog.fawny.org/2009/07/27/antilevantism/

Moz here means Mozilla, not Morrissey. Sylvia Pfeiffer et al. have been exploring the issue of captioning, subtitling, and other forms of moving text as used on the Web. There’s too much of an insistence on an open standard to guarantee a successful outcome (sometimes the best option is one somebody else owns), but for today, let’s consider some warning signs from Pfeiffer’s recent report.

Open captions don’t have to be burned in

You can use a separate text track (of any kind) and merely force a player to display the captions. Same goes for subtitles. You see it quite a lot in the DVD field (check the concept of “forced subtitles”; it involves setting SPRMs, but I’ll let you look that up).

Open captioning merely means “they’ll be visible to everybody who can see.” It is not the opposite of encoded captioning or captioning provided as a separate stream. Hence it is also incorrect to state that “Only closed captions and subtitles, as well as lyrics and linguistic transcripts, have widely-used open text formats.”

DVD subpictures are essentially TIFFs

Pfeiffer helpfully suggests that we could do something like run OCR on DVD subpictures, but implies this would be ever so tricky, listing an alphabet soup of open-source formats nobody now uses or will ever use. Subpictures start out as TIFFs and are merely run-length-encoded graphics files that are trivial to decode. Once decoded, they will have been helpfully removed from any obfuscating background graphics and can be read via OCR with high accuracy, at least in scripts like Latin and Cyrillic.
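Assuming the subpicture has already been run-length-decoded and saved as an ordinary bitmap, the OCR step really is that mundane. A rough sketch in Python using the Pillow and pytesseract packages (my choice of tools, not anything Pfeiffer’s report specifies):

    # OCR an already-decoded DVD subpicture bitmap. Assumes the run-length
    # decoding happened elsewhere and the result was saved as an image file;
    # Pillow and pytesseract (a Tesseract wrapper) are assumed installed.
    from PIL import Image
    import pytesseract

    def subpicture_to_text(path, lang="eng"):
        """Return the OCR'd caption text from a decoded subpicture image."""
        img = Image.open(path).convert("L")  # greyscale; subpictures use only a handful of colours
        # With no video frame behind the text, plain Tesseract reads this
        # sort of image with high accuracy in scripts like Latin and Cyrillic.
        return pytesseract.image_to_string(img, lang=lang).strip()

    print(subpicture_to_text("subpicture_0001.png"))  # placeholder filename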

The myth of written audio description

I wonder if we’re ever going to be able to kill off this mythology, repeated ad nauseam by beginners in the industry, that handing a blind or deaf-blind person a transcript of an audio description is in some way helpful. It isn’t. Film is a medium of motion; action happens right now and so does its description. (Or a second or two before or after the action, but in any event not at some other time jotted down in a text file.)

The idea that deaf-blind people have an interest in transcriptions of audio descriptions is essentially false. There was one trial project – one – with ambiguous results. The idea is a non-starter. If a soundtrack can be reduced to a printout, why can’t the moving image be reduced to a single photograph?

This is all about bottom-centred titles

As with the YouTube case and so many others, I get the impression that proponents have watched a bit of captioning, maybe five minutes here and there, and have added that as a kind of icing on the cake of lifelong subtitle viewing. Half these people are British and cannot actually distinguish the two, also perhaps claiming there is no distinction.

Hence I see Mozilla’s and YouTube’s work as doing nothing but providing invariant-bottom-centred titles of one sort or another, with likely limitations on number of lines and number of characters per line. In short, there is no understanding that pop-on captions must be positioned in all cases, that even some subtitles must be positioned away from screen bottom, and that some subtitlers use flush-left justification.

Many combinations unaccounted for in an all-bottom-centre system are seen every day (a rough sketch of what a cue model would actually have to carry follows the list):

  1. Multiple languages per subtitle, e.g., one line each of Chinese, Vietnamese, and English. (Hence this statement is false: “If, for example, there are subtitles in ten different languages, a user will only want to see the subtitles of one language at a time.”)
  2. Multiple simultaneous caption blocks.
  3. Simultaneous captioning or subtitling, or, more commonly, a subtitled program with occasional added captions. (Let me guess: Your system makes me choose one or the other but not both.)
  4. Scrollup captioning. While misused, it exists and has to be accounted for.
  5. Combinations of scrollup and pop-on captioning, e.g., scrollup for dialogue and pop-on for music.
  6. Multiple caption streams, as in verbatim and easy-reader versions (rare but not unknown) or same-language and translated versions (viewable every day of the week on American television).
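None of this fits a model whose only fields are a start time, an end time, and a line of text. Here, roughly, is the minimum a cue would have to carry – a sketch in Python with field names of my own invention, not any format Mozilla or YouTube has proposed:

    # Illustrative only: roughly the minimum a caption/subtitle cue would need
    # to carry to handle the combinations listed above. The field names are my
    # own invention, not any proposed Mozilla or YouTube format.
    from dataclasses import dataclass
    from enum import Enum

    class Mode(Enum):
        POP_ON = "pop-on"
        SCROLL_UP = "scroll-up"

    @dataclass
    class Cue:
        start: float                  # seconds
        end: float
        lines: list[str]              # one cue can carry several lines (and languages)
        stream: str = "captions-en"   # verbatim vs. easy-reader, captions vs. subtitles
        mode: Mode = Mode.POP_ON      # scrollup has to be representable too
        origin: tuple = (0.5, 0.9)    # fractional (x, y) anchor: position anywhere, not just bottom centre
        align: str = "centre"         # some subtitlers set flush left

    # Two cues on screen at once, from two different streams: a renderer
    # has to cope with both simultaneously.
    on_screen = [
        Cue(10.0, 13.0, ["[phone rings]"], stream="captions-en",
            origin=(0.2, 0.15), align="left"),
        Cue(10.5, 14.0, ["Je ne sais pas.", "I don't know."],
            stream="subtitles-fr-en"),
    ]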

To explain this deficiency another way, there is a rush to solve what underinformed people view as the dominant use case with no understanding of other use cases. When presented with the latter, the response is either “That doesn’t happen” (it does) or “That’s pretty rare” (across a mythical 500 channels running 24/7, it isn’t).

Solving the four-fifths of the problem you think is the entire problem means you haven’t solved the problem.

Then we get into issues like typography (I’m not naïve; it’ll be all Arial all the time, and you’ll defend that to the death); in-frame vs. out-of-frame display; and just how Mozilla and everybody else intends to actually digitize largely undocumented proprietary formats. The complete ideological zeal for open-source formats will, I promise, get in the way of the entire project.

Simplistic answers to complex problems

From my reading, it seems Mozilla is doing what YouTube did: Grasping at the most expedient and simplistic solution.

Mozilla manages this degree of oversimplification despite publishing one paper after another ostensibly documenting months of research. “The aim of the study,” Pfeiffer writes, “was to ‘deliver a well-thought-out recommendation for how to support the different types of accessibility needs for audio and video, including a specification of the actual file format(s) to use.’ ” What they’re on track to deliver is a very elaborate system to encode the dumbest possible form of subtitling and declare it suitable for all variations of everyone’s needs.

The foregoing posting appeared on Joe Clark’s personal Weblog on 2009.07.27 12:57. The permanent link is:
https://blog.fawny.org/2009/07/27/mozcc1/

The Indigo E-book service with the unfortunate name, Shortcovers (Shortcomings?), now claims to be able to convert publishers’ files, even “PDF,” into ePub files. At a discount!

Let me offer a prediction of what’s gonna happen.

Shortcovers, which has already proven it cannot handle reasonably complex source texts, will “convert” your original files into ePub (or, as it errantly calls it, ePUB – I wonder what the U and B stand for). ePub is, at bottom, just XHTML 1.1 packed into a Zip container. XHTML is HTML and XML at the same time, but that is one of the many details that will be overlooked.

The files they’ll return to you – and bill you pennies a byte for! – will surely be the worst kind of tag soup. Only tag soup can be so cheap. The typical case of marking up everything as a paragraph will be a walk in the park compared to this malarkey. In fact, I suspect each chapter will be marked up as a single paragraph BRoken up with BR “tags.”
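To make “HTML and XML at the same time” concrete: XHTML has to be well-formed XML, so any garden-variety XML parser doubles as a cheap smoke test. A sketch in Python (the two markup snippets are invented examples of mine, not actual Shortcovers output):

    # XHTML must be well-formed XML, so the standard-library parser is a cheap
    # smoke test. Both snippets are invented examples, not anything Shortcovers
    # has actually produced.
    import xml.etree.ElementTree as ET

    tag_soup = "<p>Chapter One<br>It was a dark and stormy night<br>"
    semantic = "<div><h1>Chapter One</h1><p>It was a dark and stormy night.</p></div>"

    for label, markup in (("tag soup", tag_soup), ("semantic XHTML", semantic)):
        try:
            ET.fromstring(markup)
            print(label, "is well-formed")
        except ET.ParseError as err:
            print(label, "is rejected:", err)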

Canada, the natural home of the mediocre, has barely any definable skills at semantic markup. Shortcovers developers won’t know what that means. Neither will publishers. The latter is actually a larger problem. I kept getting inklings of this in advance of and at BookCamp (q.v.), where all of a sudden publishers who can barely wrangle their Wintel boxen are suddenly concerned with putting books online in electronic format.

Publishers as a species, like authors as a separate species, self-select against those who know anything about computers. They’re literary people, or marketing people, or some kind of people other than “computer” people.

Hence even though I can teach anyone, and have taught many people, the basics of Web standards in eight minutes flat (and co-deprogrammed a group of tag-soup-indoctrinated blind kids in a single day), these are not the kind of people who will naturally cotton to the concepts underlying semantic markup. And they don’t have to! They’re publishers and writers, not “text encoders.” [continue with: Tag soup, now at a discount →]

The foregoing posting appeared on Joe Clark’s personal Weblog on 2009.07.24 12:45. The permanent link is:
https://blog.fawny.org/2009/07/24/shortcovers-tagsoup/

Today my article, long in gestation, appears at A List Apart: “Unwebbable” examines how not every document can be folded and mutilated into a “Web page.” (Illustrations.) HTML simply does not provide the right semantics, or even enough semantics, for the many document types that predate the Web. And – much worse – people with half-assed knowledge try to import pre-Web document formats intact to the Web.

People like John August. You may remember him from such films as Go, Big Fish, and Charlie and the Chocolate Factory, the last of which I watched in London with open captioning. August is an established screenwriter, among other things, and is one of the few of that breed who actually bothers with the Web, a medium he half-understands. It seems to be the wrong half.

As the article describes, August, who runs a WordPress blog, has produced a plug-in for various blog systems, Scrippets, with which “you can add boxes of nicely-formatted script to your blog.” The problem is you cannot graft screenplay format onto the Web, for the reasons I describe.

Not a greenhorn

Now, the standard objection here is that I must be too green in this field to truly understand the deep history and philosophical significance of screenplay formatting. Established screenwriters have many variations of this objection which they foist upon newbies. The problem here is that I understand screenplay formatting quite well. I’ve done my homework.

As the article explains, screenplays are a genius example of document design. They are superb at their intended function – to impart the dialogue and directions of a movie in strict relation to the movie’s runtime. (One page more or less equals one minute of screen time.)

The typography is atrocious, and on this I give no quarter. The reason we use monospaced fonts – that just means Courier – this late into the 21st century is that scripts from the old studio era were written on typewriters. (What else could they have used?) Monospaced fonts are a skiamorph (or skeuomorph, if you prefer a narrower transliteration); their use reflects nothing but unconsidered habit.

Typewriter fonts merely give the illusion that a checklist of screenplay-format requirements has been met. You could do exactly the same thing with proportional fonts, except it would be easier to cheat (busting the one-page-a-minute rule) and get away with it. Monospaced fonts are a kind of quality-control system that keeps honest writers honest.

But there’s no reason to use Courier. Outline-font versions of Courier are atrocious and are at best a pale shadow of what is actually a superb typeface when viewed on an IBM Selectric. (Elite works better than pica.) Selectric Courier is a letterpress face, and letterpress faces digitize poorly. We are still living with a 1980s design decision, taken at Adobe, to digitize Courier as a stroked font, in which all strokes are actually the same stroke of identical width that runs around corners, like making a typeface out of a coat hanger. There absolutely is not an adequate digitization of Courier in existence, including the one you use, which, as of today, you should cease to use.

In the intervening decades we have spent quite a lot of time designing vastly superior monospaced fonts. The nicest thing you the screenwriter can do for your readers is to switch to a real monospaced font. Consolas, or any other variant of TheSans Mono, is a great place to start. So is Fedra Mono. You can and must safely ignore complaints that your script has the “wrong” formatting. It will surely have exactly the right formatting; you’re just using a better font. (This would be a fruitful master’s project for a type-design student at, say, Reading.)

To reiterate, the standard objection holds that I couldn’t possibly know what I’m talking about because everybody who launches any criticism of screenplay formatting must be new around here. I am not new and I do know what I’m talking about.

The same cannot really be said for August’s use of the Web. [continue with: ‘Unwebbable’ →]

The foregoing posting appeared on Joe Clark’s personal Weblog on 2009.07.21 14:45. The permanent link is:
https://blog.fawny.org/2009/07/21/unwebbable/

It’s your blog’s 2,149th day without a print stylesheet!

Keep it up, Observers! You’re sure to hit 10,000!

The foregoing posting appeared on Joe Clark’s personal Weblog on 2009.07.20 15:18. The permanent link is:
https://blog.fawny.org/2009/07/20/unobservant-printing/

[Photo: rusted CN freightcar sits in front of new silver-and-yellow boxcar labelled IST]

The foregoing posting appeared on Joe Clark’s personal Weblog on 2009.07.20 14:56. The permanent link is:
https://blog.fawny.org/2009/07/20/ist-jaune/




Copyright © 2004–2025