Shortlog - a log of everyday things


2010-10-19

I know I'm nearly a month behind on the Not So Humble Pie blog, but oh my goodness, that is a lot of cupcakes. A periodic tableful. And ones with Feynman diagrams. And a benzene resonance cake. So many adorable delicious-looking sweets. This makes me want to bake cookies. I need to go to the grocery store.

A brief discussion with Sarah today got me thinking about professional tools - in particular, about whether professional tools for a specialized task should be intuitive. Background: Sarah is planning to get a DSLR and become more serious about photography. I suggested that she play around with digiKam, KDE's (notably powerful) open-source tool for professional photographers. She replied, "I tried digikam previously...it's not intuitive." My first thought was to protest, since I love the tool. After a moment's more thought, I realized that her statement was valid - digiKam is NOT as intuitive as other tools for super simple editing. Then I thought: "And it shouldn't be."

Back to the interesting and more generalizable question: should professional tools be intuitive? A tool's level of intuitiveness reflects how well the user's mental model aligns with the model the tool presents [1, 2]. I suspect that for professional tools, the user's natural mental model is less efficient in the long run than the superior model the tool embodies. This means that at some point, the user is going to have to suck it up and learn a nonintuitive system for a greater long-term benefit. Example: compare the efficiency of a master programmer using Vim against the same programmer using Notepad. Vim's model is anything but intuitive to a first-time user, but you'd be hard-pressed to find a master programmer who didn't accept that Vim is a far more effective tool for writing software than Notepad. The same applies to professional photography - there's a reason the pros choose Lightroom over MS Paint: the learning curve is a pain now, but it adds up to less pain in the long run.

The second interesting question, then: how do you know when to force a superior model on the user? When is "good enough" good enough? Or, put another way, how should a user know when it's a good idea to take a temporary performance hit for the long-term benefit of switching tools? People tend to be bad at making short-term sacrifices for long-term goals (see Dan Ariely's introduction to The Upside of Irrationality), so I suspect this irrationality keeps many people from adopting more effective tools.

I should note that tool "professionality" isn't an excuse for poor design - even a nonintuitive tool model should be well thought out and optimized for the user's long-term benefit. It just need not match the novice user's initial mental model. I wonder what other papers I can find on this topic.

  1. Edwin L. Hutchins, James D. Hollan, and Donald A. Norman. "Direct Manipulation Interfaces." Human-Computer Interaction 1(4), 1985, pp. 311-338.
  2. Stuart K. Card and Thomas P. Moran. "User Technology: From Pointing to Pondering." Proceedings of the ACM Conference on the History of Personal Workstations, 1986, pp. 183-198.