[Re-posting. First attempt seems to have failed.]
I've only just seen this:
>Subject: Re: replacing the desktop metaphor (Why any metaphor?)
>Date: 25 Dec 88 17:34:01 GMT
>I suspect that metaphors are useful in keeping consistency. But
>now Jonathan Grudin is about to present a paper in CHI 89 arguing about
>the foolishness of consistency: systems are often improved by
>[...]
I have argued a similar point in connection with programming languages
and learning environments, by attempting to defend the following slogan:
"Power is more important than consistency"
The most obvious example of the trade-off is the comparison between any
natural language (all of which, I believe, are very powerful but riddled
with inconsistencies) and either predicate calculus or any other
formalism that logicians and mathematicians have devised. For reasons
which I do not fully understand (but see below), natural languages,
despite all their complexity and inconsistencies, seem to be things that
all (or should I say most?) human beings learn with far less resistance
than the simpler, more consistent, artificial formalisms.
Let's call the former "scruffy" formalisms, the latter "neat"
formalisms, following Bob Abelson's labelling of AI types. (Clearly
there's a whole spectrum of cases, with most programming languages a
curious mixture of neatness and scruffiness.)
Neat formalisms, including predicate calculus, BNF, and number notations,
are learnt, and put to very good use, by a subset of the population, for
a subset of their activities. So this is not an all-or-nothing issue.
(I've never met a logician or mathematician who attempts to communicate
with her children solely using a neat formalism.)
Also, although it is probably clear that overall natural languages are
more powerful and general than any artificial and neat formalism so far
devised (e.g. natural languages, or at least the ones I know about,
contain their meta-languages, and allow creative deployment using
metaphor and other devices), there are specific kinds of power that they
don't have. E.g. try to explain in English what it means for something
to be increasing its speed while decreasing its acceleration, then
explain it using the notation of differential calculus. So further
development of this topic would require a taxonomy of the kinds of power
that different formalisms have.
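To make the contrast concrete (the notation below is my choice, not from the original discussion): writing s(t) for position, "increasing its speed while decreasing its acceleration" collapses to two short inequalities that English can only express long-windedly:

```latex
% speed ds/dt is increasing, so its derivative (the acceleration) is
% positive; the acceleration d^2s/dt^2 is itself decreasing:
\frac{d^2 s}{dt^2} > 0 \qquad \text{and} \qquad \frac{d^3 s}{dt^3} < 0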
I suspect that one reason why the scruffy more powerful natural systems
are more suited to the human mind is that they handle far more special
cases directly, e.g. using particular words, phrases, idioms, etc., that
just have to be memorised, rather than interpreted on the basis of
general rules. By contrast, the neat artificial systems require you to
do some problem-solving to find the right construction, or some analysis
and interpretation to understand one produced by someone else.
(The best way to explain how 'Can you pass the salt?' is interpreted as
a request rather than a question, is probably by saying that people
simply remember that that is how it is used. Of course, it is possible
to derive the interpretation using very general principles and
assumptions, but nobody need bother to derive this if they simply learn
the usage along with all the other bizarre special forms of expression
encountered in natural languages. E.g. I can do something for your sake
whether you have a sake or not. Of course, general principles may
explain how something got into the language in the first place, even if
they play no role in the particular uses of the construct.)
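That lookup-first picture can be caricatured in a few lines of code (the idiom table, the interpret function, and its fallback are all my invention, purely illustrative):

```python
# Purely illustrative: idioms resolved by direct associative lookup,
# with slow rule-based interpretation only as a fallback.
IDIOMS = {
    "can you pass the salt?": "request(pass, salt)",
    "for your sake": "benefactive marker",  # no 'sake' object required
}

def interpret(utterance):
    key = utterance.lower()
    if key in IDIOMS:                # fast path: remembered special case
        return IDIOMS[key]
    return derive_from_general_rules(utterance)   # slow path

def derive_from_general_rules(utterance):
    # Stand-in for costly compositional analysis (Gricean reasoning etc.).
    return "literal: " + utterance

print(interpret("Can you pass the salt?"))  # → request(pass, salt)
```

Nothing hangs on the details; the point is only that the fast path is a memory access and the slow path is problem-solving.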
A common observation may explain all this:
Human brains appear to contain very powerful and fast associative
storage mechanisms with very large storage capacity. They also appear to
have relatively slow and incomplete problem solving mechanisms. This
suits the learning and use of large numbers of particular cases, rather
than the derivation of particular cases using powerful generative rules.
Moreover, I don't think this is simply a feature of the human brain -
pressures toward this kind of imbalance are probably a result of design
requirements for any physical implementation of an intelligent system
that generally has to act within severe time constraints. This is
because (almost) all symbolic derivational processes are inherently
combinatorially explosive.
However, any formalism that copes directly with lots of special cases,
i.e. has constructs defined specifically to deal with them, is far more
likely to exhibit inconsistencies than a formalism that has a relatively
small and powerful set of primitives which can be combined to generate
all the special cases in a principled way. This is because checking for
consistency is also an inherently combinatorially explosive process: the
number of things to check is an alarmingly fast-growing function of the
number of items in the system when the inconsistencies can involve
relationships between arbitrarily large sets of items.
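The growth in question is easy to quantify (standard counting, not figures from the original post): with n constructs, pairwise checks grow only quadratically, but once an inconsistency can involve any subset of two or more constructs, the count is exponential.

```python
# Number of potential consistency checks among n special-case constructs.
from math import comb

n = 20
pairwise = comb(n, 2)        # clashes between pairs only: n(n-1)/2
any_subset = 2**n - n - 1    # clashes within any subset of >= 2 constructs

print(pairwise)    # → 190
print(any_subset)  # → 1048555
```

Already at twenty constructs the subset count is over a million; natural-language-sized vocabularies are hopeless to check exhaustively.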
Of course there are all sorts of exceptions, including the case of
people using a system that is inherently simple and therefore needs only
a relatively simple formalism (e.g. arithmetic?) or people using a
system only infrequently, so that they can't be expected to remember all
the special cases. Perhaps if human languages were not used so
frequently in daily life they'd have evolved different characteristics?
If, for the reasons indicated, scruffy and powerful systems are
generally easier for people to learn and use (on a regular basis) than
neat consistent systems that obtain their power from generative rules,
then people designing learning environments (and increasingly ALL
computing systems will be learning environments for their users), will
be under strong pressure to sacrifice the requirement of consistency.
Dare I say QED?
Incidentally, all this is one reason why I favour Pop-11 over Lisp (or
LOGO) as a programming language for beginners. The syntax of Lisp is
elegant and is very powerful if you can parse it, whereas that of Pop-11
has lots of special case constructs, and is highly redundant, and
apparently simpler for people to parse (though not simpler for computers
to parse). I think the redundancy helps to make it easier for human
brains to take in, despite the greater surface complexity such as the
use of matching pairs of opening and closing keywords:
    until ... enduntil
    for ... endfor
    define ... enddefine
    if ... endif
etc.
[This needs systematic research]
Returning to Macs and the like:
The desktop metaphor may be simple and consistent for a range of
relatively simple tasks. But what about:
    'Show me all the files in folders A and B that have the
     substring "prog" in their names.'
    'Move everything that I haven't looked at for at least 5 days
     to folder OLD.'
    'Whenever anyone else looks at any of my files, please add their
     names to my nosey file.'
    'When I'm getting near my disc quota send me a mail message.'
    'If any mail message arrives mentioning grants tell me immediately.'
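The first of these is trivial in any textual command or programming language, which is rather the point. Here is a sketch in Python (the folder and file names are invented for the demonstration, built in a throwaway directory):

```python
# Sketch: 'all the files in folders A and B with "prog" in their names',
# run against a temporary directory tree created just for the demo.
import pathlib
import tempfile

root = pathlib.Path(tempfile.mkdtemp())
for folder, names in [("A", ["myprog.p", "notes.txt"]),
                      ("B", ["prog2.lsp", "draft"])]:
    (root / folder).mkdir()
    for name in names:
        (root / folder / name).touch()

matches = sorted(p.name
                 for folder in ("A", "B")
                 for p in (root / folder).iterdir()
                 if "prog" in p.name)
print(matches)  # → ['myprog.p', 'prog2.lsp']
```

Expressing the same query by dragging icons around a simulated desk-top is, at best, awkward.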
"Direct manipulation" analogous to shoving things around on desk-tops or
rooms etc is only relevant to a tiny subset of the things most of us
really want to do with information systems. Maybe only the first few
things we want to do...
> Where consistency and metaphor and consistent
> system images-mental models help and where they hinder is not yet
> properly understood.
> Time for some more research, folks.
> don norman

I agree!
School of Cognitive and Computing Sciences,
Univ of Sussex, Brighton, BN1 9QN, England
IN CASE OF DIFFICULTY use "syma" instead of "cogs"