Metaphor and Symbolic Activity, 11(2), pp.101-123.

Artificial Intelligence and Metaphors of Mind: Within-Vehicle Reasoning and its Benefits [footnote]

John A. Barnden, Stephen Helmreich, Eric Iverson & Gees C. Stein

Computing Research Laboratory & Computer Science Department
New Mexico State University
Las Cruces, New Mexico, USA

RUNNING HEAD: Within-Vehicle Reasoning

KEYWORDS: beliefs, propositional attitudes, mental states, psychological metaphors, metaphors of mind, folk psychology, discourse coherence, common-sense reasoning, metaphor-based reasoning.

ABSTRACT

We define within-vehicle and within-tenor reasoning to be reasoning that is done on-the-fly within the vehicle domain or tenor domain, respectively, of a conceptual metaphor, during the comprehension of utterances that manifest the metaphor. The main claim of this paper is that, at least in Artificial Intelligence systems for understanding metaphorical discourse, within-vehicle reasoning is often beneficial. Indeed, in several important respects it is to be preferred over within-tenor reasoning that would achieve the same overall effect; and in any case for certain types of metaphorical sentence there is useful within-vehicle reasoning that can be done but for which there is no feasible within-tenor parallel. Although some work on metaphor involves within-vehicle reasoning, its benefits do not appear to have been explained and argued. The examples in the present paper focus on metaphors for the particular domain of mental states and processes; also, the examples involve only metaphors that are already familiar to the understander. However, these restrictions are not essential to the arguments.

1. INTRODUCTION

Black (1979) discusses a metaphor of ``MARRIAGE AS ZERO-SUM GAME.'' He says that part of the ``implication complex'' for the vehicle domain of this metaphor might be the following pair of propositions [note 1]:

(G1/2) A game is a contest between two opponents.

(G3) In a game, one player can win something [points, say] only at the other's expense.

According to Black, someone who is trying to understand an utterance that uses the metaphor is invited to create a parallel implication complex that fits the tenor domain (namely marriage). Part of this complex might be:

(M1/2) A marriage is a sustained struggle between two contestants.

(M3) In a marriage, one contestant gains rewards only at the other's expense.

For definiteness, let us assume that the rewards and expense in M3 are a matter of emotional satisfaction. Suppose some piece of discourse has been portraying Xavier and Yolanda's marriage as a zero-sum game, and then the next sentence is ``Xavier won some points.'' Clearly, a metaphor-based inference that can then be drawn is that Yolanda lost some emotional satisfaction. However, there are two alternative routes to this inference:

(a) First use a correspondence between winning game-points and gaining emotional satisfaction to infer that Xavier gained emotional satisfaction. Then use M3 to infer that Yolanda lost emotional satisfaction.

(b) First use G3 to infer that Yolanda lost game points. Then use a correspondence between losing game points and losing emotional satisfaction to infer that Yolanda lost emotional satisfaction.

In (a), we first do a conversion operation and then a within-tenor inference step. In (b), we first do a within-vehicle inference step and then a conversion operation. The within-tenor and within-vehicle steps are parallel to each other. In a more complex example, several within-vehicle steps might be done before a conversion operation is effected.

The main claim of this paper is that, at least in Artificial Intelligence systems for understanding metaphorical discourse, within-vehicle reasoning (henceforth WVR) is often beneficial. Indeed, in several important respects it is to be preferred over within-tenor reasoning (WTR) that would achieve the same overall effect; and in any case, for certain types of metaphorical sentence there is useful WVR that can be done but for which there is no feasible within-tenor parallel.

We do not, of course, claim that WTR should not be done at all. After all, tenor propositions constructed through WVR plus conversion operations may then be subjected to WTR that has no WVR parallel.

We stress that when we refer to WVR in this paper we will mean WVR that is done on-the-fly, on encountering a sentence. Also, the distinction between on-the-fly within-vehicle reasoning and on-the-fly conversion of information from vehicle to tenor is crucial in this paper. It is of course standard for metaphor theories and analogy theories to envisage such on-the-fly conversion, where the vehicle information that is converted may well be the result of past reasoning within the vehicle. For instance, this is presumably the interpretation we should place on some of the information transferred during analogy by the PHINEAS system of Falkenhainer (1990). This paper will not be concerned with prior within-vehicle reasoning of this sort, important though it is.

To claim that (on-the-fly) WVR can be useful may not seem startling. However, most technically detailed work on metaphor-based reasoning fails to address it. One of the few exceptions is the work in Artificial Intelligence of Hobbs (1990). (For a general picture of AI work on metaphor, see the literature reviews in Hobbs 1990, Indurkhya 1992 and Martin 1990, 1996.) A few systems for analogical processing in AI can also be seen as using various types of WVR --- see, e.g., Brown (1977), Kedar-Cabelli (1985) and Mitchell & Hofstadter (1990), although we do not have space here to clarify and justify these attributions. Also, the approach to metaphor of Lakoff and colleagues within linguistics appears to involve WVR. For example, Lakoff & Turner (1989, p.62) discuss the metaphor ``LIFE AS JOURNEY,'' and appear to be alluding to within-vehicle inferences when they discuss an inference that when someone hits a ``roadblock'' in their life they must deal with it somehow, for instance by removing it.

Notably, however, WVR does not play an explicit role in the prominent analogy/metaphor systems SME (Falkenhainer et al. 1989) and ACME (Holyoak & Thagard 1989). The exact role of WVR in the wider thinking of the authors of these computational systems is not entirely clear to us. The SME and ACME systems themselves involve reasoning processes neither within the vehicle nor within the tenor --- the only reasoning they involve takes the form of vehicle-to-tenor mapping/transfer. Of course, this does not stop us thinking of the vehicle items that are mapped/transferred as being the result of previous reasoning within the vehicle. The authors are presumably favorably disposed to such WVR, and there is no apparent reason to think that WVR is in principle incompatible with the ACME and SME systems. But even authors who do explicitly advocate or implement some form of WVR fall short of discussing whether it should be used in preference to WTR in cases where WTR could parallel the WVR.

The present paper only explicitly considers metaphors that are already familiar to the understander, i.e. conventional from the understander's point of view. Accordingly, we assume that the system already knows how vehicle concepts correspond to tenor concepts (to the extent that they do). The paper does not address the question of how such correspondences arise in the first place, even though this has been a primary focus of previous work on the algorithmic details of metaphor and analogy. In particular, therefore, we will not be considering metaphors that are novel to the understander. However, we will explicitly consider the question of linguistic expressions that use a familiar metaphor in a way that is novel to the understander. In this way, we assume that familiar metaphors can still be live. People need not be conscious of the aliveness of metaphors familiar to them, or even be consciously aware that utterances based on the metaphors are indeed metaphorical (cf. Lakoff & Turner 1989, pp.62, 129, etc.).

Our emphasis is strongly on the pragmatic role that metaphors play in multi-sentence discourse, rather than on issues such as the extraction of the meaning of individual metaphorical sentences or the determination of when a sentence is metaphorical. The important way in which metaphors can affect the understanding of extended discourse is clear from Gibbs (1986), Johnson (1987), Hobbs (1990), Lakoff & Johnson (1987), Martin (1990) and Nayak & Gibbs (1990), among other works.

The paper is structured as follows. The second section discusses the types of metaphorical discourse and attendant reasoning processes with which we are specifically concerned in our research. Within that context, the third section sets out the nature of WVR in more detail. The fourth section presents the arguments for our claims concerning WVR. The fifth section contains additional discussion, and the sixth concludes.

WVR is central in an AI system, called ATT-Meta, that reasons about metaphorically described mental states. A prototype is described in Barnden et al. (1994a, 1994b), and earlier theoretical work is set out in Barnden (1989a, 1989b, 1992). ATT-Meta is in rough conformity with the processing scheme implied by the present article, but the paper does not describe the system as such.

Two points about terminology before proceeding. First, it is fairly standard, though unfortunately confusing, to let a ``metaphorical meaning'' of a metaphorical sentence be a meaning in terms of the tenor. For example, some might argue that one metaphorical meaning of ``Mike shines at parties'' is that at parties Mike is lively and attracts attention. A ``literal meaning'' of a metaphorical sentence is a meaning in terms of the vehicle; so, for the same example sentence, a possible literal meaning is that Mike [literally] emits light at parties. (But we do not assume that metaphorical sentences always have well-defined metaphorical meanings. See section 4.1). The second terminological point is that, in common with Lakoff (e.g., in Lakoff 1993b and Lakoff & Turner 1989) we take a metaphor to be conceptual matter, not a surface linguistic object. We say that metaphors are manifested in metaphorical linguistic expressions.

A further point to note is that the precise tenor domain of a metaphor varies according to the particular manifestation. For instance, if the metaphor of ``MIND AS CONTAINER'' is used in discussing X's mental states, we take the tenor to be X's mental states, not mental states in general. Of course, that tenor can still inherit information about the domain of mental states in general.

2. METAPHORS OF MIND AND DISCOURSE REASONING

To clarify our claims, we need to look at some specific metaphors and discourse chunks manifesting them. We discuss only metaphors of mind, as these are the metaphors with which we have been concerned in our AI system development. However, our main arguments do not rely on special features of the mental domain as compared to other abstract domains.

In mundane discourse, mental states and processes are often described through the lenses of various common-sense, metaphorical views of the mind. Consider:

``John was leaping from idea to idea.''
``One part of Mike knows he's clumsy.''
``George put the idea that he was a racing driver into Sally's mind.''
``His desire to go to the party was battling with the knowledge that he ought to work.''
``Peter hadn't brought the two ideas together in his mind.''
``Martin had a blurred view of the problem.''
``Veronica caught hold of the idea and ran with it.''
``That belief was firmly fixed in his mind.''

We claim that all of these sentences are metaphorical (in ordinary contexts), are easily understandable, and exemplify sentence types commonly found in mundane written and spoken English prose. Therefore, it is important to have a way of coping with them. They illustrate only a fraction of the ways in which mental states can be metaphorically described. For other ways, see for example: Belleza (1992), Casadei (1993), Cooke & Bartha (1992), Fesmire (1994), Gallup & Cameron (1992), Gentner & Grudin (1985), Gibbs & O'Brien (1990), Jäkel (1993), Johnson (1987), Katz et al. (1988), Lakoff (1993a), Lakoff, Espenson & Schwartz (1991), Lakoff & Turner (1989), Larsen (1987), Leary (1990), Lehrer (1990), Richards (1989), Roediger (1980), Sweetser (1987, 1990), and Tomlinson (1986).

Metaphors of mind often provide extra information, possibly quite subtle, about the quality of mental states themselves. For example, the sentence, ``The belief that Veronica was evil was firmly fixed in Mike's mind'' says something important about the particular way in which Mike believed that Veronica was evil, and suggests that it would have been difficult to persuade Mike otherwise. The variations of mental state type conveyed by metaphors are important partly because they can make a crucial difference to one's understanding of a chunk of discourse, through their influence on one's reasoning about the depicted situation. This is illustrated in the remainder of this section.

2.1 An Example: Veronica and the Recipe

Consider the following passage:

(1) ``Veronica was preparing for her dinner party. Her brother's cookbook had said to fry the mushrooms for one hour.''

Consider the following four alternative continuations of this passage, which differ only in the way in which they portray Veronica's mental state:

(1a) ``Although she knew the recipe was wrong, she followed it anyway.''

(1b) ``Although in one part of her mind she knew the recipe was wrong, she followed it anyway.''

(1c) ``Although one part of her knew the recipe was wrong, she followed it anyway.''

(1d) ``Although she said to herself, `The recipe's wrong,' she followed it anyway.''

For each continuation, one discourse coherence task for the understander is to reconcile Veronica's obeying the recipe with her belief that it was wrong. The different metaphors of mind that are used in sentences (1b--d) lead to considerable differences in the reasonableness of various possible reconciliations. Similarly, the use of those metaphors leads to different effects from the absence of such metaphor in sentence (1a).

The metaphor manifested in (1b) is ``MIND AS PHYSICAL SPACE.'' Under this metaphor, the mind is viewed as a physical region, and ideas or thinking episodes can lie at locations within the region. The relative placing of thoughts within the region can have considerable significance. Sentence (1b) is an example that overtly refers to a specific subregion (part) of the mind-region. Ideas in one subregion can be incompatible with ideas in another. Alternatively, ideas in one subregion can simply be absent from another. One reasonable interpretation of (1b) is that the part of Veronica that had the strongest influence on her actions contained a belief that the recipe was correct. The part actually mentioned in (1b) was subsidiary.

The well-known ``MIND AS CONTAINER'' metaphor (see Jäkel 1993 for a recent detailed account) is a special case of ``MIND AS PHYSICAL SPACE.'' We reserve the name ``MIND AS CONTAINER'' for cases in which the mind is indeed viewed as a material object such as a box, room or can, not just as a region.

Sentence (1c) manifests a metaphor that we call ``MIND PARTS AS PERSONS.'' The mind is viewed as having parts that themselves are viewed as people --- they have thoughts and can even talk to each other. Much as with ``MIND AS PHYSICAL SPACE,'' one part can have thoughts that conflict with thoughts of another part. Or, one part can have a thought that is simply not entertained by other parts. The ``MIND PARTS AS PERSONS'' metaphor is strongly related to a multiple-selves metaphor discussed by Lakoff (1993a).

We take (1d) to portray a thinking action by Veronica as the silent, inner utterance of a natural language sentence. Inner speech is not literally speech, so that in (1d) we have a manifestation of the metaphor of ``IDEAS AS INTERNAL UTTERANCES.'' Under this metaphor, a thought is necessarily an occurrent event happening at a particular moment in time (as opposed to a long-term believing), is conscious, and is (usually) a ``forefront'' thought as opposed to being in the ``background.''

Note that if (1d) had used the verb ``thought'' instead of ``said to herself'' we would still have regarded it as a manifestation of ``IDEAS AS INTERNAL UTTERANCES''. However, this stance requires additional discussion, which we omit for reasons of brevity. (See Barnden 1992, Barnden, in press, Barnden et al. 1994a or Barnden et al. 1994b for some comments.)

There are tight links between ``IDEAS AS INTERNAL UTTERANCES'' and ``MIND PARTS AS PERSONS.'' One reading of (1d) is that one part of Veronica is trying (for some reason we do not necessarily know) to convince another part that the recipe is wrong, where the latter part believes that it is correct. Thus a sentence that explicitly portrays internal utterances can implicitly bring in ``MIND PARTS AS PERSONS.'' Further, manifestations of ``MIND PARTS AS PERSONS'' can portray sub-persons as engaging in verbal communication. This is the case in the sentence ``Although one part of Veronica was vociferously insisting that the recipe was wrong, she followed it anyway.'' Since utterances of sub-persons can be classed as thoughts of the overall agent, ``IDEAS AS INTERNAL UTTERANCES'' is being manifested as well.

2.2 Additional Discussion

If one looked only at (1b--d) one might dispute the above claims about metaphor, saying that they just involved canned forms of language. However, consider the following variants:

(1b') ``She did this [i.e. fried the mushrooms for an hour] after forcibly shoving the idea that the recipe was wrong to a murky corner of her mind.''

(1c') ``She did this even though one part of her was nervously worrying about the mistake in the recipe.''

(1d') ``She did this even while whining to herself, `Oh no, this damned recipe's wrong.' ''

Consider also the immense potential for further varying these, e.g. using verbs other than ``shove'' and ``whine'' or physical location phrases other than ``murky corner.'' The most economical explanation of the sense that (1b'--d') and their further variants make is that they appeal to the metaphors we mentioned above. (The argument here is similar to the one in Lakoff 1993b, and is essentially an argument from the productivity of metaphor manifestation.) Then, for uniformity and continuity, it is a short step to saying that (1b--d) themselves also manifest the metaphors, albeit in a more pallid way.

It might be tempting to claim that saying-to-oneself as in (1d) is literally an act of saying, even though entirely mental and not aloud, in opposition to our claim that it is only metaphorical saying. However, it would be hard to extend this literalist view to variants of internal saying such as whining and insisting. Do all verbs of oral communication have a literal mental reading as well as an out-loud meaning? In any case, in the arguments in this paper, when ``IDEAS AS INTERNAL UTTERANCES'' appears it is in manifestations that involve such variants, not bare saying, and we take the metaphoricity of such variants to be relatively uncontroversial.

In manifestations of metaphors of mind, especially ``MIND AS PHYSICAL SPACE,'' the word ``mind'' is used sometimes to mean the conscious mind only, but sometimes the whole mind. Sometimes an adjective such as ``conscious'' or ``unconscious'' qualifies the word, but when none does so it is sometimes difficult to tell which meaning is intended. The sentence ``He put it out of his mind'' seems (to the present authors) to imply that the agent put the idea in question out of his conscious mind; the idea may still lurk about elsewhere in his mind. Equally, the phrase ``at the back of his mind'' seems to convey that something is at the back of consciousness, rather than in an unconscious region. On the other hand, a sentence like ``He had forgotten about Veronica: he had pushed the thought to a far distant corner of his mind'' seems to convey that the thought of Veronica had been pushed outside the agent's consciousness. In a sentence like ``In one part of her mind, Veronica knew this,'' we (the present authors) do not have strong intuitions as to which meaning of ``mind'' is intended. But in similar cases to this, the mental verb can provide guidance: for instance, if ``realized'' is used instead of ``knew,'' then consciousness is strongly suggested.

2.3 The Influence of the Metaphors on Reasoning

Consider the subsentence ``in one part of her mind she knew the recipe was wrong'' in (1b). Let the mentioned mind-part be P. The utterer presumably means to convey, at least, that (i) there was a part Q of Veronica's mind different from P that might not have contained the belief that the recipe was wrong. More strongly, the utterer may intend to convey that (ii) there was a part Q of Veronica's mind different from P that did not contain that belief. Or, more strongly yet: (ii') there was a part Q of Veronica's mind different from P that contained instead the belief that the recipe was correct. Unless one of these possibilities holds, there would be no point in the utterer mentioning just one part. Now, the understander can use the statement that Veronica did follow the recipe to reason that, in the absence of evidence to the contrary, her action was governed by a belief that the recipe was correct. The understander can then further reason that a plausible possibility is that Veronica conforms to (ii') (and Q is the mind-part that contains the correct-recipe belief). Thus, although the metaphorical portrayal of Veronica's mind did not by itself select between (i), (ii) and (ii'), further information lends weight to a particular selection.

Sentence (1c), using ``MIND PARTS AS PERSONS,'' has a very similar effect, and our analysis is largely the same. However, variants of (1c) could have a significantly altered analysis, though still leading to a similar overall result. Thus, if a part of Veronica had been reported as ``vociferously insisting'' that the recipe was wrong, rather than ``knowing'' that it was wrong, there would have been a strong suggestion that some other part or parts were explicitly rejecting that claim, and were therefore thinking that the recipe was correct. This is because a common reason for people to insist upon something is that they are faced with objections. Therefore, the hypothesis that Veronica partly believes the recipe to be correct is given stronger support than is the case with (1b) or with (1c) as it stands.

However, another fairly common reason for people to insist upon something is that other people are ignoring them. If the understander takes this into account, then the difference between the vociferous-insistence variant of (1c) and the original (1c) is less clear than we have stated. In all cases, though, the use of ``insisting'' in the sentence conveys that Veronica's partial belief in the wrongness of the recipe was of special importance to her.

As for (1d), we noted above that the ``IDEAS AS INTERNAL UTTERANCES'' metaphor places the thought in a forefront role, and makes it conscious and occurrent. Recall that one reading of (1d) is that one part of Veronica is trying (for some reason we do not necessarily know) to convince another part that the recipe is wrong, where the latter part believes that it is correct. If we further assume that the latter part is more powerful in terms of governing Veronica's actions, the overall effect is somewhat similar to that of sentences (1b) and (1c). However, another interpretation is that Veronica fully believes that the recipe is wrong. (In that case the sentence has similar force to the one that would result from replacing ``said to herself'' by ``thought''). Then it is difficult in ordinary contexts to escape the implication that --- for some reason --- she deliberately acted in a way apparently at odds with that belief. This reason might have been one of the following, for example: (i) she doesn't want to embarrass her brother by repudiating his recipe; (ii) she's just being perverse and does want to embarrass her brother; (iii) someone she wishes to obey has ordered her to use the recipe.

We do not say that the understander of (1d) must choose one such explanation, or mentally enumerate all the possible explanations. Nor are we saying that explanations (i)--(iii) could not conceivably hold for (1b) and (1c). For example, a possible explanation in the case of (1b) is that the mentioned part of Veronica's mind is what was in control of her actions at the time, and that, in that part, she wanted to embarrass her brother. But in the absence of a special context there is no particular support for this possibility. Thus, our point is that the balance of reasonableness of explanations is different between (1b)/(1c) on the one hand and (1d) on the other.

We now turn to (1a), the no-metaphor case. We claim that this is vaguer in its implications than (1b), (1c) or (1d). (See Gibbs 1992 for a similar observation about the relative vagueness of literal statements.) Indeed, any of the explanations for those three passages are moderately plausible possibilities for (1a) as well. Veronica's doubt could have been so weak or subsidiary that it did not seriously conflict with her action, as in (1b)/(1c), or it could have been a strong, forefront doubt that she deliberately ignored, as in one interpretation of (1d). Nevertheless, (1a) seems to us to convey something closer to that interpretation of (1d) than to (1b)/(1c).

3. WITHIN-VEHICLE REASONING

Here we describe in some detail how WVR could proceed. For definiteness, we will assume that an AI system's knowledge of a domain is cast in the form of IF-THEN rules. We will also assume that the system's knowledge of any familiar metaphor is embodied partly in conversion rules that convert hypotheses in the vehicle to hypotheses in the tenor. For instance, a conversion rule for ``MIND AS PHYSICAL SPACE'' (plus a metaphor of ``IDEAS AS PHYSICAL OBJECTS'') might say: IF two ideas do not physically interact (in some agent's mind construed as a physical space) THEN the agent performs no inferences from those ideas. This rule is part of the system's view that the notion of physical interaction in the metaphor's vehicle maps to the notion of inference in the tenor.
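
For concreteness, the following is a minimal sketch in Python of how such rules might be represented (this is our illustration only, not the ATT-Meta implementation; the tuple encoding of propositions and all predicate names are assumptions made for exposition):

```python
# A minimal, purely illustrative encoding: propositions are tuples, and a rule
# maps a matched proposition to a new one (or to None if it does not apply).

from typing import Callable, Optional, Tuple

Prop = Tuple[str, ...]                 # e.g. ("believes", "Person1", "recipe-wrong")
Rule = Callable[[Prop], Optional[Prop]]

def mind_space_conversion(p: Prop) -> Optional[Prop]:
    """Hypothetical conversion rule for ``MIND AS PHYSICAL SPACE'':
    IF two ideas do not physically interact in an agent's mind-space
    THEN the agent performs no inferences from those ideas."""
    if p[0] == "no-physical-interaction":               # vehicle-domain hypothesis
        _, agent, idea1, idea2 = p
        return ("no-inference-from", agent, idea1, idea2)  # tenor-domain hypothesis
    return None

vehicle_fact = ("no-physical-interaction", "Peter", "idea-A", "idea-B")
print(mind_space_conversion(vehicle_fact))
# -> ('no-inference-from', 'Peter', 'idea-A', 'idea-B')
```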

Suppose that an AI system is to make some inferences based on the following metaphorical sentence:

(2) ``Part of Veronica was insisting that the recipe was wrong.''

The metaphor is ``MIND PARTS AS PERSONS'' (involving also ``IDEAS AS INTERNAL UTTERANCES'' because of the word ``insisting.'') In the current manifestation, the tenor is Veronica's mental states. Assume that the system's beliefs about the vehicle domain, namely the domain of personal behavior, include rules that can be paraphrased as follows:

(3vr) IF someone, Z, is saying that Q THEN Z believes that Q.

(4vr) IF someone, X, is insisting that Q THEN X is saying that Q.

(5vr) IF someone, X, is insisting that Q THEN some interlocutors of X have said that not-Q.

The subscript ``VR'' stands for ``vehicle rule''. The rules in this paper are simplistic ones, for illustration purposes only. Also, they are defeasible: that is, when the IF part of a rule is satisfied, the conclusion only follows in the absence of further, sufficiently-strong evidence to the contrary. For example, the conclusion of (5vr) might be abandoned if there is evidence that some of X's interlocutors have ignored Q.
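
The defeasibility just described can be sketched as follows (again a hypothetical illustration, assuming a set-based knowledge base in which a single stored proposition can serve as the ``evidence to the contrary''):

```python
# Illustrative defeasible application of (5vr): the conclusion is blocked when
# the knowledge base already holds sufficiently strong contrary evidence.

def apply_defeasibly(kb, premise_holds, conclusion, defeater):
    """Add `conclusion` to `kb` when the premise holds, unless `defeater` is present."""
    if premise_holds(kb) and defeater not in kb:
        kb.add(conclusion)

kb = {("insisting", "X", "Q")}

apply_defeasibly(
    kb,
    premise_holds=lambda kb: ("insisting", "X", "Q") in kb,
    conclusion=("interlocutors-said", "X", "not-Q"),    # (5vr) conclusion
    defeater=("interlocutors-ignored", "X", "Q"),       # contrary evidence
)
print(("interlocutors-said", "X", "not-Q") in kb)       # True: no defeater held
```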

When the system detects that (2) involves ``MIND PARTS AS PERSONS,'' it adds to the vehicle a hypothetical person, Person1, as a realization of the mind-part mentioned by (2), and a premise saying that Veronica's mind was a physical region [note 2]. The system also adds the following premises to the vehicle:

(6vp) Person1 was insisting that the recipe was wrong.

(7vp) Person1 was in Veronica's mind.

(8vp) All persons inside Veronica's mind were interlocutors of each other.

(9vp) All interlocutors of a person in Veronica's mind were also in her mind.

We assume that (7vp), (8vp) and (9vp) are added because of the system's general understanding of the metaphor. (The subscript ``VP'' stands for ``vehicle premise.'')

The system can (defeasibly) infer from (6vp) by means of (4vr) and (3vr) that Person1 believed that the recipe was wrong. It can infer by means of (5vr) that some interlocutor, Person2, of Person1 had said to Person1 that the recipe was not wrong. The system can then use (3vr) again to infer that Person2 believed that the recipe was not wrong. Using (7vp) and (9vp) the system can infer that Person2 was in Veronica's mind. These inference steps are all instances of WVR.
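
The chain of within-vehicle steps just described can be sketched as naive forward chaining (purely illustrative; in particular, a real system would introduce Person2 as a fresh hypothetical individual rather than as a hard-coded name, and would treat all conclusions defeasibly):

```python
# Forward chaining over the vehicle premises and rules of this section.

facts = {
    ("insisting", "Person1", "recipe-wrong"),    # (6vp)
    ("in-mind", "Person1"),                      # (7vp)
}
changed = True
while changed:
    new = set()
    for f in facts:
        if f[0] == "insisting":
            new.add(("saying", f[1], f[2]))                  # (4vr)
            new.add(("interlocutor-of", "Person2", f[1]))    # (5vr), skolemized
            new.add(("saying", "Person2", "not-" + f[2]))    # (5vr)
        if f[0] == "saying":
            new.add(("believes", f[1], f[2]))                # (3vr)
        if f[0] == "interlocutor-of" and ("in-mind", f[2]) in facts:
            new.add(("in-mind", f[1]))                       # (9vp)
    changed = not new <= facts
    facts |= new

print(("believes", "Person1", "recipe-wrong") in facts)      # True
print(("believes", "Person2", "not-recipe-wrong") in facts)  # True
print(("in-mind", "Person2") in facts)                       # True
```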

Now, the system may be able to convert a conclusion drawn at some stage of WVR into the terms of the tenor. For instance, the system may have the following conversion rule:

(10cr) IF a sub-person of person X has attitude A THEN X to some extent has A.

In our example, application of this rule to sub-persons Person1 and Person2 allows the system to infer the following conclusions: that Veronica to some extent believed that the recipe was wrong, and that Veronica to some extent believed that the recipe was not wrong.

These might be useful conclusions in their own right for understanding other parts of the discourse. Or, they could lead to conclusions via WTR; for instance, they might be used to conclude that Veronica was undecided as to whether the recipe was wrong. In any case, we see that the system can do a good deal of useful inferencing entirely within the vehicle domain. Except for the conversion rule (10cr), the premises and rules used to get the two conclusions just above were purely about the vehicle domain.
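
A sketch of how (10cr) might operate on those two within-vehicle conclusions, in the same hypothetical encoding as the sketches above:

```python
# (10cr): IF a sub-person of person X has attitude A THEN X to some extent has A.

def convert_10cr(fact, sub_persons, agent):
    kind, who, content = fact
    if kind == "believes" and who in sub_persons:
        return ("believes-to-some-extent", agent, content)
    return None

for f in [("believes", "Person1", "recipe-wrong"),
          ("believes", "Person2", "not-recipe-wrong")]:
    print(convert_10cr(f, {"Person1", "Person2"}, "Veronica"))
# -> ('believes-to-some-extent', 'Veronica', 'recipe-wrong')
# -> ('believes-to-some-extent', 'Veronica', 'not-recipe-wrong')
```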

As we said, conversion rules for a metaphor embody system knowledge about the metaphor at hand. In this paper, we hypothesize particular conversion rules for the sake of exemplifying specific points. One might quarrel with the chosen rules, but the concerns of this paper are orthogonal to the issues of how or when the rules are constructed, or of what constraints govern their construction. The relationship of our conversion rules to notions of mapping and transfer in other authors' work (e.g., Falkenhainer et al., 1989; Holyoak & Thagard, 1989; Indurkhya, 1992; Martin, 1990) is slightly involved, and we make only brief observations here. Conversion rules can convert (new) vehicle propositions into entirely new tenor propositions, so that they do something unlike ``mapping'' and more like ``transfer,'' in the senses of mapping and transfer commonly adopted in metaphor/analogy work. On the other hand, conversion rules reflect known, general correspondences between concepts in the vehicle and concepts in the tenor, so that it is reasonable to say they ``map'' vehicle concepts to tenor concepts. Further, the job of conversion rules is unlike ``transfer'' in the usual sense because they do not deal with vehicle concepts for which the system knows no correspondence with tenor concepts.

One point needs to be stressed concerning the difference between WVR and WTR. Suppose a sentence manifests metaphor M. If a system uses only WTR when reasoning on the basis of the sentence, then the reasoning must be based entirely on (i) a metaphorical meaning of the sentence, (ii) pieces of information about the tenor of M that are parallels of pieces of information about the vehicle, and (iii) other information the system has about the tenor. A parallel in (ii) may be the result of converting vehicle information into the terms of the tenor, or it may be information that was independently arrived at but was at some point found to be parallel to some vehicle information. (That is, a mapping was discovered from the vehicle information to the tenor information.) The conversions or mappings may have occurred in the distant past, or at the other extreme they may have been done on-the-fly during the comprehension of the sentence at hand. The difference between WVR and WTR is therefore not whether they involve on-the-fly mappings/conversions, but, instead, whether they involve on-the-fly construction of new propositions within the vehicle.

4. BENEFITS OF WITHIN-VEHICLE REASONING

From now on we will be comparing WTR-only systems and WVR systems. The former perform WTR but not WVR. The latter do WVR and possibly WTR as well. As we saw in the Black ``MARRIAGE-AS-ZERO-SUM-GAME'' example at the start of the paper, there are cases in which the same answer can be arrived at either through WVR or through WTR without WVR. In this section we will argue that in such cases it is generally beneficial to pursue the former route, after stressing that this route may often in practice be the only one available in the first place.

4.1 Are Metaphorical Meanings Available?

If a WTR-only system reasons on the basis of a metaphorical utterance, it needs first of all to construct (a representation of) a plausible metaphorical meaning for the sentence. But consider the metaphorical sentences we have used in examples. It is very difficult to see what the metaphorical meanings would be like in such cases. For instance, what would be the metaphorical meaning of a sentence that talks about ``parts'' of people ``insisting'' things? Notice that we should beware of taking the word ``part'' literally to mean a physiological region of the brain, or even an abstract component in some theory of the information-processing structure of the brain/mind. Any such interpretation would go beyond existing scientific knowledge (as opposed to conjecture) about mind or brain. In any case, it would probably go beyond the speaker's knowledge about mind/brain. Cashing out the word ``insisting'' in non-metaphorical terms is equally difficult if not more so. We just do not have agreed-upon, objective, non-metaphorical accounts of the workings of the mind or brain that are adequate to explicate the nature of the mental states and processes described in metaphorical mental descriptions in mundane discourse. The unparaphrasability (in objective terms) of many metaphorical utterances is of course often pointed out, especially within work on interactionist theories of metaphor (such as Black's; see Waggoner 1990 for a review), and also by Lakoff & Turner (1989, pp.120--122).

Another aspect of this issue is addressed in the next subsection.

4.2 Novel Use of Familiar Metaphors

The usefulness of conclusions reached within the vehicle in WVR relies on there being conversion rules that can handle them, such as (10cr). This is fine for metaphors that are familiar to the understanding system, but by itself says nothing about metaphors that are novel to the system, for which the system has as yet no conversion rules. However, even if we confine attention to familiar metaphors and already-existing conversion rules, there is still considerable room for novelty in the discourse usage of a metaphor --- provided WVR is allowed.

We again take sentence (2) as an example. To make useful inferences from the sentence, the system need have no conversion rules at all concerning the concept of ``insisting'' that appears in the sentence. The conversion rule that was used in section 3 (namely 10cr) only concerns the attitudes of sub-persons, not their inter-communication (least of all their inter-communication in natural language). The use of ``insisting'' in a manifestation of ``MIND PARTS AS PERSONS'' can be entirely novel to the system. All that's important is that the WVR that is partially based on the insistence ultimately lead to conclusions that do connect with conversion rules. By contrast, a WTR-only system would either already have to know how to map to the tenor the notion of, say, insisting, or would have to work out how to map/transfer it on encountering the sentence.

4.3 Convertibles from Unconvertibles

The argument in the previous subsection appealed to a special case of the following general point: within-vehicle conclusions derived from unconvertible propositions in the vehicle need not themselves be unconvertible. We say a vehicle proposition is unconvertible if no conversion rule can use it directly to create a corresponding proposition in the tenor. Consider the sentence

(11) ``Veronica leaped from idea to idea.''

This manifests a metaphor of ideas as physical objects/locations between which people can move. We call this metaphor ``MIND WITHIN PHYSICAL SPACE'' [note 4]. Suppose that the system believes that if someone [physically] leaps then he/she is a physically agile person. So, by WVR the system can defeasibly conclude that Veronica was a physically agile person. Suppose the system makes a further within-vehicle step to the defeasible conclusion that she had well-toned muscles, and from there the defeasible conclusion that she was a physically active person (in a long-term sense, e.g., she frequently engaged in sports). Now, there could be a conversion rule that maps being-physically-active to being-mentally-active (also in a long-term sense). But suppose there is no conversion rule that maps having-well-toned muscles to anything in the tenor. This lack would in no way interfere with the production of the tenor conclusion that Veronica was a mentally active person.
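
The point can be sketched as follows: the whole rule chain lives in the vehicle, and a single conversion rule applies only at the end, so the unconvertible intermediate conclusion does no harm (the rule contents are our illustrative assumptions):

```python
# Within-vehicle chaining with an unconvertible intermediate conclusion.

vehicle_rules = {
    "leaps": "physically-agile",
    "physically-agile": "well-toned-muscles",   # no conversion rule exists for this
    "well-toned-muscles": "physically-active",
}
conversion = {"physically-active": "mentally-active"}   # the only conversion rule

prop = "leaps"                       # from ``Veronica leaped from idea to idea''
while prop in vehicle_rules:         # chain entirely within the vehicle
    prop = vehicle_rules[prop]
print(conversion.get(prop))          # -> mentally-active
```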

Now consider what could happen in a WTR-only system. For the sake of argument, let the metaphorical meaning of (11) be that Veronica thought about several things in quick succession without following any direct connections between them. Let there be a tenor rule that infers mental agility from this type of thinking process. (This parallels the vehicle rule that goes from physical leaping to physical agility.) Assume therefore that the system infers that Veronica was a mentally agile person. However, if there is no tenor parallel for having well-toned muscles, the system cannot have a tenor parallel for the rule that goes from physical agility to well-toned muscles, and is thus unable to parallel the whole of the above WVR, which led to the conclusion that Veronica was a mentally active person.

Of course, it is possible for the tenor domain to involve a rule that goes straight from mental agility to being-mentally-active. But note that this would parallel the combination of the agility-TO-muscle-tone and muscle-tone-TO-physical-activeness rules in the vehicle. So for us to assume or stipulate the presence of such a rule would be an additional step --- the rule would not be the tenor correspondent of any single vehicle rule (unless, of course, we made the additional assumption that the vehicle contains a rule that goes directly from physical agility to physical activeness, as well as the rules mentioned already, or we had the additional and ad hoc assumption that well-toned-muscles maps to the same mental feature that physical-agility maps to). In short, the WVR system gets for free an inference pattern about minds that would require additional, special assumptions in the WTR-only case.

Some systems, notably SME and ACME, allow the imposition on the tenor of properties from the vehicle that are not already mapped. Such a system could simply impose on the tenor some unknown property that is meant to correspond to the concept of having-well-toned-muscles. Then, the above deficiency of WTR is fixed. However, the approach runs the danger of populating tenor domains with many unknown properties.

4.4 Different Reasoning Methods in Different Domains

It may be advantageous for an AI system to use different reasoning methods, or different parametrizations of a given method, in different conceptual domains. For instance, in one domain, rule-based reasoning of the sort sketched in earlier sections might be effective, whereas in another domain abduction and induction might be better, and, in yet another, case-based reasoning (Riesbeck & Schank 1989) might be the method of choice. Equally, several domains might use the same basic method, but with different parameter or control settings --- e.g., the number of examples needed before an inductive step is made, or the relative priority given to different types of inference step. Such inter-domain differences complicate the question of metaphor-based reasoning.

Suppose, for instance, that a system uses abduction based on IF-THEN rules for reasoning within the vehicle of some particular metaphor [note 5]. If the system is WTR-only, then in order for it to reason within the tenor in a way that parallels reasoning within the vehicle, it should presumably make abductive use of the tenor parallels of vehicle rules. Similarly, suppose the system uses case-based reasoning when reasoning within the vehicle of some metaphor. That is, the system reasons about a situation within the vehicle domain by finding memories of past similar situations (``cases'') within that domain, and adapting and transferring features of those situations so as to make them inferred features of the new situation. It is difficult to see how a WTR-only system could benefit from the vehicle other than by converting the vehicle's cases into terms of the tenor and then using these converted cases as the basis of case-based reasoning within the tenor.

These considerations show that, in a WTR-only system that engages in metaphor-based reasoning, every domain T must be equipped with reasoning methods paralleling those of all domains V that are used in T-as-V metaphors. Also, this might require T to be equipped with several different versions of the same reasoning method --- e.g., induction methods with different parameters. By contrast, the use of WVR allows reasoning methods and their parametrizations to remain private to vehicle domains V, affording a simpler and more modular system overall.
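
The modularity claim can be sketched as a registry that keeps each domain's reasoning method private to that domain (a design illustration under our own assumptions, with the methods themselves stubbed out):

```python
# Each vehicle domain owns its reasoning method; WVR means the tenor domain
# never needs parallel copies of these methods or their parameter settings.

from typing import Callable, Dict, Set

Reasoner = Callable[[Set[str]], Set[str]]

def rule_based(facts: Set[str]) -> Set[str]:
    # stand-in for the IF-THEN chaining of section 3
    return facts | {"rule-conclusion"} if "premise" in facts else facts

def case_based(facts: Set[str]) -> Set[str]:
    # stand-in for retrieve-and-adapt over stored cases (Riesbeck & Schank 1989)
    return facts | {"case-conclusion"}

reasoners: Dict[str, Reasoner] = {
    "personal-behavior": rule_based,   # vehicle of ``MIND PARTS AS PERSONS''
    "weather": case_based,             # some other vehicle domain
}

def reason_within(domain: str, facts: Set[str]) -> Set[str]:
    return reasoners[domain](facts)    # the method stays private to the domain

print(reason_within("personal-behavior", {"premise"}))
```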

4.5 Serially Mixed Metaphor

A serial mixing (or chaining) of metaphors involves viewing A as B, where B is viewed as C. More exactly, at least some of those aspects of B that are used to illuminate A are viewed in terms of C. A parallel mixing is where A is viewed both in terms of B and in terms of C, directly. The mixes of ``MIND PARTS AS PERSONS'' and ``IDEAS AS INTERNAL UTTERANCES'' pointed out at the end of section 2.1 are examples of parallel mixing. As an example of serial mixing, consider:

(12) ``The thought was an angry cloud.'' [note 6]

Manifested here is a metaphor of ``IDEAS AS CLOUDS'' mixed in serial with a metaphor of ``CLOUDS AS PERSONS'' (or perhaps ``CLOUDS AS FACES''). Mixing of metaphors is central for Lakoff & Turner (1989), although those authors do not make a point of the distinction between serial and parallel mixing.

When faced with an A-as-(B-as-C) serially-mixed metaphor, a WVR system reasons with some mixture of: information about C; information about B; conversion rules that map between C and B; and conversion rules that map between B and A. In our example, A is a domain concerned with some particular agent's thoughts, B is a domain that is concerned with weather phenomena, and C is a domain concerned with human characteristics and behavior. The WVR system might perhaps reason within C using the rule

(13) IF a person is angry THEN the person is likely to look angry, say something angrily or act angrily.

The system therefore concludes that the thought(-as-cloud-as-person) is likely to look angry, say something angrily, etc. Assume for the sake of example that, by converting this conclusion into the terms of B using some C-to-B conversion rules, the system happens to conclude that the thought(as-cloud) is likely to look dark and stormy, or to emit rain and/or thunder and lightning. Assume further, again for the sake of example, that this conclusion is translated by a B-to-A conversion rule to become the conclusion that the thought is likely to upset Veronica.
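
This C-to-B-to-A chain can be sketched as three composed steps (all rule contents are hypothetical illustrations):

```python
# Serial mixing for (12): reason within C (persons), convert C-to-B (clouds),
# then convert B-to-A (the agent's thoughts).

def within_c(prop):
    if prop == "angry-person":                 # (13): angry persons act angrily
        return "acts-angrily"

def c_to_b(prop):
    if prop == "acts-angrily":                 # C-to-B conversion rule
        return "dark-and-stormy-cloud"

def b_to_a(prop):
    if prop == "dark-and-stormy-cloud":        # B-to-A conversion rule
        return "thought-likely-to-upset-agent"

print(b_to_a(c_to_b(within_c("angry-person"))))
# -> thought-likely-to-upset-agent
```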

On the other hand, a WTR-only system has to have B parallels of all the relevant C information items, and A parallels for those B parallels. In our example, it would have to have a B-parallel for (13), and an A-parallel for this B-parallel. (We refrain from tackling the tricky if not impossible task of specifying what these parallels might be.) In more elaborate examples, there would be more C-information items that would have to have A-parallels via B-parallels. Thus, serial mixing magnifies the above advantages accruing from WVR in terms of novel uses of familiar metaphor, convertibles from unconvertibles, and diverse reasoning methods.

The above example, (12), is rather more colorful than the metaphorical examples we had discussed previously. It is worth noticing therefore that we do not have to go to very colorful examples to get interesting cases of serial mixing. The following sentence serially mixes ``MIND PARTS AS PERSONS'' with a parallel mix of ``IDEAS AS PHYSICAL OBJECTS'' and ``MIND AS CONTAINER'':

(14) ``One part of Veronica was excited about this and overflowing with ideas about what to do.''

This casts Veronica as having a sub-person, and serially mixes this view with a parallel mix of a view of that sub-person's mind as a container and a view of that sub-person's ideas as physical objects.

5. FURTHER DISCUSSION

In this paper we have used conversion rules that work on complex, high-level concepts in the vehicle, such as the concept of physical agility in the discussion of sentence (11). There might in reality be a case for having them work on more basic concepts in terms of which the more complex notions could be explicated. However, this would only tend to strengthen the case for WVR. For instance, in connection with the discussion of (11), the system would have to unpack the notion of physical agility in terms of the more basic concepts before it could use conversion rules to create the translated proposition that Veronica was mentally agile. But it seems legitimate to regard the unpacking as a form of inference, so that of course the system is doing some (additional) WVR.

The metaphors of mind that we have discussed, and others we have not, bear some strong relationships to each other, including some (quasi-)hierarchical relationships. ``MIND AS CONTAINER'' is a special form of ``MIND AS PHYSICAL SPACE'' [note 7]. We take ``MIND PARTS AS PERSONS'' to be a specialized form of ``MIND AS PHYSICAL SPACE,'' because the mind-parts conceived of as some persons are presumably within some space, and it is natural to take the agent's mind to be this space. One (common) special case of ``MIND PARTS AS PERSONS'' is where the parts engage in verbal communication. Let us call this subspecies ``MIND PARTS AS CONVERSING PERSONS.'' This is also a specialization of ``IDEAS AS INTERNAL UTTERANCES.'' Arguably ``IDEAS AS INTERNAL UTTERANCES'' in general is a specialized form of ``MIND AS PHYSICAL SPACE''. On the other hand, ``MIND WITHIN PHYSICAL SPACE,'' the metaphor manifested in sentence (11), is not a form of ``MIND AS PHYSICAL SPACE,'' in that a whole person (or the person's whole mind) can be portrayed as moving within the physical space: what goes on inside the person's mind is not focused on. However, one specialization of ``MIND WITHIN PHYSICAL SPACE'' --- let us call it ``CONSCIOUS MIND WITHIN PHYSICAL SPACE'' --- has it that merely the conscious mind is moving within a space. The exterior parts of that space could therefore be the unconscious regions. Hence, some subspecies of ``CONSCIOUS MIND WITHIN PHYSICAL SPACE'' could also be a special case of ``MIND AS PHYSICAL SPACE.''

Recall now the notion of ``metaphorical meaning'' as described at the end of section 1. The inference processes we have discussed for sentences such as (2) and (11) did not assume that metaphorical meanings for the sentences were constructed. The reasoning was within-vehicle and proceeded directly on the basis of the literal meanings of words such as ``leaping'' and ``insisting.'' In fact, it is roughly true to say that in our approach the reasoning for a metaphorical sentence makes direct use of the literal meaning of the whole sentence. Here we come into contact with the debate concerning the role of literal meanings in the human understanding of metaphorical sentences (see, e.g., Gerrig 1989, Gibbs 1984, 1989, Lytinen, Burridge & Kirtner 1992, and Titone and Connine 1994). Note especially the view (expressed for instance by Gibbs 1989, 1992) that the human mind can go direct to metaphorical meanings of metaphorical sentences without having to construct their literal meanings on the way. Strictly, this matter does not affect the arguments of the present paper, which is about AI systems rather than human understanders, but it is desirable for several reasons to make our approach consistent with psychological evidence. So we offer the following observations.

The direct-to-metaphorical-meaning view just mentioned assumes that metaphorical meanings can indeed be constructed, whereas we feel that there are many cases when they cannot. Also, we are not against the idea that, for sentences containing only completely dead metaphors, solely a metaphorical meaning is computed. Our claims in this paper are directed rather at metaphors that are live --- i.e., used productively and flexibly in discourse, even though they are familiar to the understander. (Recall sentences 1b' to 1d').

Irrespective of whether a metaphorical meaning exists for a given metaphorical sentence, we must still address the issue of the extra processing time required for applying conversion rules to the literal meaning or to within-vehicle inferences from it, over and above the time needed to construct the literal meaning. How does this square with evidence that metaphorical understanding is not slower than literal understanding? We note first that, as Katz (1996: p.31) pointed out, the evidence applies only to specific circumstances of metaphor usage---a number of variables play an important role in determining whether literal meaning is given processing priority. In particular, Onishi and Murphy (1993) showed that what they called ``referential'' metaphor can slow understanding down. Their referential-metaphor sentences are more like the sentences we discuss in this article than are the metaphorical sentences that are usually considered in experiments on metaphor processing speed.

Also, even if it were to be shown that the understanding of metaphorical sentences of the sort we are interested in is not appreciably slowed down by their metaphoricity, this would not in fact establish that a literal meaning is not computed on the way to a metaphorical understanding. This is because when sentences (literal or figurative) occur within a larger discourse, the time taken for understanding arguably involves much processing other than the sheer composition of individual word meanings into an overall sentence meaning. The extra processing can include performing bridging inferences (which link the sentence to surrounding discourse), updating an overall representation of the scenario hinted at by the discourse, and disambiguating between alternative sentence meanings (whether literal or figurative). Such work may be much more time-consuming than the mere construction of a literal sentence meaning from words that are already disambiguated, anaphorically resolved, etc. The work referred to may also be much more time-consuming than the application of a conversion rule. In sum, we find it simplistic to assume that vehicle-to-tenor conversion operations generate time delays that are going to make a significant difference to the total time taken for understanding.

6. CONCLUSION

We have considered the task of constructing coherent understandings of passages that include metaphorical sentences describing mental states or processes. We have argued that there are several advantages to the strategy, adopted by a number of researchers including ourselves, of conducting substantial amounts of reasoning within the vehicle domain of a metaphor. This is opposed to conducting analogous reasoning within the tenor domain on the basis of the metaphorical meaning of the metaphorical sentence at hand. One salient advantage of our within-vehicle strategy is that there may be no determinate metaphorical meaning on which the within-tenor strategy could operate. Other advantages concern novel uses of familiar metaphors, convertible propositions derived from unconvertible ones, and the presence of different reasoning methods in different domains. The benefits are amplified when metaphors are chained (serially mixed).

Although we have explicitly confined ourselves to metaphors of mind, we intend our arguments to generalize to other types of live metaphor having abstract tenor domains. We are not aware of anything in our treatment that depends on the metaphors being metaphors of mind particularly.

ACKNOWLEDGMENTS

We are indebted to Ray Gibbs and Janyce Wiebe for useful suggestions during our project, and to Kim Gor and Kanghong Li for help in developing the ATT-Meta system. The paper has benefited from comments by anonymous reviewers. The work was supported in part by grants IRI-9101354 and CDA-8914670 from the National Science Foundation. Its preparation was assisted by the facilities of the Computer Science Department at the University of Reading, England.

REFERENCES

Barnden, J.A. (1989a). Towards a paradigm shift in belief representation methodology. J. Experimental and Theoretical Artificial Intelligence, 2, pp.133--161.

Barnden, J.A. (1989b). Belief, metaphorically speaking. In Procs. 1st Intl. Conf. on Principles of Knowledge Representation and Reasoning (Toronto, May 1989). San Mateo, CA: Morgan Kaufmann. pp.21--32.

Barnden, J.A. (1992). Belief in metaphor: taking commonsense psychology seriously. Computational Intelligence, 8 (3), pp.520--552.

Barnden, J.A. (in press). Consciousness and common-sense metaphors of mind. In S. O'Nuallain, P. McKevitt & E. Mac Aogain (Eds), Reaching for Mind: Foundations of Cognitive Science. Philadelphia: John Benjamin.

Barnden, J.A., Helmreich, S., Iverson, E. & Stein, G.C. (1994a). An integrated implementation of simulative, uncertain and metaphorical reasoning about mental states. In J. Doyle, E. Sandewall & P. Torasso (Eds), Principles of Knowledge Representation and Reasoning: Proceedings of the Fourth International Conference (Bonn, Germany, 24--27 May 1994). San Mateo, CA: Morgan Kaufmann.

Barnden, J.A., Helmreich, S., Iverson, E. & Stein, G.C. (1994b). Combining simulative and metaphor-based reasoning about beliefs. In Procs. 16th Annual Conference of the Cognitive Science Society (Atlanta, Georgia, August 1994), Hillsdale, N.J.: Lawrence Erlbaum.

Belleza, F.S. (1992). The mind's eye in expert memorizers' descriptions of remembering. Metaphor and Symbolic Activity, 7 (3 & 4), pp.119--133.

Black, M. (1979). More about metaphor. In A. Ortony (Ed.), Metaphor and Thought, pp.19--43. Cambridge, U.K.: Cambridge University Press.

Brown, R. (1977). Use of analogy to achieve new expertise. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA.

Casadei, F. (1993). The canonical place: an implicit (space) theory in Italian idioms. In R. Casati & G. White (Eds), Philosophy and the Cognitive Sciences, pp.95--99. Austrian Ludwig Wittgenstein Society: Kirchberg am Wechsel, Austria.

Cooke, N.J. & Bartha, M.C. (1992). An empirical investigation of psychological metaphor. Metaphor and Symbolic Activity, 7 (3 & 4), pp.215--235.

Falkenhainer, B. (1990). A unified approach to explanation and theory formation. In J. Shrager & P. Langley (Eds.), Computational Models of Scientific Discovery and Theory Formation, pp.157--196. San Mateo, CA: Morgan Kaufmann.

Falkenhainer, B., Forbus, K.D. & Gentner, D. (1989). The Structure-Mapping Engine: algorithm and examples. Artificial Intelligence, 41 (1), 1--63.

Fesmire, S.A. (1994). Aerating the mind: the metaphor of mental functioning as bodily functioning. Metaphor and Symbolic Activity, 9 (1), pp.31--44.

Gallup, G.G., Jr. & Cameron, P.A. (1992). Modality specific metaphors: is our mental machinery ``colored'' by a visual bias? Metaphor and Symbolic Activity, 7 (2), pp.93--101.

Gentner, D. & Grudin, R. (1985). The evolution of mental metaphors in psychology: A 90-year perspective. American Psychologist, 40 (2), pp.181--192.

Gerrig, R.J. (1989). Empirical constraints on computational theories of metaphor: comments on Indurkhya. Cognitive Science, 13 (2), pp.235--241.

Gibbs, R.W., Jr. (1984). Literal meaning and psychological theory. Cognitive Science, 8 (3), pp.275--304.

Gibbs, R.W., Jr. (1986). Skating on thin ice: Literal meaning and understanding idioms in conversation. Discourse Processes, 9, pp.17--30.

Gibbs, R.W., Jr. (1989). Understanding and literal meaning. Cognitive Science, 13, pp.243--251.

Gibbs, R.W., Jr. (1992). What do idioms really mean? J. Memory and Language, 31, pp.485--506.

Gibbs, R.W., Jr. & O'Brien, J.E. (1990). Idioms and mental imagery: the metaphorical motivation for idiomatic meaning. Cognition, 36 (1), pp.35--68.

Hobbs, J.R. (1990). Literature and cognition. CSLI Lecture Notes, No. 21, Center for the Study of Language and Information, Stanford University.

Hobbs, J.R., Stickel, M.E., Appelt, D.E. & Martin, P. (1993). Interpretation as abduction. Artificial Intelligence, 63, pp.69--142.

Holyoak, K.J. & Thagard, P. (1989). Analogical mapping by constraint satisfaction. Cognitive Science, 13 (3), 295--355.

Indurkhya, B. (1992). Metaphor and cognition: An interactionist approach. Dordrecht: Kluwer.

Jäkel, O. (1993). The metaphorical concept of mind: mental activity is manipulation. Paper No. 333, General and Theoretical Papers, Series A, Linguistic Agency, University of Duisburg, D-4100 Duisburg, Germany.

Johnson, M. (1987). The body in the mind. Chicago: Chicago University Press.

Jolly, S. (n.d.). Marigold becomes a brownie. London, U.K.: Blackie & Son.

Katz, A.N. (1996). Experimental psycholinguistics and figurative language: Circa 1995. Metaphor and Symbolic Activity, 11, pp.17--37.

Katz, A.N., Paivio, A., Marschark, M. & Clark, J.M. (1988). Norms for 204 literary and 260 nonliterary metaphors on 10 psychological dimensions. Metaphor and Symbolic Activity, 3 (4), 191--214.

Kedar-Cabelli, S. (1985). Purpose-directed analogy. In Procs. Seventh Annual Conference of the Cognitive Science Society, pp.150--159, Irvine, CA, August 1985.

Lakoff, G. (1993a). How cognitive science changes philosophy II: the neurocognitive self. Presented at 16th International Wittgenstein Symp., Kirchberg am Wechsel, Austria, 15-22 August 1993.

Lakoff, G. (1993b). The contemporary theory of metaphor. In A. Ortony (Ed.), Metaphor and Thought, 2nd edition, pp.202--251. New York and Cambridge, U.K.: Cambridge University Press.

Lakoff, G., Espenson, J. & Schwartz, A. (1991). Master metaphor list. Draft 2nd Edition. Cognitive Linguistics Group, University of California at Berkeley, Berkeley, CA.

Lakoff, G. & Johnson, M. (1987). The metaphorical logic of rape. Metaphor and Symbolic Activity, 2 (1), pp.73--79.

Lakoff, G. & Turner, M. (1989). More than cool reason: a field guide to poetic metaphor. Chicago: University of Chicago Press.

Leary, D.E. (Ed.) (1990). Metaphors in the history of psychology. New York: Cambridge University Press.

Larsen, S.F. (1987). Remembering and the archaeology metaphor. Metaphor and Symbolic Activity, 2 (3), 187--199.

Lehrer, A. (1990). Polysemy, conventionality, and the structure of the lexicon. Cognitive Linguistics, 1 (2), pp.207--246.

Lytinen, S.L., Burridge, R.R. & Kirtner, J.D. (1992). The role of literal meaning in the comprehension of non-literal constructions. Computational Intelligence, 8 (3), pp.416--432.

Martin, J.H. (1990). A computational model of metaphor interpretation. Academic Press.

Martin, J.H. (1996). Computational approaches to figurative language. Metaphor and Symbolic Activity, 11, pp.85--100.

Mitchell, M. & Hofstadter, D.R. (1990). The right concept at the right time: how concepts emerge as relevant in response to context-dependent pressures. In Procs. 12th Annual Conf. of the Cognitive Science Society, pp. 174--181. Hillsdale, N.J.: Lawrence Erlbaum.

Nayak, N.P. & Gibbs, R.W., Jr. (1990). Conceptual knowledge in the interpretation of idioms. J. Experimental Psychology: General, 119 (3), pp.315--330.

Onishi, K.H. & Murphy, G.L. (1993). Metaphoric reference: when metaphors are not understood as easily as literal expressions. Memory and Cognition, 21(6), pp.763--772.

Richards, G. (1989). On psychological language and the physiomorphic basis of human nature. London: Routledge.

Riesbeck, C.K. & Schank, R.C. (1989). Inside case-based reasoning. Hillsdale, N.J.: Lawrence Erlbaum.

Roediger, H.L., III. (1980). Memory metaphors in cognitive psychology. Memory and Cognition, 8(3), pp.231--246.

Sweetser, E.E. (1987). Metaphorical models of thought and speech: a comparison of historical directions and metaphorical mappings in the two domains. In J. Aske, N. Beery, L. Michaelis & H. Filip (Eds), Procs. 13th Annual Meeting of the Berkeley Linguistics Society. Berkeley, CA: Berkeley Linguistics Society, pp. 446--459.

Sweetser, E.E. (1990). From etymology to pragmatics: metaphorical and cultural aspects of semantic structure. Cambridge, U.K.: Cambridge University Press.

Titone, D.A. & Connine, C.M. (1994). Descriptive norms for 171 idiomatic expressions: familiarity, compositionality, predictability, and literality. Metaphor and Symbolic Activity, 9 (4), pp.247--270.

Tomlinson, B. (1986). Cooking, mining, gardening, hunting: metaphorical stories writers tell about their composing processes. Metaphor and Symbolic Activity, 1 (1), 57--79.

Waggoner, J.E. (1990). Interaction theories of metaphor: psychological perspectives. Metaphor and Symbolic Activity, 5 (2), pp.91--108.