
Monthly Archives: March 2013

John Langshaw Austin (1911–1960) was White’s Professor of Moral Philosophy at the University of Oxford. He made a number of contributions in various areas of philosophy, including important work on knowledge, perception, action, freedom, truth, language, and the use of language in speech acts. Distinctions that Austin draws in his work on speech acts—in particular his distinction between locutionary, illocutionary, and perlocutionary acts—have assumed something like canonical status in more recent work. His work on knowledge and perception places him in a broad tradition of “Oxford Realism”, running from Cook Wilson and Harold Arthur Prichard through to J. M. Hinton, M. G. F. Martin, John McDowell, Paul Snowdon, Charles Travis, and Timothy Williamson. His work on truth has played an important role in recent discussions of the extent to which sentence meaning can be accounted for in terms of truth-conditions.

Here are some of my attempts to say something about Austin’s work:

Talking to @philosophybites about Austin.

Stanford Encyclopedia of Philosophy entry on Austin.

A piece specifically on Austin on language.

Here are some useful discussions by others:

A piece by Mark Eli Kalderon (UCL) and Charles Travis (KCL) on Oxford Realism.

A piece by M. G. F. Martin (UCL) on Austin’s Sense and Sensibilia.

In his Groundwork of the Metaphysic of Morals, Immanuel Kant attempts to derive what he calls a metaphysic of morals—an experience-free and absolute grounding for the core principles of morality. On his way to that end, Kant expresses a nuanced view about the secondary aim of making one’s arguments and positions accessible for popular consumption.

“This condescension to popular concepts is to be sure very laudable when the elevation to principles of pure reason has already been achieved to full satisfaction, and that would mean first grounding the doctrine of morals on metaphysics, but procuring entry for it by means of popularity, once it stands firm. But it is quite absurd to want to humor popularity in the first investigation, upon which depends the correctness of principles. Not only can this procedure never lay claim to the extremely rare merit of a true philosophical popularity, since there is no art in being commonly understandable if one relinquishes all well-grounded insight; this produces only a disgusting mish-mash of patched together observations and half-reasoned principles, in which superficial minds revel, because there is always something serviceable for everyday chitchat, but which insightful people disregard, feeling confused and dissatisfied without being able to help themselves; yet philosophers, who can very well see through the illusion, find little hearing when for certain occasions they decry this supposed popularity, in order, through acquiring determinate insight, finally to gain the right to be popular.” (Kant, Groundwork of the Metaphysic of Morals, Chapter II)

Kant’s view here is nuanced. He seems not to commit to the general principle that more serious, and less popular, philosophy must always precede the attempt to make one’s findings accessible. Rather, he is concerned to defend only a special case of the principle, applying to the derivation of principles of pure reason. Nevertheless, the passage raises important questions about the putative requirement for accessibility, questions that have reflexes outside philosophy in, for example, the exact sciences. To what extent should one aim, from the outset, for accessibility in one’s work? Must one’s claims and arguments be widely accessible in order to carry conviction, or could competence to follow a piece of argumentation—however presented—be congenitally limited? Might the operative thinking of a specialist be by nature incommunicable to the non-specialist, at least in advance of the additional labour of translation into common idioms?

Humans naturally seek insight and understanding. More generally, we aim to attain wisdom, in both practical and theoretical form. How, if at all, is it possible to acquire those goods? The following are three very broad answers to that question:

  1. On the basis of art. According to this type of answer, we might seek insight, understanding, or wisdom by, for example, appreciating, creating, and reflecting upon paintings, literature, or music.
  2. On the basis of philosophy. According to this type of answer, we might seek the same goods on the basis of reasoning and reflecting.
  3. On the basis of science. According to this type of answer, we might seek the goods on the basis of observation, experimental manipulation, reasoning, reflecting, and the construction of explicit—typically mathematicized—theories.

One question here is whether 1–3 each provides a way of attaining the desired goods. Can one really acquire insight or understanding on the basis of one’s engagement with fiction? Here’s Noam Chomsky’s answer to that question:

“Plainly, such an approach [the broadly naturalistic approach taken by the natural sciences, including mathematics] does not exclude other ways of trying to comprehend the world. Someone committed to it (as I am) can consistently believe (as I do) that we learn much more of human interest about how people think and feel and act by reading novels or studying history than from all of naturalistic psychology, and perhaps always will; similarly, the arts may offer appreciation of the heavens to which astrophysics cannot aspire.”

(Language and Thought, 1993, p. 42.)

Chomsky’s thought here appears to have at least the following three components.

  1. Understanding and insight may take various forms.
  2. Engagement with artistic or literary representations of aspects of the world—both fictional and non-fictional—may provide forms of insight and understanding distinct from those provided by the natural sciences.
  3. Different aspects of the world may be more or less susceptible to the approaches taken in the various natural sciences. For instance, it may be that some aspects of the world are too ill behaved for us to attain the kind of understanding of them that we now have of idealised physical systems.

Perhaps human psychology—or some aspects thereof—fall into that last category. Even if that’s so, it doesn’t follow that the naturalistic approach will offer no returns. Being less effective than fundamental physics is consistent with being very effective indeed. How does one find out whether an aspect of the world—for instance, human psychology, or human psychological capacity—is susceptible to the naturalistic approach? Chomsky’s thought seems to be that the only way to find out is to do one’s best, over a period, to pursue the naturalistic approach and to see where so doing leads. In some cases, pursuing the approach has thus far paid handsome dividends: vision science and Chomsky’s own field, theoretical linguistics, are two prominent examples of success. Neither field has come close to the successes of physics or mathematics. But that, to stress, is no objection.

Might there be other ways of discovering that an aspect of the world is insusceptible to the naturalistic approach? Perhaps, for example, philosophy could reveal that aspects of human psychology are immune to the charms of the naturalistic approach. The question raises two sub-questions.

First, what precisely are the limits of a naturalistic approach to a subject matter? One thought is that they are set by the role of observation and experiment. On this view, insofar as philosophy eschews observation and experiment—broadly a posteriori methods—in favour of reason and reflection—broadly a priori methods—it departs from the naturalistic approach. However, care is needed here, for pure mathematics is based largely upon a priori methods of investigation. And it would be absurd to preclude mathematics from naturalistic inquiry.

Second, and related, does philosophy, considered as an independent mode of investigation, have the reach required in order to establish that an aspect of the world is insusceptible to naturalistic inquiry? Wouldn’t its doing so require that it could attain sufficient understanding both of naturalistic inquiry and of target aspects of the world that it could establish lack of fit between them? And insofar as the target aspects of the world—and perhaps also the nature of naturalistic inquiry itself—are inaccessible on the basis merely of reflection, wouldn’t establishing that require something like observation and experiment?

Chomsky has himself expressed reservations about what he thinks of as non-naturalistic approaches to questions of this form. (Indeed, one of Chomsky’s great contributions to philosophy has been to emphasise concealed empirical dimensions to questions that might otherwise have seemed susceptible to more or less pure reason and reflection.) His reservations do not, in general, concern the potential benefits of other forms of inquiry. Rather, they concern their potential to figure in the attainment of the type of understanding we can sometimes gain through naturalistic inquiry.

“We are here speaking of theoretical understanding, a particular mode of comprehension. In this domain, any departure from a naturalistic approach carries a burden of justification. Perhaps one can be given, but I know of none. Departures from this naturalistic approach are not uncommon, including, in my opinion, much of the most reflective and considered work in the philosophy of language and mind, a fact that merits some thought, if true.” (Language and Thought, 1993, p. 42.)

To emphasise: Chomsky’s objection isn’t to a priori methods per se. For naturalistic inquiry is replete with the employment of mathematics, logic, and other modes of experience-independent reasoning and reflection. Rather, I take it that his objection is to hubris: excessive self-confidence. The concerns here are two. First, that a priori methods should not be applied inappropriately to questions that demand to be answered on the basis of empirical inquiry. Second, that even where a priori methods are appropriate, they must be handled with care. Although there are no grounds for general scepticism about the reliability of such methods—on pain of accepting an absolutely global scepticism—there is equally no reason to suppose that even our best attempts to employ those methods will not, on occasion, lead us astray.

Our first question was whether each of 1–3 could, in principle, furnish us with insight, understanding, or wisdom. A second, prior question is whether it is accurate to think of them as independent approaches. In fact, as Chomsky stresses, 2 and 3 have been closely linked throughout their histories and it is far from obvious that their approaches can safely be disentangled. An analogous claim might reasonably be made with respect to 1 and 2. Finally, recent work of Charles Fernyhough nicely illustrates the potential of scientifically informed literature to provide illumination.

The following case appears in a recent book by Timothy Williamson:

Consider an analogy. I am faced with an enormous pile of chocolates. I know that exactly one of them is contaminated and will make me sick; alas, I cannot tell them apart. I have a strong desire to eat a chocolate. I can quite reasonably eat just one, since it is almost certain not to be contaminated, even though, for each chocolate, I have a similar reason for eating it, and if I eat all the chocolates, I shall eat the contaminated one, and my sickness will be overdetermined. No plausible principle of universalizability implies that, in the circumstances, any reason for taking one chocolate is a reason for taking them all; the most to be implied is that, in the circumstances, any reason for taking one chocolate is a reason for taking any other chocolate instead. (Williamson, Knowledge and Its Limits, Oxford: OUP, 2000: 248)

Williamson’s analogy is directed at undermining some simplistic strategies for universalising judgments about what it’s reasonable for one to believe or do. In this case, his ultimate target is principles that attempt to move from its being reasonable to believe each of p and q and r and… to its being reasonable to believe the conjunction of p and q and r and…. Just as it might be reasonable to eat any of chocolate one, chocolate two, chocolate three, and so forth without it being reasonable to eat all of the chocolates, so it might be reasonable to believe any of p, q, r, etc., without it being reasonable to believe all of p, q, r, etc.

Related issues arise concerning knowledge. If you know something, then it must be true. For instance, if you know that this chocolate is uncontaminated, it must be that this chocolate is uncontaminated. Since one of the chocolates is contaminated, it follows that you can’t know, with respect to every chocolate, that the chocolate is uncontaminated. Suppose that you believed, of every chocolate, that the chocolate was uncontaminated. That would involve believing, of the contaminated chocolate, that it was uncontaminated. Since that belief would be false, you could not thereby know that the chocolate was uncontaminated. Hence, you can’t know, with respect to every chocolate, that the chocolate is uncontaminated.

Given that you can’t know, with respect to every chocolate, that the chocolate is uncontaminated, can you know that with respect to any of the chocolates? Considerations of parity suggest that you can’t: in order to know, of any of the chocolates, that the chocolate is uncontaminated, your evidential position with respect to that chocolate would plausibly have to be better than your evidential position with respect to the contaminated chocolate. However, it’s natural to think that, as the case was described, your evidential positions with respect to contaminated and uncontaminated chocolates do not differ in the required way. If that natural thought is correct, then it seems that you cannot know, with respect to any of the chocolates, that that chocolate is uncontaminated—at least, you can’t know that in advance of eating the chocolate and observing its effects on your health.

In spite of—or in advance of—such considerations, some people seem to be prepared to judge that you can know, with respect to an uncontaminated chocolate, that it is uncontaminated, at least if the number of uncontaminated chocolates is sufficiently high. Suppose, for example, that there are one million uncontaminated chocolates and only one contaminated chocolate. In that case, some people seem to think, it is possible for one to know that a given uncontaminated chocolate is so. However, unless we rig the case so that there is really very little danger of the given chocolate being contaminated—for example, by interposing a pile of 996,347 chocolates between the subject and the contaminated chocolate—the view that you can know here looks to be erroneous. The fact that some of us are willing to judge that you can know in the non-rigged circumstance requires explanation. However, there is no good reason to think that the explanation, or explanations, will make appeal to the correctness of the judgments.
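As an aside, the arithmetic behind “almost certain not to be contaminated” can be made explicit. The following Python sketch is purely illustrative; the numbers are taken from the million-chocolate example above:

```python
from fractions import Fraction

# One contaminated chocolate among one million uncontaminated ones,
# as in the example above.
uncontaminated = 1_000_000
contaminated = 1
total = uncontaminated + contaminated

# Probability that a single randomly chosen chocolate is contaminated.
p_bad = Fraction(contaminated, total)
print(p_bad)  # 1/1000001

# Eating one chocolate is almost certainly safe...
print(float(1 - p_bad))  # just under 1

# ...yet eating every chocolate guarantees eating the contaminated one,
# mirroring the point that reasonableness does not agglomerate.
```

Note that the calculation is the same for every chocolate, which is just the parity point: nothing in the probabilities singles out any particular chocolate as one you could know to be safe.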

Leaving aside Williamson’s larger aims, his case has the potential to raise a more specific practical question. Suppose that you knew that you were in a situation of the type that Williamson characterises. In that case, how many of the chocolates would it be reasonable for you to eat? I take it that the correct answer to that practical question is dependent on answers to a number of sub-questions, including the following. (1) How many of the uncontaminated chocolates is it reasonable to eat? (2) How many uncontaminated chocolates is it reasonable to believe there are? (3) What degree of sickness following ingestion of the contaminated chocolate is it reasonable to expect? (4) What degree of sickness is it reasonable to endure? But those are not obviously philosophical questions.

Thanks to Aidan McGlynn for suggesting that I consider Williamson’s analogy.

What is it like to eat a chocolate Hobnob? And how, if at all, can one come to know the answer to that question? How, that is, can one know what it is like to eat one? One obvious answer, at least applicable to normally endowed humans, would be: by eating one. Alternatively, and perhaps more carefully, one might try: through a combination of one’s being normally endowed with knowledge-acquiring powers and one’s undergoing a sufficient quantity of experiences of eating chocolate Hobnobs. But is that answer the only one? That is, is having such an experience of eating a chocolate Hobnob a necessary condition for knowing what it’s like to have the experience?

I wish here briefly to set out two more specific aspects of these questions, analogues of which have figured in some recent philosophical work. The first question is: must one have some sorts of experiences in order to come to know what it’s like to eat a chocolate Hobnob? It is a version of questions that have famously been pressed by Thomas Nagel (about what it’s like to be a bat) and Frank Jackson (about what it’s like to see the redness of a rose). The second question is: must one have the experience specifically of eating a chocolate Hobnob? It is a version of questions that have famously been pressed by David Hume (about his missing shade of blue) and Thomas Nagel (again, about being a bat, given that none of us has been). (Hume’s question concerned the possibility of acquiring a particular type of idea of a shade of blue that one had never experienced, given that one had experienced the surrounding shades.)

Let’s begin with the first question. Suppose that one had never eaten a biscuit, or any approximately biscuit-like foodstuff. Imagine, for example, that one had been locked in a biscuit-less canteen all of one’s life and fed only yellow paste. Would a verbal description of what it would be like to eat a biscuit put one in a position to figure out what it would be like? Could one even imagine what it would be like for one to experience eating a biscuit? And if one could, could one tell that that was what one was imagining, for example by reliably distinguishing that imagining, as the imagining of eating a chocolate Hobnob, from similar imaginings?

The second question aims to home in on a more specific requirement. Suppose that one had enjoyed a wide variety of experiences of eating (plain) Hobnobs. And suppose that one had also had many experiences of eating chocolate biscuits—for example, chocolate Digestives. Suppose, finally, that one had reflected carefully on those experiences and were normally endowed with powers of imagination. Would one’s experience, reflective knowledge, and imaginative capacity put one in a position to figure out what it would be like for one to experience eating a chocolate Hobnob? Here, one might think that one’s experiences might do so if the experience of eating a chocolate Hobnob were a sort of combination of aspects of the experiences of eating a (plain) Hobnob and eating a chocolate Digestive. For in that case, one might be able to construct an imagined experience of eating a chocolate Hobnob from its constituent aspects. Alternatively, however, one might wonder whether the experience of eating a chocolate Hobnob is merely a combination of aspects of experiences one could have had by other means. Perhaps, for example, the experience of eating a chocolate Hobnob involves a chocolate-aspect and a Hobnob-aspect. However, perhaps those aspects are similar to, but not identical with, the chocolate-aspect of eating a chocolate Digestive and the Hobnob-aspect of eating a (plain) Hobnob, respectively. In that case, although it may yet be possible to figure out what it would be like to eat a chocolate Hobnob, doing so might involve non-combinatorial operations of the imagination.

For further discussion, see David Hume, A Treatise of Human Nature, edited by L. A. Selby-Bigge, 2nd ed. revised by P. H. Nidditch, Oxford: Clarendon Press, 1975: Book 1, Part 1, Sect. 1; Thomas Nagel, 1974, “What is it like to be a bat?”, Philosophical Review 83: 435–450; Frank Jackson, 1982, “Epiphenomenal Qualia”, Philosophical Quarterly 32: 127–136; John O. Nelson, 1989, “Hume’s Missing Shade of Blue Re-viewed”, Hume Studies 15, 2: 353–364; Paul Snowdon, 2010, “On the what-it-is-likeness of experience”, The Southern Journal of Philosophy 48, 1: 8–27.

Philosophers spend a lot of time with other philosophers. This is so even when they work alone. For a good part of philosophical work involves reading, and reflecting upon, what philosophers have written. Naturally, such work rarely takes place in a vacuum, and philosophers often think carefully about the appropriate setting for engaging with one or another thinker: the most conducive music, lighting, seating, and so forth, for engaging with Descartes, Kant, or whoever. An important, although less oft-remarked, component of appropriate setting involves the selection of snacks and beverages that will best accompany the works of particular philosophers.

In some cases, such a decision may be made on prudential grounds: getting through the work of one or another philosopher may demand intake of a quantity of caffeine, and the promise of chocolate. In other cases, the decision may be made on aesthetic grounds: appreciation of the nuance of a thinker’s work may demand the careful application of Earl Grey tea, for example.

Here, I make some preliminary suggestions about some reasonable pairings, in the hope that they may be of service to further reflection and discussion.

Plato: Cheese and fruit; posset or wine.

Aristotle: Digestive biscuits; water.

Aquinas: Custard tarts; mead.

Descartes: Angel cake; lemonade.

Spinoza: Ginger nut biscuits; espresso.

Newton: Apple strudel; ginger beer.

Locke: Madeleine cake; Coke.

Leibniz: Chocolate Hobnobs; Ribena.

Berkeley: Doesn’t matter; orange juice.

Hume: Sandwiches; red wine.

Kant: Rich Tea biscuits; Lapsang Souchong tea.

“Who in the rainbow can draw the line where the violet tint ends and the orange tint begins? Distinctly we see the difference of the colors, but where exactly does the one first blendingly enter into the other? So with sanity and insanity.” Herman Melville, Billy Budd, Sailor.

Amongst the things we eat, we sometimes distinguish those things taken to constitute a meal from those things taken to constitute a snack. What drives us to distinguish things in that way? And do we thereby carve at a natural joint? The aim of this post is to sketch out some preliminary considerations that may figure usefully in more extended reflection on these questions.

The first issue has figured in some recent work in occupational and health psychology, on the hypothesis that the question might be connected with issues of appetite, including over-eating. Here are extracts from the abstracts of two recent studies:

“What determines whether a person perceives an eating occasion as a meal or snack? The answer may influence what and how much they eat on that occasion and over the remainder of the day. A survey of 122 participants indicated that they used food cues (such as the food quality, portion size, perceived healthfulness, and preparation time) as well as environmental cues (such as the presence of friends and family, whether one is seated, and the quality of napkins and plates) to determine if they were eating a meal rather than a snack… ” (Wansink et al., ‘”Is this a meal or snack?” Situational cues that drive perceptions,’ Appetite 54, 2010: 214–216.)

“The purpose of this study was to investigate definitions of snacking, perceptions of snack foods and snacking behavior… The majority of participants believed that snacking was best defined as food or drink eaten between main meals…. This study supports previous evidence that snacks are best defined relative to meals however it highlights a need for further research to be done examining the relationship between meals and snacks. The findings identify that not all snack foods provide extra calories and therefore snacking is not necessarily a predisposition to obesity.” (Chaplin and Smith, ‘Definitions and perceptions of snacking,’ Current Topics in Nutraceutical Research, 9, 1, 2011: 53–59.)

Wansink et al. mention earlier anthropological work by Mary Douglas, in which she claimed that “a key driver in meal/snack perception is whether a ‘mouth-entering’ utensil is used.” (Wansink et al., 2010: 214, discussing Douglas, Implicit Meanings: Essays in Anthropology, London: Routledge & Kegan Paul, 1975.)

Here, the following ideas are emphasised:

  • The suggestion is made that the snack-classification may be parasitic, in one or another way, on the meal-classification. (For example, it may be that a snack is food or drink consumed between meals and thus is not itself a meal.)
  • The suggestion is made that the snack–meal distinction, or distinctions, may be multi-faceted. (For example, it may depend upon consideration of timing, method of consumption, manner of presentation, food quality, preparation, &c.)
  • The focus is on discerning the cues subjects use in perceiving, or judging, things to fall into one or another classification, rather than attempting to capture the natures of the kinds subjects track.
  • It seems to be assumed that an appropriate method is to ask subjects to assess presented definitions of either “snack” or “meal”, rather than asking subjects to classify presented cases in one or another way, and then seeking to construct definitions on the basis of their classifications.

Although it’s not implausible that consideration of folk judgments might play a role in further reflection on the issues here, it’s not obvious that the folk are, just by virtue of their native lexical or conceptual competence, especially well placed directly to assess definitions. It’s plausible that one could be competent to classify events as meals or snacks without having an articulate view about the grounds for that classification. For instance, one might be able reliably to judge whether a dog is a spaniel without being able to say in more detail what features of dogs one’s classification relied upon. Similarly, one might be able reliably to judge whether some food, or its consumption, amounts to a snack without being able to say in more detail what grounds one’s classification.

Furthermore, it’s not obvious that either “meal” or “snack” is definable in an interesting way. Perhaps we can say something about the characteristics of all and only those things we count as meals or all and only those things we count as snacks. But it’s not obvious that what we can say would put someone who was entirely ignorant of meals and snacks, and of the ways of life into which eating either is woven, in a position reliably to sort things into one pile or the other. And it’s far from clear that our competence in classifying resides in our knowledge, and exploitation, of the kinds of definitions that might be able to serve such a purpose.

Suppose that, one way or another—either by direct questioning about presented definitions, or by eliciting the classification of cases—we had assembled a range of data about the conditions in which the folk count things, or fail to count them, as respectively meals or snacks. Still, that result might fall short of providing the basis for an account of the distinction for any of the following reasons.

  1. Some of our judgments are liable to be erroneous, even by our own, present standards. One reason for this is that our conceptual abilities are, like any human ability, fallible. We must distinguish between those judgments that reflect our competence and those that are due to extrinsic factors that limit our performance—crudely, grit in the mechanism. Another, related reason is that our judgments typically issue from our conceptual competence only in combination with the exercises of other capacities and abilities. Hence, even where a judgment reflects the proper operation of the system as a whole, it may not transparently reflect the operation of conceptual ability.
  2. Our conceptual capacities can develop over time, in light of reflection and also the gathering of additional evidence about the natures of the things we aim to classify. For example, new discoveries about the nature of gold—its chemical constitution, and so forth—can lead to changes in the types of cues we exploit in classifying stuff as gold. Not every such change in conception amounts to a change in concept: we allow that we can improve our grasp on conceptual requirements of which we were anyway dimly cognizant. It is possible that such developments might occur in our conception of the requirements for being a meal or a snack. We might think here of our changing views about the standing of the mark to which Douglas appeals: use of “mouth-entering utensils”.
  3. Some such developments amount to our coming to see that cues to which we appeal in classifying stuff as gold are only superficial characteristics of gold—features that a stuff might manifest without being gold, or that a stuff might fail to exhibit even though it is gold. Similarly, we might come to see that our initial judgment about a case of food consumption was based on merely superficial cues: perhaps it was a case of a fool’s meal—a snack with the superficial characteristics of a meal—or a fool’s snack—a meal with the appearance of a snack.

A question to be addressed by future work in this area, then, is this. Could there be fool’s meals or fool’s snacks?
