Here is a draft chapter co-authored with the polymathic Peter Lanzer. A philosophical reader will spot that we make no attempt to say what knowledge is, post Gettier. Ho hum.
Medical practice aspires to be based on medical knowledge. This chapter starts by investigating why this is so: what the value of knowledge is. Explicit knowledge is factive: what is known must be true, and truth is conducive to the success of medical interventions. But such knowledge has to be more than a matter of luck and hence depends on a pedigree: a justification or warrant. For explicit medical knowledge, science now dominates that pedigree. But medicine also depends on practical knowledge (knowledge-how), which does not reduce to explicit knowledge (knowledge-that) and whose pedigree is instead the development of a reliable skill. Forging a connection between practical and tacit knowledge, the chapter concludes with a discussion of how it is possible to teach and learn such knowledge.
Knowledge-that: propositional knowledge of facts or of what is the case. Such knowledge is often analysed as true belief with a suitable pedigree such as justification or warrant.
Knowledge-how: knowledge required to perform actions, sometimes called ‘know-how’.
Tacit knowledge: by contrast with explicit knowledge, a form of knowledge that cannot be fully put into words or codified in context-independent terms. On one view, this is because tacit knowledge is knowledge-how.
Introduction: the very idea of knowledge
In modern technologically and economically driven societies, knowledge is a critical asset. It is employed to achieve economic gains, social status, competitive advantage and professional expertise. Knowledge has become a commodity, the subject of trade and the object of manipulation. The nature of knowledge itself, however, is a subject of ongoing philosophical discussion and disagreement (see e.g. 1). This chapter will review some aspects of the nature of knowledge relevant to medical professional expertise. In particular it will examine medical knowledge through the lens of its pedigree: its justification or warrant. It will compare the nature and transmission of theoretical and practical medical knowledge.
Medical practitioners aim to base their interventions on a secure base of medical knowledge. Obvious though this point may seem – especially to readers of a medical textbook such as this one – it is worth reflecting on why this is. What is the value of medical, or any other, knowledge?
Answering the question of the value of knowledge is difficult. It will be approached in this section via a preliminary question: what is knowledge, or what does ‘knowledge’ mean? Now there might not be a very helpful or informative answer to this question. Imagine that someone asks what stickiness is or what the word ‘sticky’ means. One might reply by offering a word that means more or less the same, such as ‘tacky’. But this does not so much explain the concept of stickiness as swap one word for it for another. Alternatively, one might offer a more substantial explanation of the concept, such as ‘a tendency of a body to adhere to another on contact’. Such an explanation may more or less equate to the concept, but it is not obvious that a speaker who understands the word ‘sticky’ should be able to offer such a formal definition, nor that hearing the formal definition will teach the meaning of ‘sticky’ to someone who does not understand that concept, since it raises further questions, such as what the word ‘adhere’ means. Despite these difficulties in defining it, there is generally no difficulty in learning, understanding and teaching how to apply the word ‘sticky’. So one should approach the question of what knowledge in general is with some caution. There may not be a very helpful definition available.
Some general features of knowledge can, however, be learnt from particular examples. Suppose that a clinician knows that, because it is 5pm, her patient is due for medication. If so, she must hold, or take it to be, true that it is time for his medication. That is, she must at least believe it. (‘At least’ because we often use the word ‘believe’ when we are not sure that we do know something. “Do you know that?” “Well I believe it.”)
Second, if she really does know that her patient is due for medication, then he must really be due for medication. If she has knowledge, what she believes must be true. Knowledge is said to be factive. If someone knows that hypertension can be treated by a high grain and low fat diet then it follows that hypertension can be treated by a high grain and low fat diet. Equally if it is not true that hypertension can be treated by a high grain and low fat diet then no one can know this. This example points to a difference between knowledge and belief. Beliefs can be either true or false. Knowledge by contrast must be true. There is no species of false knowledge (by contrast with claims to knowledge which turn out not to be).
Third, if the clinician in the example has knowledge, her belief cannot merely be accidentally true. Neither a reckless guess nor an ungrounded hunch can support knowledge even if they turn out to be true. They might, too easily, not have been true. Knowledge, as Plato suggests in his dialogue Meno, is tethered to truth (2). In more recent terminology, knowledge ‘tracks’ truth (3).
A claim to knowledge can be undermined even when one does one’s best. Suppose the clinician believes that it is time for her patient’s medication because she knows that he takes medication every day at 5pm and she believes, by looking at the ward clock, that it is now 5pm. But suppose that the normally reliable ward clock has, in fact, stopped the day before. By lucky chance, however, it is now 5pm. If so, although the clinician has a true belief that it is time for her patient’s medication, she does not know it. Her belief is merely true by luck. If she had looked at the clock an hour earlier she would have formed the false belief that it was 5pm and so time for his medication. Being lucky will make no difference to how things seem to her, since she does not realise the clock has stopped, but an observer might say that she did not know the time; she was right only by luck. Her claim does not track the truth about the time.
To address this incompatibility between knowledge and merely lucky true belief, philosophers have long attempted to analyse knowledge as (1) a belief which is (2) true and which has (3) some extra ingredient which rules out luck. One longstanding definition, which can be found in Plato’s dialogue the Theaetetus, construes the third component as justification (4). In the more traditional order, knowledge is justified true belief. The idea is that needing a justification for a belief (for it to count as knowledge) should rule out merely lucky true beliefs. But this prompts a question: in the example of the stopped ward clock, does that work?
Consider this question for a moment. Does the traditional analysis that knowledge is justified true belief give the correct account of the case of the stopped clock? That is, does it fit our intuitions that the clinician does not know that it is time for her patient’s medication even though her belief that it is is true? Here is a clue: ask whether the clinician has a justification for thinking the time is 5pm and also ask whether her true belief is, despite that, lucky. If the answer to both is ‘yes’ then the traditional account does not address the problem of luck. If it does not, could some modification be made to the definition? We will return to this question shortly.
As well as trying to rule out merely lucky true beliefs, justification also plays a second role which is helpful for thinking about the challenge of generating medical knowledge. It provides a way, or a method, or a route, to aim at true beliefs. It is one thing to worry that one’s beliefs about the latest medication for coronary illness may not be right, but quite another to work out how to avoid being wrong.
Suppose a hospital authority issued an instruction that medical staff should replace any false beliefs they hold with true beliefs. On the face of it, this seems a good aim. But would the instruction help? Could one act on it? The problem is that ‘from the inside’ true beliefs and false beliefs seem the same. To hold a belief is to hold it to be true. To believe that something is not true is precisely not to believe it. Thus beliefs which are, in fact, false are not transparently so to someone who holds them. So the instruction is not helpful.
By contrast, the following instruction would help: medical staff should replace any beliefs that they hold without a justification with beliefs that do have justifications or grounds. In essence, this is the advice of Evidence Based Medicine (5). One can tell whether one believes something for a reason, or with a justification. And further, by aiming at having only justified beliefs, one should in general succeed in reaching true beliefs since justification is, in general, conducive to truth. Any ‘justification’ which did not increase the chances of a belief being true would not be a justification for it after all.
Although justification can play this second, helpful role of providing a concrete way of aiming at true beliefs it is not so successful in the first role mentioned above: ruling out being merely true by luck. As the example of the stopped clock illustrates, the clinician does have a justification for believing that it is 5pm: she can point to the clock. Nevertheless, her belief is only true by luck because, as the narrator of the film Withnail and I (6) says: even a stopped clock is right twice a day. So she has a justification for a belief and the belief is true but no one would say that she knows the time.
Although the definition of knowledge as justified true belief dominated philosophy for the 2,000 years since Plato, the problem that one might have a justified, true belief but still not have knowledge was first pointed out in the 1960s by the philosopher Edmund Gettier, using an example like this one (7). What follows?
It seems at first that, as a definition of knowledge, ‘justified, true belief’ must fail (because the clinician has a justified, true belief but she does not have knowledge; she is merely lucky). But a better response is to argue that what the example really shows is that the clinician does not really have a proper justification, a good enough justification, for knowledge. Knowledge can still be correctly understood as justified, true belief, but not everything that one might think of as a justification (in the example, looking at the ward clock) really is a justification (because the clock has stopped). If so, it is a little like the definition of stickiness from earlier: ‘a tendency of a body to adhere to another on contact’. Just as only someone who understands the concept of stickiness will understand the concept of adhering, so only someone who can understand the concept of knowledge can understand the kind of justification it needs. Knowledge and justification are a pair of concepts that one learns, in learning a first language, at the same time. The definition, whilst not explaining knowledge to someone who does not already understand it, highlights the essential connection between knowledge, truth and its grounds: its justification or warrant.
If so, medical knowledge has to have the right kind of justification or grounding. The route to knowledge to underpin medical practice will be, as suggested above, through suitable justification.
To return to the question first raised: why should medical practitioners aim to have knowledge of their subject? What is the value of knowledge? In the light of the discussion so far, part of the answer is this. Because knowledge, unlike say mere rumour or public opinion on which medicine might otherwise be based, is by definition true, aiming at knowledge is aiming at truth. Now it may seem obvious in a theoretical or contemplative discipline why one should aim at truth in one’s thinking. Cosmologists, for example, want to understand how the universe works just for the sake of understanding it. And hence they should aim at knowledge, and hence true beliefs, for their own sake.
But there is a further reason for medical practitioners to aim at truth. This is because medicine is a practical discipline. It aims not just to understand health and illness (as a merely theoretical or contemplative discipline) but, for example, to make a difference, to change people’s states of illness to health. And in general, actions – for example, medical interventions, or acts of caring – based on true beliefs are more likely to succeed than those based on false ones. So medical practitioners should aim at having true beliefs in order that their practical interventions in the lives of their patients are more likely to be successful. But because there are no intrinsic signs or symptoms of true beliefs that mark them out from false beliefs, the route to this is via a suitable justification which forms part of the conceptually rich idea of knowledge.
However, given the rapid evolution of our understanding of natural laws and nature’s first principles, the shelf-life of what, at any particular time, we take to be knowledge in science and medicine is limited, as illustrated by many examples where deeply held views about the functioning of the body have later been shown to be false. Because of the prominence of both protagonists, the following example will suffice. As reported by Philipp Melanchthon, Professor of Greek and versatile theologian at the University of Wittenberg and Luther’s close friend, Luther, who died on 18 February 1546, had suffered a severe attack of angina pain. In Melanchthon’s words: ‘Doctor Martin Luther called me (to see him) a year before his death at 2 in the morning. Coincidentally I was already up at the time. I went to see him and asked, what has happened. O, he said, I am experiencing great and dangerous pains. I asked him, if it was a stone. He replied: No, it is more than a stone. I felt his pulse, it was normal. I said: heart is all right, it is not a heart attack. Then he said: I have a severe tightness in my chest, yet I do not feel any heart constriction, and I do not feel any difficulties with the pulse. I pondered; it cannot be other than liquid rising up in the stomach entrance. That is the reason for the severe tightness, and the disease termed stabbing in the chest. . . . he went to a toilet; it was very cold. Immediately he felt, how he was seized by the cold and promptly by that same disease.’ (8). Despite Melanchthon’s exceptional erudition, his medical knowledge, based on a 16th-century understanding of the body, sounds absurd by present standards. At the time it no doubt seemed to be justified and hence likely to be cutting-edge knowledge. Given the rapid and accelerating advancement of knowledge in the life sciences and medicine, keeping up to date with the progress of knowledge and the correction of previous errors represents an increasing challenge for medical practitioners.
This section has considered a fundamental question: why should medical practitioners aim to have knowledge? ‘Unpacking’ the concept of knowledge suggests answers which connect to the value of truth, the role of justification as a way of aiming at truth and the practical ambitions of medicine to intervene in patients’ lives. There are further, complementary reasons that could have been explored. For example, to identify someone, such as a particular member of a multidisciplinary team, as knowing a patient’s history is to mark out what he or she says on the matter as reliable. Knowledge can be used to mark out whom to trust in cooperative disciplines like medicine (9). But the idea that knowledge has to possess some suitable justification, warrant or pedigree suggests a lens through which to consider medical knowledge. Is there any general account of the warrant for medical knowledge? The next section outlines a thumbnail sketch of one element of that pedigree: the scientific basis of theoretical medical knowledge.
A short history of theoretical knowledge
Systematic or theoretical knowledge has a long history. Ancient cultures such as the Sumerians in Mesopotamia, the Shang dynasty in China, the ancient Indus river cultures in India and the Egyptian empires, among others – all of them considerably older than the era of Socrates – had acquired some systematic knowledge (10). The ancient Greeks, especially Aristotle, wrote widely on natural phenomena based on careful observation. Subsequently, there was an intermingling of the ideas of Plato and Aristotle with Christian doctrines. Thus, for example, ‘neo-Platonism’ is the name for original ideas from Plato as interpreted by Plotinus (204-270 AD) and then mingled with Christian theology, such that neo-Platonist views came to be thought of as official Christian teaching.
Given the importance of something like a justification condition for knowledge, one important aspect of the history of knowledge claims about the natural world is the relation between justification by textual authority and empirical justification. Thus, ironically, whilst the physician Galen (129-199 AD) wrote extensively about empirical demonstration for knowledge claims, his medical works themselves took on an unquestioned textual authority for several hundred years afterwards.
In fact, this distinction in approaches to justification can also be seen within the Christian theology of the medieval period, between revealed theology, based on scripture, and natural theology, based on reason and experience. The latter, albeit with the intention of discovering more about the nature of God through God’s creation, served as a stimulus for the kind of empirical inquiry later found in science. Medieval scholars such as Roger Bacon stressed empirical justification rather than appeal to the revelation of God’s ultimate authority. This complicates the story, crystallised in Galileo’s encounter with the Inquisition, that empirical methods and Christian theology were incompatible.
On a standard account of the history of science, ‘modern science’ emerged or evolved from proto-scientific inquiry in a revolution in the seventeenth century, when there was a rise in the sophistication of observation using newly developed equipment (for example, telescopes and microscopes), a development of experimentation and the manipulation of natural phenomena in order better to understand them (such as the development of the air pump), and a development of the mathematical representation of nature in codifying laws of nature (11, 12). These developments are associated with Nicolaus Copernicus, Johannes Kepler, Galileo Galilei, Anton van Leeuwenhoek, William Harvey and many others. The foremost was Isaac Newton, whose opening sentence of Opticks, ‘My design in this book is not to explain properties of Light by Hypothesis, but to propose and prove them by Reason and Experiment: In order to which I shall premise the following definitions and axioms.’ (13), serves as testimony to the transition from metaphysical beliefs to down-to-earth exact science.
More recent historiography has questioned this account. For one thing, although Newton, for example, is widely embraced as one of the founders of modern physics, he himself conceived of his work as lying within a natural theological tradition, as an attempt to understand God through nature (14). For another, drawing boundaries around the ‘revolution’ has proved increasingly difficult, expanding it from one century to five in order to include all the relevant founding figures (15). It has thus been plausibly argued by Andrew Cunningham and Perry Williams that the origins of ‘modern science’ would be better situated rather later, in what the historian Eric Hobsbawm called the ‘Age of Revolutions’ (1760–1848), marked out by the French Revolution, the Industrial Revolution in the UK and the post-Kantian intellectual revolution centred on the German states (16, 17). That is the period, they argue, when scientists first embraced the label ‘science’ for a distinctive form of empirical inquiry into the natural world based on laboratories, and also separated it from natural theological purposes. Further, they argue that such a modification of the traditional picture should go hand in hand with a rejection of the idea that science is an inevitable teleological endpoint of human development, anticipated by earlier proto-scientific forms. Hence theirs is an account of the ‘modern origins of science’ rather than the ‘origins of modern science’. Science, on their view, is a comparatively recent invention, a local and historically contingent way of finding out about the natural world for secular purposes.
Such a view chimes with the trajectory of twentieth-century philosophy of science. By contrast with the optimism of the Vienna Circle in the 1930s that it would be possible to codify ‘the scientific method’, philosophers of science such as Karl Popper, Imre Lakatos and Larry Laudan have struggled to balance clear prescriptions for how scientists ought to proceed with a realistic account of the methods actually followed (18-21).
Consider, for example, Popper’s famous idea that scientific theories should be falsifiable and should aim at refutation rather than confirmation, on the grounds that a single observation can refute a general theory whilst no finite number of observations can confirm it. Such a prescription is complicated by the fact that any observation might itself be mistaken. Observations can only be made in the context of a number of other theories (for example, concerning the operation of the equipment used to make them) and hence, in the face of a recalcitrant observation, there is always an element of choice concerning which theory has been undermined (18). Increasingly, the task of shedding light on the methods of scientists has fallen to other scientists: social scientists specialising in the study of scientific practice (e.g. 22, 23, 24). But even if there is no single scientific method which can be set out from first principles but rather a number of related methods which have evolved especially over the last two hundred years, science has increasingly provided the only pedigree and justification for knowledge of nature.
Within medicine, this has been specifically emphasized in the rise of Evidence Based Medicine, whose hierarchy of evidence stresses the role of randomized controlled trials (RCTs), or the meta-analysis of several RCTs, over descriptive studies and the authority of respected figures (5). Medical practitioners are encouraged actively to review evolving empirical evidence for available treatment and management options, echoing the change from justification via historic textual authority to justification via empirical evidence in both natural theology and natural science.
Besides the rapid expansion of knowledge in the basic and life sciences over the last 200 years, more recently the retrieval, transfer and distribution of knowledge have been revolutionized through progress in information and communication technology (ICT). Knowledge has become one of the most formative and critical forces with global impact (e.g. on financial markets) (25).
The rise of ICT and the mass distribution of information raise a question about the lens with which this chapter has looked at knowledge: its justification or warrant. The challenge is this: if the end user of a piece of information is separated by many steps from its original production how can he or she ensure that it is justified? If knowledge is more than true belief, how can it be shared electronically across the globe?
One answer to this question is motivated in part by Gettier’s challenge to the idea that knowledge is justified true belief. It may seem natural to think that the person who knows something must also know its justification. But that may not be so. If the purpose of the third condition on knowledge is to counter the knowledge-undermining effects of luck this may not be a matter for each individual as long as general systems of knowledge production and transfer actually are reliable (whether or not anyone knows this additional fact). The rise of modern science has helped to promote the idea that knowledge can be a collective, as well as an individual, enterprise.
Theoretical and practical knowledge
The previous section offered a thumbnail sketch of the development of theoretical knowledge and the invention of science as a secular, laboratory-based empirical study of nature for its own sake. Theoretical knowledge is, however, merely one of the forms of knowledge on which modern medicine is based. The ancient Greeks distinguished between theoretical and practical forms of knowledge. The Greek word epistêmê is usually translated as theoretical knowledge or, we might now say, scientific knowledge. It contrasts with technê, which means something like craft or art. In fact, Aristotle suggested that there were five ‘virtues’ associated with knowledge, adding phronêsis, sophia and nous to epistêmê and technê. These are variously translated, but the first of them, phronêsis, is also relevant here. Phronêsis is practical wisdom: practical in the sense, like technê, of concerning how to change aspects of the world, but also practical in the sense (distinct from technê) of concerning how one ought to act. (For further reading see e.g. 26).
In the twentieth century, Gilbert Ryle emphasised the importance of a distinction which resembles the distinction between, on the one hand, epistêmê and, on the other, technê and phronêsis (27). Ryle’s distinction is between knowing-that and knowing-how. Further, he stressed the priority of practical or procedural knowing-how over declarative knowledge-that. Against what he called an ‘intellectualist legend’, he rejected the view that intelligent practical knowledge has to be based on underlying knowledge-that in the form of grasping a principle or proposition. Ryle argued, instead, that ‘[i]ntelligent practice is not a step-child of theory’ (27, p. 27, italics added). In fact, in stressing the priority of practical knowledge over theoretical knowledge, Ryle echoed the views of two other influential twentieth-century philosophers, Martin Heidegger and Ludwig Wittgenstein, both of whom stressed the practical grounding of intellectual knowledge (28, 29).
At about the same time, a different distinction between kinds of knowledge was promoted by the chemist turned philosopher of science Michael Polanyi. He contrasted tacit with explicit knowledge (Polanyi 1958). Polanyi starts his book The Tacit Dimension with the following slogan: ‘I shall reconsider human knowledge by starting from the fact that we can know more than we can tell’ (30, p. 4). But, as he immediately concedes, the slogan is gnomic. Does it carry, for example, a sotto voce qualification ‘at any one particular time’? Or does it mean: ever? He continues:
This fact seems obvious enough; but it is not easy to say exactly what it means. Take an example. We know a person’s face, and can recognize it among a thousand, indeed among a million. Yet we usually cannot tell how we recognize a face we know. So most of this knowledge cannot be put into words. (30, p. 4)
Polanyi’s work has prompted the study of tacit knowledge across a range of contexts, including business organizations and the professions (for further reading see the specialized literature, e.g. 31). But the nature of tacit knowledge has been contested (32, 33).
Bringing Ryle’s and Polanyi’s concepts together suggests the following idea: knowledge-that is explicit knowledge and, being explicit, is codifiable, accessible and promptly transferable via ICT, but it has no direct connection to action. By contrast, tacit knowledge, understood as knowledge-how, is critical to the performance of actions but is not codifiable in context-independent terms and hence is transmitted remotely, for example via ICT, only with difficulty.
Applying this distinction to medical knowledge, the tradition has it that physicians are masters of knowledge-that while surgeons are masters of knowledge-how. Appealing though this simple classification may appear, it is clearly wrong as a distinction of kind. While it is true that the professional competence of non-surgeons emphasizes knowledge-that and that of surgeons knowledge-how, the main difference between the two professional groups consists in the relative mix or the degree of required theoretical and practical knowledge or cognitive and psycho-motor skills. Taking into account the upsurge of new surgical techniques (e.g. minimally invasive, often catheter-based techniques) performed by both surgeons and non-surgeons such a simple distinction between the two groups fails. Clearly, in the new formats of medical disciplines, the right mix of specific knowledge-that and knowledge-how determines the relevant expertise.
Is practical knowledge a genuine form of knowledge?
The first section of this chapter illustrated the immiscibility of knowledge and luck and hence the need for knowledge to possess a suitable pedigree, warrant or justification. That need was illustrated for the case of knowledge-that, or explicit, or theoretical knowledge (using the example that it was time for a patient’s medication). If that is a general requirement for knowledge, does practical knowledge (or tacit knowledge or know-how) count as a proper species of knowledge?
If practical knowledge (or know-how) could be analysed as a form of theoretical knowledge (or knowledge-that), then the former could inherit the same sort of warrant or justification as the latter. So if practical knowledge of how to do something could always be encoded in a grasp of the truth of some principle of how to do it, it would clearly count as a sub-species of genuine knowledge. So does knowledge-how depend on prior knowledge-that? The assumption that it does is what Ryle calls the ‘intellectualist legend’, mentioned above. His argument against it is that if knowing how depends on grasping a piece of explicit knowledge, such as grasping a principle which encodes how to do something, then such grasping is itself a skill which can be exercised well or badly. And so, according to the intellectualist legend, there will need to be another piece of explicit knowledge which underpins how the first piece is grasped. But that leads to a vicious infinite regress. (For an opposing view see 34).
Instead, Ryle argues, knowledge-how is more basic than knowledge-that and stands in no need of explanation in terms of knowledge-that. Skills are fundamental. (It is a further question, on which we will not touch here, whether knowledge-that can be analysed as knowledge-how. But a lesson of the twentieth century seems to be that explicit knowledge-that does depend on practical knowledge-how.)
If knowledge-how is not underpinned by knowledge-that, what is the status of its justification? The connection between knowledge-how and skill, emphasised by both Ryle and Polanyi, suggests the answer. The novice’s first-time luck in sinking a golf ball is not a piece of knowledge-how because it is a matter of beginner’s luck. Skilful performance, by contrast, is successful because it is based on a longstanding capacity honed through practice and criticism. So the equivalent of the justification of knowledge-that is, for knowing how to do something, the development of a genuine skill: an enduring capacity or reliable ability.
Skills can be performed with a wide range of expertise: from the first steps of novices up to the highest levels of virtuosity of masters. Dreyfus and Dreyfus developed a five-stage model of the acquisition of skills, in which a novice starts by grasping and following context-independent rules but, in time and through practice, proceeds towards true practical expertise, which transcends rules and guidelines and depends instead on flexible recognition of the demands of particular situations (35). This model has been influential in medical education (e.g. 36). But although it starts with the following of rules, it is in accord with Ryle’s view that expert know-how does not depend on knowledge-that but is free-standing. The Dreyfus and Dreyfus model provides one element of an answer to the question raised in the final section of this chapter: how can practical knowledge be taught and acquired?
The explication and transfer of practical knowledge
Perhaps on the basis of Polanyi’s label ‘tacit’, it has often been assumed that while explicit knowledge is communicable, tacit knowledge is not. This intuitive, but questionable, connection between its tacit status and the difficulty of communicating it is a key feature of an empirical study, by the contemporary sociologist of science Harry Collins, of the know-how required to build a working laser. Finding that published accounts (i.e. explicit knowledge) of a newly developed laser were insufficient to enable other laboratories to build a working version, he discovered that the communication of knowledge of how to build a laser required a personal connection and was ‘capricious’.
In sum, the flow of knowledge was such that, first, it travelled only where there was personal contact with an accomplished practitioner; second, its passage was invisible so that scientists did not know whether they had the relevant expertise to build a laser until they tried it; and, third, it was so capricious that similar relationships between teacher and learner might or might not result in the transfer of knowledge. These characteristics of the flow of knowledge make sense if a crucial component in laser building ability is ‘tacit knowledge’. (37).
The view that practical knowledge is necessarily invisible and capricious is, however, questionable. Whether or not it is part of Polanyi’s contrast, it is not part of Ryle’s account of the distinction between knowing how and knowing that. The elicitation, codifiability, transformation and transfer of knowledge-that and knowledge-how are, however, works in progress in both basic and applied cognitive science. In fact, they constitute the very core of a currently very active research field of cognitive teaching and learning (38-40).
In the medical professional context, strategies and techniques have been developed and are now available to explicate and to teach some, if certainly not all, forms of expert knowledge-how. The former belief that procedural expertise in a professional context results merely from long repetitive practice plus some special gifts or talents is no longer tenable.
Thanks to the work of Ericsson and associates, the principle of deliberate practice as the foundation of virtually any professional expertise has been established (41). Deliberate practice focuses on supervised, conscious, dedicated and repetitive enacting of specific parts of a whole task, allowing first the grasping and then the fine-tuning of their performance. To be successful, deliberate practice requires comprehensible instruction, a sufficient number of repetitions and expert feedback for corrections. Thus, in the vast majority of cases, true professional expertise, and in some cases the discovery of novel knowledge, is paid for by long years of deliberate practice, with strokes of genius limited to important but rare instances.
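The cycle just described — enact a specific sub-task, receive expert feedback, correct, repeat — can be sketched as a simple feedback loop. This is an illustration of ours, not Ericsson’s formulation; the function names and the numeric error model are invented for the example:

```python
# Illustrative sketch only: deliberate practice as a feedback loop,
# after Ericsson and colleagues (41). The function names and the toy
# error model are our own assumptions, not Ericsson's formulation.
def deliberate_practice(perform, expert_feedback, repetitions: int) -> list:
    """Repeatedly enact a sub-task, accumulating expert corrections."""
    corrections = []
    for _ in range(repetitions):
        attempt = perform(corrections)        # enact the specific sub-task
        feedback = expert_feedback(attempt)   # expert identifies errors
        if feedback is not None:
            corrections.append(feedback)      # fine-tune the next attempt
    return corrections

# A toy learner whose residual error falls as corrections accumulate.
def perform(corrections):
    return max(0, 10 - len(corrections))  # residual error score

def expert_feedback(error):
    return "adjust technique" if error > 0 else None
```

Run long enough, the loop stops generating corrections once the residual error reaches zero: the point at which repetition without feedback adds nothing, which is precisely the contrast with mere repetitive practice drawn in the text.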
For such training and education to be effective, however, expert knowledge must first be elicited. Developed by cognitive psychologists, strategies such as observation, verbal reports and interviews have been employed to elicit and to represent expert knowledge, summarily denoted ‘cognitive task analysis’ (CTA) (42). Subsequently, more elaborate and didactically updated teaching programs have been designed (43; see also van Merrienboer and Kirschner 2007).
The transfer of expert knowledge is the essence of all teaching and learning. Over the years much has been published on the similarities and differences between explicit and implicit learning along with lists of theories and models extant in the literature (see e.g. 44, 45). For the purposes of this chapter, a simplified brief presentation of the techniques of knowledge transfer applicable to practice of medical education will suffice.
Traditionally, the transfer of medical, and invariably explicit, knowledge-that was based on lectures and textbooks. Students learnt via verbal instruction and reading texts. With recent advances in information and communication technology (ICT), new forms of visual and auditory learning assistance have been developed. However, despite these advances, the effective absorption of such explicit medical knowledge-that still requires, above all, data crunching and cramming.
By contrast with the focus on the transfer of explicit knowledge-that in medical education, the transfer of the mostly implicit knowledge-how has been largely ignored. With the rare exception of some excellent texts on surgery and surgical techniques, in the majority of textbooks knowledge-how has either been taken for granted and avoided, or its surface has merely been scratched, often by providing some ‘tips and tricks’ and time-proven recipes.
In practical education, knowledge-how has mostly been taught within the framework of the traditional ‘mentor-trainee’ relationship. Using this approach, knowledge-how has typically been represented as ready-made cognitive and/or psycho-motor skills. These skills are demonstrated by the mentor and subsequently imitated and emulated by the trainee. The efficiency of this approach is highly dependent on the ability and skill of the mentor to demonstrate, and of the trainee to imitate, embody and emulate. It is only recently that a cognitive approach to the transfer of knowledge-how, already proven to be effective in a number of other professions, has been adopted in some medical institutions to train physicians to perform specific surgical and anesthesia-related procedures.
To employ this approach successfully, expert knowledge-how has to be articulated and explicated (in simple, well-standardized procedures this may be a relatively straightforward task), verbalized as far as possible and employed to instruct the trainee to perform specific actions. These actions are then practiced with deliberation by the trainee, mostly in model or paradigmatic contexts, observed and, if necessary, corrected by the mentor. Straightforward tasks may be explicated and dissected into a linear series of premeditated steps; more advanced tasks can be divided into (repetitive) patterns for simplification. Highly complex tasks are based on the internalization of whole networks of skills and may therefore be difficult to characterize; only partial explication may be possible. The teaching of complex cognitive tasks in medicine has not yet been systematically approached. Recently, in the context of percutaneous coronary interventions (PCI), Lanzer and Taatgen have proposed developing and exploiting generic strategies and tactics allowing trainees to understand the procedural logic behind decisions in order to develop and hone their own judgment and decision-making skills (46). Residual, as yet uncodifiable, knowledge remains proprietary to individual highly skilled operators; such skills have occasionally been termed ‘intuition’.
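The decomposition just described — linear series for straightforward tasks, repeated patterns for more advanced ones — can be sketched as a small data structure. This is a hypothetical illustration; the structure mirrors the text, but the example step names are invented, not taken from reference 46:

```python
# Hypothetical sketch of decomposing a procedure into teachable steps.
# Linear series, repeated patterns and nesting mirror the text; the
# example step names used in tests are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    steps: list = field(default_factory=list)  # ordered sub-tasks
    repeat: int = 1                            # repetitive pattern

def flatten(task: Task) -> list:
    """Explicate a task as a linear series of premeditated steps."""
    if not task.steps:
        return [task.name] * task.repeat
    out = []
    for _ in range(task.repeat):
        for sub in task.steps:
            out.extend(flatten(sub))
    return out
```

A composite task flattens into the linear sequence a trainee would practice; highly complex tasks, whose skills form networks rather than trees, would resist this representation, which is the limit the text notes.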
The transfer of knowledge-how is an active process requiring understanding of the tasks which are proposed and practiced (47). Thus, the knowledge-how must be embodied, i.e. internalized through (deliberate) practice. Therefore the success of knowledge-how transfer is highly dependent on active, focused and vigilant participation of trainees in the learning process and their ability to internalize complex cognitive structures and processes. A number of cognitive techniques have been proposed to enhance and facilitate such transfer. Recently, for example, contextual teaching - by providing contextual and associative information - has been shown to improve the retention and understanding of knowledge-how (48).
While knowledge-how can sometimes be acquired on the job, in high-risk tasks especially a supervised ‘off-line’ teaching environment is (at least initially) required, with ‘on-line, real-time’ teaching often forming the final step of the process. Furthermore, tasks requiring complex cognitive skills, such as the implementation of case-related strategies and tactics, and/or those demanding finely tuned hand-eye coordination, are always best learned ‘off-line’ first.
Within the medical profession, the transfer of knowledge-that and knowledge-how should be better regulated, supervised and quality controlled. While, for example, in aviation all the steps of the process are tightly regulated (by the Federal Aviation Administration), in medicine, by comparison, it is mainly the processes associated with the acquisition of knowledge-that that have been quality controlled, and even there no international standards exist. Sadly, the acquisition of medical knowledge-how still relies in most cases on time-tested yet in principle still medieval ‘mentor-to-trainee’ practices. (To review proposals to develop a knowledge- and skill-based curriculum in percutaneous coronary interventions as an example of quality-controlled training in a high-risk medical activity, see 49.)
A final note: this chapter has compared and contrasted the pedigree and transmission of theoretical and practical, or explicit and tacit, knowledge. That binary opposition may, however, be insufficient to capture the subtleties of the requirements for medical knowledge as a whole. For example, skilled clinicians must have an understanding of the states of mind of their patients. They must also be able to understand the values that should govern medical interventions: those of their patients, their own and those of the broader society. It may be that knowledge in such cases can always be mapped onto the distinction between explicit and tacit, theoretical and practical. But that should not mislead one into thinking that the nature of the justification or warrant for a claim to know the values of a particular patient, for example, will be the same – of the same kind – as the justification for knowing the population wide efficacy of a kind of surgical intervention.
Medical practice aspires to be based on medical knowledge. This chapter started by investigating why this is so: what the value of knowledge is. Explicit knowledge is factive: what is known must be true, and truth is conducive to the success of medical interventions. But such knowledge has to be more than a matter of luck and hence depends on a pedigree: a justification or warrant. For explicit medical knowledge, science now dominates that pedigree. But medicine also depends on practical knowledge (knowledge-how), which does not reduce to explicit knowledge (knowledge-that) and whose pedigree is instead the development of a reliable skill. Forging a connection between practical and tacit knowledge, the chapter concluded with a discussion of how it is possible to teach and learn such knowledge.
Gendler TS, Hawthorne J (eds). Oxford Studies in Epistemology. Oxford: Oxford University Press, 2013.
Plato. Meno. Cambridge University Press: Cambridge, 2011.
Nozick R (1981) ‘Knowledge and Skepticism’, in his Philosophical Explanations, Oxford: Oxford University Press pp. 167–185
Plato. Theaetetus. Sophist. Harvard University Press: Cambridge, Massachusetts, 1988.
Sackett, D.L. Straus, S.E. Richardson, W.S. Rosenberg, W. and Haynes, R.B. (2000) Evidence-based Medicine: How to practice and teach EBM, Edinburgh: Churchill Livingstone
Withnail and I. Film, directed by Bruce Robinson, 1987.
Gettier E (1963) Is justified true belief knowledge? Analysis 23:121-123
Lanzer C, Lanzer P. Re: Angina pectoris revisited; exertional angina preceded Martin Luther's last stretch of a final journey; European Heart Journal (2016) 37, 206-216.
Craig E (1987) ‘The practical explication of knowledge’ Proceedings of the Aristotelian Society 87: 221-26
Van Doren C (1991) A history of knowledge; past, present, and future. Ballantine Books, New York
Butterfield H (1957) The Origins of Modern Science 1300-1800. London: The Free Press
Koyré A (1957) From the Closed World to the Infinite Universe, New York: Harper
Newton I (1704) Opticks: Or, a treatise of the reflections, refractions, inflections & colours of light. London: Printed for Sam. Smith, and Benj. Walford, Printers for the Royal Society, at the Prince’s Arms in St. Paul’s Church-Yard, 1704; reprinted New York: Cosimo Classics, 2007
Schaffer S (1987) ‘Godly men and mechanical philosophers: souls and spirits in Restoration natural philosophy’, Science in Context 1: 55—85
Porter R (1986) ‘The scientific revolution: a spoke in the wheel?’ in Revolution in History (ed. Roy Porter and Mikulas Teich), Cambridge: Cambridge University Press pp 290-316
Cunningham A and Williams P (1993) ‘De-centring the “big picture”: The Origins of Modern Science and the modern origins of science’ BJHS 26: 407-32
Hobsbawm EJ (1962) The Age of Revolution, 1789—1848, London: Weidenfeld & Nicolson
Lakatos, I. (1970). Falsification and the methodology of scientific research programmes. In Criticism and the Growth of Knowledge (ed. I. Lakatos and A. Musgrave). Cambridge: Cambridge University Press, pp. 91–195
Laudan, L. (1977). Progress and its Problems. Berkeley, CA: University of California Press
Popper, K. (1972) ‘Conjectural knowledge: my solution to the problem of induction’ in his Objective Knowledge, Oxford: Oxford University Press pp1-17
Kuhn, T.S. (1970). ‘Logic of discovery or psychology of research?’ In Criticism and the Growth of Knowledge (ed. I. Lakatos and A. Musgrave). Cambridge: Cambridge University Press, pp. 1–23
Barnes B (1974). Scientific Knowledge and Sociological Theory. London: Routledge
Bloor D (1976). Knowledge and Social Imagery. London: Routledge
Shapin S, Schaffer S (1985) Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life. Princeton: Princeton University Press
Foray D (2006) The economics of knowledge. The MIT Press: Cambridge, Massachusetts.
Parry, R (Fall 2008 Edition) "Episteme and Techne", The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.), URL = http://plato.stanford.edu/archives/fall2008/entries/episteme-techne/; accessed April 22, 2013.
Ryle G. The concept of mind. Hutchinson, London, 1949.
Heidegger, M. (1962) Being and Time, Oxford: Blackwell
Wittgenstein, L. (1953). Philosophical investigations. Oxford: Blackwell
Polanyi M. Personal knowledge; towards a post-critical philosophy. Chicago: University of Chicago Press, 1958.
Sternberg RJ, Horvath JA. Tacit knowledge in professional practice; researcher and practitioner perspectives. Lawrence Erlbaum Associates, Publishers: Mahwah, London, 1999.
Collins H (2010). Tacit and Explicit Knowledge. Chicago: University of Chicago Press
Gascoigne N, Thornton T (2013) Tacit Knowledge. Durham Acumen
Stanley, J. and Williamson, T. (2001) ‘Knowing how’ The Journal of Philosophy 98: 411-444
Dreyfus SE and Dreyfus HL (1980) A Five-stage Model of the Mental Activities involved in Directed Skill Acquisition. Berkeley, CA: University of California
Benner P (1984) From Novice to Expert, Excellence and Power in Clinical Nursing Practice. Menlo Park, CA: Addison-Wesley
Collins, H. Changing Order: Replication and Induction in Scientific Practice, London: Sage, 1985.
Holoyak KJ, Morrison RG (eds). The Cambridge handbook of thinking and reasoning. Cambridge University Press: Cambridge, 2005.
Ericsson KA, Charness N, Feltovich PJ, Hoffman RR (eds) The Cambridge handbook of expertise and expert performance. Cambridge University Press, Cambridge, 2006.
Ericsson KA (ed). Development of professional expertise. Cambridge University Press, Cambridge, 2009.
Ericsson KA, Krampe RT, Tesch-Römer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev 1993; 100:363-406.
Schraagen JM. Task analysis. In: Ericsson KA, Charness N, Feltovich PJ, Hoffman RR (eds) The Cambridge handbook of expertise and expert performance. Cambridge University Press, Cambridge, 2006; pp 185-201
van Merrienboer JJG, Kirschner PA. Ten steps to complex learning; a systematic approach to four-component instructional design. Routledge, London, 2007.
Proctor RW, Dutta A. Skill acquisition and human performance. Sage Publications, Thousand Oaks, 1995.
Ormrod JE. Human learning, 5th edn, Pearson Prentice Hall, London, 2009.
Lanzer P, Taatgen N (2013) Procedural knowledge in percutaneous coronary interventions. J Clin Exp Cardiolog, S6 http://dx.doi.org/10.4172/2155-9880.S6-005
Anderson JR. Acquisition of cognitive skill. Psychol Rev 1982; 89:369-406.
Taatgen NA, Huss D, Dickison D, Anderson JR. The acquisition of robust and flexible cognitive skills. J Exp Psychol Gen 2008; 137:548-565.
Lanzer P, Prechelt L . Expanding the base for teaching of percutaneous coronary interventions; the explicit approach 2011; 15:372-380.
[T]he pragmatism that I explicate in this book is concerned with nitty-gritty issues in the scientific disciplines. Based largely on the pragmatism of William James, scientifically inspired pragmatism has no a priori commitments that oblige it to take a side in metaphysical debates such as those between scientific realists and antirealists. Neither does it deny the value of the substantive philosophical distinctions (such as appearance versus reality or subject versus object) that are explored in such debates. [ibid: 25]
Radical empiricism is a view proposed by William James that asserts that experience rests on nothing outside of itself (i.e., neither behind nor beyond all experience). The metaphysical distinctions that we make in order to see how things hang together (such as subjective versus objective) are made using the resources available to experience. [ibid: 239]
Radical empiricism is a theory about the sufficiency of experience for making metaphysical claims. [ibid: 52]
As well as this general claim about the experiential limits of metaphysical distinctions, two other ideas play an important role in the machinery of the book. One is Arthur Fine’s deflationary approach to debates between scientific realists and anti-realists in the philosophy of the physical sciences. Fine argues that both realists and anti-realists accept a common core. Both sides accept the truth claims made by scientists which Fine calls the ‘natural ontological attitude’. But then both interpret these in additional metaphysical terms.
Anti-realists provide a reinterpretation of truth. This might be a social constructionist account of scientific practice. Or it might be the claim that the truth of a belief consists in its coherence with other beliefs. Such modifications re-interpret the common core. Fine’s characterisation of what a realist adds to the common core is simpler: ‘what the realist adds on is a desk-thumping, foot-stamping shout of “Really!”’. The reason for this is that:
The realist, as it were, tries to stand outside the arena watching the ongoing game [of science] and then tries to judge (from this external point of view) what the point is. It is, he says, about some area external to the game. The realist, I think, is fooling himself. For he cannot (really!) stand outside the arena, nor can he survey some area off the playing field and mark it out as what the game is about.’ [Fine 1986: 131]
Zachar summarises the realist side of this disagreement thus:
What then is the difference between scientific realists and antirealists? What is the contrast between these two philosophical positions if it is not about what scientific statements are true? According to Fine, the key contrast between the scientific realist and the antirealist is that along with the various considerations that are relevant in accepting as true a statement such as “bipolar disorder has a genetic component,” a scientific realist wants, in addition, to assert some special relationship called correspondence to reality. For example, in addition to accepting all the reasons for agreeing that bipolar disorder has a genetic component, the scientific realist stomps his foot and shouts out—“Bipolar disorder really does run in families, really!” [Zachar 2014:51]
A third element of the framework is what Zachar calls ‘instrumental nominalism’.
If we were to specify what all true statements have in common, the result—called the universal essence of Truth—should be fully present in every possible true statement. Nominalists reject such universals and attend instead to the variability and plurality that exist within concepts such as truth... Instrumental nominalism is the view that abstract metaphysical concepts (which are best defined in terms of contrasts such as subjective versus objective) can be allowed as long as we are clear on the purpose for making the distinction. [ibid: 238]
Zachar uses instrumental nominalism as a means of avoiding hasty essentialist thinking. It fits with the idea that metaphysical distinctions should be tied to experience. For example, although he commends Wakefield’s harmful dysfunction analysis of psychiatric disorder as ‘parsimonious, elegant, and useful’, his key criticism is that it goes beyond possible experience.
Horwitz and Wakefield use a conceptual analysis of what we should and should not be expected to do to identify what lies within our biologically designed, naturally selected range of behaviors. According to them, talking to family members without intense anxiety lies in this range, but handling snakes without intense anxiety does not. Only psychiatric symptoms that interfere with what we should naturally be expected to do are to be considered objective dysfunctions. In this analysis the distinction between disordered and normal is being made not by discovering an objective dysfunction but by intuition. The HD analysis cannot, therefore, be reliably used to do what it was proposed to do—factually demarcate valid psychiatric disorders from the larger class of problems in living.
The objection is not that the analysis is false or incoherent. Rather, the appeal to biological dysfunctions to underpin a notion of disorder inverts actual explanatory priority. Intuitions about what is and is not a disorder drive judgements about selective history rather than the other way round. So the objection is that the model is a gratuitous metaphysical explanation which goes beyond clinical experience.
Zachar adopts a similarly anti-essentialist view of psychiatric taxonomy in general. Rather than assuming that there must be a common essence behind diagnostic categories, he suggests that the actual pattern of overlapping similarities and differences exhausts the facts of the matter. And hence he commends an ‘imperfect community’ model of kinds rather than explanations of kinds which dig beneath the clinical surface. A similar approach guides his detailed discussion of particular diagnostic categories.
I think that this is an admirable approach to the philosophy of psychiatry. Explanatory minimalism is a hygienic approach to the insight philosophy can provide into other disciplines. In the next section I will outline a different route to the same metaphilosophical approach: Wittgensteinian philosophy. It can seem, however, that it falls prey to an accusation of idealism. I will argue that it need not but then return, in the final section, to ask whether the same is true of Zachar’s account.
Wittgensteinian anti-explanatory minimalism
In an early passage in the Investigations Wittgenstein suggests that a failure to pay attention to the details of language and practice is not simply the result of carelessness:
If I am inclined to suppose that a mouse has come into being by spontaneous generation out of grey rags and dust, I shall do well to examine those rags very closely to see how a mouse may have hidden in them, how it may have got there and so on. But if I am convinced that a mouse cannot come into being from these things, then this investigation will perhaps be superfluous. But first we must learn to understand what it is that opposes such an examination of details in philosophy. [§52]
Philosophical theory may lead one to ignore practical details because of a prior belief that they cannot be relevant. But, the suggestion goes, the details might contain just what was needed to resolve one’s philosophical difficulty.
Cora Diamond provides an extended discussion of Wittgenstein’s meta-philosophy which includes an interpretation of this passage [Diamond 1991]. She suggests, following a gnomic comment from Wittgenstein, that the tendency to be blinded to important details by philosophical theory is a mark of philosophical realism. This is a surprising remark because, in philosophical debates about the reality of the past, or distant spatio-temporal points, or mathematics, realism is usually thought of as the non-revisionary position, the position which most fits everyday language. Nevertheless, realism fails to be realistic when it goes beyond the everyday phenomena and instead attempts to explain them by postulating underlying processes or mechanisms. Diamond suggests that the central ambition of Wittgenstein’s philosophy is to be realistic whilst eschewing both realism, on the one hand, and empiricism, on the other.
Diamond uses two examples from outside Wittgensteinian philosophy to clarify the distinction between realist and realistic philosophy. One is Berkeley’s discussion of matter in his Three Dialogues. Hylas, the philosophical realist, argues that the distinction between real things and chimeras - mere hallucinations or imaginings - must consist in a fact which goes beyond all experience or perception. For this reason, philosophy has to invoke the philosophical concept of matter to explain the difference. The presence or absence of matter is beyond direct perception or experience, although perception can provide evidence of its presence or absence. This however presents Philonous, who speaks on behalf of a realistic approach, with an opening for a criticism. Because of its independence from perception, matter cannot explain the distinctions that we actually draw between reality and chimeras. But nor, given our actual practices of drawing a distinction, is such a further philosophical explanation necessary. The practical or epistemological distinctions which Hylas can rely on are also available to Philonous without commitment to the philosophical account of matter. The mouse, in this case, is the distinction and the rags, which Hylas is convinced cannot explain the distinction, are the practical distinctions actually made.
The second example concerns a more recent case of philosophical realism. The distinction here is that between laws of nature and merely accidentally true generalisations. Peirce argues that this distinction must consist in the presence or absence of active general principles in nature. These can be used to explain the reliability of predictions based on laws. But:
The reply of a realistic spirit is that an active general principle is so much gas unless you say how you tell that you have got one; and if you give any method, it will be a method which anyone can use to distinguish laws from accidental uniformities without having to decorate the method with the phrase “active general principle”. Peirce of course knows that there are such methods, but assumes that his mouse - properly causal regularity - cannot conceivably come into being from the rags: patterns of observed regularities. [Diamond 1991: 48]
In both these cases, realist explanation is rejected. This rejection does not depend on nominalist scruple, however. Diamond suggests that closer attention shows that realist explanations are wheels that can be turned although nothing else moves with them. They cannot serve as explanations of what the pre-philosophical difference in either case really comprises since their presence or absence is not connected to the practices which they were supposed to explain. Their presence or absence could make no difference.
There is, however, an obvious objection which needs to be countered. The problem is that an opposition to philosophical realism might be thought to comprise a form of idealism, anti-realism or, more relevantly in this case, social constructivism. Here is the general danger.
Diamond’s account of the realistic spirit has idealist connotations for two reasons. Firstly, and most obviously, she selects Berkeley to illustrate a realistic approach to philosophy. Despite Berkeley’s own claims to the contrary, his opposition to matter is not simply a rejection of one philosophical explanatory theory which leaves everything else, including our normal views of the world, unchanged. Instead, he advocates a revisionary idealist metaphysics. Secondly, Diamond characterises Peirce’s account of active principles as a ‘belief in a connection supposed to be real, in the sense of independent of our thought, and for which the supposed regularity is evidence’ [ibid: 50]. This suggests that the object of Diamond’s criticism is the mind-independence of Peirce’s conception of active principles. In both cases the examples of a realistic opposition to philosophical realism appear to support a form of idealism.
Whilst Diamond’s account may encourage an idealist interpretation, idealism is not a necessary ingredient of Wittgenstein’s opposition to philosophical realism. What matters in both these cases, if they are to illustrate philosophical minimalism, is the opposition to realist explanations. But anti-realist or idealist explanations are just as much to be rejected. Wittgensteinian minimalism opposes speculative metaphysical explanation and only thus realism (or anti-realism). I will clarify this by examining one further passage from Diamond’s account.
This is how Diamond characterises the realist account of matter which should be rejected as unrealistic:
For Hylas, real existence is existence distinct from and without any relation to being perceived; and so if the horse we see (in contrast to the one we merely imagine) is real, it is because its sensible appearance to us is caused by qualities inhering in a material body, which has an absolute existence independent of our own. The judgment that the horse is real and not imaginary, not a hallucination, is thus a hypothesis going beyond anything we might be aware of by our senses, though indeed it is clear on Hylas’s view that we must use the evidence of our senses in trying to tell what is real. Still, it is not what we actually see or hear or touch that we are ultimately concerned with in such judgments; and this because however things appear to us, it is quite another matter how they are. [ibid: 47]
This passage contains two characterisations of what it is for something to be real rather than imaginary. One is the claim that reality has ‘an absolute existence independent of our own’. The other is that reality goes ‘beyond anything we might be aware of by our senses’. It is ‘not what we actually see or hear or touch’ and ‘however things appear to us, it is quite another matter how they are’. Ignoring for the moment the qualification ‘absolute’, denying that reality has an existence independent of our own - the first characterisation - would amount to idealism. By contrast, the second characterisation goes beyond an everyday affirmation of the mind independence of the real. It presupposes a philosophically charged and revisionary account of perception in which reality always lies beyond our senses. Thus its rejection is merely the rejection of a philosophical explanatory theory and not itself a piece of revision.
Thus a minimalist or realistic criticism of philosophical realism need not succumb to the criticism that it confuses epistemology and ontology. The rejection of realist explanations of the distinction between real things and illusions or between causal laws and accidentally true generalisations does not imply that these distinctions are constituted by the discriminations we make, by their epistemology. On the other hand, the distinctions are not matters which lie beyond our ways of detecting them. They are not independent of our practices in that complete and absolute sense. (If this is what Diamond means by denying absolute independence, then neither rejection is tainted with idealism or constructivism.)
Does Zachar’s pragmatism slight the independence of reality?
In the previous section, I suggested that Cora Diamond’s account of Wittgenstein’s support of a realistic spirit by contrast with realism can seem to undermine the independence of reality but should instead be construed as a rejection of explanations which go beyond the distinctions made in practice. My purpose in juxtaposing Diamond’s account of Wittgenstein with Peter Zachar’s framework of ideas is to highlight two similarities. First, the similarity of their minimalism with respect to philosophical explanations. Second, the danger that the resulting account may seem, at least, to slight the independence of reality. Does Zachar also escape that charge?
It is clear that one central aim of the book is to avoid such a charge. The first chapter describes the so-called ‘science wars’: sociological accounts which may or may not have a debunking relation to scientific claims. On one view, accounts of the resolution of natural scientific disputes offered in sociological terms imply that physical nature itself is socially constructed. Zachar offers a less metaphysically charged rapprochement:
One important realization on the part of some Science Wars participants was that an analysis of metaphysical terms such as “reality” and “objectivity”—terms that are used to theorize about scientific theories—can be critical without being motivated by an underlying hostility to the truth claims of scientists. [ibid: 11]
Hence later, when discussing whether his suggestion that distinctions should be framed within experience, rejecting forms of realism that go beyond such experiential limits, itself traps subjects within experience, he connects his nuanced view back to his account of the science wars.
Does radical empiricism of this sort imply that we are trapped within our own experience along the lines of a philosophical idealism? If so, then we are back to the debates of the Science Wars and the claim that nature is constructed by us, not discovered. According to the radical empiricist, however, we are not “trapped” in experience, and making distinctions such as objective versus subjective or real versus imaginary helps us to understand why. [ibid: 34]
On the other hand, some remarks do seem to slight reality. For example, when discussing facts he draws a distinction – within the experiential realm – between fact and fiction. But he then goes on to say something more obviously metaphysically charged.
What Holmes said to Watson the morning after they dispatched Colonel Sebastian Moran was never a fact, but what Conan Doyle ate and drank on the day he finished The Adventure of the Empty House was a fact once, although it is likely no longer even a potential fact because it is not publically ascertainable. That information has been lost. [ibid: 109]
But the latter remark does seem to be revisionary: a form of anti-realism about the past rather than a natural ontological attitude. (One way to test intuitions on this is to ask whether bivalence applies such that, despite there being no present evidence either way, still either Doyle did or did not eat breakfast that day.) It is one thing to stress the experiential realm when examining philosophical distinctions. It is quite another to limit reality to what is currently experientially accessible, whether directly or via evidence.
I think it is unclear whether Zachar successfully treads the fine line between explanatory minimalism and idealism. Take the following example of Zachar’s commendation of a coherence theory of truth:
In philosophical terms, radical empiricism advocates for a version of the coherence theory of truth. One of the ideas behind a coherence theory is that what we consider to be true beliefs are important in evaluating new beliefs whose truth is not yet assured. New propositions that seem to readily cohere with what we already believe are going to be accepted more easily than propositions that contradict currently accepted knowledge... Correspondence theories sometimes give the impression that in knowing what is really there we get beyond evidence and experience. Coherence, in contrast, works from within experience. [ibid: 36-7]
The contrast case with correspondence suggests that a theory of truth is in the business of saying what truth is: ontology rather than epistemology. But the account of coherence concerns ‘what we consider to be true beliefs’ or what is ‘going to be accepted more easily’: epistemology rather than ontology. Putting the two together suggests a shotgun wedding of what is independent of, and what is dependent on, human judgement.
Facts, objectivity and the experiential limits of pragmatic philosophy seem to be at the heart of the venture. But avoiding both metaphysical excess and a shotgun wedding is tricky. Consider this passage on the notion of what is objective:
The metaphysical concept of the objective, however, is a useful tool for understanding experiences of resistance to preference. The concept of the objective is partly inspired by and reappears with the recurrence of such experiences in one or more members of a community, but it is not constituted by them. Whenever people start talking seriously about the objectivity of such things as the Copernican model, the Apollo moon walks, or global warming, the notion that someone’s preferences are being resisted is not far away. The resistance to what we prefer is not The Objective in an elaborate metaphysical sense. Metaphysical elaborations go beyond their experiential bases, but nevertheless, taking account of those experiences is useful for bringing the lofty concepts down to earth. Something important occurs when the world is not the way we want it to be, but that is a very minimal, even deflated, notion of the objective—one that does not require getting outside of experience. [ibid: 109]
My worry about this passage is that it starts with a notion connected to ‘the objective’: that one may wish certain beliefs not to be true and yet nevertheless they are true. This alone does not constitute what we mean by objectivity. It is ‘a very minimal, even deflated, notion of the objective’ although it is not ‘far away’ from it. But then the only hint at what would constitute it is ‘The Objective in an elaborate metaphysical sense’, which is not something that Zachar is prepared to set out for the reader. So what is the sense of objective ‘that does not require getting outside of experience’? This passage seems to contrast what it admits to be an inadequate account of objectivity with something that is merely beyond the pale according to the metaphilosophical framework of the book.
The same sort of problem occurs in trying to set out how a diachronic approach can balance the aim of remaining within the experiential with a satisfactory account of mind-independent objectivity:
What about the notion that truths about the world are true independent of what we believe about them, and therefore reality is more than what we experience it to be? Is this something that the radical empiricist cannot account for? No—it cannot be that either. Events from the history of science work well here… Taking a historical perspective allows us to see that our past experience was limited. We can reasonably infer that future generations, with their advanced learning, will see the ways in which our current experience is limited. Reality is one of the names we give to what lies outside those limits, but that naming occurs within experience as a result of experience. [ibid: 36]
The significant phrase is ‘Reality is one of the names we give to what lies outside those limits’. Who are ‘we’? Zachar may mean realist philosophers who mistakenly, or pragmatically unhelpfully, do not accept the metaphilosophical framework of the book. If so, assuming the truth or pragmatic success of the framework, that attempt to name what lies beyond the limits of experience must fail. If, on the other hand, ‘we’ refers to ordinary non-philosophers, there must be some success in this naming. But what, according to radical empiricism, can be named beyond the limits of experience? And if nothing can, how can the inchoate thought that experience can mislead (which is surely what gives this passage its drama) be captured, even given a diachronic perspective?
Later he says that:
One can accept this historically informed inference without imagining a getting beyond the veil of ideas. [ibid: 103]
This picks up a repeated theme that it is tempting to think that we are ‘trapped’ within a veil of ideas or experience or beliefs. For example:
The chapter ends with an accounting of the extent to which everyone has to rely on communities and recognized experts to know what to accept and how this psychological fact raises the worry that we are all trapped, not so much behind a veil of ideas but within the boundaries of our chosen community’s beliefs. [ibid: 19 italics added]
The modern dilemma is not that we are trapped behind a veil of ideas and locked into our own subjectivity to such an extent that the objective world is in continual doubt. [ibid: 97 italics added]
It is important to be cautious about taking the veil of ideas metaphor too literally. For a radical empiricist experience is not a veil of distortion that needs getting beyond. According to such an empiricist we can justify making distinctions between subject versus object and appearance versus reality, but those distinctions are made within experience. [ibid: 102 italics added]
Something important occurs when the world is not the way we want it to be, but that is a very minimal, even deflated, notion of the objective—one that does not require getting outside of experience. [ibid: 109]
In each case, Zachar suggests that it is misleading to think that we are so trapped. But it is not clear to me that he offers enough of a diagnosis of why, despite the temptation to think that we are, we are not. For example, the injunction that it ‘is important to be cautious about taking the veil of ideas metaphor too literally’ suggests that the metaphor should be afforded some insight into the human predicament: that there is some sort of veil blocking our view of reality. Moving the concern from a Cartesian solitary veil of ideas to a communal set of beliefs does not seem enough of a transformation to yield philosophical ease. Given that Zachar’s key idea is to draw distinctions only within the experiential realm, the worry that the experiential realm somehow entraps human subjects, blocking knowledgeable access to reality, surely needs more philosophical diagnosis.
Furthermore, it is not that there are no diagnostic accounts to ease this intellectual cramp. The most familiar is disjunctivism. It holds that there is more to experience than what is common between veridical and illusory experience. When all goes well, what one experiences is the layout of the world. So when all goes well, there is no veil, simply direct access to objective reality. This is not to say that disjunctivism is either without difficulties or the only game in town. But it would be one way in which to begin to think through the issues raised by the very use of words such as ‘trapped’ or ‘veil of ideas’. The package of ideas of which they form a part is mortal poison to Zachar’s commendable philosophical minimalism.
Diamond, C. (1991) The Realistic Spirit: Wittgenstein, Philosophy and the Mind, Cambridge, Mass.: MIT Press.
Fine, A. (1986) ‘The natural ontological attitude’ in The Shaky Game, Chicago: The University of Chicago Press: 112-135.
Wittgenstein, L. (1953) Philosophical Investigations, Oxford: Blackwell.
Zachar, P. (2014) A Metaphysics of Psychopathology, Cambridge, Mass.: MIT Press.
Tuesday, 18 October 2016
Despite the fact that both of these seem unhappy – daft even – things to say about constructionism and objectivism, there are motives for them both.
Consider objectivism. Crotty takes it to ascribe to worldly objects meanings that are wholly independent of human subjectivity. I suggested that a more obvious version would have it take the world to be free of meanings. But the former view has a rationale. One motive would be sympathy with McDowell’s objection to a view of nature as disenchanted [McDowell 1994: **]. He argues that this view – a view which looks like the account of objectivism I have suggested – is the result of a misunderstanding of the methodological success of construing the physical world in meaning-free terms. It does not follow that the world in general is meaning-free. One reason to wish to deny this is to think that the world is a world of values too. (This is McDowell’s ‘partial re-enchantment’ of nature.)
There is another – related – line of thinking in favour of Crotty’s version of objectivism. It has to do with the coherence of the idea that meanings might be tied closely to human decisions. Consider the connection between rules of logical inference and the meaning of logical connectives. On a broadly constructionist view, the meaning of such connectives is fixed by convention and hence the forms of inference they permit. But now imagine that a system of logic has been adopted by such conventions. What of the particular inferences it permits? Is accord of a particular inference with a general rule (itself adopted by the convention that fixes the meanings of connectives) itself adopted by convention? Or is it fixed by the meanings so adopted, autonomously? The former looks unhappy because it replaces the sense of constraint in reasoning in accord with logical principles with freely adopted decisions: a kind of logical jazz. The latter requires thinking that logical inferences are fixed by a kind of action at a distance which seems to require the kind of objectivism about meaning that Crotty describes. Meanings are not wholly up to us. So there is a rationale for holding Crotty’s objectivism even if it is already a kind of intermediate position and hence not the best way to chart the logical geography.
There is also a rationale for Crotty’s constructionism even though his description of it isn’t happy: ‘It is the view that all knowledge, and therefore all meaningful reality as such, is contingent upon human practices, being constructed in and out of interaction between human beings and their world…’ No one should rush to say that reality is contingent on human practices. But it is hard to avoid saying this. Consider McDowell’s sympathy with the idea that the world is everything that is the case, the world of facts not of things. McDowell also connects this conception of the world with the set of true thoughts though stressing a contrast between thinkable contents and acts of thinking. The world isn’t the set of acts of thinking; it is what can be truly thought. But there is some difficulty – it seems to me – in stopping an awareness of the contingency of human concepts escalating via the apparently innocent idea that the world is the set of true thoughts, themselves conceptually articulated, into the idea that the world itself is contingent on human concepts and their history. It would be easy if one could help oneself to a distinction between conceptual scheme and extra-conceptual content. The former could be the locus of contingency. But with the death of that dualism, it becomes much harder to apportion the contingency safely.
So I don’t think that Crotty’s descriptions of the options are unmotivated. They are just a bit rash, not the sort of thing that sober researchers should assert without an ironic smile, at least.