CPBD 074: Michael Bishop – An Epistemology for James Randi
(Listen to other episodes of Conversations from the Pale Blue Dot.)
Today I interview philosopher Michael Bishop on his book written with J.D. Trout, perhaps my favorite book of all time, Epistemology and the Psychology of Human Judgment.
CPBD episode 074 with Michael Bishop. Total time is 49:15.
Michael Bishop links:
- Michael Bishop’s page at Florida State University
Links for things we discussed:
- Bishop & Trout, “Epistemology and the Psychology of Human Judgment”
- Mike and J.D.’s article for Philosophy Compass
- “The Pathologies of Standard Analytic Epistemology”
- Goldman’s review of Epistemology and the Psychology of Human Judgment
- Google Scholar search for “unstructured interview” validity
- Google Scholar search for “unstructured interview” validity sales
- Gigerenzer, Gut Feelings
- Gladwell, Blink
- Gigerenzer, “How to Improve Bayesian Reasoning Without Instruction” (also discussed in his book Calculated Risks)
- Ariely, Predictably Irrational and The Upside of Irrationality
- Trout, Why Empathy Matters (aka The Empathy Gap)
Transcript
Transcript prepared and paid for by Silver Bullet. If you’d like to get a transcript made for other past or future episodes, please get in touch.
LUKE: Dr. Michael Bishop is a philosopher at Florida State University and the co-author with J.D. Trout of one of my favorite books of all time, “Epistemology and the Psychology of Human Judgment.” Mike, welcome to the show.
MIKE: Well, thank you very much for inviting me, Luke, and I’m glad you like the book that J.D. and I wrote.
LUKE: Now Mike, it seems like a major motivation for your book with J.D. Trout, “Epistemology and the Psychology of Human Judgment,” is your dissatisfaction with what you call standard analytic epistemology. And you even wrote a paper called, “The Pathologies of Standard Analytic Epistemology,” which I thought was very funny. What do you think standard analytic epistemology is and what do you think is wrong or lacking in it?
MIKE: Well, suppose that I believe that Elvis Presley is alive. Standard analytic epistemology is concerned with assessing the belief. Is it knowledge? Am I justified in believing that Elvis is alive?
So, standard analytic epistemologists, what they want to do is provide a detailed account that explains why I don’t know that Elvis is alive or why I’m not justified in believing that Elvis is alive. But, if you think about it, my real mistake was in the reasoning that led me to the belief. Maybe I engaged in wishful thinking or I gullibly accepted somebody’s testimony. If I improve my reasoning ability, then not only does the offending Elvis belief disappear, but I’m also less likely to come up with other offending beliefs in the future.
So, for J.D. Trout and me, the main problem with standard analytic epistemology is that it focuses too much attention on the wrong thing. It focuses on assessing our beliefs, the end result of our reasoning, rather than on assessing the reasoning itself, which led to the offending belief in the first place.
LUKE: But, there are accounts of justification or knowledge that do put some focus on the processes that lead to belief. They would say that what makes a belief justified has something to do with the processes that produce the belief – for example, various forms of reliabilism.
MIKE: Absolutely. Yes, you’re absolutely right. We find those sorts of theories to be very friendly to our views, but we think they run into problems that you can avoid if you make the theory about good reasoning rather than about good belief.
LUKE: And so what are some of those problems that standard analytic epistemology runs into even when it’s defining justification or rationality or warrant in terms of the processes that formed the belief?
MIKE: Well, the main problem for reliabilism is the generality problem. And the generality problem is basically the problem of identifying the process type that’s relevant for determining the reliability of the process, and so determining the justificatory status of the belief. For example, a belief formed by looking could be typed as “vision,” “vision in daylight,” or “vision of a nearby object in daylight,” and each of those process types can have a different reliability. I mean, it’s a bit of a technical issue, but it’s a serious problem for reliabilism.
If you focus on assessing the reasoning rather than the belief, so that your theory says, “Look, here are the reasoning types that are the good ones,” then you don’t get into any problems with identifying the reasoning types – the theory gives them to you right at the beginning.
LUKE: It also seems to me that what you’re trying to do is to expand the discipline of epistemology beyond its descriptive role of trying to come up with definitions for words like knowledge and justification and rationality and warrant – definitions which fit our intuitions about what we mean when we use those words – and you’re trying to push epistemology into a more normative and practical enterprise that studies the problem of, what processes should we use if we want to get at the truth? How should we go about getting at the truth? Do you see that as partly what you’re doing?
MIKE: Oh, absolutely, Luke. Epistemology should focus on developing theories of good reasoning rather than on theories of good belief. So, J.D. and I defended a theory we called strategic reliabilism, which basically says that good reasoning about the world reliably gets us true beliefs about important matters. Now, if you start with that theory, you don’t need a theory of what you ought to believe, of what’s justified or what’s rational to believe, you get that for free, you should believe the results of good reasoning. And so that’s the basic idea.
Good reasoning is the lever; it’s what does the work. Knowledge and justification, the notions that most philosophers are interested in, they’re the cherry on top, they’re the prizes we get for reasoning well about the world.
LUKE: One of the epiphanies that I had when reading your book, Mike, came out of a conversation I had with a philosopher about the ethics of belief. He was taking an evidentialist line that we should have evidence for all of our beliefs, and in the course of that he had to say, of course, that his standards for what counts as knowledge were very, very low, because otherwise the everyday beliefs we rely on to get through life wouldn’t count as knowledge, and he didn’t want that.
And I was taking a stronger position, saying that, no, really, a lot of the things that we believe just to get through the day don’t count as knowledge, but we don’t have a moral obligation to spend much time figuring out whether the Starbucks is really around this corner or that corner – you know, it doesn’t matter. So, I was taking more of a Jamesian position there.
And he said, “Well, that’s fine. But, remember the data that we’re pulling from is just everyday human intuition. So, we don’t want to make our definition of knowledge too different from what people normally mean. And people use the word like ‘know’ even to refer to things for which they have very little evidence. Like maybe the testimony of a passing stranger. And so, they normally talk about having knowledge that there’s a Starbucks around the corner even though they have basically no evidence at all.”
And what I realized when I was reading your book is that his position again was very much working within this descriptive framework where he’s trying to capture what it is when ordinary people say they have knowledge and me, I was more interested in kind of the normative position of well, I know that people think they have lots of knowledge but they’re wrong a lot of the time and the reason for that is that they’re using bad reasoning processes. And we need to encourage people to use better reasoning processes so that there are fewer false beliefs out there.
But, that just didn’t really make sense within the framework of trying to define words like “knowledge” and “justification” in terms of how most people use those words. But, it does make sense when you think of epistemology from a normative point of view and you’re trying to give recommendations for what people can do if they want to have more true beliefs and fewer false beliefs. And that’s the picture of epistemology that is presented in your book, “Epistemology and the Psychology of Human Judgment.”
MIKE: Yes, I would agree with you a hundred percent on that. One way of thinking about your discussion with this philosopher is that when you’re talking about knowledge and you’re trying to come up with a theory that answers to our everyday conception of knowledge, that everyday conception of knowledge is going to be quirky in lots of ways. But, if you’re talking about good reasoning you can say, “Look, it might be the case that the standards of what counts as good reasoning in certain situations can change.” So, if you’re walking to the Starbucks and you’re wondering where it is you might reason, “yes it’s around the corner,” because if you’re wrong it’s not that big a deal.
But, if you’re reasoning about whether your child is seriously ill, you might think that “Well, actually, here, I need to reason a lot more carefully about this because the consequences of being wrong are going to be much more serious.” So, when it comes to good reasoning, you might be able to provide an account of why in one case you have good reasoning even though those standards don’t necessarily apply to other cases where the stakes are higher.
LUKE: One of the sections of your book that I enjoyed, I think in the first chapter, is where you talk about how, you know somewhere around the third round of trying to respond to Gettier, one really can get tired of standard analytic epistemology and wonder if maybe there’s something better to be had. I’ve had that reaction as well, and I think that’s one of the reasons I was so excited in reading your book, because I thought, well, this here, this is a way forward.
Gettier, of course, proposed some counter-examples that said, look, our standard account of what knowledge is, which is justified true belief, doesn’t quite fit our intuitions on what knowledge means, because here are some examples of stuff that is justified and true and believed, and yet, we wouldn’t want to call it knowledge.
And my conclusion from that is, well, yeah, our standard concept of knowledge is confused because language doesn’t come from logicians; it comes from evolved usage between thousands of people over thousands of years. So, yeah, our concept of knowledge is very confused. We can try to define knowledge more precisely to fit our intuitions as long as we want, and we might never come up with something that’s coherent and fits our intuitions. Whereas, you say that the project of epistemology is maybe better if we are not just trying to come up with a definition, not trying to be a “super dictionary,” but give practical advice on how we can have more true beliefs.
MIKE: Absolutely. In fact, one of the more disappointing responses to the book from some professional philosophers was basically, “well, this is all very interesting, but this is not what philosophy does. Philosophy isn’t concerned with giving us practical guidance about how we should reason about the world and what we should believe. That’s not what philosophy does.” And it seems to me that that’s just a mistake. I can’t think of another academic discipline that so proudly insists on its own practical irrelevance.
LUKE: [laughs]
MIKE: It seems to me that the reason most of us originally got into philosophy in the first place was that we wanted to answer these important questions about how we should live, what we should believe, and how we should think about the world.
LUKE: Yeah. Definitions, and analyzing what our intuitions are about certain concepts, of course, that can be useful. But, like you say, I don’t think it’s why people get into philosophy, and I don’t think what the public would most benefit from is a more precise, more closely-reasoned dictionary of certain terms that relate to philosophical concepts.
Now, just as an aside, you know, Mike, that there’s this growing movement of people who call themselves “skeptics” – not in terms of philosophical skepticism, where you doubt the existence of the external world or something, but in terms of being skeptical of things for which there is no evidence or weak evidence. People like Michael Shermer and James Randi. These skeptics have a high regard for science as the best way to know things, and they focus on things like our built-in cognitive biases and intuitions that lead us astray, how to overcome them, and how to train our minds to get more reliably at the truth.
As I was reading your book, it occurred to me that strategic reliabilism, the epistemological framework that you present, would probably be a very attractive epistemological approach for skeptics, who may not be very well studied in philosophical epistemology. Strategic reliabilism places the focus on those same sorts of things – a respect for science, the study of our biases, and how to overcome them with more reliable reasoning processes.
So, I think skeptics, who maybe are not very trained in philosophy, but know how to think about their biases and try to overcome them with more reliable reasoning processes would really benefit and would really already agree with the epistemological framework that you’ve presented in “Epistemology and the Psychology of Human Judgment.” What do you think about that?
MIKE: Well, I certainly hope so. I’m a big fan of James Randi, and as I understand it, the point behind his form of skepticism isn’t just that we shouldn’t believe particular things like faith healing and the paranormal. Rather, the main point is that we should reason in clear and exacting ways about the world. And I think that’s just what strategic reliabilism says. So, I think you’re absolutely right.
LUKE: I hope people will look into that if they count themselves as skeptics.
So, let’s look at the practical advice that you do offer about how to reason better. One of the main tools in the tool kit that you present is the “statistical prediction rule.” So, what is a statistical prediction rule, and how can it help us?
MIKE: Statistical prediction rules are relatively simple rules that have been shown to make predictions more accurately than human experts when based on the same evidence. For example, the last time you applied for a credit card, it was probably a statistical prediction rule that judged you to be credit-worthy, and it probably set your credit limit too. And if you have a heart attack, it’s probably a statistical prediction rule that makes a judgment about whether you’re a high risk for another one in the near future.
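To make this concrete, here is a minimal sketch, in Python, of what a rule like the credit-scoring one could look like. The cues, weights, and cutoff below are hypothetical numbers invented for illustration; real statistical prediction rules get their weights from historical outcome data, typically by regression.

```python
# A minimal sketch of a statistical prediction rule (SPR): a weighted
# sum of a few cues, compared against a cutoff. The cues, weights, and
# cutoff are hypothetical; real SPRs are fitted to historical outcomes.

def creditworthiness_score(income, years_at_job, past_defaults):
    """Return a linear score; higher means a better credit risk."""
    return 0.4 * (income / 10_000) + 0.3 * years_at_job - 2.0 * past_defaults

def approve(applicant, cutoff=3.0):
    """Apply the rule mechanically: no holistic judgment, no override."""
    return creditworthiness_score(**applicant) >= cutoff

applicant = {"income": 55_000, "years_at_job": 4, "past_defaults": 0}
print(approve(applicant))  # True: 0.4*5.5 + 0.3*4 - 0 = 3.4 >= 3.0
```

The striking empirical claim isn’t about any one formula; it’s that once the weights are fitted to data, even crude linear rules like this tend to match or beat expert judgment made from the same cues.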
But, psychologists have other heuristics that aren’t just for institutions, but are for regular folks like you and me for use in our everyday reasoning. One of the things I do in my critical thinking course is try to get my students to understand and apply some of these rules. Let me give you an example.
We often think we have a reason for believing a causal claim. So, Aunt Millie might think that some homeopathic remedy cures headaches. But there, Aunt Millie is accepting maybe intuitively compelling stories about why this particular remedy would be effective, or maybe anecdotal evidence in support of this causal claim. So, she says, “Well, look, last time I took this remedy, my headache got better.” But, of course, what she doesn’t know is what would have happened if she hadn’t taken the remedy, or if she would have taken a placebo, or if she would have taken some aspirin. What she really needs to look at are well-designed double-blind studies.
Now, of course, it’s more difficult to assess causal claims with well-designed studies rather than with uncontrolled stories that appeal to our intuitions. But, of course, nobody ever said that becoming a good critical thinker was going to be easy.
Now, there’s not a statistical prediction rule there. You see, one of the reasons people focus on statistical prediction rules is that they’re really interesting. The finding that these very simple rules can defeat human experts is surprising. But, we meant it to be just one example of many possible examples. And so sometimes, what people have done is focus on statistical prediction rules and think that they were the only tool we had in our tool box.
Yes, statistical prediction rules are important, and they’re a really interesting example. But, there is other advice that we can offer – not necessarily a statistical prediction rule, but just a rule of thumb, like looking for controls when you’re thinking about causal claims – that can help people reason more carefully about the world.
LUKE: And sticking on statistical prediction rules for just a moment, what are some of those examples where the research shows that, in certain situations, a statistical prediction rule will do a better job of predicting certain outcomes or situations than really well-trained experts in that field?
MIKE: One very interesting result is that statistical prediction rules, quite simple ones, do a better job of predicting whether a prisoner who wants to go on parole is likely to commit another violent crime. So, it turns out that a statistical prediction rule will do a better job than a parole board at predicting who’s likely to be violent and dangerous. Interestingly, though, that is a statistical prediction rule that many states refuse to use.
Another interesting example – there’s a statistical prediction rule that predicts the quality of a vintage of Bordeaux wines better than expert wine tasters do. The New York Times said that the reaction in the wine-tasting community was somewhere between violent and hysterical.
LUKE: [laughs] Yeah. I think it’s very upsetting to a lot of us when we find out that a simple math equation with a few variables thrown in – variables that we know – is going to be more effective and a better predictor than really well-trained experts. I think we like to think that we humans are so smart that we would be better than a little math equation.
MIKE: Yes. One thing to keep in mind is that, if humans have more relevant evidence, we can beat the equation. But, when based upon the same relevant evidence, not only can’t we beat the equation if it’s well constructed, but if you give people the statistical prediction rule and tell them that it will be more reliable than they are, they still won’t be able to beat it.
LUKE: [laughs] That’s pretty funny. Now, of course, the studies that show where statistical prediction rules are useful and how they have to be constructed are basically built up one at a time. So, there’s a set number of statistical prediction rules that have so far been demonstrated in the literature to be more reliable predictors than human experts. But, that’s a very granular type of tool, because there might be situations that seem analogous but where the experts actually do better than the prediction rules. Do you think that’s plausible?
MIKE: It can happen. You have to be careful, though, because the prediction rules are just tools, and they’re tools that do very specific things very well. For example, one of the things that you might need to worry about is if you construct a statistical prediction rule for predicting who’s going to do well in college or in graduate school.
Well, that prediction rule will do very well when applied to the sorts of cases that it was built on. But, if you apply it to very, very different sorts of cases, or if you apply it and you change your view about what counts as a successful student, then, of course, your SPR might not work as well.
But, here’s the thing: people tend to be way too optimistic about their ability to find examples in which they can beat the statistical prediction rule. So, the general rule is that you should stick with the statistical prediction rule unless you’re very, very confident that it’s giving you a wrong answer.
And there certainly can be situations like that. There’s one case that I like to give people. I was talking to somebody who had come up with a statistical prediction rule for predicting whether a sex offender would recidivate, would commit another crime, if paroled. I asked him if there were cases in which you should overturn the statistical prediction rule. And he said yeah, there were: even if the statistical prediction rule tells you that there’s a very high likelihood that a particular person will recidivate, you should overturn that rule if the person is dead.
LUKE: [laughs]
MIKE: Because the rule did not have a variable in there for whether the person was dead or alive.
LUKE: That’s funny. Now, looking at the name of the epistemological framework that you present – strategic reliabilism – we’re looking for reasoning processes that will be more reliable than a lot of the ones we’re using by default. And then, in terms of strategy, we’re using these different tools where they’re going to be most useful, whether they’re statistical prediction rules or other types of rules. But, I think another part of the strategy is that you say we should conduct a kind of cost-benefit analysis when seeking after the truth. How does that work?
MIKE: That’s a tricky one. I think that I would distinguish between the theory that assesses your reasoning and what you should do. The basic idea is that we have limits on how much time and mental effort we can devote to particular problems. So, from a theoretical perspective, better reasoners will apportion their energies more effectively.
But, from the perspective of somebody who’s doing the reasoning, typically, this apportioning of resources, we don’t really need to think about it; we’re just naturally disposed to focus on important reasoning problems. For example, we focus on whether our spouse is in a good mood or not by picking up social cues. Or figuring out what we should eat for lunch. And we don’t focus on irrelevant problems, like figuring out how far apart our shoe laces are.
So, we don’t have to focus too much attention in our everyday lives to cost-benefit considerations. But, there can come times when it’s a good idea to focus on what sorts of problems need more attention and what sorts of problems we should be devoting less attention to. For example, when we’re thinking about more serious issues, like whether to support sending our fellow citizens to war in a distant land, we should spend relatively more time and energy figuring out what the right answers are in those cases.
LUKE: Yeah. You know, there’s the old joke that some people spend more time trying to decide which car to buy than on which world view to live by. I think it’s a lucky coincidence that we have some natural ability, without training, to focus on things that matter. But, we can certainly still improve in that area.
MIKE: Oh, absolutely, we can.
LUKE: Well, Mike, let’s look at some of the criticisms that have been offered of strategic reliabilism. Alan Goldman says that you sometimes play fast-and-loose with the psychological studies that you cite. For example, you say that personal interviews lower the reliability of admissions and hiring decisions, and that “this is one of the most robust findings in psychology.” And yet, Goldman says, in his review for Notre Dame Philosophical Reviews, that you only cite four studies in support of that, and two of them are specifically about admissions to medical school. He speculates that, if you’re trying to predict grades in medical school, then an SPR is probably more reliable than interviews, but when it comes to interviewing sales persons, interviews might be of much greater use.
MIKE: Well, I think that’s an interesting criticism. A lot of people have this reaction to this particular literature. So, I think, maybe, the best way to respond is to make three points.
First, the research on unstructured interviews is large and pretty easily available. If you punch in “unstructured interview” in quotes with the word, “validity,” into Google Scholar, you’ll get almost 4000 hits. And if you add the word, “sales,” you’ll get over 1000 hits. So, studies on hiring sales people have been done, and they’re not especially hard to find.
But, the view of the literature that Trout and I offer isn’t quirky. If you look at the articles on this, even psychologists who are critical of it call the view that we articulate the “received view.” So, let me just briefly explain the received view about interviews in a nutshell.
Suppose you and I are trying to hire a sales person, and we have a lot of evidence about these various candidates. Perhaps we have their grades, or letters of recommendation, we have a resume. And suppose we also perform a short, unstructured interview of the kind that most of us have suffered through at some point in our lives. And then, we make judgments about how well somebody will perform in their job on the basis of these various pieces of evidence.
Now, the received view says that, if we make judgments only on the basis of the interview, we will hire better applicants in the long run than if we throw darts. So, interviews are informative.
But, the other information that we have – that is, the person’s grades, the letters from people who have known them for a long time, their resumes, which tell us about their past history – is a much better predictor of future job performance. So, in principle, the interviews could be useful, but what usually happens is that the interview, because it’s so vivid, swamps the rest of our evidence, and we weigh that interview evidence more strongly than we really should. And so, in the long run, we tend to make worse hiring decisions, when we have all this evidence, with the unstructured interviews than without them. So, that’s the received view.
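The swamping effect is easy to see in a toy simulation. The sketch below uses made-up numbers and simply assumes, per the received view, that the track record is a strong cue and the unstructured interview a weak one; over-weighting the vivid interview then measurably degrades predictions.

```python
# Toy simulation of interview "swamping." All numbers are invented:
# the record is a strong signal of ability, the interview a weak one,
# and the human judge over-weights the interview because it is vivid.
import random

random.seed(0)
n = 10_000
performance, record_only, with_interview = [], [], []

for _ in range(n):
    ability = random.gauss(0, 1)
    record = ability + random.gauss(0, 0.5)      # strong, low-noise cue
    interview = ability + random.gauss(0, 2.0)   # weak, high-noise cue
    performance.append(ability + random.gauss(0, 0.5))
    record_only.append(record)
    with_interview.append(0.3 * record + 0.7 * interview)  # swamped

def corr(xs, ys):
    """Pearson correlation, computed by hand to avoid dependencies."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(corr(record_only, performance))     # ~0.8: the record alone does well
print(corr(with_interview, performance))  # ~0.5: adding the interview hurts
```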
Now, I think it’s important to grant that this view might well be false. I’ve sort of mentioned that there are psychologists who have published interesting evidence-based articles that question parts of the received view. But, that’s not why Goldman, I think, casts doubt on the received view, and it’s not why people reject it upon hearing it.
I suspect that the reason that people reject it is that there’s an intuitively compelling story that goes something like this: being a sales person, or I often hear this with respect to being a teacher, requires certain social skills and a certain personality type, and these personality traits are bound to come across in an interview. And so, to suppose that we could hire a better sales staff without an interview is just crazy. I think a story like that is what’s really underlying people’s skepticism about the interview results.
Now, let me admit that I am sorely tempted by this story as well. My intuitions scream, along with maybe yours, that, of course it’s crazy to try and hire a sales person or a teacher without an interview. But, this is precisely the sort of bad reasoning that leads us to think that homeopathic remedies are effective. What we have in both cases is a causal claim – the particular hiring policy or the homeopathic remedy leads to better hires or cures your headache. But, when our intuitions and our anecdotes clash with scores of well-designed, published studies, I think it’s our intuitions that have to go.
LUKE: Yeah, and, just as a slight aside here, I have lots of discussions with people about arguments where a particular intuition is really at the core of the argument, for whatever position. It could be on a totally different subject than epistemology. And I very often come up on the side that is not very friendly to intuition. I think the point that needs to be made is that, people like you and me usually have the same intuitions as everybody else about certain things; it’s just that, when we look at the evidence, it very often turns out that intuition is not the best guide to truth on this type of question. And so, we’ll try to turn to more reliable reasoning processes, which is the whole point of strategic reliabilism that you present in “Epistemology and the Psychology of Human Judgment.”
MIKE: Yes. And yet, there probably are cases in which following your gut is the right thing to do – in fact, there are some published studies where following your intuitions is the right thing to do. The problem is, psychologists don’t have a good sense of when that is. But again, it seems to me that you are right. Intuitions are often like the sirens, calling us away from the truth. And sometimes what you need to do is just stifle your intuitions, even though it is extremely difficult to do.
LUKE: Yeah. It’s kind of like, when intuitions deliver you a belief – or evidence, shall we say – they basically deliver the information to you along with an assurance of its truth; that’s almost the definition of intuition. So, it’s very hard to stifle intuitions.
But, I should mention that people who are more favorable to the applicability of intuitions to more problems could read a book like Gerd Gigerenzer’s “Gut Feelings,” in which he tries to explore some of the places where intuitions might actually be a good thing to use.
MIKE: Yes. Another book that might be interesting to people on this score that gives you some nice examples of how gut feelings or intuitions can lead you to the truth is Malcolm Gladwell’s “Blink.”
LUKE: Mm-hmm. Right. Well, a very different kind of worry that people have expressed about strategic reliabilism is that you’re claiming that epistemology can be a kind of science, and yet you also think that it’s normative – it has practical recommendations. And the standard question is, but how can a descriptive science be a prescriptive enterprise?
MIKE: Well, that’s a really important question, Luke, and a really hard one. So, let me spell out the problem in a little detail.
So, it seems that science tells us what is the case, and philosophy tells us what should be the case, and never the twain shall meet. So, your point is that if I claim that philosophy is a kind of theoretical science, how can we extract any shoulds from it? In other words, how can we figure out how we should reason from the science that tells us how we do reason?
And in response, I think I would make two points. First, nobody really knows what makes philosophy normative. Too often we just assume that philosophy gets to tell us about how things should be. But, philosophers’ opinions about what should be the case are influenced by social, cultural, and psychological factors, just like everybody else’s. And plus, philosophers disagree a lot about these sorts of issues.
Now, I’m not arguing that philosophy isn’t normative, I’m just pointing out that since there’s no clear explanation for why philosophy is normative, we shouldn’t expect there to be a clear explanation for why science is normative or not normative.
The second point I would make is if you take a look at psychology, and in fact we’ve been talking about some examples of this, it’s full of advice about how people or institutions should reason about certain sorts of issues. For example, if a test for HIV is 99 percent accurate and you test positive, does that mean that you have a 99 percent chance of being HIV positive? Well, probably not.
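As an aside, here is the arithmetic behind that “probably not,” done in the natural-frequency style psychologists recommend for these problems. The 1-in-1,000 base rate and the 99% figures below are hypothetical round numbers chosen for illustration.

```python
# Base-rate arithmetic in natural frequencies: imagine 100,000 people.
# All figures are hypothetical round numbers for illustration.
population = 100_000
base_rate = 0.001           # 1 in 1,000 actually has the condition
sensitivity = 0.99          # P(positive test | infected)
false_positive_rate = 0.01  # P(positive test | not infected)

infected = population * base_rate                                # 100 people
true_positives = infected * sensitivity                          # 99 people
false_positives = (population - infected) * false_positive_rate  # 999 people

p_infected_given_positive = true_positives / (true_positives + false_positives)
print(round(p_infected_given_positive, 2))  # 0.09: about 9%, not 99%
```

The false positives from the large uninfected majority swamp the true positives, which is why a 99% accurate test does not mean a 99% chance of infection.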
There’s a terrific article by, I think, Gigerenzer and Hoffrage that gives us a relatively simple way to reason about these sorts of cases. But, you don’t have to look at psychology. Just consider James Randi. I mean, he gives us lots of really good advice about how we should think about people who claim to exhibit paranormal abilities. Why isn’t this just as normative as what philosophers tell us?
So, let me be perfectly honest here: I haven’t answered your question. I haven’t explained how science can be normative. What I’ve tried to do instead is tell you that parts of science sure seem normative, and the fact that no one can explain how it’s normative shouldn’t be all that troubling.
LUKE: I think one way to go that is maybe the least ambitious in terms of what’s being claimed for normativity is that you can set up the normative framework just in terms of hypotheticals, so you can say if you want to get from Los Angeles to San Diego, go south. That is a very uncontroversial way, or fairly uncontroversial way to talk about normative recommendations. And so, if we wanted to, we could use the same kind of thing in epistemology and say look, if you want to more probably get at the truth about such-and-such issue, then here are the reasoning processes you should employ. And the reason we make that recommendation is because we have a fair bit of evidence that that is what works, just like we have very good evidence that going south from L.A. to San Diego is the way to go if you want to get from L.A. to San Diego.
And so, we don’t have to call upon some kind of, you know, the intrinsic value of having more true beliefs or anything like that. We can just say look, if this is your goal, here’s how you should do it because this is what works. And that would be maybe an unambitious way to talk about the normative consequences of these descriptive facts about human reasoning and what types of processes tend to make better predictions.
MIKE: Yes. That is definitely a way to go, and I’ve been tempted by that. And ultimately that might be the best we can do. I guess my answer was just a bit more cagey than that in leaving open the possibility of something more ambitious.
LUKE: Yeah, and you do hint towards, or mention, something more ambitious, and I’ll let you present that in response to another objection that’s been raised to strategic reliabilism by someone who’s very much looking at these things from the same point of view. Stephen Stich has some similarities with your approaches in naturalized epistemology and that kind of thing. But, he says there are no intrinsic epistemic virtues. And after all, he says, false beliefs can sometimes lead to a higher quality of life. You know, you might be more contented or happy if you believe in a divine being who’s going to reunite you with all your family when you die, or something like that. So, what do you think of this objection from Stephen Stich?
MIKE: Well, I should tell you that Steve was my first college teacher. So there’s a reason why you see some of his influence on, at least, my work. I think J.D. Trout and I agree with Stich on some things. So sometimes false beliefs can lead to good results. That’s absolutely true. And there actually may even be cases in which this is systematically the case.
But, we part with Stich because we think it is useful to keep the different normative realms – the moral, the epistemic, the aesthetic, the practical – at least somewhat independent. And this is one of the reasons why we don’t go with the hypothetical response to the normativity issue, because that sort of reduces everything to what your goals are. Let’s look at an example. There’s some evidence that being a bit overly optimistic about your health prospects can help you bounce back from illness more quickly.
So, let’s suppose this is true. There’s some controversy about it, but let’s suppose that it’s true. So, if you have a serious illness, what should you believe? Now, I don’t know what the right answer is to this question, and there might not be a single right answer for everybody, but it does make sense from my perspective to think about this issue as one in which we weigh different values against one another – the epistemic value of reasoning well against the practical value of fostering good health. And so this is why Trout and I think that it makes sense to think of distinctively epistemic virtues.
LUKE: So, the way that gets around the problem is that you don’t have to say that something is intrinsically valuable. You can just say, well, if we’re talking about epistemology, that’s about reasoning processes and which reasoning processes get at the truth. But then, if you’re talking about practical rationality, the relevant goals are whatever goals you have for your own health or happiness or whatever, and it might turn out that, in certain situations, the reasons for action you have in the practical realm outweigh the reasons for action you have in the epistemological realm. Your health might be more important than using the correct reasoning processes about your condition.
MIKE: That’s right.
LUKE: I’m not sure why you can’t still use the hypothetical framework, though, because you can say, well look, just by definition epistemology is about getting at knowledge or getting at true belief, and so epistemology could be the hypothetical of “if you want more true beliefs, then this is how you can do it. You can use the statistical prediction rules and the other kinds of recommendations that you offer.”
And then you could say, well, the practical realm is about certain goals that you have. And so if you want to achieve those certain goals, then you should do this. And then the moral realm is, as usual, probably going to be harder, but we could say something like: if you want to consider all the reasons for action that exist, some sort of global assessment, that’s the moral recommendation, because that’s how we define the word moral, or something like that.
That would be extremely controversial but that would just be one way to go. So, why is the way that you’re going really in conflict with the hypothetical framework if we’re talking about normativity?
MIKE: Let me make two separate points. First, the way I would distinguish between these different normative realms is in terms of what our basic human capacities are. So, we have a capacity for reasoning about the world, we have a capacity for getting along or not getting along with other people, and these different normative realms arise from these different capacities.
Now, the reason that one might want to avoid the hypothetical imperative approach to these matters is what do you say to somebody who has sick or twisted or quirky goals? So, if somebody has the goal of just feeling happy, what might end up happening is that it will not be true of that person that they ought to reason about their child’s health in a particularly careful manner because well, they care more about just being happy in the moment than they do about the future.
So, the reason that philosophers will tend to avoid the hypothetical imperative solution to the normativity problem is that it’s hard to see what you do with the quirky cases. Now, ultimately, you might just have to bite the bullet and say, “Well, then in the quirky case maybe it’s not rational for the person to think carefully about their child’s illness or maybe they shouldn’t think carefully about their child’s illness.” But, many of us would like to see if there’s something more we can say in that case.
LUKE: Yeah, and you talk about how the Aristotelian principle might be relevant here. How does that come into the discussion?
MIKE: Good. The Aristotelian principle, the way we use that is we say, “Look, let’s construct a theory of good reasoning on the basis of what science says. But, scientists can disagree about what counts as good reasoning. So, we’re going to use a basic test, a rough and ready test that isn’t going to always give you the right answer but is good enough to get things started.” And that basic test is that if a piece of reasoning leads to good results then it’s more likely to be good reasoning.
LUKE: What’s the meaning of “good” in that phrase? I mean, good relative to the person’s goals, or good relative to epistemic goals, or what?
MIKE: Right. Relative to epistemic goodness. So, backing up a little bit and talking about the strategy in the book, what we wanted to say was, “If you look at psychology, psychologists are making lots of normative recommendations about how we ought to reason.” Well, there must be a framework underlying these normative recommendations, and maybe we can extract that framework, and that will give us an account of what it is to be a good reasoner.
But, the problem is that if you look at psychology, it’s not always consistent. So, we had to be able to make choices between different psychologists when they were disagreeing about whether something was an instance of good reasoning or not. So, we used the Aristotelian principle as a rough and ready guide to choosing one psychologist’s work over another psychologist’s work.
And so we said, “Usually, the reasoning that leads to good results is going to be the epistemically better reasoning.”
LUKE: Another objection, then, to strategic reliabilism: every time somebody proposes a new epistemic framework, one of the first questions is, “Can this framework solve the problems of philosophical skepticism, or Hume’s problem of induction, or something like that?” It seems to always be something that philosophers are looking for, because they’re probably very frustrated by the problems of skepticism and induction.
And so one of the objections to strategic reliabilism would be, “Well, this doesn’t solve the problem of skepticism or the problem of induction. So we need something else.” What’s your response to that?
MIKE: Well, I think that’s true. I would grant this objection, I think J.D. and I would grant this objection, it doesn’t solve the problem of skepticism or the problem of induction and is not meant to. The theory tells us how we ought to reason about the world. We will need some other theory to solve those problems. But, a theory that tells you how you should reason about the world is quite ambitious enough I think.
LUKE: Yeah. Now, one more objection to strategic reliabilism is that your approach largely disparages intuitions in epistemology, but without intuitions, philosophy kind of falls apart. Maybe you could explain the role that intuitions play in standard analytic epistemology, and then what your response to this objection is?
MIKE: In epistemology, and in philosophy more generally, the term “intuition” has quite a specific meaning, more specific than it has in our ordinary parlance. An epistemological intuition is a judgment about, say, whether a particular belief is knowledge or is justified.
So, think again about my belief that Elvis is alive. You might judge that this belief is unjustified and that it’s not knowledge – hopefully you do – and these are philosophical intuitions. So, the objection that you raise is that if we didn’t have any intuitions, if we all had, you know, intuitionectomies, philosophy and lots of other things would be really hard and maybe impossible to do. I think that’s maybe true.
But, J.D. Trout and I don’t object to standard analytic epistemology because it uses intuitions. Our objection is that standard analytic epistemology uses only intuitions. For practical purposes, that’s all that their theories are trying to account for, and that’s the real problem. And I think this practice of just focusing on our own intuitions keeps philosophy disconnected from other academic disciplines and can make it sort of inbred. I think that a more open philosophy that’s open to other fields, a kind of philosophy that’s informed but not engulfed by our best science, has the potential to make the world a better place. But, we won’t reach that potential by building theories that speak only to our intuitions.
LUKE: So, Mike, what is the future of strategic reliabilism? It sounds like a lot of what you’d be looking forward to is just continuing scientific research on reasoning processes that are more reliable and that’s just going to come from psychology, more of this ameliorative psychology as you call it.
MIKE: Yes, I think that’s right, I think that there’s just a lot of really exciting work these days about how we can reason better and a lot of that exciting work is coming out of psychology.
LUKE: Yeah. And this is kind of even breaking into the mainstream – I think there have been one or two best sellers by Dan Ariely, who’s a behavioral economist, and that’s another group that writes a lot about the types of mistakes that we make and how we might overcome them. So, I think we might see some interesting and useful results coming out of that field as well.
MIKE: No, absolutely, behavioral economics is a fascinating and important field and in fact my co-author on this book has written a very good popular book called “The Empathy Gap,” on precisely these sorts of issues about social psychology and what it has to tell us about policy.
LUKE: Well, Mike, it’s been a pleasure speaking with you, thanks for coming on the show.
MIKE: Thank you, Luke, I really enjoyed it.