Right now I'm:
- finishing up an instructor's manual for David Manley's (incredible) reasoning textbook, Reason Better. It includes weekly teaching materials, problem sets, exams, additional teaching guidance, and extra practice problems. Check it out here, and get in touch if you're interested in chatting about using it in your own course.
- working on a new kind of ethics course and a textbook to go with it. Here's a bit about that project below; feel free to get in touch if you have thoughts about it.
Undergraduate ethics curricula typically include courses in metaethics, normative ethics, and applied ethics, and intro ethics courses often include a unit on each. Here's the problem: one might think, or hope, that the primary goal of an ethics course would be to learn how to be a more ethical person. Yet the structure and content of our courses don't actually encourage that end. (Not to mention that we as philosophers don't pitch our courses that way: we often say we care more about understanding the possible views and the structure of moral reasoning than about the content of our students' beliefs!)
Typically, the bulk of the time is spent, first, on vexing theoretical questions in metaethics (What are moral facts? Are there any arguments in morality that resemble proofs of the sort we see in the empirical sciences? What is the nature of "obligation," and should we think we have any obligations at all?), and next on comparisons among the traditions in the ethics canon (Should we think deontology, consequentialism, or virtue ethics gets it right?). Syllabuses often reserve some time at the end of intro courses for an "applied" ethics unit, which usually means examining candidate moral controversies (abortion, assisted suicide) through the lenses of the normative theories we've learned. What does the typical student come away with? A sense that philosophy is much harder than they thought: that answers are few and far between, and that the justification for those answers looks less promising than it does in the sciences.
Most students enter ethics courses interested in improving their moral decision-making, and perhaps even in becoming better people, but our standard methodology and pedagogical practices don't capture and nurture these interests in ways that translate into increased impact. We need to develop and disseminate a genuinely applied approach to ethics: an interdisciplinary approach that helps anyone improve their moral reasoning, regardless of which general moral theory they find most plausible.
This will look like a reasoning course focused on moral matters: reasoning about how to be better people. How do people's responses to victims change as the number of victims increases? Why do we respond this way? What factors are likely to affect the intensity of our empathic responses? How can those responses be manipulated? What is scope neglect? What deliberate reasoning strategies can we deploy in situations where our automatic responses predictably fall short? Why do we systematically fail to consider the perspectives of people who disagree with us? How can we do better? Why are we so likely to judge ourselves to be morally better than other people? Why do we take positive outcomes as evidence of our own good character and negative outcomes as unlucky situational flukes, yet judge other people in exactly the reverse way? How does this lead to long-term discord? Why are we so prone to drop-in-the-bucket reasoning? What should we do about it?
MacAskill, Ord, Yudkowsky, and Greaves, to name a few, have already made good headway in articulating some of the reasoning deficits that can have significant negative impacts on our collective well-being. But this kind of work hasn't yet had much uptake in mainstream ethics, and we need a systematic approach to identifying more of these ethical reasoning pitfalls and to developing (and eventually testing!) strategies for overcoming them.
Such a course starts not from the metaethical ground floor but from the overwhelmingly shared idea that the needs and wants of other people matter. It also starts from empirical knowledge about how our reasoning systems have evolved in ways that systematically skew the feelings guiding our moral behavior, e.g., making us feel the needs and wants of people in our inner circles strongly, and those of people further out weakly or not at all. From there, we look at the wide range of scenarios in which our moral reasoning falters. Sometimes the strategies for overcoming these deficits involve adopting different perspectives in order to get ourselves to feel differently; often they involve understanding the shortcomings of our empathic responses and moral intuitions and learning to override our faulty snap judgments with deliberate System 2 reasoning. In short, there is a great deal of progress students can make by setting aside structural questions about ethics and embracing the idea that we are good, caring people who reason badly but can learn to do better.
My primary research interests lie at the intersection of epistemology, ethics, and cognitive psychology. At the University of Michigan, I wrote a dissertation with Maria Lasonen-Aarnio, Peter Railton, and Brian Weatherson on epistemic normativity and the empirical nature of belief. Here's an abstract. Feel free to email me for the full version.
In "Evidential Exclusivity and the (Non-)Normativity of Belief" I argue that there's no good way to make sense of the ubiquitous assumption that belief has a standard of correctness such that a belief is correct if (or iff) it's true. Normally epistemologists take belief's supposed standard of correctness to explain the phenomena of evidential exclusivity and transparency in belief formation. I suggest an alternative explanation that accounts for available empirical data better. Here's an extended abstract. Feel free to email me for the full version.
In "Epistemic Tradeoffs and the Value Connection", I argue that recent attempts to show that epistemic normativity can't be teleological have serious consequences for our understanding of the nature of epistemic value and the reason-giving force of epistemic norms. Here's an extended abstract. Feel free to email me for the full version.
Much of contemporary experimental philosophy consists of surveying the philosophical intuitions of 'folk' subjects. Experimental philosophers claim that the results of these surveys are surprising and can't be predicted from the armchair. In "The Folk Probably Do Think What You Think They Think" (in the Australasian Journal of Philosophy, with David Manley and Billy Dunaway), we conducted an experiment to test these claims and found that they don't hold up: most philosophers could predict even the results the literature claimed were surprising. We discuss some methodological implications, as well as some possible explanations for the common claims of surprisingness.