Understanding Value
Department of Philosophy, the University of Sheffield

Group 9: Ethics and Value Theory

Friday 11th of December 2020
15:00-16:30 GMT


Andreas Bruns, University of Leeds
‘Better Moral Worlds’

Abstract:

Some arguments in moral philosophy take the form, ‘It would be better if p, therefore p.’ One prominent instance of this inference is Frances Kamm’s defence of constraining rights. A constraining right is one that it is impermissible to violate even to minimise the overall number of violations of the same right. For instance, if we are protected by a constraining right against torture, then it is impermissible to torture any one of us even to prevent the torture of two or more others. Kamm argues that we have constraining rights because the elevated moral status they give expression to would make for a better moral world.
There is controversy among philosophers as to whether this type of inference, ‘the better world argument,’ is a valid or cogent form of reasoning. After all, I cannot infer the truth of the statement, ‘It is not raining on my head’, from the truth of the statement, ‘It would be better if it weren’t raining on my head (because I am on a hike)’. Sometimes the world is not such that what it would be better to have true is in fact true. By analogy with the ‘desperate hiker’s argument’, the better world argument may seem like an instance of wishful thinking. Some, like Thomas Nagel, have suggested that the better world argument may exhibit a cogent form of reasoning so long as one argues for a moral rather than a factual conclusion. Others, like David Enoch, have suspected that more needs to be said in order to make the better world argument plausible in moral discourses.
I argue that we should circumvent the debate about the cogency of the better world argument by rephrasing it as an argument about the normative preferability of possible worlds. For instance, if Kamm’s claims about constraining rights and moral status are correct, then it is fitting for us to prefer, on moral grounds, any possible world in which we have constraining rights to any other possible world in which, other things being equal, we don’t. If it is in this sense normatively preferable that we have constraining rights, then we have a decisive moral reason to represent the truth of the proposition that we have such rights in our moral principles. This ‘preferable world argument’ does not establish anything less than the better world argument; yet it can be shown to exhibit an accepted form of cogent reasoning, whereas the better world argument, understood as an instance of the inference, ‘It would be better if p, therefore p’, does not.



Nicholas Makins, London School of Economics and Political Science
‘Value Assignment Under Moral Uncertainty: A Role for Desirability’

Abstract:
This paper presents a unifying diagnosis of a number of important problems facing existing models of rational choice under moral uncertainty and proposes a remedy. I argue that the problems of (i) intertheoretic value comparisons, (ii) severely limited scope and (iii) swamping by “fanatical” theories all stem from the way in which values are assigned to options in procedures such as Maximisation of Expected Choice-Worthiness (MEC). These problems can be avoided if one assigns values to options under a given moral theory by asking something like, “if this theory were true, how much would I desire this option?” rather than, “if this theory were true, how much value would it assign to this option?”. This method of value assignment provides a role for desirability that is curiously absent from the existing discussion of what agents rationally ought to do when uncertain about what they morally ought to do.
Given that MEC and other similar proposals are presented as principles of rationality, it is noteworthy that they make little or no mention of agents’ desires, preferences, ends or goals – notions that are central to traditional theories of rationality – but rather work solely with the values provided by whichever moral theories are under consideration. I will show how one might assign values to options according to how desirable they would be if a given moral theory were true. These values are arrived at through a process of compromise between one’s narrow self-interest and the conditional moral commitments that arise from partial moral beliefs. This builds on an earlier debate between Amartya Sen and Daniel Hausman concerning the ways in which moral considerations might come to feature in conventional models of rational choice.
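As a purely illustrative sketch of this idea (not from the paper): one might assign each option a desirability, conditional on a theory being true, that compromises between self-interest and that theory's verdict, and then aggregate by credence. All options, theories, credences, numbers, and the linear form of the compromise below are invented for illustration; the abstract does not commit to any particular functional form.

```python
# Hypothetical sketch of desirability-based value assignment under moral
# uncertainty. Every number and the linear "compromise" are invented.

credences = {"utilitarianism": 0.7, "contractualism": 0.3}

# How much each option serves the agent's narrow self-interest (invented scale).
self_interest = {"donate": -20, "keep_money": 30}

# Moral value each theory would assign if true. Contractualism here supplies
# only a deontic status, represented numerically: permitted = 0, required = 40.
moral_value = {
    "donate":     {"utilitarianism": 50, "contractualism": 40},
    "keep_money": {"utilitarianism": -10, "contractualism": 0},
}

def desirability(option, theory, moral_weight=0.8):
    """Desirability of an option conditional on a theory being true:
    an invented linear compromise between self-interest and moral value."""
    return ((1 - moral_weight) * self_interest[option]
            + moral_weight * moral_value[option][theory])

def expected_desirability(option):
    """MEC-style aggregation, but over conditional desirabilities rather than
    the raw values each theory supplies, so the numbers being summed all
    measure the same thing (desirability) and no intertheoretic value
    comparison is needed."""
    return sum(c * desirability(option, t) for t, c in credences.items())

best = max(["donate", "keep_money"], key=expected_desirability)
```

Because both theories' contributions are routed through the same desirability scale, a theory that assigns no values at all (only deontic statuses) can still enter the calculation, as the contractualist entry above illustrates.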
This method of value assignment would allow one to adopt something like MEC, but sidestep the biggest obstacles facing its conventional application. Firstly, the proposed approach does not require the intertheoretic value comparisons that undermine other forms of “moral hedging”, since the values are taken to represent the same entity, namely desirability.
Secondly, it would allow one to factor in moral theories that do not provide the right kind of values, or indeed any values at all, such as those within deontological or contractualist ethics. Even if a given theory simply ascribes a deontic status to some option, an agent can take this into consideration when evaluating its desirability and this may be numerically representable.
Lastly, assigning values in this way avoids those instances where calculations of expected choice-worthiness are skewed by “fanatical” theories that are considered highly unlikely, but produce values so large that they swamp all other views. If an agent is not rationally required to assign the exact values that a given theory would, then such distorting values need never enter their decision calculus.  
While avoiding these problems, this modified approach would preserve a crucial advantage of MEC over other procedures: the sensitivity of an agent’s choice to both the degree of belief they have in a theory and how much that theory suggests is at stake.


Amit Pinsker, The Hebrew University of Jerusalem
‘Why Not Maximize Expected Choiceworthiness’

Abstract:
Cases of normative moral uncertainty are those in which we know all the empirical facts, but are uncertain regarding which moral theory is correct. The dominant approach in the literature to dealing with normative uncertainty is Maximizing Expected Choiceworthiness (MEC), according to which agents should maximize the expected moral value of their actions, just as they would maximize expected utility under empirical uncertainty. In this paper I give an account of, and defend, a principle for decisions under normative uncertainty known as “My Favorite Option” (MFO), which instructs agents to choose the action they believe is most likely to be morally right.
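The two decision rules just described can be contrasted with a small numerical sketch (not from the paper; the theories, credences, and choiceworthiness values are invented, and MEC's aggregation presupposes exactly the intertheoretic comparability the paper puts in question):

```python
# Hypothetical contrast of MEC and MFO under normative moral uncertainty.
# All numbers are invented; MEC assumes the theories' value scales share
# a common cardinal unit.

# Agent's credence in each moral theory (sums to 1).
credences = {"consequentialism": 0.6, "deontology": 0.4}

# Choiceworthiness each theory assigns to each option (assumed comparable).
choiceworthiness = {
    "lie":        {"consequentialism": 10, "deontology": -100},
    "tell_truth": {"consequentialism": -5, "deontology": 50},
}

def mec(options):
    """Maximize Expected Choiceworthiness: credence-weighted sum of each
    theory's value for an option."""
    def expected(opt):
        return sum(c * choiceworthiness[opt][t] for t, c in credences.items())
    return max(options, key=expected)

def mfo(options):
    """My Favorite Option: pick the option most likely to be morally right,
    i.e. with the greatest total credence in theories on which it is best."""
    def prob_right(opt):
        return sum(c for t, c in credences.items()
                   if opt == max(options, key=lambda o: choiceworthiness[o][t]))
    return max(options, key=prob_right)

options = ["lie", "tell_truth"]
print(mec(options))  # lie: 0.6*10 + 0.4*(-100) = -34; tell_truth: 0.6*(-5) + 0.4*50 = 17
print(mfo(options))  # lie is best on consequentialism (credence 0.6) vs 0.4 for tell_truth
```

Here the rules come apart: MEC picks `tell_truth` because deontology's large negative value swamps the expectation, while MFO picks `lie` because the option is right according to the theory carrying the most credence.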
I argue that the best argument for MEC and against MFO relies on inter-theoretical comparisons of moral value – that is, it relies on a background assumption that there is a common cardinal unit between the scales of different moral theories. However, the main reason given in favor of MFO in the literature is that there is no such common cardinal unit, and that moral theories are in principle incomparable. Therefore the argument for MEC is, at the very least, confused.
I then argue that even if we grant that moral theories are comparable and share a common cardinal unit, MFO is still preferable to MEC, as it successfully reflects the motivation of agents under normative uncertainty: motivation to do the right thing de dicto. That is, motivation to do the right thing as such, whatever it may be. I show that MEC fails to reflect this motivation, since (1) it requires agents to choose an action they are certain is morally wrong, and (2) it instructs them to take huge moral risks, even when there is an alternative they are certain is morally right. MFO, on the other hand, never requires agents to choose against their motivation, and is thus preferable to MEC. While some argue against moral motivation de dicto, claiming that it is fetishistic, it is widely accepted that agents under moral uncertainty are necessarily motivated this way. If so, this kind of objection is not available to proponents of MEC – they ought to be committed to that assumption as well.
My argument thus creates an asymmetry between empirical uncertainty and normative uncertainty which, from a theoretical perspective, is considered an undesirable result. Nevertheless, I argue that all plausible accounts of MEC implicitly create a partial asymmetry between normative and empirical uncertainty as well. While the asymmetry in the case of MFO is more extensive, it should be considered a virtue: the same reasons that support MFO provide an explanation for that asymmetry. The partial asymmetry imposed by different accounts of MEC, on the other hand, remains unexplained.