== Utilitarianism ==
Utilitarianism is probably the most common framework read in LD. All utilitarian frameworks center on the idea of ''maximizing pleasure''. People read various standard texts, but almost all of them hold that pleasure is good and pain is bad. An extension of this is that preventing deaths should be the most important goal, as most utilitarian cases have extinction-level impacts. Although the framework itself is quite simple to understand, there are various justifications that people use.
=== Framework Warrants ===
''Phenomenal Introspection'' holds that pleasure and pain are intrinsically valuable or disvaluable as a matter of biology. Just as we know that a lemon is yellow from its color, we know that pleasure is valuable because we naturally strive towards it. For instance, if we put our hand on a hot stove, we instinctively draw it away.
''Actor Specificity'' holds that utilitarianism is the only type of framework that governments can use because when making decisions they engage in aggregation to determine whether a specific policy benefits or harms society as a whole. Since most resolutions have state actors, this argument can be strategic.
''Lexical Prerequisite'' says that, instinctively, we strive to maximize pleasure. For instance, if I were standing on train tracks and an oncoming train approached, I would instinctively jump out of the way.
''No [[Act-Omission Distinction]]'' says that there is no moral distinction between choosing to act and choosing not to act. This can be used to justify consequentialism because, if we are responsible for both active and omitted harms, the only solution is to aggregate.
''No [[Intent-Foresight Distinction]]'' says that if one foresees a consequence when making a decision, then one also intends that consequence. For instance, if I drop a bomb near a hospital to kill a soldier standing outside, and I foresee the bomb destroying the hospital, then even though the destruction was not my aim, I intended to destroy the hospital. Therefore, if the government foresees an extinction-level impact and chooses to ignore it, that is essentially the same as willing extinction. While this argument doesn't justify utilitarianism on its own, it can be leveraged as a reason why your impacts matter under your opponent's framework.
''Degrees of Wrongness'' argues that utilitarianism is the only framework that can differentiate between impacts, because one can weigh the amount of pain or pleasure a particular impact causes. Unfortunately for the util debater, aggregation seems quite possible under other frameworks, too.
''Necessary Enablers'' is a carded argument by Sinnott-Armstrong that states that to achieve a given end, one must complete all the necessary enablers along the way. For instance, promising to mow the lawn doesn't entail promising to start the lawnmower, find gas, etc., yet those steps are required to keep the promise, so only a consequentialist framework like utilitarianism would will us to complete all the intermediary steps.
=== Other Arguments ===
==== Epistemic Modesty ====
Epistemic modesty holds that, since we can never be certain which moral framework is correct, the judge should weigh impacts by the probability that each framework is true rather than treating the framework debate as all-or-nothing. Its strategic value is that most impacts are extinction-level, which has effectively infinite magnitude: the util debater could be far behind on the framework level of the debate, but since their impact has infinite magnitude, they are virtually guaranteed to win the debate regardless.
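As a rough illustration of the math (a hypothetical worked example with arbitrarily chosen numbers, not drawn from any card): suppose the judge assigns only credence <math>p = 0.01</math> to utilitarianism after the framework debate. Under epistemic modesty, impacts are weighed by credence times magnitude, so an extinction impact of magnitude <math>M</math> contributes <math>0.01 \times M</math> to the weighing. If <math>M</math> is treated as effectively infinite, then <math>0.01 \times M</math> still swamps <math>0.99 \times m</math> for any finite impact <math>m</math> under the opposing framework, which is why the util debater can lose the framework debate and still win.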
==== TJFs ====
TJFs, or theoretically justified frameworks, are arguments that appeal to fairness or education to justify the use of a particular framework in the round. For instance, one could argue that utilitarianism is best for ground, since it guarantees offense on either side of the resolution, or that it is best for education, since debaters gain policy knowledge they can use later in life.
Since fairness and education are considered to come first in the round, TJFs are another way to preclude the actual framework debate, which often goes against the util debater since the framework is not as rigorously justified through a syllogism.
=== Readings ===
[https://spot.colorado.edu/~heathwoo/readings/mill.pdf Utilitarianism by John Stuart Mill]
[https://academic.oup.com/mind/article-abstract/104/414/426/961089?redirectedFrom=PDF Philosophical Naturalism]
=== Sample Cases ===
[[Media:CD Util FW.docx|CD Util FW.docx]]