Utilitarianism is probably the most common framework read in LD. Utilitarian-type frameworks center on the idea of maximizing pleasure. People read various standard texts, but almost all of them hold that pleasure is good and pain is bad. An extension of this is that minimizing deaths should be the most important goal, which is why most utilitarian cases run extinction-level impacts. Although the framework itself is quite simple to understand, debaters use a variety of justifications for it.
Phenomenal Introspection holds that pleasure and pain are intrinsically valuable or disvaluable. Just as introspection tells us that a mental image of a lemon is yellow, introspecting on pleasure reveals it to be good, which is why we naturally strive towards it. For instance, if we put our hand on a hot stove, we instinctively draw it away.
Actor Specificity holds that utilitarianism is the only type of framework governments can use, because when making decisions they must aggregate to determine whether a specific policy benefits or harms society as a whole. Since most resolutions have state actors, this argument can be strategic.
Extinction First says that no matter what the framework is, preventing extinction should be the highest priority. Theoretically, the util debater could be losing the framework debate, but provided they are winning this claim, they can use it to preclude offense under the opponent's framework. The warrants for this argument are usually meta in nature: we need to be alive to determine which framework to use, survival enables moral progress, and so on.
Reductionism is usually a card citing a split-brain experiment in which the brain's hemispheres were surgically separated and each behaved semi-autonomously. The argument concludes that we lack a continuous identity as a moral agent, so the only thing left to do is maximize pleasure. If this argument sounds a bit silly, it's because it is.
Lexical Prerequisite says that, instinctively, we strive to maximize pleasure before anything else. For instance, if I were standing on train tracks and a train approached, I would instinctively jump out of the way.
No Act-Omission Distinction says that there is no moral difference between choosing to act and choosing not to act. This can justify consequentialism because, if we are responsible for both active and omitted harms, the only solution is to aggregate.
No Intent-Foresight Distinction says that if one foresees a consequence when making a decision, then one also intends that consequence. For instance, if I drop a bomb near a hospital to kill a soldier standing outside, and I foresee the bomb destroying the hospital, then even though the destruction was not my aim, I would have intended to destroy the hospital. Therefore, if the government sees an extinction-level impact coming and chooses to ignore it, that is essentially the same as willing extinction. While this argument doesn't justify utilitarianism itself, it can be leveraged as a reason why your impacts matter even under your opponent's framework.
Degrees of Wrongness argues that utilitarianism is the only framework that can differentiate between impacts, because one can weigh the amount of pain or pleasure a particular impact causes. Unfortunately, aggregation does seem quite possible under other frameworks, too.
Naturalism is an argument that states all ethical principles must be derived from natural properties. Since utilitarianism is inherently a natural framework that focuses on maximizing pleasure, a natural property, it would exclude any other frameworks that stem from non-natural principles.
Necessary Enablers is a carded argument by Sinnott-Armstrong that states that to achieve a given end, one must complete all the necessary enablers along the way. For instance, promising to mow the lawn doesn't entail separately promising to start the lawnmower, find gas, and so on; only a consequentialist framework like utilitarianism would will us to complete all the intermediary steps.
Epistemic modesty is another argument people throw into some util frameworks. It says we should evaluate any impact by multiplying the probability that the framework is true by the magnitude of the impact in question. This is very different from epistemic confidence, the way traditional framework debates are resolved, which says we first have the debate over which framework is true, and even if a framework is only 1% ahead, we use that framework alone to evaluate the offense.
The strategic value of epistemic modesty is that most impacts are extinction-level, which carries an effectively infinite magnitude. The util debater could be far behind on the framework level of the debate, but since even a small probability multiplied by infinite magnitude is still infinite, they are virtually guaranteed to win the weighing regardless.
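The multiplication rule above can be sketched in a few lines of Python. The numbers and function name here are purely illustrative assumptions, chosen only to show why an extinction-level (infinite-magnitude) impact dominates under epistemic modesty even when its framework is losing:

```python
# Hedged sketch of epistemic-modesty weighing: each impact is weighted
# by the probability that its underlying framework is true.
# All probabilities and magnitudes below are hypothetical.

def modest_weight(p_framework_true: float, impact_magnitude: float) -> float:
    """Expected moral weight of an impact under epistemic modesty."""
    return p_framework_true * impact_magnitude

# Util debater is far behind on the framework debate (10% vs 90%),
# but their impact is extinction, i.e. effectively infinite magnitude.
util_weight = modest_weight(0.10, float("inf"))  # extinction impact
deon_weight = modest_weight(0.90, 1_000)         # finite deontic impact

# Any nonzero probability times infinity is still infinity,
# so the extinction impact outweighs regardless.
print(util_weight > deon_weight)  # True
```

This also makes the contrast with epistemic confidence concrete: confidence would pick the 90% framework outright and ignore the other side's offense, while modesty lets the 10% framework's impact dominate through sheer magnitude.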
TJFs, or theoretically justified frameworks, are arguments appealing to fairness or education to show why using a particular framework in the round is good. For instance, one could argue that utilitarianism is best for ground, since it guarantees offense on either side of the resolution, or that it is best for education, since debaters gain policy knowledge they can use later in life.
Since fairness and education are considered to come first in the round, TJFs are another way to preclude the actual framework debate, which often cuts against the util debater because utilitarianism is not as rigorously justified through a syllogism.