lhexan: formed of text, to retrieve lost text
From a discussion on the CRPG Addict Patreon.

I view deontology and utilitarianism (and virtue) as aspects of ethics, not just competing systems. Deontology is the part of ethics that's universal, utilitarianism is the part that's situational, and virtue is the part that's personal. Virtue ethics is particularly important for disempowered people, who are less likely to face the dilemmas that the other systems address.

With Mass Effect 2, it sounds like the burdensome but common problem where giving the player some amount of agency highlights the agency that they still lack. A story with no choices will feel less limiting than a story with binary or arbitrary choices. If you've got a spare corner of free time for an indie game, Disco Elysium does a fantastic job of giving a wide array of choices, aided by the fact that its main character is very much not a blank slate.

Tristan Gall: I’d say that whenever the rules approach works best, the results approach adopts it. Thus, the utilitarian might think there is no such thing as a ‘right’ but still be all for the idea of teaching kids about human rights.

I think all three approaches can yield a complete ethics on their own ("complete" in the sense that following it is enough to make you a good person), but that each one runs into distinct difficulties. Here are three advantages deontology has over utilitarianism.

First, large-scale utilitarianism ends up relying on arguments about human psychology that aren't empirically grounded. I happen to believe humanity benefits the most when all people are accorded full and equal rights, but I do not think this has been established empirically. This is a more acute version of a general problem: utilitarianism requires you to choose whichever action you predict will be most beneficial, but the chaos of human behavior makes such prediction very difficult.

Second, an ethical system must be applicable not just in theory but in practice: it should be helpful in the ethical dilemmas that people actually face. Here utilitarianism has the problem that, while it requires one to weigh the outcomes of the various possible choices, in many situations these possibilities are effectively unlimited. Most ethical problems are not trolley problems. Even a simple choice (say, lie or don't lie) subdivides into a much larger set of options (lie in an emotionally manipulative way, lie in a way that minimizes the number of falsehoods told, tell the truth in a way you know will mislead, refuse to answer, tell the truth but refuse to elaborate, give all the desired information even when you know it will hurt). In practical situations, you may have very little time to decide. A deontological rule takes an entire subset of options off the table, letting you spend your limited time and energy subdividing the fewer options that remain and choosing between them on utilitarian grounds. Even when the rule is not strict, it will often still be stated in deontological form. For instance, my rule is not to lie to human beings, while lying to computers or websites is fine.
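Purely as an illustration of that last point (nothing in this sketch is from the original discussion; the option names and benefit numbers are invented), the two-stage procedure, a deontological filter followed by a utilitarian comparison of whatever survives, might look like this in Python:

# Illustrative sketch only: a deontological rule prunes the option
# space first, and utilitarian weighing runs only on what remains.
# Option names and benefit estimates are invented for the example.

options = {
    "emotionally manipulative lie":  {"is_lie": True,  "benefit": 6},
    "lie minimizing falsehoods":     {"is_lie": True,  "benefit": 5},
    "misleading truth":              {"is_lie": False, "benefit": 3},
    "refuse to answer":              {"is_lie": False, "benefit": 2},
    "truth, but no elaboration":     {"is_lie": False, "benefit": 5},
    "full truth, even if it hurts":  {"is_lie": False, "benefit": 4},
}

def choose(options):
    # Stage 1 (deontological): "don't lie" removes a whole subset of
    # options before any time is spent weighing their outcomes.
    permitted = {k: v for k, v in options.items() if not v["is_lie"]}
    # Stage 2 (utilitarian): compare predicted benefit among the few
    # options that remain.
    return max(permitted, key=lambda k: permitted[k]["benefit"])

print(choose(options))  # -> "truth, but no elaboration"

The point of the structure is the savings in stage 2: the rule does the cheap, fast pruning, and the expensive outcome-prediction is reserved for the short list that survives it.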

Third, utilitarians will, like everyone else, sometimes face the dilemma of whether to prioritize their own benefit or another's. An Objectivist can argue that always prioritizing oneself ultimately brings the most benefit to everyone, but I don't believe that. Rather, I think that my happiness counts for exactly as much as your happiness. And that assertion is deontological, because it is a universal ethical assertion that does not permit situational considerations. There are no situations in which my happiness is worth more or less than yours, although there are situations where I can affect one more than the other. So, just as deontological rules can generally be justified on utilitarian grounds, even pure utilitarianism contains a nugget of deontology: the principle that benefit to one person counts the same as benefit to another.

This was an excuse to pontificate. Don't worry if you feel no desire to engage. ^^

Tristan Gall: I welcome pontificating :)

I think whatever ethics algorithm you pick, you'll be bound by the usual limitations of imperfect information, time constraints, and the rest. I think we're stuck relying on 'rules of thumb' no matter which approach we take. E.g., I think 'First, do no harm' is a useful default position for utilitarians. Why? Because I think the optimal outcome is usually cooperative rather than competitive, and that maxim starts things off in the right direction.
