Why Kantian Ethics?
In other words, if a person's emotions or desires cause them to do something, then that action cannot give them moral worth. This may sound odd, but there is good reason to agree with Kant. Suppose I win the lottery. I look around for what would be the most fun to do with the money: buy a yacht, travel first class around the world, get that knee operation, and so on. I decide that what would be really fun is to give the money to charity and to enjoy that special feeling you get from making people happy, so I give all my lottery money away.
According to Kant, I am not a morally worthy person because I did this; after all, I just did whatever I thought would be the most fun, and there is nothing admirable about such a selfish pursuit.
It was just lucky for those charities that I thought giving away money was fun. Moral worth only comes when you do something because you know that it is your duty and you would do it regardless of whether you liked it. Imagine two people out drinking at a bar late one night, each of whom decides to drive home very drunk. They drive in different directions through the middle of nowhere. One of them encounters no one on the road and gets home without incident, despite driving totally recklessly.
The other drunk is not so lucky: he encounters someone walking at night and kills the pedestrian with his car. Kant would argue that, based on these actions, both drunks are equally bad, and the fact that one of them got lucky does not make him any better than the other.
After all, they both made the same choices, and nothing within either one's control had anything to do with the difference in the outcomes. The same reasoning applies to people who act for the right reasons.
If both people act for the right reasons, then both are morally worthy, even if the actions of one of them happen to lead to bad consequences by bad luck. Imagine a donor who gives to a charity, intending to save hundreds of starving children in a remote village.
The food arrives in the village, but a group of rebels finds out that the villagers have food; the rebels come to steal it and end up killing all the children in the village, and the adults too. The intended consequence of feeding starving children was good, and the actual consequences were bad. Kant is not saying that we should look at the intended consequences in order to make a moral evaluation. He is claiming that, regardless of intended or actual consequences, moral worth is properly assessed by looking at the motivation of the action, which may be selfish even when the intended consequences are good.
One might think Kant is claiming that if one of my intentions is to make myself happy, then my action is not worthy. This is a mistake. The consequence of making myself happy is a good consequence, even according to Kant. Kant clearly thinks that people being happy is a good thing. There is nothing wrong with doing something with the intended consequence of making yourself happy; that is not selfishness.
You can earn moral worth doing things that you enjoy, but the reason you do them cannot be that you enjoy them; the reason must be that they are required by duty. There is also a tendency to think that Kant says it is always wrong to do something that merely causes your own happiness, like buying an ice cream cone.
This is not the case. Kant thinks that you ought to do things to make yourself happy as long as you make sure that they are not immoral. Getting ice cream is not immoral, so you can go ahead and do it. Doing it will not make you a morally worthy person, but it will not make you a bad person either. Many actions which are permissible but not required by duty are neutral in this way. It is fine if people enjoy doing their duty, but it must be the case that they would do it even if they did not enjoy it.
The overall theme is that to be a good person you must be good for goodness' sake. His argument for this is summarized by James Rachels roughly as follows: we should act only on maxims that we could will everyone to follow; if you lie, you act on a maxim that permits lying; if everyone followed that maxim, people would stop believing one another, and lying would accomplish nothing; so the maxim cannot be universalised, and lying is wrong. But the third step is doubtful. After all, it is not as though people would stop believing each other simply because it is known that people lie when doing so will save lives.
For one thing, that situation rarely comes up—people could still be telling the truth almost all of the time. Even the taking of human life could be justified under certain circumstances.
Take self-defense, for example. Maxims and the universal laws that result from them can be specified in a way that reflects all of the relevant features of the situation. Consider the case of the Inquiring Murderer as described in the text.
Suppose that you are in that situation and you lie to the murderer. Your maxim might be specified as 'I will lie when doing so is necessary to save an innocent person's life.' Specified in that way, the maxim seems to pass the test of the categorical imperative.

Autonomy of the will requires inner and outer development of the person to reach a state of moral standing and be able to engage in moral conduct.
This is suggestive of an innate sense of right and wrong. Artificial intelligence in autonomous weapons may allow machine logic to develop over time to identify correct and incorrect action, showing a limited sense of autonomy.
But the machine has no self-determining capacity that can make choices between varying degrees of right and wrong. The human can decide to question or go against the rules, but the machine cannot, except in circumstances of malfunction or misprogramming. It has no conception of freedom and of how freedom could be enhanced for itself as well as for humans. The machine will not be burdened by moral dilemmas, so the deliberative and reflective part of decision-making, vital for understanding the consequences of actions and ensuring proportionate responses, is completely absent.
Robots may have a common code of interaction to promote cooperation and avoid conflict among themselves. Autonomous weapons operating in swarms may develop principles that govern how they interact and coordinate action to avoid collision and errors.
But these are examples of functional, machine-to-machine interaction that do not extend to human interaction, and so do not represent a form of autonomy of the will that is capable of universalisation.
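To make the contrast concrete, here is a minimal, purely hypothetical sketch of the kind of machine-to-machine coordination described above; the swarm units, separation distance, and function names are invented for illustration and are not drawn from any real system. The "principle" governing interaction is nothing more than a fixed distance rule applied mechanically.

from dataclasses import dataclass

# Hypothetical illustration: a fixed separation rule for units in a swarm.
# The rule is applied mechanically; nothing here deliberates about ends,
# weighs values, or chooses between degrees of right and wrong.

MIN_SEPARATION = 10.0  # assumed minimum distance between units, in metres

@dataclass
class Unit:
    x: float
    y: float

def too_close(a: Unit, b: Unit) -> bool:
    """Return True if two units violate the fixed separation rule."""
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5 < MIN_SEPARATION

def keep_separation(a: Unit, b: Unit) -> None:
    """Nudge unit a directly away from unit b until the rule is satisfied."""
    while too_close(a, b):
        dx, dy = a.x - b.x, a.y - b.y
        norm = (dx ** 2 + dy ** 2) ** 0.5
        if norm == 0.0:
            dx, norm = 1.0, 1.0  # coincident units: pick an arbitrary direction
        a.x += dx / norm
        a.y += dy / norm

u1, u2 = Unit(0.0, 0.0), Unit(3.0, 4.0)  # 5 metres apart: rule violated
keep_separation(u1, u2)
print(too_close(u1, u2))  # False: the rule is now satisfied

However sophisticated a real swarm's coordination may be, it is of this functional kind: rule-following without any conception of ends, and so not a candidate for autonomy of the will.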
When we talk about trust in the context of using artificial intelligence and robotics, what we actually mean is reliability. Trust relates to claims and actions people make; it is not an abstract thing.
Algorithms cannot determine whether something is trustworthy or not. So 'trust' is used metaphorically to denote functional reliability: the machine performs its tasks for the set purpose without error, or with a minimal level of error that is acceptable.
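As a rough, hypothetical illustration of trust reduced to functional reliability, the sketch below simply compares an observed error rate against an assumed acceptable threshold; the threshold, names, and figures are invented for illustration, not taken from any standard.

# Hypothetical sketch: "trusting" a machine reduced to functional reliability,
# i.e. an observed error rate compared against an acceptable threshold.

ACCEPTABLE_ERROR_RATE = 0.01  # assumed threshold; a design choice, not a moral judgment

def is_reliable(successes: int, failures: int,
                threshold: float = ACCEPTABLE_ERROR_RATE) -> bool:
    """Return True if the observed error rate is within the acceptable threshold."""
    total = successes + failures
    if total == 0:
        return False  # no track record, so no claim to reliability
    return failures / total <= threshold

print(is_reliable(successes=995, failures=5))  # 0.5% error rate -> True

A measure like this says nothing about whether the people who build or deploy the system deserve trust in the ordinary, interpersonal sense.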
There is also an extension of this notion of trust connected to human agency in the development and uses to which artificial intelligence and robotics are put. Can we trust the humans involved in developing such technologies to do so with ethical considerations in mind, i.e. limiting unnecessary suffering and harm to humans and not violating fundamental human rights? Once the technology is developed, can we trust those who will make use of it to do so for benevolent rather than malevolent purposes?
These questions often surface in debates on data protection and the right to privacy in relation to the personal-data trawling activities of such technologies. Again, this goes back to what values will be installed that reflect ethical conduct and allow the technology to distinguish right from wrong. Technology may be deemed to have rational thinking capacity if it engages in a pattern of logical thinking from which it rationalises and takes action.
But this seems a low threshold, raising concerns about the predictability and certainty of the technology in real-life scenarios. So there would need to be much greater clarity and certainty about what sort of rationality the technology would possess and how it would apply in human scenarios.
When we compare machines to humans, there is a clear difference between the logic of a calculating machine and the wisdom of human judgment. Machines are good at automatic reasoning and can outperform humans in such activities. But they lack the deliberative and sentient aspects of human reasoning that are necessary in the human scenarios where artificial intelligence may be used.
They do not possess the complex cognitive ability to appraise a given situation, exercise judgment, and refrain from taking action or limit harm. Unlike humans, who can pull back at the last minute or choose a workable alternative, robots have no instinctive or intuitive ability to do the same. For example, during warfare the use of discretion is important for implementing rules on preventing unnecessary suffering, taking precautionary measures, and assessing proportionality.
Such discretion is absent in robots. Should the technology possess universal or particular moral reasoning? Ongoing developments in the civilian and military spheres highlight moral dilemmas and the importance of human moral reasoning in mediating between competing societal interests and values. Robots in general may need to lack the ability to deceive and manipulate humans, so that human rational thinking and free will remain intact.
Then there is the issue of whether fully autonomous weapons should be developed to replace human combatants in the decision-making process over using lethal force against another human being. Is there a universal moral reasoning that the technology could possess to solve such dilemmas?
Or would it have to possess a particular moral reasoning, specific to the technology or scenario? Human moral reasoning involves a combination of comprehension, judgment, experience, and emotions. It may also be dependent on societal, cultural, political, and religious factors. Arguably, the Universal Declaration of Human Rights provides a common standard of universal moral reasoning in setting out general human rights that are deemed universal, indivisible, and inviolable.
For example, an autonomous weapon that is capable only of targeting and destroying buildings will not have to consider factors relating to the location, appearance, intentions, or activities of a human combatant. On the other hand, if the weapon is employed in uncomplicated and non-mixed areas and is capable of human targeting, it would have to engage in moral reasoning that complies with the principles of distinction, proportionality, and the prevention of unnecessary suffering.
Machine moral reasoning, however, may or may not be able to interpret the relative significance and value of certain human rights, which could lead to arbitrary and inconsistent application. One way to overcome this is to design the technology to be value-neutral in identifying human lives, so that it is not based on cultural, racial, gender, or religious biases.
Human dignity is accorded by recognising the rational capacity and free will of individuals to be bound by moral rules, as well as through notions of accountability and responsibility for wrongdoing. How can artificial intelligence express person-to-person accountability and fulfil this aspect of human dignity, given that accountability for wrongdoing means respecting moral agents as equal members of the moral community?

The utilitarian philosopher John Stuart Mill criticised Kant for not realising that moral laws are justified by a moral intuition based on utilitarian principles: that the greatest good for the greatest number ought to be sought.
Elizabeth Anscombe criticised modern ethical theories, including Kantian ethics, for their obsession with law and obligation. As well as arguing that theories which rely on a universal moral law are too rigid, Anscombe suggested that, because a moral law implies a moral lawgiver, they are irrelevant in modern secular society. The Catholic Church has criticised Kantian ethics for its apparent contradiction, arguing that humans being co-legislators of morality contradicts the claim that morality is a priori.
If morality is universally a priori, binding independently of any human act of legislation, then it is hard to see how humans could also be its authors. The theory of the categorical imperative is, moreover, inconsistent: according to it, the human will is the highest lawgiving authority and yet is subject to precepts enjoined on it.

Another example Kant gives is of our obligation to help out others.
If we tried to universalise a maxim of never helping others, we would eventually have to break that maxim because of our own need for help. Thus, from this, we get the duty that we should sometimes help out others in need.

One criticism that Kant faced among his contemporaries was for his stance on lying, since he said that we always have a duty to be truthful to others (Metaphysics of Morals). Suppose that your friend is being pursued by someone who intends to kill him.
Your friend comes to your house and asks to hide. Suppose your friend hears the killer knocking at the door and decides to flee out the back without your knowing.
You lie and tell the killer that your friend is not here, and the killer leaves. Because of this, your friend and the killer bump into each other, and your friend is killed. Kant's general point is that consequences are uncertain. The type of rational approach to ethics that Kant prefers downplays the importance of consequences because of this unpredictability.
A related worry concerns maxims that seem harmless yet appear to fail the universalisation test. Consider the maxim 'I will buy stamps for my collection but never sell any.' If everyone acted on it, no stamps would ever be offered for sale, so no one could buy them, and the maxim could not be universalised. This seems to lead to the implausible conclusion that collecting stamps, or collecting anything, is immoral. Some who want to defend Kant think that the problem is with how the maxim is phrased. The maxim specifies two actions: buying and not selling. When we formulate it in some ways, as in the stamp-collecting case, it leads to a contradiction, whereas formulating it in other ways does not.
For Kant, just doing the right thing is not sufficient for an action to have full moral worth. He believes that a good will is essential for morality. This is intuitively plausible, because it seems that if an otherwise good action is done with bad or selfish intentions, that can rob the action of its moral goodness.
If we imagine a man who goes to work at a soup kitchen to help out the poor, that seems like a good action; but if he is there only to impress someone or to polish his own reputation, the action seems to lose its moral goodness. Less intuitive is Kant's claim that the only possible genuine good will is respect for the moral law. That is, when you do something because it is the right thing to do, that alone counts as good will. Schopenhauer thought that good people are good because they want to do good actions and because they feel love and compassion towards others. If we return to the example of working in the soup kitchen: if the person is showing up because he likes helping people, or because he feels compassion for the people he helps and wants to improve their lot, Schopenhauer would say this is a good person performing a virtuous action.
Kant defended his position on good will by saying that an action done out of love or out of compassion is not fully autonomous. Autonomy means self-rule, and Kant saw it as a necessary condition for freedom and morality. If an action is not done autonomously, it is not really morally good or bad.
For example, if our friend at the soup kitchen is working there because of some implant in his brain by which another person is able to control his every action, then the action is neither autonomous nor morally commendable.
Concerning acting out of love and compassion, Kant believed that when people act because of their emotions, their emotions are in control, not their rationality. To be truly autonomous, for Kant, an action must be done because of reason. An action done because of emotion is not fully free and not quite fully moral.
The important point is that the reason you do an action should be that you have determined it is the right thing to do.

The idea underlying the second formulation of the categorical imperative is that all humans are intrinsically valuable.
What has a price is a thing, but a person has dignity and is thus beyond price and irreplaceable. To treat someone merely as a means is to deny the person proper respect: to fail to treat the person with dignity, to treat the person as a thing. Using a person in that way devalues the person. Similarly, if you harm someone, take advantage of someone, or steal from someone, then you treat that person as a thing, as a means to your ends.
Conversely, if you treat someone as having unlimited value, if you treat the person with dignity and respect, then you treat the person as an end. For example, imagine that your pipes need fixing and you call a plumber. By paying him the agreed-upon amount, you are making his end, earning a living, your end. One way to think of the idea of treating people as ends or as means is this: when you treat people as ends, you make their ends your ends, and when you treat people merely as means, you force them to make your ends their ends.
Consider the example of the false promise: you borrow money from a friend by promising to repay it, even though you know you will not be able to. You made the false promise because you needed to borrow money to pay off debts; thus, your end was to pay off debts, and by lying to your friend, you force him to make your end, paying off your debts, his end.
If you instead told your friend that you needed money and might not be able to pay it back, your friend would be able to decide for himself whether to lend it to you and so freely adopt your end as his own.