Developers and designers of technological systems often neglect ethical and moral issues, concerning themselves primarily with the machines’ usability and functionality. Several philosophers question the legitimacy of such systems as a result, and most advocate for the inclusion of moral and ethical reasoning algorithms in automated machines (Spier, 2003).
The development of robotic cars has raised ethical difficulties that philosophers have debated in many ways. Are robotic automobiles acceptable simply because they follow traffic rules and prevent collisions, without regard for ethical issues? Numerous philosophers have developed critical arguments in response to self-driving technology. Human beings are more likely to act ethically in dynamic road situations because human drivers can exercise judgment, unlike robotic cars, which do not reason. The designers of self-driving vehicles are therefore expected to consider ethics when programming the cars (Gunkel, 2007).
Government traffic laws are ill-equipped to deal with such a technology. Just because automated vehicles can pass a driving test does not mean they are street-legal. Human drivers have ethics and wisdom that they can draw on in complicated road situations; autonomous vehicles are a new technology, and there is no record of them exercising ethics or wisdom in any way. Ethics and law also diverge at some point. The ability of human beings to reason can compel them to act illegally: a human driver may be tempted to exceed the posted speed limit in an emergency, whereas an automated vehicle cannot speed because it lacks judgment (Spier, 2003). Self-driving cars are legal in the United States because of the principle that ‘everything is permitted unless prohibited.’
Ideally, ethics, policy, and law would align. Unfortunately, this is not the case in the real world; there is a break between the three. For example, speeding is against the law, but it is not always unethical to speed: the judgment depends on the morality of the situation. Suppose a human driver speeds to save a person’s life by rushing the patient to a hospital. The decision to speed is ethical even though the driver is entangled in the unlawful behavior of speeding. Separate state laws should therefore be enacted to govern autonomous vehicles, and those laws should ensure that automated vehicles make moral sense. Programming an automated vehicle always to follow the law can sometimes be foolish and dangerous; the cars need moral judgment so that they can exceed speed limits when the road is clear and the situation is an emergency, as the sketch below illustrates (Winston and Edelbach, 2014).
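As a toy illustration of this last point, consider how a purely law-following speed policy differs from one that permits a bounded emergency override. The sketch below is hypothetical Python; the function names, conditions, and override factor are all assumptions made for illustration, not any real vehicle’s control logic.

```python
# Hypothetical sketch: a law-only speed policy versus one that allows
# a bounded, context-justified override. All names and numbers are
# illustrative assumptions, not a real vehicle's control logic.

def legal_speed_policy(speed_limit: float) -> float:
    """Always obey the posted limit, regardless of context."""
    return speed_limit


def moral_speed_policy(speed_limit: float, emergency: bool,
                       road_clear: bool, override_factor: float = 1.2) -> float:
    """Permit a bounded override when an emergency justifies it
    and the road is clear enough to make it reasonably safe."""
    if emergency and road_clear:
        return speed_limit * override_factor  # e.g. rushing a patient to hospital
    return speed_limit


print(legal_speed_policy(50.0))                                   # 50.0
print(moral_speed_policy(50.0, emergency=True, road_clear=True))  # 60.0
```

The point of the toy is not the numbers but the design question: any override rule encodes a moral judgment that the programmer, rather than the driver, has made in advance.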
The modern world is dominated by artificial machines. These systems apply a limited form of artificial intelligence and are designed to perform duties automatically. The software in control of self-driving machines is ‘ethically blind’: its decision-making involves no moral reasoning, and the machines’ sensory capacities are not tuned to the ethically relevant features of the world. It is essential for software developers to build systems capable of applying ethical and moral reasoning in their decision-making.
A common example many philosophers use to explain the moral dilemma of the robotic vehicle is the ‘trolley problem’ (Spier, 2003). Imagine a self-driving vehicle carrying five passengers on a highway spots a young boy running after a ball. Should the car risk the passengers’ lives by swerving so that the boy is saved, or should it hit the boy to protect the five people? The scenario poses a moral and ethical dilemma that designers need to take into account when programming the machines, since it is unethical for the car to do anything that puts lives at risk. Recent reports suggest that current cars are programmed simply to avoid any collision, even when avoidance puts the passengers’ lives at risk. The algorithms that control the vehicles should therefore incorporate moral principles to ensure the safety of their users.
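One way to see why this dilemma is a programming problem rather than only a thought experiment is to write it down as an explicit harm-minimising choice. The sketch below is deliberately crude and purely utilitarian; the `Action` structure and the outcome probabilities are invented for illustration, not a claim about how any real system works.

```python
# A deliberately crude utilitarian sketch of the trolley-style choice.
# The probabilities are invented for illustration; real systems would
# need far richer (and contested) ethical models.

from dataclasses import dataclass
from typing import List


@dataclass
class Action:
    name: str
    expected_deaths: float  # probability-weighted fatalities


def choose_action(actions: List[Action]) -> Action:
    """Pick the action with the lowest expected harm. This bakes a
    purely utilitarian principle into the controller, which is exactly
    the kind of choice the designer must be able to justify."""
    return min(actions, key=lambda a: a.expected_deaths)


swerve = Action("swerve off the road", expected_deaths=5 * 0.1)  # small risk to 5 passengers
stay = Action("stay in lane", expected_deaths=1 * 0.9)           # near-certain harm to the boy

print(choose_action([swerve, stay]).name)  # 'swerve off the road' under these weights
```

Changing the assumed probabilities flips the decision, which is the essay’s argument in miniature: the moral commitments live in the weights the programmer chooses.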
Many philosophers do not believe that robotic machines can have morals, because the machines lack the consciousness and emotions that are special qualities of human beings. Automated machines could have morals if strong artificial intelligence were implemented in the algorithms that control them, but in practice it is not easy to make machines reason like humans. Automated cars can be dangerous when they lack ethical and moral judgment, so it is important to consider embedding the machines with ‘operational morality’ (Winston and Edelbach, 2014).
Conclusion
The improvement of science and technology has led to an increase in the use of automated machines. The major problem with these machines is that they lack ethical and moral judgment, which limits them in constantly changing situations that demand logical reasoning. In the case of robotic vehicles, they need to reason ethically so that the lives of passengers are not put at risk.
References
Gunkel, D. (2007). Thinking otherwise: Ethics, technology and other subjects. Ethics and Information Technology, 9(3), pp. 165-177.
Spier, R. (2003). Science and technology ethics. London: Routledge.
Winston, M. and Edelbach, R. (2014). Society, ethics, and technology. Boston, MA: Wadsworth Cengage Learning.