“Robot troops who execute commands without being influenced by human emotions may help to reduce casualties in conflicts. But who will accept accountability for their actions?”
These are some of the ethical concerns expressed about the role of robots on the battlefield and whether robots should be given the ability to kill. Robots are intelligent machines that can accomplish tasks in the real world without explicit human control (Evans). Humanity’s robot innovation is thought to have both beneficial and detrimental effects on ethical, social, and professional norms. Some critics contend that the deployment of automated killing robots breaches the right to human dignity. Robot ethics examines artificially intelligent beings and how they harm and benefit humans. It also evaluates robots’ rights and the resulting threats to human dignity and privacy. This report explores whether sophisticated autonomous robots should be given the status of “electronic persons.” The topic under discussion is whether or not robots should have the right to kill.
In the first segment of this report, I will present arguments that support the statement that robots should be given the right to kill. Robots should be given the right to kill because they follow a set of formulated laws that will prevent mishaps on the battlefield. Asimov’s Three Laws state that: a robot may not injure a human being or, through inaction, allow a human to come to harm; a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law (Steve R). These rules, therefore, create ethical standards intended to guarantee the safety of humans on the battlefield.
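The strict priority ordering of the three laws can be illustrated with a short sketch. This is purely hypothetical: the `Action` type, its fields, and the `permitted` helper are invented here for illustration, not drawn from any real robotics system.

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool      # would this action injure a human (or allow harm through inaction)?
    disobeys_order: bool   # would it violate an order given by a human?
    endangers_self: bool   # would it put the robot's own existence at risk?

def permitted(a: Action) -> bool:
    """Check an action against Asimov's Three Laws in strict priority order."""
    if a.harms_human:      # First Law: overrides everything else
        return False
    if a.disobeys_order:   # Second Law: obey humans, unless the First Law forbids it
        return False
    if a.endangers_self:   # Third Law: self-preservation, lowest priority
        return False
    return True
```

In this sketch, an order whose execution would harm a human is rejected by the First Law check before the Second Law is ever consulted, which is the priority ordering Asimov's formulation requires.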
Secondly, robots should be given the right to kill since programmed morality will be built into the robot’s artificial intelligence system, covering both the individual AI and the network as a whole. Benefits attributed to using AIs include a level of clarity that surpasses that of the enemy, which translates into precise decision making in the field. Rapid analysis and decision making will therefore create a tactical advantage in war, which could be the difference between life and death. The networked system will also be able to detect anomalies, so that a faulty robot can be disabled and destroyed. This is comparable to what happens among soldiers on the battlefield when one of them goes rogue.
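The network-level safeguard described here could be sketched as follows. Everything in this snippet is hypothetical: the action names and the `monitor_fleet` helper are invented only to illustrate how a shared system might flag a rogue unit for shutdown.

```python
# Actions the shared moral programming forbids (illustrative only).
FORBIDDEN_ACTIONS = {"target_civilian", "fire_without_order"}

def monitor_fleet(reports: dict[str, str]) -> set[str]:
    """Given each robot's reported intended action, return the ids of
    units whose action violates the shared constraints and should be
    disabled by the network."""
    return {robot_id for robot_id, action in reports.items()
            if action in FORBIDDEN_ACTIONS}

# A rogue unit is detected and flagged for shutdown:
to_disable = monitor_fleet({
    "unit_1": "hold_position",
    "unit_2": "target_civilian",  # the faulty robot
    "unit_3": "advance",
})
```

The design choice sketched here mirrors the essay's claim: the check lives in the network, not in the individual robot, so a single compromised unit cannot exempt itself from review.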
Another argument in favor of allowing robots to kill rests on their freedom from the shortcomings of human soldiers. On the battlefield, panic, vengeance, fear, poor cognition, and the limitations of the human senses can impair judgment depending on the situation at hand. Unlike human soldiers, robots can be used in self-sacrificing scenarios without the guidance of a commanding officer. This reduces the untimely deaths caused by the “shoot first, ask questions later” approach (John). Robots thus have an added advantage, since their systems do not respond to the emotions that cloud human judgment in the field.
The fourth argument for the use of robots in killing is that robots will help to reduce the number of casualties on the battlefield. It is common knowledge that most wars involve massive loss of life, since neither party wants to concede defeat. This means that victory is achieved at the expense of human lives. The use of robots will help to reduce such casualties, thus preserving human life. In this scenario, the use of automated robots can therefore be considered ethical since it protects life.
The fifth point in support of the right of a robot to kill concerns the fact that robotic machines are replaceable once they have been destroyed in war. Because these devices are technologies developed by humans, they can easily be replicated once they have been damaged or have malfunctioned. Human life, on the other hand, is a valuable gift that cannot be regenerated or replaced, and it is therefore difficult to estimate the cost associated with its loss. The use of automated machines helps to eliminate potential losses related to death and, in other cases, the lifetime injuries that soldiers sustain at war. Such injuries usually create emotional stress for the individual, the family, and the entire nation. In addition, they typically result in an economic burden, since the wounded require medical care and many of them cannot return to work. Robots therefore help to eliminate such situations in war scenarios.
The second segment of this report presents arguments against the statement that robots should be given the right to kill. In this section, I will evaluate and discuss various reasons why robots should not be given that right.
To begin with, the use of Lethal Autonomous Robots (LARs) removes from humans the responsibility for the critical decision of whether or not to shoot. The automated system will therefore infringe on fundamental human rights, since it will operate without rational consent (Bryson). I believe that only humans should make ultimate decisions concerning life and death. The critical issue surrounding this argument is who will take responsibility if the robot kills the wrong target.
Secondly, robots should not be given the right to kill because, if the technology falls into the wrong hands, it could be manipulated and used to carry out unwarranted targeted killings. This danger stems from the system’s ability to strike anywhere in the world. Mass-produced cheap robots will be vulnerable to hacking. With the rapid advancement of technology, a programming expert could hack the system and alter the machine to become extremely violent. If the command “do not kill” is not in the device, the system becomes open to manipulation, and the rest of the built-in moral controls are rendered useless. This will create fear and mistrust in the world, since the robots will be used to terrorize enemies and intimidate entire populations.
The third argument against allowing robots to kill relates to the fact that, in normal circumstances, humans do not always aim to kill. Soldiers in the field always try to minimize the number of casualties. It is therefore ethically wrong to give machines the ability to kill freely when, in reality, we do not advocate killing our fellow humans; this principle is documented in most of our religions. In society, many individuals and human rights groups advocate for the right to life, which demonstrates that killing is permitted only in cases of defense. It would therefore be irresponsible to give the right to kill to a machine whose capabilities we do not fully grasp. This would mean endangering our own lives.
The fourth argument against the use of robots in killing stems from the fact that all machines are bound to experience breakdowns or failures, since they have been developed by the human mind. These malfunctions usually cause the device to carry out commands that were not originally intended (Patrick). In the case of robots designed for killing on the battlefield, failure of the machine can be extremely dangerous and result in mass deaths. This is especially true where the machine has no programming against killing. A slight compromise in the system could thus be catastrophic, and instead of increasing our safety, the robots could become a source of risk to society.
Another possible risk associated with using robots to kill arises from the ability of the robot’s AI to surpass human intelligence, considering that technology is rapidly evolving and becoming more sophisticated. This will pose a severe problem, since humans will not have the capacity to control the activities of the robots. The capabilities of the machine will be beyond our power, resulting in massive deaths in a war scenario. Instead of providing a solution to the crisis of war, the devices could carry a command that endangers the survival of the human species. In that situation, we would become slaves of our own creation, and our destiny would lie in the hands of the automated robots we had created.
The sixth argument against the use of robots to kill is drawn from the likelihood that the incidence of war will increase across the globe. This is mainly because the machines will cheapen conflict, since there will be no soldier casualties. The high mortality rate usually associated with wars normally pushes warring nations to end the fighting and draw up peace agreements and treaties. Human life is precious, and the loss of lives resulting from war usually creates an uproar in the affected nation. Replacing human soldiers with robots will encourage war and violence, since the army of the country in question will not be affected.
Lastly, the seventh argument against giving robots the right to kill is based on the inability of a robot to distinguish between an enemy and a civilian. Fully autonomous weapons would not have the ability to sense or interpret the difference between soldiers and civilians, especially in contemporary combat environments (Kreiger). The weapons have an automatic response triggered by the presence and proximity of their victim. To make matters worse, a robot would not possess the human capacity to assess an individual’s intentions. This is a grave concern, considering that it increases the machine’s potential to kill innocent individuals. The robot would therefore directly violate international humanitarian law, and this problem can only be solved by programming specific rules into the artificial conscience of the machine. At the moment, such devices have not been equipped with this capability, creating room for violations of human rights. Given these limitations, insurgents will take advantage of and manipulate the weapons by exploiting the machines’ behavioral and sensory flaws.
In conclusion, my position on the subject at hand is that robots should not be given the right to kill until new cognitive technologies are developed and incorporated into the system. I believe it is paramount to protect human lives, and yet, at the moment, the use of robots has the potential to increase the number of non-combatant casualties compared to traditional human warfighters. External vulnerabilities that inhibit the adoption of such systems include cyber-attacks, lack of self-defense, software errors, and weak cognitive skills. It is evident that the use of autonomous weapons, with all the limitations they currently have, will result in the killing and subduing of populations, which will destabilize weaker nations in the long run.
The ethical theory that supports my position concerning the right to life, which would be violated by the use of robots, is consequentialism. In this theory, the consequences of one’s conduct are the ultimate basis for any judgment about the rightness or wrongness of that behavior (Scheffler). This means that the morality of a particular action is contingent on the outcome of the specific decision. In this scenario, the use of automated robots in war zones will result in a higher number of deaths because of the limitations the machines have. The decision to send robots to war alongside humans will therefore undoubtedly result in the deaths of human soldiers in the field. In worst-case scenarios, innocent civilians may also be engaged by the robots, considering that the machines have minimal cognitive ability. Leaders and the military therefore have a moral responsibility to stall the use of robots in war until more efficient technologies are developed. This will help avoid the negative consequences of innocent deaths and the possible manipulation of the technology to escalate conflict, meaning the right to life will be preserved.
Bryson, J. J. “AI Ethics: Artificial Intelligence, Robots, and Society.” Ethics, 2016.
Evans, Woody. “Roboethics.” 2012. Accessed 28 Nov. 2017.
John, K. “‘The Point of No Return’: Should Robots Be Able to Decide to Kill You On Their Own?” Rolling Stone, 30 Apr. 2013.
Kreiger, H. “The Robot Revolution.” Advance, vol. 1, 2016.
Patrick, L. “Robots and Human Rights.” Technology, 2015, p. 2.
Scheffler, Samuel. Consequentialism and Its Critics. Oxford: Oxford University Press, 1988.
Steve R. “Robots need rights, and kill switches too.” ZDNet, 2017.