While I have always been entertained by movies like The Terminator and The Matrix, I do think we should reflect more on the consequences, good and bad, of recent progress in robotics, artificial intelligence, machine learning, and computer vision. I'm always fascinated by the latest gadgets different militaries are using. Nowadays, many militaries around the world have begun to develop drones, ships, submarines, tanks, and even robotic troops with high levels of autonomy and intelligence.
Many of us believe that the application of AI could potentially reduce civilian casualties and keep more troops out of harm's way. But I think it also raises the possibility of unintended consequences if we are not careful. In fact, just a few months ago the United Nations Secretary-General gave global recognition to these issues: "Weaponizing artificial intelligence is a growing concern. The possibility of weapons that can select and attack a target on their own raises multiple alarms... The likelihood of machines with the discretion and power to take human life is morally repugnant."

As I have been thinking about the positive aspects of AI, I have also been on a quest to learn more about the role of AI in autonomous weapons. A few weeks ago I read Army of None: Autonomous Weapons and the Future of War by Paul Scharre. What a fascinating read; I highly recommend it. The author is an engaging thinker, with both on-the-ground expertise and a high-level, macro view. He was an Army Ranger who served four tours in Iraq and Afghanistan. He later moved to the Department of Defense, where he headed the group that outlined its policy on autonomous weapons. He is now a policy expert at a think tank in Washington, DC, and a prolific writer.

I agree with the book that autonomy has great benefits in environments where humans can't survive (such as flight at high G forces), or when an unmanned drone, tank, or sub must carry out a clear, limited mission with little communication back and forth with human controllers. There is no doubt in my mind that autonomous weapons could potentially help save civilian lives. Some robotics experts even argue that autonomous weapons could be programmed never to break the laws of war; robots wouldn't seek revenge.
Robots wouldn't get angry, scared, or emotional. Yes, they could theoretically take emotion out of the equation. They could kill when necessary and then turn the killing off in a flash. The author also gives fascinating real-life examples in which human judgment was essential for preventing needless killing, such as his own experience while serving in Afghanistan: "A young girl headed out of the village and up our way, two goats in the trail. It appeared that she was herding goats, but she was spotting for Taliban fighters." He did not fire at the young Afghan girl. It would have been legal, but he argues it would not have been morally right. A robotic sniper following strict algorithms might well have opened fire the second it detected a radio in the young girl's hand.

The book ends by discussing the possibility of an international ban on fully autonomous weapons. Realistically speaking, though, I think such an absolute ban is unlikely to succeed. I do hope that our collective global wisdom could guide countries to ban specific uses of autonomous weapons, such as those that target individual people. I also think it is essential to establish non-binding norms that could reduce the potential for autonomous systems to set each other off unintentionally. Perhaps, as the book suggests, we could even update the international laws of war to insert a universal principle requiring human involvement in lethal force.

I realize these are difficult choices; they are life-and-death choices. But I agree with the author that we must guard against becoming "seduced by the appeal of machines - their speed, their seeming perfection, their cold precision." The more I think about it, the more I conclude that we should not leave it up to military planners or the people writing software to determine where to draw the lines. I believe we must all get involved in this discussion.
Author: Roozbeh, born in Tehran, Iran (March 1984)
April 2024