I have always been entertained by movies like The Terminator and The Matrix, but I do think we should reflect more on the consequences - good and bad - of recent progress in robotics, artificial intelligence, machine learning, and computer vision. I'm always fascinated by the latest gadgets different militaries are using. Nowadays, many militaries around the world have begun to develop drones, ships, submarines, tanks, and even robotic troops with remarkable levels of autonomy and intelligence.
Many of us believe that the application of AI could potentially reduce civilian casualties and keep more troops out of harm's way. But it also raises the possibility of unintended consequences if we are not careful. In fact, just a few months ago the United Nations Secretary-General gave global recognition to these issues: "Weaponizing artificial intelligence is a growing concern. The possibility of weapons that can select and attack a target on their own raises multiple alarms... The likelihood of machines with the discretion and power to take human life is morally repugnant."

As I have been thinking about the positive aspects of AI, I have also been on a quest to learn more about the role of AI in autonomous weapons. A few weeks ago I read Army of None: Autonomous Weapons and the Future of War by Paul Scharre. What a fascinating read; I highly recommend it. The author is an engaging thinker, with both on-the-ground expertise and a very high-level, macro view. He was an Army Ranger who served four tours in Iraq and Afghanistan. He later moved to the Department of Defense, where he headed the group that outlined its policy on autonomous weapons. Now he is a policy expert at a think tank in DC. Not to mention that he is a prolific writer.

I agree with the book that autonomy has great benefits in circumstances or environments humans can't withstand or survive (such as flight under high G forces), or for an unmanned drone, tank, or sub that carries out a clear, limited mission with little communication back and forth with human controllers. There is no doubt in my mind that autonomous weapons could potentially help save civilian lives. Some robotics experts even argue that autonomous weapons could be programmed never to break the laws of war: robots wouldn't seek revenge; robots wouldn't get angry, scared, or emotional. Yes, they could theoretically take emotion out of the equation. They could kill when necessary and then turn the killing off in a flash.

The author also gives us fascinating real-life examples in which human judgment was essential for preventing needless killing, such as his own experience serving in Afghanistan: "A young girl headed out of the village and up our way, two goats in the trail. It appeared that she was herding goats, but she was spotting for Taliban fighters." He did not fire at the young Afghan girl. Yes, it would have been legal, but he argues that it would not have been morally right. Now imagine a robotic sniper following strict algorithms: it might well have opened fire the second it detected a radio in the young girl's hand.

The book ends by discussing the possibility of an international ban on fully autonomous weapons. But realistically speaking, I don't think this kind of absolute ban is likely to succeed. I do hope that our collective global wisdom can guide countries to ban specific uses of autonomous weapons, such as those that target individual people. I also think it is essential to establish non-binding regulations that could reduce the potential for autonomous systems to set each other off unintentionally. Maybe, as the book suggests, we could even update the international laws of war to include a universal principle of human involvement in lethal force. I realize these are difficult choices. These are life-and-death choices.
But I agree with the author that we must guard against becoming "seduced by the appeal of machines - their speed, their seeming perfection, their cold precision." The more I think about it, the more I conclude that we should not leave it up to military planners or the people writing software to determine where to draw the lines. I believe we must all get involved in this discussion.
Since November of 2017, I've been thinking a lot about Artificial Intelligence (AI). Thanks to our latest venture - ReadyAI! I think it is highly likely that AI will be the most influential driver of change in the 21st century. Just look around: it is already transforming our economy, our culture, our politics, our education, our behavior, and even our bodies and minds in incredible ways.
If we don't have a better understanding of the field of AI, we certainly cannot grasp the dilemmas we are facing. When science becomes short-sighted politics, scientific ignorance becomes a recipe for a major global disaster. That's why I also feel strongly about bringing AI education to every classroom.

Last week I read Life 3.0 by Max Tegmark. It was a fantastic read for me. The book does an excellent job of describing, in very basic terms, the key debates and common myths. I know we are concerned about harmful robots, but the book rightly emphasizes that the real problem is the unintended consequences of developing highly competent AI.

So before we talk more about AI, let's ask: what is AI? AI is intelligence demonstrated by machines, very much like Cozmo that we use in AI-in-a-Box. It includes a machine's ability to learn, perceive, solve various problems, and act on its surroundings.

I also find the concept of deep learning extremely fascinating. Much like our brains, deep learning algorithms learn by adjusting the weights of the connections between neurons. Unlike our brains, these are not wet neurons but simulated, so-called artificial neurons. The word "deep" in deep learning comes from the hidden layers of neurons placed between an input layer that receives information and an output layer that produces a behavior or response. These hidden layers hold abstract representations: rather than seeing a jumble of independent pixels through its camera eye, a deep learning network can be trained to recognize abstract concepts such as faces, cars, and animals (see the tiny code sketch below). The cool thing is that ReadyAI is teaching kids the basics of these really complex topics.

Machines have been able to beat us at mathematics and calculation for a while now, but they are still very much underdeveloped in language and conceptual thinking. There are also many different dimensions of intelligence. So when we talk about AI matching us across the board, we really mean what experts call AGI: artificial general intelligence, or AI that can handle the full range of tasks humans are good at.

Let me briefly tell you about superintelligence: intelligence far beyond that of humans, which might arise from a simpler AI. Think of it as an AI that is much better at building AI than we are. The AI it creates might produce a still more powerful AI, which would in turn create a yet more powerful one. At this rate, some have argued, AI's humble beginnings could quickly ignite an intelligence explosion.

Should we really worry about AI? For now, I don't have AI anxiety. I don't worry about killer robots taking to the streets, or powerful computers somehow becoming conscious or, even more mysteriously, becoming evil. As the book articulates from the start, robots, consciousness, and evil are not necessary conditions for concern. An AI connected to the internet would hardly need a robotic body to do real damage. Rather than dwelling on those fears, the book focuses the reasonable concern over AI on the so-called alignment problem: if a superintelligent AI has different goals or values than ours, our destruction may come not from evil, but from indifference. And after we give an AI its initial goals, it may naturally develop new ones. Consider our own kind: our evolutionary goal is to reproduce and spread our genes, yet, as the author points out, we humans often use birth control to subvert the goal given to us by natural evolution.
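Before going on, here is that promised sketch. To make the deep-learning idea concrete, I wrote a tiny network of my own in Python. It is purely illustrative - my own toy example, not something from the book or from our curriculum - with one hidden layer between the input and output layers. "Learning" here is nothing more than nudging connection weights until the network computes XOR, a classic function that a network without a hidden layer cannot learn.

```python
import numpy as np

# Toy feedforward network: 2 inputs -> 4 hidden neurons -> 1 output.
# The hidden layer sits between the input layer (which receives
# information) and the output layer (which produces the response).
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training data: XOR. The output is 1 only when the two inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

for _ in range(20000):
    # Forward pass: information flows input -> hidden -> output.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass (backpropagation): nudge every weight slightly
    # in the direction that shrinks the error. That is all "learning" is.
    d_out = (y - output) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 += 0.5 * hidden.T @ d_out
    b2 += 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 += 0.5 * X.T @ d_hid
    b1 += 0.5 * d_hid.sum(axis=0, keepdims=True)

print(output.round(2))  # should land close to [[0], [1], [1], [0]]
```

It's a crude sketch, of course, but the same mechanics, at a vastly larger scale, sit behind the face and object recognition I mentioned above.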
Now, back to goals. A superintelligence may develop new goals that trump those given to it by us humans. Maybe a superintelligent AI redesigns itself, soon finding goals and objectives that are unthinkable to us. That could be very nerve-racking... Such constant self-redesign is the meaning of Life 3.0, the title of the book. Evolution shapes the bodies and minds of Life 1.0, whereas Life 2.0 (cultural animals like us) experiences additional shaping of the mind through observation and learning. Beyond humanity, Life 3.0, or technological life, has the full ability to design its own hardware (body) and software (mind).

I know we are talking about a faraway future, but even if AI doesn't self-modify and take on new goals, we must be very careful about which goals and values we give it. Here is an example: an AI with the goal of eradicating cancer might find a simple solution: killing anyone prone to cancer.

I very much enjoyed reading Life 3.0. I will most likely read it again this year. The book starts with a scarily believable story about a superintelligent AI. From there the author tells us about the nonfictional Future of Life Institute, a think tank he co-founded, to begin a very interesting conversation about AI. The book gives us a breakdown of many possible trajectories AI might take. These include "gatekeeper" AI (a superintelligence that guards humanity against further superintelligences), "zookeeper" AI (a superintelligence that keeps humans as zoo animals), "enslaved god" AI (a superintelligence kept as a slave), and "benevolent dictator" AI (just what it sounds like). One of my favorite sections of the book is the final chapter's discussion of consciousness.

The book is right: never before has a narrative or conversation about something that could destroy us seemed so fascinating. But the truth is that AI is not entertainment, and this book is not entertainment either. Before human-level AI appears, we must all reflect on the values we want machines to have. Let's look around us. Self-driving cars are already facing ethical dilemmas that we don't know how to solve. Who should die in an unavoidable crash? The pedestrian who carelessly stepped into the street, or the driver, if the car avoids the pedestrian by swerving into an electric pole? Are we now faced with "philosophy with a deadline"?

I agree with the author that perhaps the best way to improve the future of life is to improve tomorrow. We can and do have the power to do so in many ways. We can vote at the ballot box and tell our politicians what we think about education, privacy, harmful autonomous weapons, technological unemployment, and a list of other issues and concerns. We can also vote every day through what we choose to buy, what news we choose to listen to or watch, what we choose to share, and what sort of person we choose to be. Do we want to be someone who disrupts every conversation by checking our phone, or someone who is empowered by using technology in an organized and thoughtful way? Do we want to own our technology, or do we want our technology to own us? What do we want it to mean to be human in the age of AI? This is not just a question but an important conversation, and surely a fascinating one. The more time I spend with ReadyAI, the more I realize the next generation are the guardians of the future of life as we shape the age of AI.
I am a big fan of AI and of ReadyAI because I believe our future isn't written in stone, just waiting to happen to us - it's ours to create. We must create an inspiring one together.