First – a brief moment of silence for all the acts of violence occurring in the world.
Second – to focus on a close neighbor. I was sifting through a flood of WhatsApp messages when I learned of the Paris attacks and immediately began texting those I know in the area, along with friends who have family and friends there. Much like my vivid memory of the morning of 9/11 (walking to school that morning and consoling a friend who was bawling her eyes out because her mother couldn’t get a hold of her father, who had flown to New York the previous day and was supposed to be near the towers), these terrorist attacks and acts of war will remain on our minds for months and years to come.
How does A.I. fit in? Allied drone airstrikes targeting ISIS continue. Just last month, a self-piloting Black Hawk helicopter, unaided by humans, completed a successful, safe landing with tether-attached cargo. Incidentally, the technology came by way of a Lockheed acquisition.
This past August, thousands of scientists signed an open letter calling for a ban on lethal autonomous weapons controlled by A.I., stating specifically: “Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is—practically if not legally—feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”
Just because we can, does that mean we should? Will humanity one day fight wars with machines rather than human lives? Would AI in war reduce the total number of lives lost to violence, or increase it overall, with fewer humans IN the machines but more dying because of them? And how do we keep the tech from falling into the wrong hands – which, if history has proven *anything* at all, is impossible?