Could an Ethically Correct AI Shut Down Gun Violence?

The Next Web writes:
A trio of computer scientists from the Rensselaer Polytechnic Institute in New York recently published research detailing a potential AI intervention for murder: an ethical lockout. The big idea here is to stop mass shootings and other ethically incorrect uses for firearms through the development of an AI that can recognize intent, judge whether it’s ethical use, and ultimately render a firearm inert if a user tries to ready it for improper fire…

Clearly the contribution here isn’t the development of a smart gun, but the creation of an ethically correct AI. If criminals won’t put the AI on their guns, or they continue to use dumb weapons, the AI can still be effective when installed in other sensors. It could, hypothetically, be used to perform any number of functions once it determines violent human intent. It could lock doors, stop elevators, alert authorities, change traffic light patterns, text location-based alerts, and take any number of other reactive measures, including unlocking law enforcement and security personnel’s weapons for defense…
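The excerpt describes a simple control flow: recognize intent, judge whether it is ethical, then trigger reactive measures. A purely illustrative sketch of that flow might look like the following; note that every name here (`Observation`, `Intent`, the `threat_score` field, the 0.8 threshold, the list of response actions) is invented for illustration, since the research describes no concrete implementation, and the intent-recognition step is reduced to a placeholder rule:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Intent(Enum):
    # Hypothetical two-label taxonomy; the paper specifies none.
    BENIGN = auto()
    VIOLENT = auto()

@dataclass
class Observation:
    """Stand-in for whatever sensor data such an AI would consume."""
    weapon_readied: bool   # is the user trying to ready a firearm?
    threat_score: float    # assumed output of an intent model, in [0, 1]

def classify_intent(obs: Observation, threshold: float = 0.8) -> Intent:
    # Placeholder for the genuinely hard part: a model that
    # recognizes violent intent from sensor input.
    if obs.weapon_readied and obs.threat_score >= threshold:
        return Intent.VIOLENT
    return Intent.BENIGN

def respond(obs: Observation) -> list[str]:
    """Map judged intent to the reactive measures the excerpt lists."""
    if classify_intent(obs) is Intent.VIOLENT:
        return [
            "render_firearm_inert",
            "lock_doors",
            "alert_authorities",
            "send_location_alerts",
        ]
    return []  # benign intent: take no action

print(respond(Observation(weapon_readied=True, threat_score=0.95)))
```

The hard research problem is entirely inside `classify_intent`; the dispatch logic around it is trivial by comparison, which is roughly the excerpt's point.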

Realistically, it takes a leap of faith to assume an ethical AI can be made to understand the difference between situations such as home invasion and domestic violence, but the groundwork is already there. Driverless cars offer a parallel: we know people have already died because they relied on an AI to protect them. But we also know the potential to save tens of thousands of lives is too great to ignore in the face of what has, so far, been a relatively small number of accidental fatalities…