
US military begins research into moral, ethical robots, to stave off Skynet-like apocalypse

• www.extremetech.com

This multidisciplinary project will first try to pin down exactly what human morality is, and then try to devise computer algorithms that will imbue autonomous robots with moral competence — the ability to choose right from wrong. As we move steadily towards a military force that is populated by autonomous robots — mules, foot soldiers, drones — it is becoming increasingly important that we give these machines — these artificial intelligences — the ability to make the right decision.

As you can probably imagine, this is an incredibly difficult task. Scientifically speaking, we still don't know what morality in humans actually is — and so creating a digital morality in software is, for now, essentially impossible. To begin with, then, the research will use theoretical (philosophical) and empirical (experimental) research to try to isolate essential elements of human morality. These findings will then be extrapolated into a formal moral framework, which in turn can be implemented in software (probably some kind of deep neural network).

Assuming we get that far and can actually work out how humans decide right from wrong, the researchers will then take an advanced robot — something like Atlas or BigDog — and imbue its software with moral competence. One of the researchers involved in the project, Selmer Bringsjord at RPI, envisions these robots using a two-stage approach for picking right from wrong.  First the AI would perform a "lightning-quick ethical check" — simple stuff like "should I stop and help this wounded soldier?" Depending on the situation, the robot would then decide if deeper moral reasoning is required — for example, should the robot help the wounded soldier, or should it continue with its primary mission of delivering vital ammo and supplies to the front line where other soldiers are at risk?
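To make that two-stage idea a bit more concrete, here is a deliberately simplistic sketch in Python. Everything in it — the class name, the rules, the ten-minute cutoff — is my own illustrative assumption, not the actual ONR or RPI design, which hasn't been published in any detail.

```python
# Hypothetical sketch of the two-stage approach Bringsjord describes:
# a fast rule-based check, followed by deeper reasoning only when needed.
from dataclasses import dataclass


@dataclass
class Situation:
    wounded_soldier_nearby: bool
    mission_critical: bool          # e.g. delivering vital ammo to the front line
    estimated_delay_minutes: int


def quick_ethical_check(s: Situation) -> bool:
    """Stage 1: lightning-quick check ("should I stop and help?")."""
    return s.wounded_soldier_nearby


def deep_moral_reasoning(s: Situation) -> str:
    """Stage 2: weigh the competing obligations when the quick check fires."""
    # Crude illustrative heuristic: help if the detour is short or the
    # primary mission is not time-critical; otherwise keep going.
    if not s.mission_critical or s.estimated_delay_minutes <= 10:
        return "stop and help the wounded soldier"
    return "continue delivering supplies to the front line"


def decide(s: Situation) -> str:
    if quick_ethical_check(s):
        return deep_moral_reasoning(s)
    return "proceed with primary mission"


print(decide(Situation(wounded_soldier_nearby=True,
                       mission_critical=True,
                       estimated_delay_minutes=30)))
```

The point of the sketch is only the shape of the pipeline — cheap check first, expensive reasoning second — not the rules themselves, which are exactly the part the researchers still have to figure out.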

Eventually, of course, this moralistic AI framework will also have to deal with tricky topics like murder. Is it OK for a robot soldier to shoot at the enemy? What if the enemy is a child? Should an autonomous UAV blow up a bunch of terrorists? What if it's only 90% sure that they're terrorists, with a 10% chance that they're just innocent villagers? What would a human UAV pilot do in such a case — and will robots only have to match the moral and ethical competence of humans, or will they be held to a higher standard?
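Just to illustrate that probability question, here is an equally hypothetical snippet showing how a confidence threshold might gate such a decision. The 90% figure comes from the scenario above; the 95% threshold and the function itself are purely assumptions, not any existing doctrine or algorithm.

```python
# Illustrative only: gating a lethal decision on estimated confidence.
def engagement_permitted(p_hostile: float, threshold: float = 0.95) -> bool:
    """Allow engagement only if the estimated hostility probability clears the bar."""
    return p_hostile >= threshold


print(engagement_permitted(0.90))  # False — a 10% chance of innocent villagers fails this bar
print(engagement_permitted(0.97))  # True under this (hypothetical) threshold
```

Whatever number you plug in, someone still has to decide what that number should be — which is precisely the "higher standard" question.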

At this point, it seems all but certain that the US DoD will eventually break Asimov's Three Laws of Robotics — the first of which is "A robot may not injure a human being or, through inaction, allow a human being to come to harm." This isn't necessarily a bad thing, but it will open Pandora's box. On the one hand, it's probably a good idea to replace human soldiers with robots — but on the other, if the US can field an entirely robotic army, war as a diplomatic tool suddenly becomes a lot more palatable. The commencement of this ONR project means that we will very soon have to decide whether it's okay for a robot to take the life of a human — and honestly, I don't think anyone has the answer.
