When it comes to deciding to kill a human in a time of war, should a machine make that decision or should another human?
The question is a moral one, brought to the foreground by the techniques and incentives of modern technology. It is a question whose scope falls squarely under the auspices of international law, and one that nations have debated for years. Yet it is also a collective action problem, one that requires not just states, but also companies and the workers within those companies, to agree to forgo a perceived advantage. The danger lies not so much in making a weapon, but in making a weapon that can choose targets independently of the human responsible for initiating its action.