Moral machines

Google’s driverless cars are already street-legal in three states (California, Florida, and Nevada), and someday similar devices may not just be possible but mandatory.
 
Eventually, automated vehicles will be able to drive better, and more safely, than you can: no drinking, no distraction, better reflexes, and better awareness of other vehicles.
 
Within two or three decades the difference between automated driving and human driving will be so great that you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of your hurting yourself or another person will be far greater than if you allowed a machine to do the work. …
 
That moment will be significant not just because it will signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems, writes NYU psychology professor Gary Marcus in The New Yorker.
These issues may be even more pressing when it comes to military robots. When, if ever, might it be ethical to send robots in place of soldiers? …
 
As machines become faster, more intelligent, and more powerful, the need to endow them with a sense of morality becomes more and more urgent. “Ethical subroutines” may sound like science fiction, but once upon a time, so did self-driving cars.
 
Earlier this week, Human Rights Watch released a report titled “Losing Humanity: The Case Against Killer Robots.” It finds that fully autonomous weapons would not only be unable to meet legal standards but would also undermine essential non-legal safeguards for civilians.