FREEDOM FORUM: Discussion

Wired

For years, science fiction author Isaac Asimov’s Three Laws of Robotics were regarded as sufficient by robotics enthusiasts. The laws, as first laid out in the short story “Runaround,” are simple: a robot may not injure a human being or, through inaction, allow one to come to harm; a robot must obey orders given by human beings; and a robot must protect its own existence. Each law takes precedence over the ones that follow it, so under Asimov’s rules a robot cannot be ordered to kill a human, and it must obey orders even if doing so would result in its own destruction.

But as robots have become more sophisticated and more integrated into human lives, Asimov’s laws are too simplistic, says Chien Hsun Chen, coauthor of a paper published in the International Journal of Social Robotics last month. The paper has sparked a discussion among robotics experts who say it is time for humans to start working through these ethical dilemmas.
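To make the precedence ordering concrete, here is a minimal, purely illustrative Python sketch. The Action structure, the permitted function, and the inaction_harms_human flag are invented for this example; they come from no real robotics system and are not from the paper under discussion. The sketch only encodes the ordering described above, where an earlier law overrides the later ones.

# Hypothetical illustration only: Asimov's Three Laws as a
# precedence-ordered rule check. All names here are invented for
# this sketch; they do not come from any real robotics API.
from dataclasses import dataclass


@dataclass
class Action:
    harms_human: bool = False       # would this action injure a human?
    ordered_by_human: bool = False  # was it commanded by a human?
    destroys_robot: bool = False    # would it destroy the robot itself?


def permitted(action: Action, inaction_harms_human: bool = False) -> bool:
    """Return True if the action is allowed under the Three Laws.

    Laws are checked in order; an earlier law overrides the later ones.
    """
    # First Law: never injure a human, and do not allow harm through inaction.
    if action.harms_human:
        return False
    if inaction_harms_human:
        # Standing by would let a human come to harm, so acting is required,
        # even without an order and even at cost to the robot.
        return True

    # Second Law: obey human orders (already filtered by the First Law),
    # even if obeying would destroy the robot (overrides the Third Law).
    if action.ordered_by_human:
        return True

    # Third Law: otherwise, protect the robot's own existence.
    return not action.destroys_robot


if __name__ == "__main__":
    # A human orders the robot into a task that would destroy it:
    # the Second Law outranks the Third, so the action is permitted.
    print(permitted(Action(ordered_by_human=True, destroys_robot=True)))  # True
    # An order to injure a human is refused: the First Law outranks the Second.
    print(permitted(Action(harms_human=True, ordered_by_human=True)))     # False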


Comments in Response


Comment by PureTrust
Entered on:

Asimov's sci-fi was basically for reader pleasure. His writings show that he understood how complicated the underlying programming of robots would have to be. (There were really four laws: in "Foundation and Earth," the robot Daneel Olivaw identifies an even more basic law, which he calls the Zeroth Law.)

A robot's "understanding" of, and adherence to, any law is still only as good as its programming. Optical character recognition, for example, is still imperfect, and it covers only one small slice of near-AI: getting a computer program to recognize printed or written words.

We are still at the stage where it is safer to train people to use robots than to teach robots to work with us and our shortcomings. Because of this, robots must be limited in where they operate: used within fixed boundaries, and without the freedom of movement that people and animals have.

The real question about AI is whether we can examine ethics and morals deeply enough to understand them ourselves. And even if we did understand them, would we ever be smart enough to program them into a robot properly?

 
