Wednesday, March 7, 2007


Isaac Asimov is one of my favorite authors. If you're not familiar with him, he's one of the "big three" science fiction writers (along with Arthur C. Clarke and Robert Heinlein). Asimov is the Russian-born Jewish-American with the massive sideburns, who died back in the '90s. He's a fascinating guy, and one of my heroes. He wrote or edited over 500 books spanning every category of the Dewey Decimal System except philosophy, was a vice president of Mensa, and was an accomplished biochemistry professor.

One of the things he is best known for in the sci-fi community is his Robot series of stories, almost all of which seem to revolve around the same basic concept: his Three Laws of Robotics. Every robot in his fictional, futuristic universe was programmed with these laws, the purpose of which was to keep robots from running amok and harming humans. Here are his Three Laws:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Most of Asimov's Robot short stories explored the gray areas the laws leave undefined (such as: what if harming one human would save the rest of humanity?). For decades, as robots have become more advanced, I've heard again and again how roboticists have planned on implementing these laws.
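The interesting thing about the laws, from a programming standpoint, is that they're a strict priority ordering: a First Law concern overrides any order, and an order overrides self-preservation. Here's a toy sketch of that ordering in Python — purely illustrative, nothing any actual roboticist would ship; the consequence-prediction dictionary and function names are all my own invention:

```python
# Toy model: each candidate action is a dict of predicted consequences.
# This prediction step is the hard (hand-waved) part; here we just
# assume the robot already knows what each action would cause.

def law_violations(action):
    """Return which laws an action violates, as a tuple ordered by
    priority. Comparing these tuples lexicographically encodes the
    hierarchy: a First Law violation outweighs everything below it."""
    return (
        action["harms_human"] or action["allows_human_harm"],  # First Law
        action["disobeys_order"],                              # Second Law
        action["destroys_self"],                               # Third Law
    )

def choose_action(candidates):
    # min() with a tuple key picks the action whose highest-priority
    # violation is least severe; ties fall through to the lower laws.
    return min(candidates, key=law_violations)

# A human orders the robot to do something harmful:
candidates = [
    {"name": "obey", "harms_human": True, "allows_human_harm": False,
     "disobeys_order": False, "destroys_self": False},
    {"name": "refuse", "harms_human": False, "allows_human_harm": False,
     "disobeys_order": True, "destroys_self": False},
]
best = choose_action(candidates)
print(best["name"])  # the robot refuses rather than harm a human
```

Of course, Asimov's stories were all about why a scheme this tidy breaks down: the fun starts when the robot's predictions are wrong, or two choices both violate the First Law.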

In yet another example of fiction becoming reality, roboticists in South Korea, and soon in Europe, will use Asimov's Three Laws as a basis for real-life "robot ethics":

Last September, South Korea designed a gun-wielding robot to help guard the de-militarized zone. At the same time, that country's robot designers are preparing for a boom in senior citizens who need round-the-clock care – by robots. As these robots become "smarter" and able to blast you away within a fraction of a second of computing time, or inadvertently squeeze the life out of your all-important feeding tube, it's time to think about robotic "morals."

Asimov would be proud.

Fine. Go ahead and ruin my hopes for a world as chaotic and interesting as in RoboCop 2 or 2001: A Space Odyssey. Given the problems I've faced lately with my sh*tty work computers, I'm getting used to computers running amok. Simply freezing while I'm analyzing my data is too boring. I want it to threaten my frickin' life! Hey, you wanna piece of me, Mr. fussy Intel computer? Well bring it on! You haven't got the Three Laws! I've got a magnet with your name on it!

Now if only we could apply Asimov's Three Laws to humans….
