Problems with Asimov’s Three Laws of Robotics

September 24th, 2009

Isaac Asimov’s Three Laws of Robotics were the basis for many of his short stories and several novels. They state:

“
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
”

Well, why would people actually implement these!?

I mean, we do have lawyers doing things like this to cover people’s asses, and harm from robots is one of those things that needs covering. Potentially. But wouldn’t a military robot have a law that says something like:

“4. If there is any uncertainty, kill them all and let god sort them out. I’m just a fucking robot, all right?”

Space.com offers up an article this week giving Asimov’s three laws a reality check. Expect a lot of variations on this theme as robots move from fiction toward reality.

What do you think? Are we actually going to have any control over our creations, or are people operating inside or outside the law going to circumvent anything at all resembling Asimov’s very reasonable and rational vision?
