Recently posted to SSRN:
"The Criminal Liability of Artificial Intelligence Entities"
GABRIEL HALLEVY, Ono Academic College, Faculty of Law
In 1981, a 37-year-old Japanese employee of a motorcycle factory was killed by an artificial-intelligence robot working near him. The robot erroneously identified the employee as a threat to its mission, and calculated that the most efficient way to eliminate this threat was by pushing him into an adjacent operating machine. Using its very powerful hydraulic arm, the robot smashed the surprised worker into the operating machine, killing him instantly, and then resumed its duties with no one to interfere with its mission. Unfortunately, this is not science fiction, and the legal question is: Who is to be held liable for this killing?
Pish.
My one question to all this is: why would a factory robot be programmed to have a "mission" in the first place? I don't buy that a robot would need to be so zealously invested in its single-minded function that it would be willing to "kill" to carry it out. It seems to me the design rule should have been "encounter obstacle, shut down," not slaughter whoever accidentally gets in the way. The language of the original cite is more than a little overheated and WAY over the top: "mission," "calculated" to "eliminate the threat." Come on, now. Really. Remember, this is a machine that tightens all the nuts on a panel or something. Programmed to kill? Are you kidding me? If it could conceive of the concept "kill," what's it doing on the shop floor anyway?
Now that I'm through laughing into my hand, I note that this happened, oh, way back in 1981. I'd think they might have developed some kind of legal workaround for that by now. I don't for a minute believe the shop-floor robot was assigned all that reasoning ability anyway. What for? Robotics was in the (excuse me) Iron Age, and it would have been way beyond anybody to build something that sophisticated that wasn't the size of a room. The early 1980s were still the era of punch cards and room-sized mainframes, remember. This whole thing smacks of a Philosophy 101 project, and I hope they got an "F" for just being plain ridiculous.
I just think we're a long way off from being able to prosecute a robot unless it passes the Turing test. And even then, what's a fitting punishment, and could it possibly make a difference to something without a consciousness? There's no way to determine what kind of justice could conceivably be exacted on such an entity, because there's no way to know whether any of it matters to the robot. A punishment has to mean something, or it is itself meaningless. And even if you could figure out what would be fitting, would you take the robot's word for it anyway? You sure you'd want to ask it?
I happen to believe that a nice warm 2-liter of Coke, shaken to a most delicious fizzy destructiveness and emptied into the works, would end all that nonsense. Take THAT! Oops, sorry, I didn't mean to be insulting, Robby.
Posted by: Meridien | 04/23/2010 at 12:51 AM