Comments

Pish.

My one question to all this is: Why would a robot in a factory be programmed to have a "mission" anyway? I don't get why a robot would need to be so zealously invested in its single-minded function that it would be willing to "kill" to carry it out. I don't know, but it seems to me the idea should have been "encounter obstacle, shut down" rather than accidentally slaughter someone in the way. Somehow the language of the original cite is a little overheated and WAY over the top: a "mission," "calculated" to "eliminate the threat." Come on, now. Really. Remember, this is a machine that tightens all the nuts on a panel or something. Programmed to kill? Are you kidding me? If it could conceive of the concept "kill," what's it doing on the shop floor anyway?

Now that I'm through laughing into my hand, I note that this happened, oh, a long way back. I'd think they might have developed some kind of legal work-around for it by now. I don't for a minute believe the shop-floor robot was given all that reasoning ability anyway. What for? Robotics was in the (excuse me) Iron Age back then, and it would have been way beyond anybody to build something that sophisticated that wasn't the size of a room. 1980s = punch cards, remember. This whole thing smacks of a Philosophy 101 project, and I hope they got an "F" for just being plain ridiculous.

I just think we're a long way off from being able to prosecute a robot unless it passes the Turing test. And even then, what's a fitting punishment, and could it possibly make a difference to something without a consciousness? And then there's the question of what kind of justice could conceivably be exacted upon an entity when we can't know whether any of it makes a difference to the robot. A punishment has to mean something, or else it is itself meaningless. And if we could determine what would be fitting, would you take the robot's word for it anyway? You sure you'd want to ask it?

I happen to believe that a nice warm 2-liter of Coke, shaken to a most deliciously fizzy destructiveness and emptied into the works, would end all that nonsense. Take THAT! Oops, sorry, I didn't mean to be insulting, Robby.
