One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. Although initially limited to “other men,” the practice of ethics has developed in such a way that it continually challenges its own restrictions and comes to encompass previously excluded individuals and groups: foreigners, women, animals, and even the environment. Currently, we appear to be standing on the verge of another fundamental challenge to moral thinking. This challenge comes from the autonomous, intelligent machines of our own making, and it puts in question many deep-seated assumptions about who or what constitutes a legitimate moral subject. The way we address and respond to this challenge will have a profound effect on how we understand ourselves, our place in the world, and our responsibilities to the other entities we encounter here.
Take, for example, what is arguably the quintessential illustration of both the promise and peril of autonomous machine decision making: Stanley Kubrick’s 2001: A Space Odyssey (1968). In this popular science fiction film, the HAL 9000 computer endeavors to protect the integrity of a deep-space mission to Jupiter by ending the lives of the spacecraft’s human crew. In response to this action, the remaining human occupant of the spacecraft terminates HAL by shutting down the computer’s higher cognitive functions, effectively killing this artificially intelligent machine. The scenario obviously makes for compelling cinematic drama, but it also illustrates a number of intriguing and important philosophical problems: Can machines be held responsible for actions that affect human beings? What limitations, if any, should guide autonomous decision making by artificial intelligence systems, computers, or robots? Is it possible to program such mechanisms with an appropriate sense of right and wrong? What moral responsibilities would these machines have to us, and what responsibilities might we have to such ethically minded machines?
Although initially presented in science fiction, these questions are increasingly becoming science fact. Researchers working in the fields of artificial intelligence (AI), information and communication technology (ICT), and robotics are beginning to talk quite seriously about ethics. In particular, they are interested in what is now called the ethically programmed machine and the moral standing of artificial autonomous agents. In the past several years, for instance, there has been a noticeable increase in the number of dedicated conferences, symposia, and workshops with provocative titles like “Machine Ethics,” “EthicALife,” “AI, Ethics, and (Quasi)Human Rights,” and “Roboethics”; scholarly articles and books addressing the subject, such as Luciano Floridi’s “Information Ethics” (1999), J. Storrs Hall’s “Ethics for Machines” (2001), Anderson et al.’s “Toward Machine Ethics” (2004), and Wendell Wallach and Colin Allen’s Moral Machines (2009); and even publicly funded initiatives like South Korea’s Robot Ethics Charter, which is designed to anticipate potential problems with autonomous machines and to prevent human abuse of robots, and a reported effort by Japan’s Ministry of Economy, Trade and Industry to develop a code of behavior for robots, especially those employed in the elder care industry.
Before this new development in moral thinking advances too far, we should take the time to ask some fundamental philosophical questions. Namely, what kind of moral claim might such mechanisms have? What are the philosophical grounds for such a claim? And what would it mean to articulate and practice an ethics of this subject? These are the issues and concerns that constitute what I call “the machine question.” Responses to this question will, I believe, have a fundamental and transformative effect on both the current state and future possibilities of moral philosophy, altering not so much the rules of the game as who or what gets to participate.