The New York Times recently ran this interesting article on the future of artificial intelligence. Some people are starting (or continuing) to worry about what happens when, say, speech synthesis gets close enough to real human speech that bad guys start using the technology to cheat or otherwise victimize people. Oh, and autonomous killing machines. They're worried about those, too.
Here's an excerpt from the NYT article about a recent meeting on the subject organized by the Association for the Advancement of Artificial Intelligence at the Asilomar Conference Grounds in Monterey Bay, CA:
The A.A.A.I. report will try to assess the possibility of “the loss of human control of computer-based intelligences.” It will also grapple, Dr. Horvitz said, with socioeconomic, legal and ethical issues, as well as probable changes in human-computer relationships. How would it be, for example, to relate to a machine that is as intelligent as your spouse?
Dr. Horvitz said the panel was looking for ways to guide research so that technology improved society rather than moved it toward a technological catastrophe. Some research might, for instance, be conducted in a high-security laboratory.
The meeting on artificial intelligence could be pivotal to the future of the field. Paul Berg, who was the organizer of the 1975 Asilomar meeting and received a Nobel Prize for chemistry in 1980, said it was important for scientific communities to engage the public before alarm and opposition become unshakable.
“If you wait too long and the sides become entrenched like with G.M.O.,” he said, referring to genetically modified foods, “then it is very difficult. It’s too complex, and people talk right past each other.”