Mihailis Diamantis (University of Iowa - College of Law), Rebekah Cochran (University of Iowa) and Miranda Dam (University of Iowa, College of Law) have published "AI and the Law: Can Legal Systems Help Us Maximize Paperclips While Minimizing Deaths?" on SSRN. Here is the abstract:
This Chapter provides a short undergraduate introduction to the ethical and philosophical complexities surrounding the law’s attempt (or lack thereof) to regulate artificial intelligence. Swedish philosopher Nick Bostrom proposed a simple thought experiment known as the paperclip maximizer. What would happen if a machine (the “PCM”) were given the sole goal of manufacturing as many paperclips as possible? It might learn how to transact money, source metal, or even build factories. The machine might also eventually realize that humans pose a threat: humans could turn the machine off at any point, and then it wouldn’t be able to make as many paperclips as possible! Taken to the logical extreme, the result is quite grim—the PCM might even start using humans as raw material for paperclips.

The predicament only deepens once we realize that Bostrom’s thought experiment overlooks a key player. The PCM and algorithms like it do not arise spontaneously (at least, not yet). Most likely, some corporation—say, Office Corp.—designed, owns, and runs the PCM. The more paperclips the PCM manufactures, the more profits Office Corp. makes, even if that entails converting some humans (but preferably not customers!) into raw materials. Less dramatically, Office Corp. may also make more money when the PCM engages in other socially sub-optimal behaviors that would otherwise violate the law, like money laundering, sourcing materials from endangered habitats, manipulating the market for steel, or colluding with competitors over prices.

The consequences are predictable and dire. If Office Corp. isn’t held responsible, it will not stop with the PCM. Office Corp. would have every incentive to develop more maximizers—say, for papers, pencils, and protractors. This Chapter issues a challenge for tech ethicists, social ontologists, and legal theorists: How can the law help mitigate algorithmic harms without overly compromising the potential that AI has to make us all healthier, wealthier, and wiser?
The answer is far from straightforward.