In my last post, I raised the admittedly speculative possibility that advances in artificial intelligence will lead the law to concretize (by which I mean that it will become more clearly expressed and more transparently applied). I gave the example of autonomous cars, which may lead manufacturers to push for more concretized speed limits (unlike the ones we have now, where you are unlikely to be ticketed for driving a little above the posted limit). Superficial appearances aside, actual speed limits are neither clearly expressed nor transparently applied.
Let me offer two more reasons why the law may concretize. First, the law may become more concrete as computers play a larger role in making legally relevant decisions. For example, a group of German researchers is working to develop a computer system “to make automatic decisions on child benefit claims to the country’s Federal Employment Agency . . . probably with some human auditing of its decisions behind the scenes” and is in talks with the agency about how to deploy it. One researcher “hopes that one day, new laws will be drafted with machines in mind from the start, so that each is built as a structured database containing all of the law’s concepts, and information on how the concepts relate to one another.” In other words, when legally relevant tasks are performed by computers, legislation may itself be crafted more algorithmically to facilitate processing. That is a kind of concretization, although whether such laws are clearer than current laws may be a matter of taste (and of whether you’re a human or a computer).
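To make the idea a bit more tangible, here is a minimal sketch, in Python, of what a “machine-consumable” statute might look like: an eligibility rule expressed as structured data over named legal concepts, plus a small evaluator. Everything in it is invented for illustration; the concepts, thresholds, and the assumption that eligibility reduces to three conditions are mine, not a description of German law or of the researchers’ actual system.

```python
# A purely hypothetical sketch of "machine-consumable" legislation: a benefit
# rule expressed as structured data plus a small evaluator. The concepts and
# thresholds are invented and do not reflect any actual statute or system.

from dataclasses import dataclass


@dataclass
class Claimant:
    """Facts about a claimant, recorded as structured data rather than free text."""
    number_of_children: int
    annual_income_eur: float
    resident: bool


# The "statute" as data: each condition names a legal concept and states how it
# must relate to the claimant's facts.
CHILD_BENEFIT_RULE = {
    "name": "child_benefit_eligibility",
    "conditions": [
        {"concept": "number_of_children", "operator": ">=", "value": 1},
        {"concept": "annual_income_eur", "operator": "<=", "value": 75_000},
        {"concept": "resident", "operator": "==", "value": True},
    ],
}

OPERATORS = {
    ">=": lambda a, b: a >= b,
    "<=": lambda a, b: a <= b,
    "==": lambda a, b: a == b,
}


def evaluate(rule: dict, claimant: Claimant) -> bool:
    """Return True if every condition in the rule is satisfied by the claimant's facts."""
    return all(
        OPERATORS[c["operator"]](getattr(claimant, c["concept"]), c["value"])
        for c in rule["conditions"]
    )


if __name__ == "__main__":
    applicant = Claimant(number_of_children=2, annual_income_eur=48_000, resident=True)
    print(evaluate(CHILD_BENEFIT_RULE, applicant))  # True under these invented thresholds
```

The point of the sketch is simply that drafting a rule in this form forces the drafter to say exactly which concepts matter and how they combine, which is one version of what I mean by concretization.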
Second, the law might concretize because new technology creates greater pressure to clarify the law’s theoretical underpinnings. For example, many copyright holders already use automated software systems to scan the Internet looking for copyright violations. Some users make constitutionally protected “fair use” of others’ copyrighted material, but it is difficult to know precisely what constitutes fair use. Before the Internet age, copying audio, visual, and written materials was more difficult, so there was less need to police violations; it was also more expensive to police each violation when one could not simply search for infringing uses online. Thus, fair use determinations were made less frequently. In the Internet age, such determinations are made far more frequently, and there is more political pressure to understand the principles underlying fair-use doctrine in order to make the law more concrete.
In the future, such pressures may apply to some of the most central questions facing moral and legal philosophers. Consider the tricky theoretical issues that underlie the famous trolley thought experiments: A runaway trolley is heading toward five entirely innocent people who are, for some reason, strapped to the trolley tracks. If the trolley continues along its current path, all five will die. You can flip a switch to divert the trolley onto an alternate track, but doing so will kill one innocent person strapped to that track.
This trolley problem and its numerous variations raise interesting questions about when it is mandatory or permissible to take an action that will save several lives when that action will also knowingly cause the death of some smaller number of people. There is no consensus solution to all trolley problems. Nevertheless, autonomous agents, especially unmanned military drones, will likely be confronted with real-life trolley problems. We will want these entities to follow rules of some sort, but we cannot program those rules unless we agree on what they should be. It is possible that we will have different rules for humans and nonhumans, but we will at least have to codify some rules for autonomous machines, and doing so will require more theoretical clarity and agreement than we have today.
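To see why codification forces the theoretical issue, here is a deliberately crude sketch, again in Python and again entirely hypothetical, of a decision routine for the switch case. The scenario fields and policy names are my own inventions; the sketch’s only point is that the program cannot run until someone chooses among the very principles that remain in dispute.

```python
# A deliberately oversimplified, hypothetical sketch: encoding any trolley rule
# forces a concrete choice among the principles philosophers still dispute.
# The scenario fields and policy names are invented for illustration.

from dataclasses import dataclass


@dataclass
class Scenario:
    deaths_if_no_action: int   # people killed if the machine does nothing
    deaths_if_divert: int      # people killed if the machine actively diverts


def should_divert(scenario: Scenario, policy: str) -> bool:
    """Decide whether to divert under an explicitly chosen moral policy."""
    if policy == "minimize_deaths":
        # A crude consequentialist rule: divert whenever it saves lives on net.
        return scenario.deaths_if_divert < scenario.deaths_if_no_action
    if policy == "never_actively_kill":
        # A crude deontological rule: never take an action that itself kills someone.
        return scenario.deaths_if_divert == 0
    raise ValueError(f"No agreed-upon policy named {policy!r}")


if __name__ == "__main__":
    classic = Scenario(deaths_if_no_action=5, deaths_if_divert=1)
    print(should_divert(classic, "minimize_deaths"))      # True
    print(should_divert(classic, "never_actively_kill"))  # False
```

Whatever we think of either policy, an autonomous machine must commit to one of them (or to some more refined alternative) in advance, and that commitment is precisely the kind of agreement we currently lack.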
Of course, humans already face trolley-like situations from time to time, and we still do not have clear rules to follow. The difference, however, is that after humans have confronted an emergency situation, there is usually quite a bit of uncertainty about what they knew and when they knew it. With autonomous entities, we will know much more precisely what information the entity had available to it and how that information was processed. Indeed, we will typically have video footage of the pertinent events, along with all of the other data available to the entity. Being clear about the rules is more important when we can no longer hide behind ambiguous facts.
(This post is adapted from "Will There Be a Neurolaw Revolution?" and originally appeared at Prawfsblawg.)