“First we build the tools, then they build us.” – Marshall McLuhan

The following is an excerpt from my presentation at the 2014 Robotics Alley Conference: “The Intersection of Law and AI: Contemporary View.”

We can frame McLuhan’s observation as meaning that these “tools” (read: AI robots) will one day be capable of shaping our future. This is not a fantastical, pure sci-fi observation. We live in a time when technological advances are, as Ray Kurzweil pointed out, growing “exponentially.” Kurzweil also argues that we routinely underestimate the speed at which technological advances occur; that is, they arrive much faster than we expect.

These hyper-intelligent entities are well positioned to shape our future, and likely will. Of course, “shape” is a loaded term. It teases out questions such as: How much shaping? Will the new paradigm, the new epistemic reality, be looked back on as “good” or “bad”? Will we, the humans, even survive?

Considered from a legal system-building perspective, McLuhan’s statement might serve as a sobering observation. If it is true that these robots will indeed be able to shape our future, what are we going to do about it before that happens? (A side note is necessary here: it’s interesting to think about this situation in the context of being visited by an alien race. Here we are, employing a plethora of sophisticated technologies in a systematic search for alien life forms in the far reaches of space. And yet, all along, right here on earth, an alien “race” will likely be upon us in roughly 35 years, or sooner: the Singularity.)

Do we even know what sort of legal systems we will need to contain and control this evolution? Combining McLuhan’s observation with Kurzweil’s, we should at the very least consider the likelihood that we have less time than we think to structure the necessary legal systems. The statement should also be sobering because it could mean that we are vastly underestimating what these hyper-intelligent AI entities will be like. How will they behave? How will we interact with them? Can we control them? Can or will they control us? Will they “care” about us? Consequently, the legal systems we think we require today to deal with these AI entities may very well turn out to be insufficient.

A common theme that comes up in these discussions centers on AI rights. It’s a tempting topic. It can get emotional. Advocating for it seems to make sense. (Who didn’t feel a bit sorry for “Sonny,” the robot in the film I, Robot?) But I don’t think engaging in this rights discussion serves to advance the necessary legal-system discussion. To argue that AI will need or demand rights akin to those of humans is to engage in a mildly entertaining, pop-culture-fueled fantasy.

Humans care about rights. We care about the right to property, the right to be free from oppression, the right to freedom of expression, and so on. We identify these as core existential rights. These concepts have such a tight grip on our psyche that iterations of them are applied to animals (e.g., prohibitions on animal cruelty). And so it is that, in the AI-rights discussion, we can see these same principles permeating into the inanimate domain. But, again, I think it is important that those of us who study this subject not be distracted (at least not for long) by this inquiry.

We can trace the discussion of AI rights to contemporary, serious, groundbreaking thinkers. For instance, while the first two of Asimov’s Three Laws of Robotics are not concerned with allowing a robot to protect its own existence, the third one is. The first law forbids a robot from harming, or allowing harm to come to, a human; the second requires obedience to humans, subordinate to the first. Makes sense; can’t really argue with that. The self-protection element shows up in the third law, but it is subordinate to the first two: human safety takes precedence. Can’t argue with that last part either. What troubles me is that if Asimov was suggesting here that a robot would be concerned with its “survival,” he offered no convincing basis for that assertion. Perhaps he did not mean that at all. Maybe by “survival” something else was meant, and if so, there is more to investigate here. Perhaps “survival” is used to underscore a mission-centric notion, and that’s it. That is, Asimov is saying that the robot would be programmed not to abandon its mission unless continuing it would interfere with the first two laws. Maybe.

There is no reason to think that AI will need or “want” any of these human-centric rights. And if AI is endowed with any rights at all, then that should (emphasis on “should”) be because we deem it necessary, because we identify a utility in allowing it. We control that. It should not be a result of AI entities deciding for us and possibly shaping our existence in ways we will have zero control over.

***Postscript***

June 25, 2019: University of Edinburgh philosopher Andy Clark takes the view that “the most powerful forms of AI emerge when simulated AI agents are able to talk to each other as part of proper communities.” The “community” model can be regarded as part of an organizational continuum that features “capsule networks” and “hive” (e.g., UNTAME) models. A common theme in the latter two is that these types of AI applications manifest a high level of operational resiliency and capability, which suggests that Clark’s view has merit.