Roboethics: between science fiction and reality

Isaac Asimov was undoubtedly one of the conceptual fathers of modern robotics (far more in a “philosophical” than a technological sense); as a scientist and science fiction writer, he imagined the structure and workings of artificial organisms so advanced that they even possessed a consciousness.

Starting from this very “goal,” however, Asimov realized that his “creatures” could quickly turn into robots hostile to the very humanity that had created them. For this reason he formulated his famous Three Laws:

    1. A robot may not harm a human being nor, through inaction, allow a human being to come to harm.
    2. A robot must obey human orders as long as those orders do not contradict the First Law.
    3. A robot must protect its existence as long as this self-defense does not conflict with the First and Second Laws.

Without sacrificing their level of detail, these can be summed up in a single ethical “commandment”: robots are not instruments of war; they must not fight humans and must, indeed, help them even at the cost of their own integrity.

At this point, the question that occurs to me is: what kind of consciousness can an organism possess that must submit to such conditioning?

To try to answer this, a brief introduction is needed. In artificial intelligence and advanced robotics, two strands of research can be outlined, resting on two epistemologically opposed conceptions. The first holds that consciousness is not a prerogative of humans but arises from the biophysical activity of the nervous system (particularly the cerebral cortex), and that it can therefore, by appropriate means, be replicated (I belong to this “faction”). The second, on the contrary, attributes conscious thought only to humans and regards the behavior of any intelligent machine as the result of a well-designed program executed by a sufficiently powerful computer.

In analyzing Asimov’s Three Laws, I reached the point where I no longer knew where he stood: if one admits that consciousness (as such) is replicable, there is no point in defining guidelines that must not only serve as an evolutionary basis but also manifest themselves with such supremacy that they override every other behavior. The robot cannot disregard these rules and must therefore be programmed by humans so that every interaction with the environment is subordinate to them. But this is equivalent to stating that its consciousness is not autonomous, not capable of generating abstract thoughts that are “disconnected” from any preset pattern.
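To make concrete what such “subordination” would look like, here is a minimal sketch (entirely hypothetical, in Python, with invented names such as `Action`, `permitted`, and `choose`) of an architecture in which the Three Laws act as hard-coded filters layered above whatever decision process generates the robot’s candidate actions; nothing the “mind” proposes can survive unless the fixed rules allow it.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    """A hypothetical candidate action proposed by the robot's decision process."""
    description: str
    harms_human: bool              # would executing this action harm a person?
    allows_harm_by_inaction: bool  # would choosing it let a person come to harm?
    obeys_human_order: bool        # does it comply with an explicit human order?
    endangers_robot: bool          # does it put the robot itself at risk?

def permitted(a: Action) -> bool:
    """First Law as an absolute veto: no harm, not even through inaction."""
    return not (a.harms_human or a.allows_harm_by_inaction)

def choose(candidates: List[Action]) -> Optional[Action]:
    """Whatever 'mind' produced the candidates, the hard-coded laws rule last."""
    legal = [a for a in candidates if permitted(a)]  # First Law filter
    # Second Law (obedience to humans) outranks the Third (self-preservation):
    legal.sort(key=lambda a: (not a.obeys_human_order, a.endangers_robot))
    return legal[0] if legal else None
```

Every output is vetted by fixed predicates the robot can neither question nor rewrite, which is precisely why, as argued above, such a machine’s “consciousness” could never be called autonomous.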

It is therefore logical to conclude that Asimov’s science-fiction machines are nothing more than automatons, just like the ordinary robots used today for tasks particularly dangerous to human safety, such as demining or inspecting unsafe structures. Yet this idea, however natural, does not agree at all with the writer’s spectacular descriptions, which, in my opinion, reflected his desire to one day see “silicon men” who were ethically perfect but also capable of talking, laughing, feeling emotions and, why not, even falling in love!

A famous Italian maxim states, “You cannot have a full barrel and a drunk wife at the same time,” and it seems to me that it fits this problem perfectly: either one hopes for consciousness, or one rejects it and sticks to programs. If one opts for the second possibility, it is always possible to abide by the Three Laws, provided one does not build warlike “cyborgs” aimed at the destruction of humankind. But if one chooses the first, the choice should stem not from a mere stance but from a careful analysis of the achievements of artificial intelligence, neuroscience, and cognitive psychology, as well, of course, as progress in electronics; and one should then accept its most natural consequence: moral rules cannot be prescribed, but must emerge from the realization that adherence to them is the basis for the preservation of the species and the quality of life.

Moreover, an “emotional” robot should have some empathic connection with humans and with its fellows. If a simple program forced it to help a person in distress, it would perform that task unconsciously.

If, on the other hand, one assumes an artificial brain equipped with structures analogous to mirror neurons, one might expect that, after recognizing a dangerous situation, the robot would “virtually experience” it and make the most appropriate decision.

Ethics is an outcome of consciousness, not the other way around. If one wishes to apply the subject to robotic structures, one must therefore first accept that no engineer should ever dictate one action over another; at most, he may try to correct errors, but it must be the machine itself that assimilates the new rules after filtering them and adapting them to its internal representation of the environment.

On the other hand, the literal execution of the Three Laws is very often at odds with human morality itself: imagine that a robot witnesses an argument between two people, and at some point one of them pulls out a gun and threatens to kill the other. What should the robot do?

Ostensibly it should intervene to save the life of the unarmed man, but that by no means guarantees success: both could end up victims of the attacker who, feeling threatened, might shoot without even weighing the consequences.

A good negotiator would undoubtedly act differently. No program can evaluate every possible hypothesis in real time; only an empathic consciousness (precisely because it can rule out the grossly inappropriate options a priori) can make a bystander, whether human or artificial, understand that a few well-chosen words may be enough to disarm the man with the gun.

By this I do not mean that a highly evolved robot should not be a friend of man, or that its implicit “mission” is not peaceful coexistence. But it is essential to keep in mind that scientific research whose goals are subordinated to a set of ethical imperatives cannot aim to forge new creatures; at most, it can aspire to improve automata that are already fairly widespread, just as happens in the automotive or telecommunications fields.

Is it, therefore, worthwhile to discuss ethics for robotics? In my opinion, no. Let’s wait for science to run its course and, should we one day come across a “Terminator,” first run away and then, with a clear mind, discuss the problem and try to define all those rules that “the new children of man” must learn to abide by!


 
