Cybernetic Existentialism

This article stems from a dialogue with a good friend of mine, a psychologist, who, with great insight and critical spirit, highlighted some fundamental aspects of modern artificial intelligence by comparing them with the cornerstones on which rests the very substratum of life from which this discipline inevitably takes its cue. In particular, he pointed out that while humankind is driven by a powerful instinct of self-preservation (to be ascribed not to the long list of more or less evolved drives but to the phylogenetic basis of existence itself), an intelligent machine, however well designed, would have no valid reason to set itself the same goal.

I immediately thought of the inevitable failures that electronic or mechanical components suffer over time. So, at first glance, I replied that the preservation of the species (understood as a group with similar characteristics) would still be necessary to avoid the gradual destruction of its member elements; however, reflecting more calmly, it seems evident to me that the problem of failure and its resolution is far from a condition sufficient to allow one to speak of a "preservation instinct." The reason is very simple, and the explanation can only be inspired by human reality: if I fracture my arm, a long series of endogenous stimuli, among which pain certainly stands out, signals that a harmful and dangerous condition has occurred in my body and that I must immediately remedy it.

Any reasonable person would be "forced" by this state of affairs to go to the hospital for the necessary treatment. At this point, it seems clear that there is no reason a machine cannot do the same; indeed, nowadays it is rare to find electronic or mechanical systems that do not include a fault self-diagnosis scheme, and it is not at all unrealistic to imagine machines whose behavior rests on adaptive controls able to make the best choices based on several internal and environmental variables. In short, automatic diagnosis and repair of failures is routine in almost every field of engineering, yet no one dares claim that their computer, when it reports an excessive processor temperature, is somehow flaunting an irrepressible desire for progeny; if anything, it is concerned (more or less intentionally) with safeguarding the integrity of its vital structures, avoiding at most the economic inconvenience of a repair.
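The self-diagnosis schemes mentioned above reduce, in the simplest case, to a threshold check: crossing the limit merely produces a repair request, with no reproductive overtone whatsoever. A minimal sketch in Python (the function name and threshold value are hypothetical illustrations):

```python
# Minimal fault self-diagnosis sketch: a sensor reading is compared against
# a safety threshold; crossing it merely flags a repair, nothing more.
# The name and the 90 C limit are hypothetical illustrations.

def diagnose(cpu_temp_c: float, max_temp_c: float = 90.0) -> str:
    """Return a repair request when the reading exceeds the threshold."""
    if cpu_temp_c > max_temp_c:
        return "FAULT: overheating, schedule repair"
    return "OK"
```

Nothing in such a loop expresses, or needs to express, any interest in progeny; it only safeguards the integrity of the machine's structures.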

For man, the situation is undoubtedly different: he does not see reproduction as a means of self-repair (which, moreover, would be absurd) but as a necessary condition of existence that only in retrospect can we define in macroscopic terms; in fact, the concept of mating for reproductive purposes is not inherent in the social policy of a community but is inevitably found in every single member of it, almost as if it were innate cultural baggage.

Of course, in saying this, I do not wish the reader to think that I am advocating the thesis of innatism too lightly; in fact, I am convinced that the awareness of being able to generate a human being arises first and foremost from knowledge, more or less profound, of copulation. Therefore, in the final analysis, it is essential that each member of a group be able, first of all, to distinguish the members compatible with mating from the others.

Unless we consider the paradoxical situation of general hermaphroditism, it seems evident that the individual can only gain this awareness if placed within an appropriate context. From my point of view as a designer of intelligent machines, the need for the continuation of the species is certainly not a factor of primary importance. Still, from the perspective of artificial consciousness, it is interesting to analyze what requirements a machine would need in order to openly manifest a desire for progeny.

First of all, as I have already said, I regard this tendency, albeit individual, as an emergent property of a socially formed group. In other words, it is almost impossible to assess an individual's degree of interest in reproduction unless one contextualizes that individual's existence. While trivial, this thesis highlights the need to observe reality as a whole that includes the observer as an integral part, and it therefore shifts the point of view from pure psychology to a more general sociology. By taking it for granted that it is the species that wants to continue its existence, it also destroys the myth of a superman capable of perfectly representing the macrocosm in which he stands. Of course, this does not mean that the individual acquires the ability to reproduce from the group, but rather that this peculiarity is "awakened" by the continuous interaction among community members.

For these reasons alone, I have allowed myself to treat the problem as if the nature of the active agent, man or machine, did not matter much: it is an agent if and only if other homologs are compatible with it and aware of mutual existence. It is worth pointing out, however, that once such a process has taken place, singular identity loses some of its constitutive value, lest the community lose the compactness it needs in order not to break down into smaller and smaller subgroups and eventually reach extinction; this is another reason why it is much more convenient to describe the preservation instinct as an emergent property of a system, in an attempt to understand what local and global factors can influence it.

A machine, taken individually, has a minimal existence: it can operate as prescribed by design algorithms, or it can evolve quite randomly, giving rise to a temporal dynamic that is initially unknown and definable only in probabilistic terms; in any case, it could never cross the threshold that separates individuality from the awareness of belonging to any context.

Therefore, an isolated intelligent system can only potentially be capable of consciousness. Still, lacking the wide range of exogenous stimuli characteristic of humans, it will “live” its life with the inherent awareness of its uniqueness.

It will be an atom in a universe with no force acting between such particles. From a purely existential point of view, it will therefore have every right to deem itself the universe, unconsciously foreclosing any possibility of experiencing different and more far-reaching realities. The isolated machine, then, cannot exhibit reproductive behavior for conservation purposes; but what happens when a context is created in which multiple intelligent agents are present? To answer this question, we need a little virtual experiment: suppose we create a three-dimensional arena in which several robots are free to move and interact; for example, one of them could ask the others where a certain object is located and receive a response from whichever peer first located the target.
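The query-and-answer interaction of the thought experiment can be sketched in a few lines. Everything here is a toy illustration of the arena described above; the class, field, and object names are hypothetical, not an actual framework:

```python
from dataclasses import dataclass, field

# Toy version of the arena experiment: a robot broadcasts a query and the
# first peer that has located the target object answers with its position.
# All names here are hypothetical illustrations.

@dataclass
class Robot:
    name: str
    known_objects: dict = field(default_factory=dict)  # object -> (x, y, z)

    def ask(self, peers, target):
        """Query the other robots in the arena for the target's position."""
        for peer in peers:
            if target in peer.known_objects:
                return peer.name, peer.known_objects[target]
        return None  # no peer has seen the target yet

r1, r2, r3 = Robot("R1"), Robot("R2"), Robot("R3")
r3.known_objects["battery"] = (4.0, 1.5, 0.0)
answer = r1.ask([r2, r3], "battery")  # R3 replies with the position
```

The point is not the protocol itself but that each agent is perceptually active and capable of communication, which is all the experiment requires.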

The type of interaction does not matter; what matters is that each robot is perceptually active and capable of communicating according to some protocol. Let us further assume that each system incorporates a control device that continuously monitors the robot's "vital" functions and signals in time when any part of it is close to failure. In this way, we are assuming that the individual agent is designed to be aware both of its limitations and of the damage its structures may suffer; we have thus unwittingly imposed the condition that each member of the small community possesses an existential consciousness that leads it to act with those inherent limitations in mind.

From a design point of view, it is also possible (and desirable) for a "sick" robot to take all necessary emergency measures so that its damage can be repaired, and this again confirms the intentionality of the agent's behavior: it wants to continue its life and, in a sense, "fears" its termination. Paradoxical as this may seem, it must be kept in mind that there is no metaphysical justification for the desire to live: every person seeks self-preservation and fears death for purely cultural reasons. It is therefore not absurd to think of programming a robot to desire life, just as it is perfectly normal to teach a child not to take certain risks because they might cause serious injury; what is truly important, instead, is the eventual awareness inherent in the transition from a general state of life to its logical opposite.

The instinct for the continuation of the species takes shape from this very factor and develops on the basis of considerations belonging to the social sphere. One of them is the general usefulness of the function performed by each member: a sense of belonging that arises from synergistic relationships, that is, from the desire to maintain one's presence according to the value of the individual and his works.

Underlying everything is the fundamental concept of oneness, the impossibility of replacing oneself through cloning: the living energy that feeds the deepest of drives, the preservation of the self. However, as is clear to anyone, this craving is constantly thwarted by the conscious perception of the structural and functional limits of the substrate that supports all conscious activity. A struggle is thus created between wanting and not being able, one that can only ever tend toward the second contender. Thanks to rationality, every person realizes that a transition must take place sooner or later, and that this moment will be unique, unrepeatable, and, above all, irreversible; when this happens, reason unveils its most fearsome weapon against all forms of limitation: reproduction.

Therefore, we have three distinct phases: 1) self-preservation, 2) the observation of the natural decay of cells, and 3) the overpowering of the latter through the procreation of new members. The reader should note that all three parts of the process are necessary, since not even the recourse to emergentism mentioned earlier would otherwise be explicable: only within a community is the transition from the second to the third phase feasible. It is not, however, necessary that each thinking entity explicitly endorse the triad. At first glance this may seem like nonsense, but if we analyze a city's demographic trends and simultaneously catalog personal ideas about mating, we quickly discover that while the average population remains almost constant (in the face of normal fluctuations), a great many people have not the slightest desire to procreate or, at the very least, do not regard such an event as primary and fundamental to their very existence!

Let us now move into the realm of machines and resume our virtual experiment: as stated, the only way to verify the presence of a true preservation instinct is to assess the degree to which each robot can be aware of the above triad. Automatic fault diagnosis systems surely ensure the first point; therefore, we can be sure that the "robotic self" is safeguarded consistently and efficiently enough. The second point is perhaps more critical, but again, the problem can be circumvented by designing a device that evaluates the health of components based on what is known as MTBF, or Mean Time Between Failures. This parameter is characteristic of every human artifact, although only proper process engineering allows its accurate estimation: every light bulb possesses an MTBF, whereas it is hard to calculate one for a chair or a bath mat. Still, any object undergoes deterioration, so it is always potentially possible to arrive at an estimate of the mean life of each element.
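The hypothetical component-evaluation device amounts to bookkeeping: record how long each failed unit lasted and take the average as the MTBF estimate. A minimal sketch (function name and figures are illustrative assumptions):

```python
# Sketch of the hypothetical component-evaluation device: it records how
# long each replaced unit lasted (in hours) and estimates the MTBF
# (Mean Time Between Failures) as the average observed lifetime.

def estimate_mtbf(observed_lifetimes_h):
    """Average lifetime of failed components, in hours."""
    if not observed_lifetimes_h:
        raise ValueError("no failures recorded yet; MTBF undefined")
    return sum(observed_lifetimes_h) / len(observed_lifetimes_h)

mtbf = estimate_mtbf([900.0, 1100.0, 1000.0])  # -> 1000.0
```

The estimate becomes meaningful only after enough failures have been observed, which is precisely why process engineering, with its controlled test populations, is needed for accurate figures.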

In the human case, the matter is much simpler in that there are several organizations, both national and international, that periodically calculate the value of the human MTBF, and its dissemination is so widespread that each person very often becomes aware of his or her age precisely by relating it to the average value prescribed by statistical tables…

It is by no means accurate that at age 75 a man is about to die. Still, it is undoubtedly true that, on average, the number of deaths in the 70-80 age group of a population is a significantly higher percentage than in any other. By this, I mean that the second point of the triad is influenced both by endogenous factors (mainly the onset of senile degenerative diseases) and by the cultural diffusion of information that emerges only at the community level and is difficult to obtain through local analysis. Once again, emergentism seems to hold sway, which may cast what has been said about machines in a bad light. However, the fundamental difference between humans and artificial systems lies precisely in the ability to self-assess the state of one's components: a well-designed machine, possibly built in a multi-modular fashion, could in principle count its active units and compare that number with the number of its now unusable counterparts; based on this observation, the machine can make a sufficient number of estimates and arrive at an individual MTBF.

If we then consider homogeneity factors (same components, same environment, same causes of wear and tear) and invoke, with some license, the central limit theorem, we can say that the MTBF estimate is distributed according to a Gaussian with a well-defined mean and variance: the same parameters that lead ISTAT (the Italian statistical agency) or any other statistical agency to define the age groups at greatest risk of death.
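This use of the central limit theorem can be demonstrated with a small simulation. Assuming (hypothetically) exponentially distributed component lifetimes with a true mean of 1000 hours, the MTBF estimates computed over many homogeneous cohorts cluster in an approximately Gaussian fashion, and their mean and standard deviation define the same kind of risk band a statistical agency draws around age groups:

```python
import random
import statistics

random.seed(0)  # reproducible sketch

# Hypothetical homogeneous components: exponentially distributed lifetimes
# with a true mean of 1000 hours.
TRUE_MEAN_H = 1000.0

def sample_mtbf(n_units: int) -> float:
    """MTBF estimate from one cohort of n_units identical components."""
    return statistics.mean(
        random.expovariate(1.0 / TRUE_MEAN_H) for _ in range(n_units)
    )

# Many cohorts -> the estimates cluster around the true mean and, by the
# central limit theorem, are approximately Gaussian.
estimates = [sample_mtbf(200) for _ in range(500)]
mu = statistics.mean(estimates)
sigma = statistics.stdev(estimates)

# Roughly 95% of cohort estimates fall within mu +/- 2*sigma: the band a
# statistical agency would use to flag the highest-risk groups.
inside = sum(mu - 2 * sigma <= e <= mu + 2 * sigma for e in estimates) / len(estimates)
```

The individual lifetimes are heavily skewed (exponential), yet their cohort averages are nearly symmetric around 1000 hours, which is exactly the "license" the theorem grants.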

Having clarified this point, we come to the most crucial issue: the culmination of the triad, reproduction for conservation purposes. We have said that the human drive toward procreation arises from factors generally related to the person and his or her works. In a sense, we could say that the (innate) desire for indirect continuation is the final compromise of the triad. Therefore, its existential scope is the real key to the entire life process of an organism.

In our arena full of robots that live by interacting with each other and the environment and possibly even completing several particular tasks, is there such a key?

To answer this, we must assume the position of the programmer mentally simulating the behavior of the artificial organisms. Suppose robot No. 1 is engaged in some task and suddenly finds that the servomechanisms controlling its locomotion have failed; it is then forced to stop and seek help. In the worst case, the mechanical damage could have been caused by a short circuit in the electronic systems, which in turn could have been irreparably damaged; suppose, however, that a small part of the modules is still active, and it is precisely this that produces an internal condition we might call "agony." Can the machine prefigure such a state? The transition from functioning to failure is necessarily binary: there will always exist an instant before which the robot is still, even minimally, functioning, and after which all its subsystems are de-powered and unable to perform any function. The succession of internal states must therefore end, and the transition from the last active state to the total absence of states will be perfectly equal to the transition between any two other previous states. In other words, the robot can never be aware of "passing away": it will continually assess its condition, however desperate, as a general fault to be fixed before resuming its tasks.
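The argument can be sketched as code: whatever fraction of modules remains active, the robot's own diagnostics can only ever report "nominal" or "fault", because the state with zero active modules leaves nothing to run the diagnostic at all. The function and labels below are a hypothetical illustration, not a real control system:

```python
# Sketch of the argument: from the inside, every transition between internal
# states looks alike, and the final transition to "no state at all" is never
# observed, so the robot can only ever classify its condition as a fault.

def classify(active_modules: int, total_modules: int) -> str:
    """What the robot's own diagnostics can report about its condition."""
    if active_modules == total_modules:
        return "nominal"
    if active_modules > 0:
        # Even a single surviving module ("agony") is still just a fault.
        return "fault: request repair"
    # With zero active modules there is no process left to execute this
    # very function: this branch is unreachable from the inside.
    return "unreachable from the inside"

# Progressive decay from 10 active modules down to the last one: the
# robot's own record never contains anything but "nominal" and faults.
history = [classify(n, 10) for n in range(10, 0, -1)]
```

The record simply ends at the last active state; "death" never appears in it, which is the sense in which the robot cannot prefigure its own termination.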

But still, suppose we "force" the robot's knowledge by informing it that its problems have no solution and that, at most, it will be able to give birth to new organisms through some mechanism of reproduction (the most trivial one starts with dismantling). Again the problem arises of observing the configuration of internal states after this tragic communication: is there any visible sign that informs us about any awareness the robot has acquired? The answer is no, and the reason is relatively trivial: the system cannot imagine, analytically or figuratively, as with EPMs, a state whose defining characteristic is that it cannot exist!

From this, we can infer that the robot cannot inherently think about death; therefore, the triad cannot close. It does not matter what value the robot attaches to itself and its work because, in any case, what matters is the relationship between being at a particular point in space-time and not being able to be there or anywhere else; when this situation occurs, there is an awareness of a continuum that must somehow break down, but if that eventuality is banished from the functional dynamic itself, then any foreshadowing of total lifelessness is impossible.

If, therefore, one looks for the roots of the preservation instinct in the fulfillment of the triad, it is more than evident that a machine can never contemplate an autonomous internal state that drives it toward some process of reproduction (assuming, of course, that such a process exists and is viable), unless one programs it, in the most algorithmic and literal sense of the term, to carry out a series of new assemblages. In this case, which might superficially hint at an emergent property of procreation, machines would adopt behavior very similar to that of a human community; but this would be nothing more than a pure illusion, since there would no longer be any reason to speak of instincts or drives, as any form of tacit awareness would inevitably be lost.

In conclusion, I would like to remind the reader that my analysis is based on comparing groups of humans and intelligent robots, even though I have not defined in this article what I mean by intelligence as applied to a machine.

As much as this shortcoming may invite controversy and criticism, I would point out that intelligence is definable only through the study of man. All extensions implemented by ethology or engineering must always keep in mind the basic model, which is a source both of inspiration (as far as design aspects are concerned) and of constant study, so as to assess which parameters (if any) belong exclusively to the human race and which others are common to more heterogeneous families of organisms.

If we start from this assumption, the value to be attributed to the term "intelligent robot" is somewhat arbitrary, since it is limited by the consideration that the behavior under examination (the self-preservation instinct) is not a prerogative of humans alone but appears in all animal species. Our machine can be any artificial structure capable of possessing internal states and, purely for greater similarity with living beings, also equipped with a bivalent perceptual apparatus, i.e., capable of grasping information flows from both outside (exteroceptive sensors) and inside (proprioceptive sensors), together with a locomotion-interaction system that allows the robot to come into full contact with the preconstituted environment-context.

Any other meaning of the word "intelligent" is welcome, but it cannot be taken into account in our examination, so as not to repeat the error noted above of mistaking an externally willed algorithmic process for a decision made on the basis of the existential considerations summarized in the triad.


