Frieder Nake

Semiotic Animal | Semiotic Machine: Machinizing the Work
of Our Heads—A Feast for the Inhuman Human


The German mathematician Felix Hausdorff (1868–1942), in one of his non-mathematical essays, wrote of the human as the semiotic animal.[1] He did so almost in passing, and I would like to take this as an indication of the power of the brainwave he must have experienced when that thought got hold of him (it is no more than a remark on the human and his or her territories).


Others have, perhaps, gone in the same direction by using seemingly similar, but distinctly different, words. John Deely dedicates an entire enlightening book[2] to discussing the matter of the semiotic animal from a very broad and decidedly postmodern perspective. There he singles out the human as the only animal capable of using signs and reflecting upon them: “All animals signify, many animals make symbols, but only human animals are capable of developing semiotics.”[3] Animals are, as living beings, “semiosic” animals; but humans are, in the animal kingdom, “semiotic” animals.[4] What a difference an s and a t make.


Deely observes that, before his own first emphatic use of the term in 1990,[5] Hausdorff (1897) and Rossi-Landi (1978) had already taken up the word, and Petrilli after him (1998).[6] I am just fortunate that Max Bense knew of Hausdorff a.k.a. Paul Mongré and told us. He thus put me in a position to use the term in much of my teaching and some of my lectures over many years. Through Deely it has now become a line of demarcation and of unity at the same time, superseding at once the medieval rational animal and the modern res cogitans.


Nothing, these considerations suggest, is as human as semiosis, the “way of signs.” As I write this on a sunny, silent spring morning, with birds in the air all around, I am listening to enthusiastic, melancholic songs by the German bard Konstantin Wecker. For decades he has been singing about the great, godly joys and sufferings that we are capable of experiencing in nature, in our flesh, in our souls and minds. Tears fill the eyes, and the spine shivers under the sound of the instruments. Signs may create immediate physical reactions in our bodies, and they force our ideas and aspirations to move towards long-forgotten sensations or towards tortures never experienced.


Signs may also entice us to take them up critically in meta-processes that, because they are semioses about semiosis, remain semioses. In postmodern times, as we may call our times for lack of a proper name, things and objects come to us through signs. In Charles S. Peirce’s semiotic discipline, these sign processes are recursive: without beginning, without end, always already present and never closed or shut down, not even when we are sleeping.


As moralists, we like to think and talk about the inhuman in an attempt to exclude from the human domain those kinds of activities we do not want to tolerate. We observe that they are an everyday capacity of human beings. They infuriate us and appeal to us to such an extent that we brand them with a label that semiotically expels them from our world. It is the attempt to semiotically transfer the signified by changing its signifier. We try it in the hope of gaining the human territory as a morally good territory. This hope will be in vain, I’m afraid.


As the semiotic animals that we intrinsically are, we have been confronted with semiotic machines since the middle of the twentieth century. The computer is a semiotic machine insofar as its machinic operations take their starting point in signs and sign processes. In a sense, this is impossible. For sign processes require activities of interpretation, and machines are inherently incapable of interpreting. They lack contexts, and even more they lack any capacity that would depend on the peculiarities of the situation they find themselves thrown into. This is so obvious to everyone who does not give up his or her human semiotic capacities that it needs no further justification. The computer’s inability to interpret is as evident as a chopped-off hand that will not attach itself to the arm again.


So in the semiotic machine we meet a thing and an object that seems to share humans’ most exclusive capacity but that utterly lacks precisely that human semiotic capacity of free interpretation. In the semiotic domain, the machine seems close to us, and yet this apparent closeness and nearness is totally impossible. Being close in infinite distance. Only aesthetic experience can get us there.


The lecture I gave in Stuttgart on July 29, 2010, as a prelude to a discussion of some aspects of Björn Franke’s work, was about such abstract and distant observations. It was not directly about Franke’s work; it was meant only to establish a background for a discussion of that work. Akademie Schloss Solitude gave me total freedom to make whatever I wanted of my topic. I assumed they were hoping for something that would fit into the general framework of Territories of the In/Human. When a title such as this splits a word with a slash, it implicitly indicates a dialectic, a contradiction, a dynamic change. I like this, because the world is a moving world. And movement needs contradiction to occur. Some call this dialectics. Thinking, radical thinking, can only exist dialectically.


As a note of caution, I want to warn the reader not to expect an essay as he or she knows it. The following text was not written as a more or less coherent essay. Rather, it is a sequence of short remarks that follow the slides of the oral presentation.


The subtitle of the presentation, “Machinizing the work of our head,” refers to what computers, computing, and computer science are about. They owe their existence to the eternal urge of capitalist production to transform all human labor into machinery. Thus, mental capacities of human work are analyzed so scrupulously that they can be handed over to computers for execution. I call this process “machinization.” In English, a word like “mechanization” might sound more familiar. Yet it is less precise, because the computer is not a mechanical machine. It does not belong to the mechanical epoch; its time came afterwards. So machinization is the transfer and transformation of mental capacities of human work (or labor) to a machine that suits the purpose. Machinization can never completely succeed. It is only ever some (perhaps important) aspect of human work that may be described in algorithms.[7]




In their constant attempts to understand their peculiar form of existence, humans have repeatedly turned to the latest machinic paradigm as a perspective from which their existence seems to make sense. What they found, they then used as a model for their own lives. In pre-revolutionary France, for example, the physician and philosopher Julien Offray de La Mettrie (1709–1751) was notorious for describing the human as a machine: L’homme machine. His book caused a scandal, and he was forced to leave his home country of France. When we read it today, it may sound silly to us in various respects. However, we should not be surprised if, 250 years from now, our great-grandchildren, reading current-day speculations about the essence of human existence, laugh at our obsession with L’homme ordinateur.




Humans are great at depicting or sculpting the naked body, both female and male, in the most exciting and ideal-real forms. The bull on the left may be seen as pure power and sexuality. The super-computer on the right (it was a Cray-1) stands for pure power and computability. Male and female are typically at the center, surrounded by others, animal and machine.




So we come to semiotics and, therefore, to the concept of the sign. The most influential semiotic theory is that of Charles S. Peirce (1839–1914). His sign concept, as a triadic relation, distinguishes in the sign the three relata of “representamen,” “object,” and “interpretant.” More abstractly, they are called the first, the second, and the third; or firstness, secondness, and thirdness. The sign is a relation in which a representamen stands for an object by virtue of (or by generating) an interpretant. The interpretant is itself a sign (and certainly not a human interpreter). Thus, the Peircean concept of the sign is recursive: in order to describe it, we must refer to it itself. Even more important than the sign itself are the processes (called semioses) that signs may engage in.[8]
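The recursive shape of the triadic relation can be sketched in a few lines of code. The following Python fragment is my own illustrative construction, not Peirce’s formalism and not part of the original lecture; it only shows how an interpretant, being itself a sign, opens an unending chain of semiosis.

```python
# A minimal sketch (not Peirce's own formalism) of the triadic, recursive sign:
# the interpretant of a sign is itself a sign, so following interpretants
# yields an open-ended chain of semiosis.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sign:
    representamen: str                     # the first: that which stands for something
    obj: str                               # the second: that which is stood for
    interpretant: Optional["Sign"] = None  # the third: itself a sign, not a human interpreter

    def semiosis(self):
        """Follow the chain of interpretants; in principle it never closes."""
        current = self
        while current is not None:
            yield current
            current = current.interpretant

# A toy chain: an interpretant interpreted again, and again.
red_light = Sign(
    "red lamp",
    "traffic regulation: stop",
    Sign("stop", "wait for green", Sign("wait", "look around, pass the time")),
)
for sign in red_light.semiosis():
    print(sign.representamen, "->", sign.obj)
```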




The meaning of a sign is made up of a cultural and a situational aspect. Culture determines the object generally referred to by a sign; situation determines any interpretant specifically inferred from the sign. We may get at the meaning of a red traffic light as follows: its object is that paragraph in the traffic regulations requesting that you stop and wait for a better time; the interpretant, however, is what you or I conclude from that event. It could, for a rational person at least, be: proceed, if you are a pedestrian, with care (but watch for police, children, and cars).




Firstness, secondness, and thirdness are the three new categories Peirce came up with in 1891.




“Der Mensch ist ein semiotisches Tier” [The human is a semiotic animal]. That is a remarkable observation that Hausdorff made. He may have been the first to actually write it down, even though he was afraid of damaging his reputation as a great mathematician if he did so under his own name. He therefore used the pseudonym Paul Mongré, which also served him in other cases of literary production.[9] There is now some recognition of this interesting view of the human.[10]




For some time, researchers have tried to describe the computer as a machine (and thus as belonging to the modern world), but as a machine whose specific character cannot be described better than as “semiotic.”[11]




Hans Vaihinger wrote his philosophy of the “as-if” long before the first modern computer was even conceptualized (although, of course, Charles Babbage had conceived its general structure one hundred years earlier). Since nothing exists in a computer other than through data and programs, we deal with reality in a state of as-if when we approach it by computer.[12]




In the fetish, a currently present physical something is taken for another that is absent. This act of turning a given thing into an object of totally different qualities allows the given thing to become a fetish. As such, it may be worshipped, or it may become a threat. The process is known in many religions. The computer has usurped fetish qualities insofar as it stands, for many, as the thinking (or intelligent) machine. The reason for this is the semioticity of things on a computer. The fluid and volatile character of things (programs, data) on a computer originates in their relational and, thus, semiotic nature. The computer, however, cannot and will never be able to interpret anything in the free and open way we interpret. Therefore, although a semiotic nature must be attributed to computer things, they are not really signs. They are reduced signs only.




The reason behind the computer, a machine and thus the inhuman, taking on human-like capabilities (even if only apparently) is that mental work must be described before the computer can do anything at all, and it must be described in algorithms, that is, in computable functions. All the computer is doing is evaluating those functions, performing the algorithms, running the programs. Think of an algorithm or a program as a tin can containing mental labor in conserved form. When you open the can, you are effectively setting the canned labor into motion again.
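To make the image of the tin can concrete, here is a deliberately trivial sketch of my own (not an example from the lecture): a small piece of mental labor, the calendar-makers’ leap-year rule, conserved as a computable function.

```python
# "Canned" mental labor: the Gregorian leap-year rule, once worked out by
# human heads, conserved as an algorithm. Calling the function "opens the can"
# and sets the conserved labor in motion again; the machine only evaluates.
def is_leap_year(year: int) -> bool:
    # Every fourth year is a leap year, except century years not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year(2000), is_leap_year(1900), is_leap_year(2024))  # True False True
```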




Without the three successive transformations of semiotic, syntactic, and algorithmic reduction of reality, nothing may appear on a computer as canned mental labor. Those transformations are the conditio sine qua non for the blessings of computing.




The ubiquitous “user” has become the scapegoat of software developers and programmers, as well as of dozens of psychologists and eager young students: it is what just about everybody talks about when they talk about what you can do with a computer and how it should be done. To reduce a fully developed human being to a user is a subtle form of de-humanizing the human. The word has, oddly enough, nevertheless been accepted and welcomed by nearly everyone. People are even willing to speak of themselves as “users.” The user, however, is a real human being. He or she is not the reduction of a human to an eye-with-finger, which is what he or she becomes upon accepting the user ideology. That innocent person is in command of piles of canned human labor. He or she feels like a commander pushing around entire armies of slavish workers. Human grandeur as total deprivation.




Beyond any doubt, we are human, that is, semiotic animals. We are not machines, not even semiotic machines, because we have the capacity to freely interpret signs. However, we are human only genetically. Each and every concrete aspect that we develop is due to a situation and a context. Each of us, as an individual assembly of bones, flesh, nerves, and muscles, belongs to the species of semiotic animals. But it is up to us what we become in detail.




Some authors, particularly artists, keep telling us that we have entered a post-human era. It has become an amusing game to take a word (like modern, history, or human) and post-prefix it (into postmodern, post-history, post-human). As a conservative, I know, first of all, that I am, and remain, human. But secondly, this means I am changing all the time. Have I become a machine? Oh yes, in a sense. For all of us have acquired certain habits that we perform much like machines. Not quite so, I should insist, but in principle. There are no operations on a machine that were not carried out in a machinic way before.




If computers appear to have gained the power of interpretation, this is definitely a false belief. It stems from the correct observation that, formally, an act of interpretation is happening when a program reads an input. But that interpretation, upon closer analysis, is the utmost extreme of a formal interpretation: it is the determination of the only operation the machinic system is supposed to carry out when a particular input is read.[13] So interpretation on and by a computer (under the control of a program) is the trivial case of interpretation, one that does nothing but determine what the one, and only one, meaning is.
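This trivial, formal sense of interpretation can be sketched as follows. The fragment is my own illustration with hypothetical names, not code discussed in the lecture: the program “reads” an input by looking up the one operation bound to it in advance.

```python
# "Interpretation" by a program: each admissible input is bound, in advance,
# to exactly one predetermined operation. Nothing here is open to the
# peculiarities of the situation; the "meaning" of an input is fixed by the table.
def interpret(token: str) -> str:
    dispatch = {
        "red": "stop",     # the one and only operation tied to this input
        "green": "go",
        "amber": "brake",
    }
    if token not in dispatch:
        # Anything unforeseen is not re-interpreted in light of the situation;
        # it is simply rejected.
        raise ValueError(f"no operation defined for input {token!r}")
    return dispatch[token]

print(interpret("red"))  # always "stop", whatever the situation
```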




No computer does anything without a program running under an operating system and, nowadays, supported by a powerful run-time system of software. Programs are executable descriptions. They are text and machine at the same time. They are text for us to read and, perhaps, to change and develop. They are machines for the computer to run, because the program effectively turns the general computing machine into a specific one for the current single purpose. This unity of text and machine has a great literary precursor in Franz Kafka’s short story In der Strafkolonie [In the Penal Colony]. There, the visitor witnesses a prisoner being slowly executed: through a matrix of metal pins, his death sentence and its justification are inscribed onto his back and thus into his body. By the time the prisoner dies, the machine has convinced him that justice is happening to him. Can there be a better territory of the inhuman human? The text becomes machine.
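On a far more mundane level than Kafka’s apparatus, the double nature of programs as text and machine can be shown in a few lines. This is a sketch of my own, not part of the original presentation: one and the same string is read by us as text and run by the computer as a machine.

```python
# A program as text and machine at once: the same string can be read as text
# and executed as a (very small) machine.
program_text = "def square(n):\n    return n * n\n"

print(program_text)              # read it: it is text
namespace = {}
exec(program_text, namespace)    # run it: the text now acts as a machine
print(namespace["square"](12))   # -> 144
```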




[1] See Paul Mongré: Sant’ Ilario: Gedanken aus der Landschaft Zarathustras. Leipzig 1897, p. 7.

[2] John Deely: Semiotic Animal: A Postmodern Definition of “Human Being” Transcending Patriarchy and Feminism. South Bend 2010.

[3] Ibid., p. 47.

[4] Ibid., p. 100.

[5] Idem: Basics of Semiotics. Bloomington 1990.

[6] Ferruccio Rossi-Landi: Ideologia. Milan 1978 (English translation: Marxism and Ideology. Oxford 2005); Susan Petrilli: Teoria dei segni e del linguaggio. Bari 1998.

[7] See Frieder Nake: “Informatik und die Maschinisierung von Kopfarbeit,” in: Wolfgang Coy et al. (eds.): Sichtweisen der Informatik. Braunschweig 1992, pp. 181–201.

[8] See Charles S. Peirce: Phänomen und Logik der Zeichen. Frankfurt 1983.

[9] See Mongré: Sant’ Ilario (1897).

[10] See Deely: Basics of Semiotics (1990); Semiotic Animal (2010).

[11] See Mihai Nadin, Paul Bouissac (eds.): Semiotic Machine: The Online Encyclopedia of Semiotics, http://www.semioticon.com (accessed 12/04/2011); Winfried Nöth: “Semiotic Machines,” in: Cybernetics & Human Knowing, 9, 1 (2002), pp. 5–21; Frieder Nake: “The Semiotic Engine: Notes on the History of Algorithmic Images in Europe,” in: Art Journal, 68, 1 (Spring 2009), pp. 76–89.

[12] See Hans Vaihinger: Die Philosophie des Als Ob. Aalen 1986 (reprint of the 9th/10th edition of 1927).

[13] See Frieder Nake: “Surface, Interface, Subface: Three Cases of Interaction and One Concept,” in: Uwe Seifert et al. (eds.): Paradoxes of Interactivity: Perspectives for Media Theory, Human-Computer Interaction, and Artistic Investigations. Bielefeld 2008, pp. 92–109.



