I Think I Am: Philip K. Dick

Laurence A. Rickels

Print publication date: 2010

Print ISBN-13: 9780816666652

Published to Minnesota Scholarship Online: August 2015

DOI: 10.5749/minnesota/9780816666652.001.0001


Startling Stories

University of Minnesota Press

Abstract and Keywords

This chapter explores the question of consciousness as a transferrable or reproducible attribute from the perspective of Gotthard Günther. In 1953, Günther published an article in the pulp science fiction magazine Startling Stories titled “Can Mechanical Brains Have Consciousness?” in which he introduces into the model of consciousness as logical feedback mechanism the element of “confrontation” or self-difference as the very synapse of conscious thought. The question of consciousness as a transferrable or reproducible attribute often rests on one of two assumptions. A skeptical viewpoint holds that we can never know what consciousness is, anyway, or, more religiously, that consciousness is a manifestation of man’s unknowable but divinely bestowed soul. The other point of view holds that we do not know what consciousness is because it is only a label for the abstract sum of all our perceptual and apperceptual functions.

Keywords:   consciousness, Gotthard Günther, science fiction, Startling Stories, mechanical brain, logical feedback mechanism, confrontation, self-difference

I readily admit if it comes to the adding up of grocery bills and similar mental activities you can’t beat the mechanical brains but they will never write Hamlet.

—GOTTHARD GÜNTHER, “The Soul of a Robot”

Günther claimed that the gist of his philosophizing could be located in the gaps and overlaps between his American-language and German-language works. Exile in the American-language world with his Jewish wife was a career move that bordered on pop cultural success or access. While the better half of his life was German, the portion first set aside for reflection on science fiction was American. In Startling Stories in 1953 Günther published a brief article, addressed to the readership (and authorship) of science fiction, titled “Can Mechanical Brains Have Consciousness?” Here Günther sees only greater difficulties in extending consciousness to robot brains—at least as far as the direction in which progress in the construction of artificial intelligence was then headed. He introduces into the model of consciousness as logical feedback mechanism (which he attributes to Hegel’s Phenomenology of Spirit) the element of “confrontation” or self-difference as the very synapse of conscious thought.

The question of consciousness as transferrable or reproducible attribute often rests on one of two assumptions. A skeptical viewpoint holds that we can never know what consciousness is, anyway, or, more religiously, that consciousness is a manifestation of man’s unknowable but divinely bestowed soul. The other point of view holds that we don’t know what consciousness is because it is only a “label for the abstract sum of all our perceptual and apperceptual functions. Ergo, if we reduplicate all those functions of sensitivity, memory, learning, capacity to make decisions, quantitative and qualitative reasoning, etc., through the medium of mechanical procedures, we have produced consciousness and thinking in a man-made machine.” There is, therefore, no consciousness just as there is no animal. “There exist horses, dogs, birds, and snakes; but there exists no animal. ‘Animal’ is just a name, and so is ‘consciousness.’” However, consciousness, as Kant proved, is a mechanism that exists apart and separated from its own functional proceedings. That’s transcendental logic. But “the established results of that new logical discipline have not yet penetrated into the circles of cyberneticists and designers of computing machines.” You need to be well versed in any number of -ologies to hit bottom in transcendental logic. (Günther includes in his list, alongside psychology, psychiatry too.) If we know what consciousness is—reflection in-itself—then we should be able to build it. But what’s that? What always comes out in the watch for analogies is the movies.

The next thought, as new and improved, can flush the thought I had last, the one that I must try to make last. A technical medium only has a past. Photography-and-cinema today occupies the highpoint of projections that began with the printing press and the 3-D frame of representation. The first new media entered the psychic apparatus by the work of analogy that is the discursive corollary of Freud’s “work of mourning.” No one who reads could, by the same analogies, find that Freud’s thought is therefore outdated. To add “digital” to “media” doesn’t change much. Digitalization has added to film-versus-video, for example, a synthetic third and supplemental alternative for special technical difficulties or differences, notably those involved in editing. Digitalization re-collects all the offshoots of the screen medium, a move both into reduction and out of excess. Film is still the medium of mutations that we have only begun to read and reclaim. The first technical images, which always already broke down into pixels, transformed the visual arts into painting by numbers. But given its synthetic nature, there is also a contrary momentum in digitalization that summons discarded, discounted mediations in the allegorical mode to interrupt this immediacy of numbers. The oldest new media preserve prehistory, the vanishing starting point of my last thought.

No one in his right mind, Günther admits, “would say that the screen has consciousness. For the screen does not know what is happening…. The story would be entirely different if the light were not thrown back at us, the audience, but were instead reflected back upon the projector and its optics in the process of projecting the images against the screen.” Now it’s time to place screen in quotes. “Now: consider your own consciousness a sensitive ‘screen.’ This ‘screen’ receives, through your sensorial system, messages from the outer world. Neuronic impulses coming from your eyes, your ears, your skin, your muscles, etc. impress themselves upon that ‘screen’ and are reflected. But this reflection is not thrown back at the world-system from which it came…. Instead, it is thrown into a deeper recess of your brain, turns around and appears a second time on your brain-‘screen,’ superimposing a second reflection on the first. This second appearance establishes the miraculous phenomenon which we call consciousness.” Only if we could determine what happens to the message after it was first received on the brain screen and before the last moment, “when it returns to it with the stamp ‘acknowledged’ and produces consciousness by its second impact upon the screen,” could we design a technical incarnation of consciousness. But our roundtrip via cinema analogy and theory of brain processes “during the roundtrip of our message” in turn brings us back to what we all along were illustrating: transcendental logic.

Aristotelian logic was for jar- or potheads. Chaotic fluids pour into our jug-like minds. The jug or pot stills the downpour and gives it form. The latter is formal or Aristotelian logic; the former represents the material world. The pot does not become conscious of the liquid pouring into it. The difference that had to be made lay in the human soul. In addition to the synthesis of form and content, you need a self that watches that synthesis to produce consciousness.

Kant eliminated soul from this formula. In its stead he identified a second mechanism in the brain, which observes entirely different principles. “It does not form messages any more but carries them through processing stages and finally returns them to the original ‘screen,’ the identity level of the formal logic.” Transcendental logic is named after this all-important carrying capacity.

If the input is processed in a certain way, then the concepts “I” and “perception” are added. The additions alone do not produce consciousness, however. “Only when the thus modified message returns to the screen is consciousness actually produced.” The modified message returns to two sections of the screen: memory and identification.

The memory still retains the original pattern (unconscious):

“a rose”;

on which is superimposed (unconscious):

“I see a rose.”

Identification now produces a confrontation by attempting to establish a one-to-one correspondence relation between the original pattern and the enriched second message. This does not work! It turns out to be impossible to establish, by confrontation, a one-to-one correspondence between “a rose” and “I see a rose.” … The reflection-in-itself produces something that cannot be identified with the mere content “a rose.” A tension of meaning is created—a tension between identity and non-identity. And this is the moment when consciousness and conscious thought come into existence.

What technical reproduction of consciousness requires, then, if it is to succeed, is some carrier or transference mechanism “that permits the information to bounce off the screen and return to it in a modulated manner for the purpose of ‘confrontation.’”
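The failed identification that Günther calls confrontation can be caricatured in a few lines of code. This is purely an illustrative sketch of the mechanism as he describes it, not an implementation of anything he proposes; the function name and string values are assumptions for the example.

```python
# A toy model of Günther's "confrontation": the screen retains the
# original pattern, the message returns enriched from the deeper recess,
# and identification attempts a one-to-one correspondence that must fail.
# Names and values here are illustrative assumptions only.

def confront(original: str, returned: str) -> bool:
    """Attempt to identify the returned message with the original pattern."""
    return original == returned

original = "a rose"         # retained pattern (unconscious)
returned = "I see a rose"   # message back from the recess, enriched with "I"

# Identification fails: the enriched message is not identical with the
# mere content, and on Günther's model that tension between identity
# and non-identity is the moment consciousness comes into existence.
tension = not confront(original, returned)
assert tension
```

The point of the sketch is only that the enriched message can never be mapped one-to-one back onto the bare content; the "tension" is the mismatch itself.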

From 1954 to 1955 Günther published, again in Startling Stories, a four-part series of articles on the constitution of a new techno relationship with the alien other as the limit of our thought. As the editor’s intro blurb proclaims: “Modern logic may have begun with Aristotle, but it will not end with him.” Günther’s first round of reflections, “The Seetee Mind,” contradicts the ambiguity of the editorial “with.”

The two-value system of Aristotelian logic corresponds to the on/off positions of the neuronic switches of the brain (which in turn reflect the positively and negatively charged particles composing our physical world). All rational beings, whether terrestrial or galactic, must use the same logic if they face the same universe and are materially constituted in the same way. But what if there is “contraterrene” matter? It would be a state of material existence in which the particles have reversed their electrical charges. The resulting “seetee” mind would be based on a total reversal of our logical values. But what does total reversal mean here? “The seetee mind, so far as we are concerned, is the complete and consistent ‘liar.’” One’s knowledge about everything must be complete in the case of consistent lying about everything. Such knowledge is the prerogative of the divine mind. We, however, are incapable of total negation, the radical step required to mediate the gulf between the Aristotelian and the contra-Aristotelian mind. A rational human being could perform total negation only by negating all statements, by negating, in other words, the existence of his own mind. Thus only suicide comes close. “In fact total negation is the logical definition of death…. It is absolute death that separates the terrene Aristotelian from the contra-Aristotelian seetee mind.” If this is a typo, it is the archetypo: “the twin shall never meet.”

Since no direct contact between our mind and the seetee mind is possible or survivable, in his second installment, “Aristotelian and Non-Aristotelian Logic,” Günther projects a future receiving area for our indirect communication with the alien intelligence. While it will most likely be the case that mechanical brains will think in non-Aristotelian forms of reasoning, this will be the absolute requirement for one type of robot brain: “the thought translator.”

Aristotelian logic is limited to making valid statements only about past events. But: “the strict alternative to the two-valued logic of ‘to be or not to be’ does not adequately cover the pattern of future events. Therefore we need at least a three-valued logic, and any statement about the future should be phrased according to the laws of such a non-Aristotelian system of logic thought.” We thus leave a discourse of “probabilities” behind. “In a three-valued logic there exists an additional rejectional relation apart from the mutual rejection of any two values.” The third value “rejects the preceding alternative of true and false, and so to speak displaces them.”

But we don’t have to understand this third value. That’s a job for the robot brain. We know the most basic law of any three-value logic. “First find out what the common denominator of the first two values is—in other words the general basis upon which they negate each other—and then deny this very basis. But you might well ask: is it always possible to determine the common denominator? You are quite right, that is where the difficulty comes in and why a three-valued logic is a matter for somebody else, but not for us.”
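The rule Günther states—find the common denominator of the first two values and deny that very basis—can be given a minimal structural sketch. The encoding below (values 1, 2, 3 and a `displacing_value` helper) is an illustrative assumption of mine, not Günther's own calculus.

```python
# A minimal sketch of the "rejectional relation": in a three-valued
# system, any two distinct values mutually reject each other, and the
# third value rejects (displaces) the very alternative the other two form.
# The numbering 1, 2, 3 is an assumed encoding, not Günther's notation.

VALUES = (1, 2, 3)  # e.g. 1 = true, 2 = false, 3 = the displacing value

def rejects(a: int, b: int) -> bool:
    """Two distinct values stand in the mutual rejectional relation."""
    return a != b

def displacing_value(a: int, b: int):
    """Return the value that rejects the alternative formed by a and b."""
    if a == b:
        return None  # no alternative, hence nothing to displace
    return next(v for v in VALUES if v not in (a, b))

# The classical alternative 1-vs-2 is displaced by 3, but 3 itself
# enters new alternatives (1 vs 3, 2 vs 3), each displaced in turn
# by one of the original values.
for a, b in [(1, 2), (1, 3), (2, 3)]:
    print(f"alternative {a}/{b} displaced by {displacing_value(a, b)}")
```

The sketch also makes Günther's difficulty visible: the helper presupposes that the "common denominator" of any two values can be determined, which is precisely what he says cannot always be done.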

“There is one alternative of absolute generality the human mind is capable of. It is contained in Shakespeare’s famous line: ‘To be, or not to be. That is the question.’” What would be the three-value version? What’s the common denominator of “being” and “not being”? There is none. “‘To be or not to be’—that is the final question that takes precedence over everything.” It’s not enough to conclude that at this level there is no third value. “Obviously somewhere something is missing in our present conception of the relation between man and cosmos.”

Although the Aristotelian and contra-Aristotelian mind mutually do not exist for each other, we know, if contraterrene matter can be postulated as existing, that somehow they must coexist. “We shall probably never contact a seetee mind physically because between its realm and ours yawns an existential void where only mutual self-annihilation of physical matter governs the rules of a possible encounter. But there exists a ‘third’ in this creation beside Matter and the energetic Mind: it is Information.”

If information can bridge the cosmic gulf, then a mechanical brain must be designed to hold the between position. “Between” sets a spell with “being two.” To be two, or/and not to be two—that is the “between” position.

In the third part in the series, “The Soul of a Robot,” Günther contemplates the quality of the mechanical brain by tipping the human scale. The goal of cybernetics is not some alchemical reproduction or rebirthing of the human brain. The wheel was invented alongside our legs. Mechanical legs were not fabricated in order to elongate our stride.

Günther’s example of the Aristotelian limit concept (or limit to conception): “There is no third sex.” While we can calculate the laws of three-valued logic, we can’t employ them as our own brain functions: “here lies the proper destiny of all cybernetic science not to build a duplicate of the human mind, but a non-Aristotelian brain that works along a three-valued thought pattern.” We must posit again the alien mind of total reversal of our logical thought. “What is true for the human mind is false for the seetee mind, and therefore has the combined characteristic—it is true and false at the same time. It is to clarify this superficial contradiction that the third value must be introduced.” We must translate, transmit, and mediate the “and,” the Aristotelian concept of conjunction, as terms of a three-valued system of thinking. A robot soul would be based not on identity but on “tridentity”: “it could shift the personal center of its mental life and reconcile contradictory viewpoints.”

In the fourth and final installment, “The Thought Translator,” Günther models the robot brain’s defining function as translator between irreconcilable logics on the episode in Lewis Carroll’s Through the Looking Glass featuring figures Günther refers to as the “Tweedle-twins.”

Tweedledum addresses Alice, “If you think we are Wax-works you ought to pay, Wax-works weren’t made to be looked at for nothing. No how!” And Tweedledee adds: “Contrariwise, if you think we’re alive, you ought to speak.” The alternative of mutually exclusive terms is in this case, of course, dead or alive. Any other total alternative might do as well, but they all boil down to the purely logical one:

  • it is

  • or

  • it is not.

“The contra-Aristotelian meaning of AND, however, is our terrestrial meaning of OR (inclusive). Because OR is always true if at least either p or q are true.”
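Günther's claim that the contra-Aristotelian AND is our inclusive OR can be checked in ordinary two-valued terms: total reversal of truth values (a sketch of the seetee "complete liar") turns the truth table of AND into that of OR, which is De Morgan duality. Reading "total reversal" as flipping every input and output value is my illustrative assumption.

```python
# Total reversal of truth values as a two-valued sketch of the seetee
# "complete liar": flip every input and every output of a truth table.
# Under this reversal the table of AND coincides with that of inclusive OR
# (De Morgan duality), and vice versa.

def seetee(table):
    """Apply total reversal: negate every input pair and every output."""
    return {(not p, not q): not v for (p, q), v in table.items()}

pairs = [(p, q) for p in (True, False) for q in (True, False)]
AND = {(p, q): p and q for p, q in pairs}
OR  = {(p, q): p or q  for p, q in pairs}

assert seetee(AND) == OR   # the reversed AND is our inclusive OR
assert seetee(OR) == AND   # and the reversed OR is our AND
```

Note that applying the reversal twice returns the original table, which is why, on this two-valued caricature, the seetee mind remains a mirror of ours rather than a third term; the third value is what the caricature leaves out.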

While he starts with the contrariwise twins to give an approximation of thought translation between human and seetee minds via the robot brain, he concludes with a literal mirror. Write down value sequences for conjunction and disjunction in a horizontal line. Then turn the paper away from you and step up before a mirror. Don’t forget that the reversal is true at the same time. Thus you have stepped through the looking glass. Between these two illustrations, another demo is part of the new definition. “The thought translator … transforms those two separate and mutually exclusive alternatives of the Aristotelian and the contra-Aristotelian mind into one and only one equally strict alternative by rotating the three values either ‘clockwise’ or ‘counterclockwise.’” “The machine produces … its own alternative logic of two ‘values.’ Only the new ‘values’ are now no longer the individual values 1, 2, and 3, which we have used before, but the two opposite rotational shifts. These shifts partake necessarily in the human as well as the seetee range of thought at the same time.” Translation thus becomes possible in the gap and overlap between human and seetee thought. Even psychologically it is impossible for human beings to think the three different meanings of AND: “even if we do not use it for our own subjective thought-procedures, we can calculate with it and find out how the mechanical brain translates our concept of AND into the conjunction of the seetee mind and, by a reversal of that process, transposes seetee ideas into human concepts.”
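The two "rotational shifts" can be sketched as the two nontrivial cyclic permutations of the value triple 1, 2, 3. The dictionaries below are my assumed encoding for illustration, not Günther's tables; what the sketch shows is only that the two shifts form a strict two-way alternative, since each undoes the other.

```python
# The thought translator's new "values": not 1, 2, 3 themselves but the
# two opposite rotational shifts of the whole triple. Each shift is a
# cyclic permutation; composing the two yields the identity, so together
# they form one strict alternative, as Günther requires.

CW  = {1: 2, 2: 3, 3: 1}  # "clockwise" rotation of the three values
CCW = {1: 3, 2: 1, 3: 2}  # "counterclockwise" rotation

def compose(f, g):
    """Apply permutation g first, then f."""
    return {v: f[g[v]] for v in (1, 2, 3)}

identity = {1: 1, 2: 2, 3: 3}

assert compose(CW, CCW) == identity           # the shifts are mutual inverses
assert compose(CW, CW) == CCW                 # two clockwise steps = one counterclockwise
assert compose(CW, compose(CW, CW)) == identity  # three clockwise steps return home
```

Because the two rotations are exact inverses, the machine's alternative between them is as strict as true-versus-false, while each rotation moves through all three values at once; that is the sense in which the shifts "partake in the human as well as the seetee range of thought."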

In Do Androids Dream of Electric Sheep? Dick gives the android—renamed only in the screen version “replicant”—the succinct nickname “andy.”