Scott Robinson

Space: 1999 - Can a Machine Be Conscious?



"The Infernal Machine” 

 

“I am Delmer Powys Plebus Gwent of the planet Zemo. A man of considerable importance on that planet, perhaps not fully recognized as the scientific genius I am. I created this entity, an extension of myself; my entire personality is here, and combined with it, is the superior ability of a computer's brain, and all the might and power known to our planet. I am impervious to destruction and powerful enough to destroy an entire universe. That is who I am, Delmer Powys Plebus Gwent.”  

~The computer Gwent, to the Alphans  

 

A huge alien starship comes upon Moonbase Alpha. A message arrives from one of its occupants, self-identifying as Gwent, asking permission to land near the base.


Koenig, Bergman and Russell board the alien ship and find themselves in a vast chamber, where they encounter an old humanoid man. He speaks to a disembodied voice – Gwent – and identifies himself as Companion. 


It rapidly becomes clear to the Alphans that Companion is subservient to Gwent, who demands that the Alphans provide the supplies they need. Koenig looks over the list – they can spare what Gwent needs – but refuses to have the supplies brought over until they have returned to Alpha.


Gwent proceeds to throw a tantrum and demonstrate his power, downing two nearby Eagles. Companion reveals that Gwent is his master, and is all-powerful, and pleads with the Alphans to just do as he says. He then reveals that Gwent isn’t just another occupant of the ship, speaking to them from some other location; Gwent is the ship. Gwent is an artificial intelligence.  


Moreover, Companion reveals that he is Gwent’s creator – a scientist from another world who built a machine in his own image, only to have its personality run amok.

 

 

It’s a very old and very popular conundrum: can a human mind exist anywhere but in a human brain? 


You’ll find this one all over the Internet, of course, and some of the finest living minds have weighed in. There’s a wide-ranging summary of those thoughts below, but first, here’s a survey of some of sci-fi's most well-known implementations:20

 

  • HAL 9000. The granddaddy of them all, 2001’s misguided supermind descends into murder, demonstrating no empathy but plenty of fear; 

  • Cmdr. Data, Star Trek: The Next Generation. The resident android aboard the USS Enterprise NCC-1701-D is noted for his Pinocchio-like desire to become a human being; 

  • Andrew Martin (“The Bicentennial Man”). Mentioned earlier, Isaac Asimov’s android seeks to become human, just as Data did, and pretty much achieves it; 

  • The androids of Westworld. Many of the park’s “hosts” achieve sentience when their creator, Robert Ford, presents them with a cognitive puzzle designed to unlock self-awareness; 

  • H.A.R.L.I.E. Author David Gerrold’s sentient computer works its way to psychological maturity through interaction with a human friend, then fights for its right to survive;

  • Holmes IV. The supercomputer in Robert Heinlein’s The Moon is a Harsh Mistress achieves sentience as it oversees a lunar penal colony;

  • VALIS. A sentient alien satellite orbiting Earth, acting as a de facto deity, conceived by Philip K. Dick; 

  • Domino. Companion to beloved traveling journalist Michaelmas in Algis Budrys’s novel of the same name, this Internet-inhabiting sentience helps the latter run the world; 

  • Samantha, Her. This laptop operating system is in fact an Internet intelligence that develops a personality, then falls in love with her user – with hundreds of users, actually... 

 

There are many dozens more, of course. And there’s a lot going on here, from computer-based sentients that emerged spontaneously to those created with sentience in mind; from AIs designed to mimic human cognition and self-awareness to alien-built intelligences designed to play God.


It’s all great fun, and challenging to ponder; but what’s the reality, when it comes to sentient intelligence living in a machine? 

 

 

Douglas, Daniel, John: An Infernal Chinese Room  

 

John Searle’s Chinese Room argument is the pacesetter for debate on this question – debate so long-running and heated that most of it has been shuffled off to the back of this book as an appendix (see below). But the gist is this:  

 

“Imagine that you are locked in a room, and in this room are several baskets full of Chinese symbols. Imagine that you (like me) do not understand a word of Chinese, but that you are given a rule book in English for manipulating these Chinese symbols. The rules specify the manipulations of the symbols purely formally, in terms of their syntax, not their semantics. So the rule might say: ‘Take a squiggle-squiggle sign out of basket number one and put it next to a squoggle-squoggle sign from basket number two.’ Now suppose that some other Chinese symbols are passed into the room, and that you are given further rules for passing back Chinese symbols out of the room. Suppose that unknown to you the symbols passed into the room are called ‘questions’ and the symbols you pass back out of the room are called ‘answers to the questions’. Suppose, furthermore, that the programmers are so good at designing the programs and you are so good at manipulating the symbols, that very soon your answers are indistinguishable from those of a native Chinese speaker. There you are locked in your room shuffling your Chinese symbols and passing out Chinese symbols in response to incoming Chinese symbols…from the point of view of an outside observer, you behave exactly as if you understood Chinese, but all the same you don’t understand a word of Chinese. But if going through the appropriate computer program for understanding Chinese is not enough to give you an understanding of Chinese, then it is not enough to give any other digital computer an understanding of Chinese.”21 

 

Can machines think? Philosopher John Searle turned his attention to this question in 1980. On a flight to an academic conference, he created the thought experiment above to demonstrate his conviction that no digital computer, however convincing in elocution and mannerism, could ever be said to truly think. 


In this age of Siri, Alexa and their emerging sisters, and the endless stream of prompts, updates, and recommendations they provide us, we are beginning to take talking machines for granted. With the advent of "smart cloud" technology in the workplace, digital assistants will soon be just as unremarkable – ethereal digital managers who keep us on schedule, screen our calls and emails, and review our work before it goes out.


But despite our casual acceptance of machine influence in our thoughts and actions, these novelties aren’t actual thinking machines. Many decades after Alan Turing proposed his optimistic challenge, we have nothing out there (yet) on the digital landscape that is truly capable of passing for human. 


We can, however, given our interactions with our digital helpers, see that the Turing Test (if you can’t tell you’re conversing with a machine, then the machine can be said to be intelligent) may not be the true measure of the thinking machine.  


Put another way – if Searle, who has a human brain and all its linguistic and semantic potential, can functionally translate Chinese convincingly and still not understand a word of it, what chance does a microprocessor have?
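To make the “syntax without semantics” point concrete, here is a minimal sketch – a few lines of Python, with invented placeholder symbols and rules standing in for real Chinese – of a Chinese-Room-style rule book. The program matches incoming symbols against rules and passes back whatever the rules dictate; nothing in it represents what any symbol means.

```python
# A toy "rule book": purely syntactic pattern -> response mappings.
# The symbols are invented placeholders; nothing here encodes meaning.
RULE_BOOK = {
    ("squiggle", "squoggle"): ("squoggle", "squiggle"),
    ("squoggle",): ("squiggle",),
}

def chinese_room(incoming: tuple) -> tuple:
    """Return whatever reply the rule book dictates for the incoming symbols.
    The operator only matches shapes against rules; no step anywhere
    consults what the symbols are about."""
    return RULE_BOOK.get(incoming, ("squiggle",))  # a default rule for unmatched input

# From the outside, the "question" gets a fluent-looking "answer";
# inside, there is only lookup.
print(chinese_room(("squiggle", "squoggle")))  # -> ('squoggle', 'squiggle')
```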


Many were the outraged replies from the artificial intelligence community, when Searle put forth this argument: 

 

The Systems Reply: “Neither the person in the room nor the rule book understands Chinese, but the entire system understands Chinese.”

 

The Robot Reply: “The Chinese Room can be placed inside a robot, which can roam the landscape and develop causal connections between the symbols and the objects they represent.”

 

The Brain Simulator Reply: “Suppose the rule book is so finely detailed that it simulates the responses of every neuron in the brain of a native Chinese speaker. Then there is no distinction between the room and an actual Chinese mind.” 

 

There are many other replies, some of which are used in combination, but in a nutshell, there has been no serious ontological answer to Searle’s thought experiment in the intervening decades. What the Chinese Room has done, however, is open up deeper examinations of its terms: what does it mean to ‘understand’? Where is the line drawn between syntax and semantics? What is ‘meaning’? These are questions not only deserving of deep answers, but of frequent revisitation…


For more than 40 years, scientists, philosophers, nerds and fanboys have raged at each other over the Chinese Room argument, the majority in defense of the sacred-cow belief that computers are people, too. In the age of ChatGPT, it’s becoming increasingly clear that more is needed to demonstrate sentience than a firm command of English. The Chinese Room argument is still interesting and informative, but the debate has now shifted considerably.


Here are some more contemporary views on machine sentience. 

 

 

Neurophysiologist Christof Koch  

 

In Scientific American, 2019: 

 

“Although experts disagree over what exactly constitutes intelligence, natural or otherwise, most accept that, sooner or later, computers will achieve what is termed artificial general intelligence (AGI) in the lingo. 

 

The focus on machine intelligence obscures quite different questions: Will it feel like anything to be an AGI? Can programmable computers ever be conscious? 

 

By “consciousness” or “subjective feeling,” I mean the quality inherent in any one experience—for instance, the delectable taste of Nutella, the sharp sting of an infected tooth, the slow passage of time when one is bored, or the sense of vitality and anxiety just before a competitive event. Channeling philosopher Thomas Nagel, we could say a system is conscious if there is something it is like to be that system. 

 

Consider the embarrassing feeling of suddenly realizing that you have just committed a gaffe, that what you meant as a joke came across as an insult. Can computers ever experience such roiling emotions? When you are on the phone, waiting minute after minute, and a synthetic voice intones, “We are sorry to keep you waiting,” does the software actually feel bad while keeping you in customer-service hell? 

 

There is little doubt that our intelligence and our experiences are ineluctable consequences of the natural causal powers of our brain, rather than any supernatural ones. That premise has served science extremely well over the past few centuries as people explored the world. The three-pound, tofulike human brain is by far the most complex chunk of organized active matter in the known universe. But it has to obey the same physical laws as dogs, trees and stars. Nothing gets a free pass.


We do not yet fully understand the brain’s causal powers, but we experience them every day—one group of neurons is active while you are seeing colors, whereas the cells firing in another cortical neighborhood are associated with being in a jocular mood. When these neurons are stimulated by a neurosurgeon’s electrode, the subject sees colors or erupts in laughter. Conversely, shutting down the brain during anesthesia eliminates these experiences. 


Given these widely shared background assumptions, what will the evolution of true artificial intelligence imply about the possibility of artificial consciousness?” 

 

Koch goes on to address an important point about sentient machines: it isn’t enough for us to perceive that they are conscious and self-aware through our interactions with them (which could simply amount to superb simulation); they must actually be conscious and self-aware, our perceptions aside:

 

“To create consciousness, the intrinsic causal powers of the brain are needed. And those powers cannot be simulated but must be part and parcel of the physics of the underlying mechanism. 


To understand why simulation is not good enough, ask yourself why it never gets wet inside a weather simulation of a rainstorm or why astrophysicists can simulate the vast gravitational power of a black hole without having to worry that they will be swallowed up by spacetime bending around their computer. The answer: because a simulation does not have the causal power to cause atmospheric vapor to condense into water or to cause spacetime to curve! In principle, however, it would be possible to achieve human-level consciousness by going beyond a simulation to build so-called neuromorphic hardware, based on an architecture built in the image of the nervous system. 


Whether machines can become sentient matters for ethical reasons. If computers experience life through their own senses, they cease to be purely a means to an end determined by their usefulness to us humans. They become an end unto themselves.” 

 

Koch has also said, in MIT Review:

 

“We know of no fundamental law or principle operating in this universe that forbids the existence of subjective feelings in artifacts designed or evolved by humans.” 

 

 

Then again... 

 

The philosopher David Chalmers reminds us that sentience itself is still beyond human explanation:


“The tools of neuroscience cannot provide a full account of conscious experience,” he wrote, “although they have much to offer.” 


Steven Weinberg, the Nobel Prize-winning physicist, has questioned whether the existence of consciousness can even be derived from physical laws. If it can... how? 

Giorgio Buttazzo, in his article “Artificial Consciousness: Utopia or Real Possibility,” writes, “Working in a fully automated mode, [computers] cannot exhibit creativity, unreprogrammation, emotions, or free will. A computer, like a washing machine, is a slave operated by its components.”

 

Possibilities 

 

In a series of papers presented at the 16th Annual Conference on Artificial General Intelligence in Stockholm in 2023, Michael Timothy Bennett proposed a framework for practical artificial general intelligence. The reasoning runs like this: human beings are themselves “mechanisms” – networked physical processes that interact with the world – so there is no reason in principle why those processes can’t be duplicated in a mechanism that is aware of itself. Put another way, we are mechanisms that are self-aware; therefore self-awareness can be the province of mechanisms.


The key, he declared, was to construct the mechanism in such a way that it perceives causality – that is, to recognize what objects or events in the world impact other objects or events. When it can sense that it caused something to happen, or that something happened to it, then the journey to self-awareness – and, eventually, consciousness – has begun. 
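Bennett’s own framework is formal, but the intuition can be sketched in a few lines of Python. The sketch below is a toy illustration of the idea, not his model: an agent that tags a change in its sensor reading as self-caused when the change immediately follows its own action, and as something that happened to it otherwise.

```python
import random

class ToyAgent:
    """A toy agent that attributes a change in its sensor reading to itself
    when the change immediately follows its own action, and to the outside
    world otherwise. A crude stand-in for "perceiving causality"; an
    illustrative sketch, not Bennett's formal model."""

    def __init__(self):
        self.last_action = None
        self.last_reading = 0

    def act(self, world):
        # The agent either pushes (which changes the world) or waits.
        self.last_action = random.choice(["push", "wait"])
        if self.last_action == "push":
            world["value"] += 1
        return self.last_action

    def observe(self, world):
        reading = world["value"]
        changed = reading != self.last_reading
        self.last_reading = reading
        if changed and self.last_action == "push":
            return "I caused that"             # change followed my own action
        if changed:
            return "something happened to me"  # change arrived from outside
        return "nothing changed"

world = {"value": 0}
agent = ToyAgent()
for _ in range(5):
    agent.act(world)
    if random.random() < 0.3:
        world["value"] += 1                    # an external event the agent didn't cause
    print(agent.observe(world))
```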


Interacting with the world brings out another important piece of the conscious machine puzzle: we’re not just talking about computers; we’re talking about computers that learn (neural networks), and computers that can have experiences in the world. Consciousness, after all, exists to augment our experience in the world. 

That means that our best bet for a conscious machine will also be a robot. 


History professor Brian King summarizes what such a robot might look like, suggesting three possibilities in Philosophy Now: 

 

  • A possibly conscious robot could be made from artificial materials, either by copying human brain and body functions or by inventing new ones. 

  • Another way to make conscious robots could be to insert artificial parts and materials into a human nervous system to take the place of natural ones, so that finally everything is artificial. 

  • A robot could be made of artificial organic material. This possibility blurs the line between living and non-living material, but would possibly be the most likely option for the artificial production of a sentient, conscious being capable of feeling, since we know that organic material can produce consciousness. To produce such an artificial organism would probably necessitate creating artificial cells which would have some of the properties of organic cells, including the ability to multiply and assemble into coherent organs that could be assembled into bodies controlled by some kind of artificial organic brain. 

 

King also circles back to the point made above about ChatGPT – it responds in English, but it doesn’t understand anything in its response (which is the core of Searle’s argument). He addresses the importance of understanding to the question of robot consciousness:


“Certainly, robots can be made to use words to look as though they understand what’s being communicated – this is already happening (Alexa, Siri). But could there be something in the make-up of a robot which would allow us to say that it not only responds appropriately to our questions or instructions, but understands them as well? And what’s the difference between understanding what you say and acting and speaking as though you understood what you say? 


Well, what does it mean to understand something? Is it something more than just computing? Isn’t it being aware of exactly what it is you are computing? And what does that mean? 


One way of understanding understanding in general is to consider what’s going on when we understand something. The extra insight needed to go from not understanding something to understanding it – the achievement of understanding, so to speak – is like seeing something clearly, or perhaps comprehending it in terms of something simpler. So let’s say that when we understand how something works, we explain it in terms of other, simpler things, or things that are already understood that act as metaphors for what we want to explain. We’re internally visualising an already-understood model as a kind of metaphor for what is being considered. For instance, when Rutherford and Bohr created their model of the atom, they saw it as like a miniature Solar System. This model was useful in terms of making clear many features of the atom. So we can see understanding first in terms of metaphors which model key features of something. This requires there to be basic already-understood models in our thinking by which we understand more complex things.” 

 

 

From human to machine 

 

Gwent the alien scientist built a machine to replace himself, and in so doing, replicated his own fears and anxieties: 


“My experience over these years... travelling the universe alone... blind... dependent on Companion, has left me untrusting, suspicious, cynical, perhaps paranoid,” he tells Koenig and Bergman. “You see, having built this, yes, machine, to preserve my personality, I discovered too late its inherent weakness. I need... company. None of us exists except in relation to others. Alone, we cease to have personalities. Isolation. Do you understand?” 


That sounds like conscious experience, to be sure, and the implication is that once we’ve created machine consciousness, we’ll need to think long and hard about just how humanlike we really want it to be. We’d do well to remember the words of Isaac Asimov: 


“Can we help but wonder whether computers and robots may not eventually replace any human ability? Whether they may not replace human beings by rendering them obsolete? Whether artificial intelligence, of our own creation, is not fated to be our replacement as dominant entities on the planet?” 
