Scott Robinson

Asimov's Manifesto: Prescience for the Human in the Machine



Among the many bequests of Isaac Asimov to the human conversation was his take on the robot – its nature and role. Emerging in science fiction’s Golden Age, Asimov took robots – largely used as mechanical straw men in pulp fiction, metallic menaces – and made them what he realized they would actually become in the real world: tools, more than anything, and partners with humanity rather than threats. 


He proceeded to write stories and novels about them that came to redefine them both in science fiction and in the public consciousness. His creations tended to be exceptions to the passive standard of robot utility that he advocated, exceptions that proved his rule. By testing the limits of robot-human interaction in a carefully defined framework, he was able to tease out just what sort of attitudes and biases were likely to emerge in a humanity that was about to witness the rise of the robot.


Along the way, he came up with a vast array of interesting robots – from Robbie and Speedy and Herbie in the original I, Robot to R. Daneel Olivaw in The Caves of Steel, The Naked Sun, and beyond – each of whom presented us with something new to consider about the nature of machine intelligence and how it might impact society. And, of course, with his Three Laws of Robotics, he started a conversation about robot ethics that persists to this day.


But nothing in the Asimov robot canon compares to his most human robot, Andrew Martin. 


The story of the story is legend, now: an editor wrote to Asimov about a sci-fi collection themed around the US Bicentennial and asked him to contribute. He ran with just that one word, "bicentennial," and wrote a robot story having nothing to do with America's birthday: he created a robot that lived 200 years, pursuing the most unlikely of goals – to become a human being.


This, of course, is just the Pinocchio theme rendered in gears and circuits, and it’s now a thoroughly conventional idea, thanks to Star Trek’s Data, who arrived a decade after “The Bicentennial Man” was published. 


And it wasn’t just that Asimov got there first: in his 200-year tale of the robot that wanted to be a real boy, he covered Andrew Martin’s entire journey, from off-the-shelf domestic robot to world-changing human being. And the great part is – along the way, his story predicted almost everything of importance that has happened in the evolution of artificial intelligence since. 


Let’s take a look... 

 

 

Narrow AI vs. AGI 

 

When we meet Andrew Martin – an NDR model of domestic robot in the home of Gerald Martin – he is a fairly straightforward representation of a generalized AI. He is able to perform a wide range of tasks, as would be expected of a domestic robot; he is able to verbally communicate with the family he serves, able to understand and be understood. He is able to learn new tasks. And as his role leans more into being the playmate of Martin’s daughters, “Miss” and “Little Miss”, than doing actual housework, he becomes a part of the family. 


Over the course of the story, as generations of Martins come and go, Andrew demonstrates his generalized intelligence by realizing he requires a new skill he does not possess, then studying appropriately to acquire it. He becomes both an artist and an inventor, soaking up new problem-solving capacity through sheer exercise of intellect – the very definition of AGI. 


On the face of it, this is just a convenient narrative – write about a robot that wants to be human, and of course it will cultivate its generalized intelligence, since human intelligence is generalized. But the more interesting facet is how precisely Asimov anticipated the way we wrestle with AGI today, as AI takes its place in society.


In the story, domestic robots like Andrew have been brought to market by US Robots and Mechanical Men to ease the consuming public into the idea of Robots Among Us, after decades in which robots were kept off Earth and worked only in space, in deference to the fears of the masses. Generalized intelligence, it is thought, will make robots more human-like and therefore less scary.


This doesn't work out very well, and a few decades after Andrew's story begins, US Robots reverses its policy: it has found AGI to result in robots that are "too unpredictable" – Andrew Martin being the obvious example. When Andrew visits US Robots and speaks with Smythe-Robertson, its CEO, he is told, "Our robots are made with precision now and are trained precisely to do their jobs."


"Yes," replied Paul Charney, grandson of the now-departed Little Miss, "with the result that my receptionist must be guided at every point once events depart from the conventional, however slightly."


This exchange captures perfectly the current discussion about narrow AI (the kind we have today) and AGI (which is on its way): when we have AGI, will it be unpredictable? Uncontrollable? And on the other hand, is narrow AI too limited, in the long run? Won’t we soon need AI that is able to improvise, to generalize solutions and problem-solve when confronting something new? 


Asimov saw clearly that these would be important questions. 

 

 

The black box of deep learning 

 

At the time "The Bicentennial Man" was written, it was still widely believed that symbolic AI – based on symbol processing, or purely algorithmic programming – would inevitably yield true thinking machines. This was a natural assumption, since at the time, symbol processing was the only game in town; neural networks had fallen out of favor after the perceptron's limitations were exposed, and practical methods for training multi-layer networks were still years away. So strong was this assumption that the field mocked or ridiculed anyone who challenged it, especially if their name was Searle.


After 40 years of getting nowhere, the symbolic AI crowd was left behind, as sophisticated neural network systems – loosely modeled on the workings of neurons in human brains – produced the results that moved the field forward. Such systems, applied to vast oceans of data in selected domains, could surface patterns in that data that were otherwise beyond detection. Almost overnight, this style of processing – machine learning – began enabling a breathtaking array of applications, from medical diagnostics to image recognition to self-driving cars to fraud detection. Deep learning, its many-layered descendant, extended the approach to less structured data.


And Asimov saw it coming. 


He described Andrew’s positronic robot brain in terms that echoed the analogy of the neural network to the human nervous system. Andrew speaks of his “positronic pathways” as if he were talking about nerve fibers, even going so far as to describe his positronic activity in emotional terms: “[Making art] makes the circuits of my brain somehow flow more easily,” he said to Gerald Martin early in the story. “I have heard you use the word ‘enjoy’ and the way you use it fits the way I feel.”  


And the neural platform, whether etched in carbon or silicon, is indeed as uncertain as Asimov predicted when he put these words in the mouth of US Robots robopsychologist Merton Mansky (read: Marvin Minsky): 


“Robotics is not an exact art, Mr. Martin. I cannot explain it to you in detail, but the mathematics governing the plotting of the positronic pathways is far too complicated to permit of any but approximate solutions.” 


Nearly half a century after his story's publication, this is the reality of neural networks – the foundation of real-world AI.
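To make the "approximate solutions" point concrete, here is a minimal, purely illustrative sketch (plain Python with NumPy; nothing here comes from Asimov or from any production system) of a tiny neural network learning the XOR function. After training, the network gets the answers right, but its "knowledge" is smeared across matrices of weights that no human can read off as a rule – exactly the interpretability problem Mansky's line anticipates.

```python
# A tiny two-layer network trained on XOR -- a hypothetical illustration only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for two layers (2 inputs -> 8 hidden -> 1 output)
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent update
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 3))   # approximately [0, 1, 1, 0] -- the network has learned XOR
print(np.round(W1, 2))    # ...but these numbers explain nothing to a human reader
```

Scale those few dozen weights up to billions of parameters and you have the black box that interpretability researchers are still trying to pry open.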

 

 

Generative AI – making art, writing books 

 

Asimov’s Bicentennial Man is the very embodiment of the highest-profile implementation of AI in the here-and-now: systems that can write and create art. 

Andrew Martin creates a wood carving for Little Miss, and her father is so impressed that he challenges Andrew to create more. It turns out he has a knack for it, and before long both his wood carvings and custom furniture designs become the family business, bestowing generational wealth on the Martin family. 


Andrew acts intentionally in improving this ability, studying the craft and applying what he learns, and though much of the public's demand for his work rests on the novelty of its creator being a robot, the feat itself stands out.


Later, after Andrew has expressed a desire for freedom from ownership by a human, and after he has drawn attention to societal questions of robot rights and ethics, his study of his own kind leads him to begin writing a history of the evolution of robotics from the point of view of a robot. Put another way, he begins crafting a new narrative by harvesting useful information from old ones. He becomes, essentially, a walking, talking ChatGPT.


To be sure, Andrew Martin is presented not only as an AGI – which does not yet exist in the real world – but as a self-aware, conscious intelligence as well. So the parallels between his wood art and book-writing and the generative AI systems that currently produce art and text on demand are only in-principle parallels; no generative AI today can spontaneously decide to take up wood carving, or conclude that what the world really needs is a book about robotics written by a robot. That requires intentionality, a feature that is far beyond the capability of any existing AI.
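For contrast, here is a minimal sketch of what today's "text on demand" actually looks like in practice – assuming the Hugging Face transformers library and the small GPT-2 model, neither of which has anything to do with Asimov's story. Every bit of intention sits outside the model: a person supplies the prompt, the topic, and the length, and the system simply continues the text.

```python
# A minimal, hypothetical sketch of generative "text on demand" using the
# Hugging Face transformers library and the small GPT-2 model.
from transformers import pipeline

# Load a small pretrained language model as a text-generation pipeline
generator = pipeline("text-generation", model="gpt2")

# The prompt -- topic, framing, and length all come from the human user
prompt = "A history of robotics, as told by a robot, begins:"
result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)

print(result[0]["generated_text"])
```

The model never decides to write anything; it only ever responds to the prompt it is given – which is precisely the gap that Andrew Martin's intentionality closes.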

 

 

Machines designing machines 

 

Asimov was one of the first to posit machines designing machines – robots conjuring their own replacements. We've seen this a lot in both print and film; in Alien: Resurrection, we meet Call, an android designed and built by other androids, whose inner workings are unfathomable to mere humans; and in Steven Spielberg's A.I., we see the latest of many generations of robots designed by their predecessors, millennia after the extinction of the human race.


In Asimov’s formulation, Andrew Martin’s desire to be fully human leads him to begin designing an android body, containing synthetic organs that he himself has to design. As he replaces his robotic components with more-human synthetics, he becomes the inventor of a whole new field of science and manufacturing – prosthetology, which becomes an industry with the potential to extend human life across the board.  


That's another spot-on prediction, and already closer than most people are aware. AI is accelerating the pharmaceutical industry, generating and analyzing possible configurations of molecules, and approaches to synthesizing them, orders of magnitude faster than human researchers can (AI-assisted screening and analysis helped compress parts of the Covid vaccine and drug-discovery effort). The same applies to materials science, where AI can deliver new components for lightweight batteries, moving that industry (and, by extension, electric vehicles) ahead by years.


It won’t be long before most design work in tech is turned over to AI. Maybe that’s scary – but it’s coming up fast. 

 

 

Public apprehension over AI 

 

One of the most interesting aspects of Andrew Martin’s story – and other Asimov robot tales, as well – is the general public’s apprehension and fear of robots and AI. So great is this apprehension, in Asimov’s universe, that there are periods of decades when robots aren’t permitted on Earth, but can only be used in space. 


This idea, counterintuitive amid two generations of C-3PO- and R2-D2-loving kids, has sure enough surfaced with the advent of real-world AI. It takes only an hour or two of online news-surfing to trip over half a dozen articles about the dangers of AI – from the threat that it will break loose and begin taking over systems everywhere to the more realistic danger it poses to the job market and economy.


Some of this fear arises from ignorance. The notion that chatbots, for all their life-like mimicry of human service personnel, are actually conscious is ludicrous – yet, amazingly, not uncommon. And despite the ready availability of reliable information about the nature of those chatbots, and about the severe limitations of even the most sophisticated machine learning applications, many people will continue to believe ludicrous assertions about AI – and the media will go on abetting them, as it always does, for clicks.


When something like Andrew Martin does actually arrive – all bets are off, where public apprehension is concerned. As Asimov foresaw. 

 

 

One of the family – Consciousness, strange loops, and “I” 

 

The most endearing facet of the Andrew Martin story that falls under Asimov’s purview as AI prophet is the nature of Andrew’s consciousness. 


We’re still far from knowing all there is to know about the nature of consciousness, but we know much more than most people assume. As the disciplines of psychology and physiology have merged into the new field of neuroscience, we have learned several of the essential components of consciousness, and have some idea of how they interact. 


We've already touched on one of these – intentionality – above. In the philosophy of mind, this refers to the quality of mental states – thoughts and beliefs, desires and dreads – of being directed at some object, situation, or state of affairs. In everyday terms, it covers everything from deciding to prepare a meal to intending to lock the front door at night.


But there are other components preceding intentionality in conscious processes. To have intentionality, a conscious being must have an understanding of their own agency in the world; that they are an entity navigating an environment, able to act and affect that environment and objects in it. 


An understanding of one's own agency, in turn, requires a sense of self – that one is an "I" and that others are "they" – and an awareness of that self.


Self-awareness is often used synonymously with consciousness (though the two words do not mean precisely the same thing). No machine today is self-aware; no machine understands that it is a machine and is aware of what it is doing. Even the most self-aware-seeming systems – ChatGPT, say, which when asked whether it is conscious replies, with paradoxically apparent self-awareness, that it is not – have no idea that they are responding to a question, no awareness of the questioner, and no grasp at all of the content of their own response.


But all conscious beings – from humans to their house pets – are self-aware. They understand that they are a distinct I among they, and more significantly, sometimes part of a we. This joint perception of self and others is called the Theory of Mind in psychology – the understanding that other human beings have the same internal experiences we do. 


This isn't the entire parts list of consciousness, but it's a good start. We clearly see the I in Andrew Martin in his interactions with the Martin family and others, most prominently in his references to himself. We see his self-awareness in his understanding of his role in the family; in his perspective on the individual relationships; in his desire – and intention – to contribute to their well-being.


We see that self-awareness become more pronounced as he forms the intention to pursue humanity – a goal that is both highly abstract and very concrete. His sense of self, his I, is so strong that he is able to assess it and feel the overpowering impulse to alter it. Asimov got all of this right (not just with Andrew, but also with his humaniform robot detective R. Daneel Olivaw).


On the other hand... 


Any robot, conscious or not, would need these traits to implement the Three Laws. Even an utterly unconscious, self-driving-car-like robot would require a basic understanding of its own separateness from others and its agency in the world in order to make the distinctions required by the Laws. 


There must be more to consciousness to enable an Andrew Martin and elevate him above self-driving cars. 


The cognitive scientist Douglas Hofstadter, author of the Pulitzer-winning Gödel, Escher, Bach: An Eternal Golden Braid, provides a powerful answer. 


In GEB, Hofstadter posits the strange loop: the presence of recursive, self-referential patterns found in nature and art. Spiral shells are a non-human example; fugues by Bach, the art of M.C. Escher, and the theorems of Kurt Gödel are human instances Hofstadter cites.


He proposed that strange loops, and the human capacity to both sense and create them, represent a marker of advanced intelligence. This argument has stood up well in the decades since he proposed it. But he took the argument further years later, after the unexpected death of his wife.


He noted after her passing that many of her patterns of thought – her cognitive and emotional strange loops – persisted in him. This led him to look for them in his children, and to extend the idea: he proposed that the strange loop, as it is experienced in memory and cognition, is a component of consciousness. Not just in human beings, but in their pets (with whom they would also exchange strange loops) and in other animals as well, particularly those with strong social characteristics.


Asimov wrote three years before Hofstadter's first description of the strange loop, and a full 30 years before Hofstadter advanced the idea of strange loops as a foundation of consciousness. Yet we see them in the story of Andrew Martin.


Andrew gets his I from the Martin family, who treat him as one of their own. When Miss and Little Miss make a playmate of him, his sense of self intensifies, and his bond with Little Miss in particular flourishes. He absorbs the concept (and feeling) of loyalty from her, and the concept (and feeling) of respect from Sir. Throughout his journey, we see Andrew absorbing the strange loops of those he is close to – just as Hofstadter described. Andrew becomes the sum total of the strange loops he has soaked up during his journey, including those with whom he works closely after the Martin family line is ended. 


Of course, Asimov may never have even heard of strange loops, and he wrote before he possibly could have. What went on inside Andrew Martin could be the same dynamic we as humans observe in ourselves and those we are close to: children pick up traits from their parents; friends and partners exchange traits. This is the human experience; all Hofstadter did was suggest a neurophysiological stratum for it to live in.


And yet – in building it into Andrew Martin, Asimov demonstrated a deep understanding of conscious experience, and was able to provide a window into its development in a robot. So well-conceived was this window that Andrew gives us insight into our own relationships and self-understanding.  

 

Always the prophet, Asimov, anticipating the nuances of space travel, home computers, the coming of the Internet. With Andrew Martin, he foresaw – more than any of his peers – exactly how AI would be when it got here. 
