Scott Robinson

The Octant Androids



I recently finished a book, AI in Sci-Fi: Fictional Artificial Minds and the Real World Awaiting Them, in which I survey the many AIs and androids of popular science fiction – HAL 9000, Trek’s Data, the Terminator, the androids of the Alien movies and Westworld – and evaluate them, one-by-one, to see which of them we’re likely to someday see among us. 

This has been, as you might guess, unbelievably fun for me, as my professional corpus mingles with my pop culture passions. And, as you also might guess, there are myriad philosophical delights embedded in the pursuit, one of which I’d like to offer up to you, the Gentle Reader: what can android communities tell us about diversity?

One-off androids such as Ash and Bishop and the T-800 are delicious confections, of course, but too little scrutiny has gone into android groups – largely because so few have been presented to us. But this month, at the end of Star Trek: Picard’s first season, we’ve been blessed with a new one: the synths of Coppelius, a colony of android children of Commander Data.

These synths possess minds based on the engrams of Data himself, though they are not “copies” of Data. They share his cognitive gifts and inclinations, and to a large extent, his emotions (which he acquired in his latter years).

They don’t all look and act exactly the same, but they’re a pretty homogeneous bunch. Working backward through sci-fi’s other android communities, we see that this is more often the case than not.

The androids of Coppelius are not the only example of android community homogeneity; think back to the inhabitants of Mudd’s World in the original Trek. And let’s not even mention the Stepford Wives.

Is there any merit in an android society where diversity is minimized or absent, by design? What benefits would diversity bring to such a society? How could it be achieved? And how would contemplating these questions inform the subject of diversity in our human world?

These questions aren’t addressed in the AI Sci-Fi book, which focuses on the nuts and bolts of machine consciousness (though perhaps they should have been); buy it anyway (run, don’t walk)! As for the questions, let’s address them here.

Our own cognitive diversity is described by a framework we casually call the Octants – eight personality sectors defined by our individual differences in social cognition and primal emotional responses.

A person falls into a particular Octant based on three axes of cognition: Authoritarian (favoring social hierarchy) vs. Egalitarian (favoring social consensus); Threat-Scanning vs. Opportunity-Scanning; and Novelty-Seeking vs. Uniformity-Seeking. Everyone falls somewhere between the two extremes on each of these three lines, which measure their cognitive/emotional tendencies – and there are eight possible combinations, given the stronger impulse for each.

For example, a person might have Authoritarian, Threat-Scanning, Uniformity-Seeking tendencies (ATU); we’d call that person a right-wing conservative. Conversely, another might have Egalitarian, Opportunity-Scanning, Novelty-Seeking impulses (EON); we’d call that person a progressive liberal.

The Octants are not necessarily political, of course; they can define a businessman (Steve Jobs – AON) or a rock musician (Syd Barrett – ETN) or the Amish (ETU). People lean right or left or neither in their social thinking based on where they sit among the Octants; it’s a quick positioning of their worldview and self-concept, as well as a predictor of what they have to offer the group in which they’re a member.
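For the curious, those eight combinations are just the product of the three axes. Here’s a quick sketch in Python (mine, purely illustrative; the axis initials are the same ones used in the labels above):

```python
# Enumerate the eight Octant labels implied by the three axes.
from itertools import product

AXES = [("A", "E"),   # Authoritarian vs. Egalitarian
        ("T", "O"),   # Threat-Scanning vs. Opportunity-Scanning
        ("N", "U")]   # Novelty-Seeking vs. Uniformity-Seeking

octants = ["".join(combo) for combo in product(*AXES)]
print(octants)
# ['ATN', 'ATU', 'AON', 'AOU', 'ETN', 'ETU', 'EON', 'EOU'] -- 2 x 2 x 2 = 8
```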

Could we build androids with Octant-like variety? 

Let’s think about what that would entail. In human beings, the biases pushing us in one direction or another on any of the three axes mentioned above are mostly determined by the amount of tissue in a particular brain region. For example, the Threat-Scanning/Opportunity-Scanning axis is a consequence of the size of an individual’s right amygdala. Among other things, this area of the brain is the seat of the fight-or-flight impulse. A hefty amount of tissue here means that an individual’s fear response will be greater than that of someone who has significantly less. The former will tend to be risk-averse; the latter will be more of a risk-taker, and consequently more focused on opportunity.

It would not be difficult to tune an artificial mind in the same manner. Neural networks – distributed processing grids resembling interconnected neurons – deliver results based on changes in weighted values stored in each grid node. It would not be prohibitively difficult to bias a particular android neural network toward one Octant pole or another from the outset, with the outcome that the bias would be strengthened over time. Referring back to our example, we could increase node weights in favor of risk aversion, giving the android hesitation in the face of perceived threat.
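To make that concrete, here’s a minimal sketch (my own, not a real android control system; the network, its inputs, and the “caution score” are all hypothetical): the same tiny threat-appraisal network, seeded with initial weights biased toward opposite poles, reacts differently to identical sensory input.

```python
# A toy "threat appraisal" network: initial weights biased toward one pole
# or the other produce different caution levels for the same stimulus.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_appraisal_weights(threat_bias, rng, n_inputs=4):
    # Random initial weights, shifted toward (+) or away from (-) the threat pole.
    return rng.normal(loc=threat_bias, scale=0.1, size=n_inputs)

def caution_score(weights, sensory_input):
    # 0.0 = fearless, 1.0 = maximally cautious.
    return sigmoid(np.dot(weights, sensory_input))

rng = np.random.default_rng(42)
stimulus = np.array([0.5, 0.2, 0.8, 0.1])                # the same perceived scene

threat_scanner = make_appraisal_weights(+1.0, rng)       # risk-averse tuning
opportunity_scanner = make_appraisal_weights(-1.0, rng)  # risk-taking tuning

print(caution_score(threat_scanner, stimulus))           # high -> hesitates
print(caution_score(opportunity_scanner, stimulus))      # low  -> presses on
```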

The same applies to social bias (comfort with hierarchy vs. comfort with consensus) and sensitivity to change (inclination for novelty-seeking vs. inclination for the status quo). It would just be a matter of working out the favorable initial node weights. Not trivial, but certainly doable.
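One way to picture “working out the favorable initial node weights” is to treat each axis as a signed bias that gets folded into the relevant weights at initialization. The sketch below is hypothetical (the OctantTuning name and fields are mine, not an established design):

```python
# A hypothetical three-axis tuning: each signed bias would be folded into
# the initial weights of the networks governing that axis.
from dataclasses import dataclass

@dataclass
class OctantTuning:
    hierarchy_bias: float   # + leans Authoritarian, - leans Egalitarian
    threat_bias: float      # + leans Threat-Scanning, - leans Opportunity-Scanning
    uniformity_bias: float  # + leans Uniformity-Seeking, - leans Novelty-Seeking

    def label(self) -> str:
        # Reduce the three biases to the Octant shorthand used above.
        return (("A" if self.hierarchy_bias >= 0 else "E")
                + ("T" if self.threat_bias >= 0 else "O")
                + ("U" if self.uniformity_bias >= 0 else "N"))
```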

Should we build androids with Octant variety? 

Without question, we boost an individual android’s prospects of operational success if we tune its mind in this human-like fashion, as long as that tuning is in accordance with its anticipated tasking. Pursuing our earlier example, let’s say we need two androids – one to scout unfamiliar terrain for missing soldiers in a war zone, another to guard a munitions dump. We would tune the neural networks of the first android to be more sensitive to the detection of soldiers in peril than to danger to itself, and we’d tune the second one to scan for anything whatsoever within its field of perception that might pose a threat.
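Reusing the hypothetical OctantTuning sketch from above, the scout and the guard might be tuned like this (the numbers are arbitrary; only their signs matter):

```python
# Two opposite tunings for two different taskings (illustrative values only).
scout = OctantTuning(hierarchy_bias=-0.2, threat_bias=-1.0, uniformity_bias=-0.5)
guard = OctantTuning(hierarchy_bias=+0.5, threat_bias=+1.0, uniformity_bias=+0.8)

print(scout.label())   # 'EON' -- opportunity-scanning, goes looking for the lost
print(guard.label())   # 'ATU' -- threat-scanning, stays put and watches everything
```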

This is just fine, when it comes to outcomes for individual androids; but what happens when we unleash android groups with this Octant-style tuning? 

If the group is homogeneous, composed of androids with the same or similar Octant tunings, that’s a good thing, right? Twenty Threat-Scanning androids are better than just one for guarding that munitions dump. Likewise, 20 androids scouring the countryside for wounded soldiers will outperform the one. Within such a group, every android’s decisions and behavior will reinforce those of all the others.

But what happens when one group encounters the other? All hell breaks loose; absent any preventative protocol, two groups of cognitively homogeneous androids with polar biases run the great risk of utterly misinterpreting one another, setting up a high-gain feedback loop. Hard to imagine that ending well.

All right, then, what happens in a heterogeneous group? Say, 20 androids of randomized Octant tuning?

Here we get another outcome altogether, with little of that destructive mutual reinforcement; instead, we can envision such androids observing one another’s diverse decision-making and behaviors and achieving an oscillating equilibrium – keeping one another in balance. This stability not only keeps them efficiently on task; it opens up their decision-making options in ways solo performance or homogeneous group performance never could; they keep each other safe, while strengthening and improving one another.
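To make that intuition concrete, here’s a deliberately crude toy simulation (entirely my own sketch; the gains, baseline, and noise are invented): each android reads the group’s average alertness and responds according to its tuning, with threat-scanners amplifying what they see and opportunity-scanners discounting it.

```python
# Toy model: homogeneous groups run to the extremes; a mixed group holds a balance.
import numpy as np

def simulate(gains, steps=50, seed=7):
    rng = np.random.default_rng(seed)
    alert = np.full(len(gains), 0.5)          # everyone starts moderately alert
    for _ in range(steps):
        group_mean = alert.mean()
        # baseline arousal + each android's gain-scaled read of the group + sensor noise
        alert = np.clip(0.05 + gains * group_mean
                        + rng.normal(0.0, 0.02, len(gains)), 0.0, 1.0)
    return alert.mean()

THREAT, OPPORTUNITY = 1.4, 0.4                # amplifying vs. discounting gains

print(simulate(np.full(20, THREAT)))                   # ~1.0  runaway alarm
print(simulate(np.full(20, OPPORTUNITY)))              # ~0.1  blind to danger
print(simulate(np.array([THREAT, OPPORTUNITY] * 10)))  # ~0.5  held in balance
```

In this toy world the two homogeneous groups peg themselves at one extreme or the other, while the mixed group’s average alertness hovers near the middle, jittering with the noise rather than running away.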

There’s something to be learned there...
