Christmas, AI and 'The Uplift Wars'
Dec 23, 2004 5:00 AM PT
I've been rereading David Brin's first Uplift series -- as astonishingly self-consistent a vision of galactic life as any science fiction writer has ever offered, and quite appropriate to the Christmas season. In Brin's imaginary universe, a mysterious and long-gone race known as the Progenitors set in place a unity of life across five galaxies, largely by focusing moral valuations around the development and protection of sentience.
In the books, first contact between humans and these galactic races is presented as having happened after humans brought dolphins and chimps to full sentience. That makes humanity not only a wolfling culture -- one that evolved intelligence without outside help -- but also one with the prestige of already having two "clients" of its own. That combination engenders both fear and envy among the galactics, many of whom don't believe wolflings possible and deeply resent humanity for having the sheer effrontery to compound the crime of existence by "uplifting" dolphins and chimps without galactic help.
It is those emotions set against the relationships, real and implied, between humans and members of other cultures that fuel the story lines in the books.
The books are wonderful, but they don't tell you anything about aliens because, of course, the emotions and relationships are, like the political structures and expedients, all as human as the author. Nothing else is, I believe, possible for sane authors, since a human writing about the sound a tree makes falling in the forest can only reflect that sound as heard, or imagined, through human ears. We're pretty closely related to dolphins, but I very much doubt we're capable of imagining how a dolphin would understand that event, because the things that make life beautiful to it are not the same as the things that make life beautiful to us.
It is shared evolution, not language, that unites humanity: We are what we see, our behaviors, responses and feelings co-evolved with our perceptions of the world outside ourselves. Thus, most people experience roughly the same emotional reaction to the sight or idea of a sheltered valley, but we have no idea what evolutionary response the same stimulus would trigger in a dolphin. More to the point of this column, we have no idea what it should mean to an artificial intelligence, supposing we were able to build one.
Tracy Kidder's book, The Soul of a New Machine, isn't about the soul of the machine at all, but about the commitment of the engineers developing it. Implicitly, however, there are assumptions of both value and transfer in the book: value in the sense that the human commitment, emotions and drives are assumed to be worthwhile, and transfer in the sense that the effect of these factors among the developers is presented as adding value to the machine.
No Working Definition of Intelligence
You don't see consideration of anything remotely like that in the writings credited as fundamental among the artificial intelligence community. In fact, look closely at the literature in that field and it appears that none of the basic problems affecting artificial intelligence have been addressed by anyone recognized as important among the reigning experts. There isn't, for example, a working definition of intelligence that can be used to unambiguously differentiate what is, and is not, intelligent. Apparently, they'll know it when they see it -- and meanwhile there's no need for them to know what they're working toward in order to work toward it.
In fact, after reviewing the literature, it's not hard to believe that the longevity of the Turing Test (conversation indistinguishable from a human's) as a working definition of intelligence reflects a general tendency among liberals to believe that nearly all of the people they don't know are subnormal. And it's correspondingly hard not to wonder whether the liberal repudiation of religion isn't partially motivated by religion's insistence on the fundamental equality of man.
The galactic cultures in Brin's series see themselves as doing the work of God: transforming potential intelligence produced through natural evolution into true sapience through the uplift process. In their system of belief, Darwinian evolution cannot make this step by itself; thus the occasional appearance of a wolfling species like humans, or the Progenitors themselves, reveals the hidden hand of God.
Brin admits machine intelligences into his uplift universe, but doesn't give them the obligations that come with sapience. There is a deeply environmentalist position here, in that making the jump from intelligence to sapience is presented as requiring a moment when the intelligence looks outside itself and sees beauty in its own fundamental unity with the environment in which it evolved. Thus Brin offers a position that accepts both evolution and creation: to his galactics, evolution can take an organism to pre-sentience, but it takes external intervention to put a soul into that new machine -- an event he places 50,000 years in our own past.
It is this event that powers the religious message in the Brin books, and it is the complete absence of any consideration of such an event that dooms the present coterie of AI researchers to continued irrelevance -- whether or not we've already evolved computers to the point that uplift is actually possible.
Paul Murphy, a LinuxInsider columnist, wrote and published The Unix Guide to Defenestration. Murphy is a 20-year veteran of the IT consulting industry, specializing in Unix and Unix-related management issues.