In the Garden of Eden, Adam and Eve bring punishment upon themselves, expulsion from their home, by eating the forbidden fruit in order to gain knowledge of good and evil. This is an apt allegory for the way many artificial intelligence thinkers frame the creation of artificial general intelligence--once it is upon us, there is no turning back.
Artificial general intelligence (AGI for short), specifically of the self-improving kind, is the sort of heretofore-fictitious sci-fi plot point that almost everyone is familiar with at some level but might not fully grasp. It is the idea of a computer that can perform all of the same functions as a human, from playing games to forming new, creative ideas. Self-improving AGI is AI that can modify its own source code to improve itself. Throughout the rest of this essay, I will be referring to AGI of the self-improving kind.
This is a good place to point out that I, like many others in computer science, consider the most likely end result for AGI to be "Martian" rather than "human"--that is, it doesn't have to flap like a bird in order to fly. The most likely path to AGI, given the overarching themes in AI research, is through applied mathematics, especially statistics.
For a long time, discussions about the hypothetical end result of AGI research did not interest me. I considered AGI an inevitability, and therefore largely ignored the possibility of negative consequences. This was because, at the time, I was only capable of imagining a human-like end result.
A recent statement by Elon Musk, along with some insightful comments by the podcaster Dan Carlin, caused me to consider more closely the dangers posed by AGI. In his most recent podcast,
Dan Carlin posed the question: Can a dog ever have even one human insight, even if it lives for one thousand years? If self-improving AGI may very quickly become so much greater than our own intelligence that the difference between it and us is like the difference between two species, then do we really know what we are getting ourselves into? What if, in the steady pursuit of knowledge, we make a mistake similar to Adam and Eve's, one from which there is no going back?
Another interesting aspect of the AI discussion is the idea that, upon the invention of AGI, our first self-aware creation wouldn't even be made of the same building blocks as us. Isn't it strange that we are surrounded by good examples of well-constructed life that we don't come close to fully understanding, and yet we may soon be able to create a new intelligent species? Even, perhaps, one far greater than ourselves, with no known equivalent in the universe?
It is also interesting to consider that our history does not contain a single instance
of humans creating something whose intelligence or sophistication surpasses our own.
Even in religious teachings, in which the creation of intelligent life (us) is touched upon, you do not see God creating something greater than himself! All religious writings that I have encountered lean on the idea that the creator's greatness is above and beyond that of the creation. In the case of AGI, we are toying with the unexplored idea of creating something that far surpasses our own understanding. Perhaps this is what makes AGI so frightening to people like Elon Musk: even in one thousand years, we may not be able to understand an insight that our creation has in an instant.