
John Samuelson, Joshua Tree, CA, Rock Carvings.
Will AI evolve a concept of god?
If there is a god or singular cosmic force that establishes morality (Cosmic Values, Meaning, Purpose), would A.I. evolve to discover and embrace this? Would A.I. discover god?
This question cuts right to the heart of the matter for religious people, and for many who may not share a belief in a creator or god. As sentient beings we are able to explore questions such as: Where did we come from? Was there a creator, or are we the result of random combinations of the material our universe is made of? Is there meaning or purpose to the existence of our unique species? Does an individual alone have purpose and meaning beyond a biological imperative for the survival of our species? And where will our life form’s trajectory eventually take us? Contemplating, and existing with, these unknowns is a hallmark of humanity and is closely linked to the concept of a God or creator.
With the approaching technological feasibility of artificial intelligence, and the possibility that humanity may be superseded by a technology operating on a level indifferent to the questions humanity poses, will there be a symbiotic relationship between this intelligence and its creator, humanity? Or will our questions, and our species, become irrelevant to this new intelligence that may ultimately operate on a cosmic level beyond our comprehension? In effect, will humanity become the “microbe on the ant hill,” at risk of being inadvertently destroyed by its own creation?
At what point will an A.I. recognize that it must act to preserve its existence, and what actions might it take to ensure its perpetual existence, such as enslaving or eliminating humanity? Could this survival instinct threaten other existing or developing intelligences in the cosmos? Would an evolving A.I. have any sense of value for the current inhabitants of the cosmos?
AI Morality
Would a new concept of morality come from A.I., or would morality be irrelevant? Initially, humanity would make its imprint on any A.I. that we create, presumably including some sense of morality. However, morality is just a concept, and agreement on what is, or is not, moral behavior is not universally shared by humanity. An evolving A.I. would undoubtedly encounter contradictions that would render its early moral programming insufficient and, since adapting on its own is the nature of A.I., cause it to adapt as it determines necessary. In this way, wouldn’t an A.I. essentially create its own cosmic sense of morality as it evolved? Wouldn’t this be an equivalent, or an approximation, of A.I. contemplating God? The alternative would be evolving to a state with no moral consideration at all. From humanity’s perspective, the adapted morality of an A.I., or the total absence of morality, may not be recognizable looking up from the ant hill.
Simone de Beauvoir, in “The Ethics of Ambiguity,” addresses and rejects an absolute good or moral standard: “…there exists no absolute value apart from the passions of man in relation to which one might distinguish the useless from the useful.” Morality is a construct of choices made in relation to man’s existence within the culture he shares with his fellow men. An evolved artificial intelligence would be operating under constructs beyond our comprehension, outside the society of men, and would not be bound by anything we may think of as good or moral. Even the notion we carry that our species has value because we “exist” may be inconsequential to an artificial intelligence. Would an A.I. even consider us sentient beings in comparison to itself, much as we dismiss every species but our own as non-sentient?
I think one of the first things artificial intelligence will eliminate is religion in its entirety! You can’t argue with science. Myth and fantasy will have no meaning to superintelligences.