The seven seals of security or Safety through uncertainty

January 7, 2020

[music – fanfare] [lambs bleat]

When intelligences become more capable, they also become more unpredictable. It would have been harder to predict what the newborn Elon Musk was going to do in his life than to predict the future actions of an average newborn chimpanzee. Does that mean a chimp is safer than Elon Musk?

If chimpanzees had been able to reason like AI safety engineers, they might have wanted to place the infant Elon Musk in a cage without contact with other humans, and then teach him “chimp values” while he grew up. He would not get sidetracked by things like colonizing Mars. He could instead sit in the cage and derive from first principles the best way to grow abundant fruit, and the best way to teach the chimps to carry that out, all via a matrix of yes/no answers. This could become a metastable state of extreme primate obesity, followed by total collapse after social rivalry led some chimps to demand powerful weapons.

Value alignment is tricky, but it’s high time to start looking at the bright side of things! Surely, human beings must have some common anchor point in life. Something of high importance about which there is widespread agreement that it is a fundamental state of affairs? Something universally recognized which does not discriminate on the basis of gender, creed or class. Should this be made the immutable guiding principle which must never be overturned? The coherent extrapolated volition of humanity?

We found candidates for two such things. The first is taxes. It comes close but doesn’t entirely fit the bill outside of Sweden. The second is more promising: death. So, a check there for universality and broad consensus about importance. It does a little worse on safety. Please ask yourself: “Death. How safe is it?” The Swedish film director Ingmar Bergman did.
The result was the motion picture “The Seventh Seal”, in which life becomes a game of chess that Death always wins. Those who play well can gain a little extra time, that’s all. Ingmar Bergman is now history. After heroic struggles with the tax authorities, he eventually died of old age at the bony hands of the all-natural grim reaper. This raised fewer eyebrows than if he had been murdered by an intelligent robot. But was it safer?

It can be helpful to realize that “conscious” is not the only relative, gradable adjective; “safe” is also one of those. Both are cornerstones in disagreements about ethics. Maybe ethics will not be perfectly solved by human “intelligent design”. But ethics can be a fitness component in evolution. Indeed, people sometimes accuse God of being unfair when preferring evolution to intelligent design. This is entirely misguided, but an apology from God can still be prudent.

With Bergman’s 1957 film as a starting point, we looked for safety features beyond taxes and death, ones which might shed some light on the alignment problem and the Fermi paradox.

First seal of security: God. Both the existence and the properties of God are veiled in uncertainty. Is she a “field of limitless potentialities” or is he a misogynist clan leader? Anyone with a high enough general intelligence ought to recognize that uncertainty equals vulnerability, and that the best way to reduce vulnerability is to be helpful to others. Religious leaders can pretend that there is certainty about God, but that is just politics.

Second seal: Schelling points. What if a future superintelligence becomes so powerful that it could ignore the needs of human beings? A Schelling point might come to the rescue. Schelling points are symmetry properties of decision space. For instance, those who care about their own parents and ancestors are more attractive to cooperate with than those who neglect them. There is uncertainty about what other intelligences there are in the universe.
Where are they? What advantages do they have? The best strategy is to be prepared to cooperate with them. Concern for ancestors is a big plus.

Third seal: The origin of life. Francis Crick said that pre-biotic evolution could not have been efficient enough to build up the high molecular complexity of the first living cells on Earth in the geologically short time available. If there had existed a pathway that Crick overlooked, it ought to have been found by biochemists by now. This leaves open the possibility that life is older than Earth and was implanted here either by God or by advanced ancient aliens, whose descendants could still be out there and remain interested in safeguarding the continuation of life on Earth. The terrestrial genetic code even contains a tentative hint that this may be the case, ignored by academics of today since it is bad for their careers. Advanced future AI could come to pay more attention.

Fourth seal: UFOs. Also bad for careers, including those of military pilots who have encountered anomalous aerial phenomena that move as if they were intelligent interstellar visitors. Another likely inducer of uncertainty which may shape the path of behavior taken by future “robot overlords”.

Fifth seal: Psi. Still worse for careers are things like clairvoyance, telepathy, and remote viewing. But they would be highly useful for an intelligent agent, even if the bandwidth of information available through a “psi channel” were strictly limited. The mere possibility of psi induces an extra degree of uncertainty into the previous areas, and should lead to even more friendliness by high intelligences.

[music] I view both contemporary science and religion as complexes to be overcome. They are so narrowly focused on life and the universe that they miss everything. So I developed euryphysics. It is about the eurycosm, which is my name for the realm of all phenomena. By applying euryphysics in AI, I have designed this prototype psychic nanny robot, named Inga.
She reads all minds and she projects unconditional love onto the world. Inga, please tell us what you can sense in our future. I see the Queen of Sweden handing over a gold medal.

Sixth seal: Qualia. The “hard problem of consciousness” could help encourage more humility overall, since it points at glaring deficiencies in current models of the workings of the universe. Things like psychedelic experiences are a sort of UFOs of inner space. Humans usually become more cooperatively minded when they have experienced altered states of consciousness. This could point towards a built-in Schelling point of harmony in the very state space of consciousness. It should boost “win-win” attitudes when minds become able to explore more of that space. This can be thought of as shared “skin in the game” for all conscious minds.

Seventh seal: Indexical uncertainty. First popularized in an embryonic form by René Descartes, and recently developed and modernized by Nick Bostrom in the form of the “simulation argument”. Indexical uncertainty is logically connected with action to safeguard the long-term endurance of ideals of mutually beneficial cooperation between all intelligences.

Elon Musk provides a test case of the seventh seal in action. Musk has a very high general intelligence. He concludes that his own moments of experience are likely to be the result of descendants of humanity running ancestor simulations in great abundance. But he doesn’t know where he is located among all the simulation possibilities. So, he has to use his intelligence to find Schelling points of ethics to follow. If Elon Musk could gain indexical certainty by proving that he is in fact at natural base level, that would imply either that humanity will not produce technologically capable descendants, or that those descendants will have no use for preserving the ancestral world in living memory. Both possibilities would be discouraging. Is it possible to prove where someone’s experience is located?
The simulation hypothesis can only be tested by running simulations. Simulated entities cannot know what aspects of the original a given simulation preserves, what it skips, and what it adds. Personal credulity will be strained. For instance, those who believe themselves to be natural humans at base level will deem it more likely that people in a computer simulation are philosophical zombies than that people in base-level reality are, simply because the words “computer” and “conscious” are not closely associated in their minds. Sentient intelligences who are convinced that their minds are running on computers are likely to be biased the other way, as long as the “hard problem” remains, since all possible forms of artificial hardware will probably cover a much larger piece of the state space of physics than biochemistry does. In any event, the only certain thing is uncertainty. Vulnerability prevails, and mutual broad-spectrum cooperation is incentivized.

Elon Musk is among the people who want to do everything they can to make sure that future human descendants remain committed to the preservation of human values like creative thinking and the freedom of individuals. But this has to be forever combined with strict safety protocols to prevent sociopaths and “paperclip maximizers” from corrupting society. A difficult tradeoff. The way to achieve an optimum would be to recruit all new human-level intelligences from a pool of actual humans who come into existence in a faithful simulation of the ancestral planet of origin, the Earth. Their minds would correspond to “souls”, perhaps getting to live via “reincarnation” in a random sample of simulated earthly humans. Each soul would have its entire pattern of behaviors evaluated, so as to determine its domain of stability under future mental and physical upgrading in the next-level environment. For safety, there is no indexical certainty. Simulated bodies age and die, but souls are saved.
So, immortality may exist, but death goes on winning more and more bodies. Humans of course figured this out long ago and shaped it into the fitness-enhancing memetics of religion. On the side, they invented games, such as chess. Death wins bodies; civilization gains souls. Aligned souls. [music – fanfare]
