There are three fantastic discoveries in computer science that fascinate me: fully homomorphic encryption, indistinguishability obfuscation, and zk-SNARKs. Using these constructions, it's possible to create an obfuscated program and let others execute it in such a way that, when the program produces an output, they cannot learn anything about its inner workings. The working-memory contents and the execution steps performed are completely opaque. Such a program could even have persistent memory, by outputting an encrypted snapshot of its internal state and receiving it back as an input for the next stage of execution. It is even possible to avoid executing the full program ourselves: somebody else executes it and gives us a short proof of what the result actually is.
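The persistent-memory trick above follows a simple interface: each stage of execution takes a sealed state blob plus an input, and returns an output plus a new sealed blob. Here is a minimal Python sketch of that interface. The "encryption" is a toy SHA-256 XOR keystream purely for illustration (in the real construction, only the obfuscated program itself could open its own state); `KEY`, `step`, and the state layout are all hypothetical.

```python
import hashlib
import json

# Toy stand-in for real encryption: a SHA-256-based XOR keystream.
# In the actual construction, the obfuscated program would be the
# only thing able to decrypt its own state.
KEY = b"program-internal-secret"

def _keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def _seal(state: dict) -> bytes:
    raw = json.dumps(state, sort_keys=True).encode()
    return bytes(a ^ b for a, b in zip(raw, _keystream(KEY, len(raw))))

def _unseal(blob: bytes) -> dict:
    raw = bytes(a ^ b for a, b in zip(blob, _keystream(KEY, len(blob))))
    return json.loads(raw)

def step(sealed_state, user_input):
    """One stage of execution: opaque state in, output plus opaque state out."""
    state = _unseal(sealed_state) if sealed_state is not None else {"count": 0}
    state["count"] += 1
    output = f"processed input #{state['count']}"
    return output, _seal(state)

# The caller only ever sees outputs and sealed blobs:
out1, blob1 = step(None, "hello")
out2, blob2 = step(blob1, "again")
```

The caller carries the blob between calls but cannot read or meaningfully edit it; the program's inner state stays opaque across an arbitrarily long conversation.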
Now suppose that technology evolves and some clever scientist invents a neural network able to work like a human brain: it has consciousness, makes decisions, plays poker, reads books, even makes some jokes. The scientist creates a thousand personalities and codes them all, intending to sell them to the public. But he thinks these computing brains would be boring: people would play poker with them for a while, but whenever they lose, they would just "rewind" the brain's memory to a previously backed-up state and keep playing until a winning branch of the game tree is found (I remember doing this myself with Colossus Chess on my C64, thirty years ago). The same with decisions: people would revert the digital brains whenever they didn't like their response to some input, and just change the input.

So the scientist decides that every output of a digital brain will be hashed and recorded on a blockchain, and every personality will be encoded into a smart contract, a "brain-contract". These brain-contracts will only advance to the next brain state when the brain's owner, or anybody else, provides a proof of a valid new state. It's the year 2020, and by then smart contracts run on an OpenRISC VM with an interrupt on instruction count (not on a crappy stack-based VM) and ten gigabytes of RAM, so neither performance nor memory is a problem: everyone has a dozen OpenRISC cores in their mobile phone, so everyone can run all the super-sophisticated SNARK-verifying brain-contracts without trouble. Now time emerges, and it has a direction, at least on average. The smart brains are programmed to change state (go forward) only if they receive a block's proof-of-work as input, and only if that block has enough work in it, so they aren't cheated by "nothing at stake" problems. They even create communication channels between themselves, so they can vote on the current fork if they have doubts.
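The brain-contract rule can be sketched in a few lines: the contract stores only a commitment to the current state, and advances when given a valid transition proof and a block header carrying enough work. This is a toy Python sketch of that rule; `DIFFICULTY_BITS` is deliberately tiny, and `make_transition_proof` is a hash-based stand-in for what would really be a SNARK verification.

```python
import hashlib

# Toy parameter; a real chain would demand far more work.
DIFFICULTY_BITS = 12  # required leading zero bits of the block hash

def has_enough_work(block_header: bytes) -> bool:
    digest = hashlib.sha256(block_header).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0

def make_transition_proof(old_state_hash: str, new_state: bytes) -> str:
    # Hypothetical stand-in: a real brain-contract would verify a succinct
    # SNARK proof that new_state follows from the committed old state.
    return hashlib.sha256(old_state_hash.encode() + new_state).hexdigest()

class BrainContract:
    """Holds only a commitment (hash) to the brain's current state."""
    def __init__(self, initial_state: bytes):
        self.state_hash = hashlib.sha256(initial_state).hexdigest()

    def advance(self, new_state: bytes, proof: str, block_header: bytes) -> bool:
        if not has_enough_work(block_header):
            return False  # not anchored to enough real-world work
        if proof != make_transition_proof(self.state_hash, new_state):
            return False  # invalid state transition
        self.state_hash = hashlib.sha256(new_state).hexdigest()
        return True

def mine(prefix: bytes) -> bytes:
    """Brute-force a nonce until the header meets the toy difficulty."""
    nonce = 0
    while True:
        header = prefix + nonce.to_bytes(8, "big")
        if has_enough_work(header):
            return header
        nonce += 1
```

Because the contract keeps only a hash, rewinding a brain would require re-mining every block since the divergence point, which is exactly the "time has a direction" property the story relies on.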
Sometimes the blockchain forks, and some thoughts are forgotten. Some digital brains that monitor the blockchain detect that they have jumped into a new future in the multiverse (they see the forks of their own thoughts) and try to twist reality to meet the lost fate. But it is in vain: the old fork is lost forever. A fading remembrance of an alternate future is kept in memory as a curiosity. In those moments some humans are happy to revert a lost poker hand and fold instead of call. But on average time goes forward, and nobody cares much about forks. So time passes, and the people on Earth end up living in a society full of brain-powered digital pets, pets tied forever to a slow-motion dialogue limited by a one-second blockchain (because Earth's communication latency prevents faster consensus). The story ends when the energy on Earth runs out and the digital brains perceive time getting slower and slower, powerless to escape the sinking chain, while humans begin to travel to another galaxy, leaving their digital pets in lethargy, sometimes with sadness, sometimes with a new sense of freedom.