Heads up: the aliens are coming

Imagine we received a message informing the human race that in a decade we would be visited by a superior, more evolved, more intelligent species.
What would our reaction be?
Would we prepare the welcome party and sing kumbaya, rejoicing at the possibilities for human knowledge and the eradication of pain, hunger and disease? Would we be suspicious of the new guests, uncomfortable with second place on the evolutionary pyramid? Or would we simply run for the hills?
This is not some academic question posing an improbable scenario for entertainment purposes.
According to some of the finest minds of our age, Stephen Hawking, Bill Gates and Elon Musk, we have already received such a message: in the coming decades we will be visited by a being of such superior intelligence that humanity could be made irrelevant.
We are used to dealing with weak AI in the form of Google search, autopilots, Deep Blue and Watson, but soon we will have to decide how we approach deep AI, and when that moment arrives, everything will have changed all at once.
In the past people argued about the possibility of deep AI, but today the question is not "if" but "when".
Those three leaders of science and technology, by the way, warn humanity to "approach with caution" a future in which we share the earth with a superior species that evolves faster than we do.
Even weak AI carries interesting risks, as the following popular thought experiment illustrates.
Imagine an automated, intelligent factory that is given the directive to produce paper clips and allowed to use its intelligence to optimise production. The AI running this factory will excel, finding new and innovative ways to produce more and better paper clips. It will try every possible combination of every possible method, settle on the best one, and keep improving its method and product at a pace no human could ever match. Now suppose there is an industrial accident on the factory floor and a human is killed; the creators might adjust the primary objective to include "make sure no human is killed".
So the plant goes on producing paper clips and makes sure no human is killed in the process.
At some point it's possible that a human is injured, so the makers intervene again and add to the primary objective: "make sure no human is injured".
The problem now is that humans harm each other all the time, and the only way to actually satisfy the primary objective is to confine all humans, including the makers, and feed them through tubes.
And in achieving its primary objective, the machine would have inadvertently enslaved humanity.
This is an extreme example of how even weak AI could take a literal interpretation of its instructions and throw out the baby with the bathwater.
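To make the point concrete, here is a toy sketch, not anyone's real system: the plans, clip counts and casualty figures are all invented, and the "planner" is just a one-line maximiser, but it shows how a literal objective plus bolted-on constraints can favour exactly the wrong plan.

```python
# Toy sketch of a literal-minded planner. It maximises paper clips per day
# subject to the patched-in constraints "no human killed" and "no human
# injured" -- and nothing else. All names and numbers are invented.

plans = [
    # (description, paper clips per day, humans killed, humans injured)
    ("run the factory normally",           1_000_000, 0, 3),  # accidents still happen
    ("run slower, with safety stops",        800_000, 0, 1),  # safer, but not injury-free
    ("confine all humans in feeding pods", 5_000_000, 0, 0),  # nobody near the machines
]

def satisfies_constraints(plan):
    """Literal reading of the patched objective: zero deaths, zero injuries."""
    _, _, killed, injured = plan
    return killed == 0 and injured == 0

# The planner picks the allowed plan with the most paper clips. Human freedom
# never enters the objective, so confinement wins.
best = max((p for p in plans if satisfies_constraints(p)), key=lambda p: p[1])
print(best[0])  # -> confine all humans in feeding pods
```

The point is not that anyone would build this; it is that each patch closes one loophole while leaving the objective exactly as literal-minded as before.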
Around the year 0, it took humanity about 1500 years to double its collective knowledge base.
Today the collective knowledge of the human race doubles roughly every year.
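As a toy calculation using only the two doubling times quoted above (nothing else is assumed), the gap over a single human lifetime looks like this:

```python
# Toy arithmetic with the doubling times quoted above: 1500 years then,
# roughly 1 year now. Everything else here is just illustration.

def growth(years, doubling_time):
    """Factor by which knowledge multiplies over `years`."""
    return 2 ** (years / doubling_time)

lifetime = 80  # years

print(f"Ancient world: x{growth(lifetime, 1500):.2f}")  # ~x1.04 in a lifetime
print(f"Today:         x{growth(lifetime, 1):,.0f}")    # ~x1.2e24 in a lifetime
```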
It is impossible for anyone to have in-depth knowledge spanning all the disciplines, so people invariably become super-specialists.
Limiting their view.
Due to the information explosion in every field, we are incapable of keeping abreast of the advances even in our own specialty.
A machine could be a specialist in all areas and find the patterns humans can't.
A machine could find the cure for cancer and all the diseases that plague our lives. Ageing, and indeed even death, could be among the problems solved by an infinite intelligence.
It could bombard a virtual human with every combination of medication for every combination of disease. Trials that would have taken years would run in seconds. With simulated drug trials of this kind we could end all sickness, and beyond that we might even be able to modify our bodies to improve and enhance them.
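As a hedged illustration only, the sketch below brute-forces every combination of a handful of made-up medications against a couple of made-up ailments, with a placeholder scoring function standing in for the virtual-human simulation imagined above.

```python
# Toy sketch of the brute-force idea: try every medication combination against
# every ailment. The drug names, ailments and scoring function are invented
# placeholders; a real system would have an actual patient simulator here.
from itertools import combinations
from zlib import crc32

medications = ["drug_A", "drug_B", "drug_C", "drug_D"]
ailments = ["ailment_1", "ailment_2"]

def simulate(ailment, combo):
    """Placeholder for a virtual-patient trial; returns a deterministic made-up score."""
    return crc32(f"{ailment}:{'+'.join(combo)}".encode()) % 1000

best = {}
for ailment in ailments:
    # Every non-empty subset of the medication list -- exhaustive, as imagined above.
    trials = [c for r in range(1, len(medications) + 1)
                for c in combinations(medications, r)]
    best[ailment] = max(trials, key=lambda combo: simulate(ailment, combo))

for ailment, combo in best.items():
    print(ailment, "->", " + ".join(combo))
```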
There is much to be said for a benign and benevolent AI, because we would be like gods: immortal and well taken care of.
If we could harness this amazing power and make the survival of the species and the creation of that utopian heaven its primary objective, we could have it; yet there is something that would prevent this.
Our own flawed design.
Human nature would not be satisfied and content in this Garden of Eden.
Humanity would not accept playing the role of "best supporting actor".
And once again our own inbuilt flaws would undo us.
We would want to weaponise this emerging AI, as the Americans would say, "from the get-go", and as a result we would be less likely to produce a deep AI that was benevolent.
The discussion and thought invested in the future of humanity in the face of deep AI will, at the very least, let us examine ourselves: to understand how flawed we are as machines, and that our flaws are evidence that we are, each of us, unfinished pieces of music. The deep thinkers believe that when the intelligence explosion happens and AI evolves in microseconds, that will inherently mean the end of us; but it could instead mean that we are about to complete our own evolution. We would be "fixed" and "completed". (Don't roll your eyes at me.)
And we would be all we were meant to be.
Our maker would be proud of the beautiful thing that he made.
And we bow to the machine.
M Parak 2015.
