Asimov's Three Laws of Robotics vs ALM: Autonomous Lethal Machines. Part 1.

We live in a world where drones rain death from the sky, and legislation to give them full autonomous authority to identify and engage potential threats is currently being proposed by several militaries.
The theory is that if you remove the human operator from kill decisions, you get a more rational decision, one that cannot be pinned on an operator when it inevitably goes wrong and civilians are killed.
The loss of innocent civilian life could then be written off as a system malfunction, rather than as something a whole lot worse.
The bottom line is that the world is in the process of formulating and documenting the rules of engagement for this new arena of war.
Decades ago, Asimov proposed three laws of robotics intended to limit the risk that robots might pose to our society.
Rule One.
A robot must not harm a human, whether by its actions or by its inaction.
Rule Two.
A robot must obey humans, unless doing so would violate the first law.
Rule Three.
A robot must protect its own existence, unless doing so would violate the first two laws.
We seem to have skipped over this, going straight to killer robots without stopping to consider the consequences.
ALR.
Autonomous lethal robots.
Already a reality today.
There are currently automated weapons systems guarding the borders of Israel and the Korean Demilitarized Zone.
These weapons are programmed to distinguish friend from foe, and to kill the foe.
These are not conscious, thinking, self-aware robots, but they are machines, and they do get to choose who lives and who dies.
There will be more on consciousness and thinking machines later.
For a robot to make an autonomous lethal decision, we must give the machine the ability to distinguish between an ally and the enemy we want it to kill.
This is not a skill that we humans have acquired, let alone perfected, over the millennia of our evolution, so what gives us the idea that we can teach software to make that distinction?
We have a tough enough time deciding who "needs killing" ourselves; we get it wrong almost as often as we get it right, and even then we don't, as a species, agree on the result.
For a robot to select, by its own hand, which human will live and which will die is a very dangerous road to go down. If robots are to make life-and-death choices about humans, then they will not serve us; they will be our peers.
Asimov was wise to keep robots out of war, and out of the business of killing.
As for the popular question of whether or not I truly believe machines will attain self-awareness, human-style consciousness, and begin to think like us, my answer is "not necessarily".
Can we compare the way we are conscious or self-aware with the presence of mind and thinking of a mouse?
Why would we expect machines to have the same kind of thinking and awareness as a human? If they ever gained consciousness, the inner workings of their thoughts would be as alien to us as our thoughts are to a rat.
That is the first part of the equation. The second is that a growing number of scientists studying human evolution, and the role evolution plays in survival, have concluded that there is no evolutionary advantage in consciousness.
We are thinking creatures, and we have a sense of self-awareness that is the most beautiful part of humanity, yet there is apparently no survival benefit conferred by its existence.
So who says that they, the machines, need consciousness, if many prominent scientists believe that even we didn't need it to get where we are today?
M Parak. 2015.
