Monday, November 10, 2014

Will AI Surpass Human Intelligence?

What is “Human Intelligence”?
First of all, it does not seem easy, to me, to define “human intelligence”.

The Third Chimpanzee
I've read a book called “The Third Chimpanzee”, by Jared Diamond (see http://www.jareddiamond.org/Jared_Diamond/The_Third_Chimpanzee.html ), in which the author argues that humans are just one more animal species, that much of our behavior is conditioned by evolution, and even that some of our peculiarities no longer lead to the species' preservation but, on the contrary, to its destruction.  It's hard to deny that we are indeed depleting a series of non-renewable natural resources.  For example:  right now, in Brazil, São Paulo, a city of 22,000,000 inhabitants, is running out of water (see “Brazil drought: Sao Paulo sleepwalking into water crisis” - http://www.bbc.com/news/world-latin-america-29947965 , and also “Water Crisis Seen Worsening as Sao Paulo Nears ‘Collapse’” - http://www.bloomberg.com/news/2014-10-21/sao-paulo-warned-to-brace-for-more-dramatic-water-shortages.html ).   The way this growing vital problem has been ignored by both authorities and the population seems to me insanely dumb;  animals that act this way can hardly be called “intelligent”, any more than any other species that blindly consumes its vital environmental resources until extinction.

Cogito, ergo sum
I'm using this René Descartes quote to recall that, on the other hand, there are some unique creations of the human spirit.  Descartes himself created Analytic Geometry.  The achievements of Ancient Greece's philosophers amaze me even more:  Eratosthenes was able to measure the Earth using nothing more than mathematics and… intelligence!  (See “Eratóstenes y la medición de la esfera terrestre” - http://www.astromia.com/biografias/eratostenes.htm for details.)  Greek Atomism (see Atomism - http://en.wikipedia.org/wiki/Atomism ) was also a remarkable intuition: incorrect in the details, but conceptually consistent with scientific discoveries made thousands of years later!
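As an aside, Eratosthenes' measurement reduces to a simple proportion, which a few lines of Python can sketch.  The figures used here are the commonly cited historical values (a shadow angle of about 7.2° at Alexandria and a reported distance of 5,000 stadia to Syene), not exact modern measurements:

```python
# Eratosthenes' method, sketched with the commonly cited historical values.

# At noon on the summer solstice, the sun was directly overhead at Syene
# (no shadow), while in Alexandria a vertical stick cast a shadow at about
# 7.2 degrees -- roughly 1/50 of a full circle.
shadow_angle_deg = 7.2
distance_stadia = 5000  # reported distance from Alexandria to Syene

# That 7.2-degree arc of the Earth's circumference spans the distance
# between the two cities, so scale up to the full 360 degrees.
circumference_stadia = (360 / shadow_angle_deg) * distance_stadia
print(circumference_stadia)  # 250000.0
```

Depending on which value of the stadion one assumes, 250,000 stadia comes remarkably close to the Earth's actual circumference of about 40,000 km.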

So, in this brief context, I would take “human intelligence” to be the capability to control and change the world according to one's desires or necessities.  This capacity derives mostly from an understanding of the world in a utilitarian way and, possibly but not necessarily, in its essence.   Also, “satisfying the desires” does not necessarily mean the greater good of the environment, or of the species itself, in the long term.
I see this as quite different from “animal intelligence”, but only as a matter of degree, not of essence.  We are animals;  just as tigers, for example, are physically powerful, we are mentally powerful.  We are mental tigers.

The Technological Singularity
The Technological Singularity hypothesis (see “The Coming Technological Singularity:  How to Survive in the Post-Human Era” - https://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html , see also Technological Singularity - http://en.wikipedia.org/wiki/Technological_singularity ) is that accelerating progress in technology will cause a runaway effect wherein Artificial Intelligence will exceed human capacity and control, radically changing or even ending civilization in an event called the singularity.  John von Neumann was the first to call this event a “singularity”.

Do we have the Technology for AI?
Well, we have all kinds of robots, we have drones, we have virtual reality…  Of course we already have a “kind of” Artificial Intelligence, and probably very soon, in at most 20 years I would say, we will have Artificial Intelligence able to supersede our own.  Maybe not “philosopher machines”, but most human beings, and humankind as a whole, do not excel at long-term wisdom.

Will we restrain ourselves from creating something potentially dangerous that we can't control?
I don't think so.  Just take a look at what we have done in recent history.  The depletion of natural resources was predictable, even evident.  Did that stop us?  Nuclear power waste is lethal for thousands of years; do we have a way to dispose of it?  No!  Do we stop producing more and more? No!  The list of insanely stupid things we are doing is almost endless.  So, no matter how dangerous it could be, and no matter what kind of unpredictable events could follow the singularity, I think we will very likely proceed and walk right into it.

Are we doomed?
I have sons and grandsons, so I hope we'll be able to find a way to restrain ourselves from reaching the singularity, at least until we are able to control AI.  Maybe the capabilities we'll have to develop to deal with the environmental crisis will make us wiser.
