Artificial Intelligence as a technology is still in its infancy. What goes by the name of A.I. today, in most cases, does not strictly fall under that category. Algorithms have been in use ever since business and scientific computing came into vogue; what passes as AI now is the use of complex algorithms whose decision rules encompass manifold options and which access huge, indeed humongous, amounts of data.
This has been rendered possible by the increase in computing speed as well as the capability to store higher volumes of data. The computer program Deep Blue was able to defeat Garry Kasparov because it could evaluate myriad options in a shorter time than Kasparov could. E-commerce software, similarly, is able to classify millions of consumers into clusters based on their purchasing behavior because of its ability to handle large volumes of data, enabled through innovative software like Hadoop. The recent controversy over Cambridge Analytica is fresh in our minds: the firm could sift through a large volume of posts pertaining to a few million individuals and determine their profiles in order to influence them. Google's self-driving car, too, is built on a set of algorithms, as we understand it.
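As a rough illustration of the kind of clustering described above, the sketch below groups hypothetical consumers by two made-up purchasing features (orders per month and average spend) using k-means. The data, feature names and cluster count are assumptions for illustration, not taken from any particular e-commerce system.

```python
# Minimal sketch: clustering consumers by purchasing behavior with k-means.
# The two features (orders per month, average spend) are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic purchase profiles for 300 hypothetical consumers:
# occasional buyers, regular buyers, and high spenders.
occasional = rng.normal([1, 20], [0.5, 5], size=(100, 2))
regular = rng.normal([6, 40], [1.0, 8], size=(100, 2))
premium = rng.normal([4, 150], [1.0, 20], size=(100, 2))
consumers = np.vstack([occasional, regular, premium])

# Partition the consumers into three behavioral clusters.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(consumers)
print("Cluster sizes:", np.bincount(model.labels_))
print("Cluster centres (orders/month, avg spend):\n", model.cluster_centers_)
```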
The earliest programmes, like Payroll and Inventory, were also substituting for the work of intelligent human beings. It is just that, with stellar technologies like Big Data, facial recognition, pattern recognition, Natural Language Processing, speech recognition, robotics and IoT, the Artificial Intelligence we experience today, though not qualitatively different from traditional intelligent computing, operates on a larger and richer scale.
The underlying advantage of machines lies in the fact that human memory tends to be much shorter in span, takes longer to recall and declines in capability over time.
Where the human being scores is in the capacity to learn. To illustrate: when a child touches fire for the first time, his hand gets burnt, but he is cautious the next time. This has not been pre-programmed into his brain through an algorithm (though many other responses are genetically coded) but learnt through experience. Our growth and maturity come primarily through practice, not through pre-coded algorithms. Internally, algorithms are automatically and continuously coded through the experiences gained every single moment of our conscious existence, as if a very efficient programmer were sitting inside our brains.
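A toy sketch of that idea, with entirely made-up numbers: an "agent" starts with no rule about fire, receives one painful outcome, and adjusts an internal value so that it avoids the action thereafter. This is a simplified reward-based update for illustration only, not a model of how the brain actually learns.

```python
# Toy illustration: a rule learnt from one painful experience,
# rather than being pre-programmed. Values and update rule are made up.
value = {"touch_fire": 0.0, "keep_away": 0.0}  # initial indifference
learning_rate = 0.8

def experience(action: str, reward: float) -> None:
    """Nudge the stored value of an action towards the reward received."""
    value[action] += learning_rate * (reward - value[action])

def choose() -> str:
    """Pick the action currently believed to be better (ties go to the first entry)."""
    return max(value, key=value.get)

print("Before:", choose())           # no real preference yet
experience("touch_fire", -10.0)      # the child touches fire once and gets burnt
print("After one burn:", choose())   # now prefers to keep away
```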
A true artificially intelligent system is, therefore, one that learns on its own: networks resembling the brain's neural networks, which can make connections and arrive at meaning without relying on pre-defined algorithms, and which are capable of replicating the human learning process. Where we still lag is in coming anywhere close to mimicking the human brain. It is naïve to believe this can be achieved soon, because of the sheer magnitude involved: the brain contains an estimated 300 trillion synapses, the interfaces between neurons that constitute the circuits across which information is transmitted.
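To make the contrast with pre-defined algorithms concrete, here is a minimal sketch of a network that learns a mapping purely from examples. The task (XOR), network size and learning rate are arbitrary choices for illustration; nothing in it is hand-coded as a decision rule.

```python
# Minimal sketch of a network that learns a mapping from examples alone,
# with no hand-coded decision rules. Architecture and data are illustrative.
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# XOR: a relation no single linear rule captures, but a small network can learn it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden connections
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output connections
lr = 1.0

for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (gradient of squared error), then adjust the connections.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # typically approaches [0, 1, 1, 0], learnt from the examples alone
```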
The real qualitative change will emerge from a different category altogether, characterized as "Deep Learning". Circuits and software are being built today to incorporate Deep Learning as a technique which, though nowhere close to replicating the brain, will nevertheless be able to take a few baby steps in that direction.
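The software referred to here typically stacks many learned layers on top of one another. The sketch below, in PyTorch, is one illustrative way such a stacked network is defined and trained today; the layer sizes, random stand-in data and training settings are all assumptions, not a description of any specific system.

```python
# Sketch of how today's deep-learning software stacks many learned layers.
# A small, illustrative network in PyTorch; sizes and data are arbitrary.
import torch
from torch import nn

model = nn.Sequential(            # several layers of learned connections
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random stand-in data: 256 examples with 16 features and one target value each.
X = torch.randn(256, 16)
y = torch.randn(256, 1)

for _ in range(200):              # "learning" is repeated small adjustments
    optimiser.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimiser.step()

print(f"final training loss: {loss.item():.4f}")
```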
The day the brain is totally, or even substantively, mimicked, we will witness the birth of an alternative class of robots, machines, droids – call them what you will – a potential threat to mankind. However, as stated before, since the brain is much more complex than we think, it may take a few million years to reach that state.
Simultaneously, there is evidence that the brain too is evolving in the Darwinian sense, and the gap may well be widening. So the human race need not be paranoid – at least not just yet.