A 1969 McKinsey article claimed that computers were so dumb that they were incapable of making any decisions; in fact, the article said, it was human intelligence that drives the dumb machine. That claim has become a bit of a “joke” over the years, as modern computers gradually replace skilled practitioners in fields across many industries, such as architecture, medicine, geology, and education. Artificial Intelligence, Machine Learning, Data Science, and Deep Learning are driving these changes in ways that are only just being understood.
Many buzzwords are being employed in the evolving IT industry, especially in the various research areas around and within Data Science. For many years, the world has known about experiments (with varying degrees of success) in Artificial Intelligence (AI), but recently, rapid strides have been made in this field of study, leading to the allied research areas of Machine Intelligence, Machine Learning, and now Deep Learning. So how are these specialized sub-domains under AI similar to or different from each other? This article takes a look.
Artificial Intelligence: This “umbrella” term encompasses all of these areas of research. According to field experts, the definition of AI has taken many detours over the years, rendering the term nearly useless. McKinsey’s 2013 report titled Disruptive technologies: Advances that will transform life, business, and the global economy suggests that 12 disruptive technologies will have a great global impact over the following decade. Of these 12, at least five relate to AI and Robotics: automation of “knowledge” work, Robotics, the Internet of Things, 3D Printing, and self-driving cars. The total economic impact of these combined technologies has been estimated to reach between $50 trillion and $99.5 trillion by 2025.
Machine Intelligence (MI): Many Data Scientists consider Machine Intelligence and Artificial Intelligence interchangeable terms. “Machine Intelligence” has been the more popular term in Europe, while “Artificial Intelligence,” with its scientific slant, has been more popular in the US. MI implies research modeled on the biological neuron, a more sophisticated approach than the one usually employed in simple neural networks.
Machine Learning (ML): An intrinsic part of AI, Machine Learning refers to the software research area that enables algorithms to improve through self-learning from data, without explicit human intervention. McKinsey’s An Executive’s Guide to Machine Learning states that ML’s strength lies in its capability to learn from data. Machine Learning assumed the status of a separate discipline in the late 1990s, when advanced technologies and cheap computing allowed Data Scientists to train computers to generate algorithms. The explosive growth in the volume and variety of data has only enhanced the importance of Machine Learning. An interesting debate on whether Machine Learning can replace Data Scientists can be found in Will Data Scientists Soon Be Obsolete?
A good market application of Machine Learning can be found in Second Spectrum, a California-based start-up that prepared predictive models of basketball games for the US National Basketball Association. In Europe, the banking sector uses ML techniques for various banking functions, which has helped banks achieve targeted business growth and savings. Quora provides an interesting comparison of Machine Learning and Artificial Intelligence.
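The idea of “self-learning from data” can be made concrete with a toy sketch. The example below is purely illustrative (the function name and data are invented for this article, not taken from any of the cited sources): it fits a line y = w·x by gradient descent, so the slope is learned from example pairs rather than programmed by a human.

```python
# A minimal illustration of "learning from data": fitting a line y = w*x
# by gradient descent. No human supplies the rule; it emerges from examples.
def fit_slope(points, lr=0.01, epochs=200):
    """Learn the slope w that best maps x to y in (x, y) pairs."""
    w = 0.0
    for _ in range(epochs):
        for x, y in points:
            error = w * x - y        # how far the current guess is off
            w -= lr * error * x      # nudge w to reduce the squared error
    return w

# Data generated by the hidden rule y = 2x; the algorithm recovers w ≈ 2.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]
print(round(fit_slope(data), 2))  # → 2.0
```

Real ML systems fit millions of such parameters over far messier data, but the principle is the same: adjust parameters until the model’s predictions match the observed examples.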
Deep Learning (DL): Deep Learning is an offshoot of Machine Learning concerned with “deep neural networks,” artificial networks loosely modeled on the layered structure of the human brain. Deep Learning tries to emulate the functions of the brain’s inner layers, and its successful applications are found in image recognition, language translation, and email security. Deep Learning creates knowledge from multiple layers of information processing, and each time new data is poured in, its capabilities improve. Deep Learning for Marketers discusses a market application of this technology.
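What “multiple layers of information processing” means can be sketched in a few lines. The following is a hypothetical, hand-built example (the weights are made up for illustration; real networks learn them from data): each layer transforms the output of the previous layer, and stacking layers is what makes the network “deep.”

```python
import math

def layer(inputs, weights):
    """One layer: weighted sums of the inputs, squashed by a sigmoid."""
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

def forward(x, layers):
    """Pass the input through each layer in turn: layered processing."""
    for weights in layers:
        x = layer(x, weights)
    return x

# A toy two-layer network with hand-set weights (illustrative only).
net = [
    [[1.0, -1.0], [0.5, 0.5]],   # hidden layer: 2 inputs -> 2 units
    [[2.0, -2.0]],               # output layer: 2 units -> 1 output
]
output = forward([0.8, 0.2], net)  # a single value between 0 and 1
```

Production deep networks differ mainly in scale: dozens of layers, millions of weights, and training procedures that set those weights automatically.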
The Research Areas in Practice
According to a recent article in InformationWeek, AI and ML are gradually moving from science fiction to on-the-ground reality. Over half of global enterprises are experimenting with Machine Learning, while leaders like IBM, Google, and Facebook have invested in open-source ML projects. The article reports that global enterprises are experimenting with “smart computing” and seriously investigating whether Artificial Intelligence can be applied to business solutions.
A Numenta blog presents a detailed comparative study of the various technologies. The common perception is that there are no standardized definitions for the four terms, and people still use them loosely without understanding the scientific significance of each. The terms’ meanings have also evolved significantly: what people meant by AI in 1960 is distinct from what AI means today. Datanami’s How Machine Learning Is Eating the Software World explains that the majority of smart applications today depend on Machine Learning to interpret results in the real world.
In An Executive’s Guide to Machine Learning, Machine Learning 1.0, 2.0, and 3.0 are aptly described as the descriptive, predictive, and prescriptive stages of application. The predictive stage is happening right now, but ML 3.0, the prescriptive stage, provides a great opportunity for the future. The article titled CIOs Need to Invest in Machine Learning Now provides a clear analysis of the technological preparedness of global businesses.
The Research Areas in Action
An inherent danger of the overpowering influence of data technologies has been noted in the Gartner blog post titled Don’t Blame the Technology. In that post, the US is cited as an example of the dangerous impact of advanced technologies on human society, where data technologies enable wealth creation for the few while the other 99 percent are left in economic darkness. So the question remains whether such superior technologies and technological research can benefit the masses. The post argues that technological advancement is not to blame; it is wrong prioritization that is upsetting the world.
McKinsey’s Artificial Intelligence Meets the C Suite suggests that brilliant machines will soon take over the business world and the daily lives of senior executives will become subservient to these incredible machines. In order to survive and succeed, senior leaders will have to learn to let go of their egos and co-exist with smart machines. Top business leaders will still have plenty of opportunity to contribute through questioning, confronting exceptional situations, and providing solutions that machines can’t.
Too frequently, business functions show a tendency to hoard and politicize information. Perhaps the advanced Machine Learning models of the future will truly break down data silos and pave the way for shared data architectures across businesses. In such an era, even frontline managers would be equipped with insights from powerful computers before making critical decisions. This could democratize the distribution and use of data analytics across the managerial chain. The Future of Machine Learning: Trends, Observations, and Forecasts discusses some of these observations further.
The machine revolution has certainly started. The IBM supercomputer Watson now predicts patient outcomes more accurately than physicians and continuously learns from medical journals. A future challenge will be deploying advanced Machine Learning models in other real-world applications. An encouraging pattern has already been established by the likes of Amazon, Netflix, and Flickr, which have successfully displaced many traditional “brick and mortar” giants with superior digital business models. Some years ahead, far higher levels of Machine Learning may be visible throughout the global business environment, with the development of “distributed enterprises” that require little human staff. So while the research areas in and around Artificial Intelligence, Data Science, Machine Learning, Machine Intelligence, and Deep Learning show much promise, they are not without significant risks if deployed carelessly and without proper planning.