Artificial Intelligence (AI) is reshaping today’s business landscape.
From the increasing investments in AI ventures to the proven benefits of AI deployments for organizations, the relevance of AI is undeniable. Yet, in spite of the current hype, many of its key concepts remain surprisingly unclear.
Machine Learning (ML) is a case in point. Business leaders often struggle to understand the capabilities of new ML techniques and how to identify use cases to which ML may be applied productively. Therefore, understanding what exactly ML is and debunking misconceptions are prerequisites for unlocking business opportunities and gaining a competitive edge.
What is Machine Learning?
Defined as the “field of study that gives computers the ability to learn without being explicitly programmed” by one of the field’s pioneers, ML currently underlies a range of applications we use every day, from product recommendations to voice recognition.
Fitting models to data has always been a crucial aspect of scientific endeavors. Scientists like Galileo or Newton would design experiments, make observations, and collect data. Then they would try to build models to account for the observed data and make further predictions. When their predictions failed, more data was collected and used to revise the models.
A similar process of data collection and model building is also what characterizes ML. However, in this case it is computer programs that analyze data and automatically extract information from it. This means computers can learn from their own experience and make decisions with minimal human intervention.
All of this might still sound highly abstract, but applications of ML can actually be very concrete. You don’t need to be a data scientist to experience their effects on a daily basis. For instance, consider spam filtering. Leaving historical and conceptual reflections around spam aside, what matters is that we all know how annoying unsolicited emails can be.
You’ve got… Machine Learning.
Leading email service providers wouldn’t be able to deliver robust anti-spam filters without ML algorithms that classify emails and filter out the spam. While spam-generating programs are getting increasingly sophisticated, computers are also becoming better and better at classifying emails by applying statistical methods and discovering patterns and associations between words.
Don’t be surprised the next time you find an email from a “Nigerian Prince” or a message containing words such as “shipping” in your spam folder. This is the result of millions of users flagging such messages as spam, training the filter over time. As more emails containing those words are processed, the filtering only becomes more accurate. In fact, Google claims that its unique ML model keeps Gmail’s spam rate down to just 0.1%.
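To make the statistical idea concrete, here is a minimal sketch of a word-based spam classifier in the spirit of naive Bayes. The training examples and word lists are purely hypothetical; production filters learn from millions of user-labeled messages and far richer features.

```python
from collections import Counter
import math

# Toy training set of (text, label) pairs -- hypothetical examples only.
train = [
    ("win money now prince", "spam"),
    ("cheap shipping offer prince", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]

# Count how often each word appears under each label.
word_counts = {"spam": Counter(), "ham": Counter()}
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

def score(text, label):
    """Log-probability of `label` under a naive Bayes model with add-one smoothing."""
    total = sum(word_counts[label].values())
    vocab = len({w for c in word_counts.values() for w in c})
    logp = math.log(label_counts[label] / sum(label_counts.values()))
    for word in text.split():
        logp += math.log((word_counts[label][word] + 1) / (total + vocab))
    return logp

def classify(text):
    return max(("spam", "ham"), key=lambda lbl: score(text, lbl))

print(classify("prince money offer"))     # words seen mostly in spam -> "spam"
print(classify("team meeting tomorrow"))  # words seen mostly in ham -> "ham"
```

The point of the sketch is the mechanism the article describes: no rule about “Nigerian Prince” is ever written by hand; the association between certain words and the spam label emerges purely from labeled data, and it sharpens as more labeled emails arrive.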
While this is just one example of an automated process happening in the background and making our lives easier, ML has been driving success across many different industries and applications, ranging from Google’s language translation app to autonomous cars, medical diagnostics, and portfolio management. In light of the widespread adoption of ML, it is high time that we clear up some misconceptions around the field and tease out its potential business significance.
What Machine Learning is not.
(1) It is not human learning.
A widespread misconception is that ML adequately represents the way humans learn. According to a popular view, some of the most famous approaches to ML are directly inspired by the study of the human brain. We’re referring to “Deep Learning”, the most celebrated approach to ML these days. Amazon, Facebook, Google, Microsoft, and Uber have all made substantial investments in Deep Learning systems, and it is undeniable that the current success of their data-hungry algorithms contributed to the recent surge of interest in AI.
You’ve probably encountered statements that equate the functionality of Deep Learning algorithms with that of the human brain. In order to explain how those algorithms work, the processes that underlie them are treated as analogous to signaling among networks of neurons in the brain.
We really like analogies, but this one is far from helpful. Here’s why: humans are typically able to learn from very few data points, whereas statistical learning thrives when plenty of data is available. Consider a task such as handwritten character recognition. Children and adults typically learn to recognize a new character after seeing just a few examples of it, whereas computers typically require hundreds or thousands of training examples to achieve the same goal.
In a business context, this difference matters a great deal. The emergence of big data is undoubtedly driving successful new applications of ML. However, the statistical learning underlying these techniques requires a significant amount of data, an amount that many organizations simply do not have. So while it’s estimated that by 2020 every person on earth will generate 1.7 MB of data every second, many organizations won’t be able to leverage big data when applying ML to their use cases. Given the increasing amount of data needed to use machine learning competitively, companies should be ready to embrace new approaches that learn from small data as well.
(2) It is not new.
A second misconception is that ML is a nascent field. The definition presented earlier in this article was actually provided by Arthur Samuel in 1959. So there should be no doubt that the field of ML is far from new.
This also applies to Deep Learning. Given that the 2018 Turing Award went to the fathers of this popular approach to ML, one might think that it is a fairly new approach. Yet, this also turns out to be incorrect.
While the data that has enabled Deep Learning’s key successes has only recently become available, this does not hold for its algorithms. Deep Learning can actually be seen as the latest wave of connectionism, a movement in cognitive science that hoped to explain intellectual abilities using artificial neural networks and had its Golden Age between 1980 and 1995.
The factor that distinguishes the past and present when it comes to Deep Learning is not its existence. Instead, it is the methodology employed, which is a function of the tools available to us. Recent breakthroughs in Deep Learning have largely been the result of turning away from human thought processes and harnessing the data-mining power of supercomputers to grind out valuable connections and patterns, without the machines understanding what they are doing.
Linking cutting-edge ML to its historical connectionist roots provides some interesting insights for organizations, as it sheds light on some of the limitations of Deep Learning applications. These will become evident by clearing up a third misconception about ML, which concerns the relationship between ML, Deep Learning and AI.
(3) ML is not the same as AI or Deep Learning.
Deep Learning is a subset of ML, which is in turn a subset of AI. While these very hot buzzwords often seem to be used interchangeably, doing so is a serious mistake. ML is certainly a key component of AI right now, that much is true. A case in point is the special section on AI published in the July 15, 2015 issue of the prestigious scientific journal Science: while the title announced a focus on AI, the dominant theme was ML. However, conflating ML with AI eclipses everything else that AI has to offer.
As mentioned above, ML’s most successful current approach can be seen as the latest wave of connectionism, which was popular in cognitive science and AI decades ago. Back then, connectionism was not the dominant paradigm. Instead, symbolic artificial intelligence (also known as Good Old-Fashioned AI, or GOFAI) dominated the scene, relying on symbols that represent real-world entities or concepts. No matter how important ML has become, it is still only one part of a much more expansive AI landscape, and its influence in this space is not immutable.
One of the reasons why symbolic AI has not been getting as much attention these days is that it is much more “manual” than ML in many ways. This makes it less scalable than the most popular ML and Deep Learning approaches, and more brittle, since its performance does not degrade gracefully. However, it also has modern merits: symbolic AI typically delivers results that humans can readily interpret, predict, and use.
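The trade-off described above can be illustrated with a tiny, hypothetical rule-based filter. Unlike the statistical approach, every pattern here must be authored by a human, which makes the system perfectly interpretable but brittle:

```python
# A hand-written, symbolic spam rule: fully transparent, but every pattern
# must be authored by a human. The keyword set is a hypothetical example.
SPAM_KEYWORDS = {"prince", "lottery", "winner"}

def rule_based_is_spam(text):
    """Flag a message if it contains any blacklisted keyword."""
    words = set(text.lower().split())
    return bool(words & SPAM_KEYWORDS)

print(rule_based_is_spam("the lottery results are in"))  # True: exact match
print(rule_based_is_spam("L0ttery w1nner"))  # False: obfuscation defeats the rule
```

A human can read the rule set and predict exactly what it will do, which is the interpretability advantage; but the second call shows the brittleness: a trivially obfuscated spelling sails straight past a rule that a statistical model trained on enough examples could still catch.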
Recently, there has been a surge of interest in symbolic approaches, due to clever new ways of combining symbolic and sub-symbolic methods, as well as to knowledge graphs, popularized by big players like Google and Facebook. As the name suggests, knowledge graphs store knowledge explicitly, representing it as a set of entities and the relationships between them, which facilitates understanding and inference. Given their potential and broad applicability, it should come as no surprise that they have been identified as one of the top emerging technologies.
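The entities-and-relationships idea is simple enough to sketch in a few lines. The facts below are hypothetical illustrations; real knowledge graphs hold billions of such triples and support far richer query languages.

```python
# A knowledge graph stores facts as (subject, relation, object) triples.
# The entities and relations below are hypothetical illustrations.
triples = {
    ("Ada Lovelace", "occupation", "mathematician"),
    ("Ada Lovelace", "collaborated_with", "Charles Babbage"),
    ("Charles Babbage", "designed", "Analytical Engine"),
}

def objects(subject, relation):
    """All objects linked to `subject` by `relation`."""
    return {o for s, r, o in triples if s == subject and r == relation}

# A simple two-hop inference: what machines did Ada's collaborators design?
machines = set()
for collaborator in objects("Ada Lovelace", "collaborated_with"):
    machines |= objects(collaborator, "designed")
print(machines)  # {'Analytical Engine'}
```

Because each fact is stored explicitly, the answer to the two-hop query can be traced back to the exact triples that produced it, which is precisely the kind of interpretable inference that statistical models struggle to offer.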
It is crucial that business leaders fully understand how ML can create value for their business and where its limitations lie. Realizing what ML is, and is not, helps them to do just that. The concerns about big data availability and the historical considerations mentioned above suggest that both dogmatism and one-size-fits-all approaches should be avoided when it comes to ML applications.
At Coveo, we believe that our customers can greatly benefit from a hybrid approach to AI, which successfully combines the best of symbolic AI and statistical ML. Symbolic AI helps us gather real understanding of concepts behind users’ queries, whereas scalable ML helps us capitalize on data and constantly learn to automatically deliver the most relevant results and proactively recommend content.