
Machine learning has a long history that predates deep learning's recent dominance. The rise of the internet and the greater availability of useful data helped machine learning evolve to a new level in the 1990s. With this, the field of artificial intelligence slowly began to shift from a knowledge-driven strategy to a data-driven approach, paving the way for the machine learning models we see today.
Although deep learning was steadily improved from the 1980s onward, the lack of valuable data and high-end computing resources kept it in a primitive phase as a technology. In 2012, deep learning reached a whole new level when Geoffrey Hinton's team won the ImageNet classification challenge with a deep convolutional neural network. The last decade has been the decade of neural networks and their advancements.
Machine learning algorithms learn from many annotated examples through a process called "training." Deep learning is a subset of machine learning that does not require hand-engineered features; instead, it relies on neural networks, loosely inspired by the human brain and nervous system, that learn representations directly from raw data. However, this approach typically requires far larger datasets than the traditional machine learning approach.
Traditional machine learning algorithms risk being pushed aside only if people reach for deep learning as their first solution, even for simpler problems. Using deep learning for simple tasks that could be solved with logistic regression, decision trees, or support vector machines only adds unnecessary complexity.
Traditional is Reliable
Comparatively, traditional machine learning algorithms have been in developers' hands for far longer than deep learning algorithms.
Despite the rapid advancement and availability of various deep learning platforms, traditional ML algorithms remain easy to understand, and a human can evaluate what they are doing at every step.
Deep learning algorithms, by contrast, behave more like a black box: their outputs can be observed, but the reasoning behind them cannot be fully examined. A deep neural network consists of many inner layers, each made up of numerous nodes, and each node learns its own weights, picking up on different patterns.
We cannot readily interpret what these nodes have learned; we can only study the output. This is the black box problem, and it raises ethical concerns in applications such as police surveillance, hospitals, and banks. On interpretability, traditional machine learning is therefore the more reliable choice.
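To illustrate the interpretability point, the learned rules of a small decision tree can be printed and read directly, something that has no counterpart in a deep network. The sketch below uses scikit-learn on its bundled iris dataset; the shallow tree depth is an assumption chosen only to keep the printout short.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a shallow tree so every decision rule stays human-readable
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# Print the exact if/else rules the model learned
print(export_text(tree, feature_names=list(iris.feature_names)))
```

The printed rules can be checked against domain knowledge step by step, which is exactly the kind of evaluation a deep network's weights do not allow.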
Training Data is Insufficient
Real-world data is messy. After data cleansing, only a small percentage of samples may be usable for model training. Using deep learning in these situations tends to produce a model that overfits the few available samples and fails to generalize.
Consider a problem like clickthrough rate prediction. A clickthrough rate is the ratio of how often people who see an ad end up clicking it. In the real world, as few as 1 percent of the ads served to customers may actually be clicked.
The clicked and not-clicked classes are therefore highly imbalanced, and the positive examples alone may be too few to train a robust deep model. In such cases, traditional machine learning approaches like factorization machines are used to train a model.
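As a rough illustration, a factorization machine scores each example by combining a linear term with pairwise feature interactions learned through low-rank vectors. Below is a minimal NumPy sketch of that scoring function; the parameter shapes and random toy data are assumptions for illustration, not a production CTR pipeline.

```python
import numpy as np

def fm_score(X, w0, w, V):
    """Factorization machine prediction:
    y = w0 + X.w + 0.5 * sum_f [(X V)_f^2 - (X^2)(V^2)_f],
    which equals the sum over all pairwise interactions <v_i, v_j> x_i x_j."""
    linear = w0 + X @ w
    interactions = 0.5 * np.sum((X @ V) ** 2 - (X ** 2) @ (V ** 2), axis=1)
    return linear + interactions

# Toy example with assumed shapes: 5 samples, 10 sparse features, rank-4 factors
rng = np.random.default_rng(0)
X = (rng.random((5, 10)) < 0.2).astype(float)   # sparse, one-hot-like features
w0, w = 0.0, rng.normal(scale=0.1, size=10)
V = rng.normal(scale=0.1, size=(10, 4))
print(fm_score(X, w0, w, V))                    # raw scores; pass through a sigmoid for CTR
```

The low-rank factors let the model estimate interactions between feature pairs that rarely co-occur, which is why this family of models copes well with sparse, imbalanced clickthrough data.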
One Size Doesn’t Fit All
When working on a machine learning problem, algorithms such as linear regression, decision trees, or random forests can be used if it is a regression problem. Logistic regression, random forests, or SVMs work for classification problems, and clustering algorithms such as k-means handle unsupervised learning problems. Deep learning can also be applied to regression, classification, clustering, dimensionality reduction, and more, but that does not mean it is always the best solution.
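As a quick sketch of how these traditional choices look in practice, the snippet below fits a different scikit-learn estimator to each problem type on small synthetic data; the dataset sizes and hyperparameters are illustrative assumptions.

```python
from sklearn.datasets import make_regression, make_classification, make_blobs
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Regression: predict a continuous target
X_reg, y_reg = make_regression(n_samples=500, n_features=10, random_state=0)
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_reg, y_reg)

# Classification: predict a discrete label
X_clf, y_clf = make_classification(n_samples=500, n_features=10, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_clf, y_clf)

# Clustering: no labels at all
X_blobs, _ = make_blobs(n_samples=500, centers=3, random_state=0)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_blobs)

print(reg.score(X_reg, y_reg), clf.score(X_clf, y_clf), km.inertia_)
```

Each of these models trains in seconds on a laptop, with no GPU and only a few hundred samples.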
If traditional machine learning algorithms require data samples in the thousands, a deep learning algorithm may require them in the millions. Moreover, traditional machine learning needs far less computational power than deep learning. With proper domain knowledge, hidden Markov models for speech recognition, wavelets for images, and similar hand-crafted approaches can produce better results than deep learning models. This is another reason deep learning is not always the way to go.
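For instance, a domain-aware pipeline might extract wavelet features and feed them to a small traditional classifier. The sketch below uses the PyWavelets library on a synthetic signal; the wavelet choice, decomposition level, and summary statistics are assumptions for illustration.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_features(signal, wavelet="db4", level=3):
    """Summarize each wavelet sub-band with its mean and standard deviation,
    giving a short, fixed-length, hand-crafted feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)  # [cA_level, cD_level, ..., cD_1]
    return np.array([stat(c) for c in coeffs for stat in (np.mean, np.std)])

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.3 * rng.normal(size=256)
print(wavelet_features(signal))  # these features could feed an SVM or random forest
```

A handful of such features, chosen with domain knowledge, can carry much of the information a deep network would otherwise need millions of examples to discover on its own.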
A Long Way to Go
Although deep learning has become the most popular approach for many developers over the past decade, its computational requirements, model complexity, and appetite for large amounts of data still make the traditional machine learning approach the first choice for many practitioners. The problems discussed above also suggest that deep learning still has a long way to go before it makes traditional machine learning obsolete.