February 14, 2020

Machine Learning, Artificial Intelligence, Data Science, Data Mining … the terms have certainly changed over the years, but how many of them are really new, and how many have actually been around for decades already? Will machine learning still be around in 20 years, or will the boom simply be over? This article is a realistic, fact-based check of where we were, where we are and where we are going.

The most important thing to know about machine learning is that it is fundamentally nothing more than algorithmic embeddings of statistical principles that have been known for a very long time. Bayes’ Theorem was published posthumously in 1763 and generalized by Laplace in 1812, and machine learning as we know it today would be unthinkable without it.
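For reference, the theorem itself fits in one line: the probability of a hypothesis $A$ given observed evidence $B$ is

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}.$$

Naive Bayes classifiers, to give just one example, predict class labels from features by applying exactly this formula.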

1950s to 1970s: The Basics of Machine Learning

However, neat formulas alone are not enough to do machine learning; you also need computers able to execute them. This is why we skip to the 1950s, when the foundations for genetic algorithms and neural networks were laid and the first computers were actually able to learn something. Admittedly, those machines could not learn much, because both computation power and available data were extremely limited. However, the algorithmic mechanics were largely identical to what we are using today, and the topic generated a great deal of excitement. It was broadly covered in the media: people talked about “electronic brains” and, much like today, were either afraid of or excited by their potential power. This led to groundbreaking research and discoveries throughout the 1950s and 1960s.

Since computational power and available data were still negligible compared to today, pessimism about what machine learning could actually achieve set in. The so-called AI Winter began and lasted through most of the 1970s. Research progress was minimal, and in the perception of most people machine learning was dead, since it delivered no tangible real-world benefits. After the 1970s, however, machine learning only grew more and more popular.

1980s to Early 2000: The AI Hype Grows

The 1980s and 1990s were spectacular decades, with the (re)discovery of backpropagation (finally allowing neural networks to learn efficiently from data), random forests and support vector machines (both of which remain popular and effective machine learning algorithms to this day), and much more. From there on, public awareness spread steadily. In the 2000s, the first major open source machine learning libraries were released and major competitions were hosted. The most famous of these competitions, widely covered by the media, was hosted by Netflix in 2006: it promised a prize of US$1,000,000 to the team able to improve Netflix’s movie rating predictions the most.
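To make this concrete: both of those 1990s-era algorithm families are a few lines of code away in scikit-learn, one of the open source libraries that grew out of that era. The following is a minimal sketch on a toy dataset; the dataset and parameters are illustrative choices, not taken from the article.

```python
# Minimal sketch: a random forest and a support vector machine
# trained with scikit-learn on a small, well-known toy dataset.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # 150 flower samples, 4 features, 3 classes
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

for model in (RandomForestClassifier(n_estimators=100), SVC(kernel="rbf")):
    model.fit(X_train, y_train)  # learn from the training split
    print(type(model).__name__, model.score(X_test, y_test))  # test accuracy
```

The point is not the toy task itself but how little the interface demands: the mathematical machinery from the 1990s is now a commodity.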

In the following years, the hype took off. Machine learning and neural networks became terms known not only to computer scientists and mathematicians but across every sector and every industry, until the field reached the popularity it has today.

Today: Data Availability and Computing Power Give Machine Learning a Chance

So, what changed? Why is machine learning an industry-independent phenomenon now, and why was it not before? What certainly has not changed is the methodology: the way algorithms are evaluated and the mathematical mechanics behind it all have largely remained the same over the decades. The algorithms? Well, we have seen significant advances, especially over the last decade. New and cleverer neural network architectures were developed, more frameworks and algorithms were released as open source, and the community as a whole has grown much larger. But these are symptoms of the hype rather than its cause.

What really accelerated everything was the availability of data and of cheap computation power. Most of the companies I have worked with started collecting data on a large scale in the late 2000s at the very earliest. Many companies to this day have still not understood the value of their data and are, accordingly, not collecting data that could be directly monetized; the awareness of, and corresponding commitment to, data-driven decision making is still far from its peak. Furthermore, all the data in the world is of no use if the only computers available to process it are too weak. In the 1980s, the most powerful supercomputer in the world processed 2.4 gigaflops. In comparison: the laptop I am writing this text on manages approximately 61 gigaflops, roughly 25 times as much. That is the game changer: having the data, plus hardware so affordable that everybody can use it.

Machine Learning Offers Real Added Value – That’s What the Future Looks Like

Since these are the two variables that have really changed over time, we can now deduce how the field will develop over the coming years and, probably, decades. What is certain is that the amount of data collected will not decrease. In fact, it is estimated that 90% of the world’s data was generated in the last two years alone. That trend will certainly continue for years to come, and something similar holds for computation power. For decades, Moore’s law has proven accurate: computation power roughly doubles every two years. While this trend will not last forever, it has certainly not yet run its course. Since both computation power and the amount of available data will only develop favourably for machine learning, it is rather certain that machine learning is indeed here to stay, as it has been for decades.
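To put that doubling rate into perspective, a quick back-of-the-envelope calculation (assuming, purely for illustration, that the two-year doubling continues unchanged):

$$2^{10/2} = 2^{5} = 32,$$

so one decade of Moore’s law yields roughly a 32-fold increase in computing power, and two decades roughly a 1,000-fold increase ($2^{10} = 1024$).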

On the other hand, public attention will fade at some point. Right now, machine learning is penetrating many business and research areas. VCs are investing absurd amounts of money into everything related to machine learning. The topic was talked about at pretty much every tech conference over the last couple of years, and even rather theoretical research fields such as formal verification are engaging with it, either using machine learning to further their own work or applying their methods to machine learning algorithms.

This trend, in my opinion, will not continue for much longer. The expectations of too many stakeholders are so high that it will be impossible to meet them. That, however, does not mean machine learning will become less relevant; the added value is too substantial for that. What will change is that machine learning will become less of a hype and more of a topic for the genuinely interested. It will stop being a marketing catchphrase, and the focus will shift to what can really be achieved in data-driven decision making. In my opinion, that would be a wonderful thing.


AUTHOR

Max Uppenkamp

Max Uppenkamp has been a Data Scientist at INFORM since 2019. Having previously worked in Natural Language Processing and Text Mining, he is now engaged in the machine-learning-supported optimization of processes. In addition to working on customer projects, he translates the knowledge gained into practice-oriented products and solutions.