There’s huge hype among the public regarding Machine Learning. The news doesn’t help either: you can find headlines such as “Machines created a language between them,” “Skynet is real?” and “Machines taking everyone’s job in the near future.”
Some of this news is quite accurate, while some is created by writers who don’t have a clue about Machine Learning and what it means. Here’s a bit of an introduction to the subject at hand. We should start with the basic questions:
What is Machine Learning?
Machine Learning is a technique in which a computer uses algorithms to learn how to answer a question. The question is called an input and the answer is called an output. The output is produced by rules and algorithms, which together would be considered the program itself.
An easy example is a machine that knows how to play chess. Some of us remember chess champions playing against machines, but that’s old news, which tells us something: Machine Learning has been around much longer than most people think.
Why now?
Short answer? Big Data and computational power. Long answer: we live in an age where the processing power we hold in our hands is far greater than what was available when Machine Learning was a new topic of conversation. Machine Learning allows the machine to make a decision according to an algorithm, but that’s the really short description. There’s also the matter of how it learns.
The algorithms are run over big volumes of tests, and each test teaches the machine what the output should be for a given input. Going back to our chess example, that means we feed hundreds or millions of games into the algorithm so it can learn in which cases you won or lost, and which moves were the best to make in order to win.
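The idea of learning an input-to-output mapping from stored examples can be sketched in a few lines. This is only an illustration, not how a real chess engine works: the positions are reduced to two made-up numbers (material balance and a king-safety score), and the “learning” is a simple nearest-neighbor lookup over the labeled games.

```python
# A minimal sketch of learning from examples: a 1-nearest-neighbor
# "learner". The game data and features below are invented for
# illustration; real chess programs use far richer representations.

def train(examples):
    """Training here is simply storing the labeled examples."""
    return list(examples)

def predict(model, features):
    """Answer with the label of the most similar known example."""
    def distance(example):
        known, _label = example
        return sum((a - b) ** 2 for a, b in zip(known, features))
    _, label = min(model, key=distance)
    return label

# Hypothetical training data: (material balance, king safety) -> result
games = [
    ((+3, 0.9), "win"),
    ((-2, 0.4), "loss"),
    ((0, 0.8), "draw"),
]

model = train(games)
print(predict(model, (+2, 0.7)))  # the closest stored game decides: "win"
```

The more labeled games you store, the finer-grained the answers become, which is exactly why the volume of data matters so much.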
So, going back to the “why now?” question: the volumes of tests machines can process nowadays are immense. Software can process thousands of terabytes (Big Data) in a reasonable timeframe. Ten years ago, that was unthinkable. We are at a point where even your cell phone can process great amounts of data in a small amount of time.
What are its limitations?
The learning process itself has its limitations. Without going into too much detail (like Solomonoff’s induction problem or naturalized induction), the main limitation is that the machine can’t give an output outside the set of known answers (or outputs) to the problem. The machine will always return an output it has learned and won’t be able to create a new output that didn’t appear in any test or learned case.
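This limitation can be made concrete with a toy experiment (mine, not the author’s): a learner that answers by looking up its most similar training example can only ever return labels that appeared in training, no matter what input we throw at it. The data below is invented.

```python
import random

# Illustrative sketch of the limitation: a lookup-style learner's set
# of possible outputs is fixed entirely by its training labels.
# (Training points and labels are made up for the demonstration.)

training = [
    ((1.0, 2.0), "win"),
    ((4.0, 0.5), "loss"),
]

def predict(features):
    def dist(example):
        point, _label = example
        return sum((a - b) ** 2 for a, b in zip(point, features))
    return min(training, key=dist)[1]

learned_outputs = {label for _, label in training}
random.seed(0)
novel = any(
    predict((random.uniform(-10, 10), random.uniform(-10, 10)))
    not in learned_outputs
    for _ in range(1000)
)
print(novel)  # False: even over 1000 random inputs, no unseen output appears
```

A thousand random inputs never produce “draw” or anything else outside the training labels; a genuinely new answer would require a different kind of mechanism.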
There’s a huge collective mind working on this problem, trying to discover how we can make a program that is creative and can find unlearned answers that are still logical. That’s the main worry of figures like Elon Musk, who say there should be an organization that oversees AI development, but that’s another long rant.
We will be talking about machines taking over our jobs, end-of-the-world theories, and how we can harness such power to build a better tomorrow in other posts. Feel free to comment if you liked the article or want more insight into any of these subjects. We’ll take the time to explore any requests.
Thank you for joining the conversation!
Federico Marinic is an electronic engineer who graduated with honors from UdeMM in Buenos Aires, Argentina, and also received honors from the National Engineering Academy of Argentina. He would describe himself as a curious mind who can never stop learning. He’s currently the systems manager at Carfacil. His expertise includes project development and implementation, business intelligence, and data analytics. His interests lie mainly in the tech industry (machine learning, virtual reality, and augmented reality) and in how to integrate those technologies into daily life (education, finance, politics, etc.).