Machine Learning Applications: The Good, The Bad, and The Unrealistic

The Latest Innovations and Controversies of Machine Learning

Up until the 1950s, computers needed incredibly detailed commands, written in highly specific languages, to accomplish even the simplest of tasks. By 1959, Arthur Samuel had developed his checkers-playing program, a computer game that became one of the world's first self-"learning" programs. Rather than pre-program the computer with the moves to execute, Samuel instead engineered a machine that would teach itself to play against its human opponents. Over time, the computer would solve the problems posed to it entirely independently of its creator.

From this simple game of checkers came the foundations for the artificial intelligence industry, and “Machine Learning” was born.

How Machine Learning Augments Our Processes

Machine learning is a branch of computer science focused on building systems that learn from data rather than from explicitly programmed rules, allowing them to closely imitate certain human thought processes.

By leveraging statistical data and predictive learning algorithms, machine learning technologies can now surpass certain capabilities of the human brain. This branch of intelligent tech builds models from historical examples and uses them to predict the outcome of tasks that would be difficult or infeasible to solve by hand (e.g., email filtering, intrusion detection, weather forecasting, and even how long your current relationship will last).
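To make "learning from examples" concrete, here is a minimal sketch of a toy email filter, assuming Python with scikit-learn; the messages and labels are invented purely for illustration, not taken from any real system:

```python
# A toy "email filtering" model: learn from labeled examples, then predict.
# Requires scikit-learn; the messages and labels below are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "Win a free cruise now",         # spam
    "Claim your prize money today",  # spam
    "Meeting moved to 3pm",          # not spam
    "Lunch tomorrow?",               # not spam
]
labels = ["spam", "spam", "ham", "ham"]

# Turn raw text into word-count features the algorithm can learn from.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)

# Fit a simple Naive Bayes classifier on the labeled examples.
model = MultinomialNB()
model.fit(features, labels)

# Predict the label of a message the model has never seen.
new_message = vectorizer.transform(["Free prize, claim now"])
print(model.predict(new_message))  # likely ['spam']
```

The point is the workflow, not the algorithm: the program is never told what makes a message spam; it infers that from the labeled examples it is given.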

Tech, Made by Humans, to Improve Human Life

Machine learning's main advantage is that it can quickly produce reliable, repeatable results and surface relevant correlations across a massive range of data points.

To be fair, humans can certainly build their own predictive models and produce logical predictions; however, we struggle to consume and synthesize the massive amounts of data available to us, and our computation time is considerably longer than a machine's because our calculations are manual. For this reason, machine learning has been increasingly adopted in the medical field, and it is reshaping the way that healthcare professionals provide care for their patients.

Google, for instance, has developed a machine learning algorithm that detects cancer with 89% accuracy, a rate that surpasses the 73% averaged by doctors. Google's research "show[s] that it was possible to train a model that either matched or exceeded the performance of a pathologist who had unlimited time to examine the slides."

A company called Face2Gene is leveraging machine learning to offer more proactive diagnosis and treatment options. Through next-generation phenotyping, Face2Gene has developed an application that can recognize genetic disorders (in patients as young as infants) from a simple facial photo upload. The app then automatically calculates "anthropometric growth charts" and suggests "likely phenotype traits" to better inform patients and healthcare providers about causation and future treatment.

But machine learning doesn't just use its "sight" to help humans view data more accurately; it can now also "listen" attentively to human voice commands.

Tech that Can See and Hear Us

Voice recognition platforms like Siri, Cortana, and Alexa have quickly become household staples, allowing us to ask the tech almost any conceivable question (basic trivia, local news, the current buy-one-get-one-free deal at the local Papa John's, and so on).

Speech recognition software has become so advanced in recent years that computer scientist and AI researcher Andrew Ng predicts that "as speech recognition goes from 95% accurate to 99% accurate, it will become a primary way that we interact with computers." He believes that, with time, machine learning will ultimately replace our heavy reliance on physical keyboards and touchpads. Although machine learning is still working its way toward that 99% accuracy goal, people with physical disabilities have already found speech recognition to be a life-changing mitigation for many of the challenges they face day to day. For individuals with certain physical limitations, dictation-to-text software is a convenient solution to their most basic typing and communication needs. More recently, machine learning and speech recognition technologies have also been applied to real-time, speech-controlled systems for prosthetic arm control, with the hope that engineers can one day return the "normal function" of a limb to an amputee.
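As a rough illustration of what dictation-to-text looks like in code, here is a minimal sketch assuming Python and the third-party SpeechRecognition package; the audio file name is hypothetical, and the transcription is handed off to a hosted recognizer:

```python
# A minimal dictation-to-text sketch using the SpeechRecognition package.
# "dictation.wav" is a hypothetical recording; install with: pip install SpeechRecognition
import speech_recognition as sr

recognizer = sr.Recognizer()

# Load the recorded dictation from disk and capture it as audio data.
with sr.AudioFile("dictation.wav") as source:
    audio = recognizer.record(source)

try:
    # Send the audio to a hosted recognizer and print the transcribed text.
    text = recognizer.recognize_google(audio)
    print(text)
except sr.UnknownValueError:
    print("Could not understand the audio.")
```

Commercial dictation products add far more (custom vocabularies, punctuation commands, on-device models), but the basic pipeline is the same: capture audio, run it through a trained recognizer, and return text.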

While speech recognition alleviates physical limitations, it can also become a pedagogical supplement for learning disability instruction. For instance, individuals with dyslexia (a learning disability that affects the way people interact with the mechanics of language) can dictate their speech to a machine learning platform, where the words will be correctly spelled on the screen before them. This combined audio and visual aid cultivates learning and reinforces "reading [as well as] recognizing and processing letters, words, and their sounds." For older dyslexic individuals operating in the corporate world, platforms like Dragon Dictate can augment their productivity and remove the stress of the writing process, allowing them to simply "focus on the substance of their writing."

So far, we have touched on only two genres of machine learning solutions, and we've barely scratched the surface. When it comes to machine learning and AI, the possible applications seem unending. Even amid this excitement, however, it is important to understand the technology's limitations as well.

When Machine Learning Doesn’t Succeed

Despite its grand successes, machine learning is not a silver bullet. While many envision that machine learning can reach a point where it "does all of our thinking for us," current findings suggest otherwise.

Ben Goertzel of the OpenCog Foundation has researched the "pathological misidentifications" that machine learning commits under certain circumstances in random image tests. He and other researchers discovered that they could construct images that look random to the human eye but that, due to a few outlying data points, were classified by the technology as "particular kinds of objects with high confidence." In some cases, these nonsense images were determined by the software to "look exactly like a frog or a cup." Goertzel stressed the ease with which a single perturbation of the data could cause machine learning to completely misidentify the input (this becomes especially troubling with facial recognition applications). Goertzel concludes that machine learning can closely imitate facets of the human brain, but, ironically enough, it lacks the human skill of generalization.
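To see the kind of fragility Goertzel describes, here is a self-contained toy sketch (not his experiment): a linear classifier with made-up weights flips from "not a frog" to "frog" when its input is nudged along the gradient direction. In a real image classifier the same effect shows up with far smaller per-pixel changes, because the perturbation is spread across thousands of dimensions:

```python
# Toy illustration of how a targeted perturbation flips a classifier's output.
# The weights and input are invented purely for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights for a binary "frog vs. not-frog" classifier.
w = np.array([2.0, -3.0, 1.5, 0.5])
b = -0.2

# An input the model scores well below 0.5, i.e. "not a frog".
x = np.array([0.1, 0.4, -0.2, 0.3])
print("original score: ", sigmoid(w @ x + b))   # ~0.21

# Nudge the input along the gradient of the score with respect to the input;
# for a linear model that direction is simply sign(w).
epsilon = 0.5
x_adv = x + epsilon * np.sign(w)
print("perturbed score:", sigmoid(w @ x_adv + b))  # ~0.90, now "frog"
```

The model has not learned what a frog is in any human sense; it has learned a decision boundary, and a perturbation chosen with knowledge of that boundary can push an arbitrary input across it.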

Anthony Ledford of Man AHL also stresses the pitfalls of machine learning. Ledford and his fellow researchers spent three long years "building a machine-learning model to do something mere mortals often can't: find fresh ideas in an avalanche of data." But this time, they applied their efforts to predicting market and hedge fund trends in order to solve for "underperformance." Unfortunately, at the end of those three years of research, Ledford concluded that machine learning:

… which learns on its own to find investment ideas by hunting through troves of data, requires a heavy commitment of time and money, and a high tolerance for failure … Hedge funds need a team of scientists and researchers and many months, even years, to shepherd a single algorithm from development and testing to live trading. That’s because most algorithms fizzle out along the way.

Ledford pointed out the rampant, unrealistic expectations of end users. Technology is only as good as its programming and its inputs, and we have yet to reach a level of innovation where machine learning can "do everything for us."

There is still a long way to go.

How to Use Intelligent Technology the Smart Way

As we have seen, machine learning can be a highly finicky process, and, contrary to the science fiction fantasies of many consumers, it is certainly not a system that can be wound up and simply left to run on its own. A great deal of time, effort, expertise, and money is required to prepare the technology to produce the results that are needed. But when those conditions are met, we have also witnessed the profound impact of its capabilities.

Of course, to process the type of information above, you also need the right technology. HPE's Apollo series, combined with NVIDIA GPUs, gives you the compute to apply machine learning and AI algorithms in a powerful yet manageable way.

Ready to take your first steps into Machine Learning? Contact TSA to design the right environment.