So, it seems that this time around, Artificial Intelligence (AI), these days discussed mostly under the banner of Machine Learning (ML), might actually go somewhere. In fact, it already is.
First, there is IBM’s Watson, which appears to converse with cool people on Super Bowl spots. Then there is the fact that the results of your Internet searches are getting more fluid and more, well, human. Financial institutions are using AI to sift through large amounts of data and detect patterns of suspicious activity, sharply improving their fraud detection.
Even digital health startups are using Machine Learning to comb through vast troves of patient data and predict, very accurately, which patients in the ICU are likely to get into real trouble - i.e. close to death - 24 hours in advance. Health care professionals find this at once helpful (which patients should I focus on?) and frustrating (the software cannot say anything about why the patient is about to tank, just that she is likely to). Humans love to know the “why,” not just the “what”; causality is hardwired into our brains.
Charting Artificial Intelligence Trends
Now, AI has been around for a very long time, and as technology hype cycles go, we’ve been in the trough for a while. It came and went in the ’70s, ’80s, and ’90s, usually fueled by Federal research funding. Its breakthrough this time is due to a combination of three things.

First, computing power has become cheap and abundant; the processing and storage that ML workloads demand, once the province of research labs, is now a commodity.
Second, the Internet has provided developers and users access to very, very, very large amounts of data, which can be used to “train” an ML program (more on this in a bit). Access to large amounts of data, coupled with the third factor, helped drive ML forward.
Third, researchers have improved the models they use, leaning on a combination of old-school statistical techniques (e.g. Bayesian and Gaussian theory) and new ones (e.g. neural networks).
There is a fourth factor, the one that inevitably sets in when the hype around a new technology dies down: folks’ expectations for ML are much more modest than they used to be. The fact is that ML can sift through the first 80% of a large data set and make good sense of it; the last 20% is nearly impossible for the software and much better left to humans. Researchers understand this now.
How do Machine Learning Solutions Work?
Broadly speaking, Machine Learning works in four ways:
| Method | How it works |
| --- | --- |
| Supervised learning | The computer is “trained” with data sets that include the desired outcome of the analysis (e.g. a customer with a particular financial profile should get this credit score). |
| Unsupervised learning | The computer is not fed historical labels; instead, it explores the data set on its own, looking for patterns. When you are on a consumer website, the “Recommended for you” listing may have been organized using unsupervised learning. |
| Reinforcement learning | The computer acts as an agent operating in an environment (what it is learning about) with a set number of possible actions. This method is often used in robotics, where a robot must learn the most efficient way to accomplish a task. |
| Deep learning | Deep learning uses neural nets to process very large amounts of data - e.g. for facial recognition - and identify patterns. |
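To make the supervised row above concrete, here is a minimal sketch in plain Python: a one-nearest-neighbor classifier “trained” on a handful of hypothetical (income, debt) customer profiles with known credit ratings. The data, names, and the choice of nearest-neighbor are illustrative assumptions, not a reference to any particular product or library.

```python
import math

def nearest_neighbor_predict(train, query):
    """Return the label of the training example closest to the query.

    train: list of (features, label) pairs, where features is a numeric tuple.
    query: a feature tuple to classify.
    """
    def dist(a, b):
        # Euclidean distance between two feature tuples
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # The "training" step here is trivial: keep the labeled examples and,
    # at prediction time, copy the label of the nearest one.
    _, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Hypothetical labeled data: (income, debt) -> credit rating
training_data = [
    ((90, 10), "good"),
    ((80, 20), "good"),
    ((30, 70), "poor"),
    ((20, 80), "poor"),
]

print(nearest_neighbor_predict(training_data, (85, 15)))  # prints "good"
print(nearest_neighbor_predict(training_data, (25, 75)))  # prints "poor"
```

The point of the sketch is the supervised pattern itself: the desired outcome (the credit rating) is part of the training data, and the program generalizes from those labeled examples to new, unlabeled ones. Unsupervised learning, by contrast, would receive only the feature tuples and have to discover the two clusters on its own.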
The last 20% that I mentioned a couple of paragraphs ago should comfort those who are concerned that if machines get smart (and autonomous), there will be nothing left for us to do. McKinsey pointed out that about 45% of the work activities underpinning our GDP could be automated, and that even about 20% of the tasks of highly skilled individuals (CEOs, surgeons, and such) are clerical. But that last 20% is what will keep humans in business and gainfully employed. And how wonderful it would be to be freed of the clerical tasks that take up a chunk of my workday.
So, Siri may want to talk to you, but you needn’t worry: you’re not going anywhere soon. If you want to learn more, contact us today!