Interpreting Interpretability in Algorithmic Trading

This article was written by Talia Shakhnovsky, a Financial Analyst at I Know First

“If a machine learning model performs well, why [don’t] we just trust the model and ignore why it made a certain decision?” – Christoph Molnar, author of Interpretable Machine Learning

Summary:

  • An Anecdote on Algorithmic Interpretability
  • What is Machine Learning?
  • Interpreting Interpretability
  • Is Interpretability Ever Insignificant?
  • The Importance of Algorithmic Interpretability
  • Algorithmic Trading: Interpretability in I Know First’s Forecasts

An Anecdote on Algorithmic Interpretability

Envision the near future. Self-driving vehicles roam the roads, and car accidents are a nightmare from the past. Society questions how people could have driven such dangerous machines they weren’t qualified to control.

Until, one day, a headline reads, “BREAKING: Bicyclist Dead in Hit-and-Run”. Shock runs through this new world, but everyone agrees – it must have been an exception, a bug in the code. Then, it happens again, and again.

Without an understanding of how the self-driving algorithm makes its decisions, social discussions become social turmoil. People suspect a robotic takeover, a future where machines no longer value human life.

If the algorithm could describe its decision, however, it might explain that based on its training data set, bikes have two wheels. Perhaps, these crashes were due to a new brand of bike that had bags covering the rear wheels, so the algorithm did not identify them as cyclists. With this information, the algorithm could be fixed, and lives could be saved. Without it, cars might continue to hit cyclists, and people might revert to a pre-AI world in fear of algorithms devaluing humanity.

What is Machine Learning?

We live in a rapidly expanding world of data: an estimated 90 percent of the world’s data was generated in the past two years alone. This influx is rapidly increasing the relevance of machine learning, a set of computational methods that learn from data to make and improve predictions about the world.

For instance, consider a predictive machine learning algorithm that analyzes the stock market by studying patterns in past stock activity. Machine learning algorithms work by minimizing a score, or loss function; for stock prediction, the algorithm tries to minimize the difference between the predicted price and the actual asset price. By analyzing market data and finding trends, the algorithm can suggest whether a stock will increase or decrease.
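
To make the idea of a loss function concrete, here is a minimal, hypothetical sketch in Python: a toy linear model fit to a synthetic price series by repeatedly nudging its weights to shrink the mean squared difference between predicted and actual prices. The data, model, and learning rate are invented for illustration; this is not a trading algorithm.

```python
import numpy as np

# Toy setup: predict tomorrow's price from the last three days' prices.
# The loss is the mean squared difference between predicted and actual prices.
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0.1, 1.0, 200)) + 100.0   # synthetic price series

# Build (features, target) pairs: three past prices -> the next price.
X = np.column_stack([prices[i:i - 3] for i in range(3)])
y = prices[3:]

w = np.zeros(3)   # model weights
b = 0.0           # intercept
lr = 1e-6         # learning rate (small because prices are around 100)

for step in range(5000):
    pred = X @ w + b
    error = pred - y                          # predicted minus actual price
    loss = np.mean(error ** 2)                # the quantity being minimized
    w -= lr * 2.0 * (X.T @ error) / len(y)    # gradient step on the weights
    b -= lr * 2.0 * error.mean()              # gradient step on the intercept

print(f"final loss: {loss:.4f}")
print(f"learned weights: {np.round(w, 3)}")
```

Running the loop, the loss shrinks step by step, which is all "learning" means in this narrow sense.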

Machines are faster, more reproducible, and more scalable than humans, and they now surpass humans at many complex tasks such as chess. The reasoning of machines, however, is often hidden. Models are increasingly opaque and enormously complex: understanding a single decision can mean untangling a deep neural network or the voting structure of hundreds of trees, a practically impossible task. Moreover, the best-performing models often blend several models together, making interpretation even harder.

Interpreting Interpretability

Ironically, algorithmic interpretability itself has no concrete, formal mathematical definition. Fundamentally, interpretability in machine learning is the extent to which a person can understand the cause of an algorithm’s decision: the higher a model’s interpretability, the easier it is for a human to comprehend why the model made its prediction.

In order for a model to be interpretable, the algorithm itself must generate explanations that humans can understand. A regression model, for example, is understandable because the weights of each variable highlight what the model prioritizes. Other potential explanations include if-then statements, decision trees, or natural language. An explanation must answer the question, “why?”: e.g. “Why will this stock increase?”

An example of a regression model: linear regression
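
As a concrete illustration of how a regression’s weights serve as an explanation, the hypothetical sketch below fits an ordinary least-squares model to synthetic features (the names "yesterday_return", "volume", and "momentum" are invented) and prints the fitted weights, which a human can read directly to see what the model relies on.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented features for a stock: yesterday's return, scaled volume, momentum.
X = rng.normal(size=(500, 3))
true_w = np.array([0.8, 0.1, -0.4])                  # hidden "true" relationship
y = X @ true_w + rng.normal(scale=0.1, size=500)     # synthetic next-day return

# Ordinary least squares: the weights that minimize squared prediction error.
X1 = np.column_stack([X, np.ones(len(X))])           # add an intercept column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)

for name, weight in zip(["yesterday_return", "volume", "momentum", "intercept"], w):
    print(f"{name:>16}: {weight:+.3f}")
# A large weight on yesterday_return is a direct, human-readable explanation:
# the model's predictions lean mostly on that variable.
```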

Is Interpretability Ever Insignificant?

In low-risk environments where a prediction mistake has few consequences, knowing ‘why’ may be unnecessary. For instance, interpretability is not crucial to Netflix’s movie recommendation algorithm because a bad recommendation costs little. Well-studied problems like optical character recognition also do not require interpretability, because practical experience has already established the effectiveness of the existing models; extra insights are superfluous. Finally, interpretability can let users game a model. If a loan applicant learns that performing a certain action will improve their credit score, they will change their behavior to sway the algorithm’s decision, even though their actual probability of repaying the loan has not changed. As a result, algorithmic interpretability is not always necessary, and can occasionally be undesirable.

The Importance of Algorithmic Interpretability

Interpretability involves an inherent trade-off: building in an explanation of ‘why’ a prediction was made may reduce a model’s predictive performance. On the other hand, knowing ‘why’ provides more information about the problem and the data, and can even help explain where a model might fail. Thus, for many machine learning problems, algorithmic interpretability is key.

Human curiosity is one of the key drivers of the quest for interpretable algorithms. As people, we want to learn, to ask “why?”, and to understand the unexpected; we try to find meaning in our world. Some AI systems already satisfy this need, like Amazon’s product recommendations, which explain that the suggested items are frequently bought together. People are more likely to accept algorithms into their daily lives if they can understand them, and explanations facilitate this integration. Furthermore, human curiosity includes the study of science and a thirst to learn. With big datasets and complex machine learning models, the models themselves, rather than the raw data, often become the source of insight; with interpretability, researchers can extract knowledge from the models. Consequently, interpretability is important both for satisfying human curiosity and for integrating machine learning into daily life.

Interpretability, though, has impacts far vaster than simply satisfying curiosity. Explanations of algorithmic decisions are critical for safety in applied machine learning. In the case of a self-driving car that detects cyclists, for example, an explanation may reveal that the pattern the algorithm uses to detect cyclists is the presence of two wheels. With this information, developers can account for edge cases like side bags that cover the wheels, a measure that may save lives. Interpretation helps explain errors and even suggests how to fix them.

Finally, interpretability makes it easier to check the key characteristics of good machine learning models. Models should be unbiased, and explanations can help detect biases inherited from training data: a model that automatically approves or rejects credit applications may discriminate against minorities when optimized to grant loans in a low-risk way, because the historical data itself can encode demographic discrimination. Interpretability can thus help ensure fairness, or unbiased predictions. Explanations can also indicate whether sensitive information is protected, accounting for privacy. By examining an algorithm’s decisions, a model’s creators can check that it picks up only causal relationships. Interpretability further shows whether a model is reliable: will a small change in input cause a vastly different prediction or explanation? By verifying that models are unbiased, private, causal, and reliable, interpretability helps ensure that machine learning models are ‘good.’
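
The reliability criterion above can be probed mechanically. The sketch below, which assumes the model is available as a Python callable with a scalar output, perturbs an input slightly many times and reports the worst-case change in the prediction; it is one simple stability test, not a complete audit.

```python
import numpy as np

def prediction_stability(model, x, n_trials=100, noise_scale=0.01, seed=0):
    """Crude reliability probe: does a tiny change in the input
    produce a wildly different prediction?"""
    rng = np.random.default_rng(seed)
    base = model(x)
    deviations = [
        abs(model(x + rng.normal(scale=noise_scale, size=x.shape)) - base)
        for _ in range(n_trials)
    ]
    return max(deviations)   # worst-case change caused by small perturbations

# Example with any scalar-output model exposed as a callable, e.g. the fitted
# linear model above: prediction_stability(lambda v: v @ w[:-1] + w[-1], X[0]).
# A large value for a tiny noise_scale is a warning sign of unreliability.
```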

Algorithmic Trading: Interpretability in I Know First’s Forecasts

I Know First, a globally renowned fintech company, exemplifies the use of an interpretable algorithm. I Know First’s unique AI-based, self-learning algorithmic forecast uncovers the best investment opportunities in the market, utilizing elements of deep learning, genetic programming, and chaos theory to model the markets as complex dynamic systems in which small events can trigger major changes. Every day, the system incorporates new market data, learning from past information to improve its predictive performance. Today, I Know First’s algorithm draws on over fifteen years of market data and outputs forecasts for six time horizons ranging from three days to one year.

I Know First’s AI forecasts price dynamics for over 10,500 assets, including stocks, ETFs, currencies, commodities, interest rates, and over fifty stock markets. The forecasts are presented as an easy-to-read heatmap with numeric indicators of signal and predictability. The signal indicator represents how the asset is expected to behave relative to the other assets in the forecast, with high positive numbers suggesting that the stock will surge. The predictability indicator represents how accurately the algorithm has predicted that stock’s movements in its previous forecasts.
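
To illustrate how the two indicators might be read together, here is a hypothetical sketch; the tickers, values, and field names are invented and do not reflect I Know First’s actual data format. It simply shows the idea of filtering for strong signals that have historically been predicted well.

```python
# Hypothetical forecast entries (invented tickers and values):
forecast = [
    {"ticker": "AAA", "signal": 42.3,  "predictability": 0.71},
    {"ticker": "BBB", "signal": -18.9, "predictability": 0.55},
    {"ticker": "CCC", "signal": 7.2,   "predictability": 0.23},
]

# One plausible reading: focus on assets whose expected move is strong (signal)
# and whose past forecasts have tracked reality well (predictability).
strong_buys = [entry for entry in forecast
               if entry["signal"] > 20 and entry["predictability"] > 0.5]

print(strong_buys)   # [{'ticker': 'AAA', 'signal': 42.3, 'predictability': 0.71}]
```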

While I Know First’s algorithm appears complex, it is in fact an interpretable model. Customers often ask why a certain stock appears on the buy list while a seemingly similar stock appears on the sell list. To customers, the algorithm’s outputs may seem like a black box, but the algorithm is better described as a secret recipe whose ingredients its creators can inspect. The forecast is a highly non-linear function of many interdependent variables, each with a different weight. Using sensitivity analysis and by averaging each variable’s contribution over time, I Know First’s scientists and researchers apply interpretability to isolate the important causal variables. This lets the research team gain a deeper understanding of the model’s behavior and provide better forecasts for their investors.
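
The article does not disclose the proprietary details, but sensitivity analysis in general can be sketched as follows: nudge each input variable in turn, measure how much the output moves, and average the effect over many time steps. The function name and the coupling to the earlier toy model are assumptions for illustration only, not I Know First’s implementation.

```python
import numpy as np

def sensitivity_scores(model, X, eps=1e-3):
    """Generic one-at-a-time sensitivity analysis: perturb each input variable,
    measure the change in the model's output, and average over all rows
    (here, over time steps)."""
    base = model(X)
    scores = []
    for j in range(X.shape[1]):
        X_perturbed = X.copy()
        X_perturbed[:, j] += eps
        scores.append(np.mean(np.abs(model(X_perturbed) - base)) / eps)
    return np.array(scores)   # one average sensitivity per input variable

# Example with the toy linear model sketched earlier:
# importance = sensitivity_scores(lambda M: M @ w[:-1] + w[-1], X)
# The variables with the largest scores are the ones the forecast depends on most.
```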

Conclusion: Interpretability and Algorithmic Trading

As the amount of data continues to grow and an AI-driven future draws closer, interpretability in algorithmic trading is becoming more and more important. I Know First’s algorithmic trading highlights how crucial interpretability is. As more AI algorithms are developed, their creators should look to I Know First as a leader in applying interpretability to complex models.

A brief look at I Know First’s performance: Evaluation Report for S&P 500

Further reading: Interpretable Machine Learning: A Guide for Making Black Box Models Explainable by Christoph Molnar