Review Article - (2015) Volume 4, Issue 1
Economists assess the efficiency of financial markets in absolute, all-or-nothing terms. However, this is at odds with a no-nonsense physics approach. Here, I describe how the relative efficiency of markets can be gauged by taking advantage of algorithmic complexity theory. This is not physics envy: the approach is superior because it accommodates the type of randomness actually present in complex financial markets.
Keywords: Algorithmic complexity theory; Efficient market hypothesis; Financial market efficiency; Relative market efficiency; Mild type I randomness; Wild type II randomness
If a piston engine is rated as 30 percent efficient, this implies that, on average, 30 percent of the engine's fuel is converted into useful work, with the remaining 70 percent lost as heat, light, or noise. When you buy a refrigerator, you are told how efficient it is based on this input-output ratio. That is, a machine's efficiency measures its effectiveness in transforming the energy it takes in into the work it puts out. Efficiency, in short, is the ratio of a machine's measured performance to the performance of an ideal machine. For example, the energy efficiency of refrigerators has improved over the past three decades: a typical new refrigerator uses about half the energy of a typical 1990 model. This concept of efficiency helps consumers make sensible choices. If your refrigerator is old, for instance, its stated efficiency can help you decide whether replacing it makes sense. Such a measure of efficiency is relative: whether a machine is efficient is judged against a 100-percent-efficient ideal machine that may never be built.
When economists talk about financial market efficiency, what do they mean? You are perhaps expecting something similar to the refrigerator example: efficiency measured relative to an ideal fridge. Surprisingly, relative efficiency is not considered at all; instead, economists assess efficiency in absolute terms.
A market is considered efficient if its price always fully reflects the available information. The price not only already reflects all known information, but also changes quickly to reflect new information. As a result, no one can outperform the market by using the same information that is already available to all investors, except through luck. If an investor tries to beat the market by predicting that a stock's price will rise 10 percent tomorrow based on publicly available information, the bet succeeds only if the market is failing to convey this information today. And because a financial price changes only in response to unpredictable news, it is unpredictable as well. That is, all the available information is instantly processed when it reaches the market and is immediately reflected in a new value for the price. Note that a price move conveys only nonredundant information, that of unanticipated news. Therefore, in an efficient market, price moves are random.
How do economists decide whether a market is efficient? They perform econometric tests. The technical details can be found in surveys, such as that of Beechey et al. [1], which I personally find very informative. Further information on the "efficient market hypothesis" described above is available in any financial econometrics textbook. There is a common theme throughout these studies: efficiency is an all-or-nothing problem to be solved. It is considered in absolute terms. I myself embarked on this unsatisfying line of research in the past, assessing the (absolute) efficiency of the Brazilian stock market [2].
After presenting an overview of market efficiency in their classic financial econometrics textbook, Campbell and co-authors [3] observed that (p. 24), “the notion of relative efficiency, i.e. the efficiency of one market measured against another may be a more useful concept than the all-or-nothing (absolute) view taken by much of the traditional efficiency market literature.” Furthermore, the authors made an analogy with physical systems that are usually given an efficiency rating based on the relative proportion of energy converted to work. Although the authors recognized the need for an approach based on relative efficiency, they suggested comparisons between markets, not comparing a market to the ideal market, as in the refrigerator example.
Campbell et al.'s book was published in 1997, and you are perhaps now expecting me to share the literature on relative market efficiency that followed from the authors' insight. Surprisingly, no economics literature on the subject was published thereafter.
The follow-up came from the physicists themselves: the people interested in the efficiency of refrigerators produced the next key insight in the development of a full-fledged approach to relative market efficiency. I had an "aha!" flash while presenting Chapter 2 of Mantegna and Stanley's book [4] to graduate students in my econophysics course. The authors presented the efficient market hypothesis in connection with something called "algorithmic complexity theory." My students and I then learned that a time series with a dense amount of nonredundant information presents, in practice, statistical features similar to those of a random time series. Now recall that an efficient market time series carries a dense amount of nonredundant information. The final link was then obvious to me: measuring the deviation from randomness yields the relative efficiency of a market. Actually, Mantegna and Stanley came very close to this conclusion after observing that (p. 12), "measurements of the deviation from randomness provide a tool to verify the validity and limitations of the efficient market hypothesis." After the excitement of discovering the connection between the efficient market hypothesis and algorithmic complexity theory, it came as a big surprise to me that there was practically no literature on the theme among physicists. In the end, Mantegna and Stanley were not providing a literature review of the topic: they were suggesting an avenue for research!
One student shared my interest and showed his desire to research the subject of relative efficiency. A number of papers followed from his dissertation work, co-authored with physicist colleagues and a professional statistician [5,6]. Using the interpretation based on algorithmic complexity theory, we could rank, in terms of their relative efficiency, 36 stock exchange indices; 37 individual company stocks [5]; 20 U.S. dollar exchange rates [6]; and the stocks listed on the Brazilian Bovespa index, using both daily [7] and high-frequency data [8]. The relative efficiency approach also allowed us to detect drops in the efficiency rates of the major stocks listed on the Sao Paulo Stock Exchange in the aftermath of the 2008 financial crisis [9].
To get a glimpse of the methodology we employed, consider the following three strings, each with 10 binary digits:
A 0000000000
B 0101010101
C 0110001001
You might correctly guess that A is less random, and thus less complex, than B, which in turn is less complex than C. We calculated a complexity index designed to coincide with this intuition. According to the approach, the expected information content of a series is maximized when the series is genuinely random: in that case there is maximum uncertainty and no redundancy in the series. The algorithmic complexity of a string is the length of the shortest computer program that can reproduce the string.
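To make the intuition concrete, here is a minimal sketch in Python. Because algorithmic complexity itself is uncomputable, the sketch uses the Lempel-Ziv phrase count (the Kaspar-Schuster scan) as a computable stand-in; it is an illustration of the idea, not the exact index computed in the papers cited above.

```python
# Illustrative only: a computable stand-in for algorithmic complexity.
# The Lempel-Ziv measure counts the distinct phrases needed to build the
# string from its own past; more phrases means less redundancy and hence
# higher complexity.

def lempel_ziv_complexity(s: str) -> int:
    """Number of Lempel-Ziv phrases in s (Kaspar-Schuster scan)."""
    n = len(s)
    c, l, i, k, k_max = 1, 1, 0, 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            if k > k_max:
                k_max = k
            i += 1
            if i == l:            # no longer match found in the prefix:
                c += 1            # close the current phrase ...
                l += k_max        # ... and start a new one
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

if __name__ == "__main__":
    for label, s in [("A", "0000000000"),
                     ("B", "0101010101"),
                     ("C", "0110001001")]:
        print(label, s, "->", lempel_ziv_complexity(s))
    # A yields the smallest count and C the largest, matching the intuition.
```

Normalizing such a count by the value expected for a random string of the same length turns it into a number between roughly 0 and 1 that can be read as a relative efficiency.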
What do our results mean? Are financial prices predictable after all? Do the results mean you can now beat the market? If not, how can they still be useful?
In our results, we did not find any financial price to be 100 percent efficient, taking the ideal market as the benchmark. Within algorithmic complexity theory, a series is unpredictable if the information embodied in it cannot be reduced to a more compact form: the best algorithm reproducing the original series has the same length as the series itself. Thus, when we find a market less than 100 percent efficient, this means some degree of "compression." At first glance, this would also seem to mean some degree of predictability, but that is not the correct way to interpret the result. To see why, further information on the meaning of randomness is necessary.
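First, though, a toy illustration of what "compression" means for a price series. The sketch below binarizes a series into up and down moves and compares how well a general-purpose compressor (zlib) shrinks the resulting symbol string relative to a random benchmark; the function names, the zlib proxy, and the synthetic series are illustrative choices of mine, not the procedure used in the papers cited above.

```python
# Toy illustration of "compression" as a gauge of relative efficiency.
# A price series is binarized into up (1) / down (0) moves and the zlib
# compressor serves as a crude, computable proxy for incompressibility.
import random
import zlib

def compressed_size(bits: str) -> int:
    """Size in bytes of the zlib-compressed symbol string."""
    return len(zlib.compress(bits.encode("ascii"), level=9))

def relative_efficiency(prices: list) -> float:
    """Compressed size of the up/down sequence divided by that of an equally
    long random sequence. Values near 1: nearly incompressible, hence close
    to the ideal market; lower values: redundant, hence less efficient."""
    bits = "".join("1" if b > a else "0" for a, b in zip(prices, prices[1:]))
    benchmark = "".join(random.choice("01") for _ in range(len(bits)))
    return compressed_size(bits) / compressed_size(benchmark)

if __name__ == "__main__":
    random.seed(42)
    n = 20000
    # A pure random walk: its up/down sequence is essentially incompressible.
    walk = [0.0]
    for _ in range(n):
        walk.append(walk[-1] + random.gauss(0, 1))
    # A persistent series: each move repeats the previous direction with
    # probability 0.9, so the sign sequence is redundant and compressible.
    persistent, direction = [0.0], 1.0
    for _ in range(n):
        if random.random() > 0.9:
            direction = -direction
        persistent.append(persistent[-1] + direction * abs(random.gauss(0, 1)))
    print("random walk :", round(relative_efficiency(walk), 2))
    print("persistent  :", round(relative_efficiency(persistent), 2))
```

In this toy setting the random walk scores close to 1 while the persistent series scores lower, which is the sense in which a less-than-100-percent-efficient market admits some compression.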
When the efficient market hypothesis was formulated in 1965 by Paul Samuelson, he proved mathematically that properly anticipated prices fluctuate randomly. This randomness refers to a fair game, a game played with fair dice. The technical term is “martingale.” Andrey Kolmogorov and Gregory Chaitin independently developed algorithmic complexity theory at the same time as the application of the martingale to economics. However, despite the connection between the theories, the type of randomness implicit in algorithmic complexity is different.
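For concreteness, Samuelson's fair-game property can be written as a martingale condition on the price. The notation below (price $P_t$ and information set $\Omega_t$ at time $t$) is mine, not the article's:

```latex
\mathbb{E}\left[\, P_{t+1} \mid \Omega_t \,\right] = P_t
\qquad\Longleftrightarrow\qquad
\mathbb{E}\left[\, P_{t+1} - P_t \mid \Omega_t \,\right] = 0 ,
```

that is, given everything currently known, the expected price change is zero, so no bet based on that information has an expected gain.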
Efficient market theory assumes what Nassim Taleb called "type I randomness" [10]. This occurs (p. 32) "when your sample is large, [and] no single instance will significantly change the aggregate or the total. The largest observation will remain impressive, but eventually insignificant, to the sum." The economics literature implicitly models financial data with a Gaussian distribution. Even when a series contains a data point that can affect the total, that is, an extreme event, it is ruled out as an "outlier." Economists thus impose mild type I randomness on the data because the theory assumes a fair game played with fair dice. Taleb calls this the "ludic fallacy" (p. 303): treating the randomness of actual financial markets as if it were the randomness of the narrow world of games and dice. An additional layer of uncertainty concerning the rules of the game in real life is missing. Economists are gambling with the wrong dice. In contrast, the algorithmic complexity approach accommodates wild type II randomness, in which a single observation can disproportionately impact the aggregate or the total. Fat-tailed distributions (here you can Google for a Paretian distribution) replace the Gaussian, and there is no need to rule out extreme events (what Taleb called "black swans"). With algorithmic complexity, in principle, no past observation of a series will be dismissed as an outlier, and therefore the randomness considered is of the right type.
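The distinction between mild and wild randomness is easy to see numerically. The sketch below compares the share of the total accounted for by the single largest observation in a thin-tailed (Gaussian) sample and in a fat-tailed (Pareto) sample; the sample size and the Pareto shape parameter are arbitrary choices made only for illustration.

```python
# Mild (type I) versus wild (type II) randomness: how much of the total
# can a single observation account for?
import random

def max_share(sample):
    """Fraction of the sum accounted for by the largest observation."""
    return max(sample) / sum(sample)

if __name__ == "__main__":
    random.seed(1)
    n = 100_000
    gaussian = [abs(random.gauss(0, 1)) for _ in range(n)]   # thin tails: mild
    pareto = [random.paretovariate(1.5) for _ in range(n)]   # fat tails: wild
    print("Gaussian: largest observation = {:.4%} of the total".format(max_share(gaussian)))
    print("Pareto  : largest observation = {:.4%} of the total".format(max_share(pareto)))
```

In the Gaussian sample the largest observation is a negligible fraction of the total; in the Pareto sample it is far larger, which is what "one single observation can disproportionately impact the aggregate" means.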
However, the approach still falls for the ludic fallacy, because it treats financial markets as a game, though one played with loaded dice. For example, when the S&P 500 index is 99.1 percent efficient and the Brazilian Bovespa index is 67.8 percent efficient [6], investors may decide to expose themselves to some Bovespa stocks because Goddess Fortuna is likely to favour them more there than in the S&P 500. Investors could make more unpredictable gains in the Bovespa than in the S&P 500 because the dice are more loaded in the Brazilian market. So the relative efficiency approach is able to track what Taleb called "gray swans." This is reassuring, at least for me, because in principle, black swans cannot be captured by models calibrated on past data. According to Taleb, extreme events in complex series such as those of financial prices are not amenable to modeling. But at least you can stop assuming all swans are white, as if Australia had not yet been discovered. Black swans do exist in Australia.