Investigating Chaos on the Johannesburg Stock Exchange

This study investigates the existence of chaos on the Johannesburg Stock Exchange (JSE), examining three indices: the FTSE/JSE All Share, the FTSE/JSE Top 40 and the FTSE/JSE Small Cap. Building on the Fractal Market Hypothesis to provide evidence on the behaviour of the returns time series of these indices, the study applies the BDS test for non-random chaotic dynamics and the rescaled range analysis to ascertain randomness, persistence or mean reversion on the JSE. The BDS test shows that none of the indices examined exhibits randomness. The FTSE/JSE All Share and FTSE/JSE Top 40 exhibit slight mean reversion, whereas the FTSE/JSE Small Cap exhibits significant persistence and appears less risky than the FTSE/JSE All Share and FTSE/JSE Top 40, contrary to the assertion that small cap indices are riskier than large cap indices.


Introduction
Financial crises, such as the ones that occurred in 1987, 1998, 2000 and, most recently, 2007, have been brushed off as anomalies by proponents of the Efficient Market Hypothesis (EMH), who maintain that markets remain informationally efficient. However, the frequency with which these crises occur cannot be explained by the underlying assumptions of an efficient market. A study by Bendel, Smit and Hamman (1996) examined the behaviour of stock market time series using a variety of indices, with somewhat mixed results across indices. Nevertheless, evidence of long-run persistence in overall share returns was observed, suggesting that future returns are influenced by past returns at least in the long term (Bendel, Smit & Hamman, 1996), which motivates further interrogation of the behaviour of share returns in modern economies. Classical finance theory is based, inter alia, on the assumptions of rational investors, informationally efficient markets and market equilibrium. Equilibrium implies the nonexistence of emotional forces like greed and fear, which trigger the economy to evolve and to adjust to new conditions. Regulating such human tendencies is desirable to minimise their effects, but doing away with them "would take away the life out of the system, including the far from equilibrium conditions that are necessary for development" (Peters, 1996: 5).
This study applies the BDS test as described by Brock, Dechert and Scheinkman (1996) to test for the null hypothesis that the return series of the selected indices are pure noise or completely random. The BDS test, inter alia, has the ability to identify different kinds of deviations from randomness be it non-linear or linear stochastic processes and deterministic chaos. The BDS test is the most popular test for non-linearity and was originally created to test for the null hypothesis of independent and identical distribution (iid) aimed at identifying non-random chaotic dynamics (Zivot & Wang, 2006). The study further applies the rescaled range analysis developed by Hurst (1951) to detect persistence, mean reversion or randomness on the Johannesburg Stock Exchange (JSE) with the aim of providing more adequate assumptions and consequently more realistic models of financial behaviour on the JSE. Closely related to the rescaled range analysis is the Hurst exponent, which is indicated by H, sometimes referred to as 'the index of dependence', which measures three kinds of trends in a given time series, namely, mean reversion, persistence and randomness. The rescaled range analysis was widely used in financial analysis when the application of chaos theory in financial analysis was popular in the early 1990s (Voss, 2013).
As risk remains a fundamental consideration in any investment strategy, an appropriate evaluation of risk based on empirical evidence rather than theoretical postulation will provide practitioners a more comprehensive understanding of risk. Moreover, with the use of fractal statistics, it would be possible to improve financial risk models and provide an alternative account of financial markets that differs from the neoclassical assumptions of equilibrium, rationality, perfect markets and the mathematical hypotheses of continuity and symmetry. Chaos theory and fractal science offer a description of the messiness and the fractal character of financial markets and provide a sufficient perspective, as well as the mathematical tools, to analyse them. These tools will benefit finance theory by offering more suitable and realistic assumptions and models of financial market behaviour. This study is conducted on the time series of selected indices on the JSE in South Africa (FTSE/JSE All Share, Top 40 and Small Cap). The JSE is the 19th largest stock exchange in the world by market capitalisation and the largest and oldest stock exchange in Africa, established in 1887 during the first gold rush in South Africa, with 383 listed companies and $997.17 billion in market capitalisation as at June 2016 (JSE, 2013; World Federation of Exchanges, 2016). South Africa is ranked number one in securities exchange regulation out of 144 countries according to the World Economic Forum's 2014-2015 Global Competitiveness Index Survey, the fifth consecutive year the JSE has held this position; South Africa is also ranked third in the ability to raise capital through the local equity market, third in the effectiveness of corporate boards, and second in protecting the rights of minority shareholders (African Securities Exchanges Association, 2016).

Literature Review
As financial crises become pervasive, the assumption of efficient markets is increasingly being criticised. Velasquéz (2009) proposes adapting chaos theory and fractal science to explain financial phenomena. Chaos theory is the study of systems that appear to follow random behaviour even though they are actually part of a deterministic process; the apparent randomness arises from their characteristic sensitivity to initial conditions, which leads the system to unpredictable dynamics. One of the founders of chaos theory, Edward Lorenz, summarises this theory elegantly: "Chaos: when the present determines the future, but the approximate present does not approximately determine the future" (Hand, 2014: 45). Financial markets are non-linear dynamic systems characterised by positive feedback and fractals, and therefore "what happened yesterday influences what happens today" (Peters, 1996: 9). Peters (1996) therefore proposed the Fractal Market Hypothesis (FMH) for modelling financial markets. Benoit Mandelbrot, regarded as the father of fractal geometry, first discovered the distinguishing characteristics of fractals in financial time series, but many economists rejected his ideas, so he lost interest in fractals in finance and turned to physics, where he developed the fractal geometry of nature (Velasquéz, 2009). Mandelbrot observed that the variance of prices misbehaved, culminating in abnormally large changes. This behaviour manifested in "fat-tailed", high-peaked distributions, which commonly followed a power law, implying that the graph does not descend toward zero as strikingly as a Gaussian curve. However, the most distinctive property was that these leptokurtic (fat-tailed and high-peaked) distributions appeared unchanged irrespective of time scale (weekly, monthly or yearly). Mandelbrot therefore concluded that "the very heart of finance is a fractal" (Mandelbrot & Hudson, 2005: 147).
With the underlying classical assumptions of financial market behaviour being heavily criticised, Buchanan (2013) suggests adopting a disequilibrium view of financial markets, claiming that under this view the crashes of 6 May 2010, of October 1987 or of 2007-2008 were no more abnormal than the March 2011 earthquake in Japan or the April 1906 quake in San Francisco. Market economies are self-referential, self-propelling systems intensely driven by expectations and perceptions, and these systems regularly foster explosive amplifying feedbacks. Buchanan (2013) asserts that it is not easy to foretell the instant when a bubble will collapse, and that equilibrium economics has resolved, therefore, that bubbles do not exist. A classic example is the refusal of Eugene Fama to admit to the existence of bubbles, for instance in an interview in November 2013 on National Public Radio's Planet Money. Fama states that the word 'bubble' drives him crazy, given that there is nothing to prove that anyone can predict when prices will go down, claiming that markets work and so bubbles cannot be predicted (NPR, 2013). The first comprehensive research on daily stock returns was done by Fama (1965), who discovered that stock returns were negatively skewed, with more observations in the left-hand tail than in the right-hand tail. Furthermore, the tails appeared fatter and the peak around the mean was higher than the normal distribution predicted. According to Corhay & Rad (1994), empirical findings reveal the existence of nonlinear dependencies that the random walk model fails to explain. Sterge (1989), in an additional study of the financial futures prices of treasury bonds, treasury notes and Eurodollar contracts, finds the same leptokurtic distributions, noting that "very large (three or more standard deviations from the norm) price changes can be expected to occur two to three times as often as predicted by normality."
McLean & Pontiff (2016) studied the return predictability of 97 factors that academic studies have shown to predict the cross-section of stock returns, using out-of-sample and post-publication data, and found that factors lose 26% of their predictive power after discovery. This may be attributed, inter alia, to the effects of data mining. Factors lose a further 32% of their predictive power after they appear in academic papers, suggesting that investors learn about these mispricings only after they are published. British hydrologist H.E. Hurst published a paper in 1951 titled "Long-Term Storage Capacity of Reservoirs", which dealt with the modelling of reservoirs, written while he was trying to find a way to model the levels of the river Nile so that engineers could build a reservoir of appropriate size (Peters, 1996). This work by Hurst paved the way for a statistical methodology that distinguishes random from non-random systems and identifies the persistence of trends, a methodology referred to as rescaled range analysis (R/S analysis) (Mansukhani, 2012). While researching the fractal nature of financial markets, Mandelbrot chanced on Hurst's work, recognised its potential, and introduced it to fractal geometry (Mansukhani, 2012). The Hurst exponent measures the long-term memory of a time series. The exponent relates to the autocorrelations of a given time series and the rate at which such autocorrelations diminish as the lag between pairs of values increases. According to Peters (1996), a higher value of H depicts less noise, more persistence and a more distinct trend than lower values, with higher values showing less risk, albeit exhibiting abrupt changes.
On the JSE, Jefferis and Smith (2005), adopting a GARCH methodology with time-varying parameters and employing a test of evolving efficiency (TEE) over the period 1990 to 2001, concluded that the JSE is weak form efficient. Adelegan (2003, 2009) finds the JSE to be informationally inefficient, by testing the reaction of market participants to changes in the dividend policies of listed firms. Smith (2008), however, rejects the random walk hypothesis on the JSE, using tests of four joint variance ratios. In the following section, we describe the data and the methodology for conducting the BDS test and deriving the Hurst exponent. Section 3 provides the results and discussion of our findings. Section 4 concludes the paper, and section 5 provides the list of figures referred to in section 3.

Methodology
This section discusses the data selected for the study and the methodology the study adopts in testing for non-linearity and chaos on the JSE.

Data:
The data for this study were obtained from the database of McGregor BFA, based in Johannesburg, South Africa. McGregor is a prominent provider of stock exchange and accounting data to firms and researchers, with standardised financial data from 1972 to date for all companies and industries on the JSE. This study investigates the fractal nature of the JSE over the period 15 June 1995 to 12 November 2014. The indices investigated are the daily returns of the FTSE/JSE All Share (J203), which represents 99% of the full market capitalisation of all eligible shares listed on the main board of the JSE; the FTSE/JSE Top 40 (J200), which represents the largest 40 companies on the JSE ranked by market capitalisation; and the FTSE/JSE Small Cap (J202), which consists of all the companies remaining after the selection of the Top 40 and mid cap companies. The study takes 8 cycles of sub-samples from the full sample of n = 4840: the first cycle uses the full sample, the second cycle uses 2 sub-samples of n = 2420, and so on, down to 20 sub-samples of n = 242.

The BDS Test:
The correlation integral is the main concept behind the BDS test (Zivot & Wang, 2006). The correlation integral measures how frequently temporal patterns are repeated in a given time series. The BDS test is designed to detect non-linear dependence (Oppong et al., 1999). For a given time series $x_t$, $t = 1, 2, \ldots, T$, with m-history $x_t^m = (x_t, x_{t-1}, \ldots, x_{t-m+1})$, we can estimate the correlation integral at embedding dimension m by:

$$C_{m,\epsilon} = \frac{2}{T_m (T_m - 1)} \sum_{m \le s < t \le T} \ \prod_{j=0}^{m-1} I(x_{t-j}, x_{s-j}; \epsilon)$$

where $T_m = T - m + 1$ and $I(x_{t-j}, x_{s-j}; \epsilon)$ is an indicator function equal to 1 if $|x_{t-j} - x_{s-j}| < \epsilon$ for $j = 0, 1, \ldots, m-1$, and zero otherwise. Intuitively, the correlation integral estimates the probability that any two m-dimensional points are within a distance $\epsilon$ of each other. That is, it estimates the joint probability:

$$\Pr(|x_t - x_s| < \epsilon,\ |x_{t-1} - x_{s-1}| < \epsilon,\ \ldots,\ |x_{t-m+1} - x_{s-m+1}| < \epsilon)$$

If the $x_t$ are iid, this probability must equal $C_{1,\epsilon}^m$. Brock et al. (1996) define the BDS statistic as:

$$V_{m,\epsilon} = \sqrt{T}\,\frac{C_{m,\epsilon} - C_{1,\epsilon}^m}{s_{m,\epsilon}}$$

where $s_{m,\epsilon}$ is the standard deviation of $\sqrt{T}(C_{m,\epsilon} - C_{1,\epsilon}^m)$ and can be consistently estimated, as documented by Brock et al. (1996). Under fairly moderate regularity conditions, the BDS statistic converges in distribution to the standard normal: $V_{m,\epsilon} \xrightarrow{d} N(0,1)$. One advantage of the BDS test is that it requires no distributional assumptions on the series to be tested.
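The correlation-integral logic behind the test can be sketched in Python. This is an illustrative sketch, not the authors' code or a full BDS implementation (it omits the variance estimator $s_{m,\epsilon}$); the simulated series, sample size and choice of $\epsilon$ are assumptions for demonstration only.

```python
import numpy as np

def correlation_integral(x, m, eps):
    """Fraction of pairs of m-histories lying within eps of each other
    (max-norm), following the definition of C_{m,eps} in the text."""
    T = len(x)
    Tm = T - m + 1
    # Row i of H is the m-history (x[i], x[i+1], ..., x[i+m-1]).
    H = np.column_stack([x[j:j + Tm] for j in range(m)])
    count = 0
    for i in range(Tm - 1):
        # Max-norm distance between history i and all later histories.
        d = np.max(np.abs(H[i + 1:] - H[i]), axis=1)
        count += np.sum(d < eps)
    return 2.0 * count / (Tm * (Tm - 1))

# For an iid series, C_{m,eps} should be close to C_{1,eps}^m.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)          # iid Gaussian noise (illustrative)
eps = 0.5 * np.std(x)                 # a common, but arbitrary, choice
c1 = correlation_integral(x, 1, eps)
c2 = correlation_integral(x, 2, eps)
print(round(c1, 3), round(c2, 3), round(c1 ** 2, 3))
```

Under the iid null, `c2` is close to `c1 ** 2`; a large standardised gap between the two is what the BDS statistic detects.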

The Hurst Exponent:
In proposing the FMH, Peters (1994) applied a modified rescaled range (R/S) procedure, which was pioneered by Hurst (1951). Peters (1994) and Howe, Martin & Wood (1997) review the steps for computing the R/S analysis. First, the JSE index series is converted into logarithmic returns, $S_t$, at time period $t$. Using raw daily price data has many limitations, because prices are generally non-stationary (Mehta, 1995) and therefore interfere with estimating the Hurst exponent; converting the series into logarithmic returns overcomes this problem. In line with Peters (1994), the study divides the time period into $A$ sub-periods of length $n$, so that $A \times n = N$, with $N$ being the length of the series. Each sub-period is labelled $I_a$, where $a = 1, 2, 3, \ldots, A$, and each element in $I_a$ is labelled $S_{k,a}$, where $k = 1, 2, 3, \ldots, n$. The average value $e_a$ for each sub-period $I_a$ of length $n$ is defined as:

$$e_a = \frac{1}{n} \sum_{k=1}^{n} S_{k,a}$$

The accumulated divergence from the mean within each sub-period is

$$X_{k,a} = \sum_{i=1}^{k} (S_{i,a} - e_a)$$

and the range is given as the maximum minus the minimum value of $X_{k,a}$ within every sub-period:

$$R_{I_a} = \max_{k}(X_{k,a}) - \min_{k}(X_{k,a})$$

Each range is divided by the sample standard deviation that corresponds to it, to normalise the range. The standard deviation is given as:

$$S_{I_a} = \sqrt{\frac{1}{n} \sum_{k=1}^{n} (S_{k,a} - e_a)^2}$$

The mean R/S value for length $n$ is given as:

$$(R/S)_n = \frac{1}{A} \sum_{a=1}^{A} \frac{R_{I_a}}{S_{I_a}}$$

Finally, an OLS regression is applied with log(R/S) as the dependent variable and log(n) as the independent variable. The Hurst exponent, H, is obtained from the slope coefficient of the regression. An H of 0.5 means the series under investigation exhibits characteristics in line with the random walk hypothesis. An H greater than 0.5 denotes persistence, while an H lower than 0.5 denotes anti-persistence.
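The R/S steps above can be sketched as a minimal Python routine. This is an illustrative sketch, not the authors' code; the simulated white-noise input and the particular window lengths are assumptions chosen for demonstration.

```python
import numpy as np

def hurst_rs(returns, window_sizes):
    """Classical rescaled-range estimate of the Hurst exponent,
    following the steps described in the text."""
    log_n, log_rs = [], []
    for n in window_sizes:
        A = len(returns) // n              # number of sub-periods of length n
        rs_vals = []
        for a in range(A):
            seg = returns[a * n:(a + 1) * n]
            dev = seg - seg.mean()         # deviations from the sub-period mean
            X = np.cumsum(dev)             # accumulated divergence from the mean
            R = X.max() - X.min()          # range within the sub-period
            S = seg.std()                  # sample standard deviation (1/n form)
            if S > 0:
                rs_vals.append(R / S)      # normalised (rescaled) range
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    # H is the slope of the OLS regression of log(R/S) on log(n).
    return np.polyfit(log_n, log_rs, 1)[0]

rng = np.random.default_rng(42)
white_noise = rng.standard_normal(4840)    # same length as the study's sample
H = hurst_rs(white_noise, [16, 32, 64, 128, 242])
print(round(H, 2))                          # near 0.5 for a random series
```

Note that for short windows the classical R/S statistic is known to be biased slightly above 0.5 even for pure noise, which is one reason the estimate is read from the slope across several window lengths rather than from a single window.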
Once H is computed, the autocorrelation within the time series is computed as:

$$C_N = 2^{(2H - 1)} - 1$$

According to Peters (1994), $C_N$ represents the percentage of movements in the time series that can be explained by historical information. $C_N = 0$ signifies randomness in the time series under consideration, pointing to a weak-form efficient market where historical information cannot be relied on to outperform the market. Figure 1 shows the market capitalisation of the FTSE/JSE indices selected for the study. Figure 2 shows the descriptive statistics of the data used. The kurtosis values for the selected indices are all larger than 3, the value for a normal distribution, signifying that all the index series are leptokurtic, with fat tails compared to a normal distribution. The returns of the indices therefore show frequent, extremely large deviations from the mean, with the FTSE/JSE Small Cap exhibiting the highest leptokurtosis. The series of all the indices are also negatively skewed, again with the FTSE/JSE Small Cap displaying the largest negative skewness in absolute terms. The Anderson-Darling test also rejects the null hypothesis of a normal distribution at the 0.01 significance level. The implication of these findings is that the index series considered in this study show significant and frequent deviations from the mean, and therefore applying statistical models that do not take the fatter tails into consideration will underestimate the likelihood of very good or very bad outcomes. The series are examined up to 10 dimensions, in line with Oppong, Mulholland and Fox (1999) and Bhattacharya & Sensarma (2006). The z-statistic, given as the BDS statistic divided by its standard error, is used in the final step to test the null hypothesis. The null hypothesis of iid is rejected if the z-statistic is greater than 2.58 at the 0.01 level of significance.
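The $C_N$ formula above is simple enough to state as a one-line function; this is only a restatement of the equation from the text, with illustrative input values.

```python
def cn(h):
    """Peters' (1994) measure C_N = 2^(2H-1) - 1: the share of movements
    in the series explained by historical information."""
    return 2 ** (2 * h - 1) - 1

print(cn(0.5))   # 0.0: a random series carries no memory
print(cn(0.7))   # positive: persistent series, history is informative
print(cn(0.3))   # negative: anti-persistent (mean-reverting) series
```

So a persistent series (H > 0.5) yields a positive $C_N$, while an anti-persistent one yields a negative value.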
Given that the z-statistics are greater than 2.58 for all ten dimensions for the selected indices, with p-values of 0.0000, the study concludes that the time series of returns for all three indices do not exhibit randomness at the 0.01 significance level.

Rescaled Range Analysis:
Hypothetically, the H suggests some trading strategies: H greater than 0.5 signifies persistence in the time series, H less than 0.5 signifies reversion to the mean, and H = 0.5 signifies randomness; the further H diverges from 0.5, the less efficient the market is. Figures 6 and 7 present the outcome of the R/S analysis of the FTSE/JSE indices selected for the study. Given that the FTSE/JSE All Share is a free-float market-weighted index, the time series of its returns is significantly influenced by the large cap companies, and therefore its H is similar to that of the FTSE/JSE Top 40, as can be seen from figure 7. A high H, according to Peters (1996), implies less risk, a clearer trend and less noise, and therefore the FTSE/JSE Small Cap can be construed to be less risky than the FTSE/JSE All Share and FTSE/JSE Top 40, contrary to the popular notion that small cap indices and stocks are riskier. Jefferis and Smith (2005) conclude that the JSE is weak form efficient. Peters (1996: 18), however, posits that the efficient market hypothesis in its pure form does not demand iid observations and does not necessarily entail independence over time, asserting that "if returns are random then the market is efficient. The converse may not be true, however." The study corroborates the conclusion of Smith (2008) that the JSE does not exhibit a random walk.
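The interpretation rule used in this discussion can be written as a small helper. The 0.05 tolerance band around 0.5 is an illustrative assumption of ours; the study itself does not specify a numerical cutoff for "slight" versus "significant" divergence.

```python
def classify_hurst(h, tol=0.05):
    """Interpret a Hurst exponent as in the discussion above.
    The tolerance band `tol` around 0.5 is an illustrative assumption."""
    if h > 0.5 + tol:
        return "persistent"
    if h < 0.5 - tol:
        return "mean-reverting"
    return "approximately random"

print(classify_hurst(0.65))  # persistent, as found for the FTSE/JSE Small Cap
print(classify_hurst(0.48))  # within the band: approximately random
```

A narrower or wider band would shift borderline series (such as the slightly mean-reverting All Share and Top 40) between categories, which is why the study reads the exponents alongside the regression fit rather than against a single threshold.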
The findings of this study are in line with the assertion that small cap companies are less explored, or totally ignored, by many analysts and a large population of investors; the market for small stocks therefore tends to be inefficient compared to its large cap counterpart, leading to prices deviating from fair values (Fundamental Index, 2008; Foley, 2014; Credit Suisse, 2014). Kuppor (2013) argues that the small cap market requires less efficiency, otherwise this market, which historically has created jobs and brought about break-through technologies while rewarding investors with price escalation, would seize up for good. This finding further corroborates McLean & Pontiff (2016), who argue that mispricings exist in financial markets and that investors learn about them from academic publications. Financial markets can therefore not be construed to incorporate all relevant information: if they did, factor premia would purely reflect risk-return trade-offs and would not be affected by academic publication. At least 316 factors that explain the cross-section of expected returns have been tested by financial market researchers, and many of the factors discovered are significant only by chance (Harvey, Liu and Zhu, 2015).

Conclusion
The study finds that the time series of returns of the JSE are not random. The FTSE/JSE Small Cap exhibits high persistence, while the FTSE/JSE All Share and the FTSE/JSE Top 40 exhibit slight mean reversion. Given that the FTSE/JSE All Share is a free-float market cap-weighted index, the time series of its returns is heavily influenced by the large market cap companies, and therefore exhibits characteristics similar to the FTSE/JSE Top 40. The study concludes that the FTSE/JSE Small Cap exhibits highly exploitable inefficiencies relative to the FTSE/JSE All Share and Top 40. As the small market cap companies are less popular, they are not as heavily researched by analysts and investors as their large cap counterparts, and therefore exhibit exploitable inefficiencies. The study further concludes that the FTSE/JSE Small Cap exhibits less risk, less noise, a clearer trend and more persistence; contrary to the popular belief that small cap companies are riskier than large cap companies, at least on the JSE the small cap index is less risky than the Top 40 and All Share indices, as the Hurst exponent of the FTSE/JSE Small Cap is significantly higher than 0.5 compared to those of the FTSE/JSE Top 40 and All Share. This finding is corroborated by the higher standard deviations of 0.005934 for the FTSE/JSE Top 40 and 0.005393 for the FTSE/JSE All Share, compared to 0.002919 for the FTSE/JSE Small Cap. In line with Peters (1996), we find an index with a higher H to be less risky than an index with a low H. This study therefore recommends a fractal approach to evaluating risk, as this provides a more adequate description of financial market behaviour. This paradigm would permit practitioners in financial and risk management to work with appropriate models to achieve their objectives, providing better analytical tools that can augment their awareness and understanding of the risk in financial markets.
Table 8 presents the results of the linear regression of log(R/S) on log(n). Table 9 presents the R/S values of all the sub-samples used in the study.