Fundamental Modeling of Exchange Rates Using a Genetic Algorithm: A Case Study of European Countries

Genetic Algorithms (GAs) are adaptive heuristic search algorithms premised on the evolutionary ideas of natural selection and genetics. In this study we apply GAs to fundamental models of exchange rate determination in the foreign exchange market. In this framework, we estimate absolute and relative purchasing power parity, Mundell-Fleming, sticky- and flexible-price, equilibrium exchange rate and portfolio balance models as fundamental models for the European Union's Euro against the US Dollar, using monthly data from January 1992 to December 2008. We then feed these models into a genetic algorithm system to measure the optimal weight of each model. These optimal weights are measured according to four criteria: R-squared (R²), mean square error (MSE), mean absolute percentage error (MAPE) and root mean square error (RMSE). Based on the obtained results, the equilibrium exchange rate and portfolio balance models explain the behavior of the EU Euro/US Dollar exchange rate better than the other fundamental models.


Introduction
Economists have never had much luck in forecasting asset prices in general or exchange rates in particular. Forecasting financial time series such as stock prices or exchange rates is important to investors and governments, and good forecasting of a financial time series requires strong domain knowledge and good analysis tools. With economic globalization and the increasing interactions of international development, countries cannot ignore the role of the foreign economic sector in macroeconomic policy. Given these interactions, each country's main connection with the rest of the world in the goods and asset markets is the exchange rate. The exchange rate literature has grown over the past few decades, along with an inclination to treat the exchange rate as an important variable in an open economy, and exchange rates have become a vast area of research for economists (Preminger and Franck, 2007). In the international economics literature, there are two approaches to exchange rate forecasting. The first is the fundamental approach, which predicts exchange rates based on the factors appearing in exchange rate determination models. The second is the single-variable approach, which uses only the past behavior of the exchange rate to predict its future trend; because it pays no attention to other macroeconomic variables, it is known as the technical approach (Neely, 1997).
The work of Meese and Rogoff (1983) showed that fundamental exchange rate models were not able to beat a simple random walk in out-of-sample prediction. Cheung et al. (2005) repeated a similar exercise, adding data from the subsequent 20 years, and confirmed the earlier results of Meese and Rogoff (1983). The main explanation economists offer for the failure of fundamental models is that the economic variables used in these models have minimal impact on the daily or weekly exchange rate (the short run), although in the long run these models yield very interesting results. Some case studies suggest that with more than 50 observations, fundamental models explain exchange rate behavior better than technical models (Marcellino, 2004). These results and this evidence suggest that the most successful models for forecasting exchange rate behavior should combine various fundamental models (Manzana and Westerhoff, 2007). In other words, combined models have higher explanatory power than individual models. Therefore, to determine which model or models best explain the behavior of exchange rates, a suitable tool is needed. To solve this problem, we can use Genetic Algorithms (GAs), a powerful technique for solving complex optimization problems, to find the best model among exchange rate models. The aim of this paper is therefore to find the best model of the exchange rate.
The genetic algorithm has been increasingly employed to model the behavior of economic agents in macroeconomic models. Arifovic and Gencay (2000) showed with a genetic algorithm (GA) that the exchange rate will fluctuate forever and never converge to a stationary equilibrium, so that any exchange rate level can be reached by the economy. In Lux and Schornstein (2004), a special assumption about the agents in the economy is that they have only one-period memory: all realized returns older than one period are irrelevant to the decision-making procedure. This leaves the equilibrium investment decision unrestricted in the sense that all investment decisions yield the same return. Therefore, even if the GA can converge to a stationary equilibrium, it cannot prevent the invasion of strategies that only change the portfolio composition. Schwaerzel and Bylander (2006), in their research titled "Predicting Currency Exchange Rates by Genetic Programming with Trigonometric Functions and High-Order Statistics", forecast the daily rates and returns of two major currencies, the British Pound and the Japanese Yen, against the US Dollar for the period from January 1, 1990 to September 16, 2005 using a genetic algorithm.
They measured performance using MSE (mean square error), HITS (number of correct predictions), HIT percentage, APC (average percentage change), and profit. The results show that the genetic algorithm produces more accurate predictions than the other methods for both exchange rates. Neely and Weller (2001) investigated the use of genetic programming to forecast out-of-sample daily volatility in the foreign exchange market. Forecasting performance was evaluated relative to GARCH(1,1) and RiskMetrics models for two currencies, DEM and JPY. Although the GARCH/RiskMetrics models appear to have an inconsistent marginal edge over the genetic program on the mean squared error (MSE) and R² criteria, the genetic program consistently produces lower mean absolute errors (MAE) at all horizons and for both currencies. Dempster and Leemans (2006) developed an automated foreign exchange trading system based on adaptive reinforcement learning, in which the parameters that govern the learning behavior of the machine-learning algorithm and the risk management layer are dynamically optimized to maximize a trader's utility. Chun and Park (2006) proposed a regression case-based reasoning technique, investigated against the backdrop of a practical application involving the prediction of the Korean stock price index. Shin and Lee (2002) proposed a GA approach to bankruptcy prediction modeling that is capable of extracting rules that are easy for users to understand, like expert systems. Technical trading rules derived with a GA (Allen and Karjalainen, 1999) have been used to analyze profits from financial markets, and some studies combine neural networks, GAs and knowledge-based techniques. The rest of the paper proceeds as follows: Section 2 introduces our fundamental framework. Section 3 introduces the genetic algorithm application.
Section 4 gives the methodology and details of the genetic algorithm, which we apply to models of exchange rate determination. Section 5 presents the data and results. Finally, Section 6 concludes.

Review of Literature
Theories and models of exchange rate determination are divided into fundamental and technical models. In this framework, we introduce absolute and relative purchasing power parity, Mundell-Fleming, sticky- and flexible-price, equilibrium exchange rate and portfolio balance models as fundamental models.

Absolute and Relative Purchasing Power Parity Model (PPP):
One of the fundamental principles of international finance, purchasing power parity (PPP) has undergone considerable testing over the past few decades. While in theory, exchange rates should be determined by countries' price levels, so that goods cost the same in different countries, this has often been shown not to be the case (Taylor and Taylor, 2004). Violation of PPP seems to be attributable to, among many other factors, the existence of tariff and non-tariff barriers, taxes, transportation costs, as well as the heterogeneous composition and weight of commodities included in the basket to produce the aggregated price indices (Nagayasu and Inakura, 2009).
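In logarithmic form, the two versions of PPP can be summarized as follows (a standard textbook formulation with assumed notation, not this paper's exact specification):

```latex
% s_t: log nominal exchange rate; p_t, p_t^*: log domestic and foreign price levels
s_t = p_t - p_t^{*} \qquad \text{(absolute PPP)}
\Delta s_t = \Delta p_t - \Delta p_t^{*} \qquad \text{(relative PPP)}
```

Under relative PPP, the rate of depreciation offsets the inflation differential even when the level condition of absolute PPP fails.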

Mundell-Fleming Model (MFM):
Prior to the 1970s, the dominant paradigm in macroeconomics was the Keynesian school of thought. Keynesianism was used by academic economists to model all aspects of the macroeconomy, sparking a proliferation of macroeconomic models based on Keynesian principles. One such model, developed to illustrate the mechanics of an open economy, was the Mundell-Fleming model. Embedded within this model was the process of exchange rate determination, which viewed the exchange rate as being determined by trade and capital flows. In most studies in this framework, the most influential factors on the exchange rate are actual domestic income, the money market variable (domestic money supply), domestic government spending, domestic real interest rates and domestic taxes (Mark, 2000).
Monetary Models: By the late 1970s, however, the analysis of exchange rates was entering a new phase. Critics of the Mundell-Fleming model had argued that the process of exchange rate determination should be viewed not as an ongoing flow, but rather as a consequence of the effort to adjust asset stocks to the levels that economic agents desired. This new view on the process of exchange rate determination spawned the monetary model of exchange rates. It is considered to have been formally developed in the late 1970s in an attempt to explain the increased volatility and erratic nature of exchange rates faced by the major industrial nations after the collapse of the Bretton Woods system.
The monetary model of the exchange rate is a standard instrument of analysis in international finance. In a way this is surprising, as the empirical support for this model of exchange rate behavior is at best doubtful using data from the post-Bretton Woods period. Monetary models are studied as the Flexible Price Model (FPM) and the Sticky Price Model (SPM). In the sticky price model, associated with Dornbusch's (1976) work, the short-run exchange rate can deviate from its own long-run equilibrium, a phenomenon known as overshooting (Rogoff, 2002). Overshooting occurs when the short-run exchange rate depreciates more than the long-run equilibrium exchange rate.
In this system, jump variables (the exchange rate and the interest rate) compensate for the stickiness of other variables (the price level). To construct the theoretical framework, we adopt and modify the basic framework of the Dornbusch (1976) model. The basic structures of the model are as follows. In the goods market, the price reaction function is a simple Phillips Curve (PC) without inflation expectations, the same Phillips curve equation adopted by Dornbusch (1976). In the money market, demand for real money balances is a function of domestic real income, the domestic nominal interest rate and the expected change in the exchange rate. Finally, in the international asset market, one of the key equations of the sticky price model is the uncovered interest parity condition. The flexible price model starts from the definition of the exchange rate as the relative price of two monies and attempts to model that relative price in terms of the relative supply of and demand for those monies. Another building block of the monetary model is absolute purchasing power parity (PPP), which holds that goods-market arbitrage will tend to move the exchange rate to equalize prices in the two countries.
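The building blocks just described can be sketched in standard textbook notation (symbols and functional form are assumptions of ours, not the paper's own specification):

```latex
% m, p, y: logs of money supply, price level and income; i: interest rate; * denotes foreign
s_t = (m_t - m_t^{*}) - \phi\,(y_t - y_t^{*}) + \lambda\,(i_t - i_t^{*})
\qquad \text{(flexible-price monetary equation)}
i_t - i_t^{*} = E_t\!\left[\Delta s_{t+1}\right]
\qquad \text{(uncovered interest parity)}
```

In the sticky price variant, PPP holds only in the long run, so a monetary shock moves the exchange rate beyond its long-run value before prices adjust.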

Equilibrium Exchange Rate Model (EERM):
The next fundamental model is the equilibrium exchange rate model (EERM). The equilibrium approach to exchange rates that commonly appears in textbooks and course syllabi is that of Stockman (1980) and Lucas (1982). Blanchard and Quah (1989) argue that nominal and real rates move together in the short run but drift apart as time goes by. This suggests the presence of two types of shock. The first type can be thought of as real, because it similarly affects the short-run path of both nominal and real rates. The second type can be seen as a nominal shock, affecting the real exchange rate only temporarily. In other words, there are only two types of shocks, and nominal shocks cannot permanently affect the real exchange rate.
In this approach, the equilibrium exchange rate is directly estimated using an appropriate set of explanatory variables. The long-run relationship between the exchange rate and the explanatory variables is derived and interpreted as the equilibrium exchange rate. Following Zhang (2001) and Kim and Korhonen (2005), we estimate the long-run relationship between the exchange rate and four variables: GDP per capita as a proxy for the Balassa-Samuelson effect, investment represented by the share of gross fixed capital formation in GDP, the share of government consumption in GDP, and the degree of openness as measured by the sum of exports and imports as a percentage of GDP.
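The long-run relationship just described can be written as a cointegrating regression of the following illustrative form (the symbols and coefficient labels are ours, not the paper's):

```latex
% q_t: (log) real exchange rate; regressors as listed in the text
q_t = \beta_0 + \beta_1\,\mathit{gdppc}_t + \beta_2\,\mathit{inv}_t
      + \beta_3\,\mathit{gov}_t + \beta_4\,\mathit{open}_t + \varepsilon_t
```

The fitted long-run values of this relationship are then interpreted as the equilibrium exchange rate path.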

Portfolio Balance Model (PBM): An increasingly important strand of theoretical work on open economies views the exchange rate as being determined in asset markets, rather than in goods markets. Portfolio balance models imply that exchange rates, jointly with interest rates, result from the equilibrium of supply and demand for domestic and foreign assets, where these assets are allowed to be imperfect substitutes for each other. Dynamic adjustment of the exchange rate over time results from the fact that current account surpluses (deficits) correspond to accumulation (de-accumulation) of foreign assets, and that the current account itself depends on both the exchange rate and the stock of foreign assets. This is an exchange rate model based on fundamentals for which focused treatments are few in number and not recent on average; this, and the fluctuating fortunes of other structural models, suggests that more attention should be paid to the empirical analysis of the portfolio balance model. The central assumption of portfolio balance models is that assets in different countries are not perfect substitutes. The exchange rate enters through valuation effects in the supply and demand for assets, and a risk premium appears in the interest parity condition. Purchasing power parity is not assumed (Cushman, 2007).
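Schematically, the portfolio balance view can be summarized as follows (an illustrative reduced form; the notation is assumed rather than taken from the paper):

```latex
% Asset-market equilibrium: the exchange rate s_t clears the markets for domestic
% money M, domestic bonds B and foreign bonds F; \rho_t is a risk premium
s_t = g(M_t, B_t, F_t), \qquad
i_t = i_t^{*} + E_t\!\left[\Delta s_{t+1}\right] + \rho_t
```

The risk premium \(\rho_t\) is what distinguishes this model from uncovered interest parity: because assets are imperfect substitutes, interest differentials need not equal expected depreciation.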

Methodology
Genetic algorithms (GAs) were advanced by Holland (1975) and expanded by Goldberg (1989). GAs are search algorithms inspired by evolution and applied in searching for the global optimum in many applications. They have also been successfully applied to economic and financial prediction (Allen and Karjalainen, 1999). These algorithms encode a potential solution to a specific problem in a simple chromosome-like data structure and apply recombination operators to these structures so as to preserve critical information. The steps of the GA in the proposed model, based on Goldberg (1989) and reorganized for this study, are as follows: Step 1: Initialization: This step generates the initial population containing NP chromosomes, used as initial seeds in the search for the global optimum, where NP is the number of individuals in each generation. The probability of crossover PC, the probability of mutation PM, and the maximum number of generations NG are also initialized.
Step 2: Evaluation: After the initialization step, each chromosome is evaluated using a user-defined fitness function. The fitness value of each string indexes its suitability as a solution to the problem and its probability of surviving reproduction in the genetic algorithm.
Step 3: Check termination criteria: The processes from Steps 2 to 7 are repeated until the termination criteria are satisfied. The proposed algorithm terminates if either of the following conditions is satisfied: 1. the maximum number of generations is reached, or 2. the solution has remained unchanged up to the present generation.
Step 4: Elitism mechanism: To ensure the propagation of elite chromosomes, the GA uses an elitism mechanism (Shimodaira, 1996). This mechanism selects the P% of individuals with the best fitness values to pass directly into the next generation, while the remaining individuals undergo the genetic operations (i.e., selection, crossover and mutation).
Step 5: Selection: Selection is the process in which suitable chromosomes are chosen from the parent population for the next generation. This model uses tournament selection (Blickle and Thiele, 1995): pairs of chromosomes are drawn at random and compared by their fitness values, and the chromosome with the better fitness is chosen. This step is repeated until the number of chromosomes selected equals the population size.
Step 6: Crossover: Crossover operates by swapping corresponding segments of the string representations of the parents, extending the search for new solutions. Positional bias (Eshelman et al., 1989) implies that schemas with long defining lengths suffer biased disruption. To reduce positional bias, this model uses uniform crossover, which can be disruptive, especially in the early generations.
Step 7: Mutation: Mutation randomly chooses a member of the population and changes one randomly chosen bit in its bit-string representation (Syswerda, 1989).
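Steps 1-7 can be sketched in code. The following is a minimal illustration under assumptions of ours, not the paper's implementation: chromosomes are bit strings, fitness simply counts ones (a stand-in for the criteria-based fitness used in this study), elitism keeps the top fraction, and the rest undergo tournament selection, uniform crossover and bit-flip mutation. All names and parameter values are hypothetical.

```python
import random

def evolve(n_pop=30, n_bits=16, n_gen=50, pc=0.7, pm=0.01, elite_frac=0.1, seed=0):
    rng = random.Random(seed)
    fitness = lambda c: sum(c)  # Step 2: user-defined fitness (placeholder)
    # Step 1: initial population of NP random bit-string chromosomes
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_pop)]

    for _ in range(n_gen):  # Step 3: stop at the maximum number of generations
        pop.sort(key=fitness, reverse=True)
        n_elite = max(1, int(elite_frac * n_pop))
        nxt = [c[:] for c in pop[:n_elite]]  # Step 4: elitism, best P% pass unchanged

        while len(nxt) < n_pop:
            # Step 5: tournament selection -- the fitter of each random pair wins
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            c1, c2 = p1[:], p2[:]
            if rng.random() < pc:  # Step 6: uniform crossover
                for i in range(n_bits):
                    if rng.random() < 0.5:
                        c1[i], c2[i] = c2[i], c1[i]
            for c in (c1, c2):  # Step 7: bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < pm:
                        c[i] = 1 - c[i]
            nxt.extend([c1, c2])
        pop = nxt[:n_pop]

    return max(pop, key=fitness)

best = evolve()
```

Because the elites are copied before mutation, the best fitness never decreases across generations, which is what drives the convergence behavior described in Step 8 below.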

Figure 1: Flow chart of all GAs for comparative study
Step 8: Convergence: If the GA has been correctly implemented, the population will evolve over successive generations so that the fitness of the best and the average individual in each generation increases towards the global optimum. Convergence is the progression towards increasing uniformity: a gene is said to have converged when 95% of the population share the same value, and the population is said to have converged when all of the genes have converged. As the population converges, the average fitness approaches that of the best individual. Figure 1 shows the structure of all the genetic algorithms. The number of observations (in genetic algorithms, the population size) is an effective and decisive parameter for the efficiency of the genetic algorithm. For example, if the population is smaller than normal, the algorithm may converge prematurely (Fish et al., 2004). Therefore, considering the efficiency of problem solving and algorithm execution time, the empirical literature regards a population size of 25 to 300 as appropriate (Wang and Hsu, 2008). In this study, each fundamental model, after estimation, enters the genetic algorithm system for weighting. The optimal weight of each model is measured according to four criteria: R-squared (R²), mean square error (MSE), mean absolute percentage error (MAPE) and root mean square error (RMSE). In other words, the objective function of the genetic algorithm is determined so that a model with higher values of these criteria is given less weight. The fitness function used in the genetic algorithm is as follows (Vose, 1991):

P(k) = (max(CRI_i) − CRI_k) / Σ_{i=1}^{M} (max(CRI_i) − CRI_i)     (5)

In equation (5), P(k) is the probability of selection of chromosome k, CRI_k is the value of one of the four criteria (R², MSE, MAPE or RMSE) for chromosome k, max(CRI_i) is the maximum value of that criterion in the population, and M is the number of exchange rate models. Table 1 demonstrates the parameter settings of the GA.
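The criteria and the weighting scheme described in the text can be sketched as follows. This is an illustrative implementation under our own assumptions: the function names and the sample criterion values are hypothetical, and the selection-probability formula is the gap-to-maximum form in which a model with a larger criterion value receives less weight.

```python
import math

def mse(actual, pred):
    """Mean square error."""
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root mean square error."""
    return math.sqrt(mse(actual, pred))

def mape(actual, pred):
    """Mean absolute percentage error (actual values must be nonzero)."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def selection_probs(cri):
    """P(k) = (max_i CRI_i - CRI_k) / sum_j (max_i CRI_i - CRI_j):
    a larger criterion value yields a smaller selection probability."""
    top = max(cri)
    gaps = [top - c for c in cri]
    total = sum(gaps)
    return [g / total for g in gaps]

# Hypothetical MAPE values for six fundamental models: the lowest-error model
# (index 4) gets the largest weight, the highest-error model (index 3) gets zero.
weights = selection_probs([2.1, 3.5, 1.2, 4.0, 0.9, 2.7])
```

The weights sum to one, so they can be used directly as selection probabilities in the roulette-wheel sense, or as combination weights across the fundamental models.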
Table 2 summarizes the performance of the fundamental exchange rate models. The results based on the genetic algorithm show that the Equilibrium Exchange Rate and Portfolio Balance models perform relatively better than the other fundamental exchange rate models. In addition, the Relative Purchasing Power Parity model was the worst fundamental model of exchange rate determination.

Conclusion
The aim of this article was to assess the explanatory power of the fundamental models of exchange rates for EUR/USD using monthly data from January 1992 to December 2008. Genetic algorithms and how they work were described briefly, and the optimal weights of these models were then extracted using genetic algorithms. The weight of each model was selected according to four criteria, R-squared (R²), mean square error (MSE), mean absolute percentage error (MAPE) and root mean square error (RMSE), such that a model with larger values of these criteria receives less weight. The results showed that according to the R-squared (R²), mean square error (MSE) and root mean square error (RMSE) criteria, the Equilibrium Exchange Rate and Portfolio Balance models perform relatively better than the other fundamental models of exchange rates. Under the mean absolute percentage error (MAPE) criterion, the best models were the Portfolio Balance and Absolute Purchasing Power Parity models. Under all criteria, the Relative Purchasing Power Parity model was the worst fundamental model of exchange rate determination. These results accord with the work of Zhang (2001) and Cushman (2007). It would be interesting to apply this approach to other fundamental models, which would, of course, require careful treatment of the different approaches. In addition, these models could be extended to capture the dynamic solutions of exchange rate determination models. On the other hand, there exist several methods for modeling the equilibrium exchange rate; each is a normative concept that defines the equilibrium exchange rate in a different way. Another viewpoint is the time horizon of the equilibrium exchange rate: we suggest that future research pursue these alternatives, distinguishing short-term, medium-term and long-term equilibria.
A resulting misaligned exchange rate is not the consequence of "inactive" market forces, but rather suggests the future development of the exchange rate. This is left for future research.