An Analysis of the Experimental Design of “My Money or Yours: House Money Payment Effects”

Abstract: Considering the expanding use of experiments in Economics, the present article takes one published paper in the area, dealing with the house money effect, and analyzes it in a didactic way, evoking and discussing concepts related to the design of lab experiments. To do so, three sections are outlined. First, the house money effect is explained and the article under scrutiny is placed in the context of what had already been done before; second, some experimental design concepts are summarised and then applied to describe the design of the authors' experiment. Finally, after briefly presenting their results, there is an analytical overview of what has been done since their work and a personal take on possible lines for further research.


Introduction
In light of the growing employment of experimental approaches to gain a better understanding of Economics, one of the paramount aspects is designing the experiment well, so that the expected effects can be captured if, and only if, they exist. While there are several handbooks addressing experimental design in Economics, as well as experiments or papers on particular features of it, no article, to the best of our knowledge, has taken one published article to review these concepts and see how they were, or were not, applied by the researchers. Doing so allows for a very didactic exposition of experimental techniques that can particularly benefit junior researchers aiming to gain a greater understanding of hands-on Experimental Economics. Professors teaching courses within this realm are also expected to benefit from this paper in their lectures.
How do we apply experimental methods in practice to put in place effective, complete and parsimonious experiments? To answer this question, our goal is to analyze Davis et al. (2010) while presenting and briefly discussing multiple concepts as applied in practice. Although lab-in-the-field experiments are mentioned at some points, our scope is mainly limited to lab experiments. The article under examination deals with the house money effect, discussing the payment of show-up fees; our first section presents these two concepts. The second section reviews some of the main experimental design concepts, making use of them to scrutinize the article in question. The third and last section presents experiments conducted after theirs and suggests possible lines for further research.

House Money and Show-up Fee
To begin with, Davis et al. (2010) focus on one particular aspect of experiments: payment. Paying subjects is necessary to obtain credible results, as argued by Xu et al. (2018), who show that running an experiment with hypothetical money elicits less risk aversion than real money; a similar difference also arises between smaller and larger real incentives. Not only does behavior differ between hypothetical and real choices, but Camerer and Mobbs (2017) were also able to show that brain activity often diverges from one scenario to the other.
Moreover, besides adopting real incentives during the experimental tasks, experimenters are also used to paying a show-up fee, in other words, a participation fee that is independent of the subject's performance. Normally, the latter accounts for 5 to 30% of the total compensation, the remainder being tied to performance in the activity. Guidebooks in Experimental Economics frequently overlook the timing of the show-up fee, as if this were a matter of little importance, but that is precisely the point Davis et al. (2010) propose to investigate. To gauge common practice, they surveyed experimenters who had published within the two most recent years in the journal to which they submitted their paper. Of the 18 experimenters who replied, 14 paid a participation fee (almost 80% of the respondents); of these 14, roughly 85% paid only at the conclusion of the experiment (12 out of 14) and only two paid upfront, citing habit or convenience as the reason for that choice. In the end, is this just a detail, or does choosing to pay upfront rather than at the conclusion of the experiment yield different results, specifically in risk-taking?

Thaler and Johnson (1990) consider how prior outcomes influence risk decisions. For instance, is winning a thousand dollars after hitting the jackpot in a casino perceived differently from seeing, as you walk into the casino, that your stocks have gone up and you are a thousand dollars richer? If we were capable of always thinking only in terms of marginal costs, the answer would probably be "no". However, since, by and large, humans do take historical or sunk costs into account, the authors were able to show, through experiments, that, under certain circumstances, prior gains can raise participants' willingness to engage in gambles, which they labelled the "house money effect". The expression comes from gamblers saying they are "playing with the house money" when they are ahead (either they win more or they experience reductions in gains instead of losses).
Further evidence of the house money effect has been gathered by several authors. Cárdenas et al. (2014) ran an experiment in which one group of participants received the endowment 21 days in advance and the other group received it on the day they took part in it. The authors found a small house money effect, in the sense that subjects who had less money with them during the experiment (because they considered the endowment previously received as part of their disposable income and had already spent a chunk of it) were more risk averse. There is evidence of a house money effect among professional traders as well, as attested by, among others, Frino et al. (2008). Analyzing data from the Sydney Futures Exchange, they observe a greater tendency to take risks when trading with profits rather than with initial capital.
On the other hand, Clark (2002) ran an experiment on the voluntary contribution mechanism for public goods with more than a hundred students divided into two treatments that were exactly alike except that in one of them (treatment O), participants were asked to bring their own money to fund their personal investment account (US$8, to be precise, which was exactly their pay-off if they chose to invest all their money in the private good instead of the public one). Afterward, the US$8 was returned, without prior announcement, to participants in treatment O as a participation fee, so that the earnings distributions of the two treatments would end up being equal. They found no evidence of a house money effect.

Davis et al. (2010) investigate the relevance of the timing of the participation fee in light of the house money effect. To do so, they design an experiment to examine subjects' willingness to purchase assurance information (its most common form being an audit opinion provided by accountants, which reduces the variance of possible values for a good), the idea being that more risk-averse agents would be more willing to purchase assurance information. The main result they find is that paying subjects the show-up fee upfront does lead to more risk-averse behavior: those participants were more prone to purchasing assurance information and to paying a fee to reduce variability when they had the option. Before diving into the experimental features of their article, though, a summary of the most relevant aspects of experimental design is called for.

Experimental Design
Choosing the appropriate experimental design is quite an art, as much as an architect drawing up a plan for a building. Experimental biases must be avoided, and while we try to approach reality, we face constraints relating to time, money, tractability, and ethics (to sum up three of the most important rules regarding the ethical aspect: real individuals, real incentives and no deception). We must choose the treatments we want to test and also use a baseline treatment or period for comparisons; choose between a between-subjects design (where each participant receives only one treatment) and a within-subjects design (where the same participant undergoes more than one treatment); and carry out proper randomization beforehand to make sure we will be able to discern a treatment effect if it exists. We also have to choose whether to run the experiment in a laboratory, take the laboratory to the field (lab-in-the-field) or run a web experiment; the choice of format usually implies a trade-off between control and external validity. Finally, we must decide how we plan to analyze the data afterward, choosing the statistical power we want for our tests in order to determine how large our samples must be, and so on (Eber & Willinger, 2012; Jacquemet & L'Haridon, 2018).
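That last step, translating a desired power into a sample size, can be made concrete with a minimal sketch; the function name and default values below are our own illustration, using the standard normal approximation for a two-sided, two-sample comparison of means.

```python
import math
from statistics import NormalDist


def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample
    comparison of means with standardized effect size (Cohen's d),
    via the normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value of the test
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)


# A "medium" effect (d = 0.5) at 80% power needs about 63 subjects
# per group; a small effect (d = 0.2) needs roughly 393.
print(n_per_group(0.5))   # -> 63
print(n_per_group(0.2))   # -> 393
```

The sketch makes the trade-off explicit: halving the effect size one hopes to detect roughly quadruples the required sample, and hence the budget.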
Then, we should also consider whether we want to observe actions (the "hot" method) or strategies (the "cold" method). The latter yields a wider range of data for analysis, while the former may be preferred in some cases if there is good justification for using it. Brandts and Charness (2000), using the Prisoner's Dilemma and the Chicken game with undergraduate students in Barcelona, found no difference in behavior between the two elicitation methods. Such a result favors the cold method, since it allows experimenters to acquire more data at low cost. Fast forward some years, and both authors got together again to write a new paper on what the most recent decade of studies comparing the two elicitation methods had shown. Surveying 29 studies, Brandts and Charness (2011) report that 16 of them find no difference between the standard direct-response method and the strategy method, four show differences, and nine present mixed results. Situations involving punishment, as well as situations with a lower number of decisions, lead to a higher disparity between the methods, although the authors point out there may be a "publication bias": studies that do find an effect may be more likely to be published.
In addition, there are specific methods that attempt to elicit intertemporal preferences, such as the Multiple Price List or the Convex Time Budget; cognitive capabilities, such as the Cognitive Reflection Test; and social preferences, such as a trust game. More importantly in the context of the present paper, there are specific methods whose objective is to elicit risk preferences. Without entering into too much detail, these include the certainty equivalent, the choice of one lottery from a group of lotteries (Binswanger, 1980), comparing more and less risky lotteries (Holt and Laury, 2002), choosing a portfolio, i.e., the division between a less risky asset and a riskier one (Gneezy and Potters, 1997), and the bomb risk elicitation task (Crosetto and Filippin, 2013). There are also declarative measures that do not involve monetary incentives, and biological markers such as the 2D:4D digit ratio.
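For concreteness, the Holt and Laury (2002) multiple price list can be sketched in a few lines. The payoffs below are the ones from their original design; the variable names and the round-number presentation are ours.

```python
# Holt and Laury (2002) price list: in each of ten rows the subject
# chooses between a "safe" option A ($2.00 or $1.60) and a "risky"
# option B ($3.85 or $0.10); the probability p of the high payoff
# rises from 0.1 to 1.0 across rows.
rows = []
for i in range(1, 11):
    p = i / 10
    ev_a = p * 2.00 + (1 - p) * 1.60
    ev_b = p * 3.85 + (1 - p) * 0.10
    rows.append((i, p, ev_a, ev_b))

# A risk-neutral subject picks A while EV(A) > EV(B) and switches to
# B at the first row where the risky option has the higher expected
# value; switching later reveals risk aversion, earlier risk seeking.
switch_row = next(i for i, p, ev_a, ev_b in rows if ev_b > ev_a)
print(switch_row)   # -> 5
```

The number of "safe" choices before the switch is the measure of risk aversion usually reported from this task.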
Finally, to finish up this short review of experimental features, a word on remuneration methods is in order. Based on the assumption that subjects must be paid (the "real incentives" rule mentioned above), three questions arise. First of all, should all tasks be paid? No, according to Charness et al. (2016). They gathered evidence from previously published articles and, while mentioning the potential risks of each option, showed that, overall, the evidence suggests that paying only one randomly selected task is at least as effective as, and possibly more effective than, paying all tasks. Secondly, should all participants be paid? No, according to Clot et al. (2018). They conducted a dictator game in Montpellier using a control group (receiving full payment) and three treatment groups (one with hypothetical payment and two with random payment, subdivided into a high-stake and a low-stake group). The three monetarily incentivized groups presented similar transfer distributions, and individuals were less selfish and more egalitarian when making hypothetical choices. Lastly, how can we learn about how participants make choices in real life if we cannot make them lose money? Ethical norms among experimentalists require that participants be provided with an endowment to be used in the experiment; they cannot use their disposable income to participate. This generates the house money effect we are discussing in the present paper, and after this detour we are back where we left off. Requiring participants to complete real-effort tasks at the beginning of the experiment in order to earn their endowment is one of the main solutions found so far (Cárdenas et al., 2014).
In the experiment run by Davis et al. (2010), subjects had the opportunity to buy a good whose value lay in the interval [40, 50], with equal probability for each integer within it. Participants knew the distribution but had to decide whether or not to buy before knowing the actual value of the good. The price was fixed at the mean, $45. Instead of using software or an online tool to randomize, the authors went old school: they used a bingo cage from which they drew a ball that determined the value of the good after the subject had made his or her choice. The game unfolds over 13 periods, the first five without the possibility of buying assurance information before choosing, and the last eight with that possibility on the table. An effort was made to hide the exact duration of the game to avoid differences between rounds due to a possible end-of-game effect.
They used a 2 x 2 factorial design, meaning two variables were manipulated: the timing of the participation payment and the quality of information. The timing of the participation payment, as already explained, varied between one group receiving a lump-sum payment at the end of the experiment comprising both the participation fee and the experimental earnings, and the other group receiving the show-up fee upfront in cash (experimenters made sure that the second group had taken physical possession of the money, putting it in their pockets or purses). The variation in information quality, on the other hand, distinguishes one group in which subjects who choose to buy information are provided with the exact value of the good from another in which buying information reduces the range of possible values for the good, lessening its variance (in this last treatment, the information given to participants was the actual value 80% of the time and less than or equal to the actual value 20% of the time). The price set for this piece of information in the "certain treatment" is $1.25, a bit below its risk-neutral value, $1.36; in the "uncertain treatment", information could be purchased for $1, likewise slightly below the risk-neutral value, $1.10. All decisions are private.
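The $1.36 figure for the certain treatment can be reproduced directly from the setup: it is the expected gain from buying only when the revealed value exceeds the $45 price. A quick sketch (variable names are ours):

```python
# The good is worth an integer in [40, 50], each with probability
# 1/11, and is priced at the mean, $45.
values = range(40, 51)
p = 1 / len(values)

# Without information, buying yields zero expected profit:
ev_no_info = sum(p * (v - 45) for v in values)

# With perfect ("certain") information, a risk-neutral subject buys
# only when the revealed value exceeds the price, so the information
# is worth the expected value of exercising that option:
info_value = sum(p * max(v - 45, 0) for v in values)
print(round(info_value, 2))   # -> 1.36
```

Since the information is sold at $1.25, below this risk-neutral value, even a risk-neutral subject should buy it; a risk-averse subject values it even more.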
The goal is to test four hypotheses: (i) payment of the show-up fee upfront does not affect information purchase; (ii) information quality does not affect information purchase; (iii) information quality does not affect information purchase conditional on upfront payment of the show-up fee; and (iv) payment of the show-up fee upfront does not affect the rate at which subjects purchase the good without purchasing information first. The experiment was conducted with paper and pen at Michigan Technological University with 124 undergraduate students as participants, divided in similar but not identical numbers among the four groups (making this a between-subjects design). It lasted between 60 and 75 minutes, and each participant was paid, on average, $8.38, with a range going from $3.75 to $12; a simple calculation shows the total budget for this experiment was over $1,000.

Davis et al. (2010) showed that the timing of the payment of the show-up fee has an effect on risk-taking decisions, because individuals consider the upfront payment as their own money, while the payment at the end of the experiment is viewed as house money. In their discussion section, they wondered whether this difference could be due to the upfront payment being unanticipated and, thus, viewed as a windfall gain, while a payment at the end could be viewed as part of the earnings of the experiment, turning the house money explanation upside down. This possibility goes hand in hand with the possibility that the terms "show-up fee" and "participation fee", used interchangeably until here, have different connotations. After all, are we talking about a fee simply to show up or a fee to participate?
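As a side note on mechanics, dividing subjects among the four cells of such a 2 x 2 between-subjects design is often done by shuffling the subject pool and dealing it round-robin into the cells. The sketch below is hypothetical: the cell labels and function name are ours, and the paper's own groups were similar in size but not exactly equal.

```python
import random


def assign_2x2(subject_ids, seed=None):
    """Shuffle subjects and deal them round-robin into the four cells
    of a 2 x 2 design, keeping cell sizes within one of each other."""
    cells = [(timing, quality)
             for timing in ("upfront", "end")
             for quality in ("certain", "uncertain")]
    ids = list(subject_ids)
    random.Random(seed).shuffle(ids)   # seed only for reproducibility
    return {sid: cells[k % 4] for k, sid in enumerate(ids)}


# With 124 subjects, each cell receives exactly 31 participants.
groups = assign_2x2(range(124), seed=1)
```

Randomizing before assignment, as stressed in the previous section, is what licenses interpreting differences between cells as treatment effects.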

Subsequent and Further Research
The authors draw on the recruitment process (another part of the experimental design) to argue that their focus was on the gain of income, not on its unanticipated aspect; it was clear from their announcements that there would be an $8 fee just for showing up. They confirmed that idea by running six more sessions, this time with an unannounced show-up fee: paying upfront leads to more risk-averse behavior regardless of whether or not the payment was previously announced. Luckily, we can continue to use "show-up fee" and "participation fee" interchangeably.

Since that publication, a few articles have been published on the house money effect, some of which are commented on below. Rosenboim and Shavit (2012) implemented a prepaid mechanism, in which the endowment was distributed to participants two weeks in advance of the experiment, while the control group received it on the spot. Intuitively, that approach has greater appeal than the one considered in the last paragraphs, since it gives participants some time to internalize the idea that the money they received is really theirs to play with in the experiment, a distinction that is not so crystalline when it hinges only on the difference between the beginning and the end of the experiment, a time frame of an hour or so, instead of two weeks. They found evidence of the house money effect in the domain of losses, where participants who received the endowment in advance exhibited more risk-averse behavior; that is, the behavior observed in the experiment was closer to the one they would exhibit in real-life situations, thereby making the experiment more reliable for understanding real human behavior and designing better policies.
After them, Cárdenas et al. (2014), as mentioned in the first section, ran a similar experiment, but with three instead of two weeks between the distribution of the endowment to the treatment group and the day of the experiment. They found evidence of a house money effect. Carlsson et al. (2013), in turn, validated in the field the difference between an earned endowment and windfall gains. They ran a dictator game at a Chinese university (laboratory) and at a supermarket close to the university (field), applying one treatment in which participants' endowment fell from the sky and another in which participants had to complete a lengthy questionnaire to obtain their endowment. Those who received their money as a windfall gain, be it in the lab or in the field, displayed more prosocial behavior than the others.
Other articles relying on different games systematically find evidence of a house money effect (Danková and Servátka, 2015; Scrogin, 2017). On the other hand, Hackinger (2016) conducted a public goods game with German students (using, by the way, the Cognitive Reflection Test mentioned before to elicit cognitive skills) and was able to show that the house money effect appeared only for participants with low cognitive ability. While that result needs to be further replicated and its robustness verified, it is safe to say that, most of the time, researchers interested in understanding human behavior, and policymakers focused on designing behaviourally informed policies, consider individuals as a whole, the exception being research or policymaking specifically focused on high-capability subjects.
Finally, Hvidey et al. (2019) ran an experiment in South Korea using the Holt and Laury (2002) method to elicit risk preferences. Half of the participants were endowed with a monetary stake upon arrival, while the other half had to complete an effort task for half an hour to earn their endowment: either peeling potatoes or making envelopes. Beyond their creativity in choosing the effort tasks, the authors set performance targets that participants had to attain in order to keep participating; otherwise, they would receive the show-up fee and go home. Participants were told that their work would not be wasted, a point that will be left without further discussion here but that is relevant to avoid a crowding-out effect. The pay was calibrated to be seen as slightly above a typical hourly wage in the country, but not excessively so, to avoid the possibility of its being considered a windfall gain. The results, at this stage, could be guessed: participants under the earned-endowment treatment took less risk. The authors finish their article urging experimenters to avoid small endowments and to implement an "earned stakes" protocol, with the aim of getting closer to reality in their experiments.
Almost all of the comments on the experimental design used by Davis et al. (2010) were made as we went along: first describing what the house money effect and the participation fee were, then breaking down every single part of their experimental set-up to understand it better, using the concepts of experimental design. The last step was to see what has been done since then to improve or complement their work. While results were often mentioned, since they are necessary to understand or to suggest the next experimental design to be used in a following experiment, the focus has always been on the experimental design. Now, without developing them extensively so as not to stray beyond our scope, some ideas for future research arise.
For instance, both the idea of paying the endowment in advance and the idea of requiring participants to engage in an effort task to obtain their endowment seem promising, as they allow experimenters to bring their designs closer to reality, rendering the experiment more credible, not only to the scientific community but, in the first place, to those who take part in it. Nevertheless, one implementation that does not seem common but would be quite natural now would be to require participants to complete an effort task well in advance of the experiment to earn their endowment. Imagine combining the potato-peeling task (by the way, participants had to peel 25 potatoes in 30 minutes; is this even possible?) with the design in which participants received the money three weeks before the experiment. By doing so, we could be almost sure participants would consider the money as theirs (the "sure" part comes, of course, from running experiments to validate that intuition).
We know that experimenters cannot make participants lose money in the experiment. Beyond the ethical point, an experiment with potential losses could create room for selection bias. What if, say, a trading company authorized researchers to run a field experiment with its traders, making it compulsory for every employee to take part? That by itself would solve the selection bias, while the ethical concern would be addressed by treating the endowment as part of the variable component of their wages for the next month (which normally depends on the profits they make in trading operations); in other words, they would be playing with their own potential money. Researchers would gain knowledge of behavioral biases among traders, while the company would benefit from the results by gaining a detailed view of how its own employees are prone to biases in their daily operations, which ultimately reduce profits.
Summing up, the ideas presented here consist of trying to bring experiments closer to reality. The first one does so by giving an earned endowment well in advance. It would also be important for the task used to be as meaningful as possible; worthless tasks sometimes used in the laboratory may not convey the idea that the endowment is not a windfall gain any better than handing it over with no task at all would. The second idea reaches the outlined goal by allowing subjects to incur losses, finding a way around the experimental rules in place: even though participants could end up with negative earnings from the experiment, they would still have the fixed component of their income as usual, as well as the variable component coming from all the other successful trades to compensate for a conceivable loss. All in all, the house money effect can be overcome by using a real-effort task detached from the experiment day, or by allowing participants to incur losses in a way that circumvents, without breaking, any rule.