Experimental Methods for Behavioral Science

Prerequisites (knowledge of topic)

 

None

 

Hardware

We will need to use a computer lab during some course times (though not all of them). Students will not need to use laptops during class.

 

Software

We need to be able to install experiment software (e.g., zTree) on the computers in the computer lab. As with the hardware, students will not need this software on their individual computers.

 

Course Description

This course provides an introduction to the use of experiments in the social sciences. Well-designed experiments, in both the lab and the field, provide the control necessary to identify causal relationships and to cleanly test theoretical constructs. Experiments are an important part of the social science toolkit and are becoming increasingly indispensable in economics, political science, and other disciplines.

 

Students in this course will develop an understanding of the elements of good experimental design and implementation and examine the interaction between theory and experimental design. Students will further advance their knowledge through experience: they will work individually or in teams throughout the course to develop and pilot their own experiment design, and they will receive feedback from instructors and fellow students.

 

Topics covered during this course include the design and implementation of experiments, field and internet experiments, classic games, and individual preference experiments (risk, time, social). We will use studies in specific topic areas (environmental and natural resource applications, and social policy, especially interventions targeted at the poor) to demonstrate central concepts in social science experimentation.

 

Course Objectives

Students who participate in this course will develop an understanding of the role of experimental methods in the social sciences. Additionally, students will become familiar with key elements of good design, including how to cleanly test their research questions. Since hands-on experience is necessary to master experimental concepts and techniques, students will participate in daily experiments. Finally, students will work together to develop and pre-test an experiment.

 

Structure

Lectures will be held 9:15am–12:00pm and 1:30pm–3:15pm daily. Sessions will combine traditional classroom lectures with hands-on participation in experiments like those we will read about and those you will design.

 

Students will work with their groups outside of class to develop their experimental designs. Professors Jacobson and Linardi will be available directly after the afternoon lecture to consult with the groups on their projects.

 

Course Schedule

Date     Time            Topic
Day 1    9:15 – 12:00    Experimental Design Essentials
         1:30 – 3:00     Classic Games
         3:00 – 3:15     Instruction and Coordination for Idea Blitz
Day 2    9:15 – 11:00    Reputation and Social Image
         11:00 – 12:30   Idea Blitz!
         1:30 – 3:15     Nudges
Day 3    9:15 – 12:00    Risk & Time Preferences
         1:30 – 3:15     Applications to Policy
Day 4    9:15 – 12:00    Nuts and Bolts: General and Lab
         1:30 – 3:15     Nuts and Bolts: Field and Online
Day 5    9:15 – 12:00    Group Project Pilots and Presentations
         1:30 – 3:15     Group Project Pilots and Presentations

  

Literature

 

Mandatory:

Marked with * in list below

 

Supplementary / voluntary:

Listed but not marked with *

 

Mandatory readings before course start:

The readings listed for Day 1, the first day of class

 

Full reading list:

You will need to purchase:

Friedman, Daniel and Shyam Sunder. 1994. Experimental Methods: A Primer for Economists. London: Cambridge University Press.

You should be able to find an inexpensive used copy online. All of the other required readings, and most of the recommended readings, will be posted online.

 

Day 1

1A. Experimental Design Essentials

*Friedman, Daniel and Shyam Sunder. 1994. Experimental Methods: A Primer for Economists. London: Cambridge University Press. Chapters 1-3.

*Croson, Rachel and Simon Gächter. 2010. The Science of Experimental Economics. Journal of Economic Behavior & Organization. 73.1: 122-31.

 

Additional Interesting Readings:

Druckman, James N., Green, Donald P., Kuklinski, James H., Lupia, Arthur, 2006. The Growth and Development of Experimental Research in Political Science. American Political Science Review, 100(4), 627-635.

Falk, Armin and James J. Heckman. 2009. Lab Experiments are a Major Source of Knowledge in the Social Sciences. Science. 326.5952: 535-538.

McDermott, Rose. 2013. The Ten Commandments of Experiments. PS: Political Science and Politics. 46.3: 605-10.

 

1B. Classic Games

*Goeree, Jacob K. and Holt, Charles A., 2001. Ten Little Treasures of Game Theory and Ten Intuitive Contradictions. American Economic Review. 91.5: 1402-1422.

*Eckel, Catherine C. 2014. Economic Games for Social Scientists. In Laboratory Experiments in the Social Sciences, edited by Murray Webster and Jane Sell, 335-55. London: Academic Press.

 

Additional Interesting Readings:

Berg, Joyce E., John W. Dickhaut and Kevin McCabe. 1995. Trust, Reciprocity, and Social History. Games and Economic Behavior. 10.1: 122-42.

Chaudhuri, Ananish, 2011. Sustaining Cooperation in Laboratory Public Goods Experiments: A Selective Survey of the Literature. Experimental Economics, 14(1), 47-83.

Engel, Christoph. 2011. Dictator Games: A Meta Study. Experimental Economics. 14.4: 583-610.

Isaac, R. Mark and James M. Walker. 1988. Group Size Effects in Public Goods Provision: The Voluntary Contributions Mechanism. Quarterly Journal of Economics. 103.1: 179-99.

Johnson, Noel D. and Alexandra A. Mislin. 2011. Trust Games: A Meta-analysis. Journal of Economic Psychology. 32.5: 865-889.

 

Day 2

2A. Reputation & Social Image

*Andreoni, James, Justin M. Rao, and Hannah Trachtman. “Avoiding the ask: A field experiment on altruism, empathy, and charitable giving.” Journal of Political Economy 125, no. 3 (2017): 625-653.

*Jones, Daniel, and Sera Linardi. “Wallflowers: Experimental evidence of an aversion to standing out.” Management Science 60, no. 7 (2014): 1757-1771.

 

Additional Interesting Readings:

Andreoni, James, Ragan Petrie, 2004. Public goods experiments without confidentiality: a glimpse into fund-raising. Journal of Public Economics, 88(7), 1605-1623.

Carpenter, Jeffrey, and Caitlin Knowles Myers. “Why volunteer? Evidence on the role of altruism, image, and incentives.” Journal of Public Economics 94, no. 11 (2010): 911-920.

 

2B. Nudges

*Delaney, J., Jacobson, S., 2016. Payments or Persuasion: Common Pool Resource Management with Price and Non-price Measures. Environmental and Resource Economics, 65(4), 747-772.

*Schultz, P. Wesley, Nolan, Jessica M., Cialdini, Robert B., Goldstein, Noah J., Griskevicius, Vladas, 2007. The Constructive, Destructive, and Reconstructive Power of Social Norms. Psychological Science, 18(5), 429-434.

 

Additional Interesting Readings:

Archambault, Caroline, Chemin, Matthieu, de Laat, Joost, 2016. Can peers increase the voluntary contributions in community driven projects? Evidence from a field experiment. Journal of Economic Behavior & Organization, 132, Part A, 62-77.

Ferraro, Paul J. and Price, Michael K., 2013. “Using Nonpecuniary Strategies to Influence Behavior: Evidence from a Large-Scale Field Experiment.” Review of Economics and Statistics, 95(1), 64-73.

Holladay, J.S., LaRiviere, J., Novgorodsky, D.M., Price, M., 2016. Asymmetric Effects of Non-Pecuniary Signals on Search and Purchase Behavior for Energy-Efficient Durable Goods. National Bureau of Economic Research Working Paper Series, No. 22939.

 

Day 3

3A. Risk & Time Preferences

*Burks, Stephen, Jeffrey Carpenter, Lorenz Götte, and Aldo Rustichini. 2012. Which Measures of Time Preference Best Predict Outcomes: Evidence from a Large-Scale Field Experiment. Journal of Economic Behavior and Organization. 84.1: 308-320.

*Charness, Gary, Uri Gneezy, and Alex Imas. 2013. Experimental Methods: Eliciting Risk Preferences. Journal of Economic Behavior & Organization. 87: 43-51.

 

Additional Interesting Readings:

Cappelen, Alexander W., James Konow, Erik Ø. Sørensen, and Bertil Tungodden. 2013. Just Luck: An Experimental Study of Risk-Taking and Fairness. American Economic Review. 103.4: 1398-1413.

de Oliveira, Angela C. M. and Sarah Jacobson. 2017. (Im)patience by Proxy: Making Intertemporal Decisions for Others. Working paper.

Exley, Christine. 2016. Excusing Selfishness in Charitable Giving: The Role of Risk. Review of Economic Studies, 83(2), 587-628.

Frederick, Shane, George Loewenstein and Ted O’Donoghue. 2002. Time Discounting and Time Preference: A Critical Review. Journal of Economic Literature. 40.2: 351-401.

Güth, Werner, M. Vittoria Levati and Matteo Ploner. 2008. On the Social Dimension of Time and Risk Preferences: An Experimental Study. Economic Inquiry. 46.2: 261-272.

 

 

3B. Applications to Policy

*DellaVigna, Stefano. “Psychology and economics: Evidence from the field.” Journal of Economic Literature 47, no. 2 (2009): 315-72.

*Incekara-Hafalir, Elif, and Sera Linardi. “Awareness of low self-control: Theory and evidence from a homeless shelter.” Journal of Economic Psychology 61 (2017): 39-54.

 

Additional Interesting Readings:

Bergman, Peter Leopold S., Rogers, Todd, 2017. The Impact of Defaults on Technology Adoption, and its Underappreciation by Policymakers, Working Paper.

Callen, M., Isaqzadeh, M., Long, J.D., Sprenger, C., 2014. Violence and Risk Preference: Experimental Evidence from Afghanistan. American Economic Review, 104(1), 123-148.

Rogers, Todd, Green, Donald P., Ternovski, John, Ferrerosa Young, Carolina, 2017. Social pressure and voting: A field experiment conducted in a high-salience election. Electoral Studies, 46, 87-100.

 

Day 4

4A. Nuts and Bolts: General and Lab

*List, John A., Sadoff, Sally and Wagner, Mathis, 2011. “So you want to run an experiment, now what? Some simple rules of thumb for optimal experimental design.” Experimental Economics, 14(4), 439-457.

*Gazzale, Robert, Jacobson, Sarah, and Linardi, Sera, 2018. “Nuts and Bolts: Designing and Running an Experiment.”

 

Additional Interesting Readings:

Brandts, Jordi, and Gary Charness. 2011. The Strategy versus the Direct-Response Method: A First Survey of Experimental Comparisons. Experimental Economics. 14: 375-398.

Charness, Gary, Uri Gneezy, and Michael A. Kuhn. 2012. Experimental Methods: Between-Subject and within-Subject Design. Journal of Economic Behavior & Organization. 81.1: 1-8.

Cubitt, Robin P., Chris Starmer and Robert Sugden. 1998. On the Validity of the Random Lottery Incentive System. Experimental Economics. 1: 115-131.

Moffatt, Peter G. (2015) Experimetrics. Chapters I and II.

Humphreys, M., R. Sánchez de la Sierra, and P. van der Windt. (2013). “Fishing, Commitment, and Communication: A Proposal for Comprehensive Nonbinding Research Registration.” Political Analysis 21(1): 1-20.

Webster, Murray. 2014. “Funding Experiments, Writing Proposals.” In Laboratory Experiments in the Social Sciences, 2nd Edition, edited by Murray Webster and Jane Sell, 473-502. San Diego, CA: Elsevier.

 

4B. Nuts and Bolts: Lab-in-Field and Online

*Arechar, Antonio A., Gächter, Simon, Molleman, Lucas, 2017. Conducting interactive experiments online. Experimental Economics.

*Condra, Luke N., Mohammad Isaqzadeh, and Sera Linardi. “Clerics and Scriptures: Experimentally Disentangling the Influence of Religious Authority in Afghanistan.” British Journal of Political Science (2017): 1-19.

 

Additional Interesting Readings:

Delaney, Jason, Jacobson, Sarah, Moenig, Thorsten, 2018. Preference Discovery. (Updated version to follow)

Lucking-Reiley, David, 1999. Using Field Experiments to Test Equivalence between Auction Formats: Magic on the Internet. The American Economic Review. 89.5: 1063-1080.

Mason, Winter, Suri, Siddharth, 2012. Conducting behavioral research on Amazon’s Mechanical Turk. Behavior Research Methods, 44(1), 1-23.

Tanaka, Tomomi, Colin F. Camerer, and Quang Nguyen. “Risk and time preferences: linking experimental and household survey data from Vietnam.” American Economic Review 100, no. 1 (2010): 557-71.

 

Examination part

Evaluation will be based on the group project (85%) and class participation (15%).

Each student will be a member of a group of two or three students. Each group will design and pre-test or pilot a novel experiment that can answer a research question. The group’s deliverables will include an in-class presentation; a written report outlining the design, with relevant references; and instructions and/or screenshot mockups of the experiment. We will provide detailed guidance and grading rubrics that reflect our expectations for the paper and the presentation.

All group members are expected to contribute to all parts of the project, and to contribute equally overall, though some division of labor is wise. Students in a group will not necessarily receive the same grade; we will evaluate not just the project overall but each student’s contribution to it. Individual students’ contributions will be assessed based on: students’ evaluations of their own and others’ contributions; the professors’ interactions with students during the project development; and the quality of the section(s) for which they had primary responsibility (if appropriate).

Each student’s class participation grade will depend on his or her preparation before class and on participation that demonstrates that preparation and intellectual engagement with the material.

 

Grades will be calculated as follows:

Final report & materials (group grade +/- individual deviation) = 55%

Final presentation (group grade +/- individual deviation) = 30%

Class participation = 15%
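
For illustration only (all numbers here are hypothetical): a student whose group earns 90 on the final report and materials, with a +2 individual adjustment (92), earns 90 on the final presentation with no adjustment, and earns 95 for class participation would receive 0.55 × 92 + 0.30 × 90 + 0.15 × 95 = 50.6 + 27 + 14.25 = 91.85.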

 

Supplementary aids

N/A

  

Examination content

There are no examinations. The group project must reflect solid principles of experimental design and conduct, including: design of treatments that isolate the phenomena of scholarly interest; incentive structure; and execution principles like anonymity, payment procedures, institutional review, sample size, randomization and rematching.
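
Purely as an illustrative sketch (not a required tool for the course, and not tied to any particular experiment software), the short Python snippet below shows one simple way to implement the randomization principle mentioned above, assigning subjects to treatment groups of roughly equal size; the subject IDs and treatment labels are hypothetical.

import random

def assign_treatments(subject_ids, treatments=("control", "treatment"), seed=42):
    # Shuffle subject IDs with a fixed seed so the assignment is reproducible,
    # then deal them out to the treatments round-robin so group sizes stay balanced.
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)
    return {sid: treatments[i % len(treatments)] for i, sid in enumerate(ids)}

# Example with eight hypothetical subject IDs:
print(assign_treatments(["S1", "S2", "S3", "S4", "S5", "S6", "S7", "S8"]))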

  

Literature

See reading list above.