Name of Method
Brief description
Type/Level of Method
Challenges
Conducting field experiments requires at least some basic knowledge of data collection and data analysis (e.g., A/B testing, t-tests, or ANOVA).
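A minimal sketch of the kind of analysis this implies, written in Python: a two-proportion z-test of the sort used in A/B testing. The variant labels and counts are invented for illustration only.

```python
# Minimal A/B-test sketch: comparing take-up rates between two
# variants of a service with a two-proportion z-test.
# The counts below are invented for illustration only.
from statsmodels.stats.proportion import proportions_ztest

conversions = [48, 72]   # successes observed in variants A and B
exposures = [500, 510]   # participants exposed to A and B

z_stat, p_value = proportions_ztest(conversions, exposures)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```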
Problem, Purpose and Needs
The method provides an evidence-based methodology. The results of field experiments can be shared with stakeholders to support informed decision-making.
Relevance to Climate Neutrality
Thematic Areas
Impact Goals
Issue Complexity
Issue Polarisation
Enabling Condition
Essential Considerations for Commissioning Authorities
Engagement Journey
Governance Models and Approaches
Enabling Conditions
Democratic Purpose
Spectrum of participation
Communication Channels
Actors and Stakeholder Relationships
Participant Numbers
Actors and Stakeholders
Participant Recruitment
Interaction between participants
Format
Social Innovation Development Stage
Scope
Time commitment
Typical duration
Resources and Investments
In-house
Step by Step
1. Specify the “intervention” (e.g., a policy supporting social innovation) and the effect it is intended to produce.
2. Define indicators of the desired effect. For example, if the aim is to test a policy for increasing sharing practices, its effect could be measured by counting the number of sharing initiatives, counting the people and organizations involved, measuring how many items have been shared and by whom, measuring user satisfaction with the service, etc.
3. Define the experimental design: to test the effect of an intervention, you need to compare it to (1) the status before the intervention, and/or (2) a control group (e.g., collecting the same data in another district where the policy was not implemented), and/or (3) one or more different interventions. It is most informative to conduct randomized controlled trials, comparing different interventions to control groups, in multiple locations.
4. Define which contextual factors could affect the intervention’s effect, e.g., gender, age, socio-economic status, culture, district, engagement mechanisms, involved partners, communication, etc. Information on these factors should be included in the data collection to better understand the effectiveness of the intervention.
5. Collect data according to steps 2, 3 and 4, e.g., with questionnaires, interviews, online surveys, apps, etc.
6. Analyse the results and use them for evidence-based policy making (see the sketch after these steps).
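The sketch below illustrates steps 3, 5 and 6 in Python under assumed data: households (a hypothetical unit of analysis) are randomly assigned to an intervention or a control group, an outcome indicator is collected, and the two groups are compared with a t-test. All names and numbers are simulated for illustration.

```python
# Hypothetical sketch of steps 3, 5 and 6: random assignment of
# households to intervention/control, followed by a t-test on a
# simulated outcome indicator (e.g., items shared per month).
import random
from scipy import stats

random.seed(42)
households = [f"household_{i}" for i in range(200)]
random.shuffle(households)                        # step 3: randomisation
treatment, control = households[:100], households[100:]

# Step 5: in a real study these values would come from questionnaires,
# apps, or administrative records; here they are simulated.
treated_outcomes = [random.gauss(5.0, 1.5) for _ in treatment]
control_outcomes = [random.gauss(4.2, 1.5) for _ in control]

# Step 6: compare the two groups.
t_stat, p_value = stats.ttest_ind(treated_outcomes, control_outcomes)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```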
Evaluation
The method is itself an evaluation and is usually not evaluated in turn; if needed, it can be complemented with qualitative data collection methods to overcome the limits of quantitative data collection.
The “evaluation” of the quality of an experiment can be inferred from the quality of the measurement instruments used to run it. For instance, responses should be collected with validated scales and measurement instruments (questions should not be “invented” but selected from existing validated scales).
Connecting Methods
Experiments have been used together with complementary methodologies such as diaries (in which participants write down their reflections or feelings), follow-up interviews, or focus groups.
Flexibility and Adaptability
Experimental methodologies are very flexible: data can be collected before and after an intervention, to compare multiple interventions (between themselves and/or to a control group), or to understand the influence of contextual factors on the intervention (e.g., the designed policy might work only for certain age groups; see the sketch below). The method requires the collection of quantitative data (e.g., with questionnaires, or by collecting performance or behavioural data). Optionally, qualitative data can complement the picture (e.g., follow-up interviews or focus groups that give deeper meaning to the quantitative results).
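A sketch of the contextual-factor analysis mentioned above, assuming simulated data: a two-way ANOVA testing whether an age band moderates the intervention’s effect. The column names, group labels, and effect sizes are hypothetical.

```python
# Simulated sketch of a contextual-factor analysis: a two-way ANOVA
# testing whether age band moderates the intervention's effect.
# Column names, group labels, and effect sizes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "group": rng.choice(["control", "intervention"], size=n),
    "age_band": rng.choice(["under_40", "over_40"], size=n),
})
treated = df["group"] == "intervention"
younger = df["age_band"] == "under_40"
df["outcome"] = (
    3.0
    + 0.8 * treated                 # overall intervention effect
    + 0.5 * (treated & younger)     # extra effect for younger participants
    + rng.normal(0, 1, size=n)
)

model = smf.ols("outcome ~ C(group) * C(age_band)", data=df).fit()
print(anova_lm(model, typ=2))       # interaction row tests the moderation
```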
Data should be collected from a large enough sample (min. 100 subjects) for the analysis to be reliable; the power-analysis sketch below shows how to make this requirement concrete.
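A rough a-priori power calculation under assumed parameters, using statsmodels: the effect size, alpha, and power values below are illustrative choices, not fixed requirements. Smaller expected effects push the required sample well above the 100-subject rule of thumb.

```python
# A-priori power calculation: subjects needed per group to detect a
# medium standardized effect (Cohen's d = 0.5) with 80% power at
# alpha = .05 in a two-group comparison.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"~{n_per_group:.0f} subjects per group")  # about 64 per group
```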
Existing Guidelines and Best Practice
Gandhi, R., Knittel, C. R., Pedro, P., & Wolfram, C. (2016). Running randomized field experiments for energy efficiency programs: A practitioner’s guide. Economics of Energy & Environmental Policy, 5(2), 7-26. http://www.iaee.org/en/publications/eeeparticle.aspx?id=126
Generic practical reference for designing experiments:
Field, A., & Hole, G. (2002). How to design and report experiments. Sage.
A free online application for experimental data analysis: https://jasp-stats.org/
References and Further Resources
Scientific reference on field experiments:
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.
Practical reference for designing experiments:
Field, A., & Hole, G. (2002). How to design and report experiments. Sage.
A useful guide to experiments analysis:
Field, A. (2013). Discovering statistics using IBM SPSS statistics. Sage.
A free online application for experimental data analysis: https://jasp-stats.org/
Application to climate neutrality:
Bernstein, S., & Hoffmann, M. (2018). The politics of decarbonization and the catalytic impact of subnational climate experiments. Policy Sciences, 51(2), 189-211.
Gandhi, R., Knittel, C. R., Pedro, P., & Wolfram, C. (2016). Running randomized field experiments for energy efficiency programs: A practitioner’s guide. Economics of Energy & Environmental Policy, 5(2), 7-26. http://dx.doi.org/10.5547/2160-5890.5.2.rgan
Applications to policy making:
Banerjee, A. V., & Duflo, E. (2019). Good economics for hard times. PublicAffairs. [Nobel prize winners]
Banerjee, A. V., & Duflo, E. (2011). Poor economics: A radical rethinking of the way to fight global poverty. PublicAffairs. [Nobel prize winners]
King, G., Gakidou, E., Ravishankar, N., Moore, R. T., Lakin, J., Vargas, M., ... & Llamas, H. H. (2007). A “politically robust” experimental design for public policy evaluation, with application to the Mexican universal health insurance program. Journal of Policy Analysis and Management, 26(3), 479-506.