This work-in-progress paper presents an example of conducting a systematic literature review (SLR) to understand students’ affective response to active learning practices, and it focuses on the development and testing of a coding form for analyzing the literature. There is a growing body of literature on the benefits of active learning. However, instructors are often hesitant to adopt these practices, partly due to concern about negative student response. By analyzing findings from multiple studies, our SLR will allow others to better recognize underlying patterns and provide a compelling case for how instructors can overcome student resistance to active learning.
Specifically, this paper will compile results of an SLR regarding affective student response to active learning to answer the following questions:
(1) In published studies about active learning, what evidence is used to measure student affective responses to active learning? What are the relative strengths and weaknesses of each type of evidence?
(2) What patterns exist in the literature with respect to student affective response to active learning? Specifically, what instructor strategies, course content, class sizes, assessment methods, and active learning practices are reported?
(3) Overall, are student reactions to active learning instruction generally positive or negative?
We conducted database searches with carefully defined search queries (e.g., keywords included “active learning”, “student response”, and “engineering education”), which returned 2,754 abstracts published from 1990 to 2015. Two researchers then screened each abstract against the inclusion criteria (e.g., describes an active learning intervention, includes empirical evidence of affective student reaction, and takes place in an undergraduate STEM course), with an adjudication round in cases of disagreement. We used RefWorks, an online citation management program, to track abstracts during this process. To date, we have identified 340 abstracts that satisfy our criteria.
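The dual-screening rule described above (two independent screeners, with adjudication only on disagreement) can be sketched in a few lines. This is an illustrative model, not the tooling the authors used; the function name and the sample decisions are hypothetical.

```python
def screen(decision_a: bool, decision_b: bool, adjudicator: bool = None) -> bool:
    """Include an abstract when both screeners agree; otherwise defer
    to an adjudication decision."""
    if decision_a == decision_b:
        return decision_a
    if adjudicator is None:
        raise ValueError("disagreement requires an adjudication round")
    return adjudicator

# Both screeners agree: included without adjudication.
print(screen(True, True))                        # True
# Screeners disagree: the adjudicator's decision stands.
print(screen(True, False, adjudicator=False))    # False
```

In practice the adjudication decision would come from a third reviewer or a discussion between the original two; the point of the sketch is that adjudication is only ever invoked for the disagreement cases.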
Following abstract screening, we developed and tested a manuscript coding guide to capture the salient characteristics of each paper. We created an initial coding form by determining what paper topics would address our research questions and reviewing the literature to determine the most frequent response categories. We then piloted the form, using Google Forms to compile the data from multiple researchers, to identify and address unclear term definitions and overlapping questions. We tested the reliability of the form over three rounds of independent pair-coding, with each round resulting in clarifications to the form and mutual agreement on terms’ meanings. This process of developing a manuscript coding guide highlights the importance of iterating between pair-coding and discussion stages and demonstrates how to use free online tools, such as Google Forms and Google Sheets, to inexpensively manage inputs from multiple coders and to manage a large SLR team with significant turnover.
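The reliability testing above relies on comparing two coders' independent codes for the same papers. One common way to quantify such agreement (a hypothetical illustration here, not the authors' actual analysis, which centered on discussion and mutual agreement on term meanings) is raw percent agreement, optionally chance-corrected as Cohen's kappa. The coder labels below are invented for the example.

```python
from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Fraction of items on which two coders assigned the same code."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders (Cohen's kappa)."""
    n = len(codes_a)
    p_observed = percent_agreement(codes_a, codes_b)
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # Expected agreement if each coder assigned codes independently
    # at their observed marginal rates.
    p_expected = sum(
        (freq_a[c] / n) * (freq_b[c] / n)
        for c in set(codes_a) | set(codes_b)
    )
    return (p_observed - p_expected) / (1 - p_expected)

coder1 = ["positive", "positive", "negative", "mixed", "positive"]
coder2 = ["positive", "negative", "negative", "mixed", "positive"]
print(percent_agreement(coder1, coder2))  # 0.8
print(cohens_kappa(coder1, coder2))       # 0.6875
```

Tracking a statistic like this across coding rounds gives a concrete signal that form clarifications between rounds are actually improving agreement.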
Currently, we are in the process of applying the coding guide to 340 full texts. When complete, the resulting data will be synthesized by creating and testing relationships between variables, using each primary source as a case study to support or refute the hypothesized relationship. We will present preliminary results from this analysis in the full paper.