Background – There is a body of information literacy (IL) literature applied to undergraduate engineering students, much of which discusses different methods for teaching, such as classes/one-shots, online tutorials, gaming, and other interventions. It is important for librarians to know which methods of teaching engineering information literacy (EIL) are most effective for student learning, in order to make efficient and effective use of student and librarian time.
Purpose/Hypothesis – The authors reviewed the existing literature to find indications of the most effective methods for teaching and/or integrating EIL, both in face-to-face and online instruction.
Design/Method – The authors have completed the first stages of a systematic literature review (SLR), through the creation of the final dataset. The initial searches generated a set of 1224 papers prior to duplicate removal. Duplicate removal and multiple rounds of review, using author-created inclusion and exclusion criteria, narrowed the final dataset to 13 papers.
Scope/Method – Lessons learned about searching, tools for data evaluation, and the articulation of criteria are presented. As a result of this portion of the SLR process, the authors identified shared characteristics of the undergraduate-focused EIL literature.
Results/Discussion – A brief summary of the process to arrive at a final dataset of 13 papers, the challenges in the process, and the refinements made at each step are outlined.
Conclusion – There are several preliminary conclusions to be drawn, many of which will not be surprising to the engineering librarian community. The dataset came down to just 13 items because much of the EIL literature is based on student self-report data about how the class went or whether it was enjoyable, rather than on actual student learning gains. As such, these papers did not meet the criteria for demonstrated learning gains as a measure of effectiveness. In addition, some papers were excluded for lack of clarity about methods: in these studies it is not evident how the intervention and/or the assessment was conducted, with regard to timing, instrument used, etc. Other papers were excluded because they lacked a control or comparison group that could establish the “effectiveness” of the intervention. Overall, the authors note that the EIL literature frequently reports descriptive statistics, showing that data have been gathered, but sometimes falls short of a full analysis that would allow the researchers to draw meaningful, well-grounded conclusions from the data.