Bibliography

Cherney, I. D. (2008). The effects of active learning on students’ memories for course content. Active Learning in Higher Education, 9(2), 152-171.

  • The author conducted two studies to compare the effectiveness of active learning with class videos and lectures, and to examine whether active learning was more effective in lower- or upper-level psychology courses. In study one, 250 undergraduates were recruited from different psychology courses and asked to write down previously covered topics. In study two, 64 psychology students completed the same procedure, but the author was also able to investigate student understanding and achievement. Across the two studies, active learning activities led to greater retention of class content and a deeper understanding of that content. Lastly, there were no significant differences by course level, suggesting that active learning is beneficial for both lower- and upper-level psychology courses.
  • This study demonstrates that active learning may be more effective than other teaching styles (e.g., lecture), and I have incorporated this into my own classroom. I use active learning strategies in both Introduction to Health Psychology (lower level) and Research Methods 2 (upper level) every class period. In my health psychology class, about 25% of each class period is dedicated to active learning, and in my methods class, about 50%.

Dickson, K. L., Miller, M. D., & Devoley, M. S. (2005). Effect of textbook study guides on student performance in introductory psychology. Teaching of Psychology, 32(1), 34-39.

  • The authors sought to test the effects of study guide use on exam performance. A total of 236 Introduction to Psychology students were randomly assigned to one of two groups: (1) use a study guide before the exam or (2) a control condition. Students in the study guide group had significantly higher exam scores than students in the control condition. It was also found that students who completed 75% of the study guide did not score significantly differently from students who completed 25% of it. This interesting finding suggests that completing even a small proportion of the study guide leads to significantly better grades.
  • When I first started teaching, I did not utilize study guides. However, this study led me to implement study guides in my classes.

Hackathorn, J., Cornell, K., Garczynski, A., Solomon, E., Blankmeyer, K., & Tennial, R. (2012). Examining exam reviews: A comparison of exam scores and attitudes. Journal of the Scholarship of Teaching and Learning, 78-87.

  • This study investigated whether certain types of study guides were more or less effective than others in preparing students for exams, as well as students’ attitudes toward the study guides. Seventy-eight Social Psychology students were followed over a semester that included three exam reviews, each followed by an exam. Using a within-subjects design, the authors exposed participants to three different types of study guides: a trivia-style game, a traditional study guide, and a practice exam. Both the trivia-style and traditional reviews were associated with significantly higher exam scores than the practice-exam review. Moreover, students felt less confident and less prepared following the traditional review.
  • These results suggest that more active reviews may not only be more effective in preparing students for exams but may also increase student confidence and perceived preparedness. Thus, I not only use study guides in my classes, but I try to make them more interactive and applied. For example, in my methods class, I create an application-based study guide and dedicate a full class period to letting students work on it in groups. After the study guide is completed, we go over the answers together as a class.

Higgins, R., Hartley, P., & Skelton, A. (2002). The conscientious consumer: Reconsidering the role of assessment feedback in student learning. Studies in Higher Education, 27(1), 53-64.

  • This paper does not report an empirical study; rather, it takes a theoretical approach to examining feedback in higher education. One of its major points is that students value feedback on their work and that receiving it can help facilitate a deeper understanding of class content.
  • When I first started teaching, I was unsure how feedback was valued by students. This article confirmed that it is in fact valued by students and helps them achieve a greater understanding of their class content. Thus, I use feedback in all of my classes, providing detailed feedback on assignments, exams, and semester-long projects.

Juwah, C., Macfarlane-Dick, D., Matthew, B., Nicol, D., Ross, D., & Smith, B. (2004). Enhancing student learning through effective formative feedback. The Higher Education Academy, 140.

  • This is a long theoretical manuscript that uses theory and case studies to describe what characterizes the most effective feedback. The authors outline seven principles of effective feedback: (1) it allows students to think more deeply about the content they are learning, (2) it facilitates discussion between student and instructor, (3) it helps students understand the purpose of the assessment, (4) it helps students achieve the instructor’s learning objectives, (5) it is only truly helpful when detailed and delivered in a timely manner, (6) it can make learning a more positive experience, and (7) it can help instructors improve their own teaching and assessment.
  • Given these principles, I have made several adjustments to the use of feedback in my classroom. First, I ensure that I provide feedback as quickly as I can. Second, I try to provide as much detailed feedback as possible. For example, in my methods class, I scaffold the research proposal project, providing three rounds of very detailed feedback to the students. Third, I have students provide feedback to their classmates through peer review.

Kingston, N., & Nash, B. (2011). Formative assessment: A meta‐analysis and a call for research. Educational Measurement: Issues and Practice, 30(4), 28-37.

  • This meta-analysis examined the efficacy of formative assessment and found an effect size of .20. The results suggest that the efficacy of formative assessment may not be as strong as once thought, but the literature to date is small (N = 42 studies), and many factors moderate this effect.
  • While I still use formative assessment in my classes, I do not use many of its common forms (e.g., quizzes). Instead, the formative assessments in my classes are built into the active learning activities done each day in class. Moreover, I monitor student progress (or lack thereof) through the scaffolding of the major class project. For example, in my Introduction to Health Psychology course, a behavior modification project unfolds in stages throughout the semester, and each stage uses class content from that unit. Thus, if students are struggling with a certain component of the project (or any of the in-class activities), I am able to identify the problem and address it.

Momsen, J., Offerdahl, E., Kryjevskaia, M., Montplaisir, L., Anderson, E., & Grosz, N. (2013). Using assessments to investigate and compare the nature of learning in undergraduate science courses. CBE-Life Sciences Education, 12(2), 239-249.

  • This study used Bloom’s taxonomy to examine assessments in two different science classes: introductory biology and introductory physics. The authors found that both classes assessed lower cognitive levels, but in different ways (i.e., biology assessed more for knowledge, while physics assessed more for comprehension). They also examined the relationship between Bloom’s level and student performance and found a small but significant relationship between the two in the physics class. The authors argue that these findings highlight an issue with student expectations: students form perceptions of what a class should demand (e.g., that biology is strictly memorization).
  • While the findings of this study were interesting, it was the rationale behind the study that I derived the most value from, and it has influenced how I use assessment in my classes. This is particularly useful for me because I teach one lower-level class and one upper-level class, and accordingly I have different types of learning objectives for each. If my objectives truly differ, then my assessments should target different Bloom’s levels as well. Using Bloom’s taxonomy, I can ensure that my assessments match each class’s learning level and objectives.

Sana, F., Weston, T., & Cepeda, N. J. (2013). Laptop multitasking hinders classroom learning for both users and nearby peers. Computers & Education, 62, 24-31.

  • The authors tested whether laptop use affected classroom performance. In the first experiment, 44 undergraduate students were randomly assigned to one of two conditions: (1) take notes on a laptop only or (2) take notes and perform tasks not related to the class. Participants in the second condition scored significantly lower on a subsequent test, suggesting that multitasking on a laptop during class has adverse effects on classroom performance. The second experiment tested whether sitting near a classmate who was multitasking on a laptop would lead students to score more poorly on a subsequent test. Thirty-nine undergraduate students came into the lab and listened to a lecture; no participants were allowed to use laptops, while several confederates multitasked on laptops during the lecture. Participants were randomly assigned either to sit near a laptop user or not. Participants who sat near a laptop multitasker scored significantly lower on the test than those who did not.
  • With the continued development and use of technology, laptop use will become an increasingly prevalent issue for instructors to deal with in their classrooms. This study has influenced my classroom laptop policy. While I do not ban laptops from the classroom, I spend a few minutes on the first day of class discussing this study. Specifically, I explain the results and how laptop use may affect students’ own grades. Further, if students are to use a laptop, I urge them not to use it for non-class-related content, as doing so can adversely affect their own grades and the grades of nearby classmates.

Smith, M. K., Wood, W. B., Adams, W. K., Wieman, C., Knight, J. K., Guild, N., & Su, T. T. (2009). Why peer discussion improves student performance on in-class concept questions. Science, 323(5910), 122-124.

  • This study tested whether small group discussions could enhance student performance. A total of 350 undergraduate students participated in a semester-long study. In each 50-minute class, approximately five clicker questions were asked, and each time a question was posed, students were encouraged to discuss it with the students sitting next to them. Overall, the results suggest that peer discussion does in fact enhance student understanding, regardless of whether anyone in the group initially knew the correct answer.
  • This paper has influenced how I conduct my in-class activities. Instead of having students complete these activities by themselves, they work in small groups. Then, after the small groups have worked on the activity, we come together as a whole class and work through it.

Stowell, J. R., & Bennett, D. (2010). Effects of online testing on student exam performance and test anxiety. Journal of Educational Computing Research, 42(2), 161-171.

  • The authors sought to examine how test anxiety and exam performance are influenced by whether exams are taken online or in a regular classroom. In this counterbalanced within-subjects design, 69 psychology students took one exam online and a second in a regular classroom. A majority of students preferred taking the exam online. In contrast to the authors’ hypotheses, test anxiety and performance did not differ significantly between exam formats. However, students with high classroom test anxiety reported less anxiety when taking the exam online, while those with low classroom anxiety actually reported more anxiety online. The authors argued that this anxiety could be attributed to the novelty of taking an exam online (e.g., getting the online platform to work correctly).
  • While this study did not find conclusive evidence that taking exams online benefits test performance or anxiety, it has still influenced my exam policies. Since I started teaching, I have administered my exams online, and I have been fine-tuning the process to make it as smooth as possible.

Taylor, A. K. (2011). Students learn equally well from digital as from paperbound texts. Teaching of Psychology, 38(4), 278-281.

  • This study examined the impact of textbook format on test performance. Seventy-four undergraduate students were randomly assigned to read either a digital or a paperbound copy of a specific chapter and then take a test over its content immediately after, and one week after, reading it. There were no significant differences in test performance between the two groups, suggesting that textbook format does not make a difference in student learning.
  • With the continued advancement of technology, digital textbooks are becoming more common each year. One concern this raises for me is how it may impact student learning: do students learn differently when using a digital textbook? While there is still much more research to be done on this issue, it appears that textbook format does not impact student learning. While I have not yet implemented programs built around digital textbooks (e.g., Perusall), it is something I am working toward.