Increasing Student Course Feedback Survey Response Rates

Recommendations for increasing student response rates for the Course Experience Feedback Survey administered at the end of each course. These recommendations apply to courses of any format.
 
Summary of Recommendations for Faculty from the Academic Quality Committee (AQC), approved by the Faculty Senate in March 2023
  1. Dedicate instructional time for feedback survey completion.
  2. Provide incentives (points-based or otherwise) that begin at a threshold of completion (e.g., 80% of students completing the survey).
  3. Communicate the value of the feedback survey for instructors and students, using specific examples.
  4. Provide your own reminders, in addition to the automated reminders generated by the survey system.
  5. Provide guidance for writing helpful feedback on the survey. 
  6. Administer midterm evaluations, and respond to student feedback swiftly.
  7. Emphasize social norms and social responsibility.
  8. Help the AQC improve the quality of the feedback survey instrument.

For more specific guidance, see the detailed recommendations below.

More Detailed Recommendations 

  1. Dedicate Instructional Time to Feedback Survey Completion. For face-to-face or hybrid courses, allocating class time for completing the feedback survey (using laptops or other electronic devices) can substantially increase response rates (Young et al., 2019). We recommend dedicating 15–20 minutes at the beginning (rather than the end) of a class period (Standish et al., 2018) during the window in which the feedback survey is available. Note that some students may require additional time or resources to ensure universal access. The instructor should give students direct links to the survey. Instructors should also tell students ahead of time, through the syllabus and/or email, that this class time will be dedicated to the feedback survey and that electronic devices will be used. During this process, instructors should not monitor students’ responses in any way that could compromise anonymity.

    For online courses, dedicated synchronous class time may not be possible. However, instructors can build encouragement of feedback survey completion into instructional modules in other ways. For example, students may be asked to confirm that they can access the feedback survey form as a release condition for other course activities. As another example, a typical course component (quiz, activity, discussion, etc.) may be removed from a module near the end of the course, with the instructor telling students that the time needed to complete the feedback survey takes the place of that component.

  2. Provide Threshold-Based Incentives. Tangible incentives can significantly increase response rates (Goodman et al., 2015). Instructors should be allowed to provide incentives for completion of the feedback survey. Because the evaluations are anonymous, incentives cannot responsibly be offered to individual students; instead, use a threshold-based incentive that applies only if a certain percentage of students (e.g., 80%) complete the evaluation. The incentive can be points-based (e.g., extra credit totaling no more than 1% of the course grade) or not points-based (e.g., the ability to bring a notecard to a final exam, or a fun activity for the class). We do not recommend punitive incentives (e.g., grade reductions or withholding the release of grades).

  3. Communicate the Value of the Feedback Survey. Students are more likely to complete feedback surveys if they believe their feedback has value for themselves or other students and will be used to improve teaching and learning (Guder & Malliaris, 2013; Hoel & Dahl, 2019). Therefore, when reminding students to complete the feedback survey, instructors should emphasize the value of survey responses for both students and instructors. We recommend communicating the following:

    1. Describe why the feedback survey is collected, particularly how faculty and departments use the information to change courses or teaching practices.

    2. Give specific examples of how you have personally used student survey feedback to change teaching practices in your own courses.

  4. Provide Your Own Reminders. Even though the evaluation platform sends reminders automatically, personal nudges from faculty also help. These can be delivered in person, via email, and/or via D2L announcements.

  5. Provide Guidance on Writing Helpful Feedback. When communicating with students, it may be helpful to show them how to write informative comments by offering examples of helpful feedback, both positive and negative. (See this example from the University of Michigan: Feedback Guidance Example.)

  6. Administer Midterm Evaluations. Faculty should consider collecting anonymous feedback about the course and instructor before the end of the term (e.g., at midterm) and then responding explicitly and directly to that feedback during the same term. Doing so demonstrates that feedback from the student survey will be considered and can have a positive influence on teaching practices and student satisfaction (Overall & Marsh, 1979; Young et al., 2019). To accomplish this, instructors can administer their own copies of the questions from the formal evaluations or use another form.

  7. Emphasize Social Norms and Social Responsibility. Norm-based appeals can increase survey response rates (e.g., Misra et al., 2012). For example, once you have reached a 50% response rate, you may update your reminder by noting that “Most of your fellow students have completed the feedback survey for this course. Please join your fellow students in improving the quality of future courses by filling out the feedback survey.”  

  8. Help to Improve the Quality of the Feedback Survey Instrument. The instrument and its items should yield valid data for assessing student responses to courses. They should also be as simple as possible and consistent across face-to-face and online courses. The AQC recommends the optional inclusion of a limited number of custom questions, at the department or course level, to better fit the needs of individual faculty. Revising and improving the feedback survey is an ongoing process, and we welcome input from all stakeholders.

References

Adams, M. J., & Umbach, P. D. (2012). Nonresponse and online student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments. Research in Higher Education, 53(5), 576–591. https://doi.org/10.1007/s11162-011-9240-5

Bacon, D. R., Johnson, C. J., & Stewart, K. A. (2016). Nonresponse bias in student evaluations of teaching. Marketing Education Review, 26(2), 93–104. https://doi.org/10.1080/10528008.2016.1166442

Chapman, D. D., & Joines, J. A. (2017). Strategies for increasing response rates for online end-of-course evaluations. International Journal of Teaching and Learning in Higher Education, 29(1), 47–60. https://files.eric.ed.gov/fulltext/EJ1136018.pdf

Gerbase, M. W., Germond, M., Cerutti, B., Vu, N. V., & Baroffio, A. (2015). How many responses do we need? Using generalizability analysis to estimate minimum necessary response rates for online student evaluations. Teaching and Learning in Medicine, 27(4), 395–403. https://doi.org/10.1080/10401334.2015.1077126

Goodman, J., Anson, R., & Belcheir, M. (2015). The effect of incentives and other instructor-driven strategies to increase online student evaluation response rates. Assessment & Evaluation in Higher Education, 40(7), 958–970. https://doi.org/10.1080/02602938.2014.960364

Guder, F., & Malliaris, M. (2013). Online course evaluations response rates. American Journal of Business Education, 6(3), 333–338. https://files.eric.ed.gov/fulltext/EJ1054451.pdf

Hoel, A., & Dahl, T. I. (2019). Why bother? Student motivation to participate in student evaluations of teaching. Assessment & Evaluation in Higher Education, 44(3), 361–378. https://doi.org/10.1080/02602938.2018.1511969

Misra, S., Stokols, D., & Marino, A. H. (2012). Using norm-based appeals to increase response rates in evaluation research: A field experiment. American Journal of Evaluation, 33(1), 88–98.

Nulty, D. D. (2008). The adequacy of response rates to online and paper surveys: What can be done? Assessment & Evaluation in Higher Education, 33(3), 301–314. https://doi.org/10.1080/02602930701293231

Overall, J. U., & Marsh, H. W. (1979). Midterm feedback from students: Its relationship to instructional improvement and students' cognitive and affective outcomes. Journal of Educational Psychology, 71(6), 856–865. https://doi.org/10.1037/0022-0663.71.6.856

Standish, T., Joines, J. A., Young, K. R., & Gallagher, V. J. (2018). Improving SET response rates: Synchronous online administration as a tool to improve evaluation quality. Research in Higher Education, 59(6), 812–823. https://doi.org/10.1007/s11162-017-9488-5

Young, K., Joines, J., Standish, T., & Gallagher, V. (2019). Student evaluations of teaching: The impact of faculty procedures on response rates. Assessment & Evaluation in Higher Education, 44(1), 37–49. https://doi.org/10.1080/02602938.2018.1467878


Further Recommended Reading 

Alvero, A. M., Mangiapanello, K., & Valad, J. (2019). The effects of incentives, instructor motivation and feedback strategies on faculty evaluation response rates in large and small class sizes. Assessment & Evaluation in Higher Education, 44(4), 501–515. https://doi.org/10.1080/02602938.2018.1521913

Aragón, O. R., Pietri, E. S., & Powell, B. A. (2023). Gender bias in teaching evaluations: The causal role of department gender composition. Proceedings of the National Academy of Sciences of the United States of America, 120(4), e2118466120. https://doi.org/10.1073/pnas.2118466120

Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make it stick: The science of successful learning. Belknap Press of Harvard University Press.

Cavanagh, S. R. (2016). The spark of learning: Energizing the college classroom with the science of emotion. West Virginia University Press.

Darby, F., & Lang, J. M. (2019). Small teaching online: Applying learning science in online classes. Jossey-Bass.

Hanstedt, P. (2018). Creating wicked students: Designing courses for a complex world. Stylus Publishing.

Hodges, L. C., & Stanton, K. (2007). Translating comments on student evaluations into the language of learning. Innovative Higher Education, 31, 279–286. https://doi.org/10.1007/s10755-006-9027-3

Kember, D., & Leung, D. Y. P. (2011). Disciplinary differences in student ratings of teaching quality. Research in Higher Education, 52, 278–299. https://doi.org/10.1007/s11162-010-9194-z

Kuh, G. D., & O’Donnell, K. (2013). Ensuring quality and taking high-impact practices to scale. Association of American Colleges and Universities.

Marcham, C. L., Ade, A., Clark, P., & Marion, J. (2020). Bias and trends in student evaluations in online higher education settings. Collegiate Aviation Review International, 38(2). https://commons.erau.edu/publication/1501

Murray, D., Boothby, C., Zhao, H., Minik, V., Bérubé, N., Larivière, V., & Sugimoto, C. R. (2020). Exploring the personal and professional factors associated with student evaluations of tenure-track faculty. PLOS ONE, 15(6), e0233515. https://doi.org/10.1371/journal.pone.0233515

Ray, B., Babb, J., & Wooten, C. A. (2018). Rethinking SETs: Retuning student evaluations of teaching for student agency. Composition Studies, 46(1), 34–56. https://www.jstor.org/stable/26497805

Stein, S. J., Spiller, D., Terry, S., Harris, T., Deaker, L., & Kennedy, J. (2012). Using student evaluations to enhance teaching practice: Closing the loop. Ako Aotearoa – The National Centre for Tertiary Teaching Excellence.