I have to admit that I have a love/hate relationship with the Likert scale. I almost jumped with joy when our institutional research director sent me our old National Survey of Student Engagement (NSSE) data so I could attempt to measure the impact of service learning on deep learning, an idea sparked by a presentation I heard at the IUPUI Assessment Conference. On top of my love for regression analytics, pivot tables, and Venn diagrams, a deep enthrallment enters my brain when I get an Excel sheet full of 1s, 2s, 3s, 4s, and 5s, all representing different Likert responses. However, as the data comes to life, I have to wonder if we are really demonstrating student learning through these questions.
Throughout my undergraduate and graduate experience, never once in class did faculty ask whether I "Strongly Agree, Agree, Disagree, or Strongly Disagree" that I understood the basics of servant-leadership philosophy. Instead, our faculty friends would ask something to the effect of "Describe the core principles of servant-leadership philosophy." The open-ended wording of this question allows participants to unpack all the knowledge they have about the subject, while the Likert-based version only allows students to rate what they think they might know. The purpose of an assessment question is not to measure satisfaction ("95% of students reported having a positive experience") but to demonstrate the learning that takes place during the event ("I learned xyz through this...").
So what do we have to do? Ask open-ended questions that are connected to your student learning outcomes. If the information is not relevant to your program's or your center's learning outcomes, do not ask it! While it's much easier to score 1s and 3s on an Excel sheet than to identify themes in essays, you will have access to better data and be doing more authentic, learning-driven assessment.
While we're at it, let's also drop the pre-test and post-test. Growth in knowledge should be expected after completing any structured activity where content is delivered. On top of the tendency for people to self-report higher levels of knowledge than they actually have, I'm just not convinced this is our best opportunity for data. Instead of the pre-test and post-test, perhaps we should use a writing prompt immediately after an event and another a month later to see how knowledge was gained and retained. Further, what if we used a retrospective method that asks participants immediately after the event what they learned, while also asking them to reflect back and rate their knowledge prior to the event?
Finally, this work is not easy. I'm far from an expert, and I try to purchase as many books and attend as many conferences and webinars as possible on the subject of student affairs assessment. I think if you really think through your learning outcomes and develop some solid open-ended questions, you will begin the journey of seeing and gathering all the amazing stories of your students' learning.
Want to learn more? Check out my Slideshare below…
This post is part of our #SAassess series on the importance of assessment in student affairs as a state of mind. A variety of knowledgeable and relatable perspectives will be shared throughout the month of November. We hope you will gain inspiring insights and take time to reflect on how you make meaning of your data collection and assessment practices. For more information, check out the intro post by Kim Irland. Be sure to read the other posts in this series too!
> BONUS <
Podcast With Kedrick Nicholas on Assessment of Student Programming