I recently read this Inside Higher Education article on outcomes and (lack of) accountability and immediately knew I had to formalize my thoughts for this blog. The article delves into a realm I like to call data negligence. My technical definition: the failure to reasonably use data, resulting in harm to an institution or stalled progress for it. Practically speaking, this means being unaware of data, ignoring it, or failing to act on it. While I’d love to dive deeper here (future dissertation note to self), let’s focus on what stood out from the IHE article.
The article underscores a lack of clarity about what “student outcomes” actually means. What are we talking about here? Specific goals in the classroom or tied to services and programs? Broad university goals such as retention, graduation, or job placement? Some, all, or none of these?
It’s important to define the term, since the definition determines the appropriate audience(s) and stakeholder(s). Skipping that definitional work invites vague or naive statements, priorities, and criticisms.
In addition to the ambiguity around outcomes, the top survey response was that “everyone” is responsible for student outcomes. While I appreciate this idealistic mentality, I cannot recall an institution where it is the reality. Rarely is everyone on campus actively involved in outcomes, or even aware of how their actions relate to them.
Once you define student outcomes, you can identify how key players (the roles within “everyone”) are responsible, and could be held accountable, for specific outcomes or parts of student success.
Everyone is busy, though. “The top organizational barrier preventing colleges from improving student outcomes, according to 63% of respondents, is initiative fatigue — that they simply have too many pilots and projects going on to focus.” First of all, what are these pilots and projects for, if not improving student outcomes? And what could be more important than improving student outcomes?
It’s scary how many pilots and projects are created without supporting data about student needs or institutional trends, or worse, without clear differentiation from existing services.
Before launching something new, be clear about what need or outcome it serves. Confirm whether you need something new at all by checking relevant data from existing or complementary services. If the pilot or program is still justified, reinforce or clarify its contribution to student success. If you can’t do those things, you should probably table the effort, or at least not divert resources from outcome-based initiatives.
To prevent this madness, reflect on these questions:
1) What do “student outcomes” mean or look like to you and your institution? Which outcomes matter most or are priorities for you, your area, or your institution? Identify and prioritize them if you haven’t already.
2) For the outcomes identified in #1, who is responsible for what? Are roles and their alignment to outcomes clear? Is this information clearly and widely understood across the institution?
3) What are you doing to improve these outcomes? Are any pilots or projects being launched or developed that divert resources from outcome-improvement efforts? What can you do to re-invest in improving outcomes?
> BONUS <
Podcast With Kedrick Nicholas on Assessment of Student Programming