In today’s higher education environment, the question of how to assess the value of what we as professors do should engage us all. Within this context, Donald R. Bacon and Kim A. Stewart’s (2016) essay, “Why Assessment Will Never Work at Many Business Schools,” is a laudable effort with important insights. Indeed, I am for the most part in substantial agreement with the authors’ analysis as far as it goes. My belief, though, is that the assessment picture is more complex around the edges than Bacon and Stewart describe. There is complexity at both the micro level—that of the individual instructor—and at the macro level—that of our collective ability as business professors to articulate measurable learning goals. While it is tempting to assume otherwise, this complexity needs to be an ever-present part of our assessment discourse. Bacon and Stewart’s (2016) thesis is that business pedagogical research is often statistically problematic, mainly because of the frequent use of small student samples. Along with small sample sizes, they identify numerous practical issues such as low reliability, variable effect sizes, and impractically long learning cycles. Their proposed solution is to turn to the discipline of
               