An unfortunate pairing occurs all too often across LMS courses: indicators too basic to gauge the quality of an online intervention, and too much trust placed in them as the basis for instructional design or, worse, policy-making. Developers and promoters of educational technologies, Moodle included, would naturally like their apps to be used more. But without a sense of what constitutes deliberate LMS-based practice, we risk assuming that more screen time is always better.
Blackboard’s Timothy Harfield, drawing on John Whitmer’s data, adds some color to the debate. Even though Moodle and similarly featured LMS track student behavior and performance at a level of detail never seen before, educators still struggle to answer what students actually do online and how they can make the best use of the tools the LMS offers. The problem confounds even world-class teachers.
For Harfield, a straightforward way to tackle the problem is to measure use against results. We can account for the fact that not all time spent online is equal: the effectiveness of one more hour of LMS activity peaks at some point, after which it begins to decline, and the ideal amount of time we want students to spend in front of the screen is highly context-dependent. As Harfield frames it, the answer varies with the level of “LMS maturity” of an organization.
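The peak-then-decline idea can be made concrete with a toy model. This is not Harfield’s or Whitmer’s actual model; the concave effect curve and its coefficients below are invented purely to illustrate the shape of the claim.

```python
# Hypothetical diminishing-returns model of LMS time.
# effect(h) = a*h - b*h^2 is concave: each extra hour helps less,
# and past the peak an extra hour actually hurts.

def effect(hours, a=2.0, b=0.25):
    """Invented learning-effect score for `hours` of LMS activity."""
    return a * hours - b * hours ** 2

def peak_hours(a=2.0, b=0.25):
    """Hours at which the marginal effect of one more hour hits zero."""
    return a / (2 * b)

print(peak_hours())                        # 4.0 under these made-up coefficients
print(effect(3) < effect(4) > effect(6))   # True: more time is not always better
```

The only point of the sketch is that the optimum is an interior value, not “as much as possible”; where that optimum sits would depend on context, which is exactly Harfield’s argument.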
He points to Dr. Whitmer’s active and ongoing research, which (thanks to Blackboard’s ample customer base) draws on one of the most robust data sets on this topic. The research categorizes the diversity of LMS use cases into five “archetypes,” which roughly capture the depth with which organizations leverage LMS features. It then measures how each archetype fares on a debatably basic indicator: GPA.
As a preliminary finding, there seems to be no significant relationship between GPA and archetype, which might be even more confounding. Why, then, should we think about LMS use in archetypes at all? The lesson, while preliminary, could be a vote of confidence for some educators, particularly those resisting marketing or top-down pressure to “get their money’s worth” by keeping students in front of the LMS longer than necessary. It might be time to turn to LMS data and build custom indicators that better reflect each organization’s unique LMS usage pattern.
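The kind of comparison behind that finding can be sketched in a few lines: group GPA by archetype and look at how far the group means diverge. The archetype labels and GPA figures below are invented for illustration only; they are not Whitmer’s data or his archetype names.

```python
# Sketch of an archetype-vs-GPA comparison on hypothetical data.
from statistics import mean

gpa_by_archetype = {  # invented sample, not Whitmer's data
    "archetype_1": [3.1, 2.9, 3.3, 3.0],
    "archetype_2": [3.0, 3.2, 2.8, 3.1],
    "archetype_3": [3.2, 3.0, 2.9, 3.1],
    "archetype_4": [2.9, 3.1, 3.2, 3.0],
    "archetype_5": [3.0, 3.0, 3.1, 3.1],
}

# Mean GPA per archetype, and the gap between the best and worst group.
means = {name: mean(scores) for name, scores in gpa_by_archetype.items()}
spread = max(means.values()) - min(means.values())

for name, m in means.items():
    print(f"{name}: {m:.3f}")
print(f"spread across archetypes: {spread:.2f}")
```

A real analysis would of course use a proper significance test rather than eyeballing the spread, but a tiny gap between group means is the shape of result the article describes: archetype alone does not separate GPA outcomes.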