The Learning Analytics Roadmap: Do All Roads Lead To Forecasting? [LAR Series #7]

Over the course of my professional experience I have witnessed the many assumptions professionals tend to make about the use and value of statistics in their operations. For a regrettably long time, my own work fell victim to a common one: that forecasting was the sole and ultimate purpose of data-based work.

I was biased towards forecasting. Rather than using facts to think critically about a given issue, I turned every analysis effort into a sports-like race in which models boasting high significance and low error rates were always preferred. It did not matter if they were at odds with the predominant theory, or if they led to crimes of overfitting, which, I joke, is what happens when your model “understands too much”. Ironically enough, overfitting can quickly render a forecasting model useless as soon as a new, never-before-seen eventuality comes up.
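A minimal sketch of that trap, using only NumPy and made-up data of my own (study hours versus exam scores, not anything from my actual work): the flexible model “understands” the training points almost perfectly, then falls apart on data it has never seen.

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_scores(n):
    """Hypothetical data: study hours vs. exam score, with noise."""
    hours = rng.uniform(0, 10, n)
    score = 50 + 4 * hours + rng.normal(0, 5, n)
    return hours, score

train_x, train_y = noisy_scores(15)
test_x, test_y = noisy_scores(100)   # the "never-before-seen eventuality"

def rmse(coeffs, x, y):
    """Root mean squared error of a fitted polynomial on (x, y)."""
    return np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))

simple = np.polyfit(train_x, train_y, deg=1)    # matches the true linear trend
flexible = np.polyfit(train_x, train_y, deg=9)  # chases every wiggle of the noise

print(f"deg=1  train RMSE {rmse(simple, train_x, train_y):5.2f} | test RMSE {rmse(simple, test_x, test_y):5.2f}")
print(f"deg=9  train RMSE {rmse(flexible, train_x, train_y):5.2f} | test RMSE {rmse(flexible, test_x, test_y):5.2f}")
```

The ninth-degree fit wins the “race” on the training sample and loses it badly on fresh data, which is the whole point.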

To be fair, theory is not always right: just the vast majority of the time. And forecasting is a practice of great value. In learning, forecasting is palpable at every turn. Each time you wonder whether you are making the right choice for your students, whether a video would make the content more relatable, or whether it is better to give more or less homework, you are summoning the dark powers of a forecast.

The problems that appear in learning are statistically unique. Add to that the fact that, despite the many advances in science and technology, the box is still pretty black. In learning, we cannot be content with a vague idea of what keeps most students engaged. Unlike a diversified portfolio of stocks, where it is OK if some investments fail as long as the average yield is positive, a learning intervention must be beneficial beyond its cost for each of the assets, i.e. the students.
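To make the portfolio analogy concrete, here is a toy illustration with entirely made-up numbers: an intervention whose average effect is positive can still leave individual students worse off, which is exactly what a per-student view is meant to catch.

```python
# Hypothetical per-student effects of an intervention, in arbitrary benefit units.
effects = {"Ana": 8.0, "Ben": 5.5, "Chen": 6.0, "Dara": -2.0, "Eli": -1.5}
cost_per_student = 1.0  # e.g. extra workload, in the same units

# The "portfolio" view: average net benefit across all students.
average_net = sum(effects.values()) / len(effects) - cost_per_student

# The learning view: every individual student must come out ahead.
worse_off = [name for name, effect in effects.items() if effect - cost_per_student < 0]

print(f"Average net benefit: {average_net:+.2f}")   # positive, so the portfolio is happy
print(f"Students left worse off: {worse_off}")      # non-empty, so the intervention is not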

You might ask: what made forecasting step back, in my mind and in my practice? In one word: risk. In two, risk management.

Moodlerooms White Paper on Engagement for Long-Term Success

Learning Analytics and Research Director John Whitmer shares his thoughts on the relationship between engagement and what he describes as “long-term success” in a Blackboard white paper. Whitmer’s concerns are common ones:

How can we predict learner risk with enough advance notice to change outcomes?

What can we do to identify course materials and activities that are particularly effective?

How can we make sure that learners are engaged?

In the white paper, Whitmer frames the discussion around the role of a Learning Management System (LMS) as a reservoir of learning history. Applications within an LMS can use that reservoir to track activity, identify patterns and inform decisions about both content and design. Visual reporting is close to becoming a standard, if it has not already: as Whitmer claims, “visual representations that are quickly understandable and easy to access are essential”. The white paper recommends that teachers adopt a hands-on approach to the data provided by analytics tools. From a “risk management” perspective, the best tools are not those that merely produce an expected final GPA, but those that suggest corrective mechanisms on an individual basis, in a way that is “usable: detailed, contextual, and [that] reflects the immediate learning context” in “real-time”.
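The sketch below is my own hypothetical rendering of that idea, not the white paper’s tool nor any actual Moodle or Blackboard API: instead of a single predicted grade, each student gets an individual flag plus a suggested corrective action derived from a few simple, contextual engagement signals. The signal names and thresholds are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    """Hypothetical weekly LMS signals for one learner."""
    name: str
    logins: int             # sessions this week
    submissions_missed: int
    forum_posts: int

def suggest_action(e: Engagement) -> str:
    """Return an individual, corrective suggestion rather than a grade forecast.
    Thresholds are illustrative, not taken from the white paper."""
    if e.submissions_missed >= 2:
        return f"{e.name}: reach out directly; two or more assignments missing."
    if e.logins == 0:
        return f"{e.name}: send a re-engagement nudge; no LMS activity this week."
    if e.forum_posts == 0:
        return f"{e.name}: invite to the discussion; working, but silent."
    return f"{e.name}: on track; no intervention needed."

week = [
    Engagement("Ana", logins=5, submissions_missed=0, forum_posts=3),
    Engagement("Ben", logins=0, submissions_missed=1, forum_posts=0),
    Engagement("Chen", logins=4, submissions_missed=2, forum_posts=1),
]

for student in week:
    print(suggest_action(student))
```

Even in this toy form, the output is per student and actionable, which is closer to the “usable, detailed, contextual” reporting the white paper argues for than a single course-wide forecast would be.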

I am only starting to understand how to harness the power of risk management in learning. Combined with an LMS like Moodle, it can allow us to talk about truly personalized interventions. That means interventions based not only on a student’s characteristics, but on their personal path of discoveries and pitfalls. The best assumption I can make explicit today is that no already defined rubric will be the outright best fit for a given new learner.

One thing is clear: a data thinker must have knowledge of both forecasting and risk assessment techniques, as well as the wisdom to know when to use which.

To access the white paper, click on the link on this page and fill out the survey.

Follow our series on The Learning Analytics Roadmap.