DiCE Seminar: Logging Students: Understanding Learning One Click at a Time – Gregor Kennedy, Melbourne University [LiveBlog]

This afternoon I am attending a seminar from Gregor Kennedy, University of Melbourne, organised by the Digital Cultures and Education research group at the University of Edinburgh.

As usual this is a liveblog so please let me know if you see any typos, have corrections to suggest, etc. 

My background is in social psychology and I decided to change fields and move into educational technology. And when I started to make that change in direction… Well I was studying with my laptop but I love this New Yorker cover from 1997 which speaks to both technology and the many ways in which Academia doesn’t change.

I also do a lot of work on the environment, and the ways that technology effects change in the wider world, for instance the way that a library has gone from being about physical texts to a digital commons. And my work is around that user interface and mediation that occurs. And the first 15 years of my career were in medical technology, and the interfaces around it.

Now, the world of Digital Education is dominated by big platforms, from the early to mid-2000s: enterprise teaching and learning systems that provide, administer, etc. Platforms like Blackboard, Turnitin, Moodle, Echo. And we have tools like Twitter, blogging tools, YouTube, Facebook, Second Life also coming in. We also see those big game changers of Google and Wikipedia. And we have companies/tools like Smart Sparrow, which are small adaptive learning widgets with analytics built into them. And we see the new big providers – Coursera, EdX, FutureLearn – the mass teaching and learning platforms.

So, as educators we have these fantastic tools that enable us to track what students do. But we can also find ourselves in an Orwellian place, where that tracking happens all the time and can be problematic. But you can use all that VLE data in ways that really benefit education and learning. Part of that data enables us to see the digital footprints that students make in this space. And my group really looks at this issue of how we can track those footprints, and – crucially – how we can take something meaningful from them.

Two of the early influential theorists in this space are Tom Reeves and John Hedberg. Back in 2003 they wrote about the problematic nature of auditing student data trails, and the challenges of doing that. Meanwhile there have been other strands of work and other traditions, from the Intelligent Tutoring Systems of the 1970s onwards. But part of the reason I think Reeves and Hedberg didn’t think meaningful interpretation of interactions would be possible is because, at their most basic level, the data we get out of these systems is about behaviour, which is not directly related to cognition.

Now we have to be a bit careful about this… Some behavioural responses can be imbued with a notion of what a student is thinking, for instance free-text responses to a discussion list, or responses to multiple choice questions. But for much of what we collect, and much of what the contemporary learning analytics community is talking about, that cognition is absent. So that means we have to make assumptions about students’ intent, motivation, attitude…

Now, we have some examples of how that sort of assumption can go wrong and become problematic. For instance the Amazon recommendation system deals poorly with gifts or one-off interests. Similarly Microsoft’s Clippy often gets it wrong. So that distinction between behaviour and cognition is a major part of what I want to talk about today, and how we can take meaningful understanding from that data.

So I want to start with an example, the Cognition and Interaction project, which I work on with Barney Dalgarno (Charles Sturt University) and Sue Bennett (University of Wollongong). We created quite flat interactive learning objects that could work with learners who were put in an fMRI machine, so we could see brain activity. For this project we wanted to look at how learning design changed cognition.

So, we had an “Observation Program” – a page turning task with content screens and an introduction with background terminology. They saw changes in parameters being made. And an “Exploration Program” where students changed parameters themselves and engaged directly with the material. Both of these approaches were trialled with two examples: Global Warming and Blood Alcohol. Now which one would you expect to be more effective? Yup, Exploration. So we got the results through and we were pretty bummed out as there was very little difference between the two. But we did notice there was a great deal of variation in the test scores later on. And we were able to use this to classify students’ approaches:

  • Systematic Exploration – trying a variable, seeing the result. Trying another, etc…
  • Non-Systematic Exploration – changing stuff all over the place.
  • Observation group – observation

So we re-ran the analysis and found there was no difference between the Non-Systematic Exploration and the Observation group, but there was a difference between the Systematic Exploration and the other groups.
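
The talk didn’t go into how this classification was done in practice, but as a rough sketch of the idea, here is one way you might separate systematic from non-systematic exploration in a log of parameter changes – the event format, heuristic and threshold are my own illustrative assumptions, not the study’s actual method:

```python
from typing import Dict, List


def classify_approach(events: List[Dict]) -> str:
    """Classify one student's interaction log into one of the three
    categories above. Assumed event format:
    [{"action": "change_parameter" | "observe", "parameter": str}, ...]"""
    changes = [e for e in events if e["action"] == "change_parameter"]
    if not changes:
        return "Observation"

    # Illustrative heuristic: systematic explorers vary one parameter at a
    # time, so consecutive changes mostly touch the same parameter before
    # moving on to the next one.
    switches = sum(
        1 for prev, curr in zip(changes, changes[1:])
        if prev["parameter"] != curr["parameter"]
    )
    distinct = len({c["parameter"] for c in changes})
    if switches <= distinct:
        return "Systematic Exploration"
    return "Non-Systematic Exploration"


# Example: a learner who sweeps CO2 levels before touching temperature.
log = [
    {"action": "change_parameter", "parameter": "co2"},
    {"action": "change_parameter", "parameter": "co2"},
    {"action": "change_parameter", "parameter": "temperature"},
]
print(classify_approach(log))  # -> Systematic Exploration
```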

So, why is this interesting? Well firstly students do not do what they are supposed to do, or what we expect them to do. The intent that we have as designers and educators is not manifest in the way students engage in those tasks. And we do see this time and time again… the digital footprints that students leave show us how they fail to conform to the pedagogical intent of the online tasks we set for them. They don’t follow the script.

But we can find meaningful patterns of students’ behaviour using their digital footprints… interpreted through the lens of the learning design of the task. These patterns suggest different learning approaches and different learning outcomes…

Example 2: MOOCs & LARG

One thing we did when we set up our MOOCs was to set up the Learning Analytics Research Group (LARG), which brings people together from information technology, informatics, education, educational technology, etc. And this work is with members of that group.

So, I want to show you a small snapshot of this type of work. We have two MOOCs to compare here. Firstly Principles of Macroeconomics, a classic staged linear course, with timed release of content and assessment at the end. The other course is Discrete Optimization which is a bit more tricksy… All of the content is released at once and they can redo assessments as many times as they want. There is a linear suggested path but they are free to explore in their own way.

So, for these MOOCs we measured a bunch of stuff and I will focus on how frequently different types of students watched and revisited video lectures across each course. And we used State Transition diagrams. These state transitions illustrate the probability of people transitioning from State A to State B – the footfall or pathways they might take…
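
For anyone unfamiliar with the technique, a transition matrix of this kind can be estimated from event logs by counting observed transitions and normalising each row. The sketch below is a generic illustration of that counting step, with made-up state names – not the group’s actual pipeline:

```python
from collections import defaultdict


def transition_matrix(sequences):
    """Estimate P(next state | current state) from per-student state
    sequences, e.g. "watch_1" or "revisit_1" (state names are invented)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            counts[current][nxt] += 1
    # Normalise each row so the outgoing probabilities sum to 1.
    return {
        state: {nxt: n / sum(row.values()) for nxt, n in row.items()}
        for state, row in counts.items()
    }


# Two toy student trajectories through video-watching states.
students = [
    ["watch_1", "watch_2", "revisit_1", "watch_2"],
    ["watch_1", "watch_2", "watch_3"],
]
print(transition_matrix(students)["watch_2"])
# -> {'revisit_1': 0.5, 'watch_3': 0.5}
```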

We created these diagrams for both courses and for a number of different ways of participating: Browsed – did no assessment; Participated – did not do well; Participated – did OK; Participated – did well. And as outcomes improve, the likelihood of these state transitions increases. And the Discrete Optimisation MOOC saw a greater level of success.

So, again, we see patterns of engagement suggesting different learning strategies or approaches. But there is a directional challenge here – it is hard to know whether people who revisit material more do better… or whether those who do better revisit content more. And that’s a classic question in education: how do you address and help those without aptitude…

So, the first two examples show interesting fundamental education questions… 

Example 3: Surgical Skills Simulation 

I’ve been working on this since about 2006. And this is about a 3D immersive haptic system for e-surgery. Not only is the surgeon able to see and have the sensation of performing a real operation, but the probe being used gives physical feedback. This is used in surgical education. So we have taken a variety of metrics – 15 records of 48 metrics per second – which capture how they use the surgical tools, what they do, etc.

What we wanted to do was provide personalised feedback to surgical trainees, to emulate what a surgeon watching this procedure might say – rather than factual/binary type feedback. And that feedback comes in based on their digital trace in the system… As they show novice-like behaviour, feedback is provided in a staged way… But expert behaviour doesn’t trigger this, to avoid that Microsoft paperclip feedback type experience.
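
The specific metrics and trigger rules weren’t given in the talk, but purely as an illustration of the staged, rule-triggered feedback idea, a minimal sketch might look like this (the metric names, thresholds and messages are all invented):

```python
from dataclasses import dataclass


@dataclass
class MetricSample:
    # Invented stand-ins for the 48 per-second metrics the simulator records.
    drill_force: float         # force applied through the probe
    distance_to_target: float  # mm away from the intended drilling site


def staged_feedback(samples, force_limit=2.5, drift_limit=3.0):
    """Yield a feedback message only when the trace looks novice-like;
    expert-like traces trigger nothing, avoiding 'Clippy'-style noise.
    Thresholds and wording are illustrative, not the real system's."""
    warned_force = warned_drift = False
    for s in samples:
        if s.drill_force > force_limit and not warned_force:
            warned_force = True
            yield "You are pressing quite hard - try easing off the drill."
        if s.distance_to_target > drift_limit and not warned_drift:
            warned_drift = True
            yield "You are drifting away from the target region."


# Example trace: fine at first, then too much force on the third sample.
trace = [MetricSample(1.2, 0.5), MetricSample(1.4, 0.8), MetricSample(3.1, 0.6)]
for message in staged_feedback(trace):
    print(message)
```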

So, we trialled the approach with and without feedback. Both groups have similar patterns but the feedback has a definite impact. And the feedback from learners about that experience is pretty good.

So, can we take meaningful information from this data? Yes, it’s possible…

I started with these big buckets of data from VLEs etc… So I have four big ideas of how to make this useful…

1. Following footprints can help us understand how students approach learning tasks and the curriculum more broadly. Not so much whether they understand the concept or principle they are working on, or whether they got something correct or not… But more their learning and study strategies when faced with the various learning tasks online.

2. If we know how students approach those learning tasks and their study, it gives us insight into their cognitive and learning processes… which we can link to their learning outcomes. This method is a wonderful resource for educational research!

3. Knowing how students approach learning tasks is incredibly useful for teachers and educational designers. We can see in fine detail how the educational tasks we create and design are “working” with students – the issue of pedagogical intent, response, etc.

4. Knowing how students approach learning tasks is incredibly useful for designing interventions with students. Even in open and complex digital learning environments we can use students’ digital footprints as a basis for individualised feedback, and to advise students on the approaches they have adopted.

So, I think that gives you an idea about my take on learning analytics. There are ways we can use this in quite mundane ways but in educational research and working across disciplines we have the potential to really crack some of those big challenges in education.

Q&A

Q1) For the MOOC example… Was there any flipping of approaches for the different courses, or any A/B testing? Was there any difference in attainment and achievement?

A1) The idea of changing the curriculum design for one of those well established courses is pretty difficult so, no. In both courses we had fairly different cohorts – the macroeconomics course… We are now looking at A/B testing to see how potential confusion in videos compares with more straightforward “David Attenborough, this is the way of the world” type videos, so we will see what happens there.

Q2) What…

A2) There is some evidence that confusion can be a good thing – but there is productive and unproductive confusion. And having productive confusion as part of a pathway towards understanding… And we are getting students from other disciplines looking at very different courses (e.g. Arts students engaging with chemistry courses, say) to cause some deliberate confusion but with no impact on their current college courses.

Q3) On that issue of confusion… What about the impact of learning through mistakes, of not correcting a student and the impact that may have?

A3) A good question… You can have a false positive – providing feedback when we shouldn’t have – and a false negative – not providing feedback when we should have. With our system we captured our feedback and compared it with a real surgeon’s view on where they would/would not offer feedback. We had about 8% false positives and 12% false negatives. That’s reasonably good for a teaching exercise.
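
As a small worked illustration of how those error rates might be computed against an expert’s judgements – the data format and the choice of denominator here are my assumptions, not necessarily how the study calculated them:

```python
def feedback_error_rates(system_fired, expert_would_fire):
    """Compare the system's feedback decisions with an expert's judgement at
    the same moments. Both arguments are parallel lists of booleans (an
    assumed format), and rates are taken over all decisions."""
    pairs = list(zip(system_fired, expert_would_fire))
    false_pos = sum(1 for sys, exp in pairs if sys and not exp)
    false_neg = sum(1 for sys, exp in pairs if not sys and exp)
    return false_pos / len(pairs), false_neg / len(pairs)


# Toy data: 2 false positives and 3 false negatives in 25 decisions give
# the 8% / 12% figures mentioned above.
system = [True] * 10 + [False] * 15
expert = [True] * 8 + [False] * 4 + [True] * 3 + [False] * 10
print(feedback_error_rates(system, expert))  # -> (0.08, 0.12)
```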

Q4) How do your academic colleagues respond to this, as you are essentially buying into the neoliberal agenda about…

A4) It’s not a very common issue to come up – it’s surprising how little it comes up. So in terms of telling teachers what they already know – some people are disheartened by you providing empirical evidence of what they already know as experienced teachers. You have to handle that sensitively but many see that as reinforcement of their practice. In terms of replacing teachers… These are helper applications. The feedback side of things can only be done in a very small way compared to the richness of a human, and these tend to be more triage-like applications that form a small part of the wider curriculum. And often those systems are flagging up the need for a richer interaction or intervention.

Q5) Most students think that more time on task maps to more success… And your MOOC data seems to reinforce that… So what do you do in terms of sharing data with students, and especially students who are not doing as well?

A5) It’s not my research area but my colleague Linda does work on this and on dashboards. It is such a tricky area. There is so much around ethics, pastoral care, etc.

Students with high self-efficacy who are behind the pack will race to catch up and may exceed it. But students with low self-efficacy may drop back or drop out. There is educational psychology work in this area (see Carol Dykal’s work) but little on learning analytics.

But there is also the issue of the impact of showing an individual their performance compared to a group, to their cohort… Does that encourage the student to behave more like the pack, which may not be in their best interests? There is still a lot we don’t know about the impact of doing that.

Q6) We are doing some research here with students…

A6) We have a range of these small tasks and we ask them on every screen about how difficult the task is, and how confident they feel about it, and we track that along with other analytics. For some tasks confidence and confusion are very far apart – very confident and not confused at all – although that can mean you are resistant to learning. But for others each screen sees huge variation in confidence and confusion levels…

Q7) Given your comments about students not doing what they are expected to do… Do you think that could have an impact here? Like students in self-assessments ranking their own level of understanding as low, in order to game the system so they can show improvement later on.

A7) It’s a really good question. There isn’t a great motivation to lie – these tasks aren’t part of their studies, they get paid etc. And there isn’t a response test which would make that more likely. But in the low confusion, high confidence tasks… the feedback and discussion afterwards suggests that they are confused at times, and there is a disjoint. But if you do put a dashboard in front of students, they are very able to interpret their own behaviour… They are really focused on their own performance. They are quite reflective… And then my colleagues Linda and Paul ask what they will do and they’ll say “Oh, I’ll definitely borrow more books from the library, I’ll definitely download that video…” and six weeks later they are interviewed and there is no behaviour change… Perhaps not surprising that people don’t always do what they say they will… We see that in learning analytics too.

Q8) [Didn’t quite catch all of this but essentially about retention of students]

A8) We have tried changing the structure and assessment of one of our courses, versus the first run, because of our changed understanding of the analytics. And we have also looked at diagnostic assessment in the first three weeks of a course as a predictor for later performance. In that you see a classic “browsing in the book store” type behaviour. We are not concerned about them. But those who purchase the first assessment task, we can see they can do well and are able to… And they tend to stick with the course. But we see another type – a competent crowd who engage early on, but fall off. It’s those ones that we are interested in and who are ripe for retaining.
