This is a speech given by student representative Esa Tiusanen, Board Member of HYY (the Student Union of the University of Helsinki), at the discussion and information seminar ‘Audit of the University of Helsinki Quality System’ on the City Centre campus on 10 September 2014.
Thank you for this opportunity to speak to you on an issue that entails nearly all aspects of the university and affects its operations at nearly every conceivable level. Earlier this year I gave a short presentation concerning the Audit. One of the main goals of that speech was to discuss the English terminology for quality management. Sometime after my presentation on that scintillating topic, I promised Aimo and Helena that this presentation would not be quite as – radical. Despite that promise, I thought it appropriate to use that presentation as a sort of starting point for my presentation today.
In that presentation I showed the lecture hall, filled with representatives of various levels of the university administration, a video about a small boy who had been given a box by his parents. The boy was told that the box contained the source of happiness, but that he wasn’t allowed to open the box to see what was inside, or else he would lose the source of happiness. I tried – perhaps unsuccessfully – to illustrate the central concepts of the audit process by placing “QUALITY” inside the box, instead of happiness.
In the original video the boy’s uncle eventually convinces the boy to study the box using scales, a stethoscope and an X-ray device. In the video the box turned out to be empty, but for discussing the Audit process, we can assume there is something in the box. The thing to pay attention to is that depending on WHAT is in the box, the various ways of analyzing the contents will give very different results. If the object inside is, for example, a skeleton from the biology department, it can be clearly detected by the X-ray or the scales, but it would be very difficult to discern what is in the box if all you had was a stethoscope. In a different scenario, in which the box is lined with a lead casing, the box will look the same on an X-ray regardless of the music box inside, while the scales or the stethoscope will make it more than evident that the box is not empty.
On top of this, we should consider whether we are content with knowing that the box is not empty, or whether we should attempt to find out WHAT is inside it. And how strict a definition do we wish to make? We can draw more and more specific conclusions about the contents by combining various instruments – although this obviously adds to the costs and effort needed in the process. But if the box is filled with something as intangible as quality – how do you know which tools are the correct ones? Is quality heavy? Is it noisy? Is it impenetrable to X-rays? Or is it, perhaps, all these things? These are truly fundamental questions for the audit process.
Moving a bit closer to the real world, I would like to discuss the didactic triangle as an illustration of the problems of measuring quality – and thereby of the necessary processes of evaluation. The didactic triangle divides the classroom situation into three actors and three interactions between them. The actors are the teacher, the student, and the content. The interactions are teaching, learning and training – the last being the relationship between the teacher and the student. In assessing the functioning of any given situation, we must acknowledge that we need information on all elements of the triangle.
If we measure the student and notice increased access to the content, we have no way of knowing whether the learning was caused by the teacher, his teaching, or the training provided – or whether the student decided to learn on his own. We need information on all (or at least most) of the aspects of the triangle to be able to make even educated guesses about the causal relationships. This observation, so integral to all science, is often neglected when discussing administrative functions at universities. As a real-life example of this problem, consider the relative abundance of information available on the speed of studies at the university, compared with the relative difficulty of finding solid information on the different ways the various departments provide guidance to students. Teaching and learning are being measured, but training is, to some extent, being neglected.
The new Kandipalaute and LEARN surveys promise to bring significant advancements over the previously patchy information on the learning processes of students. The surveys better account for teaching, learning, and training. The introduction of these new surveys is in fact an extremely good thing, as in Kandipalaute we have already noticed that, on aggregate, the worst results are found in guidance, in interaction between students and faculty, and in the students’ opportunities to influence their surroundings.
Assessing those opportunities to influence things is indeed a major challenge for the audit. In assessing the process used to ensure quality, it is very hard to view the actual content at the same time. We have a wide network of student representatives at various levels of the administration, and they are chosen through relatively clear processes. But what is the actual weight given to their efforts? Is representation in one governing body proof that issues are not actually decided someplace else? In addition to the results of Kandipalaute, a study by OTUS (the research foundation of studies and education) this year has shown that there are real concerns about the actual opportunities for students – and even student representatives – to affect the decisions of the university.
The existence of these various surveys and other forms of feedback is a great thing, but I would like to raise the issue of the degree to which students actually want to take part in them. Low response rates have been an issue with Kandipalaute as well as with other forms of feedback. This is, in my opinion, a major issue. And indeed, I think I find the solution once again in the didactic triangle: students need to be helped to find the motivation to respond! Students need to feel that they get something from participating. One way of achieving this – as I have personally noticed in working with the survey – is telling the student that responding to the survey will bring more money for the university. I find this motivation to be severely flawed, and it is a relief to notice that studies have shown such direct benefits are not major factors in why students want to take part in decision-making processes.
So what is? How do we motivate students into participating – or at least answering surveys? By illustrating to them how their actions affect the reality around them. The example of financial benefits for the university is, as I said, in my opinion a bad motivation – but it works! Cause – and effect. I will give another example, although it, too, is flawed.
At another Finnish university – luckily not this one – a teacher consistently received good reviews of his classes. His course feedback had high response rates. He received good reviews, in particular, for his responsiveness to the feedback. When certain issues with his courses were pointed out to him, he altered his methods. Eventually, though, he ran into a problem. It turned out that he didn’t so much consider the comments on his classes as guide them. He had prepared two alternative ways of holding his class, and switched between them when the feedback turned sour. Students were only at the university long enough to notice one cycle. While I cannot describe the actions of the teacher as exemplary, I want to focus your attention on one thing: students appreciated the fact that their voices had been heard, and as a result were more willing to share their opinions on the class. Cause and effect were made clear, and that added motivation.
* * * * * *
To end my presentation, I have to admit that, in preparing for this speech, I had some difficulty answering the title question itself. “How does the audit improve the quality of studies?” It’s not in fact all that easy to think of the precise ways in which evaluating the quality system DIRECTLY translates into improvements in the real daily functions of the university. Interestingly, though, I was having problems coming to grips with the question from two completely different, even diametrically opposite perspectives.
On one hand, I kept thinking: “There’s really no way in which the audit doesn’t relate to the daily processes of studying at the university.” Helping the university gather better information on which to base its decisions can have a serious impact on anything from the use of our facilities, to the services provided for students, to the content and medium of what the university tries to communicate to its members. Improving feedback systems can directly improve lesson plans and help us locate problems to resolve. So really, everything the Audit can do will improve studies at the university. But I can’t really just say “in every way imaginable.” That wouldn’t be very informative, nor would it be entirely accurate.
And this is the second viewpoint I kept coming back to: “There’s really no direct way the Audit process will help improve the quality of studies.” This is a slightly radical insinuation, but I fear it is accurate – at least to some degree. The audit has no direct impact on the functioning of the university, unless the university decides to truly work hard to resolve the problems that the audit brings to light. To the credit of the administration at our university, they HAVE taken the process very seriously. And I have no doubt that the appropriate lessons from the Audit will be taken into account in the future, and that the University of Helsinki will continue working on these issues with the determination they deserve.