Recently there’s been a welcome move to promote the idea that teachers should become more involved in undertaking classroom-based educational research – something that All Change Please!, having been involved in a number of such initiatives over the years, fully supports, even if it’s not sure where the time or money will come from.
The current trending organisation in the field is probably researchED, somewhat worryingly established by a character well known in more progressive circles for the supposedly mythical 'myths' he is intent on challenging, and for his general lack of open-mindedness towards anything that isn't obviously 'traditional'. The emphasis sometimes seems to be more about working out what doesn't work than what might.
Anyway, presumably the result of all this research will be what seems to be the current holy grail: evidence. These days it is difficult to do anything new or possibly risky unless its success can be absolutely guaranteed by so-called 'evidence' that apparently proves once and for all that it will work for everyone, everywhere. There seems to be an unshakeable belief in the unarguable accuracy of even a single piece of evidence, despite the fact that such evidence is not the same thing as actual proof.
So how reliable, actually, is all this evidence, or 'findings' as it is sometimes called? Even supposedly objective scientific evidence has problems of reliability: a researcher doesn't have to disclose that, say, a particular drug company (or for that matter a global personalised educational resource organisation) is sponsoring their work, or that they are only drawing on a certain set of data because the other set doesn't happen to support their theory. Or that there might actually be some disagreement amongst the great and the good of statistics about how the data can reliably be interpreted. Or that they are only running certain tests because they don't have the budget to pay for the others. And of course more subjective evidence can be even less reliable when based on a handful of small-scale case studies from practice-based researchers, a few carefully selected interviews with 'experts in the field' and a questionnaire or two. Would you believe it – apparently 98.6% of all statistics are entirely fictitious?
Then there is the way in which the results are presented – usually as statistical data that is either difficult for the non-statistician to interpret or, more seductively, shown as a carefully edited, visually powerful infographic or multimedia PowerPoint in which the message has been suitably massaged to demonstrate what the researcher wants you to believe is true. This becomes even more believable when fronted by someone with 'celebrity' status within the community. Then, if the findings get repeated and referenced often enough, they somehow end up becoming an irrefutable 'fact'. It seems the proof of the pudding is in the presentation.
Let's take the example of Little Missy Morgan's recent and quite ludicrous statement that taking a week's holiday in term-time will mean a student does substantially less well in their GCSEs and fails to meet the so-called 'Gold' standard. She might have some rather unreliable evidence in the form of misleadingly analysed statistical data, but that does no more than suggest that what she says might be true. What she doesn't have, though, is any actual proof – the kind that involves a wide range of different types of convincing evidence and removes all reasonable doubt. The problem is that we have been conditioned by the media to accept isolated examples of evidence as absolute fact.
In terms of the results of educational research, given the extraordinary diversity of children, teachers, classrooms and schools, what works in one situation might well prove a complete disaster in another. And where the research aims to reinforce the notion that traditional, tired and detested teaching methods are universally best for everyone in every situation, the result is usually taken as a mandate to dismiss any need to do things differently. While the current oft-quoted data might initially seem to bust the myths that there might be such things as learning styles, effective group work, benefits in using IT, or worthwhile child-centred learning, the majority of teachers will tell you precisely the opposite, based simply on what they have observed and found to actually work for them and their students. Just because there's no established evidence to support such approaches doesn't mean they can't or don't work.
Meanwhile, research is not just about proving things right or wrong by defining repeatable events, but also about asking new questions and exploring new ideas – and that's exactly what's needed now in our outdated educational system. Let's hope the emerging educational research community focuses on the latter rather than trying to provide highly unreliable data that apparently proves that a particular political mindset, delivery methodology or commercial product is the one solution guaranteed to work for everyone.
And as for the reliability of the evidence of a student’s capability provided by GCSE and A level results…
Or the extent of the proof of the quality of a school’s performance found in an Ofsted report?
Image credit: Flickr/Jim Roberts modified by TS