Monday, 1 November 2010

Measuring stuff

Tania and I get talking about all sorts of things as we wander off for coffee at our local (La Montagne at the Winmalee shops), and this time the topic of "measuring" stuff came up, especially in relation to the NSW Higher School Certificate (HSC), though it actually relates to lots of things.
I believe that as a society we seem to value content knowledge more than we value understanding for application, i.e. we value 'knowing about' rather than 'understanding what and how'. I don’t think we are curious enough about things, possibly because there is so much choice about everything. It is beyond us to understand everything, so we choose not to. But is this the reason HSC exams only test content knowledge? I don’t think so – I believe we sacrifice the gifts of significance, relevance and complexity to the gods of the easily measured.
What are the consequences of only knowing and never really understanding? What about:
- Computers – we know about using them for some things, but because we don't really understand them, we only use a fraction of their capacity or potential. When they don’t function as we expect them to, we don’t know why and most often resort to turning them off and hoping they work when we turn them on again. The IT Crowd on the ABC has got that right!
- Phones – we use a phone and just want it to work, and often don’t understand all of the choices and fine print in the plans, which gouge us for more money than we really need to pay, e.g. when our mobile roams onto the tower of another company we are charged more for data downloads.
- Cars – I am continually struck by the number of new cars I see broken down by the side of the road. We just want to drive them, but don’t understand that we still need to check the basics (tyre pressure, service intervals, water and oil levels, etc.) to keep them functioning.
- Testing – we set exams about content knowledge because it is easier to mark validly and reliably, i.e. students either know it or they don’t according to the marking criteria. But what is the purpose of the test: to gain an understanding of what the student has learnt over the duration of a course, or to ask them things that can be marked in a valid and reliable fashion?
All testing is exactly the same – the test writer asks questions and expects certain answers, and these answers are articulated in the ‘marking criteria’. The most valid way to do this is to ask questions that have only one answer – this makes it easy to give a mark out of 100, for example. That’s fine if the purpose is to find out how much someone knows about something, e.g.:
1. 2 + 2 = 4
2. Apply Bernoulli’s formula to this situation…
3. Write a creative essay about the setting of Hamlet…
The types of questions that lend themselves to being ‘right’ or ‘wrong’ are not really questioning for understanding and application, but only really testing for pre-authorised and accepted knowledge. Even the ones that seem to be asking for a bit more than content regurgitation (question 3 above) aren’t really, because they are still marked by criteria which outline the content of an expected answer – hence what is right and what is wrong. What about the unexpected? Is it wrong?
Rachel Ward once said that she didn’t consider herself dumb at school (though she apparently did poorly) – it’s just that no one ever asked her about what she was good at. And, I might add, no one would have valued it even if she had been asked.
We hear in the media that curiosity, creativity, imagination and innovation are the qualities most highly prized by business in the 21st century. It seems to me that those aspects of a person have always been highly prized – some of the most interesting people you can talk to tend to be those who see ‘B’ while everyone else is seeing ‘A’.
I can’t see how the HSC (or NAPLAN or any other school testing exercise) helps measure curiosity, creativity, imagination or innovation. These are seemingly too hard to measure, so we sacrifice them to the god of validity and the more easily measured – a test which is handwritten, paper-based, completed alone, takes three hours, with no feedback supplied except a moderated total score sent in the mail some weeks later. Does this make a student more curious or creative?