Tony Gurr

ASSESSING How We ASSESS Learners (Part Two)

In Assessment, Our Schools, Our Universities on 11/03/2011 at 5:17 am
Our lives in the world of education, whether we like it or not, are dominated by assessment. Indeed, it is this fact that has led commentators to suggest that “assessment is the sharp end of teaching and learning” (Race).

In Part One of this little “series”, I tried to emphasise the importance of principles or “guiding lights” for how we “run the business” of assessment in schools, colleges and universities. I genuinely believe that this type of approach is the best way to really make a difference – firstly, by helping us review our assumptions about assessment and then by allowing us to take a “perspective” on assessment. Taking a perspective is very different to simply “having a perspective” (bit like having an opinion but not doing much “with” the opinions we have) – I learned this from Alverno.


However, there is another way – a way that many people find useful. Less “philosophical” – as if that should be a problem for all us “thinking doers” in education.

This second approach is grounded on “problem-solving”.

The problem-solving approach, however, relies on the understandings we have about the nature of the problem itself – as my dad used to tell me, “Lad, you can’t fix it, till you know what’s up”!

The factors affecting the quality of assessment practices are woven so tightly together that they must first be teased apart before an effective strategy can be developed for learners, educators and staff.

So, what is the problem? Depends who you speak to…

What do STUDENTS say?

  • Assessment “overload”
  • Assessment is not related to what we do in class
  • Insufficient time to do assignments, projects and study
  • Too many assignments with the same deadline
  • Not enough information on criteria or marking schemes – we do not know what teachers want
  • Inadequate or superficial feedback
  • Teacher and institutional “obsession” with what’s on the test
  • Different teachers have different “expectations”
  • Too little choice and flexibility
  • Assessments are not authentic or based on real-life
  • Grades tell us very little – but we like it when we get high ones (!)
  • Some teachers just enjoy making things “hard”
  • Testing does not help us “learn”
  • Teachers do not always follow up on the learning from assessments

What do TEACHERS say?

  • Assessment “overload”
  • Marking “overload”
  • Institutional assessment is not related to what “we do in class”
  • Difficulty of assessing independent critical thinking, creativity, academic or life-skills as opposed to “subject content”
  • Insistence on reliability has resulted in curriculum areas that are inadequately represented in examinations and tests
  • Some students do very well in tests, others do better in other forms of assessment
  • Student “obsession” with what’s on the test


  • Assessment matrices are rarely, if ever, based on principles that flow from a specific vision for education (AAHE, 2006)
  • “Overuse” of certain modes of assessment (e.g. written tests, essays) – 90 percent of typical university degrees depend on unseen, time-constrained written examinations, and [instructor]-marked essays and/or reports (Race, 2002)
  • The vast majority of assessment tools in education still focus on declarative knowledge (“knowing that”) and frequently overlook procedural (“knowing how”), schematic (“knowing why”), and strategic knowledge (“knowing when certain knowledge applies, where it applies, and how it applies”). Even less attention is paid to personal, social, and civic abilities (Shavelson and Huang, 2003)
  • There remain many practical issues related to validity, reliability, transparency and “fitness-for-purpose”
  • Many quality issues in assessment come down to a “poverty of practice” among teaching communities (Black and Wiliam, 1998)
  • Many institutions have been charged with “abiding amateurishness” (Elton and Johnson, 2002) in the way they put together assessment tools
  • Many institutions still use “folkloric systems of equivalence” (e.g. a three-hour paper is “equivalent” to a 3000-word assignment)
  • Assessment tasks often distribute effort and “assessment burden” unevenly across a course (Gibbs & Simpson, 2004)
  • Examinations and terminal assignments are frequently critiqued for encouraging memorisation or surface approaches to learning (Ramsden, 2003)
  • Many educators (esp. in higher education) have never received formal training in curriculum and assessment design
  • The majority of traditional assessment tools rarely resemble the “tests” students will face after they graduate and even fewer help prepare them for a “career” of lifelong learning (Lombardi, 2008)
  • Student achievement is still frequently only measured “within courses” – with limited attention to cumulative learning outcomes (AAHE, 2006)
  • Change in the area of assessment practice has been surprisingly slow – many of the innovative approaches, and the critiques of traditionalism first posed in the late 1960s, have failed to bear fruit in any meaningful way across education.


That’s a pretty impressive set of “challenges” (don’t you just love that word – makes it all seem so harmless)!

For a change, I’m not going to say much – I think perhaps David Boud said it best:

Students can escape bad teaching; they can’t escape bad assessment.

Over to you…problem-solvers!


P.S: I actually missed “Fatmagül’ün Suçu Ne?” to write this one – but then I guess one “dizi” is pretty much the same as another!
