Teacher Self-Reflection Tools: a double-edged sword

I have been pleasantly surprised by the work the Australian Institute for Teaching and School Leadership (AITSL) has undertaken to develop a number of seemingly high-quality, well-tested and useful self-reflection and learning tools for teachers, supporting AITSL’s core work of building the capacity of teachers and school leaders. For example, the 360 student feedback tools,[1] the teacher standards illustrations of practice[2] and the teacher self-assessment tools[3] all have real potential to be useful for teachers who are taking responsibility for their own learning and development in schools that support and encourage collaboration, mentoring and peer support.

In fact, I would like to suggest that the work of AITSL has the potential to be a very important counterpoint to all the US-borrowed corporate reforms represented by NAPLAN, performance pay and all the rest.

But to be effective, the work of AITSL needs to be able to stand apart from all the less worthy reforms.  The self-reflective tools are a very good example of these challenges. If they can be kept apart from the evaluation and performance management tendencies of corporate reform, and quarantined for use by teachers and their schools for authentic professional learning, they have the potential to be very significant tools for building collective teacher capacity.

If, however, they are captured for use as part of the new performance management practices that are being imposed on teachers, all the wonderful work involved in developing them will go down the toilet.

Anthony Cody talks about these same tensions in the US context.  In a recent blog post[4] he responds to a Bill Gates TEDx talk on the value of videos in classrooms. According to Cody, Bill Gates’ rationale for promoting video cameras in schools goes as follows:

… there’s one group of people that get almost no systematic feedback to help them do their jobs better. Until recently, 98% of teachers just got one word of feedback: “satisfactory.” Today, districts are revamping the way they evaluate teachers. But we still give them almost no feedback that actually helps them improve their practice. Our teachers deserve better. The system we have today isn’t fair to them. It’s not fair to students, and it’s putting America’s global leadership at risk.

Cody notes that Gates slides from feedback to evaluation without pause as though they are one and the same.

Do you notice something? He starts out talking about feedback, but then slides into describing a formal evaluation process. There are LOTS of ways to enhance feedback that could have nothing at all to do with our evaluation systems ….

They are not.  There is a world of difference between:

  • Professional learning: teachers working together, observing each other’s practice; using tools that give them information about their practice to use as they see fit; reflecting on their practice alone or in teams; trialling changes; reflecting; and giving mutual feedback; and
  • Performance review: external parties applying standards to an assessment of practice.

The problem is that as soon as a tool is captured for the second purpose – performance review – teachers become less likely to trust it and see it as useful.

But this slide happens all the time.  And we are in danger of this happening with the tools developed by AITSL.  This is because we are focusing on the wrong things.  The Commonwealth Government tells us that what we need is a national best-practice performance management framework and high-quality tools.

Linda Darling-Hammond, on the other hand, argues that it is not a good framework that is lacking.  Rather, what we lack is time – time in schools for teachers to collaborate and to work with others to reflect on their practice – and a culture where this is expected, not as a fearful evaluation process but as an integral part of professional development.

As I see it, the work of AITSL could go either way, and I just hope that it is possible to corral some of the best of their work and make sure it is not captured to serve the performativity agenda. For as Anthony Cody says:

Right beneath the surface are these seeds of possibility, waiting for the right conditions to come about. You take an area, a school, a district, you change the conditions, give people a different sense of possibility, a different set of expectations, a broader range of opportunities, you cherish and value the relationships between teachers and learners, you offer people discretion to be creative and to innovate in what they do, and schools that were once bereft spring to life.


A vision for a new unified and credible approach to school assessment in Australia


I was only partly surprised to read in the Adelaide Advertiser[1] that Geoff Masters, CEO of the Australian Council for Educational Research (ACER) has called for the scrapping of the A-E grading system and replacing it with NAPLAN growth information.

To be blunt, I regard the A-E system as a nonsense cooked up by the previous Coalition Government and imposed on all states as a condition of funding.  It has never meant much and the different approaches to curriculum taken by the different state systems made its reporting even more confusing.

With the introduction of the Australian National Curriculum, the A-E grading system may be applied more consistently across states, but its meaning is often still confusing and unhelpful.  As Masters notes:

If a student gets a D one year and a D the next, then they might think they’re not making any progress at all when they are but the current reporting process doesn’t help them see it… [T]his could contribute to some students becoming disillusioned with the school system.

Abandoning this approach makes sense.  But the Advertiser article also implied that Masters is arguing that we should replace the A-E reporting with a NAPLAN gains process.  This to me was a complete surprise.

This is because I believe that would be a disaster and, more importantly, I am pretty sure that Masters would also see the limitations of such an approach.

At the 2010 Australian Parliamentary Inquiry into the Administration and Reporting of NAPLAN, Geoff Masters spoke at length about the limitations of NAPLAN, covering the following:

  • Its limitation for students at the extremes because it is not multilevel
  • Its original purpose as a population measure and the potential reliability and validity problems with using it at school, classroom and individual student level
  • Its limited diagnostic power – because of the narrow range of testing and the multiple choice format

He also acknowledged the potential dangers of teachers teaching to the test and the narrowing of the curriculum.  (Unfortunately there appears to be a problem with the APH website and I was unable to reference this, but I have located a summary of the ACER position.[2])

Now these are not minor problems.

I was also surprised because it is unlikely that the CEO of ACER would pass up an opportunity to talk about the benefits of diagnostic and formative assessments. After all, such tests are important for ACER’s revenue stream.

So what is going on here?

To investigate, I decided to look beyond the Advertiser article and track down the publication that Masters was speaking to at the conference. It’s a new publication launched yesterday called Reforming Educational Assessment: Imperatives, principles and challenges.[3]

And lo and behold, the Advertiser’s Sheradyn Holderhead got it wrong.  What Masters is arguing for is anything but the swapping of one poorly informed reporting system (A to E reporting) for a flawed one (NAPLAN).  He is mapping out a whole new approach to assessment that builds on our best understandings of assessment and learning while also meeting the “performativity”[4] needs of politicians and administrators.

Now some will object to the compromise taken here because they see “performativity” as a problem in and of itself.  At one level I agree but because I also look for solutions that are politically doable I tend to take a more pragmatic position.

This is because I see the reporting of NAPLAN through MySchool as a kind of one-way reform – a bit like the privatisation of public utilities.  Once such a system has been developed it is almost impossible to reverse the process.  The genie cannot be put back into the bottle.  So to me, the only solution is to build a more credible system – one that is less stressful for students, less negative for lagging students, more helpful for teachers, less likely to lead to a narrowing of the curriculum through teaching to the test, and less prone to be used as a basis for school league tables.

And my take on Masters’ article is that, if taken seriously, his map for a new assessment system has the potential to provide the design features for a whole new approach to assessment – one that doesn’t require the complete overthrow of the school transparency agenda to be effective.

Here are some of the most significant points made by Masters on student assessment:

Assessment is at the core of effective teaching

Assessment plays an essential role in clarifying starting points for action. This is a feature of professional work in all fields. Professionals such as architects, engineers, psychologists and medical practitioners do not commence action without first gathering evidence about the situation confronting them. This data-gathering process often entails detailed investigation and testing. Solutions, interventions and treatments are then tailored to the presenting situation or problem, with a view to achieving a desired outcome. This feature of professional work distinguishes it from other kinds of work that require only the routine implementation of pre-prepared, one-size-fits-all solutions.

Similarly, effective teachers undertake assessments of where learners are in their learning before they start teaching. But for teachers, there are obvious practical challenges in identifying where each individual is in his or her learning, and in continually monitoring that student’s progress over time. Nevertheless, this is exactly what effective teaching requires.

Understandings derived from developments in the science of learning challenge long-held views about learning, and thus approaches to assessing and reporting learning.

These insights suggest that assessment systems need to:

  • Emphasise understanding where students are at, rather than judging performance
  • Provide information about where individuals are in their learning, what experiences and activities are likely to result in further learning, and what learning progress is being made over time
  • Give priority to the assessment of conceptual understandings, mental models and the ability to apply learning to real world situations
  • Provide timely feedback in a form that a) guides student action and builds confidence that further learning is possible and b) allows learners to understand where they are in their learning and so provide guidance on next steps
  • Focus the attention of schools and school systems on the development of broader life skills and attributes – not just subject specific content knowledge
  • Take account of the important role of attitudes and self-belief in successful learners

On this last point Masters goes on to say that:

Successful learners have strong beliefs in their own capacity to learn and a deep belief in the relationship between success and effort. They take a level of responsibility for their own learning (for example, identifying gaps in their knowledge and taking steps to address them) and monitor their own learning progress over time. The implications of these findings are that assessment processes must be designed to build and strengthen metacognitive skills. One of the most effective strategies for building learners’ self-confidence is to assist them to see the progress they are making.

…..  current approaches to assessment and reporting often do not do this. When students receive the same letter grade (for example, a grade of ‘B’) year after year, they are provided with little sense of the progress they are actually making. Worse, this practice can reinforce some students’ negative views of their learning capacity (for example, that they are a ‘D’ student).

Assessment is also vital for gauging how a system is progressing – whether a class, school, system, state or nation.

Assessment, in this sense, is used to guide policy decision-making, to measure the impact of interventions or treatments, or to identify problems or issues.

In educational debate these classroom-based and system-driven assessments are often seen as being in conflict, and their respective proponents as members of opposing ideological and educational camps.

But the most important argument in the paper is that we have the potential to overcome the polarised approach to assessments that is typical of current discussion about education; but only if we start with the premise that the CORE purpose of assessment is to understand where students are in their learning. Other assessment goals should be built on this core.

Once information is available about where a student is in his or her learning, that information can be interpreted in a variety of ways, including in terms of the kinds of knowledge, skills and understandings that the student now demonstrates (criterion- or standards-referencing); by reference to the performances of other students of the same age or year level (norm-referencing); by reference to the same student’s performance on some previous occasion; or by reference to a performance target or expectation that may have been set (for example, the standard expected of students by the end of Year 5). Once it is recognised that the fundamental purpose of assessment is to establish where students are in their learning (that is, what they know, understand and can do), many traditional assessment distinctions become unnecessary and unhelpful.

To this end, Masters proposes the adoption and implementation of a coherent assessment ‘system’ based on a set of five assessment design principles, as follows:

Principle 1: Assessments should be guided by, and address, an empirically based understanding of the relevant learning domain.

Principle 2: Assessment methods should be selected for their ability to provide useful information about where students are in their learning within the domain.

Principle 3: Responses to, or performances on, assessment tasks should be recorded using one or more task ‘rubrics’.

Principle 4: Available assessment evidence should be used to draw a conclusion about where learners are in their progress within the learning domain.

Principle 5: Feedback and reports of assessments should show where learners are in their learning at the time of assessment and, ideally, what progress they have made over time.

So, to return to the premise of the Advertiser article, Masters is not arguing for expanding the use of the current model of NAPLAN.  In fact, he is arguing for a reconceptualisation of assessment that:

  • starts with the goal of establishing where learners are in their learning within a learning domain; and
  • develops, on this basis, a new Learning Assessment System that is equally relevant in all educational assessment contexts, including classroom diagnostic assessments, international surveys, senior secondary assessments, national literacy and numeracy assessments, and higher education admissions testing.

As the Advertiser article demonstrates, this kind of argument is not amenable to easy headlines and quick sound bites.  Building support for moving in this direction will not be easy.

But the first step is to challenge the popular understanding that system-based assessment and ‘classroom-useful’ assessment are, and must necessarily be, at cross purposes, and to start articulating how a common approach could be possible.  Masters refers to this as the unifying principle:

….. it has become popular to refer to the ‘multiple purposes’ of assessment and to assume that these multiple purposes require quite different approaches and methods of assessment. …

This review paper has argued …. that assessments should be seen as having a single general purpose: to establish where learners are in their long-term progress within a domain of learning at the time of assessment. The purpose is not so much to judge as to understand. This unifying principle, which has potential benefits for learners, teachers and other educational decision-makers, can be applied to assessments at all levels of decision-making, from classrooms to cabinet rooms.

So if you are still not convinced that Masters is NOT arguing for replacing the A-E reporting with NAPLAN growth scores, this quote may help:

As long as assessment and reporting processes retain their focus on the mastery of traditional school subjects, this focus will continue to drive classroom teaching and learning. There is also growing recognition that traditional assessment methods, developed to judge student success on defined bodies of curriculum content, are inadequate for assessing and monitoring attributes and dispositions that develop incrementally over extended periods of time.


[4] This is a widely used term usually associated with the work of Stephen J. Ball. In simple terms it refers to our testing mania in schools and the culture and conceptual frameworks that support reform built around testing data.  To read more, this might be a useful starting point: http://www.scribd.com/doc/70287884/Ball-performativity-teachers

Carol’s Indigenous students had such an “awesome time” at school that their friends started coming to class too

In the post Aboriginal Engagement, teacher Carol Puskic from Geraldton Senior College in WA talks about using a very clever but simple tool which has transformed the energy and engagement of the students in her class – a class specifically for disengaged Indigenous students who come from Halls Creek, Port Hedland, Broome and beyond and board in Geraldton.  The tool is called ClassMovies.

I first stumbled across the ClassMovies project in July 2010 and was so excited by its potential as a tool that teachers could use in so many ways that I wrote about it.[1]  I introduced the article as follows:


We need new architecture to support the development and agile adoption of tools and processes for teacher self-managed career-long professional development in schools

I read a timely article yesterday titled “The Flipped Classroom: Students Assessing Teachers” by Brianna Crowley.[1]  It is not about the flipped classroom concept made famous by the Khan Academy; it is about another sort of flip – one where students provide feedback to teachers.

It was timely, to me at least, because I have been thinking a lot lately about the lack of ready access to a comprehensive and high quality set of well tested and reviewed smart tools, protocols and processes to support teachers to:

  • Identify their most important professional development needs
  • Affirm their areas of strength for sharing with others
  • Reflect on their practice through focused feedback
  • Work with mentors or coaches on continuous improvement
  • Develop portfolios that demonstrate their knowledge, skills and experience for assessment purposes – whether this is for moving from Graduate to Proficient or deciding to go for accreditation as a Highly Accomplished or Lead teacher

There are a number of ways in which teachers can, and do, get feedback on their teaching.  Instructional observation, peer-to-peer coaching, classroom walkthroughs, protocols for examining student work, learning journals and classroom videos are the most obvious, and none of these is yet fully embedded in the regular core practice of schools, although they are becoming more and more widely used.

But what about students providing feedback to teachers?

Now when I first thought about this I was a bit cynical, thinking that if this practice became commonplace (and high stakes) it would turn classrooms into a sort of marketplace as teachers tried to outdo each other in being the most entertaining. But of course it all depends on how the feedback process is designed – what information will be sought, to what purpose the information will be put, and how frequently it is sought.  In this sense the ‘politics’ of teacher feedback from students is no different from the ‘politics’ surrounding assessment or teacher feedback to students.

This article on the flipped classroom puts it well.

A homemade laminated sign behind my desk announces, “In this classroom, everyone is a teacher and everyone is a student.” For me, teaching is a fluid interaction of constantly shifting roles. My students and I are engaged in a cycle of mutual learning.

Effective teachers provide concrete feedback throughout the school year. Through formative assessments, students recognize their growth and understand where they can improve.

But what formative feedback do teachers receive? …  A lucky few experience regular peer observations—but most of us are observed only once or twice a year. We have all been encouraged to reflect on our own practice in journals, but it’s probably not a daily routine for most: Who can find the time between urgent activities like meetings, emails, grading, and planning? We rarely prioritize our own learning.

Crowley urges teachers to consider drawing on the experiences and perceptions of students – and to treat them as “experts” about the teaching and learning that takes place in the classroom.  She suggests that it does not necessarily have to be a formal survey process – feedback can be embedded in the teaching and learning process with only small adjustments to practice.

First, look at activities already in place and think about whether they can be altered to provide additional information.

For example, after each major project or writing assignment, my students complete a reflection form. They are prompted to think about their process, identify strengths and weaknesses, and create goals for future assignments. Then I add two or three questions that look something like this:

(1) Which activities helped you understand this assignment, and which were less valuable?

(2) What questions do you still have about what we learned or about the feedback I have given you?

(3) With what skills or ideas do you feel that you need more practice?

These questions prompt students to better understand themselves and articulate their learning styles. In providing constructive criticism, students practice higher-order thinking and communications skills. And the process helps all of us take ownership of the learning that occurs in our classroom.

It’s win-win: Students develop metacognition skills, and I gather valuable intel.

And how should this information be used? 

With professional discernment, argues Crowley.

If my students tell me they learn better by working in small groups with peers than independently, do I reconstruct my classroom for collaborative work in every lesson? Probably not. But I do consider how I can incorporate additional structured group work. Each member has a role and each group is accountable for a product. Then I monitor to see whether my students’ level of engagement and understanding increases.

Likewise, if 70 percent of my students claim that work in their textbook did not help them learn, I have a choice: Do I vow not to use the textbook for the rest of the year? Or do I try to use that resource in more relevant and engaging ways?

Embedded in every piece of student data is a professional choice. We must respect students’ perspectives while applying our professional discernment. We can then take risks, change patterns, and ask for feedback again.

There is also a role for well-designed formal survey instruments – especially at key points through the teaching cycle like the end of a semester or a year.

This article is US-based but it is highly relevant to where we are at in Australia. Now that we have an endorsed set of national professional standards for teachers, the development of exciting new tools, processes and instruments needs to be fostered.

Some states have some useful tools, as do a number of clever people in the ever-growing education consulting and ICT software development industries.  We need to find a balance point between a heavily regulated, state-endorsed tool development process – one that necessitates going to tender for something, when we may not always know in advance what smart idea could be just around the corner – and an open market that lets a hundred flowers bloom, not all of them fit for purpose.

We need a QA regulator that assesses new processes, tools and instruments and certifies those that have been road-tested in a range of schooling contexts, are aligned to the teaching standards framework, are value for money and are fit for purpose.  With a strong quality certification framework in place it would then be desirable and possible to encourage all kinds of smart tools and processes from a variety of sources.  After all, until Twitter came along, teachers and systems would not have said ‘if only we had a tool that lets children do … We need to go out to tender to see who can develop this for us’.  Those days of product development are long over, but new processes are not yet in place to enable the agile adoption and adaptation of new ideas and processes.

I think this is a big gap in our national school education architecture.  Now some might suggest that this is the role of Education Services Australia (ESA), but I am not so sure.  Can an organisation be both a developer of products and an assessor? No, not in my book.

Others might consider this to be in scope for the Australian Institute for Teaching and School Leadership (AITSL) but to my mind this is a very bad idea.  These tools should not be assessed and certified by an organisation that, while engaging the profession, is very much an organisation driven by education employers and their perspectives on teacher quality.

Now don’t misunderstand this as a dig at AITSL.  The fact that AITSL reports to MCEECDYA and has all state and non-government systems represented on its board has been essential to the agreement-making process for accreditation standards and processes for teacher education, as well as for professional teaching standards.

However, if these tools first come on stream as part of the standards assessment process, they will be seen as impositions – as part of quality compliance and appraisal processes.

In my view, as the teaching profession gets accustomed to seeing feedback for continuous learning and self-directed improvement as an integral and regular element of teaching throughout a career, it is vital that the balance of emphasis leans towards support and development, and not towards underperformance management and external review.

So what we need is an organisation that is willing to fill this gap.  An organisation that says, “We will set up quality assessment and certification processes for tools to support the professional development of teachers throughout their careers”.

We could wait for education ministers (MCEECDYA) to set this up – unlikely, I think. Alternatively, we could look at it as an opportunity.  After all, the developers of Wikipedia have managed to be seen as the arbiters of quality input into the global dynamic encyclopedia of life.  No one gave them this job.  They just did it well.  And this is a much less ambitious task.  Any takers out there?