The NAPLAN Parliamentary Review’s ‘do nothing’ recommendations: We can do better

Many of us waited with a degree of eagerness – even excitement – for the release of the Parliamentary Inquiry Report into NAPLAN (Effectiveness of the National Assessment Program – Literacy and Numeracy Report). But what a disappointment!

While it makes a passable fist of identifying many, though by no means all, of the significant issues associated with how NAPLAN is currently administered and reported, it misses some important details. This could be forgiven if the recommendations showed any evidence of careful thinking, vision or courage. But in my assessment they are trivial and essentially meaningless.

We know a lot about the problems with our current approach to standardised testing and reporting. This Report, with the help of over 95 submissions from a wide range of sources, manages to acknowledge many of them. The key problems include:

  • it is not valid and reliable at the school level
  • it is not diagnostic
  • the test results take 5 months to be reported
  • it is totally unsuitable for remote Indigenous students – our most disadvantaged students – because it is not multilevel, in the language that they speak or culturally accessible (Freeman)
  • now that it has become a high stakes test it is having perverse impacts on teaching and learning
  • some of our most important teaching and learning goals are not reducible to multiple choice tests
  • there is a very real danger that it will be used to assess teacher performance – a task it is not at all suited to
  • some students are being harmed by this exercise
  • a few schools are using it to weed out ‘unsuitable enrolments’
  • school comparisons exacerbate the neoliberal choice narrative that has been so destructive to fair funding, desegregated schools and classrooms and equitable education outcomes
  • there will always be a risk of league tables
  • they have an unequal impact on high needs schools
  • they do not feed into base funding formulas. In spite of the rhetoric about equity and funding prioritisation being a key driver for NAPLAN, it is not clear that any state uses NAPLAN to inform its base funding allocations to schools[1]

However, the ‘solutions’ put forward by the report are limited to the following recommendations:

  1. develop on-line testing to improve test results turnaround – something that is happening anyway
  2. take into account the needs of students with a disability and English language learners – a recommendation so vague as to be meaningless
  3. have ACARA closely monitor events to ensure league tables are not developed and that the results feed into funding considerations. This is another vague, do-nothing recommendation, and I am certain ACARA will say that it is already doing this.

These are recommendations to do nothing – nothing that is not already being done, and nothing of meaningful substance.

As an example of the paucity of its analysis I offer the following. The report writes about the lack of diagnostic power of the NAPLAN tests and then says that, even if they were diagnostic, the results come too late to be useful. It then argues, as its first and only strong recommendation, that there needs to be a quicker timeframe for making the results available. Did the writer even realise that this would still not make the tests useful as a diagnostic tool?

This Report, while noting the many problems, assumes that they can be addressed through minor re-emphasis and adjustments – a steady-as-she-goes refresh. However, the problems identified in the Report suggest that tiny adjustments won’t address the issues. A paradigm change is required.

We are so accustomed now to national standardised testing based on multiple choice questions in a narrow band of subjects being ‘the way we do things’, that it seems our deliberations are simply incapable of imagining that there might be a better way.

To illustrate what I mean I would like to take you back to the 1990s in Australia – to the days when NAPLAN was first foisted on a very wary education community.

How many of us can remember the pre-national-testing days? Just in case, I will try to refresh your memory on some key elements and also provide a little of the ‘insider debates’ that took place before we adopted the NAPLAN tests.

1989 was the historic year when all Education Ministers signed up to a shared set of goals under the now defunct 1989 Hobart Declaration. Australia was also in the process of finalising its first ever national curriculum – a set of Profiles and Statements about what all Australian children should learn. This was an extensive process driven by an interstate committee headed by the then Director of School Education in NSW, Dr Ken Boston.

During this time, I worked in the mega agency created by John Dawkins, the Department of Employment, Education, Training and Youth Affairs, initially in the Secretariat for the Education Ministerial Council (then called the AEC) and a few years later heading up the Curriculum and Gender Equity Policy Unit.

The Education Division at that time was heavily engaged in discussions with ACER and the OECD about the development of global tests – the outcomes of which are PISA and a whole swag of other tests.

This was also the time when standardised testing was being talked about for Australian schools. Professor Cummings reminds us of this early period in her submission to the Parliamentary Inquiry when she says that

…through the Hobart Declaration in 1989 … ‘Ministers of Education agreed to a plan to map appropriate knowledge and skills for English literacy. These literacy goals included listening, speaking, reading and writing….

National literacy goals and sub-goals were also developed in the National Literacy (and Numeracy) Plan during the 1990s, including: …comprehensive assessment of all students by teachers as early as possible in the first years of schooling…to ensure that…literacy needs of all students are adequately addressed and to intervene as early as possible to address the needs of those students identified as at risk of not making adequate progress towards the national…literacy goals….use [of] rigorous State-based assessment procedures to assess students against the Year 3 benchmark for…reading, writing and spelling for 1998 onward.

It is interesting to note that the commitment to on-entry assessment of children by teachers referred to by Cummings did result in some work in each state. But it never received the policy focus, funding or attention it deserved, which I regard as a pity. The rigorous assessments at Year 3, however, grew in importance and momentum; the key consequence of this commitment was the Year 3 state-based assessment. Professor Cummings goes on to say that, in order to achieve this goal – a laudable goal – the NAP was born: state based at first, with very strict provisions against providing school-based data, and then eventually what we have today.

Professor Cummings may not have known that many of us working in education at the time did not believe that national multiple-choice standardised tests were the best or only answer, and that considerable work was undertaken to convince the then Education Minister, Kim Beazley, that there was a better way.

During this period, while national standardised literacy tests were being discussed in the media and behind closed doors at the education Ministers’ Council, the US-based Coalition of Essential Schools was developing authentic classroom teaching and learning activities that were also powerful diagnostic assessment exercises. Its stated goal was to build a bank of these authentic assessment activities and to benchmark student progress against them across the US. Its long-term goal was to make available to schools across the US a database of benchmarked (that is, standardised) assessments, with support materials about how to use them as classroom lessons and how to use the results to a) diagnose a student’s learning, b) plan future learning experiences and c) compare a student’s development to a US-wide standard of literacy development.

As the manager of the curriculum policy area, I followed these developments with great interest, as did a number of my work colleagues inside and outside the Department. We saw the potential of these assessments to provide a much less controversial, and less damaging way of meeting the Ministers’ need to show leadership in this area.

Our initiatives resulted in DEETYA agreeing to fund a trial to develop similar diagnostic, classroom-friendly literacy assessment units as the first part of this process. We planned to use these to demonstrate to decision makers that there was a better solution than standardised multiple-choice tests.

As a consequence I commenced working with Geoff Masters (then an assessment expert at ACER) and Sharon Burrows (who headed up the Australian Education Union at the time), exploring the potential power of well-designed formative assessments, based on authentic classroom teaching formats, to identify those at risk of not successfully developing literacy skills.

Unfortunately we failed to head off a decision to opt for standardised tests. We failed for a number of reasons:

  • the issue moved too quickly,
  • the OECD testing process had created a degree of enthusiasm amongst the data crunchers, who had louder voices,
  • our proposal was more difficult to translate to three word slogans or easy jargon,
  • multiple choice tests were cheaper.

At the time I thought these were the most important reasons. But looking back now, I can also see that our alternative proposal never had a chance because it relied on trusting teachers. Teachers had to teach the units and assess the students’ work. What was to stop them cheating and altering the results? Solutions could have been developed, but without the ICT developments we have access to today, they would have been cumbersome.

I often wonder what would have happened if we had initiated this project earlier and been more convincing. Could we have been ‘the Finland’ of education, proving that we can monitor children’s learning progress, identify students at risk early in their school lives, prioritise funding based on need  – all without the distorting effects of NAPLAN and MySchool?

We can’t go back in time but we can advocate for a bigger, bolder approach to addressing the significant problems associated with our current NAPLAN architecture. The parliamentary report failed us here but this should not stop us.

I have written this piece because I wanted us to imagine, for a moment, that it is possible to have more courageous, bold and educationally justifiable policy solutions around assessment than what we have now. The pedestrian “rearrange the deck-chairs” approach of this Report is just not good enough.

So here is my recommendation, and I put it out as a challenge to the many professional education bodies and teacher education institutions out there.

Set up a project as follows:

Identify a group of our most inspiring education leaders through a collaborative peer nomination process. Ensure the group includes young and old teachers and principals, and teachers with significant experience in our most challenging schools, especially our remote Indigenous schools. Provide them with a group of expert critical friends – policy experts, testing and data experts, assessment and literacy experts – and ask them to:

  • Imagine there is no current assessment infrastructure
  • Devise an educationally defensible assessment architecture – taking a green fields approach

I can almost guarantee that this working group would not invent NAPLAN and MySchool or anything like it, and we would be significantly better off.

We have dug ourselves into an educationally indefensible policy hole because we have allowed politicians and the media to drive key decisions. To my knowledge we have never established an expert group of educational practitioners, with access to specialist expertise, to develop better policy solutions in education. Why don’t we give it a try?

Any takers?

[1] I understand that NSW does use the NAPLAN results to channel some additional funds to low performing schools but these are above the line payments.

Is opting out of testing just selfish individualism?

In a recent article about American culture and the opt-out society, Alan Greenblatt described the growing and successful movement encouraging parents to refuse to allow their child to participate in national standardised testing as selfish individualism. It might be driven by a parent’s individual interest, he argues, but it is selfish and against collective interests:

“It’s probably true that the time spent on testing isn’t going to be particularly beneficial to the kids, but it’s very beneficial to the system,” says Michael Petrilli, executive vice president of the Fordham Institute, an education think tank. “If you have enough people opt out of these tests, then you have removed some important information that could make our schools better.”

I find this amusing because the whole corporate reform movement, for which testing is the centrepiece, is built on the neoliberal belief that the best solution to everything – prisons, health, education and so on – is to turn it into a market and allow competition and individual choice to drive better value.

In fact this was the prime motivation described by Kevin Rudd when he first announced the ‘school transparency agenda’ on 21 August 2008 at the National Press Club. The speech has mysteriously disappeared, but I am quite clear that Kevin Rudd said something along the following lines:

“If parents are unhappy with their local school because of the information in MySchool, and decide to transfer their child to another better performing school, then that is exactly what should happen.  This is how schools will improve, through parents voting with their feet.”

Now nobody who works in a struggling school thinks this is the way schools improve. Australia has run an aggressive market choice model of school funding for nearly two decades now, and all we have to show for it is a highly class-segregated schooling system and high levels of inequality.

So let me reassure parents who are concerned about our high stakes NAPLAN testing regime.  Opting out of having your child participate in these tests is much more of a community act than deciding to send your child to an elite school.

New Year Resolutions for Public Education Supporters

I have avoided reading ‘the 13 best’ or ‘the 10 most X of Y’ lists which seem to be quite the thing at this time of the year.

But today Lyndsay Connors sent me a link to this blog by John Kuhntz, which included a list of the five most important things public educators in the US must do to maintain and build the push-back that is gathering momentum across many US states.

We are not at the same point in the education politics cycle, but our issues are no less critical. Unless we build momentum on the implementation of needs-based funding across schools, we are in danger of losing this once-in-a-lifetime opportunity to achieve a long-held principle.

At the same time there are ominous signs that after much tossing and wriggling and saying very little of substance, Education Minister Christopher Pyne is finally developing his own education policy agenda.  It will almost certainly not be evidence based, or conducive to building quality or equity.

We know already some of its focus areas and dimensions:

  • Make more schools like autonomous non-Government schools because they are the gold standard and competition breeds perfection.
  • Get rid of NAPLAN reporting but increase testing and its stakes by using it to evaluate and reward or punish teachers,
  • Roll back the national curriculum and reinstitute the curriculum us baby boomers remember so well because we had to memorise it
  • Promote Direct Instruction for the poor, the Indigenous and all the ‘other people’s children’

There is a lot at stake here, so I think we need to resolve to get active in 2014 more than ever. Kuhntz’s list is a pretty good starting point for us. So here it is:

1. Be active online, in the papers, and in your state capital. This is highly relevant to Australia. One derivative, poorly referenced paper from a well-funded or even self-styled ‘pretend’ institute and the media saturation reverberates for days. They have an in with the media, and many have the funds to run high-profile seminars and launches.

We need to be active in blogs, media comments, social media, letters to the editor, and article writing and sharing.  We need to make our views and the strength of our presence known whenever there are elections, community consultations or other forms of political engagement.

We need to anticipate new developments and get ahead of the game preparing considered responses.

And even though it is tiring and seems pointless, we also need to respond to the pop phrases and concepts that are based on very little of substance but all too often pass uncontested and start to sound obvious and factual. ‘More money won’t help’, ‘teacher quality is all that matters’, ‘small class sizes are a waste of our dollars’, ‘public schools are failing’ and so on – how many times do we hear this sort of nonsense and just shrug?

2. Be active locally. I must admit I had not considered this issue, and our school board politics is vastly different. However, the move to independent public schools means there is a risk that special interest groups of parents or others will exert undue influence on local schools. Schools could be vulnerable to capture by special interest groups who may also see it in their interest to push out other groups of students and parents.

3. Embrace your expertise. One of the exciting developments in the US is the establishment of networks of practising teachers who are voicing their concerns and sharing their ‘expert’ and important grounded perspectives on education. Organisations like the Network for Public Education and The Educators Room put teachers and principals at the centre.

This happens to some extent in Australia with the Twitter handle @edutweetoz and through principals’ and professional networks. We could benefit from hearing more from teachers about what it means to struggle in poorly resourced, high need schools, how they juggle the competing demands of quality learning and test preparedness, and so on. As Kuhntz reminds us: “If educators are to have an impact, they must have a voice. If they are to have a voice, they must be willing to take the microphone from people who feel they are entitled to hold it. And the same goes for students. Teachers need to embrace the student voice movement. Democracy comes from the people most affected by policy–it isn’t done to them–and in education, that’s the students.”

4. Join others. Relatedly, if you are serious about protecting the promise of public education, you have little choice but to join others in holding back the tide of corporate reform. There is diversity in the pro-public education camp. If you are progressive, there is a place for you. If you are conservative, there is a place for you. If you support or oppose the Common Core, there is a place for you. Some organizations and individuals standing together differ on their opinions about well-regulated charter schools. Some differ in their opinions about how much standardized testing is appropriate. Those of us on the front lines of defending the promise of public education are not a monolith. What binds us together is our shared desire to prevent the devaluing of public education via reckless rhetoric and demeaning and unfair policies.

This is really a call for more public education campaigners from all walks of life to stop watching from the margins, or being lone rangers, and to get active in the organisations you associate with – or to find an organisation to join. It could be a parent lobby group, a professional association, the union, a specific purpose coalition, a relevant not-for-profit or your workplace.

5. Be great. The best defense of the public education system is a strong public education system. Yes, it feels to many of us that we are being sabotaged and set up to fail. Yes, many of us have a hard time doubting that the point of all the testing is to prove that we stink. But be that as it may, we have the opportunity day after day to go into our classrooms and our administrative offices and invest ourselves in activities that make a difference in children’s lives. When we do our jobs well, we win the support of our communities and our parents and students. And, to butcher-phrase an Abraham Lincoln quote often used by the incomparable Jamie Vollmer, “if public opinion is with us, we can’t lose; if it is against us, we can’t win.” Public opinion starts in your classroom or office. There are obstacles–especially in America’s poorest communities–that often seem impossible for teachers to overcome. But we must give our all and do our very best. We must show the world that we aren’t afraid of accountability and that, in fact, we embrace something far greater: responsibility. (H/T Pasi Sahlberg).

So does anyone want to add to or amend this list?

When will the Australian Government acknowledge that the NT can’t be relied on to deliver for its remote Indigenous citizens?

2008 was important for Indigenous education because that was when all Australian states and the Commonwealth signed up to the National Indigenous Reform Agreement (NIRA) through the COAG reform process. I worked for the NT Department of Education at the time, and this development gave me a sense of cautious optimism.

The NIRA committed all states and territories to halving the gap for their Indigenous citizens on a number of key measures by 2020. For the school sector, the already agreed targets set out in the National Education Agreement (NEA) – improvements to student performance based on NAPLAN tests, and Year 12 retention and completion – were confirmed.

I now see that while the NIRA has given added focus and priority to a very important equity policy issue, it will not drive change for the most disadvantaged Australian citizens  – those living in remote discrete communities in the Northern Territory.

There are many reasons for this but here I want to focus on just two:

  1. The unsuitability of the targets and measures that have been set; and
  2. The decision to drive change through an outcomes focus – a strategy that is silent on inputs and process measures.

Problem One: The suitability of the NIRA targets and measures for NT remote communities

Example One: NAPLAN Performance

According to Nicholas Biddle (2002), over 67 per cent of NT Indigenous people speak a language other than standard Australian English in their home. For children who grow up in discrete Indigenous communities in remote NT, this figure is nearer to 95–100 per cent. This doesn’t just mean that these children speak another language; it means that they don’t speak English, and nor does anyone around them, so they don’t hear it spoken in the home, in the playground, in the community, at social functions, on the radio, in shops or in church.

They live in a non-English speaking world, until they arrive at school.

When the children go to school, the school has to work out how to teach a whole class of children who do not understand English. In communities like Yirrkala where children speak a living Indigenous language or languages, and there is a tradition of two-way education, children learn in their own language, Yolnu Matha, using texts that have been developed through the school for this purpose. English exposure is largely oral at this stage.

A potted history is required here. The bilingual education program was once well funded and well supported, with trained linguists actively helping the school develop new resources, skilled two-way teachers, and Indigenous Education Workers with high levels of Indigenous language proficiency in classrooms. Over the years funding dwindled to a trickle. First the program was abolished, only to be reinstated without critical funded positions, and for many years it languished unsupported. Then in 2010, it was briefly NT government policy to teach only in English for most of the school day.

This was introduced in haste by the former Indigenous Minister for Education, Marion Scrymgour, who later apologised for this ‘mistaken’ decision (Rawlinson, 2012). However, Yirrkala, along with a number of other schools, refused to comply, and the NT Education Department now appears to passively ‘allow it’, but with no support. Even those schools that did comply, in part or completely, still faced the overwhelming challenge of teaching a whole class of children who do not speak or understand any English. Whatever adaptations they made involved major departures from the standard approach to literacy education in Australian schools.

By third grade, many remote NT classrooms are just starting to expose students to English language texts and are still using community language reading texts. Their English focus at this point is still oracy, as they rightly see this as a pre-condition for being able to read English. Yet in Year 3, all these learners are forced to sit the NAPLAN tests – tests that are totally unsuited to their stage of English language development, no matter how their schools approach the ESL challenges. The vast majority of students either do not turn up on test days or get a zero score – meaning they are unable to get even one answer right. Indeed, we are crazy to even expect them to.

Now let’s compare this treatment and experience to a similarly English language challenged group.

Children who are new arrivals from non-English speaking background countries can access up to one year of an intensive English program and can be exempted from sitting the NAPLAN tests for this period.

These children have come from a foreign land, but in many ways remote Indigenous children are still living in a foreign land. Yet no parallel Commonwealth-funded intensive English program was ever provided for them, and the exemption definitions do not allow them to be excused from NAPLAN testing.

The solution to this problem is extremely easy, affordable and accessible. There are culturally sensitive, developmentally appropriate diagnostic assessments, developed by the Australian Council for Educational Research (ACER) with remote Indigenous students in mind, and sophisticated processes that would enable the results of these tests to be equated to mainstream NAPLAN results in ways that make sense. This simple but urgent change would put an end to the negative impact of the tests in remote schools. Having a class in which everyone scores zero gives the worst kind of useless feedback to parents, students, teachers and systems.

Example 2: Year 12 retention/completion targets

For this target, the reporting framework relies on two measures. For Year 12 completions, the target group is 20–24 year olds and the data source is the School Education and Workforce (SEW) Survey managed by the ABS. The ABS data is collected through a telephone survey, from which remote communities are excluded. This means that for our most disadvantaged cohort we have no data, and therefore no performance targets.

This should be addressed through the initiation of a remote Indigenous survey as a matter of urgency.

The other measure is Year 12 retention. There are a lot of issues with this measure, but for remote Indigenous children the key problem is that it means absolutely nothing.

A friend of mine running a government service in remote Australia went to the local Indigenous school and promised a guaranteed job to everyone who completed Year 12.  Later she was taken aside by a teacher who explained to her that completion to Year 12 just means that a student is still attending school to Year 12 – that is they are still enrolled and that is all. While the school could point to a few Year 12 completers in the community most of them could not read enough to be safe in the workplace.

In my view, it is quite mischievous to use as a measure something that appears to capture something of value but actually means nothing at all. We should stop this practice. Its existence means that the lack of meaningful data in this area is hidden from view and never prioritised.

Problem Two: The limitations of focusing only on outcome measures

There is an assumption that outcome measures are a magic bullet and will bring about the required changes on their own. This is not the case where good governance is lacking. Marcia Langton (2012), the Foundation Chair of Australian Indigenous Studies at the University of Melbourne, argues that the high levels of funds allocated to the NT by the Commonwealth Grants Commission, on the basis of the level of disadvantage of its Indigenous citizens, have been diverted to other purposes.

This has happened over a sustained period, no matter which party holds power. Other policy observers – journalists, social justice advocates and researchers such as Nicolas Rothwell (2011), Rolf Gerritson and Barry Hansen – support this view with data.

According to Michael Dillon and Neil Westbury, serious questions were raised about the level of funding and servicing of remote Indigenous schools in 2006 and, in response to queries, the Department claimed it was developing a new teacher staffing framework that would ensure transparent and consistent needs-based funding. In 2008, when I commenced with the Department, the staffing review was said to be in its final stages and about to be ticked off. In 2011, in response to a query from me, the Department again said that a new transparent needs-based staffing formula was nearly complete. Everyone I worked with knew it would never happen – taking money away from Darwin schools to give to schools in the bush is political suicide, and no party wants to commit political suicide.

Rothwell argues that there are no votes in solving Indigenous disadvantage and no strategies to make transparent what is happening or to hold the Territory accountable. The mantra that there are no votes in Indigenous issues is an oft-repeated NT Public Service phrase. The vast difference between the world of ‘white Darwin’ and the world of Indigenous Darwin and Indigenous remote communities is shocking.

Mainstream Darwin residents enjoy the laid back lifestyle, visits to markets, world-class conference precinct with a wave pool, state of the art senior colleges and middle schools and the extremely elaborate Parliament House and precinct, all for a Territory of less than 220,000.

Town Camp Darwin is different. Nine-Mile Town Camp, for example, is not marked on the map – it is just a blank space. This is a place where buses don’t visit; where the main power line to Darwin runs through the middle but, when I was there in 2009, there were no street lights (this may well still be the case); where many houses are condemned and several have no ablution facilities; where there are no footpaths and the grass is higher than a primary school child. The children who do manage to go to school have to be at the bus stop outside the community, with no bus shelter, by around 7.20am, because the only bus they can catch picks them up first and then all the other children. They are on the bus for 50 minutes to get to a school less than 15 minutes away.

Remote NT is worse. The average number of people per bedroom is around three, rubbish services are spasmodic, there are no gutters, and I have seen children swimming in open drains. There is no parity of amenity.

In 2007 I attended the opening of a new high school that would never have been built in a Darwin suburb. It was built on the only oval, taking away this amenity from the whole school. It had no footpaths or covered ways, no water faucets, a very poor library, a staff room that was too small for the number of staff, and huge mud puddles between buildings.

Yet before this date, this community of over 2,900 people had no secondary school whatsoever.

My argument in a nutshell is that outcomes-based accountability measures will not put any real pressure on the NT to do the right thing by their Indigenous citizens, and real accountability is what is urgently needed here. They know that they can keep on failing because this issue is already assigned by many to the too-hard-basket.

At one of the COAG working party negotiations that I attended, where states were arguing over funding shares, one state representative remarked that there was no point giving any funds to the NT because they wouldn’t deliver the goods and that the close-the-gap target could be achieved by focusing on the Indigenous population in the eastern states alone.

Let’s not make this chilling black humour a reality.

A Tale of ACARA and the see-no-evil monkeys – There is no excuse for wilful ignorance

The Australian Senate’s Education, Employment and Workplace Relations Committee is currently holding an Inquiry into the effectiveness of the National Assessment Program – Literacy and Numeracy (NAPLAN).

Over 70 submissions of varying quality have been received. In this article, I focus on the submission from the Australian Curriculum, Assessment and Reporting Authority (ACARA). ACARA is the custodian of NAPLAN and of its use for school transparency and accountability purposes on the MySchool website.

One of the focus questions in the Inquiry’s Terms of Reference refers to the unintended consequences of NAPLAN’s introduction.  This is an important question given widespread but mainly anecdotal reports in Australia of: test cheating; schools using test results to screen out ‘undesirable enrolments’; the narrowing of the curriculum; NAPLAN test cramming taking up valuable learning time; Omega 3 supplements being marketed as able to help students perform better at NAPLAN time; and NAPLAN test booklets hitting the best seller lists.

Here is what ACARA’s submission to the Senate Inquiry has to say about this issue:

To date there has been no research published that can demonstrate endemic negative impacts in Australian schools due to NAPLAN.  While allegations have been made that NAPLAN has had unintended consequences on the quality of education in Australian schools there is little evidence that this is the case. 

The submission goes on to refer to two independent studies that investigated the unintended consequences of NAPLAN.

ACARA dismisses a Murdoch University research project[1] led by Dr Greg Thompson as flawed because its focus is on changes to pedagogical practices as a result of the existence and uses made of NAPLAN.  The basis of the dismissal is that if teaching practices change then that is all about teachers and nothing to do with NAPLAN.  Yet this report makes clear that teachers feel under pressure to make these changes, changes they don’t agree with, because of the pressures created by the use of NAPLAN as a school accountability measure.  In other words, in one clever turn of phrase, ACARA rules out of court any unintended consequences of NAPLAN that relate to changes to teachers’ practice.

ACARA also dismisses a survey undertaken by the Whitlam Institute because it “suffers from technical and methodological limitations, especially in relation to representativeness of the samples used”, and rejects its conclusions without outlining what those conclusions are. Yet this survey was completed by nearly 8,500 teachers throughout Australia, and it was representative in every way (year level taught, sector, gender, years of experience) except for an oversampling in Queensland and Tasmania. On this very sampling concern, the writers even reported that they weighted the responses to compensate for the differential.  The report documents many unintended consequences that ACARA now says are not substantiated, on the basis of a spurious sampling critique. This is intellectually dishonest at best.

ACARA’s dismissal of all of the findings of these two research projects on spurious grounds, while refraining from stating what those findings are, is in stark contrast to its treatment of unsupported statements about enrolment screening made by stakeholders who are far from impartial.

In response to the claims that some schools are using NAPLAN results to screen out ‘undesirable students’ ACARA states that it is aware of these claims but appears willing to take at face value comments from stakeholders who represent the very schools accused of unethical enrolment screening.

It is ACARA’s understanding that these results are generally requested as one of a number of reports, including student reports written by teachers, as a means to inform the enrolling school on the strengths and weaknesses of the student. The purpose for doing so is to ensure sufficient support for the student upon enrolment, rather than for use as a selection tool. This understanding is supported by media reporting of comments made by peak bodies on the subject (my emphasis).

ACARA’s approach to this whole matter comes across as most unprofessional.  But, unfortunately for ACARA, this is not the whole story.  There is a history to this issue that began in 2008, almost as soon as the decision to set up MySchool was announced by the then Prime Minister, Kevin Rudd, as part of his school transparency agenda.

Three years ago this month I wrote an article[2] about the importance of evaluating the impact of the MySchool Website and the emergence, under FOI, of an agreement in September 2008 by State and Commonwealth Ministers of Education to:

…commence work on a comprehensive evaluation strategy for implementation, at the outset, that will allow early identification and management of any unintended and adverse consequences that result from the introduction of new national reporting arrangements for schools (my emphasis).

It was clear from the outset that this evaluation should have been managed by ACARA, the organisation established to manage the school transparency agenda.  In 2010, in response to my inquiry to ACARA about this Ministerial agreement, the CEO of ACARA stated that it was not being implemented at that point in time because early reactions to hot-button issues are not useful, and because the website did not yet include the full range of data planned.

This was a poor response for three reasons.

Firstly, useful evaluations are designed as part of the development process, not as afterthoughts. One of the vital elements of any useful evaluation is the collection of baseline data that would enable valid comparisons of any changes over time.  For example, information could have been collected prior to the MySchool rollout on matters such as:

  • Has time allocated to non-test based subjects reduced over time?
  • Has teaching become more fact based?
  • Has the parent demographic for different schools changed as a result of NAPLAN data or student demographic data?
  • Are more resources allocated to remedial support for students who fail to reach benchmarks?
  • Are the impacts different for different types of schools?

Secondly, the commitment to evaluate was driven by concerns about the possibility of schools being shamed through media responses to NAPLAN results, the narrowing of curriculum and teaching, further residualisation of public schools, test cheating and possible negative effects on students and teachers. Identifying these concerns early would allow for revising the design elements of MySchool to mitigate the impacts in a timely fashion.  There is no real value in waiting years before deciding corrections are needed.

Thirdly, anyone who seriously believed that the set of data elements agreed as possibly in scope for MySchool was a complete list, and able to be developed quickly, was dreaming.  Waiting for the full range of data meant, in reality, an indefinite delay. There are still data items in development today.

So now, five years on from the Ministerial directive that there was a need to actively investigate any unintended consequences, there is still no comprehensive evaluation in sight. One suspects that ACARA finds this quite convenient and hopes that its failure to act on this directive stays buried.

However, Ministers of Education still had concerns.  In the MCEECDYA Communiqué of April 2012 the following is reported:

Ministers discussed concerns over practices such as excessive test preparation and the potential narrowing of the curriculum as a result of the publication of NAPLAN data on My School.  Ministers requested that ACARA provide the Standing Council with an assessment of these matters.

On the basis of this statement, I wrote to ACARA on 27 April 2013 requesting information on action in response to this directive – by then over 12 months old.  To date I have received no reply.

So what sense can be made of this?

If one takes at face value ACARA’s statements that it knows of no information regarding the extent of unintended consequences, one can only conclude that ACARA has twice failed to act on a Ministerial directive.

Here we have a Government body: aware of Ministers’ concerns about unintended negative consequences of a program it manages; aware of widespread anecdotal concerns, some of them quite serious; dismissing, without any proper argument, the few pieces of evidence that do exist; and refusing to undertake any investigation into this matter despite two Ministerial directives to do so.

Wilful ignorance about the potential unintended and harmful impacts of a program for which an agency has responsibility, while all the while professing a strong interest in the matter, is highly irresponsible and unprofessional.

It is also quite astonishing given the Government’s commitment to the principle of transparency and the fact that ACARA was established specifically to bring that transparency and reporting to Australia’s schools.

But to then write a submission that almost boasts about the lack of information on this issue, while dismissing with poor arguments the evidence that is growing, is outrageous. It also gives new meaning to a throwaway line in its submission about the negative findings from the Whitlam Institute survey: “Further research with parents, students and other stakeholders is needed to confirm the survey results on well-being.”

Further research is indeed needed, and this further research should have been initiated by ACARA quite some time ago – five years ago, to be precise.  It is convenient for ACARA that such research is not available. It is intellectually dishonest and misleading for ACARA to now state that it “takes seriously reports of unintended adverse consequences in schools. It actively monitors reports and research for indications of new issues or trends.”

Of course, there is another, more alarming possibility: that this work has been undertaken but is not being made public, and that ACARA is misleading the Parliamentary Inquiry and the public by denying that any such information exists.

In either case, I am forced to conclude that ACARA does not want any unintended consequences of a program for which it is responsible to be known, in spite of its ‘interest in this issue’, and is persisting in its position of wilful ignorance.

In an effort to restore public confidence in its work, ACARA should commit to undertaking this research at arm’s length, using independent researchers, and to reporting the findings to Parliament without delay.  Perhaps this Inquiry could recommend this.

Evaluating MySchool – We are still waiting, ACARA

Note:  Three years ago this month – in June 2010 – I wrote about the agreement by Ministers of Education to evaluate MySchool in order to identify and mitigate unintended consequences.  There is still no such evaluation, nor any commitment by ACARA to undertake one.  The article is republished here as a backgrounder to my next post.

At last it is official.  Well before the launch of the MySchool website, state and federal education ministers agreed to task an expert working group to ‘commence work on a comprehensive evaluation strategy for implementation at the outset, that will allow early identification and management of any unintended and adverse consequences that result from the introduction of new national reporting arrangements for schools’, according to minutes of the ministers’ meeting held in Melbourne on 12 September 2008.

It appears, however, that the decision was never implemented and responsibility for it has transferred to the Australian Curriculum, Assessment and Reporting Authority (ACARA).

Now that the initial hype surrounding the launch of the MySchool website is over and the education community is settling down for a more considered debate on the opportunities and challenges of MySchool, I thought it might be useful to put forward some thoughts about what should be in scope for this MySchool evaluation.

EVALUATING ACCOUNTABILITY

My understanding of the ACARA position is that it is too early to undertake an evaluation, because early reactions to a ‘hot button’ issue are not an accurate reflection of the longer-term impacts, and because the MySchool website does not yet carry the full range of information that is planned.

I have to agree that at this stage in the rollout of the full MySchool concept, we have an unbalanced set of data.  As data on parent perceptions, school level funding inputs and hopefully data on the quality of school service inputs (like the average teaching experience of the staff and their yearly uptake of professional learning) are added to MySchool, the focus of schools and systems might well shift.  I, for one, hope so.  If extra information like this doesn’t shine a bright light on the comparative level of the quality of the schooling experience for high-needs students relative to advantaged students, this will be a lost opportunity.

So yes, the effects of MySchool might change over time.  But this doesn’t mean we should wait to evaluate what is happening.  In fact, I think we have already missed the best starting point.  A good evaluation would have included a baseline assessment report, something that would have told us what sort of accountability pressures schools were already experiencing for good or ill prior to the introduction of MySchool.

For several years, education systems had already had access to data from the National Assessment Program – Literacy and Numeracy (NAPLAN) down to the school, class and even student level.  In most government systems, NAPLAN data was but one small component of a rich set of data used by schools to review their strategic plans, identify improvement priorities, set improvement targets and develop whole-school improvement plans.  They were already using parent and student surveys; student data on learning progress, attendance, turnover, post-school destinations, expulsions and discipline; and teacher data on absenteeism, turnover, professional learning and performance reviews.  At the same time, most education systems had access to this same information and were using it to prioritise schools in need of external reviews or other forms of intervention as part of system-level school-improvement strategies.

Contrary to popular belief, the introduction of MySchool did not commence a new regime of school accountability but it did both broaden it out and narrow it down.  It broadened it out to include parents and the community as arbiters of a school’s performance and applied this public accountability to independent as well as systemic schools. It narrowed it down by using just a tiny amount of data, and that has had the effect of privileging the NAPLAN test results.

ANECDOTAL EVIDENCE

Even before the launch of MySchool, systemic schools such as government and some Catholic schools were already feeling some degree of pressure from their systems about student performance in NAPLAN.  For example, in the Northern Territory, this had already given rise to a strong push from the education department to turn around the high level of non-participation of students in NAPLAN testing – an initiative that was highly successful.

A baseline study could have taken a reading on the extent to which schools and systems were responding in educationally positive ways under the previously established accountability regimes, and whether there were already unintended impacts.

Without a baseline assessment we are left with only anecdotal evidence of the effect of and reaction to MySchool, and responses vary widely.  I recall hearing a more outspoken principal in a very remote school saying something along these lines:

If I have to go to one more workshop about how to drill down into the NAPLAN data to identify areas of weakness and how to find which students need withdrawal based special support, I will scream.  All areas require focused intervention at my school and all our students require specialised withdrawal support.  We don’t need NAPLAN to tell us that.  What we need is …

I suspect that for schools where the NAPLAN outcomes were poorest, there was already a degree of being ‘over it’, even before the advent of MySchool.  The challenge for the most struggling schools is not about ‘not knowing’ that student progress is comparatively poor.  It is about knowing what can be done.  For years, struggling schools have been bombarded with a steady stream of great new ideas, magic bullets and poorly funded quick fixes that have all been tried before.  Their challenge is about getting a consistent level of the right sort of support over a sustained period of time to ‘crash through’. What has MySchool done for schools with this profile?  And, more significantly, what has it done for their school communities?

On the other hand, I have heard anecdotally from teachers in the Australian Capital Territory that the launch of the site has drawn some comments to the effect that ‘we probably needed a bit of a shake up’ or ‘perhaps we have been a bit too complacent’.  Will the effect of MySchool be different depending on school demography and schools’ previously experienced accountability pressures?

There is evidence from the United States that this is likely to be the case.  Linda Darling-Hammond’s new book The Flat World and Education makes the point that the negative impacts of accountability driven by multiple-choice tests are greater when the stakes are higher but are also greater in schools serving high-needs communities.  For schools in high-needs communities, the stakes are higher than for comparatively advantaged schools precisely because their results are lower and their options fewer.

Darling-Hammond’s research suggests that this unequal impact results in more negative consequences for schools serving high-needs communities on three fronts.  The first relates to the impact on student engagement and participation.  The more pressure on a school to perform well in national testing, the more likely it is that subtle and not-so-subtle pressures will flow to the highest-needs students not to participate in schooling or testing.  She documents unequal patterns, with high-needs schools experiencing very marked increases in student suspensions, drop-outs, grade repeating and non-attendance on test days as a result of accountability measures driven largely by tests.

The second relates to the stability, quality and consistency of teaching.  Darling-Hammond notes that the higher the stakes, the greater the negative impact on the stability and quality of teaching.  Her book documents decreases in the average experience level of teachers in high-needs schools after the introduction of the US No Child Left Behind accountability program.  Ironically, some of the systemic responses to failing schools exacerbate this, as systems frequently respond by funding additional program support for at-risk students, English language learners and special-needs students.  These programs almost always engage non-teaching support staff.  The practical impact is that the students with the highest needs get even less access to a qualified teacher during their classroom learning time.

The third relates to the quality of the teaching and learning itself.  This is the topic that has most occupied the Australian debate on MySchool.  There has been no shortage of articles in the media predicting or reporting that MySchool will result, or has resulted, in a narrowing of the curriculum, an increase in the teaching of disconnected facts or rote learning, and classroom time being devoted to test preparation at the expense of rich learning.  In addition, there are websites surveying changes at the school level in terms of what gets taught and how it gets taught.  Less commonly acknowledged is the overseas experience suggesting that the quality of teaching and learning is most negatively affected in schools serving high-needs communities.

These differentiated impacts need to be identified and addressed because, if they are not, differences in educational outcomes between high-needs schools and advantaged schools are likely to increase.  The impact of parents choosing to change schools, sometimes on the basis of quite spurious information, will also be felt more in lower socioeconomic communities.  Increased residualisation of government schools leads to higher concentrations of students of lower socioeconomic status, and this has a negative effect on all the students of those residualised schools.  Of course, the schools that struggle the most may ironically be exempt from this kind of adverse pressure, because many families in these communities do not really have the choice of sending their children anywhere other than the local government school.

Are there other unintended consequences worth investigating? The potential impact of the Index of Community Socio-Educational Advantage (ICSEA) values being used to set up a notion of ‘like schools’, for example, deserves further attention.

The ICSEA assigns a value to each school based on the socioeconomic characteristics of the areas where students live, whether the school is in a regional or remote area, and the proportion of Indigenous students enrolled at the school.  The development and use of ICSEA is a complex matter that deserves a separate article; here I am just looking at what an evaluation should focus on around ICSEA’s use as part of the MySchool website.  For all its faults, in very broad terms it tells us how advantaged (a high score) or disadvantaged (a low score) a school is.

I must admit that when the concept of reporting by school demographic type was first mooted, I got a little excited.  I took for granted that this would lead to a focus on the fact that, in Australia, a student’s chance of success at school is still fundamentally influenced by the student’s level of disadvantage and the relative disadvantage of the school she or he attends. I thought, at last, we had a chance to expose the fact that Australia has not managed to break the ‘demography as destiny’ cycle.  I took for granted that this would lead to a focus on this outrage and lead to a renewed commitment to addressing it.  I also thought that the ICSEA had the potential to become a framework for a more focused needs-based funding approach than the current state approaches to this.

I was wrong.  Since the launch of MySchool in January,[1] I have googled and googled and found a lot of reporting about the way in which the tool, in its first iteration, has led to some very strange school groupings but, with the exception of an excellent article by Justine Ferrari in The Australian, almost nothing about the link between poverty, disadvantage and NAPLAN results.

This initially surprised me – but then I was reminded that John Hattie, in his recent book Visible Learning, makes the point, almost in passing, that in developed countries the differences in learning outcomes between students within schools are greater than the differences between schools.  He refers to interschool differences as trivial at best.  Maybe we in Australia assume that this is so for us.

The MySchool data tells a very different story, and anybody can verify this for themselves.  It shows that schools in the lowest ICSEA grouping (those with ICSEA values in the low 500s) that are assessed as doing better than their like-school peers (a dark green rating) have NAPLAN scores for their Year 9 students that are below the NAPLAN scores for Year 3 students in schools serving advantaged communities (those with ICSEA values around 1,100).  In other words, schools where Year 9 students are achieving average Year 3 results are rated as performing well.

I had assumed the ICSEA would be a tool to shine a light on the issue of systemic educational disadvantage, but I have come to understand that it may well be inadvertently taking attention away from this issue. The structure of the website precludes easy comparisons (for good reasons), but it also draws web users’ attention to comparisons between like schools, not unlike schools.

If the combination of the limited searchability of the MySchool website and the complexity of the ICSEA concept has led, unintentionally, to an unspoken legitimising of ‘demography as destiny’, this is a serious concern that an evaluation would pick up.  Will schools start to say, ‘we are doing pretty well considering the community we serve’? Have we inadvertently naturalised different outcomes for different groups of students?

In our efforts to minimise the shaming of high-needs schools and their communities, we need to ensure that we have not taken away the shame that all Australians ought to feel about the vicious cycle of family poverty and the failure of the institution of education to systemically and sustainably disrupt it.

This is not a failure that can be laid at the feet of the individual classroom teacher or school principal.  It is a systemic failure.  If we insist on putting pressure solely on individual schools to do the ‘educational heavy lifting’, our revolution will fail.  If we encourage parents to vote with their feet for the best school without first guaranteeing that every choice can be a high-quality choice, with equal opportunity to learn, we will also fail.  Ideally, the evaluation should include in scope not just unintended impacts but the explicitly intended effects too.

While there is no space to embark on this discussion here, I would like to end with a quote from Linda Darling-Hammond’s The Flat World and Education that sums up most eloquently the kinds of changes that education transparency and accountability frameworks ought to drive:

Creating schools that enable all children to learn requires the creation of systems that enable all educators and schools to learn.  At heart, this is a capacity-building enterprise leveraged by clear, meaningful learning goals and intelligent, reciprocal accountability systems that guarantee skilful teaching in well-designed schools for all learners.

REFERENCES:

Publication details for the original article: Clark, M. (2010) ‘Evaluating MySchool’, Professional Educator, 9(2), June, pp. 7–11.

Darling-Hammond, L. (2010) The Flat World and Education: How America’s commitment to equity will determine our future. New York: Teachers College Press.

Ferrari, J. (2010, May 1) ‘On the honour roll: the nation’s top schools’. The Australian, Inquirer, p. 5.

Hattie, J. (2009) Visible Learning: A synthesis of over 800 meta-analyses relating to achievement. London and New York: Routledge.

Patty, A. (2010, March 30) ‘Evaluation of MySchool pushed aside say critics’. Sydney Morning Herald, p. 2.


[1] Of course, the research undertaken since this date by Richard Teese, and also by Chris Bonnor and Bernie Shepherd, has achieved this.  We no longer hear the line that in Australia the biggest differences are between classes in the same school.  Thanks to the work of Teese, Bonnor and Shepherd, this myth has been busted.

A Small Step for Government But a Giant Step for Remote Indigenous Children: NAPLAN and Indigenous Learners

The current review of NAPLAN is the second Parliamentary Inquiry into the use of NAPLAN data in Australia.  If it goes the way of the first inquiry then little change can be expected.

When the previous Inquiry was initiated, I was working for the Australian College of Educators.  We put a huge amount of effort into our submission.  It went through a member consultation process, was submitted, and then sank like a stone.  Indeed, even the website that hosted all the submissions appears to have disappeared. Nothing much came of it, as can be expected when an issue has become politicised.

I am much less optimistic about what can be achieved this time round; my ideal outcome is unrealistic. The Inquiry will not lead to a change in emphasis from testing and measuring to supporting and building capability – no matter how much the evidence supports such a change.

However we can and should advocate to address the most egregious problems and unintended consequences associated with NAPLAN.  This is our chance to highlight them.

For this reason I was very excited to see that Submission No. 71 to the Inquiry comes from Leonard Freeman, the Principal of Yirrkala School in the remote northeast of the Northern Territory.

Yirrkala School is a very remote K–12 school near the mining township of Nhulunbuy on the Gove Peninsula.  It serves a discrete, remote Indigenous community on Aboriginal land, and 100% of the students who attend are Indigenous.  According to MySchool, 97% of the students at the school have a ‘Language Background Other Than English’ (known as LBOTE).

Now, it is easy to underestimate the significance of this language background issue in Indigenous contexts.  Children who grow up in a remote Indigenous community where their mother tongue is still alive and thriving are, in every sense of the word, still residing in a non-English-speaking country.  They arrive at school with almost no experience of hearing English spoken.  They don’t hear it at home, around town, on their radio station, in local stores, at the health centre, or at social and cultural events.

LBOTE is a broad category, and a very unhelpful one for understanding language challenges and issues.  Children and their families can be quite fluent in English but, if they speak a language other than English at home, they are still classified as LBOTE.  Most LBOTE children who have very little or no English are recent arrivals from a non-English-background country.  They might reside in suburbs where English is not the dominant home language and, for their first school year, attend an intensive English language unit, but English is still heard around them – in playgrounds, health centres, playgroups, libraries, on radio and TV, and in the school playground and classrooms.  They are, at some level, immersed in a culture where English is heard.

Children at Yirrkala can grow up hearing almost no English spoken.  When they get to school, their classes are in language for the first few years (in spite of the NT Government’s poor decision to change this, Yirrkala maintained this policy) – in fact, right up to Year 3, when teaching in English is gradually introduced.

So what does Leonard Freeman have to say about NAPLAN?

He argues that while there is a perception that NAPLAN is a fair test, it is anything but.

[NAPLAN] is a testing scheme that seems as fair as it could possibly be – all students sit the same tests and the marking is conducted by an independent central body. However, this perception of fairness is a thin veil that covers a system that disadvantages students who speak English as a Second Language.

There are a number of issues wrapped up in this notion of unfairness.

Firstly, the NAPLAN exemption criteria do not give adequate consideration to the English language development of Indigenous children living in non-English speaking discrete Indigenous communities.

Most Australian educators assume that students who speak little or no English can be identified by the category “newly arrived from a non-English background country”.  In fact, when I worked in education in the NT I found that I had to constantly remind education administrators at national meetings that their proxy use of newly arrived non-English speaking migrants leaves out Indigenous children with identical or even greater challenges.

Nationally two per cent of Australian children are exempt from sitting the NAPLAN test. Students can be exempted from one or more NAPLAN tests if they have significant or complex disability, or if they are from a non-English-speaking background and arrived in Australia less than one year before the tests. 

So in fact almost all other children who have as little English language competence as Year 3 and even Year 5 remote Indigenous children from communities like Yirrkala are exempt from NAPLAN.  Yet no children at Yirrkala were identified as exempt from NAPLAN testing.

This leads to the ridiculous situation where remote Indigenous children with almost no exposure to English language, especially in written form, "must take the test under the same conditions as students who speak English as their first language and have their results evaluated in terms of the 'typical achievement' standards of mother tongue English speakers."

Now one of the reasons why education institutions and administrators resort to the category 'recently arrived migrant from a non-English speaking background country' as a proxy for children who do not yet have a sufficient grasp of Standard Australian English is because we don't have sensible data on this matter.  We have data on children who have a language background other than English, but this tells us nothing about their level of competence with written English.

This Inquiry could secure bipartisan support to fix this matter – this is not a politicised, hot issue.  This is about applying definitions of technically relevant matters in an inclusive and fair manner. Children in years 3 and 5 who reside in communities where Standard Australian English is not spoken could be exempted from NAPLAN until their English language learning enables them to read English to a defined level.

Secondly, NAPLAN is not a culturally fair test and this further discriminates against remote Indigenous children.

Back again to Leonard Freeman:

….NAPLAN reading tests assess students’ reading ability by judging their answers to multiple choice questions which ask often complex questions about the reading material.

He provides the following example of a multiple choice item in a Year 3 reading test

‘But I feel funny about saying I own him’. What reason does Tim give for feeling this way?

a) Elvis is really Malcolm’s dog.

b) Tim thinks dogs cannot be owned.

c) All the family helps to look after Elvis.

d) Elvis is much older than Tim in dog years. 

It is pretty obvious that there is a great deal of non-accessible cultural knowledge required to eliminate supposedly irrelevant answers for this item.

These sorts of questions do not simply assess whether the student can read the reading material and basically comprehend the story; they go well beyond that. A Year 3 student from a remote Indigenous community who is still trying to master spoken English and western cultural norms would find a question like this very difficult to answer. The assumed knowledge – that dogs' ages are measured in 'dog years', that the word 'funny' can mean uncomfortable rather than humorous, and the concept of questioning the definition of ownership – would all be unfamiliar to a child growing up in a remote Indigenous setting.

The NAPLAN reading test actually tests analytical skills which are coated heavily in western cultural norms. 

Another example provided by Freeman of an item around the delivery of newspapers provides further insights into cultural inaccessibility.

The story begins with the householder complaining to the newspaper boy ‘you left the paper jutting out of the back of my box’ and we also learn the owner had previously complained the paper needs to be left ‘in line with the fence’. This question was designed to test whether students could infer the meaning of new words and constructions. Yet to do so the students need to be familiar with the cultural context, in this case the students need to know that houses have a box on their fence line where mail and newspaper deliveries are left.  If the student has grown up in a remote community or refugee camp where there are no letter boxes and few houses have fences they will not be able to access the meaning of the text. 

Thirdly, the lack of fit between the NAPLAN tests and the kinds of assessment needed to effectively support teachers in these challenging contexts leaves teachers unsupported and undermined.

Now it would be reasonable to expect that the NT Department of Education should be fully cognizant of these circumstances and make it their business to ensure that the unintended consequences of this situation are addressed, or at least mitigated.  Sadly, when I worked in the NT I found that this was not the case.  And scrolling through their website today I found that nothing much had changed.  There is now an acknowledgement that Indigenous children are English Language learners, but what this means in terms of resourcing is minimal and what it means for teachers across remote schools appears to be completely ignored.

The absurdity of this is best illustrated through a personal experience of what can only be described as a farcical professional development event.

This event took place at a beautiful new resort in the remote mining community on Groote.  The attendees at this session were principals and a group of their nominated teachers from schools in remote Indigenous communities.

The aim of the three-day session was to 'teach' the attendees – all from remote Indigenous schools – how to drill down into their school's test results and develop not just a strategic but a tactical response to what they found.  It was a highly structured event. First the groups were given a spreadsheet showing their NAPLAN results for all year levels and all tested areas, and a detailed worksheet to work through.

I sat next to a school team that came from a large school in Eastern Arnhem, similar in key features to Yirrkala.  It was also a school that ran a bilingual program, which meant that all students in year 3 and almost all in year 5 could not yet read in English – even at a basic level.

This school had NAPLAN results that were marginally worse than the other schools represented.  At this school, in almost every subject, in almost every year level, 80–100% of the students scored zero – that is, they did not get one answer right – not one.  Some classes in some schools had a small minority of students who did receive a higher score – a few even approaching the relevant band for their year – but they were a tiny minority.

The professional development session required the teachers to group their students by what they did not know.  For example – how many students did not understand the convention of the full stop?  Put a ring around these students.  The teachers next to me sighed and ringed the whole class.  And it went on like this for three whole days.  It was idiotic and devastating.

These teachers went back to the school not just demoralised but with decontextualised lesson plans on full stops, the sound 'CH', prime numbers and so on.

I tell this story because it is an extreme example of just how stupid it is for people to invent prescriptive solutions that must be rolled out across all schools, with no exception.

There is no doubt that this is damaging for teachers in remote schools.  It was political exposure about the poor NAPLAN results that forced Marion Scrymgour to preemptively abolish the struggling, underfunded bilingual program – something she later came to regret – for good reason.

Leonard Freeman sees the NT Department's priorities and the experiences and struggles of remote teachers as being on a collision course:

The NT government made a commitment to having 75% of NT students meet the Year 3 NAPLAN benchmarks and teaching programs are aimed at achieving this. The amount of English instruction is being increased under the misguided belief that elevating the focus and importance of English will yield better English results. 

The inclusion of ESL students in NAPLAN testing places ESL researchers, specialist ESL teachers and classroom teachers in a conflict between the principles of effective ESL teaching and assessment practices and the requirements of governments and education departments. Instead of working together to attain the best educational outcomes for students, researchers, policy makers, teachers and governments are locked in a fundamental disagreement between meeting the needs of ESL students and the administrative and political advantages of a uniform testing regime.

 One of the perverse consequences of this is that programs which claim to accelerate English literacy or which are aimed at native English speakers are now favoured ahead of academically sound ESL programs which demonstrate the genuine progression of ESL students.

It has also led to effective and evidence-based programs such as the Step Model bilingual program being shut down, to the detriment of Indigenous ESL students.

Now some readers may be thinking that I am arguing for lower expectations for remote Indigenous children.  This is not my message.  These children are exposed to English at school for the first time and it is often their third or fourth language.

We exempt newly arrived LBOTE children ‘down south’ not because we expect less of them but because we recognize that their learning journey has to include an additional learning pathway.  But we do not expect less of them in the long run.

Back to Freeman again:

… an ESL approach is not a lesser approach. It is aimed at getting students who are speakers of other languages to a point where they can access mainstream education. A program may be deemed ineffective if ESL students never reach age-grade performance, but ESL programs that successfully move students along the continuum at a rate that is acceptable based on the research should be regarded as valid and ideal for ESL learners.

It is important to recognise the research which shows that it takes a minimum of 4 years, and more like 7 years, to achieve proficiency in a second language to a level that is sufficient for academic pursuits. The efficacy of ESL programs should be judged against the backdrop of this research.

So what are the small steps Governments could take in order to stop getting in the way of effective education for remote Indigenous children?

  1. Stop NAPLAN Testing for remote Indigenous children until year 7.
  2. In the meantime, agree on an alternative form of testing[1] that is more appropriate for ESL students in terms of cultural content and recognition of ESL learning stages
  3. Address cultural bias in NAPLAN testing so that when remote Indigenous students are linguistically ready to sit the tests they can understand what is being asked of them
  4. Develop a nationally agreed English Language Learner Scale (ELLS) to replace LBOTE as a student category so there is a fair and consistent way to measure disadvantage based on English language learning needs

[1] The ACER Longitudinal Literacy and Numeracy Study for Indigenous Students (LLANS) test has been trialled in all states and territories with both Indigenous and non-Indigenous students. Researchers have now aligned the LLANS test results to the NAPLAN data scores. So it would be possible for ESL students in the primary years to be given an appropriate test which can give a much clearer indication of their actual literacy and numeracy skills.

What have schools got to do with neo-liberalism?

Neoliberalism is not a term that everyone is happy to use.  Some see it as ideological jargon; for others it might describe what is happening, but its use by education academics seems to get in the way of teachers and practitioners hearing its central message.

My own view is that the basic assumptions, frameworks and processes of neoliberalism have been so well incorporated into our economic frameworks, social policies and thinking that unless we name it and unpack it, we can't talk sensibly about what is happening or view things through any other lens.

In this blog I want to point out just how deeply school education has become infected with neoliberal ideas.

So what is neoliberalism?  In a recent post by Chris Thinnes[1] the following definition is used:

[Neoliberalism is] …an ensemble of economic and social policies, forms of governance, and discourses and ideologies that promote individual self-interest, unrestricted flows of capital, deep reductions in the cost of labor, and sharp retrenchment of the public sphere. Neoliberals champion privatization of social goods and withdrawal of government from provision for social welfare on the premise that competitive markets are more effective and efficient

Now it's not hard to see the relevance of this to the school reform policies of the US and UK, and increasingly of Australia:

  • School choice and competition – highly entrenched in Australia
  • MySchool providing the information to support parents voting with their feet and forcing schools to worry more about student test performance than about the school learning and wellbeing environment
  • high stakes testing – creating commodities out of smart kids and relegating others to a ‘take a sick day on testing’ status,
  • performance pay for teachers – introducing competition where there needs to be collaboration and team building
  • competing for a place in the PISA top 5 – turning school quality into an international productivity competition

Thinnes's post, The Abuse and Internalization of the 'Free Market' Model in Education, shows how school policies and practices promote individual self-interest over the common good and the market as the arbiter of values.  In this he is not unique. But Thinnes also reminds us that its fundamental ideas exist at a much deeper level – that this way of thinking has become the air we breathe in school policy and practice, even within the field of education.

His very first example emerges from comments made by both teachers and students about the challenges and opportunities of collaborative or group work in classrooms:

The problem with group projects is that somebody might end up doing all the work, but somebody else will get the credit

It's too hard to grade each student when you're not sure how they contributed

Collaboration is great, but somebody ends up not carrying their weight

When you try to help each other, the teachers sometimes treat you like you cheated

The message coming through from these comments is that although student collaboration might be important to learning in theory, "the assessment and affirmation of individual contributions, achievements, and accomplishments is what matters most in our schools".

Thinnes observes that

The persistence of such beliefs should come as no surprise to any of us, who find ourselves in a society with an education system that has embraced prevailing myths about competition, meritocracy, and economic and social mobility in its education policy. It should strike us with a great sadness, however, for those of us who question and resist those myths in our classroom practice and learning communities.

This internalization of neoliberal commitments to the individual achievements of our students and teachers, and the market competition of our schools, is naturalized even in our most informal, everyday conversations about education. It is enforced by many of our classroom practices. It is celebrated in many of our school-wide rituals. But I find it perhaps most disturbing when it frames our thoughts, subconsciously or purposefully, about how to improve our schools.

Unfortunately we see evidence of this in the Australian context wherever we look.

The only two items mentioned in the 2013 budget speech in relation to Indigenous education and closing the gap were scholarships for individual Indigenous students to attend elite schools and the Clontarf Football Academy.  Neither of these offers any systemic strategy for improving Indigenous education.  It seems we have decided to give up on structural, systemic improvements in Indigenous education, in spite of appalling and systemic failure – particularly in remote contexts.  The vast majority of Indigenous students and their families are left untouched by these two strategies.  In fact it is possible they will be worse off as the more aspirational students – those who can contribute to the quality of learning in a classroom – are plucked out and removed.  And of course the fact that both these strategies result in the funding of non-Government bodies to deliver the programs has not even been seen as odd or of concern.

Today in the Canberra Times Tony Shepherd argues that wealthy parents who choose to suck on the public teat by sending their children to public schools should be charged a levy.  This only makes sense if schools are considered a commodity – a product – and students its customers. This is a total repudiation of the fundamental democratic purpose of schools, but neoliberal thinking has become so saturating that such ideas can seem logical and sensible.

Thinnes ends his article with the following message:

The end-run of the logic of the ‘free market model’ and its application to schools is simple: the repudiation of schools as we have come to know them; the abandonment of democratic principles on which they are based; and the service of a technocratic vision of education as matrix of individual relationships with private providers….


We should take note before it is too late.

Kevin Donnelly thinks that Fabianism is a dirty word.

 

We've put up with absolute rubbish from Kevin Donnelly for too long.  It's time to look at his claims without the emotion and invective.

In his latest rant in The Australian, "Education saviour is pulling too many levers"[1], Donnelly makes the following claims.

1.        Julia Gillard “in a desperate attempt” is going to use education as her lever to stay in power

Sadly, and a little reluctantly, I share concerns about the growing centrality of education in the future election debate.  Although chances are slim, I am pinning my hopes on progress on implementing the key components of the Gonski reforms prior to the election to the extent that they cannot easily be rolled back. 

The temptation to use the Gonski implementation plan as an election carrot will not save the ALP, but it will cost public schools dearly.

2.        Billions have been wasted on the Building the Education Revolution program that forced off-the-shelf, centrally mandated infrastructure on schools with little, if any, educational benefit;

Donnelly clearly has not read the ANAO Audit report into the BER[2], because it concludes that where there were poor decisions and centralized rollouts the culprits were state Governments, not the Commonwealth, and that to some extent this was inevitable given the justifiable time constraints.  May I also remind him that this was a GFC response first and foremost, not an education initiative? The audit report makes this clear:

The Government decided on school based infrastructure spending because it had a number of elements that supported stimulus objectives

It also notes that:

The objectives of the BER program are, first, to provide economic stimulus through the rapid construction and refurbishment of school infrastructure and, second, to build learning environments to help children, families and communities participate in activities that will support achievement, develop learning potential and bring communities together[3]

For many schools the capital works were a godsend because the new hall or learning space gave them the capacity to do the thing that Donnelly most encourages – use new space to increase local innovative solutions to education challenges.  Indeed the audit report noted that over 95% of principals that responded to the ANAO survey indicated that the program provided something of ongoing value to their school and school community.[4]

3.        The computers in schools program delivered thousands and thousands of now out-of-date computers that schools can ill-afford to maintain or update.

I am not one to argue that ICT is the magic bullet for everything about teaching and learning in our schools.  However, I am convinced that with well-informed, computer-literate teachers, who are also good teachers in the broader sense, students can only benefit.  I also acknowledge that a high level of computer literacy is now a core area of learning.  To achieve this, even "out of currency" computer hardware will be better than no computers.

Any ICT hardware rollout will result in out-of-date computers and a maintenance/update impost.  But the state of ICT infrastructure in our schools desperately needed to be addressed.  Is Donnelly really arguing that schools that do not have enough in their budgets to manage the whole-of-life costs of having computers should go without?  I wonder which schools these might be?

 

4.        Julia Gillard’s data fetish is forcing a centralised and inflexible accountability regime on schools, government and non-government, that is imposing a command and control regime on classrooms across the nation.

There is no doubt that we could benefit from a better accountability and reporting regime – for all schools. So this is one of the few areas where Donnelly and I have aligned concerns, though possibly for different reasons. I continue to believe that the changes to the original intention of NAPLAN testing have been disastrous for some Australian schools – but possibly not the ones dear to Donnelly's heart.

The reporting of NAPLAN results at the school level has, almost certainly, distorted what is taught in schools[5].  This is especially the case in schools where students struggle – our highly concentrated low SES schools.  It has also contributed to the residualisation of the public school system.  And we now have evidence that when middle class students are leached out of public schools, public school students lose out in lots of ways.  For example, they lose out because of the loss of articulate and 'entitled' parent advocates for the needs of the schools.  But they also lose out because each middle class child is actually a resource.  That is, their presence in the class enhances the learning of all students in that class.[6]

Donnelly, on the other hand, appears to be more concerned that non-Government schools are now under the same reporting obligations as government schools.  I know of no other area of Commonwealth funding where recipients are not expected to provide a defined level of accountability and reporting.  Fixing this anomaly was way overdue.

5.        The Gillard-inspired national curriculum, instead of embracing rigorous, academic standards, is awash with progressive fads such as child-centred, inquiry-based learning, all taught through a politically correct prism involving indigenous, Asian and environmental perspectives.

Donnelly appears to have a short memory on this matter.  The national curriculum effort was kicked off by the previous Howard Government – and that is why History was singled out above other social science disciplines. 

Perhaps Donnelly has not read the national curriculum? If he had he would know that it is just a sequencing and scoping exercise and does not address pedagogy at all.  Donnelly has had a bee in his bonnet for years about so-called 'progressive fads', based on nothing more than sheer ignorance.  And as for the cross-curriculum perspectives – these came out of extensive consultation and negotiation and were not imposed by the Gillard Government.  While there are unfortunately many examples of Commonwealth overreach, the cross-curriculum perspectives are not among them.

6.        Even though the Commonwealth Government neither manages any schools nor employs any teachers, Gillard is making it a condition of funding that every school across Australia must implement Canberra’s (sic) National Plan for School Improvement.

This is another area where, to some extent, I do agree with Donnelly but for very different reasons. 

My position is that the National Plan for School Improvement is unnecessary Commonwealth overreach, and risky because it could have put the Gonski implementation in jeopardy.

The National Plan for School Improvement was unnecessary because all education systems throughout the country already had some form of school improvement planning and annual reporting, and had begun to share good practice through the National Partnership process.  It was also unnecessary because it foolishly cut across the more informed and consultative process being undertaken by AITSL to grow the teacher performance feedback and improvement process in collaboration with the various teaching institutes around Australia.  This process had a strong emphasis on supporting teacher development and self-reflection based on well-supported peer, supervisor and student feedback.  The Commonwealth initiative has recast the whole process into a high stakes, external reporting context that will be much less useful and teacher friendly.  This is a pity.  AITSL's work should not have been distorted in this manner.

It was, and is, risky, as some states seized on the obligations of the Plan as the rationale to push back on the Gonski reforms.  Tying the two together was poor strategy, in light of the importance of implementing Gonski between now and September 2013.

Donnelly's objection to the Plan appears to be that it is imposed on the non-Government sectors, which should, according to Donnelly, be able to receive significant levels of Commonwealth funding with no accountability.  It's the imposts he objects to, not their design elements.

7.        Research here and overseas proves that the most effective way to strengthen schools, raise standards and assist teachers is to embrace diversity, autonomy and choice in education. The solution lies in less government interference and micro-management, not more.

I am afraid that Donnelly's claim that autonomy and choice are the best way to strengthen schools does not have a shred of evidence behind it.  I, and others, have written about the autonomy claims[7], and there is now solid international evidence confirming that market models of education choice are disastrous for education equity and therefore for education overall[8].

8.        Autonomy in education helps to explain why Catholic and independent schools, on the whole, outperform government schools.

There is now enduring evidence that differences in school outcomes are overwhelmingly connected with student demography, not schooling system.  When SES is taken into account, the non-Government systems do not perform any better at all.  The very detailed research undertaken by Richard Teese[9] in the context of the Gonski Review process concluded that:

Using NAPLAN data, the paper shows that public schools work as well or better than private schools (including Catholic schools).  This finding echoes the results of PISA 2009 that, after adjustment for intakes, public schools are as successful as private schools

9.        Gillard’s plan for increased government regulation and control and a one size fits all, lowest common denominator approach is fabianism and based on the socialist ideal of equality of outcomes.

Now this is the strangest claim of all.  Here Donnelly uses Fabianism as a slur, and it is not the first time he has taken this tack.  However, it is a term so quaint, so rarely used, that the tactic may well pass unnoticed.  In fact, in order to find a useful definition I had to go back to 1932, to an essay by GDH Cole[10].  Cole's explanation is interesting given the implied nastiness of Fabianism:

Whereas Marxism looked to the creation of socialism by revolution based on the increasing misery of the working class and the breakdown of capitalism through its inability to solve the problem of distribution, Webb argued that the economic position of the workers had improved in the nineteenth century, was still improving and might be expected to continue to improve. He regarded the social reforms of the nineteenth century (e.g. factory acts, mines acts, housing acts, education acts) as the beginnings of socialism within the framework of capitalist society. He saw legislation about wages, hours and conditions of labor, and progressive taxation of capitalist incomes as means for the more equitable distribution of wealth; …

And

The Fabians are essentially rationalists, seeking to convince men by logical argument that socialism is desirable and offering their arguments to all men without regard to the classes to which they belong. They seem to believe that if only they can demonstrate that socialism will make for greater efficiency and a greater sum of human happiness the demonstration is bound to prevail. 

So our progressive tax system, our Fair Work Australia, our transfer payments to those in poverty, our national health system, our public education system, our welfare safety net, our superannuation minimums – these are all examples of Fabianism at work: not because Fabianism is a secret sect with malign intent, as implied by Donnelly, but because we have come to see the benefits of a strong, cohesive society where the wealth of the country is not enjoyed by the few while the majority slave in misery.

What's so bad about our proud achievements, Donnelly?  I for one want to keep moving in this direction, and for me implementing the Gonski reforms is the essential next step in schooling policy.

10.     Tony Abbott's view of education is based on diversity and choice where schools are empowered to manage their own affairs free from over-regulation and constraint.

It is interesting that Donnelly thinks he knows what Tony Abbott's view of education is, because I suspect most of us remain unclear on this matter.  Abbott has said on one occasion that more funding should go to Independent schools – an astonishing claim given our funding profile relative to all other countries.  His shadow Minister has said a bit more, but his statement that we should go back to didactic teaching (like when he was a boy) does not, to me, imply a commitment to allowing schools to manage their own affairs.  But maybe he only means that this is what Government schools should do.  That would probably be OK according to Kevin Donnelly's view of the world.


[3] Ibid P 8

[4] Ibid P 26

[5] A useful research article about this is the submission prepared by Dr Greg Thompson in response to the Parliamentary Inquiry into the Australian Education Bill 2012 – Submission no. 16, available at this URL http://www.aph.gov.au/Parliamentary_Business/Committees/House_of_Representatives_Committees?url=ee/auseducation/subs.htm

[6] The best explanation of the importance of the ‘other student effect’ on student learning is from an unpublished paper by Chris Bonner, where he notes that “the way this resource of students is distributed between schools really matters. Regardless of their own socio-economic background, students attending schools in which the average socio-economic background is high tend to perform better than if they are enrolled in a school with a below-average socio-economic intake”.


A vision for a new unified and credible approach to school assessment in Australia


I was only partly surprised to read in the Adelaide Advertiser[1] that Geoff Masters, CEO of the Australian Council for Educational Research (ACER) has called for the scrapping of the A-E grading system and replacing it with NAPLAN growth information.

To be blunt, I regard the A-E system as a nonsense cooked up by the previous Coalition Government and imposed on all states as a condition of funding.  It has never meant much and the different approaches to curriculum taken by the different state systems made its reporting even more confusing.

With the introduction of the Australian National Curriculum, the A-E grading system may be applied more consistently across states, but its meaning is often confusing and unhelpful.  As Masters notes:

If a student gets a D one year and a D the next, then they might think they’re not making any progress at all when they are but the current reporting process doesn’t help them see it… [T]his could contribute to some students becoming disillusioned with the school system.

Abandoning this approach makes sense.  But the Advertiser article also implied that Masters is arguing that we should replace the A-E reporting with a NAPLAN gains process.  This to me was a complete surprise.

This is because I believe that would be a disaster and, more importantly, I am pretty sure that Masters would also see the limitations of such an approach.

At the 2010 Australian Parliamentary Inquiry into the Administration and Reporting of NAPLAN, Geoff Masters spoke at length about the limitations of NAPLAN covering the following:

  • Its limitations for students at the extremes, because it is not multilevel
  • Its original purpose as a population measure, and the potential reliability and validity problems with using it at the school, classroom and individual student level
  • Its limited diagnostic power, because of the narrow range of testing and the multiple-choice format

He also acknowledged the potential dangers of teachers teaching to the test and the narrowing of the curriculum.  (Unfortunately there appears to be a problem with the APH website and I was unable to reference this, but I have located a summary of the ACER position[2])

Now these are not minor problems.

I was also surprised because it seemed unlikely that the CEO of ACER would pass up an opportunity to talk about the benefits of diagnostic and formative assessments. After all, these tests are important for ACER’s revenue stream.

So what is going on here?

To investigate, I decided to look beyond the Advertiser article and track down the publication that Masters was speaking to at the conference. It’s a new publication, launched yesterday, called Reforming Educational Assessment: Imperatives, principles and challenges[3].

And lo and behold, the editor Sheradyn Holderhead got it wrong.  What Masters is arguing for is anything but swapping out one poorly informed reporting system (A to E reporting) for a flawed one (NAPLAN).  He is mapping out a whole new approach to assessment that can be built on our best understandings of assessment and learning, but which also meets the “performativity”[4] needs of politicians and administrators.

Now some will object to the compromise taken here because they see “performativity” as a problem in and of itself.  At one level I agree, but because I also look for solutions that are politically doable, I tend to take a more pragmatic position.

This is because I see the reporting of NAPLAN through MySchool as a kind of one-way reform – a bit like the privatization of public utilities.  Once such a system has been developed it is almost impossible to reverse the process.  The genie cannot be put back into the bottle.  So to me, the only solution is to build a more credible system – one that is less stressful for students, less negative for lagging students, more helpful for teachers, less likely to narrow the curriculum through teaching to the test, and less prone to be used as a basis for school league tables.

And my take on Masters’ article is that, taken seriously, his map for developing a new assessment system has the potential to provide the design features for a whole new approach to assessment – one that doesn’t require the complete overthrow of the school transparency agenda to be effective.

Here are some of the most significant points made by Masters on student assessment:

Assessment is at the core of effective teaching

Assessment plays an essential role in clarifying starting points for action. This is a feature of professional work in all fields. Professionals such as architects, engineers, psychologists and medical practitioners do not commence action without first gathering evidence about the situation confronting them. This data-gathering process often entails detailed investigation and testing. Solutions, interventions and treatments are then tailored to the presenting situation or problem, with a view to achieving a desired outcome. This feature of professional work distinguishes it from other kinds of work that require only the routine implementation of pre-prepared, one-size-fits-all solutions.

Similarly, effective teachers undertake assessments of where learners are in their learning before they start teaching. But for teachers, there are obvious practical challenges in identifying where each individual is in his or her learning, and in continually monitoring that student’s progress over time. Nevertheless, this is exactly what effective teaching requires.

Understandings derived from developments in the science of learning challenge long-held views about learning, and thus approaches to assessing and reporting learning.

These insights suggest that assessment systems need to:

  • Emphasise understanding where students are in their learning, rather than judging performance
  • Provide information about where individuals are in their learning, what experiences and activities are likely to result in further learning, and what learning progress is being made over time
  • Give priority to the assessment of conceptual understandings, mental models and the ability to apply learning to real-world situations
  • Provide timely feedback in a form that a) guides student action and builds confidence that further learning is possible, and b) allows learners to understand where they are in their learning, and so provides guidance on next steps
  • Focus the attention of schools and school systems on the development of broader life skills and attributes – not just subject-specific content knowledge
  • Take account of the important role of attitudes and self-belief in successful learners

On this last point Masters goes on to say that:

Successful learners have strong beliefs in their own capacity to learn and a deep belief in the relationship between success and effort. They take a level of responsibility for their own learning (for example, identifying gaps in their knowledge and taking steps to address them) and monitor their own learning progress over time. The implications of these findings are that assessment processes must be designed to build and strengthen metacognitive skills. One of the most effective strategies for building learners’ self-confidence is to assist them to see the progress they are making.

…..  current approaches to assessment and reporting often do not do this. When students receive the same letter grade (for example, a grade of ‘B’) year after year, they are provided with little sense of the progress they are actually making. Worse, this practice can reinforce some students’ negative views of their learning capacity (for example, that they are a ‘D’ student).

Assessment is also vital for gauging how a system is progressing – whether a class, school, system, state or nation

Assessment, in this sense, is used to guide policy decision-making, to measure the impact of interventions or treatments, or to identify problems or issues.

In educational debate these classroom-based and system-driven assessments are often seen as being in conflict, and their respective proponents as members of opposing ideological and educational camps.

But the most important argument in the paper is that we have the potential to overcome the polarised approach to assessments that is typical of current discussion about education; but only if we start with the premise that the CORE purpose of assessment is to understand where students are in their learning. Other assessment goals should be built on this core.

Once information is available about where a student is in his or her learning, that information can be interpreted in a variety of ways, including in terms of the kinds of knowledge, skills and understandings that the student now demonstrates (criterion- or standards-referencing); by reference to the performances of other students of the same age or year level (norm-referencing); by reference to the same student’s performance on some previous occasion; or by reference to a performance target or expectation that may have been set (for example, the standard expected of students by the end of Year 5). Once it is recognised that the fundamental purpose of assessment is to establish where students are in their learning (that is, what they know, understand and can do), many traditional assessment distinctions become unnecessary and unhelpful.

To this end, Masters proposes the adoption and implementation of a coherent assessment ‘system’ based on a set of five assessment design principles, as follows:

Principle 1: Assessments should be guided by, and address, an empirically based understanding of the relevant learning domain.

Principle 2: Assessment methods should be selected for their ability to provide useful information about where students are in their learning within the domain.

Principle 3: Responses to, or performances on, assessment tasks should be recorded using one or more task ‘rubrics’.

Principle 4: Available assessment evidence should be used to draw a conclusion about where learners are in their progress within the learning domain.

Principle 5: Feedback and reports of assessments should show where learners are in their learning at the time of assessment and, ideally, what progress they have made over time.

So, to return to the premise of the Advertiser article, Masters is not arguing for expanding the use of the current model of NAPLAN.  In fact, he is arguing for a reconceptualisation of assessment that:

  • starts with the goal of establishing where learners are in their learning within a learning domain; and
  • develops, on this basis, a new Learning Assessment System that is equally relevant in all educational assessment contexts, including classroom diagnostic assessments, international surveys, senior secondary assessments, national literacy and numeracy assessments, and higher education admissions testing.

As the Advertiser article demonstrates, this kind of argument is not amenable to easy headlines and quick sound bites.  Building support for moving in this direction will not be easy.

But the first step is to recognize that system-based assessment and ‘classroom useful’ assessment are not, and need not be, at cross purposes, and to start to articulate how a common approach could be possible.  Masters refers to this as the unifying principle:

….. it has become popular to refer to the ‘multiple purposes’ of assessment and to assume that these multiple purposes require quite different approaches and methods of assessment. …

This review paper has argued …. that assessments should be seen as having a single general purpose: to establish where learners are in their long-term progress within a domain of learning at the time of assessment. The purpose is not so much to judge as to understand. This unifying principle, which has potential benefits for learners, teachers and other educational decision-makers, can be applied to assessments at all levels of decision-making, from classrooms to cabinet rooms.

So if you are still not convinced that Masters is NOT arguing for replacing the A-E reporting with NAPLAN growth scores, this quote may help:

As long as assessment and reporting processes retain their focus on the mastery of traditional school subjects, this focus will continue to drive classroom teaching and learning. There is also growing recognition that traditional assessment methods, developed to judge student success on defined bodies of curriculum content, are inadequate for assessing and monitoring attributes and dispositions that develop incrementally over extended periods of time.


[4] This is a widely used term, usually associated with the work of Stephen J. Ball. In simple terms it refers to our testing mania in schools and the culture and conceptual frameworks that support reform built around testing data.  To read more, this might be a useful starting point: http://www.scribd.com/doc/70287884/Ball-performativity-teachers